**`/dev/urandom` is insecure. For cryptographic purposes you must use `/dev/random`.**

*Fact*: `/dev/urandom` is the preferred source for cryptographic seeding on Unix-like operating systems.

**`/dev/urandom` is a pseudorandom number generator (PRNG), while `/dev/random` is a "true" random number generator.**

*Fact*: They are both fed by the same CSPRNG (cryptographically secure pseudorandom number generator). The subtle differences between them have nothing to do with "true" versus "not true" randomness. (See the section "The structure of Linux's random number generator".)

**`/dev/random` is the better choice for cryptography in every case. Even if `/dev/urandom` were just as secure, we still shouldn't use it.**

*Fact*: `/dev/random` has a very nasty problem: it blocks. (See the section "What's wrong with blocking?") (LCTT translator's note: meaning requests are served one after another, each waiting for the previous one to finish.)

**But blocking is a good thing! `/dev/random` only hands out as much randomness as the entropy the machine has collected can support. `/dev/urandom` keeps spewing insecure random numbers at you even after all the entropy is used up.**

*Fact*: That's a misconception. Even leaving aside how applications later use the seed, "using up the entropy pool" isn't a real thing. A mere 256 bits of entropy is enough to generate computationally secure random numbers for a very, very long time. (See the section "What about when the entropy pool runs low?")

And here's the kicker: how does `/dev/random` even know *how much* entropy the system has available? Read on!

**But cryptographers talk about re-seeding all the time. Doesn't that contradict the previous point?**

*Fact*: You've got a point there! Sort of. Yes, the random number generator keeps re-seeding itself from the system's entropy state. But it does so (in part) for other reasons. (See the section "Re-seeding".)

Let me put it this way: I'm not saying that injecting fresh entropy is bad. More entropy is certainly better. I'm only saying that blocking when the entropy count is low is unnecessary.

**Fine, suppose everything you say is true. But the man page for `/dev/(u)random` disagrees with you! Do any experts at all agree with any of this?**

*Fact*: The man page doesn't actually contradict me. It may look like it says `/dev/urandom` is insecure for cryptographic use, but once you understand the crypto jargon you'll see it isn't saying that at all. (See the section "The random and urandom man page".)

The man page does recommend `/dev/random` in some cases (which is fine by me, though by no means necessary), but it also recommends `/dev/urandom` for most "ordinary" cryptographic uses.

Appealing to authority generally isn't a good thing, but in a matter as serious as cryptography, being aligned with the experts is essential.

So yes, there really are *experts* who agree with me: `/dev/urandom` should be the first choice for cryptography on Unix-like operating systems. Obviously, their opinions convinced me rather than the other way around. (See the section "The right way".)

------
I'll try not to get too technical, but two points need to be made before we can get to the main argument.

First and foremost: *what is randomness*, or more precisely, what kind of randomness are we talking about? (See the section "True randomness".)

Another important point: I'm *not trying to lecture* anyone here. I wrote this article so that later, when the topic comes up in a discussion, I can point people to it. Something longer than 140 characters (LCTT translator's note: the length of a tweet). That way I don't have to repeat my points over and over. And honing an argument into a single essay is itself very helpful for future discussions. (See the section "Are you calling me stupid?!")

And I'm very happy to hear differing views. I just think that merely declaring `/dev/urandom` bad isn't enough. You have to pinpoint exactly what's wrong and dissect it.
Absolutely not!

In fact, I believed "`/dev/urandom` is insecure" myself for years. And it's hardly our fault, because highly respected people repeated the claim to us on Usenet, in forums, and on Twitter. Even *the man page* half-says so. Who were we, back then, to scorn such a persuasive-sounding argument as "the entropy is too low"? (See the section "The random and urandom man page".)

This myth spread so widely not because people are stupid, but because anyone with some notion of entropy and cryptography will find it plausible. Intuition seems to tell us the myth makes sense. Unfortunately, intuition is usually wrong about cryptography, and this time is no different.

### True randomness

What does it mean for random numbers to be "truly random"?

I don't want to get so deep that this turns philosophical. That kind of discussion easily goes astray, because everyone has their own model of randomness and the debate quickly becomes meaningless.

To me, the litmus test for "true randomness" is quantum effects: a photon passing, or not passing, through a half-silvered mirror, or watching a radioactive particle decay. These are the closest things to true randomness in the real world. Of course, some people don't believe such processes are truly random either, or even that any randomness exists in this world. Opinions abound, and I won't comment further.

Cryptographers generally sidestep this philosophical debate by not talking about "true randomness" at all. What they care about is unpredictability: as long as there's *no* way for anyone to guess the next random number, it's fine. So when you're judging a random number with cryptographic use in mind, this, in my view, is all that matters.

In any case, I don't much care about "philosophically secure" random numbers, which includes what other people call "true" random numbers.

### Two kinds of security, only one of which is useful
But let's take a step back and suppose you've got a "truly" random value. What do you do with it next?

Print it out and hang it on your wall, to marvel at the beauty and harmony of the quantum universe? Awesome! I'm with you.

But wait, you say you want to *use* it? For cryptography? Hmm, then it all falls apart, because things get a little complicated here.

You see, your truly random, quantum-blessed random numbers are about to be fed into the distinctly unideal algorithms of the real world.

Because almost none of the algorithms we use are information-theoretically secure. They can "only" offer **computational security**. The only exceptions I can think of are Shamir secret sharing and the one-time pad (OTP). And while the former genuinely lives up to the claim (if you actually intend to use it), the latter is completely impractical.

But all the famous cryptographic algorithms (AES, RSA, Diffie-Hellman, elliptic curves) and all the crypto packages (OpenSSL, GnuTLS, Keyczar, your operating system's crypto API) are only computationally secure.

What's the difference? An information-theoretically secure algorithm is secure, full stop. Every other algorithm could, in theory, be brute-forced by an attacker with unlimited computing power. We happily use them anyway, because all the computers in the world combined couldn't crack them within the age of the universe, at least not today. And that is the sense of "insecure" in this article.

Unless some clever person breaks the algorithm itself, using far less computing power, computing power achievable today. That's the holy grail every cryptographer dreams of: breaking AES itself, breaking RSA itself, and so on.

So now we come down to the lower layer: random number generation, where you insist on "true" randomness rather than "pseudo" randomness. Yet moments later, your true random numbers are fed into exactly the kind of pseudorandom algorithm you despise!

The truth is, if our state-of-the-art hash algorithms were broken, or our state-of-the-art block ciphers were broken, the "philosophically insecure" random numbers you obtained wouldn't even matter, because you'd have no secure way to use them anyway.

So just feed computationally secure random numbers to your merely computationally secure algorithms. In other words: use `/dev/urandom`.
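As a minimal sketch of that advice (standard coreutils only, assumed present on any Linux system), pulling 256 bits of key material out of `/dev/urandom` in a shell looks like this:

```
# Read 32 bytes (256 bits) from the CSPRNG and print them as hex
head -c 32 /dev/urandom | od -An -tx1 | tr -d ' \n'; echo

# Or as base64, e.g. for pasting a secret into a config file
head -c 32 /dev/urandom | base64
```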
![image: mythical structure of the kernel's random number generator][1]
"True randomness", flawed as it may be, enters the operating system, and its entropy is immediately added to an internal entropy counter. After "debiasing" and "whitening", it enters the kernel's entropy pool, from which `/dev/random` and `/dev/urandom` then produce random numbers.

The "true" random number generator, `/dev/random`, takes random numbers straight from the pool. If the entropy counter says there's enough for the requested amount, it emits the numbers and decreases the counter. If not, it blocks the program until enough entropy has entered the system.
![image: actual structure of the kernel's random number generator before Linux 4.8][2]
> This is a gross simplification. In fact there isn't just one entropy pool but three: a primary pool, one for `/dev/random`, and one for `/dev/urandom`, the latter two drawing entropy from the primary pool. All three pools have their own entropy counters, but the counters of the secondary pools stay near zero, and "fresh" entropy flows in from the primary pool on demand. There's also a lot of mixing and feeding back into the system going on at the same time. The whole thing is far too complicated for this article, so we'll skip it.

Do you see the biggest difference? The CSPRNG isn't running alongside the random number generator, filling in whenever `/dev/urandom` needs output but there isn't enough entropy. The CSPRNG is an integral part of the whole generation process. There never was a `/dev/random` that handed out pure, unadulterated randomness straight from the pool. Input from every randomness source is thoroughly mixed and hashed inside the CSPRNG before it ever emerges as a random number from `/dev/urandom` or `/dev/random`.

Another important difference is that there's no real entropy accounting here, only estimates. The amount of entropy a source delivers isn't some obvious number you can just read off; you have to estimate it. Note that if you estimate too optimistically, the all-important property of `/dev/random`, handing out only as much randomness as the available entropy allows, evaporates. Unfortunately, estimating entropy is hard.

The Linux kernel estimates entropy using only the arrival times of events. Following a model, it uses polynomial interpolation to estimate how "surprising" an actual arrival time was. Whether this polynomial-interpolation approach is a good way to estimate entropy is itself questionable. So is whether the hardware might influence arrival times in some particular way. And so is the sampling rate of all the hardware involved, since it essentially determines the granularity of those arrival times.

In the end, at least for now, the kernel's entropy estimate seems to be pretty good. Which means it's conservative. Some people could debate at length exactly how good it is; that's beyond me. Even so, if you insist on never handing out random numbers without enough entropy, you might feel a bit nervous at this point. I sleep just fine, because I don't care about entropy estimates.

One final point to make clear: `/dev/random` and `/dev/urandom` are fed by the same CSPRNG. They differ only in what they do when their entropy pool (according to some estimate) runs out: `/dev/random` blocks, while `/dev/urandom` does not.
##### From Linux 4.8 onward

In Linux 4.8, the equivalence of `/dev/random` and `/dev/urandom` was given up. `/dev/urandom`'s output now comes directly from the CSPRNG rather than from an entropy pool.

![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]

*We'll see shortly* why this isn't a security problem. (See the section "A CSPRNG is fine".)
### What's wrong with blocking?

These are all real problems. Blocking fundamentally reduces availability; in other words, your system doesn't do what you built it to do. Needless to say, that's bad. Why build it if it won't do its job?

> I've worked on safety-related systems in factory automation. Guess the main reason safety systems fail? Operational issues. Plain and simple. Many safety procedures annoy the workers: they take too long, or they're too inconvenient. You'd be amazed how good people are at finding shortcuts to "solve" problems.

But there's a deeper problem: people don't like being interrupted. They'll look for workarounds, duct-taping odd things together just to make them work. Ordinary people don't know the first thing about cryptography. At least, normal people don't.
In the end, if things are too hard to use, your users will be pushed into doing things that reduce the system's security, and you won't even know what they're doing.

It's easy to dismiss the importance of usability and the like. Security first, right? So compared with sacrificing security, being unavailable, hard to use, or inconvenient is secondary?

This either-or thinking is wrong. Blocking doesn't automatically make you secure. As we've seen, `/dev/urandom` gives you random numbers just as good, straight from a CSPRNG. Use it!

### A CSPRNG is fine
Now things sound bleak. If even the high-quality `/dev/random` comes from a CSPRNG, how can we dare use it where high security is required?

It turns out that "looking random" is the baseline requirement for most of our cryptographic building blocks. If you examine the output of a cryptographic hash, it must be indistinguishable from a random string for cryptographers to accept the algorithm. If you generate a block cipher, its output (without knowing the key) must likewise be indistinguishable from random data.

If anyone could break one of these ciphers more efficiently than by brute force, for example by exploiting some weakness in the CSPRNG's pseudorandomness, then it's the same old story: everything is doomed, and nothing else matters. Block ciphers, hashes, everything rests on some mathematical algorithm, just like a CSPRNG. So don't panic; in the end it all comes down to the same thing.

### What about when the entropy pool runs low?

It makes no difference whatsoever.

The foundations of cryptography rest on attackers being unable to predict the output, as long as there was enough randomness (entropy) at the beginning. A workable lower bound for "enough" is 256 bits; no more is needed.

Since we've been using the notion of "entropy" rather loosely, I'll quantify randomness in bits and ask readers not to sweat the details. As discussed earlier, the kernel's random number generator can't even know precisely how much entropy has entered the system. It's only an estimate, and how accurate that estimate really is, nobody knows.
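If you're curious what the kernel's estimate looks like on your own machine, it's exposed under `/proc` (a small sketch; the numbers reported, and the pool size they're measured against, vary by kernel version):

```
# The kernel's current entropy estimate, in bits
cat /proc/sys/kernel/random/entropy_avail

# The size of the input pool that the estimate is measured against
cat /proc/sys/kernel/random/poolsize
```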
Back to the man page's advice to "use `/dev/random`". We already know that although `/dev/urandom` doesn't block, its random numbers come from the very same CSPRNG as `/dev/random`'s.

If you genuinely need information-theoretically secure random numbers (you don't, trust me), then and only then might you have a reason to wait for enough entropy to enter the CSPRNG. But even then you couldn't use `/dev/random`.

The man page is poison, that's all there is to it. But at least it partly redeems itself:
### The right way

The views in this article are clearly a "minority opinion" on the internet. But ask an actual cryptographer and you'll be hard pressed to find one who approves of a blocking `/dev/random`.

Take [Daniel Bernstein][5] (the famous djb), for example:
>
> To a cryptographer, this isn't even funny anymore.

Or take [Thomas Pornin][6], one of the most helpful people I've ever encountered on Stack Exchange:

> The short answer is yes. The long answer is also yes. `/dev/urandom` yields data that is indistinguishable from true randomness, at least given current technology. There is no point in getting "better" randomness than what `/dev/urandom` provides, unless you are using one of the exceedingly rare "information-theoretically secure" encryption algorithms. That's certainly not your case, or you would have said so.
FreeBSD behaves more correctly: `/dev/random` and `/dev/urandom` are identical, and at boot `/dev/random` blocks until enough entropy is available; after that, neither ever blocks again.

> Meanwhile, Linux has implemented a new syscall, originally introduced by OpenBSD as `getentropy(2)` and called `getrandom(2)` on Linux. This syscall has the correct behavior described above: it blocks until enough entropy is available, and never blocks again after that. Of course it's a system call, not a character device (LCTT translator's note: i.e., not under `/dev/`), so it isn't as easy to reach from a shell or a scripting language. The syscall has existed since Linux 3.17.
|
||||
Obviously that's not as good as writing out a random seed in the shutdown script, where there's clearly more entropy to work with. But the obvious advantage is that it doesn't depend on the system having shut down correctly, say, if it crashed.

And it can't help you with randomness on the very first boot of a fresh system, but Linux installers generally save a seed file too, so in practice it's mostly fine.

Virtual machines are another layer of the problem, since users like to clone them or restore them to earlier snapshots. In those cases the seed file can't help you.
[#]: collector: (lujun9972)
[#]: translator: (bodhix)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10804-1.html)
[#]: subject: (How to Restart a Network in Ubuntu [Beginner’s Tip])
[#]: via: (https://itsfoss.com/restart-network-ubuntu)
[#]: author: (Sergiu https://itsfoss.com/author/sergiu/)

Linux beginners: how to restart the network in Ubuntu
======
Are you [using an Ubuntu-based system and finding that you can't connect to the network][1]? You'd be surprised how many problems can be fixed simply by restarting a service.

In this article I'll cover several ways to restart the network in Ubuntu and other Linux distributions, so you can pick whichever suits your needs. The methods fall broadly into two categories:

### Restarting the network from the command line

If you use the Ubuntu Server edition, you're already at the command line. If you use the desktop edition, you can open a terminal with the `Ctrl+Alt+T` [Ubuntu keyboard shortcut][3].

In Ubuntu there are several commands for restarting the network. Some, or rather most, of them also work for restarting the network on Debian and other Linux distributions.
#### 1. The network-manager service

This is the simplest command-line way to restart the network. It's the equivalent of restarting the network from the graphical interface (it restarts the Network-Manager service):

```
sudo service network-manager restart
```

The network icon disappears for a moment and then reappears.

#### 2. systemd
The `service` command is just a wrapper around this (as are the init.d scripts and the Upstart commands). The `systemctl` command can do far more than `service`, and I usually prefer it:

```
sudo systemctl restart NetworkManager.service
```

The network icon disappears for a moment again. To explore other `systemctl` options, see its man page.
#### 3. nmcli

This is another tool for managing networks on Linux. It's powerful, practical, and a favorite of many sysadmins because it's easy to use. For example, to switch networking back on:

```
sudo nmcli networking on
```

You can learn more about `nmcli` from its man page.
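As a fuller sketch of a restart with `nmcli` (assuming NetworkManager manages your connections; `nmcli general status` is simply a convenient way to confirm the result):

```
# Toggle networking off and on, then confirm NetworkManager's state
sudo nmcli networking off && sudo nmcli networking on
nmcli general status
```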
#### 4. ifup & ifdown

These two commands operate directly on network interfaces, toggling whether an interface can send and receive packets. They're among the [most important networking commands to know on Linux][4].

Use `ifdown` to shut down all network interfaces, then `ifup` to bring them back up.

The usual recommendation is to combine the two commands:

```
sudo ifdown -a && sudo ifup -a
```

Note: this method won't make the network icon disappear from the system tray, and your existing network connections will be broken.

#### Bonus tool: nmtui
This is another method sysadmins often use: a text-menu tool for managing networks right in the terminal. Launch it with:

```
nmtui
```

![nmtui Menu][5]

Note: in `nmtui`, you move between options with the up and down arrow keys.
Select "Activate a connection":

![nmtui Menu Select "Activate a connection"][6]

Press Enter to open the "connections" menu.

![nmtui Connections Menu][7]

Next, select the network marked with an asterisk (*). In this example, that's MGEO72.

![Select your connection in the nmtui connections menu.][8]

Press Enter. This "deactivates" your network connection.

![nmtui Connections Menu with no active connection][9]

![Select the connection you want in the nmtui connections menu.][10]

Press Enter. This reactivates the selected network connection.

![nmtui Connections Menu][11]

Press `Tab` twice and select "Back":

![Select "Back" in the nmtui connections menu.][12]

Press Enter to return to the main `nmtui` menu.

![nmtui Main Menu][13]

Select "Quit":

![nmtui Quit Main Menu][14]
This is obviously the easiest way for Ubuntu desktop users to restart the network. If it doesn't work, you can try the command-line methods described above.

The NM applet is the system-tray indicator for [NetworkManager][15]. We'll use it to restart the network.

First, look at the top status bar. You'll find a network icon in the system tray (here it's a Wi-Fi icon, since I'm using Wi-Fi).

Next, click that icon (or the volume or battery icon) to open the menu, and select "Turn Off" to switch the network off.
Select the network in question to modify your Wi-Fi connection.

You won't see the list of available wireless networks right away. After the list opens, it takes about 5 seconds for the other available networks to show up.

![Select another wifi network in Ubuntu][19]

Now you can select the network you want to connect to and click connect. That's it.

### Conclusion
Restarting your network connection is something every Linux user has to do sooner or later.

via: https://itsfoss.com/restart-network-ubuntu

Author: [Sergiu][a]
Topic selection: [lujun9972][b]
Translator: [bodhix](https://github.com/bodhix)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10812-1.html)
[#]: subject: (What is 5G? How is it better than 4G?)
[#]: via: (https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html#tk.rss_all)
[#]: author: (Josh Fruhlinger https://www.networkworld.com/author/Josh-Fruhlinger/)

What is 5G? How is it better than 4G?
==========================
> 5G networks will boost wireless throughput by a factor of 10 and may replace wired broadband. But when will they be available, and why are 5G and the Internet of Things so tightly linked?

![Thinkstock][1]

[5G wireless][2] is an umbrella term describing a set of standards and technologies for faster wireless internet, theoretically 20 times faster and with 120 times less latency than 4G, setting the stage for the Internet of Things and support for new high-bandwidth applications.

### What is 5G? Technology or buzzword?
|
||||
这个技术在世界范围内完全发挥它的潜能还需要数年时间,但同时当今一些 5G 网络服务已经投入使用。5G 不仅是一个技术术语,也是一个营销术语,并不是市场上的所有 5G 服务是标准的。
|
||||
|
||||
**[来自世界移动大会:[The time of 5G is almost here][3].]**
|
||||
- [来自世界移动大会:[5G 时代即将来到][3]]
|
||||
|
||||
## 5G 速度 vs 4G
|
||||
### 5G 与 4G 的速度对比
|
||||
|
||||
无线技术的每一代,最大的呼吁是增加速度。5G 网络潜在的的峰值下载速度达到[20 Gbps,一般在 10 Gbps][4]。这不仅仅是比当前 4G 网络更快,4G 目前峰值大约 1 Gbps,并且比更多家庭的有线网络连接更快。5G 提供的网络速度能够与光纤一较高下。
|
||||
无线技术的每一代,最大的呼吁是增加速度。5G 网络潜在的峰值下载速度可以达到[20 Gbps,一般在 10 Gbps][4]。这不仅仅比当前 4G 网络更快,4G 目前峰值大约 1 Gbps,并且比更多家庭的有线网络连接更快。5G 提供的网络速度能够与光纤一较高下。
|
||||
|
||||
吞吐量不是 5G 仅有的速度提升;它还有的特点是极大降低了网络延迟*。* 这是一个重要的区分:吞吐量用来测量花费多久来下载一个大文件,而延迟由网络瓶颈决定,延迟在来回的沟通中减慢了响应速度。
|
||||
吞吐量不是 5G 仅有的速度提升;它还有的特点是极大降低了网络延迟。这是一个重要的区分:吞吐量用来测量花费多久来下载一个大文件,而延迟由网络瓶颈决定,延迟在往返的通讯中减慢了响应速度。
|
||||
|
||||
延迟很难量化,因为它在无数的网络状态中变化,但是 5G 网络在理想情况下有能力使延迟率在 1 ms 内。总的来说,5G 延迟将比 4G 降低 60 到 120 倍。这会使很多应用变得可能,例如当前虚拟现实的延迟使它变得不实际。
|
||||
延迟很难量化,因为它因各种网络状态变化而变化,但是 5G 网络在理想情况下有能力使延迟率在 1 ms 内。总的来说,5G 延迟将比 4G 降低 60 到 120 倍。这会使很多应用变得可能,例如当前虚拟现实的延迟使它变得不实际。
|
||||
|
||||
## 5G 技术
|
||||
### 5G 技术
|
||||
|
||||
The foundations of 5G are defined by a series of standards that have been in the works for the past decade. The most important of these is 5G New Radio (5G NR), standardized by 3GPP, a standards organization that develops protocols for mobile telephony. 5G NR dictates much of how 5G devices operate and [was finalized in July 2018][5].

Many distinct technologies are arriving at once to push 5G's speed as high as possible and its latency as low as possible. Here are some of the most important ones.

### Millimeter waves

5G networks mostly use frequencies in the 30 to 300 GHz range. (As the name suggests, wavelengths at these frequencies run from 1 to 10 millimeters.) These high-frequency bands can [carry more information per unit of time than lower-frequency signals][7] like those currently used by 4G LTE, which generally sits below 1 GHz, or by Wi-Fi, which tops out at 6 GHz.

Millimeter-wave technology has traditionally been expensive and difficult to deploy. Technological advances have overcome those difficulties, which is why 5G has become possible now.

### Small cells
One downside of millimeter-wave transmission is that it's more easily disrupted than 4G or Wi-Fi signals when passing through physical objects.

To overcome this, the 5G infrastructure model will differ from 4G's. Instead of the big mobile antenna masts we've come to accept as part of the landscape, 5G networks will be powered by [much smaller base stations spread throughout cities about 250 meters apart][8], creating smaller service cells.

These 5G base stations have lower power requirements than 4G's and can be attached to buildings and utility poles more easily.

### Massive MIMO

Although 5G base stations are much smaller than their 4G counterparts, they pack far more antennas. These antennas are [multiple-input, multiple-output (MIMO)][9], meaning they can handle multiple two-way conversations over the same data channel simultaneously. 5G networks can handle more than 20 times as many conversations as 4G networks.

Massive MIMO promises a [radical improvement in base-station capacity limits][10], allowing a single station to carry many more device conversations. That's why 5G may drive broader adoption of the Internet of Things: in theory, many more internet-connected wireless devices can be deployed in the same space without overwhelming the network.

### Beamforming
Making sure all those conversations travel back and forth to the right places is tricky, especially given the millimeter-wave interference problems mentioned above. To overcome them, 5G base stations deploy advanced beamforming techniques, which use constructive and destructive radio interference to make signals directional rather than broadcast, effectively boosting signal strength and range in a particular direction.

### 5G availability

The first commercial 5G network [launched in Qatar in May 2018][12]. Since then, 5G networks have expanded around the world, from Argentina to Vietnam. [Lifewire maintains a good, frequently updated list][13].

Keep in mind, though, that not all of today's 5G networks deliver on all of the technology's promises. Some early 5G offerings rely on existing 4G infrastructure, reducing the potential speed gains; other services are branded 5G for marketing purposes without meeting the standard. A close look at the offerings of U.S. wireless carriers reveals plenty of pitfalls.
### Wireless carriers and 5G

Technically, 5G service is already available in the U.S. today. But the caveats attached to each announcement vary by carrier, showing how long the road to ubiquitous 5G still is.

Verizon is probably making the biggest early 5G push. It announced that by October 2018 four cities would be part of [5G Home][14], a service that requires your other devices to connect over Wi-Fi to a dedicated 5G hotspot, which in turn connects to the network.

Verizon planned an April [launch of 5G mobile service in Minneapolis and Chicago][15], spreading to other cities over the course of the year. Access to the 5G network will cost consumers an extra monthly fee, on top of the price of a phone that can actually use 5G (more on that below). As an extra wrinkle, Verizon's rollout, dubbed [5G TF][16], doesn't actually conform to the 5G NR standard.

AT&T [announced 5G availability in a dozen U.S. cities by the end of 2018][17], with nine more by the end of 2019, but even in those cities, only the downtown business districts get access. Using the network requires a special Netgear hotspot that connects to the 5G service and then provides a Wi-Fi signal to phones and other devices.

Meanwhile, AT&T is also rolling out a speed boost to its 4G network branded as 5GE, even though the improvements have nothing to do with 5G networks. ([It is backward compatible][18].)

Sprint will offer 5G service in four cities by May 2019, with more by the end of the year. But while Sprint's 5G offering makes full use of massive MIMO cells, [it won't use millimeter-wave channels][19], meaning Sprint users won't see speed gains like those of other carriers.

T-Mobile is taking a similar approach, and it [won't roll out 5G service until late 2019][20] because there won't be any phones that can connect to it until then.

One obstacle that could keep 5G speeds from spreading quickly is the need to roll out all those small-cell base stations. Their small size and lower power requirements make them technically easier to deploy than 4G equipment, but that doesn't mean it's easy to convince governments and property owners to install a bunch of them everywhere. Verizon actually set up a [website for petitioning local elected officials][21] to speed up 5G base-station deployments.
### 5G phones: when available? When to buy?

The first phone billed as 5G, the Samsung Galaxy S10 5G, should debut in late spring 2019. You can also order a "[Moto Mod][22]" from Verizon that [turns the Moto Z3 phone into a 5G-compatible device][23].

But unless you can't resist the lure of being an early adopter, you'll want to wait a while; some odd and glaring carrier-specific issues mean your phone [may not be compatible with your carrier's entire 5G network][24].

One laggard may surprise you: Apple. Analysts are confident the iPhone won't be 5G compatible until 2020 at the earliest. But that fits the company's pattern; Apple also trailed Samsung in releasing a 4G-compatible phone in late 2012.

Still, there's no denying the 5G flood has arrived. 5G-compatible devices [dominated Mobile World Congress in Barcelona in 2019][3], so expect more choices to appear on the horizon.
### Why are people already talking about 6G?

Some experts say [5G won't be able to meet its latency and reliability targets][27]. These perfectionists are already looking toward 6G to try to address those shortcomings.

There's a [group researching new technologies that could feed into 6G][28], which calls itself the Center for Converged TeraHertz Communications and Sensing (ComSenTer). Per its description, it's working toward 100 Gbps of bandwidth for every device.

Besides adding reliability and pushing speeds further, 6G also aims to allow thousands of simultaneous connections. If it succeeds, that capability would help network IoT devices, enabling deployments of thousands of sensors in industrial settings.

Even in its embryonic state, 6G is already facing security concerns over the urgency of newly discovered [potential man-in-the-middle attacks in terahertz-based networks][29]. The good news is that there's plenty of time to solve the problem: 6G networks aren't expected before 2030.

Read more about 5G networks:
* [How enterprises can prep for 5G networks][30]
* [5G vs 4G: How speed, latency and apps support differ][31]
* [Private 5G networks are coming][32]
* [5G and 6G wireless have security issues][33]
* [How millimeter-wave wireless could help support 5G and IoT][34]

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html

Author: [Josh Fruhlinger][a]
Topic selection: [lujun9972][b]
Translator: [warmfrog](https://github.com/warmfrog)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10809-1.html)
[#]: subject: (Command line quick tips: Cutting content out of files)
[#]: via: (https://fedoramagazine.org/command-line-quick-tips-cutting-content-out-of-files/)
[#]: author: (Stephen Snow https://fedoramagazine.org/author/jakfrost/)

Command-line quick tips: cutting content out of files
======

![][1]
The Fedora distribution is a fully featured operating system with an excellent graphical desktop environment. A user can easily point and click through just about any typical task. All this wonderful ease of use masks the details of a powerful command line underneath. This article is part of a series showing some common command-line utilities. Let's drop into the shell and take a look at `cut`.

Often when you work in the command line, you're working with text files. Sometimes those files can be quite long, and while reading them in full is possible, it can be time-consuming and error-prone. In this article you'll learn how to extract content from text files and pull out exactly the information you need.

### Using cut

To demonstrate, use a standard large file on your system, such as `/etc/passwd`. As shown in an earlier article in this series, you can run `cat` to view the whole file:
```
$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
...
adm:x:3:4:adm:/var/adm:/sbin/nologin
...
```

Each line follows this format:

```
name:password:user-id:group-id:comment:home-directory:shell
```

Suppose you just want a list of all the account names on the system. If only you could pull just the "name" value out of each line. This is where `cut` comes in handy! It processes input one line at a time and extracts a specific part of the line.

The `cut` command offers options for selecting parts of a line in different ways; this example needs two of them, `-d` and `-f`. The `-d` option lets you declare the delimiter separating the values in a line; here, a colon (`:`) separates the values. The `-f` option lets you choose which field value or values to extract. So for this example, the command is:
```
$ cut -d: -f1 /etc/passwd
root
...
adm
...
```

Excellent, it worked! But the output went to standard output, which in a terminal session means it takes over the screen. What if you need the information later for another task? It would be nice to save `cut`'s output to a text file. For that, the shell has a simple built-in feature: redirection (`>`).
```
$ cut -d: -f1 /etc/passwd > names.txt
```

This puts `cut`'s output into a file named `names.txt`; you can check its contents with `cat`:

```
$ cat names.txt
root
...
adm
...
```

With two commands and one shell feature, it was easy to identify, extract, and redirect some information from one file and save it to another file for later use.
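As a further sketch along the same lines (the field numbers follow the `/etc/passwd` format shown earlier), `-f` also accepts comma-separated lists of fields:

```
# Print each account's name and login shell, sorted by name
$ cut -d: -f1,7 /etc/passwd | sort
```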
--------------------------------------------------------------------------------

via: https://fedoramagazine.org/command-line-quick-tips-cutting-content-out-of-files/

Author: [Stephen Snow][a]
Topic selection: [lujun9972][b]
Translator: [MjSeven](https://github.com/MjSeven)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[#]: collector: (lujun9972)
[#]: translator: (arrowfeng)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10811-1.html)
[#]: subject: (How To Install And Configure NTP Server And NTP Client In Linux?)
[#]: via: (https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How to install and configure an NTP server and client on Linux?
======
You may have heard the term many times, or you may already be using it. In this article I'll walk you clearly through installing an NTP server and client.

Later we'll also look at **[installing the Chrony NTP client][1]**.

### What is the NTP service?

NTP stands for Network Time Protocol. It's a networking protocol that synchronizes clocks between computer systems over a network. In other words, it keeps systems that connect to an NTP server via an NTP or Chrony client consistent in time (maintaining an accurate time).

Over the public internet, NTP can usually keep time to within a few tens of milliseconds, and under ideal conditions on a local network it can achieve accuracy better than one millisecond.

It sends and receives timestamps over the User Datagram Protocol (UDP) on port 123, and it's a client/server application.
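As a quick sanity check of that last point (a sketch assuming the `ss` utility from iproute2 and a running `ntpd`), you can confirm the daemon is listening on UDP port 123:

```
# List UDP listeners and filter for the NTP port
sudo ss -ulpn | grep ':123'
```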
### NTP client

An NTP client synchronizes its clock with a network time server.

### Chrony client

Chrony is a replacement for the NTP client. It can synchronize the system clock faster and with better accuracy, and it's useful for systems that aren't always online.

### Why do we need the NTP service?

To keep all the servers in your organization precisely in sync for time-based jobs.

To illustrate, here's a scenario. Say we have two servers (server 1 and server 2). Server 1 normally finishes its offline jobs at 10:55, and at 11:00 server 2 needs to run other jobs based on the reports of the jobs server 1 completed.

If the two servers use different times (say server 2's clock is ahead of server 1's, so server 1 lags behind), we can't run those jobs. To get consistent time, we should install NTP.

I hope that clears up any doubts about NTP.

In this article, we'll test with the following setup:

* **NTP server:** hostname: CentOS7.2daygeek.com, IP: 192.168.1.8, OS: CentOS 7
* **NTP client:** hostname: Ubuntu18.2daygeek.com, IP: 192.168.1.5, OS: Ubuntu 18.04
### NTP server side: how to install NTP on Linux?

Since NTP is a client/server architecture, the server-side and client-side packages are the same. NTP packages are available in every distribution's official repositories, so use your distribution's package manager to install them.

On Fedora, use the [DNF command][2] to install ntp:

```
$ sudo dnf install ntp
```

On Debian/Ubuntu, use the [APT-GET command][3] or the [APT command][4] to install ntp:

```
$ sudo apt install ntp
```

On Arch Linux-based systems, use the [Pacman command][5] to install ntp:

```
$ sudo pacman -S ntp
```

On RHEL/CentOS, use the [YUM command][6] to install ntp:

```
$ sudo yum install ntp
```

On openSUSE Leap, use the [Zypper command][7] to install ntp:

```
$ sudo zypper install ntp
```
### How to configure the NTP server on Linux?

After installing the NTP package, make sure the following settings are uncommented in the server's `/etc/ntp.conf` file.

By default, the NTP server configuration relies on `X.distribution_name.pool.ntp.org`. You can use the default, or visit <https://www.ntppool.org/zone/@> and change it according to your location (a particular country).

For instance, if you're in India your NTP server would be `0.in.pool.ntp.org`, and that address pattern works for most countries.

```
# vi /etc/ntp.conf

restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server 0.asia.pool.ntp.org
server 1.asia.pool.ntp.org
server 2.asia.pool.ntp.org
server 3.asia.pool.ntp.org
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
```

We only allow clients from the `192.168.1.0/24` subnet to access this NTP server.

Since the firewall is enabled by default on RHEL 7-based distributions, allow the ntp service through it:

```
# firewall-cmd --add-service=ntp --permanent
# firewall-cmd --reload
```
Restart the service after updating the configuration.

For sysvinit systems (Debian-based systems need to run `ntp` instead of `ntpd`):

```
# service ntpd restart
# chkconfig ntpd on
```

For systemd systems (again, Debian-based systems use `ntp` instead of `ntpd`):

```
# systemctl restart ntpd
# systemctl enable ntpd
```
### NTP client side: how to install the NTP client on Linux?

As I said earlier in this article, the NTP server-side and client-side packages are the same, so install the same package on the client.

On Fedora, use the [DNF command][2] to install ntp:

```
$ sudo dnf install ntp
```

On Debian/Ubuntu, use the [APT-GET command][3] or the [APT command][4] to install ntp:

```
$ sudo apt install ntp
```

On Arch Linux-based systems, use the [Pacman command][5] to install ntp:

```
$ sudo pacman -S ntp
```

On RHEL/CentOS, use the [YUM command][6] to install ntp:

```
$ sudo yum install ntp
```

On openSUSE Leap, use the [Zypper command][7] to install ntp:

```
$ sudo zypper install ntp
```
I've installed and configured the NTP server on the host `CentOS7.2daygeek.com`, so point all the client machines at it:

```
# vi /etc/ntp.conf
```

```
restrict default kod nomodify notrap nopeer noquery
restrict -6 default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict -6 ::1
server CentOS7.2daygeek.com prefer iburst
driftfile /var/lib/ntp/drift
keys /etc/ntp/keys
```

Restart the service after updating the configuration.
For sysvinit systems (Debian-based systems need to run `ntp` instead of `ntpd`):

```
# service ntpd restart
# chkconfig ntpd on
```

For systemd systems (again, Debian-based systems use `ntp` instead of `ntpd`):

```
# systemctl restart ntpd
# systemctl enable ntpd
```

After restarting the NTP service, wait a few minutes for the client to pull synchronized time from the NTP server.
Run the following commands on Linux to verify the NTP service's synchronization status:

```
# ntpq -p
or
# ntpq -pn

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*CentOS7.2daygee 133.243.238.163  2 u   14   64   37    0.686    0.151  16.432
```

Run the following command to get the current status of ntpd:

```
# ntpstat
synchronised to NTP server (192.168.1.8) at stratum 3
   time correct to within 508 ms
   polling server every 64 s
```

Finally, run the `date` command:

```
# date
Tue Mar 26 23:17:05 CDT 2019
```

If you see a large time offset in the NTP output, run the following command to sync the clock manually from the NTP server. Make sure your NTP client is inactive when you execute it. (LCTT translator's note: when the offset is large, the client's gradual correction can take a very long time to catch up, so run this manually to update.)

```
# ntpdate -uv CentOS7.2daygeek.com
```
--------------------------------------------------------------------------------

via: https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/

Author: [Magesh Maruthamuthu][a]
Topic selection: [lujun9972][b]
Translator: [arrowfeng](https://github.com/arrowfeng)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/
[2]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10817-1.html)
[#]: subject: (Installing Ubuntu MATE on a Raspberry Pi)
[#]: via: (https://itsfoss.com/ubuntu-mate-raspberry-pi/)
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)

Installing Ubuntu MATE on a Raspberry Pi
=================================

> Brief: this quick guide shows you how to install Ubuntu MATE on a Raspberry Pi device.
The [Raspberry Pi][1] is by far the most popular single-board computer and the board of choice for makers. [Raspbian][2] is the Pi's official operating system, based on Debian. It's lightweight and comes bundled with educational tools and utilities that get the job done in most situations.

[Installing Raspbian][3] is just as simple, but the problem that comes along with [Debian][4] is slow upgrade cycles and older packages.

Running Ubuntu on the Raspberry Pi gives you a richer experience and up-to-date software. We have a few choices when it comes to running Ubuntu on your Pi:

1. [Ubuntu MATE][5]: Ubuntu MATE is the only distribution that natively supports the Raspberry Pi and includes a complete desktop environment.
2. [Ubuntu Server 18.04][6] + manually installing a desktop environment.
3. Images built by the community with the [Ubuntu Pi Flavor Maker][7]; these images support only the Raspberry Pi 2B and 3B variants and can **not** be updated to the latest LTS release.

The first option is the simplest and quickest to install, while the second gives you the freedom to install the desktop environment of your choice. I recommend either of the first two.

Here are the links to the disk images. In this article I'll only cover the Ubuntu MATE installation.
### Installing Ubuntu MATE on a Raspberry Pi

Head to Ubuntu MATE's download page and grab the recommended image.

![][8]

The experimental ARM64 build should be used only if you need to run 64-bit applications like MongoDB on a Raspberry Pi server.

- [Download Ubuntu MATE for the Raspberry Pi][9]

#### Step 1: Set up the SD card

Once downloaded, the image file needs to be decompressed. You can simply right-click it to extract it.

Alternatively, the following command does the same thing:

```
xz -d ubuntu-mate***.img.xz
```
If you're on Windows, you can use [7-zip][10] instead.

Install [Balena Etcher][11]; we'll use this tool to write the image to the SD card. Make sure your SD card is at least 8 GB.

Launch Etcher, then select the image file and the SD card.

Once the process completes, the SD card is ready.
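If you'd rather stay in the terminal, a rough command-line equivalent of Etcher is a sketch like the one below. The image name follows the pattern above, and `/dev/sdX` is a placeholder you must replace with your actual SD card device: `dd` will overwrite it without asking.

```
# Write the extracted image to the SD card, showing progress and flushing caches
sudo dd if=ubuntu-mate***.img of=/dev/sdX bs=4M status=progress conv=fsync
```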
#### Step 2: Set up the Raspberry Pi

You probably already know you need a few peripherals to use a Raspberry Pi, such as a mouse, keyboard, and HDMI cable. You can also [install a Raspberry Pi without a keyboard and mouse][13], but this guide isn't about that.

* Plug in a mouse and a keyboard.
* Connect the HDMI cable.
* Insert the SD card into the SD card slot.

Plug in the power cable to power it on. Make sure you have a good power supply (5V, 3A minimum); a poor power supply can degrade performance.
#### Installing Ubuntu MATE

Once you power up the Raspberry Pi, you'll be greeted by the very familiar Ubuntu installation process. The process from here is fairly straightforward.

![Select your keyboard layout][14]

![Add a username and password][16]

After setting the keyboard layout, time zone, and user credentials, you'll be taken to the login screen after a few minutes. And voilà! You're almost done.

![][17]
Once you log in, update the system:

```
sudo apt update
sudo apt upgrade
```

![][19]

Once the updates finish installing, you're good to go. You can go on to install Raspberry Pi-specific packages for GPIO and other I/O, depending on your needs.

What made you consider installing Ubuntu on the Raspberry Pi, and how has your experience with Raspbian been? Let me know in the comments below.
--------------------------------------------------------------------------------

via: https://itsfoss.com/ubuntu-mate-raspberry-pi/

Author: [Chinmay][a]
Topic selection: [lujun9972][b]
Translator: [warmfrog](https://github.com/warmfrog)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10808-1.html)
[#]: subject: (This is how System76 does open hardware)
[#]: via: (https://opensource.com/article/19/4/system76-hardware)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

This is how System76 does open hardware
================================

> What makes the new Thelio line of desktops different.

![Metrics and data shown on a computer screen][1]
Most people don't know much about the hardware in their computers. As a long-time Linux user, I've had my share of frustration getting my wireless cards, video cards, displays, and other hardware working with my distribution of choice. Proprietary hardware often makes it difficult to determine why an Ethernet driver, wireless driver, or mouse driver behaves differently than we'd expect. As Linux distributions have matured, this may no longer be a problem, but we still see odd behavior from touchpads and other peripherals, especially when we don't know much about the underlying hardware.

Companies like [System76][2] aim to solve these problems and improve the Linux user experience. System76 makes a line of Linux laptops, desktops, and servers, and even offers its own Linux distribution, [Pop!_OS][3], as an option for customers. Recently I had the privilege of touring System76's factory in Denver for [the unveiling][5] of its new desktop line, [Thelio][5].

### About Thelio

System76 says one of Thelio's distinguishing features in the market is its open hardware daughterboard, named Thelio Io after the fifth moon of Jupiter. Thelio Io is certified open hardware by the Open Source Hardware Association ([OSHWA #us000145][6]) and has four SATA ports for storage and an embedded controller for fan and power-button control. Thelio Io SAS is certified [OSHWA #us000146][7] and has four U.2 ports for storage and no embedded controller. During the demonstration, System76 showed how these components steer the fans through the chassis to optimize the parts' performance.

The controller also manages the power button and the LED ring around it, which glows at 100% brightness when the button is pressed. This provides both tactile and visual confirmation that the box is powering on. While the computer is in use, the button is set to 35% brightness, and in sleep mode it pulses between 2.35% and 25%. When the computer is off, the LED stays dimly lit so you can find the power button in a dark room.

Thelio's embedded controller is a low-power [ATmega32U4][8] microcontroller, and the controller's setup can be prototyped with an Arduino Micro. The number of Thelio Io boards varies with which Thelio model you purchase.

Thelio may be the best-designed computer case and system I've ever seen. If you've ever had the experience of working inside a regular PC, you'll probably agree with me. I've done it plenty of times, and I can attest from my own bad experiences.

### Why open hardware?

The boards were designed in [KiCAD][9], and you can access all of Thelio's design files on [GitHub][10] under a GPL license. So why would a company that competes with other PC makers design a unique interface and license it openly? Probably because the company recognizes the value of open design and of the ability to adapt and share an I/O board design according to your needs, even if you're a competitor in the market.

![Don Watkins speaks with System76 CEO Carl Richell at the Thelio launch][11]

*Don Watkins speaks with System76 CEO Carl Richell at the [Thelio launch event][12].*

I asked [Carl Richell][13], System76's designer and CEO, whether the company worries that openly licensing its hardware designs means someone could take its unique design and use it to drive System76 out of the market. He said:

> Open hardware benefits us all. It's how we'll advance technology in the future and make it more accessible to everyone. We welcome anyone who wants to improve Thelio's design to do so. Opening the hardware not only helps us improve our computers faster, it lets our customers trust their devices 100%. Our goal is to remove as much proprietary functioning as we can, while still delivering a competitive Linux box for our customers.
>
> We've worked with the Linux community for 13 years to create a perfectly smooth experience across our laptops, desktops, and servers. Our long-standing focus on serving the Linux community, and providing our customers a high level of service, is what makes System76 unique.

I also asked Carl why open hardware makes sense for System76 and the PC market. He replied:

> System76 was founded on the idea that technology should be open and accessible to everyone. We're not yet at the point where we can create a computer that's 100% open source, but with open hardware we've taken an essential step toward that goal.
>
> We live in an era where technology has become a utility. Computers are tools for people at every level of education and in many industries. Because everyone's needs are specific, everyone has their own ideas about how to improve computers and software as their primary tools. Opening our computers lets those ideas come to fruition, which in turn makes technology a more powerful tool. In an open environment, we iterate continuously toward a better PC. Which is kind of cool.

We wrapped up our conversation by discussing System76's roadmap, which includes open hardware mini PCs and, eventually, laptops. The mini PCs and laptops currently sold under the System76 brand are manufactured by other vendors and aren't based on open hardware (though their Linux software, of course, is open source).

Designing and supporting open hardware is a game-changer in the PC business, and it's what sets System76's new Thelio desktop line apart.
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/system76-hardware

Author: [Don Watkins][a]
Topic selection: [lujun9972][b]
Translator: [warmfrog](https://github.com/warmfrog)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://system76.com/
[3]: https://opensource.com/article/18/1/behind-scenes-popos-linux
[4]: /article/18/11/system76-thelio-desktop-computer
[5]: https://system76.com/desktops
[6]: https://certification.oshwa.org/us000145.html
[7]: https://certification.oshwa.org/us000146.html
[8]: https://www.microchip.com/wwwproducts/ATmega32u4
[9]: http://kicad-pcb.org/
[10]: https://github.com/system76/thelio-io
[11]: https://opensource.com/sites/default/files/uploads/don_system76_ceo.jpg (Don Watkins speaks with System76 CEO Carl Richell at the Thelio launch event.)
[12]: https://trevgstudios.smugmug.com/System76/121418-Thelio-Press-Event/i-FKWFxFv
[13]: https://www.linkedin.com/in/carl-richell-9435781
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10814-1.html)
[#]: subject: (8 environment-friendly open software projects you should know)
[#]: via: (https://opensource.com/article/19/4/environment-projects)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)

8 environment-friendly open software projects you should know
======

> Celebrate Earth Day by contributing to these projects dedicated to improving the environment.

![][1]
For the past few years, I've been helping [Greenpeace][2] build its first fully open source software project, Planet 4. [Planet 4][3] is a global engagement platform where Greenpeace supporters and activists can interact and get involved with the organization. Its goal is to move people to action on behalf of our planet. We want to invite participation and use people power to address global issues like climate change and plastic pollution. Developers, designers, writers, contributors, and anyone else who supports environmentalism through open source are more than welcome to [get involved][4]!

Planet 4 is far from the only open source project focused on the environment. For Earth Day, here are seven other open source projects with our planet in mind.

[Eco Hacker Farm][5] works to support sustainable communities. It advises and supports projects that combine hackerspaces/hackbases with permaculture living, and it runs online projects as well. Visit its [wiki][6] or [Twitter][7] to learn more about what Eco Hacker Farm is doing.

[Public Lab][8] is an open community and nonprofit dedicated to putting science in the hands of citizens. Formed after the BP oil disaster in 2010, Public Lab works with open source to aid environmental exploration and investigation. It's a diverse community with lots of ways to [contribute][9].

A while back, Opensource.com community moderator Don Watkins wrote about [Open Climate Workbench][10], a project from the Apache Foundation. [OCW][11] provides software for climate modeling and evaluation, with a variety of applications.

[Open Source Ecology][12] is a project aiming to improve how our economy functions. With an eye on environmental regeneration and social justice, it seeks to redefine some of our dirty production and distribution techniques to create a more sustainable civilization.

Fostering collaboration between open source and big-data tools to enable research on oceans, atmosphere, land, and climate, "[Pangeo][13] is the first community promoting open, reproducible, and scalable science." Big data can change the world!

[Leaflet][14] is a well-known open source JavaScript library. It's used for all sorts of things, including environmental projects like the [Arctic Web Map][15], which lets scientists accurately visualize and analyze the Arctic region, a crucial capability for climate research.

And of course this list (which is by no means complete!) would be nothing without my friends at Mozilla. The [Mozilla Science Lab][16] community is, like all Mozilla projects, fiercely open, and it's dedicated to bringing open source principles to the scientific community. Its projects and communities enable scientists to do the kinds of research our world needs to address some of the most pervasive environmental problems.

### How to contribute

This Earth Day, make a six-month commitment to donate some of your time to an open source project that helps fight climate change or otherwise encourages people to protect Mother Earth. There must be plenty of other environment-focused open source projects out there, so leave them in the comments!
--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/environment-projects

Author: [Laura Hilliger][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)

[a]: https://opensource.com/users/laurahilliger
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/world_hands_diversity.png?itok=zm4EDxgE
[2]: http://www.greenpeace.org
[3]: http://medium.com/planet4
[4]: https://planet4.greenpeace.org/community/#partners-open-sourcers
[5]: https://wiki.ecohackerfarm.org/start
[6]: https://wiki.ecohackerfarm.org/
[7]: https://twitter.com/EcoHackerFarm
[8]: https://publiclab.org/
[9]: https://publiclab.org/contribute
[10]: https://opensource.com/article/17/1/apache-open-climate-workbench
[11]: https://climate.apache.org/
[12]: https://wiki.opensourceecology.org/wiki/Project_needs
[13]: http://pangeo.io/
[14]: https://leafletjs.com/
[15]: https://webmap.arcticconnect.ca/#ac_3573/2/20.8/-65.5
[16]: https://science.mozilla.org/
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10807-1.html)
[#]: subject: (Tracking the weather with Python and Prometheus)
[#]: via: (https://opensource.com/article/19/4/weather-python-prometheus)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

Tracking the weather with Python and Prometheus
======

> Create a custom Prometheus integration to track the biggest cloud provider of all: Mother Earth.

![Tree clouds][1]

The [Prometheus][2] open source monitoring system has integrations for tracking many kinds of time-series data, but if the integration you want doesn't exist, it's easy to build one. An often-used example is a custom integration with a cloud provider that uses the provider's API to grab specific metrics. In this example, though, we'll integrate with the biggest cloud provider of all: Earth.

Luckily, the U.S. government already measures the weather and provides an easy API for integrations. Getting the forecast for the next hour at Red Hat headquarters is simple:
```
import requests
HOURLY_RED_HAT = "https://api.weather.gov/gridpoints/RAH/73,57/forecast/hourly"

def get_temperature():
    result = requests.get(HOURLY_RED_HAT)
    return result.json()["properties"]["periods"][0]["temperature"]
```

Now that our integration with Earth is done, it's time to make sure Prometheus can understand what we're saying. We can use a gauge from the [Prometheus Python library][3] to create a registry item: the temperature at Red Hat HQ.
```
from prometheus_client import CollectorRegistry, Gauge

def prometheus_temperature(num):
    registry = CollectorRegistry()
    # gauge and label names here are illustrative reconstructions
    g = Gauge("red_hat_temp", "Temperature at Red Hat HQ", registry=registry)
    g.set(num)
    return registry
```

Finally, we need to connect this to Prometheus somehow. That depends a little on the network topology: is it easier for Prometheus to talk to our service, or is the reverse easier?

The first case is the one usually recommended where possible, so we need to build a web server that exposes the registry, and then configure Prometheus to scrape it.

We can build a simple web server with [Pyramid][4].
```
from pyramid.config import Configurator
from pyramid.response import Response
from prometheus_client import generate_latest, CONTENT_TYPE_LATEST

def metrics_web(request):
    # serve the current registry in the Prometheus text format
    registry = prometheus_temperature(get_temperature())
    return Response(generate_latest(registry),
                    content_type=CONTENT_TYPE_LATEST)

config = Configurator()
config.add_route('metrics', '/metrics')
config.add_view(metrics_web, route_name='metrics')
app = config.make_wsgi_app()
```

This can be run with any Web Server Gateway Interface (WSGI) server. For example, assuming we put the code in `earth.py`, we could run it with `python -m twisted web --wsgi earth.app`.
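For a quick manual check (a sketch: port 8080 is twisted's default for `python -m twisted web`; adjust if you run another WSGI server), you can fetch the same endpoint Prometheus would scrape:

```
# Fetch the metrics page by hand
curl http://localhost:8080/metrics
```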
Alternatively, if it's easier for our code to connect to Prometheus, we can push the data to Prometheus's [push gateway][5] periodically.
|
||||
```
|
||||
import time
|
||||
from prometheus_client import push_to_gateway
|
||||
@ -73,7 +72,7 @@ def push_temperature(url):
|
||||
time.sleep(60*60)
|
||||
```
|
||||
|
||||
URL 是推送网关的 URL。它通常以 **:9091** 结尾。
|
||||
这里的 URL 是推送网关的 URL。它通常以 `:9091` 结尾。
|
||||
|
||||
祝你构建自定义 Prometheus 集成成功,以便跟踪一切!
|
||||
|
||||
@ -81,10 +80,10 @@ URL 是推送网关的 URL。它通常以 **:9091** 结尾。
|
||||
|
||||
via: https://opensource.com/article/19/4/weather-python-prometheus

Author: [Moshe Zadka][a]
Topic selection: [lujun9972][b]
Translator: [geekpi](https://github.com/geekpi)
Proofreader: [wxy](https://github.com/wxy)

This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux China](https://linux.cn/)
[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10815-1.html)
[#]: subject: (How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?)
[#]: via: (https://www.2daygeek.com/check-monitor-disk-io-in-linux-using-iotop-iostat-command/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How to monitor disk I/O activity using the iotop and iostat commands in Linux?
======================================

Do you know which tools we use to troubleshoot and monitor real-time disk activity on Linux? If [Linux system performance][1] slows down, we use the [top command][2] to look at system performance. It's used to check which processes have high utilization on the server; it's familiar to most Linux sysadmins and widely used by them in the real world.
If you don't see much difference in the process output, you still have other things to check. I'd suggest looking at the `wa` state in the `top` output, because most of the time server performance degrades due to heavy I/O reads and writes on disk. If it's high or fluctuating, that's most likely the cause, so we need to check the I/O activity on the hard disks.

On Linux, we can monitor disk I/O statistics for all disks and filesystems using the `iotop` and `iostat` commands.

### What is iotop?

`iotop` is a `top`-like utility for displaying real-time disk activity.

`iotop` watches I/O usage information output by the Linux kernel and displays a table of current I/O usage by processes or threads on the system.

It displays the I/O bandwidth read and written by each process/thread, as well as the percentage of time the thread/process spent while swapping in and while waiting on I/O.

`Total DISK READ` and `Total DISK WRITE` represent the total read and write bandwidth between processes and kernel threads on one side, and the kernel block device subsystem on the other.

`Actual DISK READ` and `Actual DISK WRITE` represent the actual disk I/O bandwidth between the kernel block device subsystem and the underlying hardware (HDD, SSD, etc.).

### How to install iotop on Linux?
我们可以轻松在包管理器的帮助下安装,因为该软件包在所有的 Linux 发行版仓库中都可以获得。
|
||||
|
||||
对于 **`Fedora`** 系统,使用 **[DNF 命令][3]** 来安装 iotop。
|
||||
对于 Fedora 系统,使用 [DNF 命令][3] 来安装 `iotop`。
|
||||
|
||||
```
|
||||
$ sudo dnf install iotop
|
||||
```
|
||||
|
||||
对于 **`Debian/Ubuntu`** 系统,使用 **[API-GET 命令][4]** 或者 **[APT 命令][5]** 来安装 iotop。
|
||||
对于 Debian/Ubuntu 系统,使用 [API-GET 命令][4] 或者 [APT 命令][5] 来安装 `iotop`。
|
||||
|
||||
```
|
||||
$ sudo apt install iotop
|
||||
```
|
||||
|
||||
对于基于 **`Arch Linux`** 的系统,使用 **[Pacman Command][6]** 来安装 iotop。
|
||||
对于基于 Arch Linux 的系统,使用 [Pacman Command][6] 来安装 `iotop`。
|
||||
|
||||
```
|
||||
$ sudo pacman -S iotop
|
||||
```
|
||||
|
||||
对于 **`RHEL/CentOS`** 的系统,使用 **[YUM Command][7]** 来安装 iotop。
|
||||
对于 RHEL/CentOS 的系统,使用 [YUM Command][7] 来安装 `iotop`。
|
||||
|
||||
```
|
||||
$ sudo yum install iotop
|
||||
```
|
||||
|
||||
对于使用 **`openSUSE Leap`** 的系统,使用 **[Zypper Command][8]** 来安装 iotop。
|
||||
对于使用 openSUSE Leap 的系统,使用 [Zypper Command][8] 来安装 `iotop`。
|
||||
|
||||
```
|
||||
$ sudo zypper install iotop
|
||||
@ -76,72 +64,70 @@ $ sudo zypper install iotop

### 在 Linux 中如何使用 iotop 命令来监控磁盘 I/O 活动/统计?

iotop 命令有很多参数来检查关于磁盘 I/O 的变化

`iotop` 命令有很多参数来检查关于磁盘 I/O 的变化:

```
# iotop
```

[![][9]![][9]][10]

![10]

如果你想检查那个进程实际在做 I/O,那么运行 iotop 命令加上 `-o` 或者 `--only` 参数。

如果你想检查哪个进程实际在做 I/O,那么运行 `iotop` 命令加上 `-o` 或者 `--only` 参数。

```
# iotop --only
```

[![][9]![][9]][11]

**细节:**

* **`IO:`** 它显示每个进程的 I/O 利用率,包含磁盘和交换。
* **`SWAPIN:`** 它只显示每个进程的交换使用率。

![11]

细节:

* `IO`:它显示每个进程的 I/O 利用率,包含磁盘和交换。
* `SWAPIN`:它只显示每个进程的交换使用率。
### 什么是 iostat?

iostat 被用来报告中央处理单元(CPU)的统计和设备与分区的输出/输出的统计。

`iostat` 被用来报告中央处理单元(CPU)的统计和设备与分区的输入/输出的统计。

iostat 命令通过观察与他们平均传输率相关的设备活跃时间来监控系统输入/输出设备载入。

`iostat` 命令通过观察与它们平均传输率相关的设备活跃时间来监控系统输入/输出设备负载。

iostat 命令生成的报告可以被用来改变系统配置来更好的平衡物理磁盘之间的输入/输出负载。

`iostat` 命令生成的报告可以被用来改变系统配置,以更好地平衡物理磁盘之间的输入/输出负载。

所有的统计都在 iostat 命令每次运行时被报告。该报告包含一个 CPU 头部,后面是一行 CPU 统计。

所有的统计都在 `iostat` 命令每次运行时被报告。该报告包含一个 CPU 头部,后面是一行 CPU 统计。

在多处理器系统中,CPU 统计被计算为系统层面的所有处理器的平均值。一个设备头行显示后紧跟一行每个配置设备的统计。

在多处理器系统中,CPU 统计被计算为系统层面的所有处理器的平均值。设备头行之后,紧跟着每个配置设备各一行的统计。

iostat 命令生成两种类型的报告,CPU 利用率报告和设备利用率报告。

`iostat` 命令生成两种类型的报告:CPU 利用率报告和设备利用率报告。

### 在 Linux 中怎样安装 iostat?

iostat 工具是 sysstat 包的一部分,所以我们可以轻松地在包管理器地帮助下安装因为在所有的 Linux 发行版的仓库都是可以获得的。

`iostat` 工具是 `sysstat` 包的一部分,所以我们可以轻松地在包管理器的帮助下安装,因为该软件包在所有 Linux 发行版的仓库中都可以获得。

对于 **`Fedora`** 系统,使用 **[DNF Command][3]** 来安装 sysstat。

对于 Fedora 系统,使用 [DNF 命令][3] 来安装 `sysstat`。

```
$ sudo dnf install sysstat
```

对于 **`Debian/Ubuntu`** 系统,使用 **[APT-GET Command][4]** 或者 **[APT Command][5]** 来安装 sysstat。

对于 Debian/Ubuntu 系统,使用 [APT-GET 命令][4] 或者 [APT 命令][5] 来安装 `sysstat`。

```
$ sudo apt install sysstat
```

对于基于 **`Arch Linux`** 的系统,使用 **[Pacman Command][6]** 来安装 sysstat。

对于基于 Arch Linux 的系统,使用 [Pacman 命令][6] 来安装 `sysstat`。

```
$ sudo pacman -S sysstat
```

对于 **`RHEL/CentOS`** 系统,使用 **[YUM Command][7]** 来安装 sysstat。

对于 RHEL/CentOS 系统,使用 [YUM 命令][7] 来安装 `sysstat`。

```
$ sudo yum install sysstat
```

对于 **`openSUSE Leap`** 系统,使用 **[Zypper Command][8]** 来安装 sysstat。

对于 openSUSE Leap 系统,使用 [Zypper 命令][8] 来安装 `sysstat`。

```
$ sudo zypper install sysstat

@ -149,9 +135,9 @@ $ sudo zypper install sysstat

### 在 Linux 中如何使用 sysstat 命令监控磁盘 I/O 活动/统计?

在 iostat 命令中有很多参数来检查关于 I/O 和 CPU 的变化统计信息。

在 `iostat` 命令中有很多参数来检查关于 I/O 和 CPU 的变化统计信息。

不加参数运行 iostat 命令会看到完整的系统统计。

不加参数运行 `iostat` 命令会看到完整的系统统计。

```
# iostat

@ -169,7 +155,7 @@ loop1 0.00 0.00 0.00 0.00 1093
loop2 0.00 0.00 0.00 0.00 1077 0 0
```

运行 iostat 命令加上 `-d` 参数查看所有设备的 I/O 统计。

运行 `iostat` 命令加上 `-d` 参数查看所有设备的 I/O 统计。

```
# iostat -d

@ -184,7 +170,7 @@ loop1 0.00 0.00 0.00 0.00 1093
loop2 0.00 0.00 0.00 0.00 1077 0 0
```

运行 iostat 命令加上 `-p` 参数查看所有的设备和分区的 I/O 统计。

运行 `iostat` 命令加上 `-p` 参数查看所有的设备和分区的 I/O 统计。

```
# iostat -p

@ -206,7 +192,7 @@ loop1 0.00 0.00 0.00 0.00 1093
loop2 0.00 0.00 0.00 0.00 1077 0 0
```

运行 iostat 命令加上 `-x` 参数显示所有设备的详细的 I/O 统计信息。

运行 `iostat` 命令加上 `-x` 参数显示所有设备的详细的 I/O 统计信息。

```
# iostat -x

@ -224,7 +210,7 @@ loop1 0.00 0.00 0.00 0.00 0.40 12.86 0.00 0.
loop2 0.00 0.00 0.00 0.00 0.38 19.58 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
```

运行 iostat 命令加上 `-d [设备名]` 参数查看具体设备和它的分区的 I/O 统计信息。

运行 `iostat` 命令加上 `-p [设备名]` 参数查看具体设备和它的分区的 I/O 统计信息。

```
# iostat -p [Device_Name]

@ -242,7 +228,7 @@ sda2 0.18 6.76 80.21 0.00 3112916 36924
sda1 0.00 0.01 0.00 0.00 3224 0 0
```

运行 iostat 命令加上 `-m` 参数以 `MB` 为单位而不是 `KB` 查看所有设备的统计。默认以 KB 显示输出。

运行 `iostat` 命令加上 `-m` 参数以 MB 为单位而不是 KB 查看所有设备的统计。默认以 KB 显示输出。

```
# iostat -m

@ -260,7 +246,7 @@ loop1 0.00 0.00 0.00 0.00 1
loop2 0.00 0.00 0.00 0.00 1 0 0
```

运行 iostat 命令使用特定的间隔使用如下的格式。在这个例子中,我们打算以 5 秒捕获的间隔捕获两个报告。

运行 `iostat` 命令时可以使用如下格式指定采集间隔。在这个例子中,我们打算以 5 秒的间隔捕获两个报告。

```
# iostat [Interval] [Number Of Reports]

@ -290,7 +276,7 @@ loop1 0.00 0.00 0.00 0.00 0
loop2 0.00 0.00 0.00 0.00 0 0 0
```

运行 iostat 命令 与 `-N` 参数来查看 LVM 磁盘 I/O 统计报告。

运行 `iostat` 命令与 `-N` 参数来查看 LVM 磁盘 I/O 统计报告。

```
# iostat -N

@ -307,7 +293,7 @@ sdc 0.01 0.12 0.00 2108 0
2g-2gvol1 0.00 0.07 0.00 1204 0
```

运行 nfsiostat 命令来查看 Network File System(NFS)的 I/O 统计。

运行 `nfsiostat` 命令来查看网络文件系统(NFS)的 I/O 统计。

```
# nfsiostat

@ -320,7 +306,7 @@ via: https://www.2daygeek.com/check-monitor-disk-io-in-linux-using-iotop-iostat-

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[warmfrog](https://github.com/warmfrog)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
@ -0,0 +1,63 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Edge computing is in most industries’ future)
[#]: via: (https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)

Edge computing is in most industries’ future
======
Nearly every industry can take advantage of edge computing in the journey to speed digital transformation efforts
![iStock][1]

The growth of edge computing is about to take a huge leap. Right now, companies are generating about 10% of their data outside a traditional data center or cloud. But within the next six years, that will increase to 75%, [according to Gartner][2].

That’s largely down to the need to process data emanating from devices, such as Internet of Things (IoT) sensors. Early adopters include:

* **Manufacturers:** Devices and sensors seem endemic to this industry, so it’s no surprise to see the need to find faster processing methods for the data produced. A recent [_Automation World_][3] survey found that 43% of manufacturers have deployed edge projects. Most popular use cases have included production/manufacturing data analysis and equipment data analytics.

* **Retailers:** Like most industries deeply affected by the need to digitize operations, retailers are being forced to innovate their customer experiences. To that end, these organizations are “investing aggressively in compute power located closer to the buyer,” [writes Dave Johnson][4], executive vice president of the IT division at Schneider Electric. He cites examples such as augmented-reality mirrors in fitting rooms that offer different clothing options without the consumer having to try on the items, and beacon-based heat maps that show in-store traffic.

* **Healthcare organizations:** As healthcare costs continue to escalate, this industry is ripe for innovation that improves productivity and cost efficiencies. Management consulting firm [McKinsey & Co. has identified][5] at least 11 healthcare use cases that benefit patients, the facility, or both. Two examples: tracking mobile medical devices for nursing efficiency as well as optimization of equipment, and wearable devices that track user exercise and offer wellness advice.

While these are strong use cases, as the edge computing market grows, so too will the number of industries adopting it.

**Getting the edge on digital transformation**

Faster processing at the edge fits perfectly into the objectives and goals of digital transformation — improving efficiencies, productivity, speed to market, and the customer experience. Here are just a few of the potential applications and industries that will be changed by edge computing:

**Agriculture:** Farmers and organizations already use drones to transmit field and climate conditions to watering equipment. Other applications might include monitoring and location tracking of workers, livestock, and equipment to improve productivity, efficiencies, and costs.

**Energy:** There are multiple potential applications in this sector that could benefit both consumers and providers. For example, smart meters help homeowners better manage energy use while reducing grid operators’ need for manual meter reading. Similarly, sensors on water pipes would detect leaks, while providing real-time consumption data.

**Financial services:** Banks are adopting interactive ATMs that quickly process data to provide better customer experiences. At the organizational level, transactional data can be more quickly analyzed for fraudulent activity.

**Logistics:** As consumers demand faster delivery of goods and services, logistics companies will need to transform mapping and routing capabilities to get real-time data, especially in terms of last-mile planning and tracking. That could involve street-, package-, and car-based sensors transmitting data for processing.

All industries have the potential for transformation, thanks to edge computing. But it will depend on how they address their computing infrastructure. Discover how to overcome any IT obstacles at [APC.com][6].

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html#tk.rss_all

作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-1019389496-100794424-large.jpg
[2]: https://www.gartner.com/smarterwithgartner/what-edge-computing-means-for-infrastructure-and-operations-leaders/
[3]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
[4]: https://blog.schneider-electric.com/datacenter/2018/07/10/why-brick-and-mortar-retail-quickly-establishing-leadership-edge-computing/
[5]: https://www.mckinsey.com/industries/high-tech/our-insights/new-demand-new-markets-what-edge-computing-means-for-hardware-companies
[6]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
@ -0,0 +1,186 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Open architecture and open source – The new wave for SD-WAN?)
[#]: via: (https://www.networkworld.com/article/3390151/open-architecture-and-open-source-the-new-wave-for-sd-wan.html#tk.rss_all)
[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/)

Open architecture and open source – The new wave for SD-WAN?
======
As networking continues to evolve, you certainly don't want to break out a forklift every time new technologies are introduced. Open architecture would allow you to replace the components of a system, and give you more flexibility to control your own networking destiny.
![opensource.com \(CC BY-SA 2.0\)][1]

I recently shared my thoughts about the [role of open source in networking][2]. I discussed two significant technological changes that we have witnessed. I call them waves, and these waves will redefine how we think about networking and security.

The first wave signifies that networking is moving to the software so that it can run on commodity off-the-shelf hardware. The second wave is the use of open source technologies, thereby removing the barriers to entry for new product innovation and rapid market access. This is especially supported in the SD-WAN market rush.

Seemingly, we are beginning to see less investment in hardware unless there is a specific segment that needs to be resolved. But generally, software-based platforms are preferred as they bring many advantages. It is evident that there has been a technology shift. We have moved networking from hardware to software and this shift has positive effects for users, enterprises and service providers.

**[ Don’t miss [customer reviews of top remote access tools][3] and see [the most powerful IoT companies][4]. | Get daily insights by [signing up for Network World newsletters][5]. ]**

### Performance (hardware vs software)

There has always been a misconception that the hardware-based platforms are faster due to the hardware acceleration that exists in the network interface controller (NIC). However, this is a mistaken belief. Nowadays, software platforms can reach performance levels similar to hardware-based platforms.

Initially, people viewed hardware as a performance-based vehicle but today this does not hold true anymore. Even the bigger vendors are switching to software-based platforms. We are beginning to see this almost everywhere in networking.

### SD-WAN and open source

SD-WAN really took off quite rapidly due to the availability of open source. It enabled the vendors to leverage all the available open source components and then create their solution on top. By and large, SD-WAN vendors used the open source as the foundation of their solution and then added additional proprietary code over the baseline.

However, even when using various open source components, there is still a lot of work left for these vendors to make it to a complete SD-WAN solution, even for reaching a baseline of centrally managed routers with flexible network architecture control, not to mention the complete SD-WAN feature set.

The result of the work done by these vendors is still closed products, so the fact they are using open source components in their products is merely a time-to-market advantage but not a big benefit to the end users (enterprises) or service providers launching hosted services with these products. They are still limited in flexibility, and vendor diversity is only achieved through a multi-vendor strategy, which in practice means launching multiple silo services, each based on a different SD-WAN vendor, without real selection of the technologies that make up each of the SD-WAN services they launch.

I recently came across a company called [Flexiwan][6], whose goal is to fundamentally change this limitation of SD-WAN by offering a full open source solution that, as they say, “includes integration points in the core of the system that allow for 3rd party logic to be integrated in an efficient way.” They call this an open architecture, which, in practical terms, means a service provider or enterprise can integrate its own application logic into the core of the SD-WAN router…or select best-of-breed sub-technologies or applications instead of having these dictated by the vendor itself. I believe there is the possibility of another wave of SD-WAN with a fully open source and open architecture to SD-WAN.

This type of architecture brings many benefits to users, enterprises and service providers, especially when compared to the typical lock-in of bigger vendors, such as Cisco and VMware.

With an open source open architecture, it’s easier to control the versions and extend more flexibility by using the software offered by different providers. It offers the ability to switch providers, not to mention being easier to install and upgrade the versions.

### SD-WAN, open source and open architecture

An SD-WAN solution that is an open source with open architecture provides a modular and decomposed type of SD-WAN. This enables the selection of elements to provide a solution.

For example, enterprises and service providers can select the best-of-breed technologies from independent vendors, such as deep packet inspection (DPI), security, wide area network (WAN) optimization, session border controller (SBC), VoIP and other traffic specific optimization logic.

Some SD-WAN vendors define an open architecture in such a way that they just have a set of APIs, for example, northbound APIs, to enable one to build management or do service chaining. This is one approach to an open architecture but in fact, it’s pretty limited since it does not bring the full benefits that an open architecture should offer.

### Open source and the fear of hacking

However, when I think about an open source and open architecture for SD-WAN, the first thing that comes to mind is bad actors. What about the code? If it’s an open source, the bad actor can find vulnerabilities, right?

The community is a powerful force and will fix any vulnerability. Also with open source, the vendor, who is the one responsible for the open source component, will fix the vulnerability much faster than a closed solution, where you are unaware of the vulnerability until a fix is released.

### The SD-WAN evolution

Before we go any further, let’s examine the history of SD-WAN and its origins, how we used to connect from the wide area network (WAN) to other branches via private or public links.

SD-WAN offers the ability to connect your organization to a WAN. This could be connecting to the Internet or other branches, to optimally deliver applications with a good user experience. Essentially, SD-WAN allows the organizations to design the architecture of their network dynamically by means of software.

### In the beginning, there was IPSec

It started with IPSec. Around two decades back, in 1995, the popular design was that of mesh architecture. As a result, we had a lot of point-to-point connections. Firstly, mesh architectures with IPSec VPNs are tiresome to manage as there is a potential for hundreds of virtual private network (VPN) configurations.

Authentically, IPSec started with two fundamental goals. The first was the tunneling protocol that would allow organizations to connect the users or other networks to their private network. This enabled the enterprises to connect to networks that they did not have a direct route to.

The second goal of IPSec was to encrypt packets at the network layer, thereby securing the data in motion. Let’s face it: at that time, IPSec was terrible for complicated multi-site interconnectivity and high availability designs. If left to its defaults, IPSec is best suited for static designs.

This was the reason why we had to step into the next era where additional functionality was added to IPSec. For example, IPSec had issues in supporting routing protocols using multicast. To overcome this, IPSec over generic routing encapsulation (GRE) was introduced.

### The next era of SD-WAN

During the journey to 2008, one could argue that the next era of WAN connectivity was when additional features were added to IPSec. At this time IPSec became known as a “Swiss army knife.” It could do many things but not anything really well.

Back then, you could create multiple links, but it failed to select the traffic over these links other than by using simple routing. You needed to add a routing protocol. For advanced agile architectures, IPSec had to be incorporated with other higher-level protocols.

Features were then added based on measuring link quality: analyzing delay and drops, and selecting alternative links for your applications. We began to add tunnels and multi-links, and to select traffic based on priority, not just on routing.
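
As a toy illustration of that kind of decision (not any vendor's actual algorithm; the link names, loss and delay figures below are made up), a path selector might score each link by its measured loss and latency and prefer the healthiest one:

```
def pick_link(links):
    """Pick the link with the best (lowest) health score.

    `links` maps a link name to (loss_pct, delay_ms) measurements.
    Loss is weighted heavily: a lossy link hurts applications more
    than a slightly slower but clean one.
    """
    def score(stats):
        loss_pct, delay_ms = stats
        return loss_pct * 100 + delay_ms

    return min(links, key=lambda name: score(links[name]))


# Example: a slightly slower but loss-free internet link wins here.
print(pick_link({"mpls": (0.5, 20), "internet": (0.0, 35)}))  # -> "internet"
```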

The most common way to tunnel was IPSec over GRE. You have the GRE tunnel that enables you to send any protocol end-to-end by using IPSec for the encryption. All this functionality was added to achieve and create dynamic tunnels over IPSec and to optimize the IPSec tunnels.

This was a move in the right direction, but it was still complex. It was not centrally managed and was error-prone with complex configurations that were unable to manage large deployments. IPSec had far too many limitations, so in the mid-2000s early SD-WAN vendors started cropping up. Some of these vendors enabled the enterprises to aggregate many separate digital subscriber lines (DSL) links into one faster logical link. At the same time, others added time stamps and/or sequence numbers to packets to improve the network performance and security when running over best effort (internet) links.

International WAN connectivity was a popular focus since the cost delta between the Internet and private multiprotocol label switching (MPLS) was 10x+ different. Primarily, enterprises wanted the network performance and security of MPLS without having to pay a premium for it.

Most of these solutions sat in-front or behind a traditional router from companies like Cisco. Evidently, just like WAN Optimization vendors, these were additional boxes/solutions that enterprises added to their networks.

### The next era of SD-WAN, circa 2012

It was somewhere in 2012 that we started to see the big rush to the SD-WAN market. Vendors such as Velocloud, Viptela and a lot of the other big players in the SD-WAN market kicked off with the objective of claiming some of the early SD-WAN success and going after the edge router market with a full feature router replacement and management simplicity.

Open source networking software and other open source components for managing the traffic enabled these early SD-WAN vendors to lay a foundation where a lot of the code base was open source. They would then “glue” it together and add their own additional features.

Around this time, Intel was driving data plane development kit (DPDK) and advanced encryption standard (AES) instruction set, which enabled that software to run on commodity hardware. The SD-WAN solutions were delivered as closed solutions where each solution used its own set of features. The features and technologies chosen for each vendor were different and not compatible with each other.

### The recent era of SD-WAN, circa 2017

A tipping point in 2017 was the gold rush for SD-WAN deployments. Everyone wanted to have SD-WAN as fast as possible.

The SD-WAN market has taken off, as seen by 50 vendors with competing, proprietary solutions and market growth curves with a CAGR of 100%. There is a trend of big vendors like Cisco, VMware and Oracle acquiring startups to compete in this new market.

As a result, Cisco, which is the traditional enterprise market leader in WAN routing solutions, felt threatened since its IWAN solution, which had been around since 2008, was too complex (a 900-page configuration and operations manual). Besides, its simple solution based on the Meraki acquisition was not feature-rich enough for the large enterprises.

With their acquisition of Viptela, Cisco currently has 13% of the market share, and for the first time in decades, it is not the market leader. The large cloud vendors, such as Google and Facebook are utilizing their own technology for routing within their very large private networks.

At some point between 2012 and 2017, we witnessed the service providers adopting SD-WAN. This introduced the onset and movement of managed SD-WAN services. As a result, the service providers wanted to have SD-WAN on the menu for their customers. But there were many limitations in the SD-WAN market, as it was offered as a closed-box solution, giving the service providers limited control.

At this point surfaced an expectation of change, as service providers and enterprises looked for more control. Customers can get better functionality from a multi-vendor approach than from a single vendor.

### Don’t forget DIY SD-WAN

Up to 60% of service providers and enterprises within the USA are now looking at DIY SD-WAN. A DIY SD-WAN solution is not one where the many pieces of open source are taken and cast into something. The utmost focus is on a solution that can be self-managed but is bought from a vendor.

Today, the majority of the market is looking for managed solutions and the upper section that has the expertise wants to be equipped with more control options.

### SD-WAN vendors attempting everything

There is a trend that some vendors try to do everything with SD-WAN. As a result, whether you are an enterprise or a service provider, you are locked into a solution that is dictated by the SD-WAN vendor.

The SD-WAN vendors have made the supplier choice or developed what they think is relevant. Evidently, some vendors are using stacks and software development kits (SDKs) that they purchased, for example, for deep packet inspection (DPI).

Ultimately, you are locked into a specific solution that the vendor has chosen for you. If you are a service provider, you might disapprove of this limitation and if you are an enterprise with specific expertise, you might want to zoom in for more control.

### All-in-one security vendors

Many SD-WAN vendors promote themselves as security companies. But would you prefer to buy a security solution from an SD-WAN vendor or from an experienced vendor, such as Checkpoint?

Both enterprises and service providers want to have a choice, but with an integrated black box security solution, you don't have a choice. The more you integrate and throw into the same box, the stronger the vendor lock-in is and the weaker the flexibility.

Essentially, with this approach, you are going for the lowest common denominator instead of the highest. Ideally, the technology of the services that you deploy on your network requires expertise. One vendor cannot be an expert in everything.

An open architecture leaves a place for experts in different areas to join together and add their own specialist functionality.

### Encrypted traffic

As a matter of fact, what is not encrypted today will be encrypted tomorrow. The vendor of the application can perform intelligent things that the SD-WAN vendor cannot because they control both sides. Hence, if you can put something inside the SD-WAN edge device, they can make smart decisions even if the traffic is encrypted.

But in the case of traditional SD-WANs, there needs to be cooperation with a content provider. However, with an open architecture, you can integrate anything, and nothing prevents the integration. A lot of traffic is encrypted and it's harder to manage encrypted traffic. However, an open architecture would allow the content providers to manage the traffic more effectively.

### 2019 and beyond: what is an open architecture?

Cloud providers and enterprises have discovered that 90% of the user experience and security problems arise due to the network: between where the cloud provider resides and where the end-user consumes the application.

Therefore, both cloud providers and large enterprises with digital strategies are focusing on building their solutions based on open source stacks. Having a viable open source SD-WAN solution is the next step in the SD-WAN evolution, where it moves to involve the community in the solution. This is similar to what happens with containers and tools.

Now, since we’re in 2019, are we going to witness a new era of SD-WAN? Are we moving to the open architecture with an open source SD-WAN solution? An open architecture should be the core of the SD-WAN infrastructure, where additional technologies are integrated inside the SD-WAN solution and not only complementary VNFs. There is an interface and native APIs that allow you to integrate logic into the router. This way, the router will be able to intercept and act according to the traffic.

So, if I’m a service provider and have my own application, I would want to write logic that would be able to communicate with my application. Without an open architecture, the service providers can’t really offer differentiation and change the way SD-WAN makes decisions and interacts with the traffic of their applications.

There is a list of various technologies that you need to be an expert in to be able to integrate. And each one of these technologies can be a company, for example, DPI, VoIP optimization, and network monitoring to name a few. An open architecture will allow you to pick and choose these various elements as per your requirements.

Networking is going through a lot of changes and it will continue to evolve with the passage of time. As a result, you wouldn’t want something that forces you to break out a forklift each time new technologies are introduced. Primarily, open architecture allows you to replace the components of the system and add code or elements that handle specific traffic and applications.

### Open source

Open source gives you more flexibility to control your own destiny. It offers the ability to select your own services that you want to be applied to your system. It provides security in the sense that if something happens to the vendor or there is a vulnerability in the system, you know that you are backed by the community that can fix such misadventures.

From the perspective of the business model, it makes a more flexible and cost-effective system. Besides, with open source, the total cost of ownership will also be lower.

**This article is published as part of the IDG Contributor Network. [Want to Join?][7]**

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3390151/open-architecture-and-open-source-the-new-wave-for-sd-wan.html#tk.rss_all

作者:[Matt Conran][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Matt-Conran/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/03/6554314981_7f95641814_o-100714680-large.jpg
[2]: https://www.networkworld.com/article/3338143/the-role-of-open-source-in-networking.html
[3]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb
[4]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb
[5]: https://www.networkworld.com/newsletters/signup.html#nww-fsb
[6]: https://flexiwan.com/sd-wan-open-source/
[7]: /contributor-network/signup.html
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world
@ -0,0 +1,97 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco: DNSpionage attack adds new tools, morphs tactics)
[#]: via: (https://www.networkworld.com/article/3390666/cisco-dnspionage-attack-adds-new-tools-morphs-tactics.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco: DNSpionage attack adds new tools, morphs tactics
======
Cisco's Talos security group says DNSpionage tools have been upgraded to be more stealthy
![Calvin Dexter / Getty Images][1]

The group behind the Domain Name System attacks known as DNSpionage has upped its dark actions with new tools and malware to focus its attacks and better hide its activities.

Cisco Talos security researchers, who discovered [DNSpionage][2] in November, this week warned of new exploits and capabilities of the nefarious campaign.

**More about DNS:**

* [DNS in the cloud: Why and why not][3]
* [DNS over HTTPS seeks to make internet use more private][4]
* [How to protect your infrastructure from DNS cache poisoning][5]
* [ICANN housecleaning revokes old DNS security key][6]

“The threat actor's ongoing development of DNSpionage malware shows that the attacker continues to find new ways to avoid detection. DNS tunneling is a popular method of exfiltration for some actors and recent examples of DNSpionage show that we must ensure DNS is monitored as closely as an organization's normal proxy or weblogs,” [Talos wrote][7]. “DNS is essentially the phonebook of the internet, and when it is tampered with, it becomes difficult for anyone to discern whether what they are seeing online is legitimate.”
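
As a rough illustration of what that kind of DNS monitoring can look for (a heuristic sketch only, with made-up thresholds, not a Talos tool), tunneled payloads tend to show up as unusually long, high-entropy labels in query names:

```
import math
from collections import Counter


def label_entropy(label: str) -> float:
    # Shannon entropy in bits per character; encoded payloads look random.
    counts = Counter(label)
    total = len(label)
    return -sum(n / total * math.log2(n / total) for n in counts.values())


def looks_like_tunneling(qname: str) -> bool:
    # Heuristic: very long, or long and random, leftmost labels are suspicious.
    first = qname.split(".")[0]
    return len(first) > 40 or (len(first) > 20 and label_entropy(first) > 3.5)


# Example: an encoded blob stands out against an ordinary hostname.
print(looks_like_tunneling("www.example.com"))                             # False
print(looks_like_tunneling("4fb2c9a1e7d05b8f33acde0912ff7b6a.evil.test"))  # True
```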

In Talos’ initial report, researchers said a DNSpionage campaign targeted various businesses in the Middle East as well as United Arab Emirates government domains. It also utilized two malicious websites containing job postings that were used to compromise targets via crafted Microsoft Office documents with embedded macros. The malware supported HTTP and DNS communication with the attackers.

In a separate DNSpionage campaign, the attackers used the same IP address to redirect the DNS of legitimate .gov and private company domains. During each DNS compromise, the actor carefully generated “Let's Encrypt” certificates for the redirected domains. These certificates provide X.509 certificates for [Transport Layer Security (TLS)][8] free of charge to the user, Talos said.

This week Cisco said DNSpionage actors have created a new remote administrative tool that supports HTTP and DNS communication with the attackers' command and control (C2).

“In our previous post concerning DNSpionage, we showed that the malware author used malicious macros embedded in a Microsoft Word document. In the new sample from Lebanon identified at the end of February, the attacker used an Excel document with a similar macro.”

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][9] ]**

Talos wrote: “The malware supports HTTP and DNS communication to the C2 server. The HTTP communication is hidden in the comments in the HTML code. This time, however, the C2 server mimics the GitHub platform instead of Wikipedia. While the DNS communication follows the same method we described in our previous article, the developer added some new features in this latest version and, this time, the actor removed the debug mode.”

Talos added that the domain used for the C2 campaign is “bizarre.”

“The previous version of DNSpionage attempted to use legitimate-looking domains in an attempt to remain undetected. However, this newer version uses the domain ‘coldfart[.]com,’ which would be easier to spot than other APT campaigns which generally try to blend in with traffic more suitable to enterprise environments. The domain was also hosted in the U.S., which is unusual for any espionage-style attack.”

Talos researchers said they discovered that DNSpionage added a reconnaissance phase that ensures the payload is being dropped on specific targets rather than indiscriminately downloaded on every machine.

This level of attack also returns information about the workstation environment, including platform-specific information, the name of the domain and the local computer, and information concerning the operating system, Talos wrote. This information is key to helping the malware select only the intended victims and attempt to avoid researchers or sandboxes. Again, it shows the actor's improved abilities, as they now fingerprint the victim.

This new tactic indicates an improved level of sophistication and is likely in response to the significant amount of public interest in the campaign.

Talos noted that there have been several other public reports of DNSpionage attacks, and in January, the U.S. Department of Homeland Security issued an [alert][10] warning users about this threat activity.

“In addition to increased reports of threat activity, we have also discovered new evidence that the threat actors behind the DNSpionage campaign continue to change their tactics, likely in an attempt to improve the efficacy of their operations,” Talos stated.

In April, Cisco Talos identified an undocumented malware developed in .NET. On the analyzed samples, the malware author left two different internal names in plain text: "DropperBackdoor" and "Karkoff."

“The malware is lightweight compared to other malware due to its small size and allows remote code execution from the C2 server. There is no obfuscation and the code can be easily disassembled,” Talos wrote.

The Karkoff malware searches for two specific anti-virus platforms, Avira and Avast, and will work around them.

“The discovery of Karkoff also shows the actor is pivoting and is increasingly attempting to avoid detection while remaining very focused on the Middle Eastern region,” Talos wrote.

Talos distinguished DNSpionage from another DNS attack method, “[Sea Turtle][11]”, which it detailed this month. Sea Turtle involves state-sponsored attackers that are abusing DNS to target organizations and harvest credentials to gain access to sensitive networks and systems in a way that victims are unable to detect. This displays unique knowledge about how to manipulate DNS, Talos stated.

By obtaining control of victims’ DNS, attackers can change or falsify any data victims receive from the Internet, illicitly modify DNS name records to point users to actor-controlled servers, and users visiting those sites would never know, Talos reported.

“While this incident is limited to targeting primarily national security organizations in the Middle East and North Africa, and we do not want to overstate the consequences of this specific campaign, we are concerned that the success of this operation will lead to actors more broadly attacking the global DNS system,” Talos stated about Sea Turtle.

Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3390666/cisco-dnspionage-attack-adds-new-tools-morphs-tactics.html#tk.rss_all

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/cyber_attack_threat_danger_breach_hack_security_by_calvindexter_gettyimages-860363294_2400x800-100788395-large.jpg
[2]: https://blog.talosintelligence.com/2018/11/dnspionage-campaign-targets-middle-east.html
[3]: https://www.networkworld.com/article/3273891/hybrid-cloud/dns-in-the-cloud-why-and-why-not.html
[4]: https://www.networkworld.com/article/3322023/internet/dns-over-https-seeks-to-make-internet-use-more-private.html
[5]: https://www.networkworld.com/article/3298160/internet/how-to-protect-your-infrastructure-from-dns-cache-poisoning.html
[6]: https://www.networkworld.com/article/3331606/security/icann-housecleaning-revokes-old-dns-security-key.html
[7]: https://blog.talosintelligence.com/2019/04/dnspionage-brings-out-karkoff.html
[8]: https://www.networkworld.com/article/2303073/lan-wan-what-is-transport-layer-security-protocol.html
[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[10]: https://www.us-cert.gov/ncas/alerts/AA19-024A
[11]: https://www.networkworld.com/article/3389747/cisco-talos-details-exceptionally-dangerous-dns-hijacking-attack.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How data storage will shift to blockchain)
|
||||
[#]: via: (https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html#tk.rss_all)
|
||||
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)
|
||||
|
||||
How data storage will shift to blockchain
|
||||
======
|
||||
Move over cloud and traditional in-house enterprise data center storage, distributed storage based on blockchain may be arriving imminently.
|
||||
![Cybrain / Getty Images][1]
|
||||
|
||||
If you thought cloud storage was digging in its heels to become the go-to method for storing data, and at the same time grabbing share from own-server, in-house storage, you may be interested to hear that some think both are on the way out. Instead organizations will use blockchain-based storage.
|
||||
|
||||
Decentralized blockchain-based file storage will be more secure, will make it harder to lose data, and will be cheaper than anything seen before, say organizations actively promoting the slant on encrypted, distributed technology.
|
||||
|
||||
**[ Read also:[Why blockchain (might be) coming to an IoT implementation near you][2] ]**
|
||||
|
||||
### Storing transactional data in a blockchain
|
||||
|
||||
China company [FileStorm][3], which describes itself in marketing materials as the first [Interplanetary File Storage][4] (IPFS) platform on blockchain, says the key to making it all work is to only store the transactional data in blockchain. The actual data files, such as large video files, are distributed in IPFS.
|
||||
|
||||
IPFS is a distributed, peer-to-peer file storage protocol. File parts come from multiple computers all at the same time, supposedly making the storage hardy. FileStorm adds blockchain on top of it for a form of transactional indexing.
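
To make that concrete, here is a minimal sketch of content addressing, the idea underlying IPFS (the 256 KiB chunk size matches IPFS's default, but the code is illustrative, not the IPFS implementation): a file is split into chunks and each chunk is named by its hash, so any node holding a matching chunk can serve it.

```
import hashlib

CHUNK_SIZE = 256 * 1024  # IPFS's default chunker splits files into 256 KiB pieces


def chunk_ids(data: bytes) -> list:
    """Split a blob into fixed-size chunks and return each chunk's
    SHA-256 digest, which serves as its content address."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]


# Identical content always yields identical addresses, so any peer
# holding a chunk with a matching hash can supply that part of the file.
print(chunk_ids(b"x" * (CHUNK_SIZE + 1)))  # prints two chunk IDs
```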

“Blockchain is designed to store transactions forever, and the data can never be altered, thus a trustworthy system is created,” says Raymond Fu, founder of FileStorm and chief product officer of MOAC, the underlying blockchain system used, in a video on the FileStorm website.

“The blocks are used to store only small transactional data,” he says. You can’t store large files on it; those are distributed. Decentralized data storage platforms are needed to complement the decentralized blockchain, he says.

YottaChain, another blockchain storage start-up project, is coming at the whole thing from a slightly different angle. It claims its non-IPFS system is more secure partly because it performs deduplication after encryption.
“Data is 10,000 times more secure than [traditional] centralized storage,” it says on its [website][5]. Deduplication eliminates duplicated or redundant data.
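
Deduplicating after encryption only works if identical plaintexts produce identical ciphertexts, which ordinary randomized encryption deliberately prevents. A common answer is convergent encryption, sketched below as an illustration of the general technique (it assumes the Python `cryptography` package's deterministic AES-SIV mode; this is not YottaChain's actual code):

```
import hashlib

from cryptography.hazmat.primitives.ciphers.aead import AESSIV

store = {}  # stand-in for a distributed storage layer


def put(data: bytes) -> str:
    # Convergent encryption: derive the key from the plaintext itself,
    # so identical files encrypt to identical ciphertexts.
    key = hashlib.sha512(data).digest()  # AES-SIV accepts a 64-byte key
    ciphertext = AESSIV(key).encrypt(data, None)  # deterministic, no nonce
    block_id = hashlib.sha256(ciphertext).hexdigest()
    store.setdefault(block_id, ciphertext)  # duplicates collapse to one copy
    return block_id


# Two uploads of the same file occupy a single stored block.
a, b = put(b"same file"), put(b"same file")
assert a == b and len(store) == 1
```

The trade-off, worth keeping in mind when weighing such claims, is that convergent encryption necessarily reveals when two users hold the same file.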

### Disrupting data storage

“Blockchain will disrupt data storage,” [says BlockApps separately][6]. The blockchain backend platform provider says advantages to this new generation of storage include that decentralizing data provides more security and privacy. That's due in part because it's harder to hack than traditional centralized storage. That the files are spread piecemeal among nodes, conceivably all over the world, makes it impossible for even the participating node to view the contents of the complete file, it says.

Sharding, which is the term for the breaking apart and node-spreading of the actual data, is secured through keys. Markets can award token coins for mining, and coins can be spent to gain storage. Excess storage can even be sold. And cryptocurrencies have been started to “incentivize usage and to create a market for buying and selling decentralized storage,” BlockApps explains.

The final parts of this new storage mix are that lost files are minimized because data can be duplicated simply — the data sets, for example, can be stored multiple times for error correction — and costs are reduced due to efficiencies.

Square Tech (Shenzhen) Co., which makes blockchain file storage nodes, says in its marketing materials that it intends to build service centers globally to monitor its nodes in real time. Interestingly, another area the company has gotten involved in is the internet of things (IoT), and [it says][7] it wants “to unite the technical resources, capital, and human resources of the IoT industry and blockchain.” Perhaps we end up with a form of the internet of storage things?

“The entire cloud computing industry will be disrupted by blockchain technology in just a few short years,” says BlockApps. Dropbox and Amazon “may even become overpriced and obsolete if they do not find ways to integrate with the advances.”

Join the Network World communities on [Facebook][8] and [LinkedIn][9] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3390722/how-data-storage-will-shift-to-blockchain.html#tk.rss_all

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/chains_binary_data_blockchain_security_by_cybrain_gettyimages-926677890_2400x1600-100788435-large.jpg
[2]: https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html
[3]: http://filestorm.net/
[4]: https://ipfs.io/
[5]: https://www.yottachain.io/
[6]: https://blockapps.net/blockchain-disrupt-data-storage/
[7]: http://www.sikuaikeji.com/
[8]: https://www.facebook.com/NetworkWorld/
[9]: https://www.linkedin.com/company/network-world
@ -0,0 +1,69 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (IoT roundup: VMware, Nokia beef up their IoT)
[#]: via: (https://www.networkworld.com/article/3390682/iot-roundup-vmware-nokia-beef-up-their-iot.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

IoT roundup: VMware, Nokia beef up their IoT
======
Everyone wants in on the ground floor of the internet of things, and companies including Nokia, VMware and Silicon Labs are sharpening their offerings in anticipation of further growth.
![Getty Images][1]

When attempting to understand the world of IoT, it’s easy to get sidetracked by all the fascinating use cases: Automated oil and gas platforms! Connected pet feeders! Internet-enabled toilets! (Is “the Internet of Toilets” a thing yet?) But the most important IoT trend to follow may be the way that major tech vendors are vying to make large portions of the market their own.

VMware’s play for a significant chunk of the IoT market is called Pulse IoT Center, and the company released version 2.0 of it this week. It follows the pattern set by other big companies getting into IoT: Leveraging their existing technological strengths and applying them to the messier, more heterodox networking environment that IoT represents.

Unsurprisingly, given that it’s VMware we’re talking about, there’s now a SaaS option, and the company was also eager to talk up that Pulse IoT Center 2.0 has simplified device-onboarding and centralized management features.

**More about edge networking**

* [How edge networking and IoT will reshape data centers][2]
* [Edge computing best practices][3]
* [How edge computing can help secure the IoT][4]

That might sound familiar, and for good reason – companies with any kind of a background in network management, from HPE/Aruba to Amazon, have been pushing to promote their system as the best framework for managing a complicated and often decentralized web of IoT devices from a single platform. By rolling features like software updates, onboarding and security into a single-pane-of-glass management console, those companies are hoping to be the organizational base for customers trying to implement IoT.

Whether they’re successful or not remains to be seen. While major IT companies have been trying to capture market share by competing across multiple verticals, the operational orientation of the IoT also means that non-traditional tech vendors with expertise in particular fields (particularly industrial and automotive) are suddenly major competitors.

**Nokia spreads the IoT network wide**

As a giant carrier-equipment vendor, Nokia is an important company in the overall IoT landscape. While some types of enterprise-focused IoT are heavily localized, like connected factory floors or centrally managed office buildings, others are so geographically disparate that carrier networks are the only connectivity medium that makes sense.

The Finnish company earlier this month broadened its footprint in the IoT space, announcing that it had partnered with Nordic Telecom to create a wide-area network focused on enabling IoT and emergency services. The network, which Nokia is billing as the first mission-critical communications network, operates using LTE technology in the 410-430MHz band – a relatively low frequency, which allows for better propagation and a wide effective range.

The idea is to provide a high-throughput, low-latency network option to any user on the network, whether it’s an emergency services provider needing high-speed video communication or an energy or industrial company with a low-delay-tolerance application.

**Silicon Labs packs more onto IoT chips**

The heart of any IoT implementation remains the SoCs that make devices intelligent in the first place, and Silicon Labs announced that it's building more muscle into its IoT-focused product lineup.

The Austin-based chipmaker said that version 2 of its Wireless Gecko platform will pack more than double the wireless connectivity range of previous entries, which could seriously ease design requirements for companies planning out IoT deployments. The chipsets support Zigbee, Thread and Bluetooth mesh networking, and are designed for line-powered IoT devices, using Arm Cortex-M33 processors for relatively strong computing capacity and high energy efficiency.

Chipset advances aren’t the type of thing that will pay off immediately in terms of making IoT devices more capable, but improvements like these make designing IoT endpoints for particular uses that much easier, and new SoCs will begin to filter into line-of-business equipment over time.

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3390682/iot-roundup-vmware-nokia-beef-up-their-iot.html#tk.rss_all

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home5-100768494-large.jpg
[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -0,0 +1,91 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (No, drone delivery still isn’t ready for prime time)
[#]: via: (https://www.networkworld.com/article/3390677/drone-delivery-not-ready-for-prime-time.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

No, drone delivery still isn’t ready for prime time
======
Despite incremental progress and limited regulatory approval in the U.S. and Australia, drone delivery still isn’t a viable option in the vast majority of use cases.
![Sorry imKirk \(CC0\)][1]

April has been a big month for drone delivery. First, [Alphabet’s Wing Aviation drones got approval from Australia’s Civil Aviation Safety Authority (CASA)][2] for public deliveries in the country, and this week [Wing earned Air Carrier Certification from the U.S. Federal Aviation Administration][3]. These two regulatory wins got a lot of people very excited. Finally, the conventional wisdom exulted, drone delivery is actually becoming a reality.

Not so fast.

### Drone delivery still in pilot/testing mode

Despite some small-scale successes and the first signs of regulatory acceptance, drone delivery remains firmly in the carefully controlled pilot/test phase (and yes, I know drones don’t carry pilots).

**[ Also read: [Coffee drone delivery: Ideas this bad could kill the internet of things][4] ]**

For example, despite getting FAA approval to begin commercial deliveries, Wing is still working up to beginning delivery trials to test the technology and gather information in Virginia later this year.

But what about that public approval from CASA for deliveries outside Canberra? That’s [a real business][5] now, right?

Well, yes and no.

On the “yes” side, the Aussie approval reportedly came after 18 months of tests, 70,000 flights, and more than 3,000 actual deliveries of products from local coffee shops and pharmacies. So, at least some people somewhere in the world are actually getting stuff dropped at their doors by drones.

In the “no” column, however, goes the fact that the approval covers only about 100 suburban homes, though more are planned to be added “in the coming weeks and months.” More importantly, the approval includes strict limits on when and where the drones can go. No crossing main roads, no nighttime deliveries, and the drones must keep their distance from people. And drone-eligible customers need a safety briefing!

### Safety briefings required for customers

That still sounds like a small-scale test, not a full-scale commercial roll-out. And while I think drone-safety classes are probably a good idea – and the testing period apparently passed without any injuries – even the perceived _need_ for them is not a great advertisement for rapid expansion of drone deliveries.

Ironically, though, a bigger issue than protecting people from the drones, perhaps, is protecting the drones from people. Instructions to stay 2 to 5 meters away from folks will help, but as I’ve previously addressed, these things are already seen as attractive nuisances and vandalism targets. Further raising the stakes, many local residents were said to be annoyed at the noise created by the drones. Now imagine those contraptions buzzing right by you all loaded down with steaming hot coffee or delicious ice cream.

And even with all those caveats, no one is talking about the key factors in making drone deliveries a viable business: How much will those deliveries cost and who will pay? For a while, the desire to explore the opportunity will drive investment, but that won’t last forever. If drone deliveries aren’t cost effective for businesses, they won’t spread very far.

From the customer perspective, most drone delivery tests are not yet charging for the service. If and when they start carrying fees as well as purchases, the novelty factor will likely entice many shoppers to pony up to get their items airlifted directly to their homes. But that also won’t last. Drone delivery will have to demonstrate that it’s better — faster, cheaper, or more reliable — than the existing alternatives to find its niche.

### Drone deliveries are fast, commercial roll-out will be slow

Long term, I have no doubt that drone delivery will eventually become an accepted part of the transportation infrastructure. I don’t necessarily buy into Wing’s prediction of an AU $40 million drone delivery market in Australia coming to pass anytime soon, but real commercial operations seem inevitable.

It’s just going to be more limited than many proponents claim, and it’s likely to take a lot longer than expected to become mainstream. For example, despite ongoing testing, [Amazon has already missed Jeff Bezos’ 2018 deadline to begin commercial drone deliveries][6], and we haven’t heard much about [Walmart’s drone delivery plans][7] lately. And while ongoing tests by a number of companies, in locations ranging from Iceland and Finland to the U.K. and the U.S., have created a lot of interest, they have not yet translated into widespread availability.

Apart from the issue of how much consumers really want their stuff delivered by an armada of drones (a [2016 U.S. Post Office study][8] found that 44% of respondents liked the idea, while 34% didn’t — and 37% worried that drone deliveries might not be safe), a lot has to happen before that vision becomes reality.

At a minimum, successful implementations of full-scale commercial drone delivery will require better planning, better-thought-out business cases, more rugged and efficient drone technology, and significant advances in flight control and autonomous flight. Like drone deliveries themselves, all that stuff is coming; it just hasn’t arrived yet.

**More about drones and the internet of things:**

  * [Drone defense -- powered by IoT -- is now a thing][9]
  * [Ideas this bad could kill the Internet of Things][4]
  * [10 reasons Amazon's drone delivery plan still won't fly][10]
  * [Amazon’s successful drone delivery test doesn’t really prove anything][11]

Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3390677/drone-delivery-not-ready-for-prime-time.html#tk.rss_all

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/07/drone_mountains_by_sorry_imkirk_cc0_via_unsplash_1200x800-100763763-large.jpg
[2]: https://medium.com/wing-aviation/wing-launches-commercial-air-delivery-service-in-canberra-5da134312474
[3]: https://medium.com/wing-aviation/wing-becomes-first-certified-air-carrier-for-drones-in-the-us-43401883f20b
[4]: https://www.networkworld.com/article/3301277/ideas-this-bad-could-kill-the-internet-of-things.html
[5]: https://wing.com/australia/canberra
[6]: https://www.businessinsider.com/jeff-bezos-predicted-amazon-would-be-making-drone-deliveries-by-2018-2018-12?r=US&IR=T
[7]: https://www.networkworld.com/article/2999828/walmart-delivery-drone-plans.html
[8]: https://www.uspsoig.gov/sites/default/files/document-library-files/2016/RARC_WP-17-001.pdf
[9]: https://www.networkworld.com/article/3309413/drone-defense-powered-by-iot-is-now-a-thing.html
[10]: https://www.networkworld.com/article/2900317/10-reasons-amazons-drone-delivery-plan-still-wont-fly.html
[11]: https://www.networkworld.com/article/3185478/amazons-successful-drone-delivery-test-doesnt-really-prove-anything.html
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world
@ -0,0 +1,52 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Dell EMC and Cisco renew converged infrastructure alliance)
[#]: via: (https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Dell EMC and Cisco renew converged infrastructure alliance
======
Dell EMC and Cisco renewed their agreement to collaborate on converged infrastructure (CI) products for a few more years even though the momentum is elsewhere.
![Dell EMC][1]

Dell EMC and Cisco have renewed a collaboration on converged infrastructure (CI) products that has run for more than a decade, even as the momentum shifts elsewhere. The news was announced via a [blog post][2] by Pete Manca, senior vice president for solutions engineering at Dell EMC.

The deal is centered on Dell EMC’s VxBlock product line, which originally started out in 2009 as a joint venture between EMC and Cisco called VCE (Virtual Computing Environment). EMC bought out Cisco’s stake in the venture before Dell bought EMC.

The devices offered UCS servers and networking from Cisco, EMC storage, and VMware virtualization software in pre-configured, integrated bundles. VCE was retired in favor of new brands, VxBlock, VxRail, and VxRack. The lineup has been pared down to one device, the VxBlock 1000.

**[ Read also: [How to plan a software-defined data-center network][3] ]**

“The newly inked agreement entails continued company alignment across multiple organizations: executive, product development, marketing, and sales,” Manca wrote in the blog post. “This means we’ll continue to share product roadmaps and collaborate on strategic development activities, with Cisco investing in a range of Dell EMC sales, marketing and training initiatives to support VxBlock 1000.”

Dell EMC cites IDC research showing that it holds a 48% market share in converged systems, nearly 1.5 times that of any other vendor. But IDC's April edition of the Converged Systems Tracker said the CI category is on the downswing. CI sales fell 6.4% year over year, while the market for hyperconverged infrastructure (HCI) grew 57.2% year over year.

For the unfamiliar, the primary difference between converged and hyperconverged infrastructure is that CI relies on hardware building blocks, while HCI is software-defined. HCI is considered more flexible and scalable than CI and operates more like a cloud system, with resources spun up and down as needed.

Despite this, Dell is committed to CI systems. Just last month it announced an update and expansion of the VxBlock 1000, including higher scalability, a broader choice of components, and the option to add new technologies. It featured updated VMware vRealize and vSphere support, the option to consolidate high-value, mission-critical workloads with new storage and data-protection options, and support for Cisco UCS fabric and servers.

For customers who prefer to build their own infrastructure solutions, Dell EMC introduced Ready Stack, a portfolio of validated designs with sizing, design, and deployment resources that offer VMware-based IaaS, vSphere on Dell EMC PowerEdge servers and Dell EMC Unity storage, and Microsoft Hyper-V on Dell EMC PowerEdge servers and Dell EMC Unity storage.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391071/dell-emc-and-cisco-renew-converged-infrastructure-alliance.html#tk.rss_all

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/dell-emc-vxblock-1000-100794721-large.jpg
[2]: https://blog.dellemc.com/en-us/dell-technologies-cisco-reaffirm-joint-commitment-converged-infrastructure/
[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
87
sources/talk/20190429 Cisco goes all in on WiFi 6.md
Normal file
@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco goes all in on WiFi 6)
[#]: via: (https://www.networkworld.com/article/3391919/cisco-goes-all-in-on-wifi-6.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco goes all in on WiFi 6
======
Cisco rolls out Catalyst and Meraki WiFi 6-based access points, Catalyst 9600 switch
![undefined / Getty Images][1]

Cisco has taken the wraps off a family of WiFi 6 access points, roaming technology and developer-community support, all intended to make wireless a solid enterprise equal of the wired world.

“‘Best-effort’ wireless for enterprise customers doesn’t cut it any more. There’s been a change in customer expectations that there will be an uninterrupted unplugged experience,” said Scott Harrell, senior vice president and general manager of enterprise networking at Cisco. “It is now a wireless-first world.”

**More about 802.11ax (Wi-Fi 6)**

  * [Why 802.11ax is the next big thing in wireless][2]
  * [FAQ: 802.11ax Wi-Fi][3]
  * [Wi-Fi 6 (802.11ax) is coming to a router near you][4]
  * [Wi-Fi 6 with OFDMA opens a world of new wireless possibilities][5]
  * [802.11ax preview: Access points and routers that support Wi-Fi 6 are on tap][6]

Bringing a wireless-first enterprise world together is one of the drivers behind a new family of WiFi 6-based access points (APs) for Cisco’s Catalyst and Meraki portfolios. WiFi 6 (802.11ax) is designed for high-density public or private environments. But it also will be beneficial in internet of things (IoT) deployments, and in offices that use bandwidth-hogging applications like videoconferencing.

The Cisco Catalyst 9100 family and Meraki [MR 45/55][7] WiFi 6 access points are built on Cisco silicon and communicate via pre-standard 802.11ax protocols. The silicon in these access points now acts as a rich sensor, providing IT with insights about what is going on in the wireless network in real time, and that enables faster reactions to problems and security concerns, Harrell said.

Aside from WiFi 6, the boxes include support for visibility and communications with Zigbee, BLE and Thread protocols. The Catalyst APs support uplink speeds of 2.5 Gbps, in addition to 100 Mbps and 1 Gbps. In an industry first, all speeds are supported on Category 5e cabling, as well as on 10GBASE-T (IEEE 802.3bz) cabling, Cisco said.

Wireless traffic aggregates onto wired networks, so the wired network must also evolve. Technology like multi-gigabit Ethernet must be driven into the access layer, which in turn drives higher bandwidth needs at the aggregation and core layers, [Harrell said][8].

Handling this influx of wireless traffic was part of the reason Cisco also upgraded its iconic Catalyst 6000 with the [Catalyst 9600 this week][9]. The 9600 brings with it support for Cat 6000 features such as MPLS, virtual switching and IPv6, while adding or bolstering support for wireless networks as well as Intent-based networking (IBN) and security segmentation. The 9600 helps fill out the company’s revamped lineup, which includes the 9200 family of access switches, the 9500 aggregation switch and the 9800 wireless controller.

“WiFi doesn’t exist in a vacuum – how it connects to the enterprise and the data center or the Internet is key, and in Cisco’s case that key is now the 9600, which has been built to handle the increased traffic,” said Lee Doyle, principal analyst with Doyle Research.

The new 9600 ties in with the recently [released Catalyst 9800][10], which features 40Gbps to 100Gbps performance, depending on the model, hot-patching to simplify updates and eliminate update-related downtime, Encrypted Traffic Analytics (ETA), policy-based micro- and macro-segmentation and Trustworthy solutions to detect malware on wired or wireless connected devices, Cisco said.

All Catalyst 9000 family members support other Cisco products such as [DNA Center][11], which controls automation capabilities, assurance settings, fabric provisioning and policy-based segmentation for enterprise wired and wireless networks.

The new APs are pre-standard, but other vendors, including Aruba and NetGear, are also selling pre-standard 802.11ax devices. Cisco getting into the market solidifies the validity of this strategy, said Brandon Butler, a senior research analyst with IDC.

Many experts [expect the standard][12] to be ratified late this year.

“We expect to see volume shipments of WiFi 6 products by early next year and it being the de facto WiFi standard by 2022.”

On top of the APs and 9600 switch, Cisco extended its software development community – [DevNet][13] – to offer WiFi 6 learning labs, sandboxes and developer resources.

The Cisco Catalyst and Meraki access platforms are open and programmable all the way down to the chipset level, allowing applications to take advantage of network programmability, Cisco said.

Cisco also said it had added more vendors – now including Apple, Samsung, Boingo, Presidio and Intel – to its ongoing [OpenRoaming][14] project. OpenRoaming, which is in beta, promises to let users move seamlessly between wireless networks and LTE without interruption.

Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391919/cisco-goes-all-in-on-wifi-6.html#tk.rss_all

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/cisco_catalyst_wifi_coffee-cup_coffee-beans_-100794990-large.jpg
[2]: https://www.networkworld.com/article/3215907/mobile-wireless/why-80211ax-is-the-next-big-thing-in-wi-fi.html
[3]: https://www.networkworld.com/article/3048196/mobile-wireless/faq-802-11ax-wi-fi.html
[4]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html
[5]: https://www.networkworld.com/article/3332018/wi-fi/wi-fi-6-with-ofdma-opens-a-world-of-new-wireless-possibilities.html
[6]: https://www.networkworld.com/article/3309439/mobile-wireless/80211ax-preview-access-points-and-routers-that-support-the-wi-fi-6-protocol-on-tap.html
[7]: https://meraki.cisco.com/lib/pdf/meraki_datasheet_MR55.pdf
[8]: https://blogs.cisco.com/news/unplugged-and-uninterrupted
[9]: https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html
[10]: https://www.networkworld.com/article/3321000/cisco-links-wireless-wired-worlds-with-new-catalyst-9000-switches.html
[11]: https://www.networkworld.com/article/3280988/cisco/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html
[12]: https://www.networkworld.com/article/3336263/is-jumping-ahead-to-wi-fi-6-the-right-move.html
[13]: https://developer.cisco.com/wireless/?utm_campaign=colaunch-wireless19&utm_source=pressrelease&utm_medium=ciscopress-wireless-main
[14]: https://www.cisco.com/c/en/us/solutions/enterprise-networks/802-11ax-solution/openroaming.html
[15]: https://www.facebook.com/NetworkWorld/
[16]: https://www.linkedin.com/company/network-world
@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to shop for enterprise firewalls)
[#]: via: (https://www.networkworld.com/article/3390686/how-to-shop-for-enterprise-firewalls.html#tk.rss_all)
[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/)

How to shop for enterprise firewalls
======
Performance, form factors, and automation capabilities are key considerations when choosing a next-generation firewall (NGFW).
Firewalls have been around for years, but the technology keeps evolving as the threat landscape changes. Here are some tips about what to look for in a next-generation firewall ([NGFW][1]) that will satisfy business needs today and into the future.

### Don't trust firewall performance stats

Understanding how an NGFW performs requires more than looking at a vendor’s specification or running a bit of traffic through it. Most [firewalls][2] will perform well when traffic loads are light. It’s important to see how a firewall responds at scale, particularly when [encryption][3] is turned on. Roughly 80% of traffic is encrypted today, and the ability to maintain performance levels with high volumes of encrypted traffic is critical.

**Learn more about network security**

  * [How to boost collaboration between network and security teams][4]
  * [What is microsegmentation? How getting granular improves network security][5]
  * [What to consider when deploying a next-generation firewall][1]
  * [How firewalls fit into enterprise security][2]

Also, be sure to turn on all major functions – including application and user identification, IPS, anti-malware, URL filtering and logging – during testing to see how a firewall will hold up in a production setting. Firewall vendors often tout a single performance number that's achieved with core features turned off. Data from [ZK Research][6] shows many IT pros learn this lesson the hard way: 58% of security professionals polled said they were forced to turn off features to maintain performance.

Before committing to a vendor, be sure to run tests with as many different types of traffic as possible and with various types of applications. Important metrics to look at include application throughput, connections per second, maximum sessions for both IPv4 and [IPv6][7], and SSL performance.
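
As a rough illustration of what such testing can look like from the command line, here is a minimal sketch using two common open-source tools, iperf3 (bulk throughput) and ab (HTTPS request rate). The target address, stream counts and request volumes are placeholder assumptions, and a real evaluation would add dedicated traffic generators and a far wider application mix:

```
#!/usr/bin/env bash
# Hypothetical smoke test of a firewall's data path; every value here is an
# example. 10.0.1.10 is assumed to run "iperf3 -s" and a TLS web server on
# the far side of the firewall under test.
TARGET=10.0.1.10

# Bulk application throughput: 8 parallel TCP streams for 60 seconds.
iperf3 -c "$TARGET" -P 8 -t 60

# Connection setup rate over TLS: 10,000 requests, 100 at a time.
ab -n 10000 -c 100 "https://$TARGET/"

# Re-run both with IPS, anti-malware, URL filtering and logging enabled on
# the firewall to see what the inspection features cost in throughput.
```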

### NGFW needs to fit into broader security platform

Is it better to have a best-of-breed strategy or go with a single vendor for security? The issue has been debated for years, but the fact is, neither approach works flawlessly. It’s important to understand that best-of-breed everywhere doesn’t ensure best-in-class security. In fact, the opposite is typically true; having too many vendors can lead to complexity that can't be managed, which puts a business at risk. A better approach is a security platform, which can be thought of as an open architecture that third-party products can be plugged into.

Any NGFW must be able to plug into a platform so it can "see" everything from IoT endpoints to cloud traffic to end-user devices. Also, once the NGFW has aggregated the data, it should be able to perform analytics to provide insights. This will enable the NGFW to take action and enforce policies across the network.

### Multiple form factors, consistent security features

Firewalls used to be relegated to corporate data centers. Today, networks have opened up, and customers need a consistent feature set at every point in the network. NGFW vendors should have the following form factors available to optimize price/performance:

  * Data center
  * Internet edge
  * Midsize branch office
  * Small branch office
  * Ruggedized for IoT environments
  * Cloud delivered
  * Virtual machines that can run in private and public clouds

Also, NGFW vendors should have a roadmap for a containerized form factor. This certainly isn’t a trivial task. Most vendors won’t have a [container][8]-ready product yet, but they should be able to talk about how they plan to address the problem.

### Single-pane-of-glass firewall management

Having a broad product line doesn’t matter if products need to be managed individually. This makes it hard to keep policies and rules up to date and leads to inconsistencies in features and functions. A firewall vendor must have a single management tool that provides end-to-end visibility and enables the administrator to make a change and push it out across the network at once. Visibility must extend everywhere, including the cloud, [IoT][9] edge, operational technology (OT) environments, and branch offices. A single dashboard is also the right place to implement and maintain software-based segmentation instead of having to configure each device.

### Firewall automation capabilities

The goal of [automation][10] is to help remove many of the manual steps that create "human latency" in the security process. Almost all vendors tout some automation capabilities as a way of saving on headcount, but automation goes well beyond that.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3390686/how-to-shop-for-enterprise-firewalls.html#tk.rss_all

作者:[Zeus Kerravala][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Zeus-Kerravala/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3236448/what-to-consider-when-deploying-a-next-generation-firewall.html
[2]: https://www.networkworld.com/article/3230457/what-is-a-firewall-perimeter-stateful-inspection-next-generation.html
[3]: https://www.networkworld.com/article/3098354/enterprise-encryption-adoption-up-but-the-devils-in-the-details.html
[4]: https://www.networkworld.com/article/3328218/how-to-boost-collaboration-between-network-and-security-teams.html
[5]: https://www.networkworld.com/article/3247672/what-is-microsegmentation-how-getting-granular-improves-network-security.html
[6]: https://zkresearch.com/
[7]: https://www.networkworld.com/article/3254575/what-is-ipv6-and-why-aren-t-we-there-yet.html
[8]: https://www.networkworld.com/article/3159735/containers-what-are-containers.html
[9]: https://www.networkworld.com/article/3207535/what-is-iot-how-the-internet-of-things-works.html
[10]: https://www.networkworld.com/article/3184389/automation-rolls-on-what-are-you-doing-about-it.html
@ -0,0 +1,86 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600)
[#]: via: (https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Venerable Cisco Catalyst 6000 switches ousted by new Catalyst 9600
======
Cisco introduced Catalyst 9600 switches that let customers automate, set policy, provide security and gain assurance across wired and wireless networks.
![Martyn Williams][1]

Few events in the tech industry are truly transformative, but Cisco’s replacement of its core Catalyst 6000 family could be one of those actions for customers and the company.

Introduced in 1999, [iterations of the Catalyst 6000][2] have nestled into the core of scores of enterprise networks, with the model 6500 becoming the company’s largest selling box ever.

**Learn about edge networking**

  * [How edge networking and IoT will reshape data centers][3]
  * [Edge computing best practices][4]
  * [How edge computing can help secure the IoT][5]

It goes without question that migrating these customers alone to the new switch – the Catalyst 9600, which the company introduced today – will be of monumental importance to Cisco as it looks to revamp and continue to dominate large campus-core deployments. The first [Catalyst 9000][6], introduced in June 2017, is already the fastest-ramping product line in Cisco’s history.

“There are at least tens of thousands of Cat 6000s running in campus cores all over the world,” said [Sachin Gupta][7], senior vice president for product management at Cisco. “It is the Swiss Army knife of switches in terms of features, and we have taken great care over two years to develop feature parity and an easy migration path for those users to the Cat 9000.”

Indeed, the 9600 brings with it Cat 6000 features such as support for MPLS, virtual switching and IPv6, while adding or bolstering support for newer items such as Intent-based networking (IBN), wireless networks and security segmentation. Strategically, the 9600 helps fill out the company’s revamped lineup, which includes the 9200 family of access switches, the [9500][8] aggregation switch and the [9800 wireless controller][9].

Some of the nitty-gritty details about the 9600:

  * It is a purpose-built 40 Gigabit and 100 Gigabit Ethernet line of modular switches targeted for the enterprise campus with a wired switching capacity of up to 25.6 Tbps, with up to 6.4 Tbps of bandwidth per slot.
  * The switch supports granular port densities that fit diverse campus needs, including nonblocking 40 Gigabit and 100 Gigabit Ethernet Quad Small Form-Factor Pluggable (QSFP+, QSFP28) and 1, 10, and 25 GE Small Form-Factor Pluggable Plus (SFP, SFP+, SFP28).
  * It can be configured to support up to 48 nonblocking 100 Gigabit Ethernet QSFP28 ports with the Cisco Catalyst 9600 Series Supervisor Engine 1; up to 96 nonblocking 40 Gigabit Ethernet QSFP+ ports with the Supervisor Engine 1; and up to 192 nonblocking 25 Gigabit/10 Gigabit Ethernet SFP28/SFP+ ports with the Supervisor Engine 1.
  * It supports advanced routing and infrastructure services (MPLS, Layer 2 and Layer 3 VPNs, Multicast VPN, and Network Address Translation).
  * It supports Cisco Software-Defined Access capabilities (such as a host-tracking database, cross-domain connectivity, and VPN Routing and Forwarding [VRF]-aware Locator/ID Separation Protocol) and network system virtualization with Cisco StackWise virtual technology.

The 9600 series runs Cisco’s IOS XE software, which now runs across all Catalyst 9000 family members. The software brings with it support for other key products such as Cisco’s [DNA Center][10], which controls automation capabilities, assurance settings, fabric provisioning and policy-based segmentation for enterprise networks. What that means is that with one user interface, DNA Center, customers can automate, set policy, provide security and gain assurance across the entire wired and wireless network fabric, Gupta said.

“The 9600 is a big deal for Cisco and customers as it brings together the campus core and lets users establish standard access and usage policies across their wired and wireless environments,” said Brandon Butler, a senior research analyst with IDC. “It was important that Cisco add a powerful switch to handle the increasing amounts of traffic wireless and cloud applications are bringing to the network.”

IOS XE brings with it automated device provisioning and a wide variety of automation features, including support for the network configuration protocol NETCONF and RESTCONF using YANG data models. The software offers near-real-time monitoring of the network, leading to quick detection and rectification of failures, Cisco says.
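
As a concrete taste of that programmability, the NETCONF interface can be exercised with nothing more than an SSH client. A minimal sketch, assuming NETCONF has been enabled on the switch (the `netconf-yang` configuration command on IOS XE) and that 192.0.2.1 is a placeholder management address; the device should answer with an XML `<hello>` message listing its capabilities and YANG models:

```
# Open a NETCONF session by hand; the address and credentials are examples.
# The switch replies with a <hello> enumerating its supported YANG models.
ssh -p 830 -s admin@192.0.2.1 netconf
```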

The software also supports hot patching, which provides fixes for critical bugs and security vulnerabilities between regular maintenance releases. This support lets customers add patches without having to wait for the next maintenance release, Cisco says.

As with the rest of the Catalyst family, the 9600 is available via subscription-based licensing. Cisco says the [base licensing package][11] includes Network Advantage licensing options that are tied to the hardware. The base licensing packages cover switching fundamentals, management automation, troubleshooting, and advanced switching features. These base licenses are perpetual.

An add-on licensing package includes the Cisco DNA Premier and Cisco DNA Advantage options. The Cisco DNA add-on licenses are available as a subscription.

IDC’s Butler noted that there are competitors such as Ruckus, Aruba and Extreme that offer switches capable of melding wired and wireless environments.

The new switch is built for the next two decades of networking, Gupta said. “If any of our competitors thought they could just go in and replace the Cat 6k, they were misguided.”

Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391580/venerable-cisco-catalyst-6000-switches-ousted-by-new-catalyst-9600.html#tk.rss_all

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.techhive.com/images/article/2017/02/170227-mwc-02759-100710709-large.jpg
[2]: https://www.networkworld.com/article/2289826/133715-The-illustrious-history-of-Cisco-s-Catalyst-LAN-switches.html
[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[4]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[5]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[6]: https://www.networkworld.com/article/3256264/cisco-ceo-we-are-still-only-on-the-front-end-of-a-new-version-of-the-network.html
[7]: https://blogs.cisco.com/enterprise/looking-forward-catalyst-9600-switch-and-9100-access-point-meraki
[8]: https://www.networkworld.com/article/3202105/cisco-brings-intent-based-networking-to-the-end-to-end-network.html
[9]: https://www.networkworld.com/article/3321000/cisco-links-wireless-wired-worlds-with-new-catalyst-9000-switches.html
[10]: https://www.networkworld.com/article/3280988/cisco/cisco-opens-dna-center-network-control-and-management-software-to-the-devops-masses.html
[11]: https://www.cisco.com/c/en/us/td/docs/switches/lan/catalyst9600/software/release/16-11/release_notes/ol-16-11-9600.html#id_67835
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world
@ -0,0 +1,73 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Measuring the edge: Finding success with edge deployments)
[#]: via: (https://www.networkworld.com/article/3391570/measuring-the-edge-finding-success-with-edge-deployments.html#tk.rss_all)
[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/)

Measuring the edge: Finding success with edge deployments
======
To make the most of edge computing investments, it’s important to first understand objectives and expectations.
![iStock][1]

Edge computing deployments are well underway as companies seek to better process the wealth of data being generated, for example, by Internet of Things (IoT) devices.

So, what are the results? And how can you ensure success with your own edge projects?

**Measurements of success**

The [use cases for edge computing deployments][2] vary widely, as do the business drivers and, ultimately, the benefits.

Whether they’re seeking improved network or application performance, real-time data analytics, a better customer experience, or other efficiencies, enterprises are accomplishing their goals. Based on two surveys — one by [_Automation World_][3] and another by [Futurum Research][4] — respondents have reported:

  * Decreased network downtime
  * Increased productivity
  * Increased profitability/reduced costs
  * Improved business processes

Basically, success metrics can be bucketed into two categories: direct value propositions and cost reductions. In the former, companies are seeking results that measure revenue generation — such as improved digital experiences with customers and users. In the latter, metrics that prove the value of digitizing processes — like speed, quality, and efficacy — will demonstrate success with edge deployments.

**Goalposts for success with edge**

Edge computing deployments are underway. But before diving in, understand what’s driving these projects.

According to Futurum Research, 29% of respondents are currently investing in edge infrastructure, and 62% expect to adopt within the year. For these companies, the driving force has been the business, which expects operational efficiencies from these investments. Beyond that, there’s an expectation down the road to better align with IoT strategy.

That being the case, it’s worth considering your business case before diving into edge. Ask: Are you seeking innovation and revenue generation amid digital transformation efforts? Or is your company looking for a low-risk, “test the waters” type of investment in edge?

Next up, what type of edge project makes sense for your environment? Edge data centers fall into three categories: local devices for a specific purpose (e.g., an appliance for security systems or a gateway for cloud-to-premises storage); small local data centers (typically one to 10 racks for storage and processing); and regional data centers (10+ racks for large office spaces).

Then, think about these best practices before talking with vendors:

  * Management: Especially for unmanned edge data centers, remote management is critical. You’ll need predictive alerts and a service contract that enables IT to be just as resilient as a regular data center.
  * Security: Much of today’s conversation has been around data security. That starts with physical protection. Too many data breaches — including theft and employee error — are caused by physical access to IT assets.
  * Standardization: There is no need to reinvent the wheel when it comes to edge data center deployments. Using standard components makes it easier for the internal IT team to deploy, manage, and maintain.

Finally, consider the ecosystem. The end-to-end nature of edge requires not just technology integration, but also that all third parties work well together. A good ecosystem connects customers, partners, and vendors.

Get further information to kickstart your edge computing environment at [APC.com][5].

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391570/measuring-the-edge-finding-success-with-edge-deployments.html#tk.rss_all

作者:[Anne Taylor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Anne-Taylor/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/istock-912928582-100795093-large.jpg
[2]: https://www.networkworld.com/article/3391016/edge-computing-is-in-most-industries-future.html
[3]: https://www.automationworld.com/article/technologies/cloud-computing/its-not-edge-vs-cloud-its-both
[4]: https://futurumresearch.com/edge-computing-from-edge-to-enterprise/
[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp
54
sources/talk/20190430 Must-know Linux Commands.md
Normal file
@ -0,0 +1,54 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Must-know Linux Commands)
[#]: via: (https://www.networkworld.com/article/3391029/must-know-linux-commands.html#tk.rss_all)
[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/)

Must-know Linux Commands
======
A compilation of essential commands for searching, monitoring and securing Linux systems - plus the Linux Command Line Cheat Sheet.
It takes some time working with Linux commands before you know which one you need for the task at hand, how to format it and what result to expect, but it’s possible to speed up the process.

With that in mind, we’ve gathered together some of the essential Linux commands as explained by Network World's [Unix as a Second Language][1] blogger Sandra Henry-Stocker to give aspiring power users what they need to get started with Linux systems.

The breakdown starts with the rich options available for finding files on Linux – **find**, **locate**, **mlocate**, **which**, **whereis**, to name a few. Some overlap, but some are more efficient than others or zoom in on a narrow set of results. There’s even a command – **apropos** – to find the right command to do what you want to do. This section will demystify searches.
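
By way of illustration, here is a quick sampler of those search commands on a typical Linux box; the paths and search terms are arbitrary examples:

```
# Finding things on a Linux system; paths and patterns are example values.
find /var/log -name "*.log" -mtime -1   # .log files modified in the last day
locate sshd_config                      # fast lookup from the updatedb index
which python3                           # the executable your shell would run
whereis tar                             # binary, source and man-page locations
apropos "list directory"                # search man pages for a matching command
```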

Henry-Stocker's article on memory provides a wealth of options for discovering the availability of physical and virtual memory and ways to have that information updated at intervals to constantly measure whether there’s enough memory to go around. It shows how it’s possible to tailor your requests so you get a concise presentation of the results you seek.
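
The kinds of checks that article describes can be approximated with standard tools; a small sketch (the intervals and sample counts are arbitrary):

```
# Inspecting physical and virtual memory with common utilities.
free -h              # human-readable totals for RAM and swap
watch -n 5 free -h   # refresh the same report every 5 seconds
vmstat 5 3           # three samples of memory/swap activity, 5 seconds apart
```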

Two remaining articles in this package show how to monitor activity on Linux servers and how to set up security parameters on these systems.

The first of these shows how to run the same command repetitively in order to have regular updates about any designated activity. It also describes a command that focuses on user processes and shows changes as they occur, and a command that examines the time that users are connected.

The final article is a deep dive into commands that help keep Linux systems secure. It describes 22 of them that are essential for day-to-day admin work. They can restrict privileges to keep individuals from having more capabilities than their jobs call for and report on who’s logged in, where from and how long they’ve been there.

Some of these commands can track recent logins for individuals, which can be useful in running down who made changes. Others find files with varying characteristics, such as having no owner, or find files by their contents. There are commands to control firewalls and to display routing tables.
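
A few commands of the kind those articles cover, again as a non-exhaustive sketch with example values:

```
# Monitoring users and basic security checks with standard commands.
w                            # who is logged in and what they are running
last -n 10                   # the ten most recent logins
find / -nouser 2>/dev/null   # files whose owner no longer exists
ip route show                # display the routing table
```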

As a bonus, our bundle of commands includes **The Linux Command-Line Cheat Sheet**, a concise summary of important commands that are useful every single day. It’s suitable for printing out on two sides of a single sheet, laminating and keeping beside your keyboard.

Enjoy!

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391029/must-know-linux-commands.html#tk.rss_all

作者:[Tim Greene][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Tim-Greene/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/blog/unix-as-a-second-language/?nsdr=true
@ -0,0 +1,81 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Cisco issues critical security warning for Nexus data-center switches)
[#]: via: (https://www.networkworld.com/article/3392858/cisco-issues-critical-security-warning-for-nexus-data-center-switches.html#tk.rss_all)
[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/)

Cisco issues critical security warning for Nexus data-center switches
======
Cisco released 40 security advisories around Nexus switches, Firepower firewalls and more
![Thinkstock][1]

Cisco issued some 40 security advisories today, but only one of them was deemed “[critical][2]” – a vulnerability in the Cisco Nexus 9000 Series Application Centric Infrastructure (ACI) Mode data-center switch that could let an attacker secretly access system resources.

The exposure, which was given a Common Vulnerability Scoring System severity rating of 9.8 out of 10, is described as a problem with secure shell (SSH) key management for the Cisco Nexus 9000 that lets a remote attacker connect to the affected system with the privileges of a root user, Cisco said.

**[ Read also: [How to plan a software-defined data-center network][3] ]**

“The vulnerability is due to the presence of a default SSH key pair that is present in all devices. An attacker could exploit this vulnerability by opening an SSH connection via IPv6 to a targeted device using the extracted key materials. This vulnerability is only exploitable over IPv6; IPv4 is not vulnerable,” Cisco wrote.

This vulnerability affects Nexus 9000s if they are running a Cisco NX-OS software release prior to 14.1, and the company said there were no workarounds to address the problem.
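
For administrators wondering whether a given switch falls in that range, the first thing to check is the running software release. A minimal sketch, assuming interactive SSH access to a hypothetical switch at 192.0.2.10 and the standard NX-OS `show version` command; look for a release number below 14.1 in the output:

```
# Example only: the address and username are placeholders.
ssh admin@192.0.2.10
# Then, at the switch prompt, print the running software release:
show version
```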

However, Cisco has [released free software updates][4] that address the vulnerability.

The company also issued a “high” security warning advisory for the Nexus 9000 that involves an exploit that would let attackers execute arbitrary operating-system commands as root on an affected device. To succeed, an attacker would need valid administrator credentials for the device, Cisco said.

The vulnerability is due to overly broad system-file permissions, [Cisco wrote][5]. An attacker could exploit this vulnerability by authenticating to an affected device, creating a crafted command string and writing this crafted string to a specific file location.

**[ [Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][6] ]**

Cisco has released software updates that address this vulnerability.

Two other vulnerabilities rated “high” also involved the Nexus 9000:

  * [A vulnerability][7] in the background-operations functionality of Cisco Nexus 9000 software could allow an authenticated, local attacker to gain elevated privileges as root on an affected device. The vulnerability is due to insufficient validation of user-supplied files on an affected device. Cisco said an attacker could exploit this vulnerability by logging in to the CLI of the affected device and creating a crafted file in a specific directory on the filesystem.
  * A [weakness][7] in the background-operations functionality of the switch software could allow an attacker to log in to the CLI of the affected device and create a crafted file in a specific directory on the filesystem. The vulnerability is due to insufficient validation of user-supplied files on an affected device, Cisco said.

Cisco has [released software][4] for these vulnerabilities as well.

Also part of these security alerts were a number of “high” rated warnings about vulnerabilities in Cisco’s Firepower firewall series.

For example, Cisco [wrote][8] that multiple vulnerabilities in the Server Message Block Protocol preprocessor detection engine for Cisco Firepower Threat Defense Software could allow an unauthenticated, adjacent or remote attacker to cause a denial of service (DoS) condition.

Yet [another vulnerability][9] in the internal packet-processing functionality of Cisco Firepower software for the Cisco Firepower 2100 Series could let an unauthenticated, remote attacker cause an affected device to stop processing traffic, resulting in a DoS situation, Cisco said.

[Software patches][4] are available for these vulnerabilities.

Other products, such as the Cisco [Adaptive Security Virtual Appliance][10] and [Web Security Appliance][11], had high-priority patches as well.

Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3392858/cisco-issues-critical-security-warning-for-nexus-data-center-switches.html#tk.rss_all

作者:[Michael Cooney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Michael-Cooney/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/02/lock_broken_unlocked_binary_code_security_circuits_protection_privacy_thinkstock_873916354-100750739-large.jpg
[2]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-nexus9k-sshkey
[3]: https://www.networkworld.com/article/3284352/data-center/how-to-plan-a-software-defined-data-center-network.html
[4]: https://www.cisco.com/c/en/us/about/legal/cloud-and-software/end_user_license_agreement.html
[5]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-nexus9k-rpe
[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr
[7]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-aci-hw-clock-util
[8]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-frpwr-smb-snort
[9]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-frpwr-dos
[10]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-asa-ipsec-dos
[11]: https://tools.cisco.com/security/center/content/CiscoSecurityAdvisory/cisco-sa-20190501-wsa-privesc
[12]: https://www.facebook.com/NetworkWorld/
[13]: https://www.linkedin.com/company/network-world
@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Health care is still stitching together IoT systems)
[#]: via: (https://www.networkworld.com/article/3392818/health-care-is-still-stitching-together-iot-systems.html#tk.rss_all)
[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/)

Health care is still stitching together IoT systems
======
The use of IoT technology in medicine is fraught with complications, but users are making it work.
_Government regulations, safety and technical integration are all serious issues facing the use of IoT in medicine, but professionals in the field say that medical IoT is moving forward despite the obstacles. A vendor, a doctor, and an IT pro all spoke to Network World about the work involved._

### Vendor: It's tough to gain acceptance

Josh Stein is the CEO and co-founder of Adheretech, a medical-IoT startup whose main product is a connected pill bottle. The idea is to help keep seriously ill patients current with their medications, by monitoring whether they’ve taken correct dosages or not.

The bottle – which patients get for free (Adheretech’s clients are hospitals and clinics) – uses a cellular modem to call home to the company’s servers and report on how much medication is left in the bottle, based on sensors that detect how many pills are touching the bottle’s sides and on measurements of its weight. There, the data is analyzed not just to determine whether patients are sticking to their doctor’s prescription, but to help identify possible side effects and whether they need additional help.

For example, a bottle that detects itself being moved to the bathroom too often might send up a flag that the patient is experiencing gastrointestinal side effects. The system can then contact patients or providers via phone or text to help them take the next steps.

The challenges to reach this point have been stiff, according to Stein. The company was founded in 2011 and spent the first four years of its existence simply designing and building its product.

“We had to go through many years of R&D to create a device that’s replicatible a million times over,” he said. “If you’re a healthcare company, you have to deal with HIPAA, the FDA, and then there’s lots of other things like medication bottles have their whole own set of regulatory certifications.”

Beyond the simple fact of regulatory compliance, Stein said that there’s resistance to this sort of new technology in the medical community.

“Healthcare is typically one of the last industries to adopt new technology,” he said.

### Doctor: Colleagues wonder if medical IoT plusses are worth the trouble

Dr. Rebecca Mishuris is the associate chief medical information officer at Boston Medical Center, a private non-profit hospital located in the South End. One of the institution’s chief missions is to act as a safety net for the population of the area – 57% of BMC’s patients come from under-served populations, and roughly a third don’t speak English as a primary language. That, in itself, can be a problem for IoT because many devices are designed to be used by native English speakers.

BMC’s adoption of IoT tech has taken place mostly at the individual-practice level – things like Bluetooth-enabled scales and diagnostic equipment for specific offices that want to use them – but there’s no hospital-wide IoT initiative happening, according to Mishuris.

That’s partially due to the fact that many practitioners aren’t convinced that connected healthcare devices are worth the trouble to purchase, install and manage, she said. HIPAA compliance and BMC’s own privacy regulations are a particular concern, given that many of the devices deal with patient-generated data.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3392818/health-care-is-still-stitching-together-iot-systems.html#tk.rss_all

作者:[Jon Gold][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Jon-Gold/
[b]: https://github.com/lujun9972
@ -0,0 +1,69 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Vapor IO provides direct, high-speed connections from the edge to AWS)
[#]: via: (https://www.networkworld.com/article/3391922/vapor-io-provides-direct-high-speed-connections-from-the-edge-to-aws.html#tk.rss_all)
[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/)

Vapor IO provides direct, high-speed connections from the edge to AWS
======
With a direct fiber line, latency between the edge and the cloud can be dramatically reduced.
![Vapor IO][1]

Edge computing startup Vapor IO now offers a direct connection between its edge containers and Amazon Web Services (AWS) via a high-speed fiber network link.

The company said that the connection between its Kinetic Edge containers and AWS will be provided by Crown Castle's Cloud Connect fiber network, which uses Amazon Direct Connect Services. This would help reduce network latency by essentially drawing a straight fiber line from Vapor IO's edge computing data centers to Amazon's cloud computing data centers.

“When combined with Crown Castle’s high-speed Cloud Connect fiber, the Kinetic Edge lets AWS developers build applications that span the entire continuum from core to edge. By enabling new classes of applications at the edge, we make it possible for any AWS developer to unlock the next generation of real-time, innovative use cases,” wrote Matt Trifiro, chief marketing officer of Vapor IO, in a [blog post][2].

**[ Read also: [What is edge computing and how it’s changing the network][3] ]**

Vapor IO claims that the connection will lower latency by as much as 75%. “Connecting workloads and data at the Kinetic Edge with workloads and data in centralized AWS data centers makes it possible to build edge applications that leverage the full power of AWS,” wrote Trifiro.

Developers building applications at the Kinetic Edge will have access to the full suite of AWS cloud computing services, including Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), Amazon Virtual Private Cloud (Amazon VPC), and Amazon Relational Database Service (Amazon RDS).

Crown Castle is the largest provider of shared communications infrastructure in the U.S., with 40,000 cell towers and 60,000 miles of fiber, offering 1Gbps to 10Gbps private fiber connectivity between the Kinetic Edge and AWS.

AWS Direct Connect is essentially a private connection between Amazon's AWS customers and the AWS data centers, so customers don’t have to route their traffic over the public internet and compete with Netflix and YouTube, for example, for bandwidth.

### How edge computing works

The structure of [edge computing][3] is the reverse of the standard internet design. Rather than sending all the data up to central servers, as much processing as possible is done at the edge. This is to reduce the sheer volume of data coming upstream and thus reduce latency.

With things like smart cars, even if 95% of the data is eliminated, the remaining 5% can still be a lot, so moving it fast is essential. Vapor IO said it will shuttle workloads to Amazon’s USEAST and USWEST data centers, depending on location.

This shows how the edge is up-ending the traditional internet design and moving more computing outside the traditional data center, although a connection upstream is still important because it allows for rapid movement of necessary data from the edge to the cloud, where it can be stored or processed.
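To make that pattern concrete, here is a minimal, illustrative C sketch (my own toy example, not Vapor IO or AWS code; the function and constant names are hypothetical) of edge-side aggregation: process the raw readings locally and send only a small summary upstream.

```
#include <stdio.h>

#define SAMPLES 1000  /* raw readings collected at the edge */

/* Hypothetical stand-in for a network call to a cloud endpoint. */
static void send_upstream(double mean, double peak) {
    printf("upstream payload: mean=%.2f peak=%.2f (2 values instead of %d)\n",
           mean, peak, SAMPLES);
}

int main(void) {
    double sum = 0.0, peak = 0.0;
    /* Simulated sensor loop: process everything locally... */
    for (int i = 0; i < SAMPLES; i++) {
        double reading = (i % 97) * 0.5;   /* fake sensor value */
        sum += reading;
        if (reading > peak) peak = reading;
    }
    /* ...and ship only the tiny summary to the cloud. */
    send_upstream(sum / SAMPLES, peak);
    return 0;
}
```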
**More about edge networking:**

  * [How edge networking and IoT will reshape data centers][4]
  * [Edge computing best practices][5]
  * [How edge computing can help secure the IoT][6]

Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391922/vapor-io-provides-direct-high-speed-connections-from-the-edge-to-aws.html#tk.rss_all

作者:[Andy Patrizio][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Andy-Patrizio/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2018/09/vapor-io-kinetic-edge-data-center-100771510-large.jpg
[2]: https://www.vapor.io/powering-amazon-web-services-at-the-kinetic-edge/
[3]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html
[4]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html
[5]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html
[6]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html
[7]: https://www.facebook.com/NetworkWorld/
[8]: https://www.linkedin.com/company/network-world
@ -0,0 +1,92 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Yet another killer cloud quarter puts pressure on data centers)
[#]: via: (https://www.networkworld.com/article/3391465/another-strong-cloud-computing-quarter-puts-pressure-on-data-centers.html#tk.rss_all)
[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/)

Yet another killer cloud quarter puts pressure on data centers
======
Continued strong growth from Amazon Web Services, Microsoft Azure, and Google Cloud Platform signals even more enterprises are moving to the cloud.
![Getty Images][1]

You’d almost think I’d get tired of [writing this story over and over and over][2]… but the ongoing growth of cloud computing is too big a trend to ignore.

Critically, the impressive growth numbers of the three leading cloud infrastructure providers—Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform—don’t occur in a vacuum. It’s not just about new workloads being run in the cloud; it’s also about more and more enterprises moving existing workloads to the cloud from on-premises data centers.

**[ Also read: [Is the cloud already killing the enterprise data center?][3] ]**

To put these trends in perspective, let’s take a look at the results for all three vendors.

### AWS keeps on trucking

AWS remains by far the dominant player in the cloud infrastructure market, with a massive [$7.7 billion in quarterly sales][4] (an annual run rate of a whopping $30.8 billion). Even more remarkable, somehow AWS continues to grow revenue by almost 42% year over year. Oh, and that kind of growth is not unique to _this_ quarter; the unit has topped 40% revenue growth _every_ quarter since the beginning of 2017. (To be fair, the first quarter of 2018 saw an amazing 49% revenue growth.)

And unlike many fast-growing tech companies, that incredible expansion isn’t being fueled by equally impressive losses. AWS earned a healthy $2.2 billion operating profit in the quarter, up 59% from the same period last year. One reason? [The company told analysts][5] it made big data center investments in 2016 and 2017, so it hasn’t had to do so more recently (it expects to boost spending on data centers later this year). The company [reportedly][6] described AWS revenue growth as “lumpy,” but it seems to me that the numbers merely vary between huge and even bigger.

### Microsoft Azure grows even faster than AWS

Sure, 41% growth is good, but [Microsoft’s quarterly Azure revenue][7] almost doubled that, jumping 73% year over year (fairly consistent with the previous—also stellar—quarter), helping the company exceed estimates for both sales and revenue and sparking a brief shining moment of a $1 trillion valuation for the company. Microsoft doesn’t break out Azure’s sales and revenue, but [the commercial cloud business, which includes Azure as well as other cloud businesses, grew 41% in the quarter to $9.6 billion][8].

It’s impossible to tell exactly how big Azure is, but it appears to be growing faster than AWS, though off a much smaller base. While some analysts reportedly say Azure is growing faster than AWS was at a similar stage in its development, that may not bear much significance because the addressable cloud market is now far larger than it used to be.

According to [the New York Times][9], like AWS, Microsoft is also now reaping the benefits of heavy investments in new data centers around the world. And the Times credits Microsoft with “particular success” in [hybrid cloud installations][10], helping ease concerns among some slow-to-change enterprises about full-scale cloud adoption.

**[ Also read: [Why hybrid cloud will turn out to be a transition strategy][11] ]**

### Can Google Cloud Platform keep up?

Even as the [overall quarterly numbers for Alphabet][12]—Google’s parent company—didn’t meet analysts’ revenue expectations (which sent the stock tumbling), Google Cloud Platform seems to have continued its strong growth. Alphabet doesn’t break out its cloud unit, but sales in Alphabet’s “Other Revenue” category—which includes cloud computing along with hardware—jumped 25% compared to the same period last year, hitting $5.4 billion.

More telling, perhaps, Alphabet Chief Financial Officer Ruth Porat [reportedly][13] told analysts that "Google Cloud Platform remains one of the fastest growing businesses in Alphabet." [Porat also mentioned][14] that hiring in the cloud unit was so aggressive that it drove a 20% jump in Alphabet’s operating expenses!

### Companies keep going cloud

But the raw numbers tell only part of the story. All that growth means existing customers are spending more, but also that ever-increasing numbers of enterprises are abandoning the hassle and expense of running their own data centers in favor of buying what they need from the cloud.

**[ Also read: [Large enterprises abandon data centers for the cloud][15] ]**

The New York Times quotes Amy Hood, Microsoft’s chief financial officer, explaining that, “You don’t really get revenue growth unless you have a usage growth, so this is customers deploying and using Azure.” And the Times notes that Microsoft has signed big deals with companies such as [Walgreens Boots Alliance][16] that combined Azure with other Microsoft cloud-based services.

This growth is true in existing markets, and it also includes new markets. For example, AWS just opened new regions in [Indonesia][17] and [Hong Kong][18].

**[ Now read: [After virtualization and cloud, what's left on premises?][19] ]**

Join the Network World communities on [Facebook][20] and [LinkedIn][21] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391465/another-strong-cloud-computing-quarter-puts-pressure-on-data-centers.html#tk.rss_all

作者:[Fredric Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Fredric-Paul/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/cloud_comput_connect_blue-100787048-large.jpg
[2]: https://www.networkworld.com/article/3292935/cloud-computing-just-had-another-kick-ass-quarter.html
[3]: https://www.networkworld.com/article/3268384/is-the-cloud-already-killing-the-enterprise-data-center.html
[4]: https://www.latimes.com/business/la-fi-amazon-earnings-cloud-computing-aws-20190425-story.html
[5]: https://www.businessinsider.com/amazon-q1-2019-earnings-aws-advertising-retail-prime-2019-4
[6]: https://www.computerweekly.com/news/252462310/Amazon-cautions-against-reading-too-much-into-slowdown-in-AWS-revenue-growth-rate
[7]: https://www.microsoft.com/en-us/Investor/earnings/FY-2019-Q3/press-release-webcast
[8]: https://www.cnbc.com/2019/04/24/microsoft-q3-2019-earnings.html
[9]: https://www.nytimes.com/2019/04/24/technology/microsoft-earnings.html
[10]: https://www.networkworld.com/article/3268448/what-is-hybrid-cloud-really-and-whats-the-best-strategy.html
[11]: https://www.networkworld.com/article/3238466/why-hybrid-cloud-will-turn-out-to-be-a-transition-strategy.html
[12]: https://abc.xyz/investor/static/pdf/2019Q1_alphabet_earnings_release.pdf?cache=8ac2b86
[13]: https://www.forbes.com/sites/jilliandonfro/2019/04/29/google-alphabet-q1-earnings-2019/#52f5c8c733be
[14]: https://www.youtube.com/watch?v=31_KHdse_0Y
[15]: https://www.networkworld.com/article/3240587/large-enterprises-abandon-data-centers-for-the-cloud.html
[16]: https://www.walgreensbootsalliance.com
[17]: https://www.businesswire.com/news/home/20190403005931/en/AWS-Open-New-Region-Indonesia
[18]: https://www.apnews.com/Business%20Wire/57eaf4cb603e46e6b05b634d9751699b
[19]: https://www.networkworld.com/article/3232626/virtualization/extreme-virtualization-impact-on-enterprises.html
[20]: https://www.facebook.com/NetworkWorld/
[21]: https://www.linkedin.com/company/network-world
@ -0,0 +1,65 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Revolutionary data compression technique could slash compute costs)
[#]: via: (https://www.networkworld.com/article/3392716/revolutionary-data-compression-technique-could-slash-compute-costs.html#tk.rss_all)
[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/)

Revolutionary data compression technique could slash compute costs
======
A new form of data compression, called Zippads, will create faster computer programs that could drastically lower the costs of computing.
![Kevin Stanchfield \(CC BY 2.0\)][1]

There’s a major problem with today’s money-saving memory compression used for storing more data in less space. The issue is that computers store and run memory in predetermined, fixed-size blocks, yet many modern programs function and play out in variable-size chunks.

The way it’s currently done is actually highly inefficient. That’s because the compressed programs, which use objects rather than evenly configured slabs of data, don’t match the space used to store and run them, explain scientists working on a revolutionary new compression system called Zippads.

The answer, they say—and something that, if it works, would drastically reduce those inefficiencies, speed things up, and, importantly, reduce compute costs—is to compress the varied objects and not the cache lines, as is the case now. Cache lines are fixed-size blocks of memory that are transferred to memory cache.

**[ Read also: [How to deal with backup when you switch to hyperconverged infrastructure][2] ]**

“Objects, not cache lines, are the natural unit of compression,” write Po-An Tsai and Daniel Sanchez in their MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) [paper][3] (pdf).

They say object-based programs — of the kind in everyday use now, such as those written in Python — should be compressed based on their programmed object size, not on some fixed value created by traditional or even state-of-the-art caching methods.

The alternative, though, isn’t to recklessly abandon object-oriented programming just because it makes poor use of compression. Rather, compression must be adapted to today’s common object-based code.

The scientists claim their new system can increase the compression ratio 1.63 times and improve performance by 17%. It’s the “first compressed memory hierarchy designed for object-based applications,” they say.

### The benefits of compression

Compression is a favored technique for making computers more efficient. The main advantage over simply adding more memory is that costs are lowered significantly—you don’t need to keep adding physical main memory hardware because you’re cramming more data into what already exists.

However, to date, hardware memory compression has been best suited to old-school large blocks of data, not the “random, fine-grained memory accesses” of modern programs, the team explains. It’s not great at accessing small pieces of data, such as words, for example.

### How the Zippads compression system works

In Zippads, as the new system is called, stored object hierarchical levels (called “pads”) are located on-chip and are directly accessed. The different levels (pads) have changing speed grades, with newly referenced objects being placed in the fastest pad. As a pad fills up, it begins the process of evicting older, not-so-active objects and ultimately recycles the unused code that is taking up desirable fast space and isn’t being used. Cleverly, at the fast level, the code parts aren’t even compressed, but as they prove their non-usefulness they get kicked down to compressed, slow-to-access, lower-importance pads—and are brought back up as necessary.
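The hardware details are in the paper, but the core policy (keep hot objects uncompressed in a fast pad; compress them on demotion to a slower pad) can be mimicked in software. The following toy C sketch is purely illustrative: it is not the Zippads design, and all names and sizes are invented. It evicts the oldest object from a tiny fast pad and run-length encodes it into a slow pad:

```
#include <stdio.h>
#include <string.h>

#define OBJ_MAX    64
#define FAST_SLOTS  2

typedef struct { unsigned char data[OBJ_MAX * 2]; int len; } Blob;

static Blob fast_pad[FAST_SLOTS];   /* raw (uncompressed) objects, newest last */
static int  fast_used = 0;
static Blob slow_pad[16];           /* RLE-compressed, demoted objects */
static int  slow_used = 0;

/* Tiny run-length encoder: emits (count, byte) pairs. */
static int rle(const unsigned char *in, int n, unsigned char *out) {
    int o = 0;
    for (int i = 0; i < n; ) {
        int run = 1;
        while (i + run < n && in[i + run] == in[i] && run < 255) run++;
        out[o++] = (unsigned char)run;
        out[o++] = in[i];
        i += run;
    }
    return o;
}

/* Insert a raw object into the fast pad, demoting (and compressing)
   the oldest resident when the pad is full. */
static void insert(const unsigned char *data, int n) {
    if (fast_used == FAST_SLOTS) {          /* pad full: evict slot 0 */
        Blob *victim = &fast_pad[0];
        slow_pad[slow_used].len =
            rle(victim->data, victim->len, slow_pad[slow_used].data);
        slow_used++;                        /* toy code: no bounds check */
        memmove(&fast_pad[0], &fast_pad[1], sizeof(Blob) * (FAST_SLOTS - 1));
        fast_used--;
    }
    memcpy(fast_pad[fast_used].data, data, n);
    fast_pad[fast_used].len = n;
    fast_used++;
}

int main(void) {
    unsigned char a[OBJ_MAX], b[OBJ_MAX], c[OBJ_MAX];
    memset(a, 'A', sizeof(a)); memset(b, 'B', sizeof(b)); memset(c, 'C', sizeof(c));
    insert(a, sizeof(a));  /* fast pad: [a]                               */
    insert(b, sizeof(b));  /* fast pad: [a, b]                            */
    insert(c, sizeof(c));  /* a demoted + compressed; fast pad: [b, c]    */
    printf("fast objects: %d, slow objects: %d, slow[0] compressed to %d bytes\n",
           fast_used, slow_used, slow_pad[0].len);
    return 0;
}
```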
Zippads would “see computers that can run much faster or can run many more apps at the same speeds,” an [MIT News][4] article says. “Each application consumes less memory, it runs faster, so a device can support more applications within its allotted memory.” Bandwidth is freed up, in other words.

“All computer systems would benefit from this,” Sanchez, a professor of computer science and electrical engineering, says in the article. “Programs become faster because they stop being bottlenecked by memory bandwidth.”

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3392716/revolutionary-data-compression-technique-could-slash-compute-costs.html#tk.rss_all

作者:[Patrick Nelson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Patrick-Nelson/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/02/memory-100787327-large.jpg
[2]: https://www.networkworld.com/article/3389396/how-to-deal-with-backup-when-you-switch-to-hyperconverged-infrastructure.html
[3]: http://people.csail.mit.edu/poantsai/papers/2019.zippads.asplos.pdf
[4]: http://news.mit.edu/2019/hardware-data-compression-0416
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world
@ -1,126 +0,0 @@
An introduction to the DomTerm terminal emulator for Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah)

[DomTerm][1] is a modern terminal emulator that uses a browser engine as a "GUI toolkit." This enables some neat features, such as embeddable graphics and links, HTML rich text, and foldable (show/hide) commands. Otherwise it looks and feels like a feature-full, standalone terminal emulator, with excellent xterm compatibility (including mouse handling and 24-bit color), and appropriate "chrome" (menus). In addition, there is built-in support for session management and sub-windows (as in `tmux` and `GNU screen`), basic input editing (as in `readline`), and paging (as in `less`).

![](https://opensource.com/sites/default/files/u128651/domterm1.png)
Image 1: The DomTerm terminal emulator. View larger image.

Below we'll look more at these features. We'll assume you have `domterm` installed (skip to the end of this article if you need to get and build DomTerm). First, though, here's a quick overview of the technology.

### Frontend vs. backend

Most of DomTerm is written in JavaScript and runs in a browser engine. This can be a desktop web browser, such as Chrome or Firefox (see image 3), or it can be an embedded browser. Using a general web browser works fine, but the user experience isn't as nice (as the menus are designed for general browsing, not for a terminal emulator), and the security model gets in the way, so using an embedded browser is nicer.

The following are currently supported:

  * `qtdomterm`, which uses the Qt toolkit and `QtWebEngine`
  * An [Electron][2] embedding (see image 1)
  * `atom-domterm` runs DomTerm as a package in the [Atom text editor][3] (which is also based on Electron) and integrates with the Atom pane system (see image 2)
  * A wrapper for JavaFX's `WebEngine`, which is useful for code written in Java (see image 4)
  * Previously, the preferred frontend used [Firefox-XUL][4], but Mozilla has since dropped XUL

![DomTerm terminal panes in Atom editor][6]

Image 2: DomTerm terminal panes in Atom editor. [View larger image.][7]

Currently, the Electron frontend is probably the nicest option, closely followed by the Qt frontend. If you use Atom, `atom-domterm` works pretty well.

The backend server is written in C. It manages pseudo terminals (PTYs) and sessions. It is also an HTTP server that provides the JavaScript and other files to the frontend. The `domterm` command starts terminal jobs and performs other requests. If there is no server running, `domterm` daemonizes itself. Communication between the backend and the frontend is normally done using WebSockets (with [libwebsockets][8] on the server). However, the JavaFX embedding uses neither WebSockets nor the DomTerm server; instead Java applications communicate directly using the Java-JavaScript bridge.

### A solid xterm-compatible terminal emulator

DomTerm looks and feels like a modern terminal emulator. It handles mouse events, 24-bit color, Unicode, double-width (CJK) characters, and input methods. DomTerm does a very good job on the [vttest testsuite][9].

Unusual features include:

**Show/hide buttons ("folding"):** The little triangles (seen in image 2 above) are buttons that hide/show the corresponding output. To create the buttons, just add certain [escape sequences][10] in the [prompt text][11].

**Mouse-click support for `readline` and similar input editors:** If you click in the (yellow) input area, DomTerm will send the right sequence of arrow-key keystrokes to the application. (This is enabled by escape sequences in the prompt; you can also force it using Alt+Click.)

**Style the terminal using CSS:** This is usually done in `~/.domterm/settings.ini`, which is automatically reloaded when saved. For example, in image 2, terminal-specific background colors were set.

### A better REPL console

A classic terminal emulator works on rectangular grids of character cells. This works for a REPL (command shell), but it is not ideal. Here are some DomTerm features useful for REPLs that are not typically found in terminal emulators:

**A command can "print" an image, a graph, a mathematical formula, or a set of clickable links:** An application can send an escape sequence containing almost any HTML. (The HTML is scrubbed to remove JavaScript and other dangerous features.)

Image 3 shows a fragment from a [`gnuplot`][12] session. Gnuplot (2.1 or later) supports `domterm` as a terminal type. Graphical output is converted to an [SVG image][13], which is then printed to the terminal. My blog post [Gnuplot display on DomTerm][14] provides more information on this.

![](https://opensource.com/sites/default/files/dt-gnuplot.png)
Image 3: Gnuplot screenshot. View larger image.

The [Kawa][15] language has a library for creating and transforming [geometric picture values][16]. If you print such a picture value to a DomTerm terminal, the picture is converted to SVG and embedded in the output.

![](https://opensource.com/sites/default/files/dt-kawa1.png)
Image 4: Computable geometry in Kawa. View larger image.

**Rich text in output:** Help messages are more readable and look nicer with HTML styling. The lower pane of image 1 shows the output from `domterm help`. (The output is plaintext if not running under DomTerm.) Note the `PAUSED` message from the built-in pager.

**Error messages can include clickable links:** DomTerm recognizes the syntax `filename:line:column:` and turns it into a link that opens the file and line in a configurable text editor. (This works for relative filenames if you use `PROMPT_COMMAND` or similar to track directories.)
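Any tool can opt into this linking simply by printing its diagnostics in that `filename:line:column:` shape. A minimal C sketch (the file name and position here are invented for illustration):

```
#include <stdio.h>

/* Emit a diagnostic in the "filename:line:column:" shape that
   DomTerm pattern-matches and turns into a clickable link. */
static void diagnostic(const char *file, int line, int col, const char *msg) {
    fprintf(stderr, "%s:%d:%d: error: %s\n", file, line, col, msg);
}

int main(void) {
    diagnostic("main.c", 42, 7, "expected ';' before '}' token");
    return 0;
}
```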
A compiler can detect that it is running under DomTerm and directly emit file links in an escape sequence. This is more robust than depending on DomTerm's pattern matching, as it handles spaces and other special characters, and it does not depend on directory tracking. In image 4, you can see error messages from the [Kawa compiler][15]. Hovering over the file position causes it to be underlined, and the `file:` URL shows in the `atom-domterm` message area (bottom of the window). (When not using `atom-domterm`, such messages are shown in an overlay box, as seen for the `PAUSED` message in image 1.)

The action when clicking on a link is configurable. The default action for a `file:` link with a `#position` suffix is to open the file in a text editor.

**Structured internal representation:** The following are all represented in the internal node structure: commands, prompts, input lines, normal and error output, and tabs; the structure is preserved if you "Save as HTML." The HTML file is compatible with XML, so you can use XML tools to search or transform the output. The command `domterm view-saved` opens a saved HTML file in a way that enables command folding (show/hide buttons are active) and reflow on window resize.

**Built-in Lisp-style pretty-printing:** You can include pretty-printing directives (e.g., grouping) in the output such that line breaks are recalculated on window resize. See my article [Dynamic pretty-printing in DomTerm][17] for a deeper discussion.

**Basic built-in line editing** with history (like `GNU readline`): This uses the browser's built-in editor, so it has great mouse and selection handling. You can switch between normal character-mode (most characters typed are sent directly to the process) and line-mode (regular characters are inserted while control characters cause editing actions, with Enter sending the edited line to the process). The default is automatic mode, where DomTerm switches between character-mode and line-mode depending on whether the PTY is in raw or canonical mode.

**A built-in pager** (like a simplified `less`): Keyboard shortcuts control scrolling. In "paging mode," the output pauses after each new screen (or single line, if you move forward line-by-line). The paging mode is unobtrusive and smart about user input, so you can (if you wish) run it without it interfering with interactive programs.

### Multiplexing and sessions

**Tabs and tiling:** Not only can you create multiple terminal tabs, you can also tile them. You can use either the mouse or a keyboard shortcut to move between panes and tabs as well as create new ones. They can be rearranged and resized with the mouse. This is implemented using the [GoldenLayout][18] JavaScript library. [Image 1][19] shows a window with two panes. The top one has two tabs, with one running [Midnight Commander][20]; the bottom pane shows `domterm help` output as HTML. However, on Atom we instead use its built-in draggable tiles and tabs; you can see this in image 2.

**Detaching and reattaching to sessions:** DomTerm supports session arrangement similar to `tmux` and GNU `screen`. You can even attach multiple windows or panes to the same session. This supports multi-user session sharing and remote connections. (For security, all sessions of the same server need to be able to read a Unix domain socket and a local file containing a random key. This restriction will be lifted when we have a good, safe remote-access story.)

**The `domterm` command** is also like `tmux` or GNU `screen` in that it has multiple options for controlling or starting a server that manages one or more sessions. The major difference is that, if it's not already running under DomTerm, the `domterm` command creates a new top-level window, rather than running in the existing terminal.

The `domterm` command has a number of sub-commands, similar to `tmux` or `git`. Some sub-commands create windows or sessions. Others (such as "printing" an image) only work within an existing DomTerm session.

The command `domterm browse` opens a window or pane for browsing a specified URL, such as when browsing documentation.

### Getting and installing DomTerm

DomTerm is available from its [GitHub repository][21]. Currently, there are no prebuilt packages, but there are [detailed instructions][22]. All prerequisites are available on Fedora 27, which makes it especially easy to build.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/1/introduction-domterm-terminal-emulator

作者:[Per Bothner][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/perbothner
[1]:http://domterm.org/
[2]:https://electronjs.org/
[3]:https://atom.io/
[4]:https://en.wikipedia.org/wiki/XUL
[5]:/file/385346
[6]:https://opensource.com/sites/default/files/images/dt-atom1.png (DomTerm terminal panes in Atom editor)
[7]:https://opensource.com/sites/default/files/images/dt-atom1.png
[8]:https://libwebsockets.org/
[9]:http://invisible-island.net/vttest/
[10]:http://domterm.org/Wire-byte-protocol.html
[11]:http://domterm.org/Shell-prompts.html
[12]:http://www.gnuplot.info/
[13]:https://developer.mozilla.org/en-US/docs/Web/SVG
[14]:http://per.bothner.com/blog/2016/gnuplot-in-domterm/
[15]:https://www.gnu.org/software/kawa/
[16]:https://www.gnu.org/software/kawa/Composable-pictures.html
[17]:http://per.bothner.com/blog/2017/dynamic-prettyprinting/
[18]:https://golden-layout.com/
[19]:https://opensource.com/sites/default/files/u128651/domterm1.png
[20]:https://midnight-commander.org/
[21]:https://github.com/PerBothner/DomTerm
[22]:http://domterm.org/Downloading-and-building.html
@ -1,147 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use autofs to mount NFS shares)
[#]: via: (https://opensource.com/article/18/6/using-autofs-mount-nfs-shares)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)

How to use autofs to mount NFS shares
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)

Most Linux file systems are mounted at boot and remain mounted while the system is running. This is also true of any remote file systems that have been configured in the `fstab` file. However, there may be times when you prefer to have a remote file system mount only on demand—for example, to boost performance by reducing network bandwidth usage, or to hide or obfuscate certain directories for security reasons. The package [autofs][1] provides this feature. In this article, I'll describe how to get a basic automount configuration up and running.

First, a few assumptions: Assume the NFS server named `tree.mydatacenter.net` is up and running. Also assume a data directory named `ourfiles` and two user directories, for Carl and Sarah, are being shared by this server.

A few best practices will make things work a bit better: It is a good idea to use the same user ID for your users on the server and any client workstations where they have an account. Also, your workstations and server should have the same domain name. Checking the relevant configuration files should confirm this.

```
alan@workstation1:~$ sudo getent passwd carl sarah
[sudo] password for alan:
carl:x:1020:1020:Carl,,,:/home/carl:/bin/bash
sarah:x:1021:1021:Sarah,,,:/home/sarah:/bin/bash

alan@workstation1:~$ sudo getent hosts
127.0.0.1       localhost
127.0.1.1       workstation1.mydatacenter.net   workstation1
10.10.1.5       tree.mydatacenter.net   tree
```

As you can see, both the client workstation and the NFS server are configured in the `hosts` file. I’m assuming a basic home or even small office network that might lack proper internal domain name service (i.e., DNS).

### Install the packages

You need to install only two packages: `nfs-common` for NFS client functions, and `autofs` to provide the automount function.

```
alan@workstation1:~$ sudo apt-get install nfs-common autofs
```

You can verify that the autofs files have been placed in the `etc` directory:

```
alan@workstation1:~$ cd /etc; ll auto*
-rw-r--r-- 1 root root 12596 Nov 19  2015 autofs.conf
-rw-r--r-- 1 root root   857 Mar 10  2017 auto.master
-rw-r--r-- 1 root root   708 Jul  6  2017 auto.misc
-rwxr-xr-x 1 root root  1039 Nov 19  2015 auto.net*
-rwxr-xr-x 1 root root  2191 Nov 19  2015 auto.smb*
alan@workstation1:/etc$
```

### Configure autofs

Now you need to edit several of these files and add the file `auto.home`. First, add the following two lines to the file `auto.master`:

```
/mnt/tree  /etc/auto.misc
/home/tree  /etc/auto.home
```

Each line begins with the directory where the NFS shares will be mounted. Go ahead and create those directories:

```
alan@workstation1:/etc$ sudo mkdir /mnt/tree /home/tree
```

Second, add the following line to the file `auto.misc`:

```
ourfiles        -fstype=nfs     tree:/share/ourfiles
```

This line instructs autofs to mount the `ourfiles` share at the location matched in the `auto.master` file for `auto.misc`. As shown above, these files will be available in the directory `/mnt/tree/ourfiles`.

Third, create the file `auto.home` with the following line:

```
*               -fstype=nfs     tree:/home/&
```

This line instructs autofs to mount the users share at the location matched in the `auto.master` file for `auto.home`. In this case, Carl and Sarah's files will be available in the directories `/home/tree/carl` or `/home/tree/sarah`, respectively. The asterisk (referred to as a wildcard) makes it possible for each user's share to be automatically mounted when they log in. The ampersand also works as a wildcard representing the user's directory on the server side. Their home directory should be mapped accordingly in the `passwd` file. This doesn’t have to be done if you prefer a local home directory; instead, the user could use this as simple remote storage for specific files.

Finally, restart the `autofs` daemon so it will recognize and load these configuration file changes.

```
alan@workstation1:/etc$ sudo service autofs restart
```

### Testing autofs

If you change to one of the directories listed in the file `auto.master` and run the `ls` command, you won’t see anything immediately. For example, change directory (`cd`) to `/mnt/tree`. At first, the output of `ls` won’t show anything, but after running `cd ourfiles`, the `ourfiles` share directory will be automatically mounted. The `cd` command will also be executed and you will be placed into the newly mounted directory.

```
carl@workstation1:~$ cd /mnt/tree
carl@workstation1:/mnt/tree$ ls
carl@workstation1:/mnt/tree$ cd ourfiles
carl@workstation1:/mnt/tree/ourfiles$
```

To further confirm that things are working, the `mount` command will display the details of the mounted share.

```
carl@workstation1:~$ mount
tree:/share/ourfiles on /mnt/tree/ourfiles type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.1.22,local_lock=none,addr=10.10.1.5)
```

The `/home/tree` directory will work the same way for Carl and Sarah.
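The same on-demand mount is triggered by any program that touches the path, not just the shell. A minimal C sketch (using the example paths configured above) shows this:

```
#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    struct stat sb;
    /* Touching the path is enough: autofs intercepts the lookup
       and mounts tree:/share/ourfiles before stat() returns. */
    if (stat("/mnt/tree/ourfiles", &sb) == 0 && S_ISDIR(sb.st_mode))
        printf("share is mounted and ready\n");
    else
        perror("stat /mnt/tree/ourfiles");
    return 0;
}
```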
I find it useful to bookmark these directories in my file manager for quicker access.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares

作者:[Alan Formy-Duval][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/alanfdoss
[1]:https://wiki.archlinux.org/index.php/autofs
@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )

@ -1,5 +1,5 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: translator: (bodhix)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
@ -1,531 +0,0 @@
[#]: collector: (lujun9972)
[#]: translator: (FSSlc)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Inter-process communication in Linux: Using pipes and message queues)
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-channels)
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)

Inter-process communication in Linux: Using pipes and message queues
======
Learn how processes synchronize with each other in Linux.
![Chat bubbles][1]

This is the second article in a series about [interprocess communication][2] (IPC) in Linux. The [first article][3] focused on IPC through shared storage: shared files and shared memory segments. This article turns to pipes, which are channels that connect processes for communication. A channel has a _write end_ for writing bytes, and a _read end_ for reading these bytes in FIFO (first in, first out) order. In typical use, one process writes to the channel, and a different process reads from this same channel. The bytes themselves might represent anything: numbers, employee records, digital movies, and so on.

Pipes come in two flavors, named and unnamed, and can be used either interactively from the command line or within programs; examples are forthcoming. This article also looks at memory queues, which have fallen out of fashion—but undeservedly so.

The code examples in the first article acknowledged the threat of race conditions (either file-based or memory-based) in IPC that uses shared storage. The question naturally arises about safe concurrency for the channel-based IPC, which will be covered in this article. The code examples for pipes and memory queues use APIs with the POSIX stamp of approval, and a core goal of the POSIX standards is thread-safety.

Consider the [man pages for the **mq_open**][4] function, which belongs to the memory queue API. These pages include a section on [Attributes][5] with this small table:

Interface | Attribute | Value
---|---|---
mq_open() | Thread safety | MT-Safe

The value **MT-Safe** (with **MT** for multi-threaded) means that the **mq_open** function is thread-safe, which in turn implies process-safe: A process executes in precisely the sense that one of its threads executes, and if a race condition cannot arise among threads in the _same_ process, such a condition cannot arise among threads in different processes. The **MT-Safe** attribute assures that a race condition does not arise in invocations of **mq_open**. In general, channel-based IPC is concurrent-safe, although a cautionary note is raised in the examples that follow.
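As a taste of that API before the article's own queue examples arrive, here is a minimal sketch (my own, not the article's code) that opens a queue, sends one message, and reads it back. The queue name `/demo_queue` and the attribute values are arbitrary choices, and on Linux the program links with `-lrt`:

```
#include <fcntl.h>     /* O_* constants */
#include <mqueue.h>    /* POSIX message queue API */
#include <stdio.h>
#include <string.h>

int main(void) {
    struct mq_attr attr = {0};
    attr.mq_maxmsg = 8;     /* at most 8 queued messages */
    attr.mq_msgsize = 64;   /* each at most 64 bytes */

    /* Create (or open) a queue; the leading '/' in the name is required. */
    mqd_t mq = mq_open("/demo_queue", O_RDWR | O_CREAT, 0666, &attr);
    if (mq == (mqd_t) -1) { perror("mq_open"); return 1; }

    const char* msg = "hello from a queue";
    if (mq_send(mq, msg, strlen(msg) + 1, 0) < 0) perror("mq_send");

    char buf[64]; /* must be at least mq_msgsize bytes for mq_receive */
    ssize_t n = mq_receive(mq, buf, sizeof(buf), NULL);
    if (n >= 0) printf("received %zd bytes: %s\n", n, buf);

    mq_close(mq);
    mq_unlink("/demo_queue"); /* remove the queue name */
    return 0;
}
```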
### Unnamed pipes

Let's start with a contrived command line example that shows how unnamed pipes work. On all modern systems, the vertical bar **|** represents an unnamed pipe at the command line. Assume **%** is the command line prompt, and consider this command:

```
% sleep 5 | echo "Hello, world!" ## writer to the left of |, reader to the right
```

The _sleep_ and _echo_ utilities execute as separate processes, and the unnamed pipe allows them to communicate. However, the example is contrived in that no communication occurs. The greeting _Hello, world!_ appears on the screen; then, after about five seconds, the command line prompt returns, indicating that both the _sleep_ and _echo_ processes have exited. What's going on?

In the vertical-bar syntax from the command line, the process to the left (_sleep_) is the writer, and the process to the right (_echo_) is the reader. By default, the reader blocks until there are bytes to read from the channel, and the writer—after writing its bytes—finishes up by sending an end-of-stream marker. (Even if the writer terminates prematurely, an end-of-stream marker is sent to the reader.) The unnamed pipe persists until both the writer and the reader terminate.

In the contrived example, the _sleep_ process does not write any bytes to the channel but does terminate after about five seconds, which sends an end-of-stream marker to the channel. In the meantime, the _echo_ process immediately writes the greeting to the standard output (the screen) because this process does not read any bytes from the channel, so it does no waiting. Once the _sleep_ and _echo_ processes terminate, the unnamed pipe—not used at all for communication—goes away and the command line prompt returns.

Here is a more useful example using two unnamed pipes. Suppose that the file _test.dat_ looks like this:

```
this
is
the
way
the
world
ends
```

The command:

```
% cat test.dat | sort | uniq
```

pipes the output from the _cat_ (concatenate) process into the _sort_ process to produce sorted output, and then pipes the sorted output into the _uniq_ process to eliminate duplicate records (in this case, the two occurrences of **the** reduce to one):

```
ends
is
the
this
way
world
```

The scene now is set for a program with two processes that communicate through an unnamed pipe.

#### Example 1. Two processes communicating through an unnamed pipe.

```
#include <sys/wait.h> /* wait */
#include <stdio.h>
#include <stdlib.h>   /* exit functions */
#include <unistd.h>   /* read, write, pipe, _exit */
#include <string.h>

#define ReadEnd  0
#define WriteEnd 1

void report_and_exit(const char* msg) {
  perror(msg);
  exit(-1); /** failure **/
}

int main() {
  int pipeFDs[2]; /* two file descriptors */
  char buf;       /* 1-byte buffer */
  const char* msg = "Nature's first green is gold\n"; /* bytes to write */

  if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
  pid_t cpid = fork();                                /* fork a child process */
  if (cpid < 0) report_and_exit("fork");              /* check for failure */

  if (0 == cpid) {    /*** child ***/                 /* child process */
    close(pipeFDs[WriteEnd]);                         /* child reads, doesn't write */

    while (read(pipeFDs[ReadEnd], &buf, 1) > 0)       /* read until end of byte stream */
      write(STDOUT_FILENO, &buf, sizeof(buf));        /* echo to the standard output */

    close(pipeFDs[ReadEnd]);                          /* close the ReadEnd: all done */
    _exit(0);                                         /* exit and notify parent at once */
  }
  else {              /*** parent ***/
    close(pipeFDs[ReadEnd]);                          /* parent writes, doesn't read */

    write(pipeFDs[WriteEnd], msg, strlen(msg));       /* write the bytes to the pipe */
    close(pipeFDs[WriteEnd]);                         /* done writing: generate eof */

    wait(NULL);                                       /* wait for child to exit */
    exit(0);                                          /* exit normally */
  }
  return 0;
}
```

The _pipeUN_ program above uses the system function **fork** to create a process. Although the program has but a single source file, multi-processing occurs during (successful) execution. Here are the particulars in a quick review of how the library function **fork** works:

  * The **fork** function, called in the _parent_ process, returns **-1** to the parent in case of failure. In the _pipeUN_ example, the call is:

    ```
    pid_t cpid = fork(); /* called in parent */
    ```

    The returned value is stored, in this example, in the variable **cpid** of integer type **pid_t**. (Every process has its own _process ID_, a non-negative integer that identifies the process.) Forking a new process could fail for several reasons, including a full _process table_, a structure that the system maintains to track processes. Zombie processes, clarified shortly, can cause a process table to fill if these are not harvested.
  * If the **fork** call succeeds, it thereby spawns (creates) a new child process, returning one value to the parent but a different value to the child. Both the parent and the child process execute the _same_ code that follows the call to **fork**. (The child inherits copies of all the variables declared so far in the parent.) In particular, a successful call to **fork** returns:
    * Zero to the child process
    * The child's process ID to the parent
  * An _if/else_ or equivalent construct typically is used after a successful **fork** call to segregate code meant for the parent from code meant for the child. In this example, the construct is:

    ```
    if (0 == cpid) { /*** child ***/
      ...
    }
    else { /*** parent ***/
      ...
    }
    ```

If forking a child succeeds, the _pipeUN_ program proceeds as follows. There is an integer array:

```
int pipeFDs[2]; /* two file descriptors */
```

to hold two file descriptors, one for writing to the pipe and another for reading from the pipe. (The array element **pipeFDs[0]** is the file descriptor for the read end, and the array element **pipeFDs[1]** is the file descriptor for the write end.) A successful call to the system **pipe** function, made immediately before the call to **fork**, populates the array with the two file descriptors:

```
if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
```

The parent and the child now have copies of both file descriptors, but the _separation of concerns_ pattern means that each process requires exactly one of the descriptors. In this example, the parent does the writing and the child does the reading, although the roles could be reversed. The first statement in the child _if_-clause code, therefore, closes the pipe's write end:

```
close(pipeFDs[WriteEnd]); /* called in child code */
```

and the first statement in the parent _else_-clause code closes the pipe's read end:

```
close(pipeFDs[ReadEnd]); /* called in parent code */
```

The parent then writes some bytes (ASCII codes) to the unnamed pipe, and the child reads these and echoes them to the standard output.

One more aspect of the program needs clarification: the call to the **wait** function in the parent code. Once spawned, a child process is largely independent of its parent, as even the short _pipeUN_ program illustrates. The child can execute arbitrary code that may have nothing to do with the parent. However, the system does notify the parent through a signal—if and when the child terminates.

What if the child terminates before the parent? In this case, unless precautions are taken, the child becomes and remains a _zombie_ process with an entry in the process table. The precautions are of two broad types. One precaution is to have the parent notify the system that the parent has no interest in the child's termination:

```
signal(SIGCHLD, SIG_IGN); /* in parent: ignore notification */
```

A second approach is to have the parent execute a **wait** on the child's termination, thereby ensuring that the parent outlives the child. This second approach is used in the _pipeUN_ program, where the parent code has this call:

```
wait(NULL); /* called in parent */
```

This call to **wait** means _wait until the termination of any child occurs_, and in the _pipeUN_ program, there is only one child process. (The **NULL** argument could be replaced with the address of an integer variable to hold the child's exit status.) There is a more flexible **waitpid** function for fine-grained control, e.g., for specifying a particular child process among several.
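As a hedged illustration (this is not part of the _pipeUN_ listing, and the child's exit status of 7 is arbitrary), the parent could use **waitpid** to reap one specific child and inspect how it exited:

```
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t cpid = fork();
    if (cpid < 0) { perror("fork"); exit(-1); }

    if (0 == cpid) _exit(7);          /* child: exit with status 7 */

    int status;
    /* Block until this particular child (cpid) terminates. */
    if (waitpid(cpid, &status, 0) == cpid && WIFEXITED(status))
        printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```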
The _pipeUN_ program takes another precaution. When the parent is done waiting, the parent terminates with the call to the regular **exit** function. By contrast, the child terminates with a call to the **_exit** variant, which fast-tracks notification of termination. In effect, the child is telling the system to notify the parent ASAP that the child has terminated.

If two processes write to the same unnamed pipe, can the bytes be interleaved? For example, if process P1 writes:

```
foo bar
```

to a pipe and process P2 concurrently writes:

```
baz baz
```

to the same pipe, it seems that the pipe contents might be something arbitrary, such as:

```
baz foo baz bar
```

The POSIX standard ensures that writes are not interleaved so long as no write exceeds **PIPE_BUF** bytes. On Linux systems, **PIPE_BUF** is 4,096 bytes in size. My preference with pipes is to have a single writer and a single reader, thereby sidestepping the issue.
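Rather than assuming the 4,096-byte figure, a program can query the limit for a particular pipe. A minimal sketch using the standard **fpathconf** function, applied here to an unnamed pipe's write end:

```
#include <limits.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    if (pipe(fds) < 0) { perror("pipe"); return -1; }

    /* _PC_PIPE_BUF reports the atomic-write limit for this pipe. */
    long limit = fpathconf(fds[1], _PC_PIPE_BUF);
    printf("PIPE_BUF at compile time: %d, at runtime: %ld\n",
           PIPE_BUF, limit);

    close(fds[0]);
    close(fds[1]);
    return 0;
}
```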
## Named pipes
|
||||
|
||||
An unnamed pipe has no backing file: the system maintains an in-memory buffer to transfer bytes from the writer to the reader. Once the writer and reader terminate, the buffer is reclaimed, so the unnamed pipe goes away. By contrast, a named pipe has a backing file and a distinct API.
|
||||
|
||||
Let's look at another command line example to get the gist of named pipes. Here are the steps:
|
||||
|
||||
* Open two terminals. The working directory should be the same for both.
|
||||
* In one of the terminals, enter these two commands (the prompt again is **%** , and my comments start with **##** ): [code] % mkfifo tester ## creates a backing file named tester
|
||||
% cat tester ## type the pipe's contents to stdout [/code] At the beginning, nothing should appear in the terminal because nothing has been written yet to the named pipe.
|
||||
* In the second terminal, enter the command: [code] % cat > tester ## redirect keyboard input to the pipe
|
||||
hello, world! ## then hit Return key
|
||||
bye, bye ## ditto
|
||||
<Control-C> ## terminate session with a Control-C [/code] Whatever is typed into this terminal is echoed in the other. Once **Ctrl+C** is entered, the regular command line prompt returns in both terminals: the pipe has been closed.
|
||||
* Clean up by removing the file that implements the named pipe: [code]`% unlink tester`
|
||||
```
|
||||
|
||||
|
||||
|
||||
As the utility's name _mkfifo_ implies, a named pipe also is called a FIFO because the first byte in is the first byte out, and so on. There is a library function named **mkfifo** that creates a named pipe in programs and is used in the next example, which consists of two processes: one writes to the named pipe and the other reads from this pipe.
|
||||
|
||||
#### Example 2. The _fifoWriter_ program
|
||||
|
||||
|
||||
```
|
||||
#include <sys/types.h>
|
||||
#include <sys/stat.h>
|
||||
#include <fcntl.h>
|
||||
#include <unistd.h>
|
||||
#include <time.h>
|
||||
#include <stdlib.h>
|
||||
#include <stdio.h>
|
||||
|
||||
#define MaxLoops 12000 /* outer loop */
|
||||
#define ChunkSize 16 /* how many written at a time */
|
||||
#define IntsPerChunk 4 /* four 4-byte ints per chunk */
|
||||
#define MaxZs 250 /* max microseconds to sleep */
|
||||
|
||||
int main() {
|
||||
const char* pipeName = "./fifoChannel";
|
||||
mkfifo(pipeName, 0666); /* read/write for user/group/others */
|
||||
int fd = open(pipeName, O_CREAT | O_WRONLY); /* open as write-only */
|
||||
if (fd < 0) return -1; /* can't go on */
|
||||
|
||||
int i;
|
||||
for (i = 0; i < MaxLoops; i++) { /* write MaxWrites times */
|
||||
int j;
|
||||
for (j = 0; j < ChunkSize; j++) { /* each time, write ChunkSize bytes */
|
||||
int k;
|
||||
int chunk[IntsPerChunk];
|
||||
for (k = 0; k < IntsPerChunk; k++)
|
||||
chunk[k] = [rand][9]();
|
||||
write(fd, chunk, sizeof(chunk));
|
||||
}
|
||||
usleep(([rand][9]() % MaxZs) + 1); /* pause a bit for realism */
|
||||
}
|
||||
|
||||
close(fd); /* close pipe: generates an end-of-stream marker */
|
||||
unlink(pipeName); /* unlink from the implementing file */
|
||||
[printf][10]("%i ints sent to the pipe.\n", MaxLoops * ChunkSize * IntsPerChunk);
|
||||
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
The _fifoWriter_ program above can be summarized as follows:
|
||||
|
||||
* The program creates a named pipe for writing: [code] mkfifo(pipeName, 0666); /* read/write perms for user/group/others */
|
||||
int fd = open(pipeName, O_WRONLY); [/code] where **pipeName** is the name of the backing file passed to **mkfifo** as the first argument. The named pipe then is opened with the by-now familiar call to the **open** function, which returns a file descriptor. (The **O_CREAT** flag is unnecessary here, since **mkfifo** has already created the backing file, and **open** with **O_CREAT** would require a third mode argument.)
|
||||
* For a touch of realism, the _fifoWriter_ does not write all the data at once, but instead writes a chunk, sleeps a random number of microseconds, and so on. In total, 768,000 4-byte integer values are written to the named pipe.
|
||||
* After closing the named pipe, the _fifoWriter_ also unlinks the file: [code] close(fd); /* close pipe: generates end-of-stream marker */
|
||||
unlink(pipeName); /* unlink from the implementing file */ [/code] The system reclaims the backing file once its name has been unlinked and no process still has the pipe open. In this example, there are only two such processes: the _fifoWriter_ and the _fifoReader_ , both of which call _unlink_ (the second call is harmlessly redundant).
|
||||
|
||||
|
||||
|
||||
The two programs should be executed in different terminals with the same working directory. However, the _fifoWriter_ should be started before the _fifoReader_ , as the former creates the pipe. The _fifoReader_ then accesses the already created named pipe.
|
||||
|
||||
#### Example 3. The _fifoReader_ program
|
||||
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
#include <fcntl.h>
|
||||
#include <unistd.h>
|
||||
|
||||
unsigned is_prime(unsigned n) { /* not pretty, but efficient */
|
||||
if (n <= 3) return n > 1;
|
||||
if (0 == (n % 2) || 0 == (n % 3)) return 0;
|
||||
|
||||
unsigned i;
|
||||
for (i = 5; (i * i) <= n; i += 6)
|
||||
if (0 == (n % i) || 0 == (n % (i + 2))) return 0;
|
||||
|
||||
return 1; /* found a prime! */
|
||||
}
|
||||
|
||||
int main() {
|
||||
const char* file = "./fifoChannel";
|
||||
int fd = open(file, O_RDONLY);
|
||||
if (fd < 0) return -1; /* no point in continuing */
|
||||
unsigned total = 0, primes_count = 0;
|
||||
|
||||
while (1) {
|
||||
int next;
|
||||
|
||||
|
||||
ssize_t count = read(fd, &next, sizeof(int));
|
||||
if (0 == count) break; /* end of stream */
|
||||
else if (count == sizeof(int)) { /* read a 4-byte int value */
|
||||
total++;
|
||||
if (is_prime(next)) primes_count++;
|
||||
}
|
||||
}
|
||||
|
||||
close(fd); /* close pipe from read end */
|
||||
unlink(file); /* unlink from the underlying file */
|
||||
[printf][10]("Received ints: %u, primes: %u\n", total, primes_count);
|
||||
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
The _fifoReader_ program above can be summarized as follows:
|
||||
|
||||
* Because the _fifoWriter_ creates the named pipe, the _fifoReader_ needs only the standard call **open** to access the pipe through the backing file: [code] const char* file = "./fifoChannel";
|
||||
int fd = open(file, O_RDONLY); [/code] The file opens as read-only.
|
||||
* The program then goes into a potentially infinite loop, trying to read a 4-byte chunk on each iteration. The **read** call: [code]`ssize_t count = read(fd, &next, sizeof(int));`[/code] returns 0 to indicate end-of-stream, in which case the _fifoReader_ breaks out of the loop, closes the named pipe, and unlinks the backing file before terminating.
|
||||
* After reading a 4-byte integer, the _fifoReader_ checks whether the number is a prime. This represents the business logic that a production-grade reader might perform on the received bytes. On a sample run, there were 37,682 primes among the 768,000 integers received.
|
||||
|
||||
|
||||
|
||||
On repeated sample runs, the _fifoReader_ successfully read all of the bytes that the _fifoWriter_ wrote. This is not surprising. The two processes execute on the same host, taking network issues out of the equation. Named pipes are a highly reliable and efficient IPC mechanism and, therefore, in wide use.
|
||||
|
||||
Here is the output from the two programs, each launched from a separate terminal but with the same working directory:
|
||||
|
||||
|
||||
```
|
||||
% ./fifoWriter
|
||||
768000 ints sent to the pipe.
|
||||
###
|
||||
% ./fifoReader
|
||||
Received ints: 768000, primes: 37682
|
||||
```
|
||||
|
||||
### Message queues
|
||||
|
||||
Pipes have strict FIFO behavior: the first byte written is the first byte read, the second byte written is the second byte read, and so forth. Message queues can behave in the same way but are flexible enough that byte chunks can be retrieved out of FIFO order.
|
||||
|
||||
As the name suggests, a message queue is a sequence of messages, each of which has two parts:
|
||||
|
||||
* The payload, which is an array of bytes ( **char** in C)
|
||||
* A type, given as a positive integer value; types categorize messages for flexible retrieval
|
||||
|
||||
|
||||
|
||||
Consider the following depiction of a message queue, with each message labeled with an integer type:
|
||||
|
||||
|
||||
```
|
||||
+-+ +-+ +-+ +-+
|
||||
sender--->|3|--->|2|--->|2|--->|1|--->receiver
|
||||
+-+ +-+ +-+ +-+
|
||||
```
|
||||
|
||||
Of the four messages shown, the one labeled 1 is at the front, i.e., closest to the receiver. Next come two messages with label 2, and finally, a message labeled 3 at the back. If strict FIFO behavior were in play, then the messages would be received in the order 1-2-2-3. However, the message queue allows other retrieval orders. For example, the messages could be retrieved by the receiver in the order 3-2-1-2.
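
The selection is driven by the type argument of the retrieval call (**msgrcv**, used in Example 6): 0 retrieves the first message in the queue regardless of type, a positive value retrieves the first message of exactly that type, and a negative value retrieves the first message whose type is lowest and less than or equal to the absolute value. Here is a minimal sketch of the negative-type case (my addition, not part of the article's example programs; it uses **IPC_PRIVATE** so no key file is needed):

```
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

typedef struct {
  long type;
  char payload[8];
} demoMsg;

int main() {
  int qid = msgget(IPC_PRIVATE, 0666 | IPC_CREAT); /* private queue: no ftok key needed */
  if (qid < 0) return -1;

  long types[] = {3, 1, 2};
  int i;
  for (i = 0; i < 3; i++) {                /* enqueue messages of types 3, 1, 2 */
    demoMsg m;
    m.type = types[i];
    strcpy(m.payload, "demo");
    msgsnd(qid, &m, sizeof(m.payload), 0);
  }

  demoMsg r;
  /* msgtyp  0: first message in the queue (strict FIFO)       */
  /* msgtyp  2: first message of exactly type 2                */
  /* msgtyp -2: message with the lowest type <= 2, here type 1 */
  msgrcv(qid, &r, sizeof(r.payload), -2, 0);
  printf("retrieved a message of type %ld\n", r.type); /* prints 1 */

  msgctl(qid, IPC_RMID, NULL);             /* remove the queue */
  return 0;
}
```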
|
||||
|
||||
The _mqueue_ example consists of two programs, the _sender_ that writes to the message queue and the _receiver_ that reads from this queue. Both programs include the header file _queue.h_ shown below:
|
||||
|
||||
#### Example 4. The header file _queue.h_
|
||||
|
||||
|
||||
```
|
||||
#define ProjectId 123
|
||||
#define PathName "queue.h" /* any existing, accessible file would do */
|
||||
#define MsgLen 4
|
||||
#define MsgCount 6
|
||||
|
||||
typedef struct {
|
||||
long type; /* must be of type long */
|
||||
char payload[MsgLen + 1]; /* bytes in the message */
|
||||
} queuedMessage;
|
||||
```
|
||||
|
||||
The header file defines a structure type named **queuedMessage** , with **payload** (byte array) and **type** (integer) fields. This file also defines symbolic constants (the **#define** statements), the first two of which are used to generate a key that, in turn, is used to get a message queue ID. The **ProjectId** can be any positive integer value, and the **PathName** must be an existing, accessible file—in this case, the file _queue.h_. The setup statements in both the _sender_ and the _receiver_ programs are:
|
||||
|
||||
|
||||
```
|
||||
key_t key = ftok(PathName, ProjectId); /* generate key */
|
||||
int qid = msgget(key, 0666 | IPC_CREAT); /* use key to get queue id */
|
||||
```
|
||||
|
||||
The ID **qid** is, in effect, the counterpart of a file descriptor for message queues.
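
To push the analogy a little further: just as **fstat** reports on a file descriptor, **msgctl** with the **IPC_STAT** flag reports on a queue ID. A minimal sketch (my addition, again using a private queue rather than the article's ftok-generated key):

```
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>

int main() {
  int qid = msgget(IPC_PRIVATE, 0666 | IPC_CREAT);
  if (qid < 0) return -1;

  struct msqid_ds info;
  if (msgctl(qid, IPC_STAT, &info) == 0)   /* like fstat, but for a queue ID */
    printf("messages queued: %lu, queue byte limit: %lu\n",
           (unsigned long) info.msg_qnum,
           (unsigned long) info.msg_qbytes);

  msgctl(qid, IPC_RMID, NULL);             /* clean up */
  return 0;
}
```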
|
||||
|
||||
#### Example 5. The message _sender_ program
|
||||
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
#include <sys/ipc.h>
|
||||
#include <sys/msg.h>
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
#include "queue.h"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
[perror][6](msg);
|
||||
[exit][7](EXIT_FAILURE);
|
||||
}
|
||||
|
||||
int main() {
|
||||
key_t key = ftok(PathName, ProjectId);
|
||||
if (key < 0) report_and_exit("couldn't get key...");
|
||||
|
||||
int qid = msgget(key, 0666 | IPC_CREAT);
|
||||
if (qid < 0) report_and_exit("couldn't get queue id...");
|
||||
|
||||
char* payloads[] = {"msg1", "msg2", "msg3", "msg4", "msg5", "msg6"};
|
||||
int types[] = {1, 1, 2, 2, 3, 3}; /* each must be > 0 */
|
||||
int i;
|
||||
for (i = 0; i < MsgCount; i++) {
|
||||
/* build the message */
|
||||
queuedMessage msg;
|
||||
msg.type = types[i];
|
||||
[strcpy][11](msg.payload, payloads[i]);
|
||||
|
||||
/* send the message */
|
||||
msgsnd(qid, &msg, sizeof(msg.payload), IPC_NOWAIT); /* don't block; size counts payload bytes only */
|
||||
[printf][10]("%s sent as type %i\n", msg.payload, (int) msg.type);
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
The _sender_ program above sends out six messages, two each of a specified type: the first two messages are of type 1, the next two of type 2, and the last two of type 3. The sending statement:
|
||||
|
||||
|
||||
```
|
||||
`msgsnd(qid, &msg, sizeof(msg.payload), IPC_NOWAIT);`
|
||||
```
|
||||
|
||||
is configured to be non-blocking (the flag **IPC_NOWAIT** ) because the messages are so small. The only danger is that a full queue, unlikely in this example, would result in a sending failure. The _receiver_ program below also receives messages using the **IPC_NOWAIT** flag.
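
For completeness, here is a minimal sketch (my addition) of what that failure looks like: when the queue is full, **msgsnd** with **IPC_NOWAIT** fails immediately with **errno** set to **EAGAIN**, at which point a caller could retry later or fall back to a blocking send. The sketch shrinks a private queue's byte limit so the condition is easy to trigger:

```
#include <errno.h>
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>

typedef struct {
  long type;
  char payload[32];
} demoMsg;

int main() {
  int qid = msgget(IPC_PRIVATE, 0666 | IPC_CREAT);
  if (qid < 0) { perror("msgget"); return -1; }

  struct msqid_ds info;
  msgctl(qid, IPC_STAT, &info);
  info.msg_qbytes = 64;                    /* lowering the limit needs no privilege */
  msgctl(qid, IPC_SET, &info);

  demoMsg msg = { 1, "filler" };
  int sent = 0;
  while (msgsnd(qid, &msg, sizeof(msg.payload), IPC_NOWAIT) == 0)
    sent++;                                /* keep sending until the queue fills */

  if (errno == EAGAIN)
    printf("queue full after %i messages: retry later or block instead\n", sent);
  else
    perror("msgsnd");

  msgctl(qid, IPC_RMID, NULL);             /* remove the queue */
  return 0;
}
```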
|
||||
|
||||
#### Example 6. The message _receiver_ program
|
||||
|
||||
|
||||
```
|
||||
#include <stdio.h>
|
||||
#include <sys/ipc.h>
|
||||
#include <sys/msg.h>
|
||||
#include <stdlib.h>
|
||||
#include "queue.h"
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
[perror][6](msg);
|
||||
[exit][7](EXIT_FAILURE);
|
||||
}
|
||||
|
||||
int main() {
|
||||
key_t key = ftok(PathName, ProjectId); /* key to identify the queue */
|
||||
if (key < 0) report_and_exit("key not gotten...");
|
||||
|
||||
int qid = msgget(key, 0666 | IPC_CREAT); /* access if created already */
|
||||
if (qid < 0) report_and_exit("no access to queue...");
|
||||
|
||||
int types[] = {3, 1, 2, 1, 3, 2}; /* different than in sender */
|
||||
int i;
|
||||
for (i = 0; i < MsgCount; i++) {
|
||||
queuedMessage msg; /* defined in queue.h */
|
||||
if (msgrcv(qid, &msg, sizeof(msg.payload), types[i], MSG_NOERROR | IPC_NOWAIT) < 0)
|
||||
[puts][12]("msgrcv trouble...");
|
||||
[printf][10]("%s received as type %i\n", msg.payload, (int) msg.type);
|
||||
}
|
||||
|
||||
/** remove the queue **/
|
||||
if (msgctl(qid, IPC_RMID, NULL) < 0) /* NULL = 'no flags' */
|
||||
report_and_exit("trouble removing queue...");
|
||||
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
The _receiver_ program does not create the message queue, although the API suggests as much. In the _receiver_ , the call:
|
||||
|
||||
|
||||
```
|
||||
`int qid = msgget(key, 0666 | IPC_CREAT);`
|
||||
```
|
||||
|
||||
is misleading because of the **IPC_CREAT** flag, but this flag really means _create if needed, otherwise access_. The _sender_ program calls **msgsnd** to send messages, whereas the _receiver_ calls **msgrcv** to retrieve them. In this example, the _sender_ sends the messages in the order 1-1-2-2-3-3, but the _receiver_ then retrieves them in the order 3-1-2-1-3-2, showing that message queues are not bound to strict FIFO behavior:
|
||||
|
||||
|
||||
```
|
||||
% ./sender
|
||||
msg1 sent as type 1
|
||||
msg2 sent as type 1
|
||||
msg3 sent as type 2
|
||||
msg4 sent as type 2
|
||||
msg5 sent as type 3
|
||||
msg6 sent as type 3
|
||||
|
||||
% ./receiver
|
||||
msg5 received as type 3
|
||||
msg1 received as type 1
|
||||
msg3 received as type 2
|
||||
msg2 received as type 1
|
||||
msg6 received as type 3
|
||||
msg4 received as type 2
|
||||
```
|
||||
|
||||
The output above shows that the _sender_ and the _receiver_ can be launched from the same terminal. The output also shows that the message queue persists even after the _sender_ process creates the queue, writes to it, and exits. The queue goes away only after the _receiver_ process explicitly removes it with the call to **msgctl** :
|
||||
|
||||
|
||||
```
|
||||
`if (msgctl(qid, IPC_RMID, NULL) < 0) /* remove queue */`
|
||||
```
|
||||
|
||||
### Wrapping up
|
||||
|
||||
The pipes and message queue APIs are fundamentally _unidirectional_ : one process writes and another reads. There are implementations of bidirectional named pipes, but my two cents is that this IPC mechanism is at its best when it is simplest. As noted earlier, message queues have fallen in popularity—but without good reason; these queues are yet another tool in the IPC toolbox. Part 3 completes this quick tour of the IPC toolbox with code examples of IPC through sockets and signals.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/interprocess-communication-linux-channels
|
||||
|
||||
作者:[Marty Kalin][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/mkalindepauledu
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
|
||||
[2]: https://en.wikipedia.org/wiki/Inter-process_communication
|
||||
[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
|
||||
[4]: http://man7.org/linux/man-pages/man2/mq_open.2.html
|
||||
[5]: http://man7.org/linux/man-pages/man2/mq_open.2.html#ATTRIBUTES
|
||||
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
|
||||
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
|
||||
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
|
||||
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/rand.html
|
||||
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
|
||||
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
|
||||
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
|
@ -1,294 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Building scalable social media sentiment analysis services in Python)
|
||||
[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable)
|
||||
[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)
|
||||
|
||||
Building scalable social media sentiment analysis services in Python
|
||||
======
|
||||
Learn how you can use spaCy, vaderSentiment, Flask, and Python to add
|
||||
sentiment analysis capabilities to your work.
|
||||
![Tall building with windows][1]
|
||||
|
||||
The [first part][2] of this series provided some background on how sentiment analysis works. Now let's investigate how to add these capabilities to your designs.
|
||||
|
||||
### Exploring spaCy and vaderSentiment in Python
|
||||
|
||||
#### Prerequisites
|
||||
|
||||
* A terminal shell
|
||||
* Python language binaries (version 3.4+) in your shell
|
||||
* The **pip** command for installing Python packages
|
||||
* (optional) A [Python Virtualenv][3] to keep your work isolated from the system
|
||||
|
||||
|
||||
|
||||
#### Configure your environment
|
||||
|
||||
Before you begin writing code, you will need to set up the Python environment by installing the [spaCy][4] and [vaderSentiment][5] packages and downloading a language model to assist your analysis. Thankfully, most of this is relatively easy to do from the command line.
|
||||
|
||||
In your shell, type the following command to install the spaCy and vaderSentiment packages:
|
||||
|
||||
|
||||
```
|
||||
`pip install spacy vaderSentiment`
|
||||
```
|
||||
|
||||
After the command completes, install a language model that spaCy can use for text analysis. The following command will use the spaCy module to download and install the English language [model][6]:
|
||||
|
||||
|
||||
```
|
||||
`python -m spacy download en_core_web_sm`
|
||||
```
|
||||
|
||||
With these libraries and models installed, you are now ready to begin coding.
|
||||
|
||||
#### Do a simple text analysis
|
||||
|
||||
Use the [Python interpreter interactive mode][7] to write some code that will analyze a single text fragment. Begin by starting the Python environment:
|
||||
|
||||
|
||||
```
|
||||
$ python
|
||||
Python 3.6.8 (default, Jan 31 2019, 09:38:34)
|
||||
[GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] on linux
|
||||
Type "help", "copyright", "credits" or "license" for more information.
|
||||
>>>
|
||||
```
|
||||
|
||||
_(Your Python interpreter version banner might look different from this.)_
|
||||
|
||||
1. Import the necessary modules: [code] >>> import spacy
|
||||
>>> from vaderSentiment import vaderSentiment
|
||||
```
|
||||
2. Load the English language model from spaCy: [code]`>>> english = spacy.load("en_core_web_sm")`
|
||||
```
|
||||
3. Process a piece of text. This example shows a very simple sentence that we expect to return a slightly positive sentiment: [code]`>>> result = english("I like to eat applesauce with sugar and cinnamon.")`
|
||||
```
|
||||
4. Gather the sentences from the processed result. SpaCy has identified and processed the entities within the phrase; this step gathers the sentences so that sentiment can be generated for each one (even though there is only one sentence in this example): [code]`>>> sentences = [str(s) for s in result.sents]`
|
||||
```
|
||||
5. Create an analyzer using vaderSentiment: [code]`>>> analyzer = vaderSentiment.SentimentIntensityAnalyzer()`
|
||||
```
|
||||
6. Perform the sentiment analysis on the sentences: [code]`>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]`
|
||||
```
|
||||
|
||||
|
||||
|
||||
The sentiment variable now contains the polarity scores for the example sentence. Print out the value to see how it analyzed the sentence.
|
||||
|
||||
|
||||
```
|
||||
>>> print(sentiment)
|
||||
[{'neg': 0.0, 'neu': 0.737, 'pos': 0.263, 'compound': 0.3612}]
|
||||
```
|
||||
|
||||
What does this structure mean?
|
||||
|
||||
On the surface, this is an array with a single dictionary object; had there been multiple sentences, there would be a dictionary for each one. There are four keys in the dictionary that correspond to different types of sentiment. The **neg** key represents negative sentiment, of which none has been reported in this text, as evidenced by the **0.0** value. The **neu** key represents neutral sentiment, which received a fairly high score of **0.737** (with a maximum of **1.0** ). The **pos** key represents positive sentiment, which has a moderate score of **0.263**. Last, the **compound** key represents an overall score for the text; this can range from negative to positive values, with the value **0.3612** representing a sentiment more on the positive side.
|
||||
|
||||
To see how these values might change, you can run a small experiment using the code you already entered. The following block demonstrates an evaluation of sentiment scores on a similar sentence.
|
||||
|
||||
|
||||
```
|
||||
>>> result = english("I love applesauce!")
|
||||
>>> sentences = [str(s) for s in result.sents]
|
||||
>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
|
||||
>>> print(sentiment)
|
||||
[{'neg': 0.0, 'neu': 0.182, 'pos': 0.818, 'compound': 0.6696}]
|
||||
```
|
||||
|
||||
You can see that by changing the example sentence to something overwhelmingly positive, the sentiment values have changed dramatically.
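
You can also probe the other end of the scale. The sentence below is my own example rather than one from the original run, so I won't guess at exact numbers, but an overtly negative sentence should yield a nonzero **neg** score and a negative **compound** value:

```
>>> result = english("I hate waiting in long lines.")
>>> sentences = [str(s) for s in result.sents]
>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
>>> print(sentiment)
```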
|
||||
|
||||
### Building a sentiment analysis service
|
||||
|
||||
Now that you have assembled the basic building blocks for doing sentiment analysis, let's turn that knowledge into a simple service.
|
||||
|
||||
For this demonstration, you will create a [RESTful][8] HTTP server using the Python [Flask package][9]. This service will accept text data in English and return the sentiment analysis. Please note that this example service is for learning the technologies involved and not something to put into production.
|
||||
|
||||
#### Prerequisites
|
||||
|
||||
* A terminal shell
|
||||
* The Python language binaries (version 3.4+) in your shell.
|
||||
* The **pip** command for installing Python packages
|
||||
* The **curl** command
|
||||
* A text editor
|
||||
* (optional) A [Python Virtualenv][3] to keep your work isolated from the system
|
||||
|
||||
|
||||
|
||||
#### Configure your environment
|
||||
|
||||
This environment is nearly identical to the one in the previous section. The only difference is the addition of the Flask package to Python.
|
||||
|
||||
1. Install the necessary dependencies: [code]`pip install spacy vaderSentiment flask`
|
||||
```
|
||||
2. Install the English language model for spaCy: [code]`python -m spacy download en_core_web_sm`
|
||||
```
|
||||
|
||||
|
||||
|
||||
#### Create the application file
|
||||
|
||||
Open your editor and create a file named **app.py**. Add the following contents to it _(don't worry, we will review every line)_:
|
||||
|
||||
|
||||
```
|
||||
import flask
|
||||
import spacy
|
||||
import vaderSentiment.vaderSentiment as vader
|
||||
|
||||
app = flask.Flask(__name__)
|
||||
analyzer = vader.SentimentIntensityAnalyzer()
|
||||
english = spacy.load("en_core_web_sm")
|
||||
|
||||
def get_sentiments(text):
|
||||
result = english(text)
|
||||
sentences = [str(sent) for sent in result.sents]
|
||||
sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
|
||||
return sentiments
|
||||
|
||||
@app.route("/", methods=["POST", "GET"])
|
||||
def index():
|
||||
if flask.request.method == "GET":
|
||||
return "To access this service send a POST request to this URL with" \
|
||||
" the text you want analyzed in the body."
|
||||
body = flask.request.data.decode("utf-8")
|
||||
sentiments = get_sentiments(body)
|
||||
return flask.json.dumps(sentiments)
|
||||
```
|
||||
|
||||
Although this is not an overly large source file, it is quite dense. Let's walk through the pieces of this application and describe what they are doing.
|
||||
|
||||
|
||||
```
|
||||
import flask
|
||||
import spacy
|
||||
import vaderSentiment.vaderSentiment as vader
|
||||
```
|
||||
|
||||
The first three lines bring in the packages needed for performing the language analysis and the HTTP framework.
|
||||
|
||||
|
||||
```
|
||||
app = flask.Flask(__name__)
|
||||
analyzer = vader.SentimentIntensityAnalyzer()
|
||||
english = spacy.load("en_core_web_sm")
|
||||
```
|
||||
|
||||
The next three lines create a few global variables. The first variable, **app** , is the main entry point that Flask uses for creating HTTP routes. The second variable, **analyzer** , is the same type used in the previous example, and it will be used to generate the sentiment scores. The last variable, **english** , is also the same type used in the previous example, and it will be used to annotate and tokenize the initial text input.
|
||||
|
||||
You might be wondering why these variables have been declared globally. In the case of the **app** variable, this is standard procedure for many Flask applications. But, in the case of the **analyzer** and **english** variables, the decision to make them global is based on the load times associated with the classes involved. Although the load times might appear minor, re-creating these objects for every request in the context of an HTTP server would negatively impact performance.
|
||||
|
||||
|
||||
```
|
||||
def get_sentiments(text):
|
||||
result = english(text)
|
||||
sentences = [str(sent) for sent in result.sents]
|
||||
sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
|
||||
return sentiments
|
||||
```
|
||||
|
||||
The next piece is the heart of the service: a function for generating sentiment values from a string of text. You can see that the operations in this function correspond to the commands you ran in the Python interpreter earlier. Here they're wrapped in a function definition, with the source text passed in as the **text** parameter and the **sentiments** variable returned to the caller.
|
||||
|
||||
|
||||
```
|
||||
@app.route("/", methods=["POST", "GET"])
|
||||
def index():
|
||||
if flask.request.method == "GET":
|
||||
return "To access this service send a POST request to this URL with" \
|
||||
" the text you want analyzed in the body."
|
||||
body = flask.request.data.decode("utf-8")
|
||||
sentiments = get_sentiments(body)
|
||||
return flask.json.dumps(sentiments)
|
||||
```
|
||||
|
||||
The last function in the source file contains the logic that will instruct Flask how to configure the HTTP server for the service. It starts with a line that will associate an HTTP route **/** with the request methods **POST** and **GET**.
|
||||
|
||||
After the function definition line, the **if** clause will detect whether the request method is **GET**. If a user sends this request to the service, the following line will return a text message instructing how to access the server. This is largely included as a convenience to end users.
|
||||
|
||||
The next line uses the **flask.request** object to acquire the body of the request, which should contain the text string to be processed. The **decode** function will convert the array of bytes into a usable, formatted string. The decoded text message is now passed to the **get_sentiments** function to generate the sentiment scores. Last, the scores are returned to the user through the HTTP framework.
|
||||
|
||||
You should now save the file, if you have not done so already, and return to the shell.
|
||||
|
||||
#### Run the sentiment service
|
||||
|
||||
With everything in place, running the service is quite simple with Flask's built-in debugging server. To start the service, enter the following command from the same directory as your source file:
|
||||
|
||||
|
||||
```
|
||||
`FLASK_APP=app.py flask run`
|
||||
```
|
||||
|
||||
You will now see some output from the server in your shell, and the server will be running. To test that the server is running, you will need to open a second shell and use the **curl** command.
|
||||
|
||||
First, check to see that the instruction message is printed by entering this command:
|
||||
|
||||
|
||||
```
|
||||
`curl http://localhost:5000`
|
||||
```
|
||||
|
||||
You should see the instruction message:
|
||||
|
||||
|
||||
```
|
||||
`To access this service send a POST request to this URL with the text you want analyzed in the body.`
|
||||
```
|
||||
|
||||
Next, send a test message to see the sentiment analysis by running the following command:
|
||||
|
||||
|
||||
```
|
||||
`curl http://localhost:5000 --header "Content-Type: application/json" --data "I love applesauce!"`
|
||||
```
|
||||
|
||||
The response you get from the server should be similar to the following:
|
||||
|
||||
|
||||
```
|
||||
`[{"compound": 0.6696, "neg": 0.0, "neu": 0.182, "pos": 0.818}]`
|
||||
```
|
||||
|
||||
Congratulations! You have now implemented a RESTful HTTP sentiment analysis service. You can find a link to a [reference implementation of this service and all the code from this article on GitHub][10].
|
||||
|
||||
### Continue exploring
|
||||
|
||||
Now that you have an understanding of the principles and mechanics behind natural language processing and sentiment analysis, here are some ways to further your discovery of this topic.
|
||||
|
||||
#### Create a streaming sentiment analyzer on OpenShift
|
||||
|
||||
While creating local applications to explore sentiment analysis is a convenient first step, having the ability to deploy your applications for wider usage is a powerful next step. By following the instructions and code in this [workshop from Radanalytics.io][11], you will learn how to create a sentiment analyzer that can be containerized and deployed to a Kubernetes platform. You will also see how Apache Kafka is used as a framework for event-driven messaging and how Apache Spark can be used as a distributed computing platform for sentiment analysis.
|
||||
|
||||
#### Discover live data with the Twitter API
|
||||
|
||||
Although the [Radanalytics.io][12] lab generated synthetic tweets to stream, you are not limited to synthetic data. In fact, anyone with a Twitter account can access the Twitter streaming API and perform sentiment analysis on tweets with the [Tweepy Python][13] package.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable
|
||||
|
||||
作者:[Michael McCune ][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/elmiko/users/jschlessman
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows)
|
||||
[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-1
|
||||
[3]: https://virtualenv.pypa.io/en/stable/
|
||||
[4]: https://pypi.org/project/spacy/
|
||||
[5]: https://pypi.org/project/vaderSentiment/
|
||||
[6]: https://spacy.io/models
|
||||
[7]: https://docs.python.org/3.6/tutorial/interpreter.html
|
||||
[8]: https://en.wikipedia.org/wiki/Representational_state_transfer
|
||||
[9]: http://flask.pocoo.org/
|
||||
[10]: https://github.com/elmiko/social-moments-service
|
||||
[11]: https://github.com/radanalyticsio/streaming-lab
|
||||
[12]: http://Radanalytics.io
|
||||
[13]: https://github.com/tweepy/tweepy
|
@ -1,5 +1,5 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: translator: (MjSeven)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
|
@ -1,341 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?)
|
||||
[#]: via: (https://www.2daygeek.com/monitor-disk-io-activity-using-iotop-iostat-command-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?
|
||||
======
|
||||
|
||||
Do you know what tools we can use for troubleshooting or monitoring real-time disk activity in Linux?
|
||||
|
||||
If **[Linux system performance][1]** slows down, we can use the **[top command][2]** to check system performance.
|
||||
|
||||
It is used to check which processes are consuming high resources on the server.
|
||||
|
||||
It’s a familiar tool that most Linux administrators use in the real world.
|
||||
|
||||
|
||||
|
||||
If you don’t see much difference in the process output, you still have the option to check other things.
|
||||
|
||||
I would advise you to check the `wa` (I/O wait) status in the top output, because most of the time server performance is degraded by heavy read and write I/O on the hard disk.
|
||||
|
||||
If it’s high or fluctuating, that could be the cause, so we need to check the I/O activity on the hard drive.
|
||||
|
||||
We can monitor disk I/O statistics for all disks and file systems on a Linux system using the `iotop` and `iostat` commands.
|
||||
|
||||
### What Is iotop?
|
||||
|
||||
iotop is a top-like utility for displaying real-time disk activity.
|
||||
|
||||
iotop watches I/O usage information output by the Linux kernel and displays a table of current I/O usage by processes or threads on the system.
|
||||
|
||||
It displays the I/O bandwidth read and written by each process/thread. It also displays the percentage of time the thread/process spent while swapping in and while waiting on I/O.
|
||||
|
||||
Total DISK READ and Total DISK WRITE values represent total read and write bandwidth between processes and kernel threads on the one side and kernel block device subsystem on the other.
|
||||
|
||||
Actual DISK READ and Actual DISK WRITE values represent corresponding bandwidths for actual disk I/O between kernel block device subsystem and underlying hardware (HDD, SSD, etc.).
|
||||
|
||||
### How To Install iotop In Linux?
|
||||
|
||||
We can easily install it with the help of a package manager, since the package is available in every Linux distribution’s repository.
|
||||
|
||||
For **`Fedora`** system, use **[DNF Command][3]** to install iotop.
|
||||
|
||||
```
|
||||
$ sudo dnf install iotop
|
||||
```
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][4]** or **[APT Command][5]** to install iotop.
|
||||
|
||||
```
|
||||
$ sudo apt install iotop
|
||||
```
|
||||
|
||||
For **`Arch Linux`** based systems, use **[Pacman Command][6]** to install iotop.
|
||||
|
||||
```
|
||||
$ sudo pacman -S iotop
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** systems, use **[YUM Command][7]** to install iotop.
|
||||
|
||||
```
|
||||
$ sudo yum install iotop
|
||||
```
|
||||
|
||||
For **`openSUSE Leap`** system, use **[Zypper Command][8]** to install iotop.
|
||||
|
||||
```
|
||||
$ sudo zypper install iotop
|
||||
```
|
||||
|
||||
### How To Monitor Disk I/O Activity/Statistics In Linux Using iotop Command?
|
||||
|
||||
Many options are available in the iotop command to check various statistics about disk I/O.
|
||||
|
||||
Run the iotop command without any arguments to see each process’s or thread’s current I/O usage.
|
||||
|
||||
```
|
||||
# iotop
|
||||
```
|
||||
|
||||
[![][9]![][9]][10]
|
||||
|
||||
If you would like to check which processes are actually doing I/O, run the iotop command with the `-o` or `--only` option.
|
||||
|
||||
```
|
||||
# iotop --only
|
||||
```
|
||||
|
||||
[![][9]![][9]][11]
|
||||
|
||||
**Details:**
|
||||
|
||||
* **`IO:`** It shows I/O utilization for each process, which includes disk and swap.
|
||||
* **`SWAPIN:`** It shows only the swap usage of each process.
|
||||
|
||||
|
||||
|
||||
### What Is iostat?
|
||||
|
||||
iostat is used to report Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions.
|
||||
|
||||
The iostat command is used for monitoring system input/output device loading by observing the time the devices are active in relation to their average transfer rates.
|
||||
|
||||
The iostat command generates reports that can be used to change system configuration to better balance the input/output load between physical disks.
|
||||
|
||||
All statistics are reported each time the iostat command is run. The report consists of a CPU header row followed by a row of CPU statistics.
|
||||
|
||||
On multiprocessor systems, CPU statistics are calculated system-wide as averages among all processors. A device header row is displayed followed by a line of statistics for each device that is configured.
|
||||
|
||||
The iostat command generates two types of reports, the CPU Utilization report and the Device Utilization report.
|
||||
|
||||
### How To Install iostat In Linux?
|
||||
|
||||
The iostat tool is part of the sysstat package, so we can easily install it with the help of a package manager, since the package is available in every Linux distribution’s repository.
|
||||
|
||||
For **`Fedora`** system, use **[DNF Command][3]** to install sysstat.
|
||||
|
||||
```
|
||||
$ sudo dnf install sysstat
|
||||
```
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][4]** or **[APT Command][5]** to install sysstat.
|
||||
|
||||
```
|
||||
$ sudo apt install sysstat
|
||||
```
|
||||
|
||||
For **`Arch Linux`** based systems, use **[Pacman Command][6]** to install sysstat.
|
||||
|
||||
```
|
||||
$ sudo pacman -S sysstat
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** systems, use **[YUM Command][7]** to install sysstat.
|
||||
|
||||
```
|
||||
$ sudo yum install sysstat
|
||||
```
|
||||
|
||||
For **`openSUSE Leap`** system, use **[Zypper Command][8]** to install sysstat.
|
||||
|
||||
```
|
||||
$ sudo zypper install sysstat
|
||||
```
|
||||
|
||||
### How To Monitor Disk I/O Activity/Statistics In Linux Using iostat Command?
|
||||
|
||||
Many options are available in the iostat command to check various statistics about disk I/O and CPU.
|
||||
|
||||
Run the iostat command without any arguments to see the complete statistics of the system.
|
||||
|
||||
```
|
||||
# iostat
|
||||
|
||||
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
|
||||
|
||||
avg-cpu: %user %nice %system %iowait %steal %idle
|
||||
29.45 0.02 16.47 0.12 0.00 53.94
|
||||
|
||||
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
|
||||
nvme0n1 6.68 126.95 124.97 0.00 58420014 57507206 0
|
||||
sda 0.18 6.77 80.24 0.00 3115036 36924764 0
|
||||
loop0 0.00 0.00 0.00 0.00 2160 0 0
|
||||
loop1 0.00 0.00 0.00 0.00 1093 0 0
|
||||
loop2 0.00 0.00 0.00 0.00 1077 0 0
|
||||
```
|
||||
|
||||
Run the iostat command with the `-d` option to see I/O statistics for all the devices.
|
||||
|
||||
```
|
||||
# iostat -d
|
||||
|
||||
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
|
||||
|
||||
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
|
||||
nvme0n1 6.68 126.95 124.97 0.00 58420030 57509090 0
|
||||
sda 0.18 6.77 80.24 0.00 3115292 36924764 0
|
||||
loop0 0.00 0.00 0.00 0.00 2160 0 0
|
||||
loop1 0.00 0.00 0.00 0.00 1093 0 0
|
||||
loop2 0.00 0.00 0.00 0.00 1077 0 0
|
||||
```
|
||||
|
||||
Run the iostat command with the `-p` option to see I/O statistics for all the devices and their partitions.
|
||||
|
||||
```
|
||||
# iostat -p
|
||||
|
||||
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
|
||||
|
||||
avg-cpu: %user %nice %system %iowait %steal %idle
|
||||
29.42 0.02 16.45 0.12 0.00 53.99
|
||||
|
||||
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
|
||||
nvme0n1 6.68 126.94 124.96 0.00 58420062 57512278 0
|
||||
nvme0n1p1 6.40 124.46 118.36 0.00 57279753 54474898 0
|
||||
nvme0n1p2 0.27 2.47 6.60 0.00 1138069 3037380 0
|
||||
sda 0.18 6.77 80.23 0.00 3116060 36924764 0
|
||||
sda1 0.00 0.01 0.00 0.00 3224 0 0
|
||||
sda2 0.18 6.76 80.23 0.00 3111508 36924764 0
|
||||
loop0 0.00 0.00 0.00 0.00 2160 0 0
|
||||
loop1 0.00 0.00 0.00 0.00 1093 0 0
|
||||
loop2 0.00 0.00 0.00 0.00 1077 0 0
|
||||
```
|
||||
|
||||
Run the iostat command with the `-x` option to see detailed I/O statistics for all the devices.
|
||||
|
||||
```
|
||||
# iostat -x
|
||||
|
||||
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
|
||||
|
||||
avg-cpu: %user %nice %system %iowait %steal %idle
|
||||
29.41 0.02 16.45 0.12 0.00 54.00
|
||||
|
||||
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz aqu-sz %util
|
||||
nvme0n1 2.45 126.93 0.60 19.74 0.40 51.74 4.23 124.96 5.12 54.76 3.16 29.54 0.00 0.00 0.00 0.00 0.00 0.00 0.31 30.28
|
||||
sda 0.06 6.77 0.00 0.00 8.34 119.20 0.12 80.23 19.94 99.40 31.84 670.73 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.13
|
||||
loop0 0.00 0.00 0.00 0.00 0.08 19.64 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
|
||||
loop1 0.00 0.00 0.00 0.00 0.40 12.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
|
||||
loop2 0.00 0.00 0.00 0.00 0.38 19.58 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
|
||||
```
|
||||
|
||||
Run the iostat command with the `-p [Device_Name]` option to see I/O statistics for a particular device and its partitions.
|
||||
|
||||
```
|
||||
# iostat -p [Device_Name]
|
||||
|
||||
# iostat -p sda
|
||||
|
||||
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
|
||||
|
||||
avg-cpu: %user %nice %system %iowait %steal %idle
|
||||
29.38 0.02 16.43 0.12 0.00 54.05
|
||||
|
||||
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
|
||||
sda 0.18 6.77 80.21 0.00 3117468 36924764 0
|
||||
sda2 0.18 6.76 80.21 0.00 3112916 36924764 0
|
||||
sda1 0.00 0.01 0.00 0.00 3224 0 0
|
||||
```
|
||||
|
||||
Run the iostat command with the `-m` option to see I/O statistics in `MB` instead of `KB` for all the devices. By default, it shows the output in KB.
|
||||
|
||||
```
|
||||
# iostat -m
|
||||
|
||||
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
|
||||
|
||||
avg-cpu: %user %nice %system %iowait %steal %idle
|
||||
29.36 0.02 16.41 0.12 0.00 54.09
|
||||
|
||||
Device tps MB_read/s MB_wrtn/s MB_dscd/s MB_read MB_wrtn MB_dscd
|
||||
nvme0n1 6.68 0.12 0.12 0.00 57050 56176 0
|
||||
sda 0.18 0.01 0.08 0.00 3045 36059 0
|
||||
loop0 0.00 0.00 0.00 0.00 2 0 0
|
||||
loop1 0.00 0.00 0.00 0.00 1 0 0
|
||||
loop2 0.00 0.00 0.00 0.00 1 0 0
|
||||
```
|
||||
|
||||
To run the iostat command at a certain interval, use the following format. In this example, we are going to capture a total of two reports at five-second intervals.
|
||||
|
||||
```
|
||||
# iostat [Interval] [Number Of Reports]
|
||||
|
||||
# iostat 5 2
|
||||
|
||||
Linux 4.19.32-1-MANJARO (daygeek-Y700) Thursday 18 April 2019 _x86_64_ (8 CPU)
|
||||
|
||||
avg-cpu: %user %nice %system %iowait %steal %idle
|
||||
29.35 0.02 16.41 0.12 0.00 54.10
|
||||
|
||||
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
|
||||
nvme0n1 6.68 126.89 124.95 0.00 58420116 57525344 0
|
||||
sda 0.18 6.77 80.20 0.00 3118492 36924764 0
|
||||
loop0 0.00 0.00 0.00 0.00 2160 0 0
|
||||
loop1 0.00 0.00 0.00 0.00 1093 0 0
|
||||
loop2 0.00 0.00 0.00 0.00 1077 0 0
|
||||
|
||||
avg-cpu: %user %nice %system %iowait %steal %idle
|
||||
3.71 0.00 2.51 0.05 0.00 93.73
|
||||
|
||||
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
|
||||
nvme0n1 19.00 0.20 311.40 0.00 1 1557 0
|
||||
sda 0.20 25.60 0.00 0.00 128 0 0
|
||||
loop0 0.00 0.00 0.00 0.00 0 0 0
|
||||
loop1 0.00 0.00 0.00 0.00 0 0 0
|
||||
loop2 0.00 0.00 0.00 0.00 0 0 0
|
||||
```
|
||||
|
||||
Run the iostat command with the `-N` option to see the LVM disk I/O statistics report.
|
||||
|
||||
```
|
||||
# iostat -N
|
||||
|
||||
Linux 4.15.0-47-generic (Ubuntu18.2daygeek.com) Thursday 18 April 2019 _x86_64_ (2 CPU)
|
||||
|
||||
avg-cpu: %user %nice %system %iowait %steal %idle
|
||||
0.38 0.07 0.18 0.26 0.00 99.12
|
||||
|
||||
Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
|
||||
sda 3.60 57.07 69.06 968729 1172340
|
||||
sdb 0.02 0.33 0.00 5680 0
|
||||
sdc 0.01 0.12 0.00 2108 0
|
||||
2g-2gvol1 0.00 0.07 0.00 1204 0
|
||||
```
|
||||
|
||||
Run the nfsiostat command to see the I/O statistics for the Network File System (NFS).
|
||||
|
||||
```
|
||||
# nfsiostat
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/monitor-disk-io-activity-using-iotop-iostat-command-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/category/monitoring-tools/
|
||||
[2]: https://www.2daygeek.com/linux-top-command-linux-system-performance-monitoring-tool/
|
||||
[3]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[5]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[6]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[7]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[8]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
||||
[9]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
|
||||
[10]: https://www.2daygeek.com/wp-content/uploads/2015/03/monitor-disk-io-activity-using-iotop-iostat-command-in-linux-1.jpg
|
||||
[11]: https://www.2daygeek.com/wp-content/uploads/2015/03/monitor-disk-io-activity-using-iotop-iostat-command-in-linux-2.jpg
|
@ -0,0 +1,135 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Epic Games Store is Now Available on Linux Thanks to Lutris)
|
||||
[#]: via: (https://itsfoss.com/epic-games-lutris-linux/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
Epic Games Store is Now Available on Linux Thanks to Lutris
|
||||
======
|
||||
|
||||
_**Brief: Open Source gaming platform Lutris now enables you to use Epic Games Store on Linux. We tried it on Ubuntu 19.04 and here’s our experience with it.**_
|
||||
|
||||
[Gaming on Linux][1] just keeps getting better. Want to [play Windows games on Linux][2]? Steam’s new [in-progress feature][3] enables you to do that.
|
||||
|
||||
Steam might be new in the field of Windows games on Linux, but Lutris has been doing it for years.
|
||||
|
||||
[Lutris][4] is an open source gaming platform for Linux that provides installers for game clients like Origin, Steam, the Blizzard.net app, and so on. It utilizes Wine to run software that isn’t natively supported on Linux.
|
||||
|
||||
Lutris recently announced that you can now use the Epic Games Store via Lutris.
|
||||
|
||||
### Lutris brings Epic Games to Linux
|
||||
|
||||
![Epic Games Store Lutris Linux][5]
|
||||
|
||||
[Epic Games Store][6] is a digital video game distribution platform like Steam. It only supports Windows and macOS for the moment.
|
||||
|
||||
The Lutris team worked hard to bring Epic Games Store to Linux via Lutris. Even though I’m not a big fan of Epic Games Store, it was good to know about the support for Linux via Lutris:
|
||||
|
||||
> Good news! [@EpicGames][7] Store is now fully functional under Linux if you use Lutris to install it! No issues observed whatsoever. <https://t.co/cYmd7PcYdG>[@TimSweeneyEpic][8] will probably like this 😊 [pic.twitter.com/7mt9fXt7TH][9]
|
||||
>
|
||||
> — Lutris Gaming (@LutrisGaming) [April 17, 2019][10]
|
||||
|
||||
As an avid gamer and Linux user, I immediately jumped upon this news and installed Lutris to run Epic Games on it.
|
||||
|
||||
**Note:** _I used [Ubuntu 19.04][11] to test the Epic Games Store for Linux._
|
||||
|
||||
### Using Epic Games Store for Linux using Lutris
|
||||
|
||||
To install Epic Games Store on your Linux system, make sure that you have [Lutris][4] installed with its pre-requisites Wine and Python 3. So, first [install Wine on Ubuntu][12] or whichever Linux you are using and then [download Lutris from its website][13].
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
#### Installing Epic Games Store
|
||||
|
||||
Once the installation of Lutris is successful, simply launch it.
|
||||
|
||||
When I tried this, I encountered an error (nothing happened when I tried to launch it using the GUI). However, when I typed **lutris** in the terminal to launch it, I noticed an error that looked like this:
|
||||
|
||||
![][15]
|
||||
|
||||
Thanks to Abhishek, I learned that this is a common issue (you can check that on [GitHub][16]).
|
||||
|
||||
So, to fix it, all I had to do was type a command in the terminal:
|
||||
|
||||
```
|
||||
export LC_ALL=C
|
||||
```
|
||||
|
||||
Just copy it and enter it in your terminal if you face the same issue, and then you will be able to open Lutris.
|
||||
|
||||
**Note:** _You’ll have to enter this command every time you launch Lutris, so it’s better to add it to your .bashrc or your list of environment variables._
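
For example, assuming a Bash shell, one way to make the setting persistent (my sketch, not from the original instructions) is:

```
echo 'export LC_ALL=C' >> ~/.bashrc   # append to the shell startup file
source ~/.bashrc                      # reload it for the current session
```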
|
||||
|
||||
Once that is done, simply launch it and search for “ **Epic Games Store** ” as shown in the image below:
|
||||
|
||||
![Epic Games Store in Lutris][17]
|
||||
|
||||
Here, I have it installed already; if you don’t, you will get the option to “Install” it, and then it will automatically ask you to install the required packages it needs. You just have to proceed in order to successfully install it. That’s it – no rocket science involved.
|
||||
|
||||
#### Playing a Game on Epic Games Store
|
||||
|
||||
![Epic Games Store][18]
|
||||
|
||||
Now that we have the Epic Games Store via Lutris on Linux, simply launch it and log in to your account to get started.
|
||||
|
||||
But, does it really work?
|
||||
|
||||
_Yes, the Epic Games Store does work._ **But not all the games do.**
|
||||
|
||||
Well, I haven’t tried everything, but I grabbed a free game (Transistor – a turn-based ARPG game) to check if that works.
|
||||
|
||||
![Transistor – Epic Games Store][19]
|
||||
|
||||
Unfortunately, it didn’t. It says it is “Running” when I launch it, but then nothing actually happens.
|
||||
|
||||
As of now, I’m not aware of any solution to that, so I’ll try to keep you updated if I find a fix.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
**Wrapping Up**
|
||||
|
||||
It’s good to see the gaming scene improve on Linux, thanks to solutions like Lutris. However, there’s still a lot of work to be done.
|
||||
|
||||
Getting a game to run hassle-free on Linux is still a challenge. There can be issues like the one I encountered, or similar ones. But it’s going in the right direction – even if it has issues.
|
||||
|
||||
What do you think of Epic Games Store on Linux via Lutris? Have you tried it yet? Let us know your thoughts in the comments below.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://itsfoss.com/epic-games-lutris-linux/
|
||||
|
||||
作者:[Ankush Das][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://itsfoss.com/author/ankush/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://itsfoss.com/linux-gaming-guide/
|
||||
[2]: https://itsfoss.com/steam-play/
|
||||
[3]: https://itsfoss.com/steam-play-proton/
|
||||
[4]: https://lutris.net/
|
||||
[5]: https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-lutris-linux-800x450.png
|
||||
[6]: https://www.epicgames.com/store/en-US/
|
||||
[7]: https://twitter.com/EpicGames?ref_src=twsrc%5Etfw
|
||||
[8]: https://twitter.com/TimSweeneyEpic?ref_src=twsrc%5Etfw
|
||||
[9]: https://t.co/7mt9fXt7TH
|
||||
[10]: https://twitter.com/LutrisGaming/status/1118552969816018948?ref_src=twsrc%5Etfw
|
||||
[11]: https://itsfoss.com/ubuntu-19-04-release-features/
|
||||
[12]: https://itsfoss.com/install-latest-wine/
|
||||
[13]: https://lutris.net/downloads/
|
||||
[14]: https://itsfoss.com/ubuntu-mate-entroware/
|
||||
[15]: https://itsfoss.com/wp-content/uploads/2019/04/lutris-error.jpg
|
||||
[16]: https://github.com/lutris/lutris/issues/660
|
||||
[17]: https://itsfoss.com/wp-content/uploads/2019/04/lutris-epic-games-store-800x520.jpg
|
||||
[18]: https://itsfoss.com/wp-content/uploads/2019/04/epic-games-store-800x450.jpg
|
||||
[19]: https://itsfoss.com/wp-content/uploads/2019/04/transistor-game-epic-games-store-800x410.jpg
|
||||
[20]: https://itsfoss.com/skpe-alpha-linux/
|
@ -0,0 +1,260 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to identify same-content files on Linux)
|
||||
[#]: via: (https://www.networkworld.com/article/3390204/how-to-identify-same-content-files-on-linux.html#tk.rss_all)
|
||||
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)
|
||||
|
||||
How to identify same-content files on Linux
|
||||
======
|
||||
Copies of files sometimes represent a big waste of disk space and can cause confusion if you want to make updates. Here are six commands to help you identify these files.
|
||||
![Vinoth Chandar \(CC BY 2.0\)][1]
|
||||
|
||||
In a recent post, we looked at [how to identify and locate files that are hard links][2] (i.e., that point to the same disk content and share inodes). In this post, we'll check out commands for finding files that have the same _content_ , but are not otherwise connected.
|
||||
|
||||
Hard links are helpful because they allow files to exist in multiple places in the file system while not taking up any additional disk space. Copies of files, on the other hand, sometimes represent a big waste of disk space and run some risk of causing some confusion if you want to make updates. In this post, we're going to look at multiple ways to identify these files.
|
||||
|
||||
**[ Two-Minute Linux Tips:[Learn how to master a host of Linux commands in these 2-minute video tutorials][3] ]**
|
||||
|
||||
### Comparing files with the diff command
|
||||
|
||||
Probably the easiest way to compare two files is to use the **diff** command. The output will show you the differences between the two files. The < and > signs indicate whether the extra lines are in the first (<) or second (>) file provided as arguments. In this example, the extra lines are in backup.html.
|
||||
|
||||
```
|
||||
$ diff index.html backup.html
|
||||
2438a2439,2441
|
||||
> <pre>
|
||||
> That's all there is to report.
|
||||
> </pre>
|
||||
```
|
||||
|
||||
If diff shows no output, that means the two files are the same.
|
||||
|
||||
```
|
||||
$ diff home.html index.html
|
||||
$
|
||||
```
|
||||
|
||||
The only drawbacks to diff are that it can only compare two files at a time, and you have to identify the files to compare. Some commands we will look at in this post can find the duplicate files for you.
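
One workaround (my sketch, not from the article) is to script the pairwise comparisons yourself, for example with **cmp -s**, which stays silent and simply exits with status 0 when two files match (index.html will, of course, report a match with itself):

```
$ for f in *.html; do
    cmp -s index.html "$f" && echo "$f matches index.html"
  done
```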
|
||||
|
||||
### Using checksums
|
||||
|
||||
The **cksum** (checksum) command computes checksums for files. Checksums are a mathematical reduction of the contents to a lengthy number (like 2819078353). While not absolutely unique, the chance that files that are not identical in content would result in the same checksum is extremely small.
|
||||
|
||||
```
|
||||
$ cksum *.html
|
||||
2819078353 228029 backup.html
|
||||
4073570409 227985 home.html
|
||||
4073570409 227985 index.html
|
||||
```
|
||||
|
||||
In the example above, you can see how the second and third files yield the same checksum and can be assumed to be identical.
|
||||
|
||||
### Using the find command
|
||||
|
||||
While the find command doesn't have an option for finding duplicate files, it can be used to search for files by name or type and to run the cksum command. For example:
|
||||
|
||||
```
|
||||
$ find . -name "*.html" -exec cksum {} \;
|
||||
4073570409 227985 ./home.html
|
||||
2819078353 228029 ./backup.html
|
||||
4073570409 227985 ./index.html
|
||||
```
|
||||
|
||||
### Using the fslint command
|
||||
|
||||
The **fslint** command can be used to specifically find duplicate files. Note that we give it a starting location. The command can take quite some time to complete if it needs to run through a large number of files. Here's output from a very modest search. Note how it lists the duplicate files and also looks for other issues, such as empty directories and bad IDs.
|
||||
|
||||
```
|
||||
$ fslint .
|
||||
-----------------------------------file name lint
|
||||
-------------------------------Invalid utf8 names
|
||||
-----------------------------------file case lint
|
||||
----------------------------------DUPlicate files <==
|
||||
home.html
|
||||
index.html
|
||||
-----------------------------------Dangling links
|
||||
--------------------redundant characters in links
|
||||
------------------------------------suspect links
|
||||
--------------------------------Empty Directories
|
||||
./.gnupg
|
||||
----------------------------------Temporary Files
|
||||
----------------------duplicate/conflicting Names
|
||||
------------------------------------------Bad ids
|
||||
-------------------------Non Stripped executables
|
||||
```
|
||||
|
||||
You may have to install **fslint** on your system. You will probably have to add it to your search path, as well:
|
||||
|
||||
```
|
||||
$ export PATH=$PATH:/usr/share/fslint/fslint
|
||||
```
|
||||
|
||||
### Using the rdfind command
|
||||
|
||||
The **rdfind** command will also look for duplicate (same content) files. The name stands for "redundant data find," and the command is able to determine, based on file dates, which files are the originals — which is helpful if you choose to delete the duplicates, as it will remove the newer files.
```
$ rdfind ~
Now scanning "/home/shark", found 12 files.
Now have 12 files in total.
Removed 1 files due to nonunique device and inode.
Total size is 699498 bytes or 683 KiB
Removed 9 files due to unique sizes from list.2 files left.
Now eliminating candidates based on first bytes:removed 0 files from list.2 files left.
Now eliminating candidates based on last bytes:removed 0 files from list.2 files left.
Now eliminating candidates based on sha1 checksum:removed 0 files from list.2 files left.
It seems like you have 2 files that are not unique
Totally, 223 KiB can be reduced.
Now making results file results.txt
```

You can also run this command in "dryrun" mode (i.e., only report the changes that might otherwise be made).

```
$ rdfind -dryrun true ~
(DRYRUN MODE) Now scanning "/home/shark", found 12 files.
(DRYRUN MODE) Now have 12 files in total.
(DRYRUN MODE) Removed 1 files due to nonunique device and inode.
(DRYRUN MODE) Total size is 699352 bytes or 683 KiB
Removed 9 files due to unique sizes from list.2 files left.
(DRYRUN MODE) Now eliminating candidates based on first bytes:removed 0 files from list.2 files left.
(DRYRUN MODE) Now eliminating candidates based on last bytes:removed 0 files from list.2 files left.
(DRYRUN MODE) Now eliminating candidates based on sha1 checksum:removed 0 files from list.2 files left.
(DRYRUN MODE) It seems like you have 2 files that are not unique
(DRYRUN MODE) Totally, 223 KiB can be reduced.
(DRYRUN MODE) Now making results file results.txt
```

The rdfind command also provides options for things such as ignoring empty files (-ignoreempty) and following symbolic links (-followsymlinks). Check out the man page for explanations.

```
-ignoreempty       ignore empty files
-minsize           ignore files smaller than specified size
-followsymlinks    follow symbolic links
-removeidentinode  remove files referring to identical inode
-checksum          identify checksum type to be used
-deterministic     determines how to sort files
-makesymlinks      turn duplicate files into symbolic links
-makehardlinks     replace duplicate files with hard links
-makeresultsfile   create a results file in the current directory
-outputname        provide name for results file
-deleteduplicates  delete/unlink duplicate files
-sleep             set sleep time between reading files (milliseconds)
-n, -dryrun        display what would have been done, but don't do it
```

Note that the rdfind command offers an option to delete duplicate files with the **-deleteduplicates true** setting. Hopefully the command's modest problem with grammar won't irritate you. ;-)

```
$ rdfind -deleteduplicates true .
...
Deleted 1 files. <==
```

You will likely have to install the rdfind command on your system. It's probably a good idea to experiment with it to get comfortable with how it works.

### Using the fdupes command

The **fdupes** command also makes it easy to identify duplicate files and provides a large number of useful options — like **-r** for recursion. In its simplest form, it groups duplicate files together like this:

```
$ fdupes ~
/home/shs/UPGRADE
/home/shs/mytwin

/home/shs/lp.txt
/home/shs/lp.man

/home/shs/penguin.png
/home/shs/penguin0.png
/home/shs/hideme.png
```

Here's an example using recursion. Note that many of the duplicate files are important (users' .bashrc and .profile files) and should clearly not be deleted.

```
# fdupes -r /home
/home/shark/home.html
/home/shark/index.html

/home/dory/.bashrc
/home/eel/.bashrc

/home/nemo/.profile
/home/dory/.profile
/home/shark/.profile

/home/nemo/tryme
/home/shs/tryme

/home/shs/arrow.png
/home/shs/PNGs/arrow.png

/home/shs/11/files_11.zip
/home/shs/ERIC/file_11.zip

/home/shs/penguin0.jpg
/home/shs/PNGs/penguin.jpg
/home/shs/PNGs/penguin0.jpg

/home/shs/Sandra_rotated.png
/home/shs/PNGs/Sandra_rotated.png
```

The fdupes command's many options are listed below. Use the **fdupes -h** command, or read the man page for more details.

```
-r --recurse     recurse
-R --recurse:    recurse through specified directories
-s --symlinks    follow symlinked directories
-H --hardlinks   treat hard links as duplicates
-n --noempty     ignore empty files
-f --omitfirst   omit the first file in each set of matches
-A --nohidden    ignore hidden files
-1 --sameline    list matches on a single line
-S --size        show size of duplicate files
-m --summarize   summarize duplicate files information
-q --quiet       hide progress indicator
-d --delete      prompt user for files to preserve
-N --noprompt    when used with --delete, preserve the first file in set
-I --immediate   delete duplicates as they are encountered
-p --permissions don't consider files with different owner/group or
                 permission bits as duplicates
-o --order=WORD  order files according to specification
-i --reverse     reverse order while sorting
-v --version     display fdupes version
-h --help        displays help
```

The fdupes command is another one that you're likely to have to install and work with for a while to become familiar with its many options.

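Before deleting anything, the **-m** (summarize) option listed above is a safe first step, since it only reports counts and sizes. A quick sketch; the output line shown is illustrative rather than taken from a real run:

```
$ fdupes -r -m /home
9 duplicate files (in 6 sets), occupying 1.1 megabytes
```
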
### Wrap-up

Linux systems provide a good selection of tools for locating and potentially removing duplicate files, along with options for where you want to run your search and what you want to do with duplicate files when you find them.

**[ Also see: [Invaluable tips and tricks for troubleshooting Linux][4] ]**

Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3390204/how-to-identify-same-content-files-on-linux.html#tk.rss_all

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/chairs-100794266-large.jpg
[2]: https://www.networkworld.com/article/3387961/how-to-identify-duplicate-files-on-linux.html
[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[4]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[5]: https://www.facebook.com/NetworkWorld/
[6]: https://www.linkedin.com/company/network-world

@ -0,0 +1,132 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Automate backups with restic and systemd)
[#]: via: (https://fedoramagazine.org/automate-backups-with-restic-and-systemd/)
[#]: author: (Link Dupont https://fedoramagazine.org/author/linkdupont/)

Automate backups with restic and systemd
======

![][1]

Timely backups are important. So much so that [backing up software][2] is a common topic of discussion, even [here on the Fedora Magazine][3]. This article demonstrates how to automate backups with **restic** using only systemd unit files.

For an introduction to restic, be sure to check out our article [Use restic on Fedora for encrypted backups][4]. Then read on for more details.

Two systemd services are required to automate taking snapshots and keeping data pruned. The first service runs the _backup_ command on a regular schedule. The second service takes care of data pruning.

If you’re not familiar with systemd at all, there’s never been a better time to learn. Check out [the series on systemd here at the Magazine][5], starting with this primer on unit files:

> [systemd unit file basics][6]

If you haven’t installed restic already, note it’s in the official Fedora repositories. To install it, use this command [with sudo][7]:

```
$ sudo dnf install restic
```

### Backup

First, create the _~/.config/systemd/user/restic-backup.service_ file. Copy and paste the text below into the file for best results.

```
[Unit]
Description=Restic backup service
[Service]
Type=oneshot
ExecStart=restic backup --verbose --one-file-system --tag systemd.timer $BACKUP_EXCLUDES $BACKUP_PATHS
ExecStartPost=restic forget --verbose --tag systemd.timer --group-by "paths,tags" --keep-daily $RETENTION_DAYS --keep-weekly $RETENTION_WEEKS --keep-monthly $RETENTION_MONTHS --keep-yearly $RETENTION_YEARS
EnvironmentFile=%h/.config/restic-backup.conf
```

This service references an environment file in order to load secrets (such as _RESTIC_PASSWORD_). Create the _~/.config/restic-backup.conf_ file. Copy and paste the content below for best results. This example uses Backblaze B2 buckets. Adjust the ID, key, repository, and password values accordingly.

```
BACKUP_PATHS="/home/rupert"
BACKUP_EXCLUDES="--exclude-file /home/rupert/.restic_excludes --exclude-if-present .exclude_from_backup"
RETENTION_DAYS=7
RETENTION_WEEKS=4
RETENTION_MONTHS=6
RETENTION_YEARS=3
B2_ACCOUNT_ID=XXXXXXXXXXXXXXXXXXXXXXXXX
B2_ACCOUNT_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RESTIC_REPOSITORY=b2:XXXXXXXXXXXXXXXXXX:/
RESTIC_PASSWORD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```

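Since this file holds credentials, it’s worth tightening its permissions so only your user can read it (this step is my suggestion, not part of the original article):

```
$ chmod 600 ~/.config/restic-backup.conf
```
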
Now that the service is installed, reload systemd: _systemctl --user daemon-reload_. Try running the service manually to create a backup: _systemctl --user start restic-backup_.

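Spelled out as shell commands, that is:

```
$ systemctl --user daemon-reload
$ systemctl --user start restic-backup.service
```
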
Because the service is a _oneshot_, it will run once and exit. After verifying that the service runs and creates snapshots as desired, set up a timer to run this service regularly. For example, to run the _restic-backup.service_ daily, create _~/.config/systemd/user/restic-backup.timer_ as follows. Again, copy and paste this text:

```
[Unit]
Description=Backup with restic daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
```

Enable it by running this command:

```
$ systemctl --user enable --now restic-backup.timer
```

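To confirm that the timer is registered and see when it will fire next, you can list your user timers (an extra check, not from the original article):

```
$ systemctl --user list-timers restic-backup.timer
```
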
### Prune

While the main service runs the _forget_ command to only keep snapshots within the keep policy, the data is not actually removed from the restic repository. The _prune_ command inspects the repository and current snapshots, and deletes any data not associated with a snapshot. Because _prune_ can be a time-consuming process, it is not necessary to run every time a backup is run. This is the perfect scenario for a second service and timer. First, create the file _~/.config/systemd/user/restic-prune.service_ by copying and pasting this text:

```
[Unit]
Description=Restic backup service (data pruning)
[Service]
Type=oneshot
ExecStart=restic prune
EnvironmentFile=%h/.config/restic-backup.conf
```

Similarly to the main _restic-backup.service_, _restic-prune_ is a oneshot service and can be run manually. Once the service has been set up, create and enable a corresponding timer at _~/.config/systemd/user/restic-prune.timer_:

```
[Unit]
Description=Prune data from the restic repository monthly
[Timer]
OnCalendar=monthly
Persistent=true
[Install]
WantedBy=timers.target
```

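Enable it the same way as the backup timer; the exact command below mirrors the earlier step rather than being shown in the original:

```
$ systemctl --user enable --now restic-prune.timer
```
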
That’s it! Restic will now run daily and prune data monthly.

* * *

_Photo by [Samuel Zeller][8] on [Unsplash][9]._

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/automate-backups-with-restic-and-systemd/

作者:[Link Dupont][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/linkdupont/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/restic-systemd-816x345.jpg
[2]: https://restic.net/
[3]: https://fedoramagazine.org/?s=backup
[4]: https://fedoramagazine.org/use-restic-encrypted-backups/
[5]: https://fedoramagazine.org/series/systemd-series/
[6]: https://fedoramagazine.org/systemd-getting-a-grip-on-units/
[7]: https://fedoramagazine.org/howto-use-sudo/
[8]: https://unsplash.com/photos/JuFcQxgCXwA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[9]: https://unsplash.com/search/photos/archive?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

106
sources/tech/20190425 Debian has a New Project Leader.md
Normal file

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Debian has a New Project Leader)
[#]: via: (https://itsfoss.com/debian-project-leader-election/)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)

Debian has a New Project Leader
======

Like every year, the Debian Secretary announced a call for nominations for the post of Debian Project Leader (commonly known as DPL) in early March. Soon, 5 candidates shared their nominations. One of the DPL candidates backed out due to personal reasons, and we had [four candidates][1], as can be seen in the Nomination section of the Vote page.

### Sam Hartman, the new Debian Project Leader

![][2]

While I will not go much into details, as Sam has already outlined his position on his [platform][3], it is good to see that most Debian developers recognize that it’s no longer just technical excellence which needs to be looked at. I do hope he is able to create more teams, which would leave more time in the DPL’s hands and mean less stress going forward.

As he has shared, he would also be looking into helping the other DPL candidates, all of whom presented initiatives to make Debian better.

Apart from this, there had been some excellent suggestions, for example modernizing debian-installer, giving lists.debian.org a [Mailman 3][4] instance, modernizing Debian packaging, and many more.

While a year is probably too short a time for any of the deliverables that Debian people are considering, some sort of push or start should enable Debian to reach greater heights than today.

### A brief history of DPL elections

In the beginning, Debian was similar to many distributions which have a [BDFL][5], although from the very start Debian had a sort of rolling leadership. While I won’t go through the whole history, from October 1998 an idea [germinated][6] to have a Debian Constitution.

After quite a bit of discussion between Debian users, contributors, developers, etc., the [Debian 1.0 Constitution][7] was released on December 2nd, 1998. One of the big changes was that it formalised the selection of the Debian Project Leader via elections.

From 1998 till 2019, 13 Debian Project Leaders have been elected, with Sam Hartman being the latest (2019).

Before Sam, [Chris Lamb][8] was DPL in 2017 and again stood for re-election in 2018. One of the biggest changes in Chris’s tenure was giving more impetus to outreach than ever before. This made it possible to have many more mini-DebConfs all around the world, thus increasing the number of Debian users and potential Debian Developers.

### Duties and Responsibilities of the Debian Project Leader

![][10]

Debian Project Leader (DPL) is a non-monetary position, which means that the DPL doesn’t get a salary or any monetary benefits in the traditional sense, but it’s a prestigious position.

Curious what a DPL does? Here are some of the duties, responsibilities, prestige and perks associated with this position.

#### Travelling

As the DPL is the public face of the project, she/he is supposed to travel to many places in the world to share about Debian. While the travel may be a perk, it is offset by not being paid for the time spent articulating Debian’s position in various free software and other communities. Travel, language, and the politics of free software are also some of the stress points that any DPL has to go through.

#### Communication

A DPL is expected to have excellent verbal and non-verbal communication skills, as she/he is expected to share Debian’s vision of computing with technical and non-technical people. As she/he is also expected to weigh in on many a sensitive matter, the Project Leader has to make choices about which communications should be made public and which should be private.

#### Budgeting

Quite a bit of the time, the Debian Project Leader has to look into the finances along with the Secretary and take a call on various initiatives mooted by the larger community. The Project Leader has to ask questions and then make informed decisions on the same.

#### Delegation

One of the important tasks of the DPL is to delegate different tasks to suitable people. Some sensitive delegations include ftp-master, ftp-assistant, list-managers, debian-mirror, debian-infrastructure and so on.

#### Influence

Last but not least, just like any other election, the people who contest for DPL have a platform where they share their ideas about where they would like to see the Debian project heading and how they would go about doing it.

This is by no means an exhaustive list. I would suggest reading Lucas Nussbaum’s [mail][11], in which he outlines some more responsibilities of a Debian Project Leader.

**In the end…**

I wish Sam Hartman all the luck. I look forward to seeing how Debian grows under his leadership.

I also hope that you learned a few non-technical things about Debian. If you are an [ardent Debian user][13], stuff like this makes you feel more involved with the Debian project. What do you say?

--------------------------------------------------------------------------------

via: https://itsfoss.com/debian-project-leader-election/

作者:[Shirish][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://www.debian.org/vote/2019/vote_001
[2]: https://itsfoss.com/wp-content/uploads/2019/04/Debian-Project-Leader-election-800x450.png
[3]: https://www.debian.org/vote/2019/platforms/hartmans
[4]: http://docs.mailman3.org/en/latest/
[5]: https://en.wikipedia.org/wiki/Benevolent_dictator_for_life
[6]: https://lists.debian.org/debian-devel/1998/09/msg00506.html
[7]: https://www.debian.org/devel/constitution.1.0
[8]: https://www.debian.org/vote/2017/platforms/lamby
[9]: https://itsfoss.com/semicode-os-linux/
[10]: https://itsfoss.com/wp-content/uploads/2019/04/leadership-800x450.jpg
[11]: https://lists.debian.org/debian-vote/2019/03/msg00023.html
[12]: https://itsfoss.com/bodhi-linux-5/
[13]: https://itsfoss.com/reasons-why-i-love-debian/

125
sources/tech/20190426 NomadBSD, a BSD for the Road.md
Normal file

@ -0,0 +1,125 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (NomadBSD, a BSD for the Road)
[#]: via: (https://itsfoss.com/nomadbsd/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

NomadBSD, a BSD for the Road
======

As regular It’s FOSS readers should know, I like diving into the world of BSDs. Recently, I came across an interesting BSD that is designed to live on a thumb drive. Let’s take a look at NomadBSD.

### What is NomadBSD?

![Nomadbsd Desktop][1]

[NomadBSD][2] is different from most available BSDs. NomadBSD is a live system based on FreeBSD. It comes with automatic hardware detection and an initial config tool. NomadBSD is designed to “be used as a desktop system that works out of the box, but can also be used for data recovery, for educational purposes, or to test FreeBSD’s hardware compatibility.”

This German BSD comes with an [OpenBox][3]-based desktop with the Plank application dock. NomadBSD makes use of the [DSB project][4]. DSB stands for “Desktop Suite (for) (Free)BSD” and consists of a collection of programs designed to create a simple and working environment without needing a ton of dependencies to use one tool. DSB is created by [Marcel Kaiser][5], one of the lead devs of NomadBSD.

Just like the original BSD projects, you can contact the NomadBSD developers via a [mailing list][6].

#### Included Applications

NomadBSD comes with the following software installed:

* Thunar file manager
* Asunder CD ripper
* Bash 5.0
* Filezilla FTP client
* Firefox web browser
* Fish Command line
* Gimp
* Qpdfview
* Git
* Hexchat IRC client
* Leafpad text editor
* Midnight Commander file manager
* PaleMoon web browser
* PCManFM file manager
* Pidgin messaging client
* Transmission BitTorrent client
* Redshift
* Sakura terminal emulator
* Slim login manager
* Thunderbird email client
* VLC media player
* Plank application dock
* Z Shell

You can see a complete list of the pre-installed applications in the [MANIFEST file][8].

![Nomadbsd Openbox Menu][9]

#### Version 1.2 Released

NomadBSD recently released version 1.2 on April 21, 2019. This means that NomadBSD is now based on FreeBSD 12.0-p3. TRIM is now enabled by default. One of the biggest changes is that the initial command-line setup was replaced with a Qt graphical interface. They also added a Qt5 tool to install NomadBSD to your hard drive. A number of fixes were included to improve graphics support. They also added support for creating 32-bit images.

### Installing NomadBSD

Since NomadBSD is designed to be a live system, we will need to add the BSD to a USB drive. First, you will need to [download it][11]. There are several options to choose from: 64-bit, 32-bit, or 64-bit Mac.

You will need a USB drive that has at least 4GB. The system that you are installing to should have a 1.2 GHz processor and 1GB of RAM to run NomadBSD comfortably. Both BIOS and UEFI are supported.

All of the images available for download are compressed as a `.lzma` file. So, once you have downloaded the file, you will need to extract the `.img` file. On Linux, you can use either of these commands: `lzma -d nomadbsd-x.y.z.img.lzma` or `xzcat nomadbsd-x.y.z.img.lzma > nomadbsd-x.y.z.img`. (Be sure to replace x.y.z with the correct file name you just downloaded.)

Before we proceed, we need to find out the id of your USB drive. (Hopefully, you have inserted it by now.) I use the `lsblk` command to find my USB drive, which in my case is `sdb`. To write the image file, use this command `sudo dd if=nomadbsd-x.y.z.img of=/dev/sdb bs=1M conv=sync`. (Again, don’t forget to correct the file name.) If you are uncomfortable using `dd`, you can use [Etcher][12]. If you have Windows, you will need to use [7-zip][13] to extract the image file and Etcher or [Rufus][14] to write the image to the USB drive.

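Put together, the whole sequence looks something like this (a sketch that assumes your USB drive really is `/dev/sdb`; double-check with `lsblk` before running `dd`):

```
$ lzma -d nomadbsd-x.y.z.img.lzma    # extracts nomadbsd-x.y.z.img
$ lsblk                              # identify your USB drive (sdb in this example)
$ sudo dd if=nomadbsd-x.y.z.img of=/dev/sdb bs=1M conv=sync
```
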
When you boot from the USB drive, you will encounter a simple config tool. Once you answer the required questions, you will be greeted with a simple Openbox desktop.

### Thoughts on NomadBSD

I first discovered NomadBSD back in January when they released 1.2-RC1. At the time, I had been unable to install [Project Trident][15] on my laptop and was very frustrated with BSDs. I downloaded NomadBSD and tried it out. I initially ran into issues reaching the desktop, but RC2 fixed that issue. However, I was unable to get on the internet, even though I had an Ethernet cable plugged in. Luckily, I found the wifi manager in the menu and was able to connect to my wifi.

Overall, my experience with NomadBSD was pleasant. Once I figured out a few things, I was good to go. I hope that NomadBSD is the first of a new generation of BSDs that focus on mobility and ease of use. BSD has conquered the server world; it’s about time they figured out how to be more user-friendly.

Have you ever used NomadBSD? What is your favorite BSD? Please let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][16].

--------------------------------------------------------------------------------

via: https://itsfoss.com/nomadbsd/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2019/04/NomadBSD-desktop-800x500.jpg
[2]: http://nomadbsd.org/
[3]: http://openbox.org/wiki/Main_Page
[4]: https://freeshell.de/%7Emk/projects/dsb.html
[5]: https://github.com/mrclksr
[6]: http://nomadbsd.org/contact.html
[7]: https://itsfoss.com/netflix-freebsd-cdn/
[8]: http://nomadbsd.org/download/nomadbsd-1.2.manifest
[9]: https://itsfoss.com/wp-content/uploads/2019/04/NomadBSD-Openbox-menu-800x500.jpg
[10]: https://itsfoss.com/why-use-bsd/
[11]: http://nomadbsd.org/download.html
[12]: https://www.balena.io/etcher/
[13]: https://www.7-zip.org/
[14]: https://rufus.ie/
[15]: https://itsfoss.com/project-trident-interview/
[16]: http://reddit.com/r/linuxusersgroup

@ -0,0 +1,166 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Monitoring CPU and GPU Temperatures on Linux)
[#]: via: (https://itsfoss.com/monitor-cpu-gpu-temp-linux/)
[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/)

Monitoring CPU and GPU Temperatures on Linux
======

_**Brief: This article discusses two simple ways of monitoring CPU and GPU temperatures in the Linux command line.**_

Because of **[Steam][1]** (including _[Steam Play][2]_, aka _Proton_) and other developments, **GNU/Linux** is becoming the gaming platform of choice for more and more computer users every day. A good number of users are also going for **GNU/Linux** when it comes to other resource-consuming computing tasks such as [video editing][3] or graphic design (_Kdenlive_ and _[Blender][4]_ are good examples of programs for these).

Whether you are one of those users or otherwise, you are bound to have wondered how hot your computer’s CPU and GPU can get (even more so if you do overclocking). If that is the case, keep reading. We will be looking at a couple of very simple commands to monitor CPU and GPU temps.

My setup includes a [Slimbook Kymera][5] and two displays (a TV set and a PC monitor), which allows me to use one for playing games and the other to keep an eye on the temperatures. Also, since I use [Zorin OS][6], I will be focusing on **Ubuntu** and **Ubuntu** derivatives.

To monitor the behaviour of both CPU and GPU we will be making use of the useful `watch` command to get refreshed readings every few seconds.

![][7]

### Monitoring CPU Temperature in Linux

For CPU temps, we will combine `watch` with the `sensors` command. An interesting article about a [GUI version of this tool has already been covered on It’s FOSS][8]. However, we will use the terminal version here:

```
watch -n 2 sensors
```

`watch` guarantees that the readings will be updated every 2 seconds (and this value can — of course — be changed to what best fits your needs):

```
Every 2,0s: sensors

iwlwifi-virtual-0
Adapter: Virtual device
temp1: +39.0°C

acpitz-virtual-0
Adapter: Virtual device
temp1: +27.8°C (crit = +119.0°C)
temp2: +29.8°C (crit = +119.0°C)

coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +37.0°C (high = +82.0°C, crit = +100.0°C)
Core 0: +35.0°C (high = +82.0°C, crit = +100.0°C)
Core 1: +35.0°C (high = +82.0°C, crit = +100.0°C)
Core 2: +33.0°C (high = +82.0°C, crit = +100.0°C)
Core 3: +36.0°C (high = +82.0°C, crit = +100.0°C)
Core 4: +37.0°C (high = +82.0°C, crit = +100.0°C)
Core 5: +35.0°C (high = +82.0°C, crit = +100.0°C)
```

Amongst other things, we get the following information:

* We have 6 cores in use at the moment (with the current highest temperature being 37.0ºC).
* Values higher than 82.0ºC are considered high.
* A value over 100.0ºC is deemed critical.

The values above lead us to the conclusion that the computer’s workload is very light at the moment.

### Monitoring GPU Temperature in Linux

Let us turn to the graphics card now. I have never used an **AMD** dedicated graphics card, so I will be focusing on **Nvidia** ones. The first thing to do is download the appropriate, current driver through [additional drivers in Ubuntu][10].

On **Ubuntu** (and its forks such as **Zorin** or **Linux Mint**), going to _Software & Updates_ > _Additional Drivers_ and selecting the most recent one normally suffices. Additionally, you can add/enable the official _ppa_ for graphics cards (either through the command line or via _Software & Updates_ > _Other Software_). After installing the driver you will have at your disposal the _Nvidia X Server_ GUI application along with the command line utility _nvidia-smi_ (Nvidia System Management Interface). So we will use `watch` and `nvidia-smi`:

```
watch -n 2 nvidia-smi
```

And — the same as for the CPU — we will get updated readings every two seconds:

```
Every 2,0s: nvidia-smi

Fri Apr 19 20:45:30 2019
+-----------------------------------------------------------------------------+
| Nvidia-SMI 418.56 Driver Version: 418.56 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 106... Off | 00000000:01:00.0 On | N/A |
| 0% 54C P8 10W / 120W | 433MiB / 6077MiB | 4% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1557 G /usr/lib/xorg/Xorg 190MiB |
| 0 1820 G /usr/bin/gnome-shell 174MiB |
| 0 7820 G ...equest-channel-token=303407235874180773 65MiB |
+-----------------------------------------------------------------------------+
```

The chart gives the following information about the graphics card:

* it is using the proprietary Nvidia driver, version 418.56.
* the current temperature of the card is 54.0ºC — with the fan at 0% of its capacity.
* the power consumption is very low: only 10W.
* out of 6 GB of vram (video random access memory), it is only using 433 MB.
* the used vram is being taken by three processes whose IDs are — respectively — 1557, 1820 and 7820.

Most of these facts/values show that — clearly — we are not playing any resource-consuming games or dealing with heavy workloads. Should we start playing a game, processing a video, or the like, the values would start to go up.

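If you would rather see both readings in a single terminal, `watch` can run a compound command; this combination is my own suggestion rather than something from the article:

```
watch -n 2 "sensors; echo; nvidia-smi"
```
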
#### Conclusion

Although there are GUI tools, I find these two commands very handy to check on your hardware in real time.

What do you make of them? You can learn more about the utilities involved by reading their man pages.

Do you have other preferences? Share them with us in the comments. ;)

Halof!!! (Have a lot of fun!!!)

![avatar][12]

### Alejandro Egea-Abellán

It’s FOSS Community Contributor

I developed a liking for electronics, linguistics, herpetology and computers (particularly GNU/Linux and FOSS). I am LPIC-2 certified and currently work as a technical consultant and Moodle administrator in the Department for Lifelong Learning at the Ministry of Education in Murcia, Spain. I am a firm believer in lifelong learning, the sharing of knowledge and computer-user freedom.

--------------------------------------------------------------------------------

via: https://itsfoss.com/monitor-cpu-gpu-temp-linux/

作者:[It's FOSS Community][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/itsfoss/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/install-steam-ubuntu-linux/
[2]: https://itsfoss.com/steam-play-proton/
[3]: https://itsfoss.com/best-video-editing-software-linux/
[4]: https://www.blender.org/
[5]: https://slimbook.es/
[6]: https://zorinos.com/
[7]: https://itsfoss.com/wp-content/uploads/2019/04/monitor-cpu-gpu-temperature-linux-800x450.png
[8]: https://itsfoss.com/check-laptop-cpu-temperature-ubuntu/
[9]: https://itsfoss.com/best-command-line-games-linux/
[10]: https://itsfoss.com/install-additional-drivers-ubuntu/
[11]: https://itsfoss.com/review-googler-linux/
[12]: https://itsfoss.com/wp-content/uploads/2019/04/EGEA-ABELLAN-Alejandro.jpg

@ -0,0 +1,116 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Installing Budgie Desktop on Ubuntu [Quick Guide])
[#]: via: (https://itsfoss.com/install-budgie-ubuntu/)
[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/)

Installing Budgie Desktop on Ubuntu [Quick Guide]
======

_**Brief: Learn how to install Budgie desktop on Ubuntu in this step-by-step tutorial.**_

Among all the [various Ubuntu versions][1], [Ubuntu Budgie][2] is the most underrated one. It looks elegant and it’s not heavy on resources.

Read this [Ubuntu Budgie review][3] or simply watch this video to see what Ubuntu Budgie 18.04 looks like.

[Subscribe to our YouTube channel for more Linux Videos][4]

If you like [Budgie desktop][5] but you are using some other version of Ubuntu, such as the default Ubuntu with GNOME desktop, I have good news for you. You can install Budgie on your current Ubuntu system and switch the desktop environments.

In this post, I’m going to tell you exactly how to do that. But first, a little introduction to Budgie for those who are unaware of it.

The Budgie desktop environment is developed mainly by the [Solus Linux team][6]. It is designed with a focus on elegance and modern usage. Budgie is available for all major Linux distributions for users to try and experience this new desktop environment. Budgie is pretty mature by now and provides a great desktop experience.

Warning

Installing multiple desktops on the same system MAY result in conflicts and you may see some issues like missing icons in the panel or multiple icons of the same program.

You may not see any issue at all as well. It’s your call if you want to try a different desktop.

### Install Budgie on Ubuntu

This method is not tested on Linux Mint, so I recommend that you not follow this guide for Mint.

For those on Ubuntu, Budgie is now a part of the Ubuntu repositories by default. Hence, we don’t need to add any PPAs in order to get Budgie.

To install Budgie, simply run this command in the terminal. We’ll first make sure that the system is fully updated.

```
sudo apt update && sudo apt upgrade
sudo apt install ubuntu-budgie-desktop
```

When everything is done downloading, you will get a prompt to choose your display manager. Select ‘lightdm’ to get the full Budgie experience.

![Select lightdm][7]

After the installation is complete, reboot your computer. You will then be greeted by the Budgie login screen. Enter your password to go into the homescreen.

![Budgie Desktop Home][8]

### Switching to other desktop environments

![Budgie login screen][9]

You can click the Budgie icon next to your name to get options for login. From there you can select between the installed Desktop Environments (DEs). In my case, I see Budgie and the default Ubuntu (GNOME) DEs.

![Select your DE][10]

Hence whenever you feel like logging into GNOME, you can do so using this menu.

### How to Remove Budgie

If you don’t like Budgie or just want to go back to your regular old Ubuntu, you can switch back to your regular desktop as described in the above section.

However, if you really want to remove Budgie and its components, you can use the following commands to get back to a clean slate.

_**Switch to some other desktop environment before using these commands:**_

```
sudo apt remove ubuntu-budgie-desktop ubuntu-budgie* lightdm
sudo apt autoremove
sudo apt install --reinstall gdm3
```

After running all the commands successfully, reboot your computer.

Now, you will be back to GNOME or whichever desktop environment you had.

**What do you think of Budgie?**

Budgie is one of the [best desktop environments for Linux][12]. Hope this short guide helped you install the awesome Budgie desktop on your Ubuntu system.

If you did install Budgie, what do you like about it the most? Let us know in the comments below. And as usual, any questions or suggestions are always welcome.

--------------------------------------------------------------------------------

via: https://itsfoss.com/install-budgie-ubuntu/

作者:[Atharva Lele][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/atharva/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/which-ubuntu-install/
[2]: https://ubuntubudgie.org/
[3]: https://itsfoss.com/ubuntu-budgie-18-review/
[4]: https://www.youtube.com/c/itsfoss?sub_confirmation=1
[5]: https://github.com/solus-project/budgie-desktop
[6]: https://getsol.us/home/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_select_dm.png?fit=800%2C559&ssl=1
[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_homescreen.jpg?fit=800%2C500&ssl=1
[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen.png?fit=800%2C403&ssl=1
[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/budgie_install_lockscreen_select_de.png?fit=800%2C403&ssl=1
[11]: https://itsfoss.com/snapd-error-ubuntu/
[12]: https://itsfoss.com/best-linux-desktop-environments/

177
sources/tech/20190429 Awk utility in Fedora.md
Normal file
@ -0,0 +1,177 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Awk utility in Fedora)
[#]: via: (https://fedoramagazine.org/awk-utility-in-fedora/)
[#]: author: (Stephen Snow https://fedoramagazine.org/author/jakfrost/)

Awk utility in Fedora
======

![][1]

Fedora provides _awk_ as part of its default installation in all its editions, including the immutable ones like Silverblue. But you may be asking, what is _awk_ and why would you need it?

_Awk_ is a data driven programming language that acts when it matches a pattern. On Fedora, and most other distributions, GNU _awk_ or _gawk_ is used. Read on for more about this language and how to use it.

### A brief history of awk

_Awk_ began at Bell Labs in 1977. Its name is an acronym from the initials of the designers: Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan.

> The specification for _awk_ in the POSIX Command Language and Utilities standard further clarified the language. Both the _gawk_ designers and the original _awk_ designers at Bell Laboratories provided feedback for the POSIX specification.
>
> From [The GNU Awk User’s Guide][2]

For a more in-depth look at how _awk/gawk_ ended up being as powerful and useful as it is, follow the link above. Numerous individuals have contributed to the current state of _gawk_. Among those are:

* Arnold Robbins and David Trueman, the creators of _gawk_
* Michael Brennan, the creator of _mawk_, which later was merged with _gawk_
* Jurgen Kahrs, who added networking capabilities to _gawk_ in 1997
* John Hague, who rewrote the _gawk_ internals and added an _awk_-level debugger in 2011

### Using awk

The following sections show various ways of using _awk_ in Fedora.

#### At the command line

The simplest way to invoke _awk_ is at the command line. You can search a text file for a particular pattern, and if found, print out the line(s) of the file that match the pattern anywhere. As an example, use _cat_ to take a look at the command history file in your home directory:

```
$ cat ~/.bash_history
```

There are probably many lines scrolling by right now.

_Awk_ helps with this type of file quite easily. Instead of printing the entire file out to the terminal like _cat_, you can use _awk_ to find something of specific interest. For this example, type the following at the command line if you’re running a standard Fedora edition:

```
$ awk '/dnf/' ~/.bash_history
```

If you’re running Silverblue, try this instead:

```
$ awk '/rpm-ostree/' ~/.bash_history
```

In both cases, more data likely appears than what you really want. That’s no problem for _awk_ since it can accept regular expressions. Using the previous example, you can change the pattern to more closely match search requirements of wanting to know about installs only. Try changing the search pattern to one of these:

```
$ awk '/rpm-ostree install/' ~/.bash_history
$ awk '/dnf install/' ~/.bash_history
```

All the entries of your bash command line history that have the specified pattern at any position along the line appear. Awk works on one line of a data file at a time. It matches the pattern, then performs an action, then moves to the next line until the end of file (EOF) is reached.

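The action is an optional block in braces after the pattern; when it is omitted, as above, awk simply prints the matching line. As an illustration of my own (not from the original article), this variation prints each matching history line prefixed with its line number, using awk’s built-in _NR_ variable:

```
$ awk '/dnf install/ { print NR ": " $0 }' ~/.bash_history
```
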
#### From an _awk_ program

Using awk at the command line as above is not much different than piping output to _grep_, like this:

```
$ cat .bash_history | grep 'dnf install'
```

The end result of printing to standard output (_stdout_) is the same with both methods.

Awk is a programming language, and the command _awk_ is an interpreter of that language. The real power and flexibility of _awk_ is that you can make programs with it, and combine them with shell scripts to create even more powerful programs. For more feature rich development with _awk_, you can also incorporate C or C++ code using [Dynamic-Extensions][3].

Next, to show the power of _awk_, let’s make a couple of program files to print the header and draw five numbers for the first row of a bingo card. To do this we’ll create two awk program files.

The first file prints out the header of the bingo card. For this example it is called _bingo-title.awk_. Use your favorite editor to save this text as that file name:

```
BEGIN {
    print "B\tI\tN\tG\tO"
}
```

Now the title program is ready. You could try it out with this command:

```
$ awk -f bingo-title.awk
```

The program prints the word BINGO, with a tab space (_\t_) between the characters. For the number selection, let’s use one of awk’s built-in numeric functions called _rand()_ together with the _for_ control statement.

The title of the second awk program is _bingo-num.awk_. Enter the following into your favorite editor and save with that file name:

```
@include "bingo-title.awk"
BEGIN {
    for (i = 1; i <= 5; i++) {
        b = int(rand() * 15) + (15 * (i - 1))
        printf "%s\t", b
    }
    print
}
```

The _@include_ statement in the file tells the interpreter to process the included file first. In this case the interpreter processes the _bingo-title.awk_ file, so the title prints out first.

#### Running the test program

Now enter the command to pick a row of bingo numbers:

```
$ awk -f bingo-num.awk
```

Output appears similar to the following. Note that the _rand()_ function in _awk_ is not ideal for truly random numbers. It’s used here only for example purposes.

```
$ awk -f bingo-num.awk
B I N G O
13 23 34 53 71
```

In the example, we created two programs with only BEGIN sections that used actions to manipulate data generated from within the awk program. In order to satisfy the rules of Bingo, more work is needed to achieve the desired results. The reader is encouraged to fix the programs so they can reliably pick bingo numbers; maybe look at the awk function _srand()_ for answers on how that could be done.

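As one possible starting point (a sketch of mine, not the article’s solution), calling _srand()_ once seeds _rand()_ from the current time of day, so each run produces different numbers, and adding 1 keeps each column in its proper 15-number range. You could replace the BEGIN block of _bingo-num.awk_ with something like:

```
BEGIN {
    srand()                           # seed rand() with the time of day
    for (i = 1; i <= 5; i++) {
        # column ranges: 1-15, 16-30, 31-45, 46-60, 61-75
        b = int(rand() * 15) + 1 + (15 * (i - 1))
        printf "%s\t", b
    }
    print
}
```
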
### Final examples

_Awk_ can be useful even for mundane daily search tasks that you encounter, like listing all _flatpaks_ on the _Flathub_ repository from _org.gnome_ (provided you have the Flathub repository set up). The command to do that would be:

```
$ flatpak remote-ls flathub --system | awk /org.gnome/
```

A listing appears that shows all output from _remote-ls_ that matches the _org.gnome_ pattern. To see flatpaks already installed from org.gnome, enter this command:

```
$ flatpak list --system | awk /org.gnome/
```

Awk is a powerful and flexible programming language that fills a niche with text file manipulation exceedingly well.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/awk-utility-in-fedora/

作者:[Stephen Snow][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/jakfrost/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/awk-816x345.jpg
[2]: https://www.gnu.org/software/gawk/manual/gawk.html#Foreword3
[3]: https://www.gnu.org/software/gawk/manual/gawk.html#Dynamic-Extensions

@ -0,0 +1,111 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip])
[#]: via: (https://itsfoss.com/turn-on-raspberry-pi/)
[#]: author: (Chinmay https://itsfoss.com/author/chinmay/)

How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip]
======

_**Brief: This quick tip teaches you how to turn on the Raspberry Pi and how to shut it down properly afterwards.**_

The [Raspberry Pi][1] is one of the [most popular SBCs (Single-Board Computers)][2]. If you are interested in this topic, I believe that you’ve finally got a Pi device. I also advise getting all the [additional Raspberry Pi accessories][3] to get started with your device.

You’re ready to turn it on and start to tinker around with it. It has its own similarities and differences compared to traditional computers like desktops and laptops.

Today, let’s go ahead and learn how to turn on and shutdown a Raspberry Pi, as it doesn’t really feature a ‘power button’ of sorts.

For this article I’m using a Raspberry Pi 3B+, but it’s the same for all the Raspberry Pi variants.

### Turn on Raspberry Pi

![Micro USB port for Power][7]

The micro USB port powers the Raspberry Pi; the way you turn it on is by plugging the power cable into the micro USB port. But, before you do that, you should make sure that you have done the following things.

* Preparing the micro SD card with Raspbian according to the official [guide][8] and inserting it into the micro SD card slot.
* Plugging in the HDMI cable, USB keyboard and a mouse.
* Plugging in the Ethernet cable (optional).

Once you have done the above, plug in the power cable. This turns on the Raspberry Pi and the display will light up and load the operating system.

### Shutting Down the Pi

Shutting down the Pi is pretty straightforward: click the menu button and choose shutdown.

![Turn off Raspberry Pi graphically][9]

Alternatively, you can use the [shutdown command][10] in the terminal:

```
sudo shutdown now
```

Once the shutdown process has started, **wait** till it completely finishes, and then you can cut the power to it. Once the Pi shuts down, there is no real way to turn the Pi back on without turning the power off and on again. You could use the GPIOs to turn on the Pi from the shutdown state, but it’ll require additional modding.

_Note: Micro USB ports tend to be fragile, hence turn the power off/on at the source instead of frequently unplugging and plugging into the micro USB port._

Well, that’s about all you should know about turning on and shutting down the Pi. What do you plan to use it for? Let me know in the comments.


--------------------------------------------------------------------------------

via: https://itsfoss.com/turn-on-raspberry-pi/

作者:[Chinmay][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/chinmay/
[b]: https://github.com/lujun9972
[1]: https://www.raspberrypi.org/
[2]: https://itsfoss.com/raspberry-pi-alternatives/
[3]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/
[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-3-microusb.png?fit=800%2C532&ssl=1
[8]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md
[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/Raspbian-ui-menu.jpg?fit=800%2C492&ssl=1
[10]: https://linuxhandbook.com/linux-shutdown-command/

@ -0,0 +1,115 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The Awesome Fedora 30 is Here! Check Out the New Features)
[#]: via: (https://itsfoss.com/fedora-30/)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

The Awesome Fedora 30 is Here! Check Out the New Features
======

The latest and greatest release of Fedora is here. Fedora 30 brings some visual as well as performance improvements.

Fedora releases a new version every six months, and each release is supported for thirteen months.

Before you decide to download or upgrade Fedora, let's first see what's new in Fedora 30.

### New Features in Fedora 30

![Fedora 30 Release][1]

Here's what's new in the latest release of Fedora.

#### GNOME 3.32 brings a brand new look, new features, and performance improvements

The latest release of GNOME brings a lot of visual improvements.

GNOME 3.32 has refreshed icons and UI, and it almost looks like a brand new version of GNOME.

![Gnome 3.32 icons | Image Credit][2]

GNOME 3.32 also brings several other features like fractional scaling, per-application permission control, and granular control over Night Light intensity, among many other changes.

GNOME 3.32 also brings some performance improvements. You'll see faster file and app searches and smoother scrolling.

#### Improved performance for DNF

Fedora 30 will see a faster [DNF][3] (the default package manager for Fedora) thanks to the [zchunk][4] compression algorithm.

The zchunk algorithm splits a file into independent chunks. When a new version of the file is published, you download only the chunks that changed instead of the whole file.

With zchunk, DNF will only download the difference between the metadata of the current version and the earlier versions.
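
To get an intuition for why this saves bandwidth, here is a toy sketch of the chunk-compare-and-fetch idea in Python. It is purely illustrative: the real zchunk format uses smarter, content-defined chunk boundaries, and the function names here are made up for the example:

```
# Toy illustration of the idea behind zchunk, not the real format:
# hash fixed-size chunks and fetch only the chunks whose hashes
# differ from the copy we already have.
import hashlib

CHUNK_SIZE = 4096  # zchunk itself picks boundaries more cleverly

def chunk_hashes(data):
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def chunks_to_fetch(local_data, remote_hashes):
    """Indices of remote chunks that are missing or changed locally."""
    local_hashes = chunk_hashes(local_data)
    return [i for i, h in enumerate(remote_hashes)
            if i >= len(local_hashes) or local_hashes[i] != h]

old = b"a" * 10000
new = b"a" * 8192 + b"b" * 1808            # only the tail changed
print(chunks_to_fetch(old, chunk_hashes(new)))  # -> [2]: one chunk, not three
```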

#### Fedora 30 brings two new desktop environments into the fold

Fedora already offers several desktop environment choices. Fedora 30 extends the offering with [elementary OS][5]' Pantheon desktop environment and Deepin Linux's [DeepinDE][6].

So now you can enjoy the look and feel of elementary OS and Deepin Linux in Fedora. How cool is that!

#### Linux Kernel 5

Fedora 30 comes with Linux kernel 5.0.9, which has improved support for hardware and some performance improvements. You may check out the [features of Linux kernel 5.0 in this article][7].

#### Updated software

You'll also get newer versions of software. Some of the major ones are:

  * GCC 9.0.1
  * [Bash Shell 5.0][9]
  * GNU C Library 2.29
  * Ruby 2.6
  * Golang 1.12
  * Mesa 19.0.2
  * Vagrant 2.2
  * JDK12
  * PHP 7.3
  * Fish 3.0
  * Erlang 21
  * Python 3.7.3

### Getting Fedora 30

If you are already using Fedora 29, you can upgrade to the latest release from your current install. You may follow this guide to learn [how to upgrade a Fedora version][10].

Fedora 29 users will still get updates for seven more months, so if you don't feel like upgrading, you may skip it for now. Fedora 28 users, however, have no choice: Fedora 28 reaches end of life next month, which means there will be no more security or maintenance updates. Upgrading to a newer version is the only option.

You always have the option to download the ISO of Fedora 30 and install it afresh. You can download Fedora from its official website. It's only available for 64-bit systems, and the ISO is 1.9 GB in size.

[Download Fedora 30 Workstation][11]

What do you think of Fedora 30? Are you planning to upgrade or at least try it out? Do share your thoughts in the comment section.

--------------------------------------------------------------------------------

via: https://itsfoss.com/fedora-30/

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2019/04/fedora-30-release-800x450.png
[2]: https://itsfoss.com/wp-content/uploads/2019/04/gnome-3-32-icons.png
[3]: https://fedoraproject.org/wiki/DNF?rd=Dnf
[4]: https://github.com/zchunk/zchunk
[5]: https://itsfoss.com/elementary-os-juno-features/
[6]: https://www.deepin.org/en/dde/
[7]: https://itsfoss.com/linux-kernel-5/
[8]: https://itsfoss.com/nextcloud-14-release/
[9]: https://itsfoss.com/bash-5-release/
[10]: https://itsfoss.com/upgrade-fedora-version/
[11]: https://getfedora.org/en/workstation/

96
sources/tech/20190430 Upgrading Fedora 29 to Fedora 30.md
Normal file

@ -0,0 +1,96 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Upgrading Fedora 29 to Fedora 30)
[#]: via: (https://fedoramagazine.org/upgrading-fedora-29-to-fedora-30/)
[#]: author: (Ryan Lerch https://fedoramagazine.org/author/ryanlerch/)

Upgrading Fedora 29 to Fedora 30
======

![][1]

Fedora 30 [is available now][2]. You'll likely want to upgrade your system to the latest version of Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 29 to Fedora 30.

### Upgrading Fedora 29 Workstation to Fedora 30

Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the **GNOME Software** app. Or you can choose Software from GNOME Shell.

Choose the _Updates_ tab in GNOME Software and you should see a screen informing you that Fedora 30 is Now Available.

If you don't see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available.

Choose _Download_ to fetch the upgrade packages. You can continue working until you reach a stopping point and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.

### Using the command line

If you've upgraded from past Fedora releases, you are likely familiar with the _dnf upgrade_ plugin. This method is the recommended and supported way to upgrade from Fedora 29 to Fedora 30. Using this plugin will make your upgrade to Fedora 30 simple and easy.

##### 1\. Update software and back up your system

Make sure you have the latest software for Fedora 29 before beginning the upgrade process. To update your software, use _GNOME Software_ or enter the following command in a terminal.

```
sudo dnf upgrade --refresh
```

Additionally, make sure you back up your system before proceeding. For help with taking a backup, see [the backup series][3] on the Fedora Magazine.

##### 2\. Install the DNF plugin

Next, open a terminal and type the following command to install the plugin:

```
sudo dnf install dnf-plugin-system-upgrade
```

##### 3\. Start the update with DNF

Now that your system is up-to-date and backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:

```
sudo dnf system-upgrade download --releasever=30
```

This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the `--allowerasing` flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.

##### 4\. Reboot and upgrade

Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:

```
sudo dnf system-upgrade reboot
```

Your system will restart after this. Many releases ago, the _fedup_ tool would create a new option on the kernel selection / boot screen. With the _dnf-plugin-system-upgrade_ package, your system reboots into the current kernel installed for Fedora 29; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.

Now might be a good time for a coffee break! Once it finishes, your system will restart and you'll be able to log in to your newly upgraded Fedora 30 system.

![][4]

### Resolving upgrade problems

On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the [DNF system upgrade wiki page][5] for more information on troubleshooting in the event of a problem.

If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/upgrading-fedora-29-to-fedora-30/

作者:[Ryan Lerch][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/ryanlerch/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/29-30-816x345.jpg
[2]: https://fedoramagazine.org/announcing-fedora-30/
[3]: https://fedoramagazine.org/taking-smart-backups-duplicity/
[4]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png
[5]: https://fedoraproject.org/wiki/DNF_system_upgrade#Resolving_post-upgrade_issues

@ -0,0 +1,73 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 apps to manage personal finances in Fedora)
[#]: via: (https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)

3 apps to manage personal finances in Fedora
======

![][1]

There are numerous services available on the web for managing your personal finances. Although they may be convenient, they also often mean leaving your most valuable personal data with a company you can't monitor. Some people are comfortable with this level of trust.

Whether you are or not, you might be interested in an app you can maintain on your own system. This means your data never has to leave your own computer if you don't want it to. One of these three apps might be what you're looking for.

### HomeBank

HomeBank is a fully featured way to manage multiple accounts. It's easy to set up and keep updated. It has multiple ways to categorize and graph income and liabilities so you can see where your money goes. It's available through the official Fedora repositories.

![A simple account set up in HomeBank with a few transactions.][2]

To install HomeBank, open the _Software_ app, search for _HomeBank_, and select the app. Then click _Install_ to add it to your system. HomeBank is also available via a Flatpak.

### KMyMoney

The KMyMoney app is a mature app that has been around for a long while. It has a robust set of features to help you manage multiple accounts, including assets, liabilities, taxes, and more. KMyMoney includes a full set of tools for managing investments and making forecasts. It also sports a huge set of reports for seeing how your money is doing.

![A subset of the many reports available in KMyMoney.][3]

To install, use a software center app, or use the command line:

```
$ sudo dnf install kmymoney
```

### GnuCash

One of the most venerable free GUI apps for personal finance is GnuCash. GnuCash is not just for personal finances. It also has functions for managing income, assets, and liabilities for a business. That doesn't mean you can't use it for managing just your own accounts. Check out [the online tutorial and guide][4] to get started.

![Checking account records shown in GnuCash.][5]

Open the _Software_ app, search for _GnuCash_, and select the app. Then click _Install_ to add it to your system. Or use _dnf install_ as above to install the _gnucash_ package.

It's now available via Flathub, which makes installation easy. If you don't have Flathub support, check out [this article on the Fedora Magazine][6] for how to use it. Then you can also use the _flatpak install GnuCash_ command in a terminal.

* * *

_Photo by [Fabian Blank][7] on [Unsplash][8]._

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/

作者:[Paul W. Frields][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/personal-finance-3-apps-816x345.jpg
[2]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-16-16-1024x637.png
[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-27-10-1-1024x649.png
[4]: https://www.gnucash.org/viewdoc.phtml?rev=3&lang=C&doc=guide
[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-41-27-1024x631.png
[6]: https://fedoramagazine.org/install-flathub-apps-fedora/
[7]: https://unsplash.com/photos/pElSkGRA2NU?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[8]: https://unsplash.com/search/photos/money?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText

219
sources/tech/20190501 Looking into Linux modules.md
Normal file

@ -0,0 +1,219 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Looking into Linux modules)
[#]: via: (https://www.networkworld.com/article/3391362/looking-into-linux-modules.html#tk.rss_all)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Looking into Linux modules
======
The lsmod command can tell you which kernel modules are currently loaded on your system, along with some interesting details about their use.
![Rob Oo \(CC BY 2.0\)][1]

### What are Linux modules?

Kernel modules are chunks of code that are loaded into and unloaded from the kernel as needed, thus extending the functionality of the kernel without requiring a reboot. In fact, unless users inquire about modules using commands like **lsmod**, they won't likely know that anything has changed.

One important thing to understand is that there are _lots_ of modules in use on your Linux system at all times, and a lot of detail is available if you're tempted to dive in.

One of the prime ways lsmod is used is to examine modules when a system isn't working properly. However, most of the time, modules load as needed and users don't need to be aware of how they are working.

**[ Also see: [Must-know Linux Commands][2] ]**

### Listing modules

The easiest way to list modules is with the **lsmod** command. While this command provides a lot of detail, its output is the most user-friendly of the module commands.

```
$ lsmod
Module                  Size  Used by
snd_hda_codec_realtek 114688  1
snd_hda_codec_generic  77824  1 snd_hda_codec_realtek
ledtrig_audio          16384  2 snd_hda_codec_generic,snd_hda_codec_realtek
snd_hda_codec_hdmi     53248  1
snd_hda_intel          40960  2
snd_hda_codec         131072  4 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel
                                ,snd_hda_codec_realtek
snd_hda_core           86016  5 snd_hda_codec_generic,snd_hda_codec_hdmi,snd_hda_intel
                                ,snd_hda_codec,snd_hda_codec_realtek
snd_hwdep              20480  1 snd_hda_codec
snd_pcm               102400  4 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hda
                                _core
snd_seq_midi           20480  0
snd_seq_midi_event     16384  1 snd_seq_midi
dcdbas                 20480  0
snd_rawmidi            36864  1 snd_seq_midi
snd_seq                69632  2 snd_seq_midi,snd_seq_midi_event
coretemp               20480  0
snd_seq_device         16384  3 snd_seq,snd_seq_midi,snd_rawmidi
snd_timer              36864  2 snd_seq,snd_pcm
kvm_intel             241664  0
kvm                   626688  1 kvm_intel
radeon               1454080  10
irqbypass              16384  1 kvm
joydev                 24576  0
input_leds             16384  0
ttm                   102400  1 radeon
drm_kms_helper        180224  1 radeon
drm                   475136  13 drm_kms_helper,radeon,ttm
snd                    81920  15 snd_hda_codec_generic,snd_seq,snd_seq_device,snd_hda
                                 _codec_hdmi,snd_hwdep,snd_hda_intel,snd_hda_codec,snd
                                 _hda_codec_realtek,snd_timer,snd_pcm,snd_rawmidi
i2c_algo_bit           16384  1 radeon
fb_sys_fops            16384  1 drm_kms_helper
syscopyarea            16384  1 drm_kms_helper
serio_raw              20480  0
sysfillrect            16384  1 drm_kms_helper
sysimgblt              16384  1 drm_kms_helper
soundcore              16384  1 snd
mac_hid                16384  0
sch_fq_codel           20480  2
parport_pc             40960  0
ppdev                  24576  0
lp                     20480  0
parport                53248  3 parport_pc,lp,ppdev
ip_tables              28672  0
x_tables               40960  1 ip_tables
autofs4                45056  2
raid10                 57344  0
raid456               155648  0
async_raid6_recov      24576  1 raid456
async_memcpy           20480  2 raid456,async_raid6_recov
async_pq               24576  2 raid456,async_raid6_recov
async_xor              20480  3 async_pq,raid456,async_raid6_recov
async_tx               20480  5 async_pq,async_memcpy,async_xor,raid456,async_raid6_re
                                cov
xor                    24576  1 async_xor
raid6_pq              114688  3 async_pq,raid456,async_raid6_recov
libcrc32c              16384  1 raid456
raid1                  45056  0
raid0                  24576  0
multipath              20480  0
linear                 20480  0
hid_generic            16384  0
psmouse               151552  0
i2c_i801               32768  0
pata_acpi              16384  0
lpc_ich                24576  0
usbhid                 53248  0
hid                   126976  2 usbhid,hid_generic
e1000e                245760  0
floppy                 81920  0
```

In the output above:

  * "Module" shows the name of each module
  * "Size" shows the module size (not how much memory it is using)
  * "Used by" shows each module's usage count and the referring modules

Clearly, that's a _lot_ of modules. The number of modules loaded will depend on your system and distribution and what's running. We can count them like this:

```
$ lsmod | wc -l
67
```

To see the number of modules available on the system (not just running), try this command:

```
$ modprobe -c | wc -l
41272
```

### Other commands for examining modules

Linux provides several commands for listing, loading and unloading, examining, and checking the status of modules.

  * depmod -- generates modules.dep and map files
  * insmod -- a simple program to insert a module into the Linux Kernel
  * lsmod -- show the status of modules in the Linux Kernel
  * modinfo -- show information about a Linux Kernel module
  * modprobe -- add and remove modules from the Linux Kernel
  * rmmod -- a simple program to remove a module from the Linux Kernel

### Listing modules that are built in

As mentioned above, the **lsmod** command is the most convenient way to list loaded modules. There are, however, other ways to examine them. The modules.builtin file lists all modules that are built into the kernel and is used by modprobe when trying to load one of these modules. Note that **$(uname -r)** in the commands below provides the name of the kernel release.

```
$ more /lib/modules/$(uname -r)/modules.builtin | head -10
kernel/arch/x86/crypto/crc32c-intel.ko
kernel/arch/x86/events/intel/intel-uncore.ko
kernel/arch/x86/platform/intel/iosf_mbi.ko
kernel/mm/zpool.ko
kernel/mm/zbud.ko
kernel/mm/zsmalloc.ko
kernel/fs/binfmt_script.ko
kernel/fs/mbcache.ko
kernel/fs/configfs/configfs.ko
kernel/fs/crypto/fscrypto.ko
```

You can get some additional detail on a module by using the **modinfo** command, though nothing that qualifies as an easy explanation of what service the module provides. The omitted details from the output below include a lengthy signature.

```
$ modinfo floppy | head -16
filename:       /lib/modules/5.0.0-13-generic/kernel/drivers/block/floppy.ko
alias:          block-major-2-*
license:        GPL
author:         Alain L. Knaff
srcversion:     EBEAA26742DF61790588FD9
alias:          acpi*:PNP0700:*
alias:          pnp:dPNP0700*
depends:
retpoline:      Y
intree:         Y
name:           floppy
vermagic:       5.0.0-13-generic SMP mod_unload
sig_id:         PKCS#7
signer:
sig_key:
sig_hashalgo:   md4
```

You can load or unload a module using the **modprobe** command. Using a command like the one below, you can locate the kernel object associated with a particular module:

```
$ find /lib/modules/$(uname -r) -name floppy*
/lib/modules/5.0.0-13-generic/kernel/drivers/block/floppy.ko
```

If you needed to load the module, you could use a command like this one:

```
$ sudo modprobe floppy
```

### Wrap-up

Clearly the loading and unloading of modules is a big deal. It makes Linux systems considerably more flexible and efficient than if they ran with a one-size-fits-all kernel. It also means you can make significant changes — including adding hardware — without rebooting.

**[ Two-Minute Linux Tips: [Learn how to master a host of Linux commands in these 2-minute video tutorials][3] ]**

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3391362/looking-into-linux-modules.html#tk.rss_all

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://images.idgesg.net/images/article/2019/04/modules-100794941-large.jpg
[2]: https://www.networkworld.com/article/3391029/must-know-linux-commands.html
[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world

@ -0,0 +1,99 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Crowdsourcing license compliance with ClearlyDefined)
[#]: via: (https://opensource.com/article/19/5/license-compliance-clearlydefined)
[#]: author: (Jeff McAffer https://opensource.com/users/jeffmcaffer)

Crowdsourcing license compliance with ClearlyDefined
======
Licensing is what holds open source together, and ClearlyDefined takes
the mystery out of projects' licenses, copyright, and source location.
![][1]

Open source use continues to skyrocket, not just in use cases and scenarios but also in volume. It is trivial for a developer to depend on 1,000 JavaScript packages from a single run of `npm install` or have thousands of packages in a [Docker][2] image. At the same time, there is increased interest in ensuring license compliance.

Without the right license, you may not be able to legally use a software component in the way you intend, or you may have obligations that run counter to your business model. For instance, a JavaScript package could be marked as [MIT license][3], which allows commercial reuse, while one of its dependencies is licensed under a [copyleft license][4] that requires you to give your software away under the same license. Complying means finding the applicable license(s) and assessing and adhering to the terms, which is not too bad for individual components but can be daunting for large initiatives.

Fortunately, this open source challenge has an open source solution: [ClearlyDefined][5]. ClearlyDefined is a crowdsourced, open source, [Open Source Initiative][6] (OSI) effort to gather, curate, and upstream/normalize data about open source components, such as license, copyright, and source location. This data is the cornerstone of reducing the friction in open source license compliance.

The premise behind ClearlyDefined is simple: we are all struggling to find and understand key information related to the open source we use—whether it is finding the license, knowing who to attribute, or identifying the source that goes with a particular package. Rather than struggling independently, ClearlyDefined allows us to collaborate and share the compliance effort. Moreover, the ClearlyDefined community seeks to upstream any corrections so future releases are more clearly defined and make conventions more explicit to improve community understanding of project intent.

### How it works

![ClearlyDefined's harvest, curate, upstream process][7]

ClearlyDefined monitors the open source ecosystem and automatically harvests relevant data from open source components using a host of open source tools such as [ScanCode][8], [FOSSology][9], and [Licensee][10]. The results are summarized and aggregated to create a _definition_, which is then surfaced to users via an API and a UI. Each definition includes:

  * Declared license of the component
  * Licenses and copyrights discovered across all files
  * Exact source code location to the commit level
  * Release date
  * List of embedded components

Coincidentally (well, not really), this is exactly the data you need to do license compliance.
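
For a sense of what a definition looks like, here is a minimal Python sketch that pulls one from the ClearlyDefined REST API. Treat the endpoint path, the example npm coordinates, and the JSON field names as assumptions based on the project's public documentation rather than a guaranteed contract:

```
# Minimal sketch: fetch one component definition from ClearlyDefined.
# Assumptions: the documented /definitions/{type}/{provider}/{namespace}/
# {name}/{revision} endpoint shape, example coordinates, and field names.
import requests

coordinates = "npm/npmjs/-/lodash/4.17.11"  # example component
resp = requests.get(f"https://api.clearlydefined.io/definitions/{coordinates}")
resp.raise_for_status()
definition = resp.json()

print(definition["licensed"]["declared"])         # e.g. "MIT"
print(definition["described"]["sourceLocation"])  # commit-level source pointer
```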

### Curating

Any given definition may have gaps or imperfections due to tool issues or the data being missing or incorrect at the origin. ClearlyDefined enables users to curate the results by refining the values and filling in the gaps. These contributions are reviewed and merged, as with any open source project. The result is an improved dataset for all to use.

### Getting ahead

To a certain degree, this process is still chasing the problem—analyzing and curating after the packages have already been published. To get ahead of the game, the ClearlyDefined community also feeds merged curations back to the originating projects as pull requests (e.g., adding a license file, clarifying a copyright). This increases the clarity of future releases and sets up a virtuous cycle.

### Adapting, not mandating

In doing the analysis, we've found quite a number of approaches to expressing license-related data. Different communities put LICENSE files in different places or have different practices around attribution. The ClearlyDefined philosophy is to discover these conventions and adapt to them rather than asking the communities to do something different. A side benefit of this is that implicit conventions can be made more explicit, improving clarity for all.

Related to this, ClearlyDefined is careful to not look too hard for this interesting data. If we have to be too smart and infer too much to find the data, then there's a good chance the origin is not all that clear. Instead, we prefer to work with the community to better understand and clarify the conventions being used. From there, we can update the tools accordingly and make it easier to be "clearly defined."

#### NOTICE files

As an added bonus for users, we set up an API and UI for generating NOTICE files, making it trivial for you to comply with the attribution requirements found in most open source licenses. You can give ClearlyDefined a list of components (e.g., _drag and drop an npm package-lock.json file on the UI_) and get back a fully formed NOTICE file rendered by one of several renderers (e.g., text, HTML, Handlebars.js template). This is a snap, given that we already have all the compliance data. Big shout out to the [OSS Attribution Builder project][11] for making a simple and pluggable NOTICE renderer we could just pop into the ClearlyDefined service.

### Getting involved

You can get involved with ClearlyDefined in several ways:

  * Become an active user, contributing to your compliance workflow
  * Review other people's curations using the interface
  * Get involved in [the code][12] (Node and React)
  * Ask and answer questions on [our mailing list][13] or [Discord channel][14]
  * Contribute money to the OSI targeted to ClearlyDefined. We'll use that to fund development and curation.

We are excited to continue to grow our community of contributors so that licensing can continue to become an understandable part of any team's open source adoption. For more information, check out [https://clearlydefined.io][15].

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/license-compliance-clearlydefined

作者:[Jeff McAffer][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jeffmcaffer
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Crowdfunding_520x292_9597717_0612CM.png?itok=lxSKyFXU
[2]: https://opensource.com/resources/what-docker
[3]: /article/19/4/history-mit-license
[4]: /resources/what-is-copyleft
[5]: https://clearlydefined.io
[6]: https://opensource.org
[7]: https://opensource.com/sites/default/files/uploads/clearlydefined.png (ClearlyDefined's harvest, curate, upstream process)
[8]: https://github.com/nexB/scancode-toolkit
[9]: https://www.fossology.org/
[10]: https://github.com/licensee/licensee
[11]: https://github.com/amzn/oss-attribution-builder
[12]: https://github.com/clearlydefined
[13]: mailto:clearlydefined@googlegroups.com
[14]: https://clearlydefined.io/discord
[15]: https://clearlydefined.io/

@ -0,0 +1,99 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Format Python however you like with Black)
[#]: via: (https://opensource.com/article/19/5/python-black)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

Format Python however you like with Black
======
Learn more about solving common Python problems in our series covering
seven PyPI libraries.
![OpenStack source code \(Python\) in VIM][1]

Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.

In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. In the first article, we learned about [Cython][4]; today, we'll examine the **[Black][5]** code formatter.

### Black

Sometimes creativity can be a wonderful thing. Sometimes it is just a pain. I enjoy solving hard problems creatively, but I want my Python formatted as consistently as possible. Nobody has ever been impressed by code that uses "interesting" indentation.

But even worse than inconsistent formatting is a code review that consists of nothing but formatting nits. It is annoying to the reviewer—and even more annoying to the person whose code is reviewed. It's also infuriating when your linter tells you that your code is indented incorrectly, but gives no hint about the _correct_ amount of indentation.

Enter Black. Instead of telling you _what_ to do, Black is a good, industrious robot: it will fix your code for you.

To see how it works, feel free to write something beautifully inconsistent like:

```
def add(a, b): return a+b


def mult(a, b):
    return \
        a * b
```

Does Black complain? Goodness no, it just fixes it for you!

```
$ black math
reformatted math
All done! ✨ 🍰 ✨
1 file reformatted.
$ cat math
def add(a, b):
    return a + b


def mult(a, b):
    return a * b
```

Black does offer the option of failing instead of fixing and even outputting a **diff**-style edit. These options are great in a continuous integration (CI) system that enforces running Black locally. In addition, if the **diff** output is logged to the CI output, you can directly paste it into **patch** in the rare case that you need to fix your output but cannot install Black locally.

```
$ black --check --diff math
--- math 2019-04-09 17:24:22.747815 +0000
+++ math 2019-04-09 17:26:04.269451 +0000
@@ -1,7 +1,7 @@
-def add(a, b): return a + b
+def add(a, b):
+    return a + b


 def mult(a, b):
-    return \
-        a * b
+    return a * b

would reformat math
All done! 💥 💔 💥
1 file would be reformatted.
$ echo $?
1
```
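
If you want to wire this into your own tooling, check mode is easy to drive from a short script. Here is a minimal sketch of a CI-style gate; it only assumes the `black` executable is on the `PATH` and uses the same flags shown above:

```
# Minimal CI gate: run Black in check mode and surface the diff.
# Assumption: the `black` executable is installed and on PATH.
import subprocess
import sys

result = subprocess.run(
    ["black", "--check", "--diff", "."],
    capture_output=True,
    text=True,
)
if result.returncode != 0:
    print(result.stdout)  # the diff you can feed straight into `patch`
    sys.exit("Black would reformat some files; run `black .` locally.")
```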

In the next article in this series, we'll look at **attrs**, a library that helps you write concise, correct code quickly.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/python-black

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openstack_python_vim_1.jpg?itok=lHQK5zpm (OpenStack source code (Python) in VIM)
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
[3]: https://pypi.org/
[4]: https://opensource.com/article/19/4/7-python-problems-solved-cython
[5]: https://pypi.org/project/black/

@ -0,0 +1,61 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with Libki to manage public user computer access)
[#]: via: (https://opensource.com/article/19/5/libki-computer-access)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

Get started with Libki to manage public user computer access
======
Libki is a cross-platform computer reservation and time management
system.
![][1]

Libraries, schools, colleges, and other organizations that provide public computers need a good way to manage users' access—otherwise, there's no way to prevent some people from monopolizing the machines and ensure everyone has a fair amount of time. This is the problem that [Libki][2] was designed to solve.

Libki is an open source, cross-platform computer reservation and time management system for Windows and Linux PCs. It provides a web-based server and a web-based administration system that staff can use to manage computer access, including creating and deleting users, setting time limits on accounts, logging out and banning users, and setting access restrictions.

According to lead developer [Kyle Hall][3], Libki is mainly used for PC time control as an open source alternative to Envisionware's proprietary computer access control software. When users log into a Libki-managed computer, they get a block of time to use the computer; once that time is up, they are logged off. The default setting is 45 minutes, but that can easily be adjusted using the web-based administration system. Some organizations offer 24 hours of access before logging users off, and others use it to track usage without setting time limits.

Kyle is currently lead developer at [ByWater Solutions][4], which provides open source software solutions (including Libki) to libraries. He developed Libki early in his career when he was the IT tech at the [Meadville Public Library][5] in Pennsylvania. He was occasionally asked to cover the children's room during lunch breaks for other employees. The library used a paper sign-up sheet to manage access to the computers in the children's room, which meant constant supervision and checking to ensure equitable access for the people who came there.

Kyle said, "I found this system to be cumbersome and awkward, and I wanted to find a solution. That solution needed to be both FOSS and cross-platform. In the end, no existing software package suited our particular needs, and that is why I developed Libki."

Or, as Libki's website proclaims, "Libki was born of the need to avoid interacting with teenagers and now allows librarians to avoid interacting with teenagers around the world!"

### Easy to set up and use

I recently decided to try Libki in our local public library, where I frequently volunteer. I followed the [documentation][6] for the automatic installation, using Ubuntu 18.04 Server, and very quickly had it up and running.

I am planning to support Libki in our local library, but I wondered about libraries that don't have someone with IT experience or the ability to build and deploy a server. Kyle says, "ByWater Solutions can cloud-host a Libki server, which makes maintenance and management much simpler for everyone."

Kyle says ByWater is not planning to bundle Libki with its most popular offering, the open source integrated library system (ILS) Koha, or any of the other [projects][7] it supports. "Libki and Koha are different [types of] software serving different needs, but they definitely work well together in a library setting. In fact, it was quite early on that I developed Libki's SIP2 integration so it could support single sign-on using Koha," he says.

### How you can contribute

The Libki client is licensed under the GPLv3, and the Libki server is licensed under the AGPLv3. Kyle says he would love Libki to have a more active and robust community, and the project is always looking for new people to join its [contributors][8]. If you would like to participate, visit [Libki's Community page][9] and join the mailing list.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/libki-computer-access

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/desk_clock_job_work.jpg?itok=Nj4fuhl6
[2]: https://libki.org/
[3]: https://www.linkedin.com/in/kylemhallinfo/
[4]: https://opensource.com/article/19/4/software-libraries
[5]: https://meadvillelibrary.org/
[6]: https://manual.libki.org/master/libki-manual.html#_automatic_installation
[7]: https://bywatersolutions.com/projects
[8]: https://github.com/Libki/libki-server/graphs/contributors
[9]: https://libki.org/community/

@ -0,0 +1,62 @@

[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (The making of the Breaking the Code electronic book)
[#]: via: (https://opensource.com/article/19/5/code-book)
[#]: author: (Alicia Gibb https://opensource.com/users/aliciagibb)

The making of the Breaking the Code electronic book
======
Offering a safe space for middle school girls to learn technology speaks
volumes about who should be sitting around the tech table.
![Open hardware electronic book][1]

I like a good challenge. The [Open Source Stories team][2] came to me with a great one: Create a hardware project where each student could build her own piece, and the pieces could then be assembled into a larger whole. The students would be middle school girls. My job was to figure out the hardware and make this thing make sense.

After days of sketching out concepts, I was wandering through my local public library, and it dawned on me that the perfect piece of hardware where everyone could design their own part to create something whole is a book! The idea of a book using paper electronics was exciting, simple enough to be taught in a day, and fit the criteria of needing no special equipment, like soldering irons.

!["Breaking the Code" book cover][3]

I designed two parts to the electronics within the book. Half the circuits were developed with copper tape, LEDs, and DIY buttons, and half were developed with LilyPad Arduino microcontrollers, sensors, LEDs, and DIY buttons. Using the electronics in the book, the girls could make pages light up, buzz, or play music using various inputs such as button presses, page turns, or tilting the book.

!['Breaking the Code' interior pages][4]

We worked with young adult author [Lauren Sabel][5] to come up with the story, which features two girls who get locked in the basement of their school and have to solve puzzles to get out. Setting the scene in the basement gave us lots of opportunities to use lights! Along with the story, we received illustrations that the girls enhanced with electronics. The girls got creative, for example, using lights as the skeleton's eyes, not just for the obvious light bulb in the room.

Creating a curriculum that was flexible enough to empower each girl to build her own successfully functioning circuit was a vital piece of the user experience. We chose components so the circuit wouldn't need to be over-engineered. We also used breakout boards and LEDs with built-in resistors so that the circuits allowed flexibility and functioned with only basic knowledge of circuit design—without getting too muddled in the deep end.

!['Breaking the Code' interior pages][6]

The project curriculum gave girls the confidence and skills to understand electronics by building two circuits, in the process learning circuit layout, directional aspects, cause-and-effect through inputs and outputs, and how to identify various components. Controlling electrons by pushing them through a circuit feels a bit like you're controlling a tiny part of the universe. And seeing the girls' faces light up is like seeing a universe of opportunities open in front of them.

!['Breaking the Code' interior pages][7]

The girls were ecstatic to see their work as a completed book, taking pride in their pages and showing others what they had built.

![About 'Breaking the Code'][8]

Teaching them my little corner of the world for the day was a truly empowering experience for me. As a woman in tech, I think this is the right approach for companies trying to change the gender inequalities we see in tech. Offering a safe space to learn—with lots of people in the room who look like you as mentors—speaks volumes about who should be sitting around the tech table.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/code-book

作者:[Alicia Gibb][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/aliciagibb
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_book_electronics_hardware.jpg?itok=zb-zaiwz (Open hardware electronic book)
[2]: https://www.redhat.com/en/open-source-stories
[3]: https://opensource.com/sites/default/files/uploads/codebook_cover.jpg ("Breaking the Code" book cover)
[4]: https://opensource.com/sites/default/files/uploads/codebook_38-39.jpg ('Breaking the Code' interior pages)
[5]: https://www.amazon.com/Lauren-Sabel/e/B01M0FW223
[6]: https://opensource.com/sites/default/files/uploads/codebook_lightbulb.jpg ('Breaking the Code' interior pages)
[7]: https://opensource.com/sites/default/files/uploads/codebook_10-11.jpg ('Breaking the Code' interior pages)
[8]: https://opensource.com/sites/default/files/uploads/codebook_pg1.jpg (About 'Breaking the Code')

735
sources/tech/20190503 API evolution the right way.md
Normal file
735
sources/tech/20190503 API evolution the right way.md
Normal file
@ -0,0 +1,735 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (API evolution the right way)
|
||||
[#]: via: (https://opensource.com/article/19/5/api-evolution-right-way)
|
||||
[#]: author: (A. Jesse https://opensource.com/users/emptysquare)
|
||||
|
||||
API evolution the right way
|
||||
======
|
||||
Ten covenants that responsible library authors keep with their users.
|
||||
![Browser of things][1]
|
||||
|
||||
Imagine you are a creator deity, designing a body for a creature. In your benevolence, you wish for the creature to evolve over time: first, because it must respond to changes in its environment, and second, because your wisdom grows and you think of better designs for the beast. It shouldn't remain in the same body forever!
|
||||
|
||||
![Serpents][2]
|
||||
|
||||
The creature, however, might be relying on features of its present anatomy. You can't add wings or change its scales without warning. It needs an orderly process to adapt its lifestyle to its new body. How can you, as a responsible designer in charge of this creature's natural history, gently coax it toward ever greater improvements?
|
||||
|
||||
It's the same for responsible library maintainers. We keep our promises to the people who depend on our code: we release bugfixes and useful new features. We sometimes delete features if that's beneficial for the library's future. We continue to innovate, but we don't break the code of people who use our library. How can we fulfill all those goals at once?
|
||||
|
||||
### Add useful features
|
||||
|
||||
Your library shouldn't stay the same for eternity: you should add features that make your library better for your users. For example, if you have a Reptile class and it would be useful to have wings for flying, go for it.
|
||||
|
||||
|
||||
```
|
||||
class Reptile:
|
||||
@property
|
||||
def teeth(self):
|
||||
return 'sharp fangs'
|
||||
|
||||
# If wings are useful, add them!
|
||||
@property
|
||||
def wings(self):
|
||||
return 'majestic wings'
|
||||
```
|
||||
|
||||
But beware, features come with risk. Consider the following feature in the Python standard library, and see what went wrong with it.
|
||||
|
||||
|
||||
```
|
||||
bool(datetime.time(9, 30)) == True
|
||||
bool(datetime.time(0, 0)) == False
|
||||
```
|
||||
|
||||
This is peculiar: converting any time object to a boolean yields True, except for midnight. (Worse, the rules for timezone-aware times are even stranger.)
|
||||
|
||||
I've been writing Python for more than a decade but I didn't discover this rule until last week. What kind of bugs can this odd behavior cause in users' code?
|
||||
|
||||
Consider a calendar application with a function that creates events. If an event has an end time, the function requires it to also have a start time.
|
||||
|
||||
|
||||
```
|
||||
def create_event(day,
|
||||
start_time=None,
|
||||
end_time=None):
|
||||
if end_time and not start_time:
|
||||
raise ValueError("Can't pass end_time without start_time")
|
||||
|
||||
# The coven meets from midnight until 4am.
|
||||
create_event(datetime.date.today(),
|
||||
datetime.time(0, 0),
|
||||
datetime.time(4, 0))
|
||||
```
|
||||
|
||||
Unfortunately for witches, an event starting at midnight fails this validation. A careful programmer who knows about the quirk at midnight can write this function correctly, of course.
|
||||
|
||||
|
||||
```
|
||||
def create_event(day,
|
||||
start_time=None,
|
||||
end_time=None):
|
||||
if end_time is not None and start_time is None:
|
||||
raise ValueError("Can't pass end_time without start_time")
|
||||
```
|
||||
|
||||
But this subtlety is worrisome. If a library creator wanted to make an API that bites users, a "feature" like the boolean conversion of midnight works nicely.
|
||||
|
||||
![Man being chased by an alligator][3]
|
||||
|
||||
The responsible creator's goal, however, is to make your library easy to use correctly.
|
||||
|
||||
This feature was written by Tim Peters when he first made the datetime module in 2002. Even founding Pythonistas like Tim make mistakes. [The quirk was removed][4], and all times are True now.
|
||||
|
||||
|
||||
```
|
||||
# Python 3.5 and later.
|
||||
|
||||
bool(datetime.time(9, 30)) == True
|
||||
bool(datetime.time(0, 0)) == True
|
||||
```
|
||||
|
||||
Programmers who didn't know about the oddity of midnight are saved from obscure bugs, but it makes me nervous to think about any code that relies on the weird old behavior and didn't notice the change. It would have been better if this bad feature were never implemented at all. This leads us to the first promise of any library maintainer:
|
||||
|
||||
#### First covenant: Avoid bad features
|
||||
|
||||
The most painful change to make is when you have to delete a feature. One way to avoid bad features is to add few features in general! Make no public method, class, function, or property without a good reason. Thus:
|
||||
|
||||
#### Second covenant: Minimize features
|
||||
|
||||
Features are like children: conceived in a moment of passion, they must be supported for years. Don't do anything silly just because you can. Don't add feathers to a snake!
|
||||
|
||||
![Serpents with and without feathers][5]
|
||||
|
||||
But of course, there are plenty of occasions when users need something from your library that it does not yet offer. How do you choose the right feature to give them? Here's another cautionary tale.
|
||||
|
||||
### A cautionary tale from asyncio
|
||||
|
||||
As you may know, when you call a coroutine function, it returns a coroutine object:
|
||||
|
||||
|
||||
```
|
||||
async def my_coroutine():
|
||||
pass
|
||||
|
||||
print(my_coroutine())
|
||||
|
||||
[/code] [code]`<coroutine object my_coroutine at 0x10bfcbac8>`
|
||||
```
|
||||
|
||||
Your code must "await" this object to run the coroutine. It's easy to forget this, so asyncio's developers wanted a "debug mode" that catches this mistake. Whenever a coroutine is destroyed without being awaited, the debug mode prints a warning with a traceback to the line where it was created.
|
||||
|
||||
When Yury Selivanov implemented the debug mode, he added as its foundation a "coroutine wrapper" feature. The wrapper is a function that takes in a coroutine and returns anything at all. Yury used it to install the warning logic on each coroutine, but someone else could use it to turn coroutines into the string "hi!"
|
||||
|
||||
|
||||
```
|
||||
import sys
|
||||
|
||||
def my_wrapper(coro):
|
||||
return 'hi!'
|
||||
|
||||
sys.set_coroutine_wrapper(my_wrapper)
|
||||
|
||||
async def my_coroutine():
|
||||
pass
|
||||
|
||||
print(my_coroutine())
|
||||
|
||||
[/code] [code]`hi!`
|
||||
```
|
||||
|
||||
That is one hell of a customization. It changes the very meaning of "async." Calling set_coroutine_wrapper once will globally and permanently change all coroutine functions. It is, [as Nathaniel Smith wrote][6], "a problematic API" that is prone to misuse and had to be removed. The asyncio developers could have avoided the pain of deleting the feature if they'd better shaped it to its purpose. Responsible creators must keep this in mind:
|
||||
|
||||
#### Third covenant: Keep features narrow
|
||||
|
||||
Luckily, Yury had the good judgment to mark this feature provisional, so asyncio users knew not to rely on it. Nathaniel was free to replace **set_coroutine_wrapper** with a narrower feature that only customized the traceback depth.
|
||||
|
||||
|
||||
```
|
||||
import sys
|
||||
|
||||
sys.set_coroutine_origin_tracking_depth(2)
|
||||
|
||||
async def my_coroutine():
|
||||
pass
|
||||
|
||||
print(my_coroutine())
|
||||
|
||||
|
||||
|
||||
<coroutine object my_coroutine at 0x10bfcbac8>
|
||||
|
||||
RuntimeWarning: 'my_coroutine' was never awaited
|
||||
|
||||
Coroutine created at (most recent call last)
|
||||
File "script.py", line 8, in <module>
|
||||
print(my_coroutine())
|
||||
```
|
||||
|
||||
This is much better. There's no more global setting that can change coroutines' type, so asyncio users need not code as defensively. Deities should all be as farsighted as Yury.
|
||||
|
||||
#### Fourth covenant: Mark experimental features "provisional"
|
||||
|
||||
If you have merely a hunch that your creature wants horns and a quadruple-forked tongue, introduce the features but mark them "provisional."
|
||||
|
||||
![Serpent with horns][7]
|
||||
|
||||
You might discover that the horns are extraneous but the quadruple-forked tongue is useful after all. In the next release of your library, you can delete the former and mark the latter official.
|
||||
|
||||
### Deleting features
|
||||
|
||||
No matter how wisely we guide our creature's evolution, there may come a time when it's best to delete an official feature. For example, you might have created a lizard, and now you choose to delete its legs. Perhaps you want to transform this awkward creature into a sleek and modern python.
|
||||
|
||||
![Lizard transformed to snake][8]
|
||||
|
||||
There are two main reasons to delete features. First, you might discover a feature was a bad idea, through user feedback or your own growing wisdom. That was the case with the quirky behavior of midnight. Or, the feature might have been well-adapted to your library's environment at first, but the ecology changes. Perhaps another deity invents mammals. Your creature wants to squeeze into the mammals' little burrows and eat the tasty mammal filling, so it has to lose its legs.
|
||||
|
||||
![A mouse][9]
|
||||
|
||||
Similarly, the Python standard library deletes features in response to changes in the language itself. Consider asyncio's Lock. It has been awaitable ever since "await" was added as a keyword:
|
||||
|
||||
|
||||
```
|
||||
lock = asyncio.Lock()
|
||||
|
||||
async def critical_section():
|
||||
await lock
|
||||
try:
|
||||
print('holding lock')
|
||||
finally:
|
||||
lock.release()
|
||||
```
|
||||
|
||||
But now, we can do "async with lock."
|
||||
|
||||
|
||||
```
|
||||
lock = asyncio.Lock()
|
||||
|
||||
async def critical_section():
|
||||
async with lock:
|
||||
print('holding lock')
|
||||
```
|
||||
|
||||
The new style is much better! It's short and less prone to mistakes in a big function with other try-except blocks. Since "there should be one and preferably only one obvious way to do it," [the old syntax is deprecated in Python 3.7][10] and it will be banned soon.
|
||||
|
||||
It's inevitable that ecological change will have this effect on your code, too, so learn to delete features gently. Before you do so, consider the cost or benefit of deleting it. Responsible maintainers are reluctant to make their users change a large amount of their code or change their logic. (Remember how painful it was when Python 3 removed the "u" string prefix, before it was added back.) If the code changes are mechanical, however, like a simple search-and-replace, or if the feature is dangerous, it may be worth deleting.
|
||||
|
||||
#### Whether to delete a feature
|
||||
|
||||
![Balance scales][11]
|
||||
|
||||
Con | Pro
|
||||
---|---
|
||||
Code must change | Change is mechanical
|
||||
Logic must change | Feature is dangerous
|
||||
|
||||
In the case of our hungry lizard, we decide to delete its legs so it can slither into a mouse's hole and eat it. How do we go about this? We could just delete the **walk** method, changing code from this:
|
||||
|
||||
|
||||
```
|
||||
class Reptile:
|
||||
def walk(self):
|
||||
print('step step step')
|
||||
```
|
||||
|
||||
to this:
|
||||
|
||||
|
||||
```
|
||||
class Reptile:
|
||||
def slither(self):
|
||||
print('slide slide slide')
|
||||
```
|
||||
|
||||
That's not a good idea; the creature is accustomed to walking! Or, in terms of a library, your users have code that relies on the existing method. When they upgrade to the latest version of your library, their code will break.
|
||||
|
||||
|
||||
```
|
||||
# User's code. Oops!
|
||||
Reptile.walk()
|
||||
```
|
||||
|
||||
Therefore, responsible creators make this promise:
|
||||
|
||||
#### Fifth covenant: Delete features gently
|
||||
|
||||
There are a few steps involved in deleting a feature gently. Starting with a lizard that walks with its legs, you first add the new method, "slither." Next, deprecate the old method.
|
||||
|
||||
|
||||
```
|
||||
import warnings
|
||||
|
||||
class Reptile:
|
||||
def walk(self):
|
||||
warnings.warn(
|
||||
"walk is deprecated, use slither",
|
||||
DeprecationWarning, stacklevel=2)
|
||||
print('step step step')
|
||||
|
||||
def slither(self):
|
||||
print('slide slide slide')
|
||||
```
|
||||
|
||||
The Python warnings module is quite powerful. By default it prints warnings to stderr, only once per code location, but you can silence warnings or turn them into exceptions, among other options.
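For instance, a user's test suite can promote this library's deprecation warnings to hard errors from inside Python, with no command-line flags. Here is a minimal sketch (the filter choices shown are just one reasonable configuration):

```
import warnings

# Escalate every DeprecationWarning in the process to an exception,
# e.g. at the top of a test suite's configuration.
warnings.simplefilter('error', category=DeprecationWarning)

# Or scope the escalation to a single block of code under test.
with warnings.catch_warnings():
    warnings.simplefilter('error', category=DeprecationWarning)
    # Reptile().walk() would raise DeprecationWarning here.
```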
|
||||
|
||||
As soon as you add this warning to your library, PyCharm and other IDEs render the deprecated method with a strikethrough. Users know right away that the method is due for deletion.
|
||||
|
||||
`Reptile().walk()`
|
||||
|
||||
What happens when they run their code with the upgraded library?
|
||||
|
||||
|
||||
```
|
||||
$ python3 script.py
|
||||
|
||||
DeprecationWarning: walk is deprecated, use slither
|
||||
script.py:14: Reptile().walk()
|
||||
|
||||
step step step
|
||||
```
|
||||
|
||||
By default, they see a warning on stderr, but the script succeeds and prints "step step step." The warning's traceback shows what line of the user's code must be fixed. (That's what the "stacklevel" argument does: it shows the call site that users need to change, not the line in your library where the warning is generated.) Notice that the error message is instructive: it describes what a library user must do to migrate to the new version.
|
||||
|
||||
Your users will want to test their code and prove they call no deprecated library methods. Warnings alone won't make unit tests fail, but exceptions will. Python has a command-line option to turn deprecation warnings into exceptions.
|
||||
|
||||
|
||||
```
|
||||
> python3 -Werror::DeprecationWarning script.py
|
||||
|
||||
Traceback (most recent call last):
|
||||
File "script.py", line 14, in <module>
|
||||
Reptile().walk()
|
||||
File "script.py", line 8, in walk
|
||||
DeprecationWarning, stacklevel=2)
|
||||
DeprecationWarning: walk is deprecated, use slither
|
||||
```
|
||||
|
||||
Now, "step step step" is not printed, because the script terminates with an error.
|
||||
|
||||
So, once you've released a version of your library that warns about the deprecated "walk" method, you can delete it safely in the next release. Right?
|
||||
|
||||
Consider what your library's users might have in their projects' requirements.
|
||||
|
||||
|
||||
```
|
||||
# User's requirements.txt has a dependency on the reptile package.
|
||||
reptile
|
||||
```
|
||||
|
||||
The next time they deploy their code, they'll install the latest version of your library. If they haven't yet handled all deprecations, then their code will break, because it still depends on "walk." You need to be gentler than this. There are three more promises you must keep to your users: maintain a changelog, choose a version scheme, and write an upgrade guide.
|
||||
|
||||
#### Sixth covenant: Maintain a changelog
|
||||
|
||||
Your library must have a changelog; its main purpose is to announce when a feature that your users rely on is deprecated or deleted.
|
||||
|
||||
#### Changes in Version 1.1
|
||||
|
||||
**New features**
|
||||
|
||||
* New function Reptile.slither()
|
||||
|
||||
|
||||
|
||||
**Deprecations**
|
||||
|
||||
* Reptile.walk() is deprecated and will be removed in version 2.0, use slither()
|
||||
|
||||
|
||||
---
|
||||
|
||||
Responsible creators use version numbers to express how a library has changed so users can make informed decisions about upgrading. A "version scheme" is a language for communicating the pace of change.
|
||||
|
||||
#### Seventh covenant: Choose a version scheme
|
||||
|
||||
There are two schemes in widespread use, [semantic versioning][12] and time-based versioning. I recommend semantic versioning for nearly any library. The Python flavor thereof is defined in [PEP 440][13], and tools like **pip** understand semantic version numbers.
|
||||
|
||||
If you choose semantic versioning for your library, you can delete its legs gently with version numbers like:
|
||||
|
||||
> 1.0: First "stable" release, with walk()
|
||||
> 1.1: Add slither(), deprecate walk()
|
||||
> 2.0: Delete walk()
|
||||
|
||||
Your users should depend on a range of your library's versions, like so:
|
||||
|
||||
|
||||
```
|
||||
# User's requirements.txt.
|
||||
reptile>=1,<2
|
||||
```
|
||||
|
||||
This allows them to upgrade automatically within a major release, receiving bugfixes and potentially raising some deprecation warnings, but not upgrading to the _next_ major release and risking a change that breaks their code.
|
||||
|
||||
If you follow time-based versioning, your releases might be numbered thus:
|
||||
|
||||
> 2017.06.0: A release in June 2017
|
||||
> 2018.11.0: Add slither(), deprecate walk()
|
||||
> 2019.04.0: Delete walk()
|
||||
|
||||
And users can depend on your library like:
|
||||
|
||||
|
||||
```
|
||||
# User's requirements.txt for time-based version.
|
||||
reptile==2018.11.*
|
||||
```
|
||||
|
||||
This is terrific, but how do your users know your versioning scheme and how to test their code for deprecations? You have to advise them how to upgrade.
|
||||
|
||||
#### Eighth covenant: Write an upgrade guide
|
||||
|
||||
Here's how a responsible library creator might guide users:
|
||||
|
||||
#### Upgrading to 2.0
|
||||
|
||||
**Migrate from Deprecated APIs**
|
||||
|
||||
See the changelog for deprecated features.
|
||||
|
||||
**Enable Deprecation Warnings**
|
||||
|
||||
Upgrade to 1.1 and test your code with:
|
||||
|
||||
`python -Werror::DeprecationWarning`
|
||||
|
||||
Now it's safe to upgrade.
|
||||
|
||||
---
|
||||
|
||||
You must teach users how to handle deprecation warnings by showing them the command line options. Not all Python programmers know this—I certainly have to look up the syntax each time. And take note, you must _release_ a version that prints warnings from each deprecated API so users can test with that version before upgrading again. In this example, version 1.1 is the bridge release. It allows your users to rewrite their code incrementally, fixing each deprecation warning separately until they have entirely migrated to the latest API. They can test changes to their code and changes in your library, independently from each other, and isolate the cause of bugs.
|
||||
|
||||
If you chose semantic versioning, this transitional period lasts until the next major release, from 1.x to 2.0, or from 2.x to 3.0, and so on. The gentle way to delete a creature's legs is to give it at least one version in which to adjust its lifestyle. Don't remove the legs all at once!
|
||||
|
||||
![A skink][14]
|
||||
|
||||
Version numbers, deprecation warnings, the changelog, and the upgrade guide work together to gently evolve your library without breaking the covenant with your users. The [Twisted project's Compatibility Policy][15] explains this beautifully:
|
||||
|
||||
> "The First One's Always Free"
|
||||
>
|
||||
> Any application which runs without warnings may be upgraded one minor version of Twisted.
|
||||
>
|
||||
> In other words, any application which runs its tests without triggering any warnings from Twisted should be able to have its Twisted version upgraded at least once with no ill effects except the possible production of new warnings.
|
||||
|
||||
Now, we creator deities have gained the wisdom and power to add features by adding methods and to delete them gently. We can also add features by adding parameters, but this brings a new level of difficulty. Are you ready?
|
||||
|
||||
### Adding parameters
|
||||
|
||||
Imagine that you just gave your snake-like creature a pair of wings. Now you must allow it the choice whether to move by slithering or flying. Currently its "move" function takes one parameter.
|
||||
|
||||
|
||||
```
|
||||
# Your library code.
|
||||
def move(direction):
|
||||
print(f'slither {direction}')
|
||||
|
||||
# A user's application.
|
||||
move('north')
|
||||
```
|
||||
|
||||
You want to add a "mode" parameter, but this breaks your users' code if they upgrade, because they pass only one argument.
|
||||
|
||||
|
||||
```
|
||||
# Your library code.
|
||||
def move(direction, mode):
|
||||
assert mode in ('slither', 'fly')
|
||||
print(f'{mode} {direction}')
|
||||
|
||||
# A user's application. Error!
|
||||
move('north')
|
||||
```
|
||||
|
||||
A truly wise creator promises not to break users' code this way.
|
||||
|
||||
#### Ninth covenant: Add parameters compatibly
|
||||
|
||||
To keep this covenant, add each new parameter with a default value that preserves the original behavior.
|
||||
|
||||
|
||||
```
|
||||
# Your library code.
|
||||
def move(direction, mode='slither'):
|
||||
assert mode in ('slither', 'fly')
|
||||
print(f'{mode} {direction}')
|
||||
|
||||
# A user's application.
|
||||
move('north')
|
||||
```
|
||||
|
||||
Over time, parameters are the natural history of your function's evolution. They're listed oldest first, each with a default value. Library users can pass keyword arguments to opt into specific new behaviors and accept the defaults for all others.
|
||||
|
||||
|
||||
```
|
||||
# Your library code.
|
||||
def move(direction,
|
||||
mode='slither',
|
||||
turbo=False,
|
||||
extra_sinuous=False,
|
||||
hail_lyft=False):
|
||||
# ...
|
||||
|
||||
# A user's application.
|
||||
move('north', extra_sinuous=True)
|
||||
```
|
||||
|
||||
There is a danger, however, that a user might write code like this:
|
||||
|
||||
|
||||
```
|
||||
# A user's application, poorly-written.
|
||||
move('north', 'slither', False, True)
|
||||
```
|
||||
|
||||
What happens if, in the next major version of your library, you get rid of one of the parameters, like "turbo"?
|
||||
|
||||
|
||||
```
|
||||
# Your library code, next major version. "turbo" is deleted.
|
||||
def move(direction,
|
||||
mode='slither',
|
||||
extra_sinuous=False,
|
||||
hail_lyft=False):
|
||||
# ...
|
||||
|
||||
# A user's application, poorly-written.
|
||||
move('north', 'slither', False, True)
|
||||
```
|
||||
|
||||
The user's code still compiles, and this is a bad thing. The code stopped moving extra-sinuously and started hailing a Lyft, which was not the intention. I trust that you can predict what I'll say next: Deleting a parameter requires several steps. First, of course, deprecate the "turbo" parameter. I like a technique like this one, which detects whether any user's code relies on this parameter.
|
||||
|
||||
|
||||
```
|
||||
# Your library code.
|
||||
_turbo_default = object()
|
||||
|
||||
def move(direction,
|
||||
mode='slither',
|
||||
turbo=_turbo_default,
|
||||
extra_sinuous=False,
|
||||
hail_lyft=False):
|
||||
if turbo is not _turbo_default:
|
||||
warnings.warn(
|
||||
"'turbo' is deprecated",
|
||||
DeprecationWarning,
|
||||
stacklevel=2)
|
||||
else:
|
||||
# The old default.
|
||||
turbo = False
|
||||
```
|
||||
|
||||
But your users might not notice the warning. Warnings are not very loud: they can be suppressed or lost in log files. Users might heedlessly upgrade to the next major version of your library, the version that deletes "turbo." Their code will run without error and silently do the wrong thing! As the Zen of Python says, "Errors should never pass silently." Indeed, reptiles hear poorly, so you must correct them very loudly when they make mistakes.
|
||||
|
||||
![Woman riding an alligator][16]
|
||||
|
||||
The best way to protect your users is with Python 3's star syntax, which requires callers to pass keyword arguments.
|
||||
|
||||
|
||||
```
|
||||
# Your library code.
|
||||
# All arguments after "*" must be passed by keyword.
|
||||
def move(direction,
|
||||
*,
|
||||
mode='slither',
|
||||
turbo=False,
|
||||
extra_sinuous=False,
|
||||
hail_lyft=False):
|
||||
# ...
|
||||
|
||||
# A user's application, poorly-written.
|
||||
# Error! Can't use positional args, keyword args required.
|
||||
move('north', 'slither', False, True)
|
||||
```
|
||||
|
||||
With the star in place, this is the only syntax allowed:
|
||||
|
||||
|
||||
```
|
||||
# A user's application.
|
||||
move('north', extra_sinuous=True)
|
||||
```
|
||||
|
||||
Now when you delete "turbo," you can be certain any user code that relies on it will fail loudly. If your library also supports Python 2, there's no shame in that; you can simulate the star syntax thus ([credit to Brett Slatkin][17]):
|
||||
|
||||
|
||||
```
|
||||
# Your library code, Python 2 compatible.
|
||||
def move(direction, **kwargs):
|
||||
mode = kwargs.pop('mode', 'slither')
|
||||
turbo = kwargs.pop('turbo', False)
|
||||
sinuous = kwargs.pop('extra_sinuous', False)
|
||||
lyft = kwargs.pop('hail_lyft', False)
|
||||
|
||||
if kwargs:
|
||||
raise TypeError('Unexpected kwargs: %r'
|
||||
% kwargs)
|
||||
|
||||
# ...
|
||||
```
|
||||
|
||||
Requiring keyword arguments is a wise choice, but it requires foresight. If you allow an argument to be passed positionally, you cannot convert it to keyword-only in a later release. So, add the star now. You can observe in the asyncio API that it uses the star pervasively in constructors, methods, and functions. Even though "Lock" only takes one optional parameter so far, the asyncio developers added the star right away. This is providential.
|
||||
|
||||
|
||||
```
|
||||
# In asyncio.
|
||||
class Lock:
|
||||
def __init__(self, *, loop=None):
|
||||
# ...
|
||||
```
|
||||
|
||||
Now we've gained the wisdom to change methods and parameters while keeping our covenant with users. The time has come to try the most challenging kind of evolution: changing behavior without changing either methods or parameters.
|
||||
|
||||
### Changing behavior
|
||||
|
||||
Let's say your creature is a rattlesnake, and you want to teach it a new behavior.
|
||||
|
||||
![Rattlesnake][18]
|
||||
|
||||
Sidewinding! The creature's body will appear the same, but its behavior will change. How can we prepare it for this step of its evolution?
|
||||
|
||||
![][19]
|
||||
|
||||
Image by HCA [[CC BY-SA 4.0][20]], [via Wikimedia Commons][21], modified by Opensource.com
|
||||
|
||||
A responsible creator can learn from the following example in the Python standard library, when behavior changed without a new function or parameters. Once upon a time, the os.stat function was introduced to get file statistics, like the creation time. At first, times were always integers.
|
||||
|
||||
|
||||
```
|
||||
>>> os.stat('file.txt').st_ctime
|
||||
1540817862
|
||||
```
|
||||
|
||||
One day, the core developers decided to use floats for os.stat times to give sub-second precision. But they worried that existing user code wasn't ready for the change. They created a setting in Python 2.3, "stat_float_times," that was false by default. A user could set it to True to opt into floating-point timestamps.
|
||||
|
||||
|
||||
```
|
||||
>>> # Python 2.3.
|
||||
>>> os.stat_float_times(True)
|
||||
>>> os.stat('file.txt').st_ctime
|
||||
1540817862.598021
|
||||
```
|
||||
|
||||
Starting in Python 2.5, float times became the default, so any new code written for 2.5 and later could ignore the setting and expect floats. Of course, you could set it to False to keep the old behavior or set it to True to ensure the new behavior in all Python versions, and prepare your code for the day when stat_float_times is deleted.
|
||||
|
||||
Ages passed. In Python 3.1, the setting was deprecated to prepare people for the distant future and finally, after its decades-long journey, [the setting was removed][22]. Float times are now the only option. It's a long road, but responsible deities are patient because we know this gradual process has a good chance of saving users from unexpected behavior changes.
|
||||
|
||||
#### Tenth covenant: Change behavior gradually
|
||||
|
||||
Here are the steps (a sketch in code follows the list):
|
||||
|
||||
* Add a flag to opt into the new behavior, default False, warn if it's False
|
||||
* Change default to True, deprecate flag entirely
|
||||
* Remove the flag
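
Here is a minimal sketch of the first step, using a hypothetical `sidewinding` flag for our rattlesnake (the flag name and warning text are illustrative, not from any real library):

```
import warnings

def move(direction, *, sidewinding=False):
    if not sidewinding:
        warnings.warn(
            "move() will sidewind by default in a future release; "
            "pass sidewinding=True to opt in now",
            DeprecationWarning, stacklevel=2)
        print(f'slither {direction}')   # old behavior
    else:
        print(f'sidewind {direction}')  # new behavior
```

In the next major release the default flips to True and the flag itself is deprecated; one release later, the flag disappears.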
|
||||
|
||||
|
||||
|
||||
If you follow semantic versioning, the versions might be like so:
|
||||
|
||||
Library version | Library API | User code
---|---|---
1.0 | No flag | Expect old behavior
1.1 | Add flag, default False, warn if it's False | Set flag True, handle new behavior
2.0 | Change default to True, deprecate flag entirely | Handle new behavior
3.0 | Remove flag | Handle new behavior
|
||||
|
||||
You need _two_ major releases to complete the maneuver. If you had gone straight from "Add flag, default False, warn if it's False" to "Remove flag" without the intervening release, your users' code would be unable to upgrade. User code written correctly for 1.1, which sets the flag to True and handles the new behavior, must be able to upgrade to the next release with no ill effect except new warnings, but if the flag were deleted in the next release, that code would break. A responsible deity never violates the Twisted policy: "The First One's Always Free."
|
||||
|
||||
### The responsible creator
|
||||
|
||||
![Demeter][23]
|
||||
|
||||
Our 10 covenants belong loosely in three categories:
|
||||
|
||||
**Evolve cautiously**
|
||||
|
||||
1. Avoid bad features
|
||||
2. Minimize features
|
||||
3. Keep features narrow
|
||||
4. Mark experimental features "provisional"
|
||||
5. Delete features gently
|
||||
|
||||
|
||||
|
||||
**Record history rigorously**
|
||||
|
||||
1. Maintain a changelog
|
||||
2. Choose a version scheme
|
||||
3. Write an upgrade guide
|
||||
|
||||
|
||||
|
||||
**Change slowly and loudly**
|
||||
|
||||
1. Add parameters compatibly
|
||||
2. Change behavior gradually
|
||||
|
||||
|
||||
|
||||
If you keep these covenants with your creature, you'll be a responsible creator deity. Your creature's body can evolve over time, forever improving and adapting to changes in its environment but without sudden changes the creature isn't prepared for. If you maintain a library, keep these promises to your users and you can innovate your library without breaking the code of the people who rely on you.
|
||||
|
||||
* * *
|
||||
|
||||
_This article originally appeared on[A. Jesse Jiryu Davis's blog][24] and is republished with permission._
|
||||
|
||||
Illustration credits:
|
||||
|
||||
* [The World's Progress, The Delphian Society, 1913][25]
|
||||
* [Essay Towards a Natural History of Serpents, Charles Owen, 1742][26]
|
||||
* [On the batrachia and reptilia of Costa Rica: With notes on the herpetology and ichthyology of Nicaragua and Peru, Edward Drinker Cope, 1875][27]
|
||||
* [Natural History, Richard Lydekker et al., 1897][28]
|
||||
* [Mes Prisons, Silvio Pellico, 1843][29]
|
||||
* [Tierfotoagentur / m.blue-shadow][30]
|
||||
* [Los Angeles Public Library, 1930][31]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/api-evolution-right-way
|
||||
|
||||
作者:[A. Jesse][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/emptysquare
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things)
|
||||
[2]: https://opensource.com/sites/default/files/uploads/praise-the-creator.jpg (Serpents)
|
||||
[3]: https://opensource.com/sites/default/files/uploads/bite.jpg (Man being chased by an alligator)
|
||||
[4]: https://bugs.python.org/issue13936
|
||||
[5]: https://opensource.com/sites/default/files/uploads/feathers.jpg (Serpents with and without feathers)
|
||||
[6]: https://bugs.python.org/issue32591
|
||||
[7]: https://opensource.com/sites/default/files/uploads/horns.jpg (Serpent with horns)
|
||||
[8]: https://opensource.com/sites/default/files/uploads/lizard-to-snake.jpg (Lizard transformed to snake)
|
||||
[9]: https://opensource.com/sites/default/files/uploads/mammal.jpg (A mouse)
|
||||
[10]: https://bugs.python.org/issue32253
|
||||
[11]: https://opensource.com/sites/default/files/uploads/scale.jpg (Balance scales)
|
||||
[12]: https://semver.org
|
||||
[13]: https://www.python.org/dev/peps/pep-0440/
|
||||
[14]: https://opensource.com/sites/default/files/uploads/skink.jpg (A skink)
|
||||
[15]: https://twistedmatrix.com/documents/current/core/development/policy/compatibility-policy.html
|
||||
[16]: https://opensource.com/sites/default/files/uploads/loudly.jpg (Woman riding an alligator)
|
||||
[17]: http://www.informit.com/articles/article.aspx?p=2314818
|
||||
[18]: https://opensource.com/sites/default/files/uploads/rattlesnake.jpg (Rattlesnake)
|
||||
[19]: https://opensource.com/sites/default/files/articles/neonate_sidewinder_sidewinding_with_tracks_unlabeled.png
|
||||
[20]: https://creativecommons.org/licenses/by-sa/4.0
|
||||
[21]: https://commons.wikimedia.org/wiki/File:Neonate_sidewinder_sidewinding_with_tracks_unlabeled.jpg
|
||||
[22]: https://bugs.python.org/issue31827
|
||||
[23]: https://opensource.com/sites/default/files/uploads/demeter.jpg (Demeter)
|
||||
[24]: https://emptysqua.re/blog/api-evolution-the-right-way/
|
||||
[25]: https://www.gutenberg.org/files/42224/42224-h/42224-h.htm
|
||||
[26]: https://publicdomainreview.org/product-att/artist/charles-owen/
|
||||
[27]: https://archive.org/details/onbatrachiarepti00cope/page/n3
|
||||
[28]: https://www.flickr.com/photos/internetarchivebookimages/20556001490
|
||||
[29]: https://www.oldbookillustrations.com/illustrations/stationery/
|
||||
[30]: https://www.alamy.com/mediacomp/ImageDetails.aspx?ref=D7Y61W
|
||||
[31]: https://www.vintag.es/2013/06/riding-alligator-c-1930s.html
|
@ -0,0 +1,81 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Check your spelling at the command line with Ispell)
|
||||
[#]: via: (https://opensource.com/article/19/5/spelling-command-line-ispell)
|
||||
[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt)
|
||||
|
||||
Check your spelling at the command line with Ispell
|
||||
======
|
||||
Ispell helps you stamp out typos in plain text files written in more
|
||||
than 50 languages.
|
||||
![Command line prompt][1]
|
||||
|
||||
Good spelling is a skill. A skill that takes time to learn and to master. That said, there are people who never quite pick that skill up—I know a couple or three outstanding writers who can't spell to save their lives.
|
||||
|
||||
Even if you spell well, the occasional typo creeps in. That's especially true if you're quickly banging on your keyboard to meet a deadline. Regardless of your spelling chops, it's always a good idea to run what you've written through a spelling checker.
|
||||
|
||||
I do most of my writing in [plain text][2] and often use a command line spelling checker called [Aspell][3] to do the deed. Aspell isn't the only game in town. You might also want to check out the venerable [Ispell][4].
|
||||
|
||||
### Getting started
|
||||
|
||||
Ispell's been around, in various forms, since 1971. Don't let its age fool you. Ispell is still a peppy application that you can use effectively in the 21st century.
|
||||
|
||||
Before doing anything else, check whether or not Ispell is installed on your computer by cracking open a terminal window and typing **which ispell**. If it isn't installed, fire up your distribution's package manager and install Ispell from there.
|
||||
|
||||
Don't forget to install dictionaries for the languages you work in, too. My only language is English, so I just need to worry about grabbing the US and British English dictionaries. You're not limited to my mother (and only) tongue. Ispell has [dictionaries for over 50 languages][5].
|
||||
|
||||
![Installing Ispell dictionaries][6]
|
||||
|
||||
### Using Ispell
|
||||
|
||||
If you haven't guessed already, Ispell only works with text files. That includes ones marked up with HTML, LaTeX, and [nroff or troff][7]. More on this in a few moments.
|
||||
|
||||
To get to work, open a terminal window and navigate to the directory containing the file where you want to run a spelling check. Type **ispell** followed by the file's name and then press Enter.
|
||||
|
||||
![Checking spelling with Ispell][8]
|
||||
|
||||
Ispell highlights the first word it doesn't recognize. If the word is misspelled, Ispell usually offers one or more alternatives. Press **R** and then the number beside the correct choice. In the screen capture above, I'd press **R** and **0** to fix the error.
|
||||
|
||||
If, on the other hand, the word is correctly spelled, press **A** to move to the next misspelled word.
|
||||
|
||||
Keep doing that until you reach the end of the file. Ispell saves your changes, creates a backup of the file you just checked (with the extension _.bak_ ), and shuts down.
|
||||
|
||||
### A few other options
|
||||
|
||||
This example illustrates basic Ispell usage. The program has a [number of options][9], some of which you _might_ use and others you _might never_ use. Let's take a quick peek at some of the ones I regularly use.
|
||||
|
||||
A few paragraphs ago, I mentioned that Ispell works with certain markup languages. You need to tell it a file's format. When starting Ispell, add **-t** for a TeX or LaTeX file, **-H** for an HTML file, or **-n** for a groff or troff file. For example, if you enter **ispell -t myReport.tex** , Ispell ignores all markup.
|
||||
|
||||
If you don't want the backup file that Ispell creates after checking a file, add **-x** to the command line—for example, **ispell -x myFile.txt**.
|
||||
|
||||
What happens if Ispell runs into a word that's spelled correctly but isn't in its dictionary, like a proper name? You can add that word to a personal word list by pressing **I**. This saves the word to a file called _.ispell_default_ in the root of your _/home_ directory.
|
||||
|
||||
Those are the options I find most useful when working with Ispell, but check out [Ispell's man page][9] for descriptions of all its options.
|
||||
|
||||
Is Ispell any better or faster than Aspell or any other command line spelling checker? I have to say it's no worse than any of them, nor is it any slower. Ispell's not for everyone. It might not be for you. But it is good to have options, isn't it?
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/spelling-command-line-ispell
|
||||
|
||||
作者:[Scott Nesbitt ][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/scottnesbitt
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg (Command line prompt)
|
||||
[2]: https://plaintextproject.online
|
||||
[3]: https://opensource.com/article/18/2/how-check-spelling-linux-command-line-aspell
|
||||
[4]: https://www.cs.hmc.edu/~geoff/ispell.html
|
||||
[5]: https://www.cs.hmc.edu/~geoff/ispell-dictionaries.html
|
||||
[6]: https://opensource.com/sites/default/files/uploads/ispell-install-dictionaries.png (Installing Ispell dictionaries)
|
||||
[7]: https://opensource.com/article/18/2/how-format-academic-papers-linux-groff-me
|
||||
[8]: https://opensource.com/sites/default/files/uploads/ispell-checking.png (Checking spelling with Ispell)
|
||||
[9]: https://www.cs.hmc.edu/~geoff/ispell-man.html
|
@ -0,0 +1,306 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Mirror your System Drive using Software RAID)
|
||||
[#]: via: (https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/)
|
||||
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
|
||||
|
||||
Mirror your System Drive using Software RAID
|
||||
======
|
||||
|
||||
![][1]
|
||||
|
||||
Nothing lasts forever. When it comes to the hardware in your PC, most of it can easily be replaced. There is, however, one special-case hardware component in your PC that is not as easy to replace as the rest — your hard disk drive.
|
||||
|
||||
### Drive Mirroring
|
||||
|
||||
Your hard drive stores your personal data. Some of your data can be backed up automatically by scheduled backup jobs. But those jobs scan the files to be backed up for changes, and trying to scan an entire drive that way would be very resource-intensive. Also, anything that you’ve changed since your last backup will be lost if your drive fails. [Drive mirroring][2] is a better way to maintain a secondary copy of your entire hard drive. With drive mirroring, a secondary copy of _all the data_ on your hard drive is maintained _in real time_.
|
||||
|
||||
An added benefit of live mirroring your hard drive to a secondary hard drive is that it can [increase your computer’s performance][3]. Because disk I/O is one of your computer’s main performance [bottlenecks][4], the performance improvement can be quite significant.
|
||||
|
||||
Note that a mirror is not a backup. It only protects your data from being lost if one of your physical drives fails. Types of failures that drive mirroring, by itself, does not protect against include:
|
||||
|
||||
* [File System Corruption][5]
|
||||
* [Bit Rot][6]
|
||||
* Accidental File Deletion
|
||||
* Simultaneous Failure of all Mirrored Drives (highly unlikely)
|
||||
|
||||
|
||||
|
||||
Some of the above can be addressed by other file system features that can be used in conjunction with drive mirroring. File system features that address the above types of failures include:
|
||||
|
||||
* Using a [Journaling][7] or [Log-Structured][8] file system
|
||||
* Using [Checksums][9] ([ZFS][10] , for example, does this automatically and transparently)
|
||||
* Using [Snapshots][11]
|
||||
* Using [BCVs][12]
|
||||
|
||||
|
||||
|
||||
This guide will demonstrate one method of mirroring your system drive using the Multiple Disk and Device Administration (mdadm) toolset. Just for fun, this guide will show how to do the conversion without using any extra boot media (CDs, USB drives, etc). For more about the concepts and terminology related to the multiple device driver, you can skim the _md_ man page:
|
||||
|
||||
```
|
||||
$ man md
|
||||
```
|
||||
|
||||
### The Procedure
|
||||
|
||||
1. **Use** [**sgdisk**][13] **to (re)partition the _extra_ drive that you have added to your computer** :
|
||||
|
||||
```
|
||||
$ sudo -i
|
||||
# MY_DISK_1=/dev/sdb
|
||||
# sgdisk --zap-all $MY_DISK_1
|
||||
# test -d /sys/firmware/efi/efivars || sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub_1 $MY_DISK_1
|
||||
# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot_1 $MY_DISK_1
|
||||
# sgdisk -n 0:0:+4GiB -t 0:fd00 -c 0:swap_1 $MY_DISK_1
|
||||
# sgdisk -n 0:0:0 -t 0:fd00 -c 0:root_1 $MY_DISK_1
|
||||
```
|
||||
|
||||
– If the drive that you will be using for the second half of the mirror in step 12 is smaller than this drive, then you will need to adjust down the size of the last partition so that the total size of all the partitions is not greater than the size of your second drive.
|
||||
– A few of the commands in this guide are prefixed with a test for the existence of an _efivars_ directory. This is necessary because those commands are slightly different depending on whether your computer is BIOS-based or UEFI-based.
|
||||
|
||||
2. **Use** [**mdadm**][14] **to create RAID devices that use the new partitions to store their data** :
|
||||
|
||||
```
|
||||
# mdadm --create /dev/md/boot --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/boot_1 missing
|
||||
# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/swap_1 missing
|
||||
# mdadm --create /dev/md/root --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/root_1 missing
|
||||
|
||||
# cat << END > /etc/mdadm.conf
|
||||
MAILADDR root
|
||||
AUTO +all
|
||||
DEVICE partitions
|
||||
END
|
||||
|
||||
# mdadm --detail --scan >> /etc/mdadm.conf
|
||||
```
|
||||
|
||||
– The _missing_ parameter tells mdadm to create an array with a missing member. You will add the other half of the mirror in step 14.
|
||||
– You should configure [sendmail][15] so you will be notified if a drive fails.
|
||||
– You can configure [Evolution][16] to [monitor a local mail spool][17].
|
||||
|
||||
3. **Use** [**dracut**][18] **to update the initramfs** :
|
||||
|
||||
```
|
||||
# dracut -f --add mdraid --add-drivers xfs
|
||||
```
|
||||
|
||||
– Dracut will include the /etc/mdadm.conf file you created in the previous section in your initramfs _unless_ you build your initramfs with the _hostonly_ option set to _no_. If you build your initramfs with the hostonly option set to no, then you should either manually include the /etc/mdadm.conf file, manually specify the UUIDs of the RAID arrays to assemble at boot time with the _rd.md.uuid_ kernel parameter, or specify the _rd.auto_ kernel parameter to have all RAID arrays automatically assembled and started at boot time. This guide will demonstrate the _rd.auto_ option since it is the most generic.
|
||||
|
||||
4. **Format the RAID devices** :
|
||||
|
||||
```
|
||||
# mkfs -t vfat /dev/md/boot
|
||||
# mkswap /dev/md/swap
|
||||
# mkfs -t xfs /dev/md/root
|
||||
```
|
||||
|
||||
– The new [Boot Loader Specification][19] states “if the OS is installed on a disk with GPT disk label, and no ESP partition exists yet, a new suitably sized (let’s say 500MB) ESP should be created and should be used as $BOOT” and “$BOOT must be a VFAT (16 or 32) file system”.
|
||||
|
||||
5. **Reboot and set the _rd.auto_ , _rd.break_ and _single_ kernel parameters** :
|
||||
|
||||
```
|
||||
# reboot
|
||||
```
|
||||
|
||||
– You may need to [set your root password][20] before rebooting so that you can get into _single-user mode_ in step 7.
|
||||
– See “[Making Temporary Changes to a GRUB 2 Menu][21]” for directions on how to set kernel parameters on computers that use the GRUB 2 boot loader.
|
||||
|
||||
6. **Use** [**the dracut shell**][18] **to copy the root file system** :
|
||||
|
||||
```
|
||||
# mkdir /newroot
|
||||
# mount /dev/md/root /newroot
|
||||
# shopt -s dotglob
|
||||
# cp -ax /sysroot/* /newroot
|
||||
# rm -rf /newroot/boot/*
|
||||
# umount /newroot
|
||||
# exit
|
||||
```
|
||||
|
||||
– The _dotglob_ flag is set for this bash session so that the [wildcard character][22] will match hidden files.
|
||||
– Files are removed from the _boot_ directory because they will be copied to a separate partition in the next step.
|
||||
– This copy operation is being done from the dracut shell to ensure that no processes are accessing the files while they are being copied.
|
||||
|
||||
7. **Use _single-user mode_ to copy the non-root file systems** :
|
||||
|
||||
```
|
||||
# mkdir /newroot
|
||||
# mount /dev/md/root /newroot
|
||||
# mount /dev/md/boot /newroot/boot
|
||||
# shopt -s dotglob
|
||||
# cp -Lr /boot/* /newroot/boot
|
||||
# test -d /newroot/boot/efi/EFI && mv /newroot/boot/efi/EFI/* /newroot/boot/efi && rmdir /newroot/boot/efi/EFI
|
||||
# test -d /sys/firmware/efi/efivars && ln -sfr /newroot/boot/efi/fedora/grub.cfg /newroot/etc/grub2-efi.cfg
|
||||
# cp -ax /home/* /newroot/home
|
||||
# exit
|
||||
```
|
||||
|
||||
– It is OK to run these commands in the dracut shell shown in the previous section instead of doing it from single-user mode. I’ve demonstrated using single-user mode to avoid having to explain how to mount the non-root partitions from the dracut shell.
|
||||
– The parameters being passed to the _cp_ command for the _boot_ directory are a little different because the VFAT file system doesn’t support symbolic links or Unix-style file permissions.
|
||||
– In rare cases, the _rd.auto_ parameter is known to cause LVM to fail to assemble due to a [race condition][23]. If you see errors about your _swap_ or _home_ partition failing to mount when entering single-user mode, simply try again by repeating step 5 but omitting the _rd.break_ parameter so that you will go directly to single-user mode.
|
||||
|
||||
8. **Update _fstab_ on the new drive** :
|
||||
|
||||
```
|
||||
# cat << END > /newroot/etc/fstab
|
||||
/dev/md/root / xfs defaults 0 0
|
||||
/dev/md/boot /boot vfat defaults 0 0
|
||||
/dev/md/swap swap swap defaults 0 0
|
||||
END
|
||||
```
|
||||
|
||||
9. **Configure the boot loader on the new drive** :
|
||||
|
||||
```
|
||||
# NEW_GRUB_CMDLINE_LINUX=$(cat /etc/default/grub | sed -n 's/^GRUB_CMDLINE_LINUX="\(.*\)"/\1/ p')
|
||||
# NEW_GRUB_CMDLINE_LINUX=${NEW_GRUB_CMDLINE_LINUX//rd.lvm.*([^ ])}
|
||||
# NEW_GRUB_CMDLINE_LINUX=${NEW_GRUB_CMDLINE_LINUX//resume=*([^ ])}
|
||||
# NEW_GRUB_CMDLINE_LINUX+=" selinux=0 rd.auto"
|
||||
# sed -i "/^GRUB_CMDLINE_LINUX=/s/=.*/=\"$NEW_GRUB_CMDLINE_LINUX\"/" /newroot/etc/default/grub
|
||||
```
|
||||
|
||||
– You can re-enable selinux after this procedure is complete. But you will have to [relabel your file system][24] first.
|
||||
|
||||
10. **Install the boot loader on the new drive** :
|
||||
|
||||
```
|
||||
# sed -i '/^GRUB_DISABLE_OS_PROBER=.*/d' /newroot/etc/default/grub
|
||||
# echo "GRUB_DISABLE_OS_PROBER=true" >> /newroot/etc/default/grub
|
||||
# MY_DISK_1=$(mdadm --detail /dev/md/boot | grep active | grep -m 1 -o "/dev/sd.")
|
||||
# for i in dev dev/pts proc sys run; do mount -o bind /$i /newroot/$i; done
|
||||
# chroot /newroot env MY_DISK_1=$MY_DISK_1 bash --login
|
||||
# test -d /sys/firmware/efi/efivars || MY_GRUB_DIR=/boot/grub2
|
||||
# test -d /sys/firmware/efi/efivars && MY_GRUB_DIR=$(find /boot/efi -type d -name 'fedora' -print -quit)
|
||||
# test -e /usr/sbin/grub2-switch-to-blscfg && grub2-switch-to-blscfg --grub-directory=$MY_GRUB_DIR
|
||||
# grub2-mkconfig -o $MY_GRUB_DIR/grub.cfg
|
||||
# test -d /sys/firmware/efi/efivars && test /boot/grub2/grubenv -nt $MY_GRUB_DIR/grubenv && cp /boot/grub2/grubenv $MY_GRUB_DIR/grubenv
|
||||
# test -d /sys/firmware/efi/efivars || grub2-install "$MY_DISK_1"
|
||||
# logout
|
||||
# for i in run sys proc dev/pts dev; do umount /newroot/$i; done
|
||||
# test -d /sys/firmware/efi/efivars && efibootmgr -c -d "$MY_DISK_1" -p 1 -l "$(find /newroot/boot -name shimx64.efi -printf '/%P\n' -quit | sed 's!/!\\!g')" -L "Fedora RAID Disk 1"
|
||||
```
|
||||
|
||||
– The _grub2-switch-to-blscfg_ command is optional. It is only supported on Fedora 29+.
|
||||
– The _cp_ command above should not be necessary, but there appears to be a bug in the current version of grub which causes it to write to $BOOT/grub2/grubenv instead of $BOOT/efi/fedora/grubenv on UEFI systems.
|
||||
– You can use the following command to verify the contents of the _grub.cfg_ file right after running the _grub2-mkconfig_ command above:
|
||||
|
||||
```
|
||||
# sed -n '/BEGIN .*10_linux/,/END .*10_linux/ p' $MY_GRUB_DIR/grub.cfg
|
||||
```
|
||||
|
||||
– You should see references to _mdraid_ and _mduuid_ in the output from the above command if the RAID array was detected properly.
|
||||
|
||||
11. **Boot off of the new drive** :
|
||||
|
||||
```
|
||||
# reboot
|
||||
```
|
||||
|
||||
– How to select the new drive is system-dependent. It usually requires pressing one of the **F12** , **F10** , **Esc** or **Del** keys when you hear the [System OK BIOS beep code][25].
|
||||
– On UEFI systems the boot loader on the new drive should be labeled “Fedora RAID Disk 1”.
|
||||
|
||||
12. **Remove all the volume groups and partitions from your old drive** :
|
||||
|
||||
```
|
||||
# MY_DISK_2=/dev/sda
|
||||
# MY_VOLUMES=$(pvs | grep $MY_DISK_2 | awk '{print $2}' | tr "\n" " ")
|
||||
# test -n "$MY_VOLUMES" && vgremove $MY_VOLUMES
|
||||
# sgdisk --zap-all $MY_DISK_2
|
||||
```
|
||||
|
||||
– **WARNING** : You want to make certain that everything is working properly on your new drive before you do this. A good way to verify that your old drive is no longer being used is to try booting your computer once without the old drive connected.
|
||||
– You can add another new drive to your computer instead of erasing your old one if you prefer.
|
||||
|
||||
13. **Create new partitions on your old drive to match the ones on your new drive** :
|
||||
|
||||
```
|
||||
# test -d /sys/firmware/efi/efivars || sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub_2 $MY_DISK_2
|
||||
# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot_2 $MY_DISK_2
|
||||
# sgdisk -n 0:0:+4GiB -t 0:fd00 -c 0:swap_2 $MY_DISK_2
|
||||
# sgdisk -n 0:0:0 -t 0:fd00 -c 0:root_2 $MY_DISK_2
|
||||
```
|
||||
|
||||
– It is important that the partitions match in size and type. I prefer to use the _parted_ command to display the partition table because it supports setting the display unit:
|
||||
|
||||
```
|
||||
# parted /dev/sda unit MiB print
|
||||
# parted /dev/sdb unit MiB print
|
||||
```
|
||||
|
||||
14. **Use mdadm to add the new partitions to the RAID devices** :
|
||||
|
||||
```
|
||||
# mdadm --manage /dev/md/boot --add /dev/disk/by-partlabel/boot_2
|
||||
# mdadm --manage /dev/md/swap --add /dev/disk/by-partlabel/swap_2
|
||||
# mdadm --manage /dev/md/root --add /dev/disk/by-partlabel/root_2
|
||||
```
|
||||
|
||||
15. **Install the boot loader on your old drive** :
|
||||
|
||||
```
|
||||
# test -d /sys/firmware/efi/efivars || grub2-install "$MY_DISK_2"
|
||||
# test -d /sys/firmware/efi/efivars && efibootmgr -c -d "$MY_DISK_2" -p 1 -l "$(find /boot -name shimx64.efi -printf "/%P\n" -quit | sed 's!/!\\!g')" -L "Fedora RAID Disk 2"
|
||||
```
|
||||
|
||||
16. **Use mdadm to test that email notifications are working** :
|
||||
|
||||
```
|
||||
# mdadm --monitor --scan --oneshot --test
|
||||
```
|
||||
|
||||
|
||||
|
||||
|
||||
As soon as your drives have finished synchronizing, you should be able to select either drive when restarting your computer and you will receive the same live-mirrored operating system. If either drive fails, mdmonitor will send an email notification. Recovering from a drive failure is now simply a matter of swapping out the bad drive with a new one and running a few _sgdisk_ and _mdadm_ commands to re-create the mirrors (steps 13 through 15). You will no longer have to worry about losing any data if a drive fails!
|
||||
|
||||
### Video Demonstrations
|
||||
|
||||
Converting a UEFI PC to RAID1
|
||||
|
||||
Converting a BIOS PC to RAID1
|
||||
|
||||
* TIP: Set the quality to 720p on the above videos for best viewing.
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/
|
||||
|
||||
作者:[Gregory Bartholomew][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://fedoramagazine.org/author/glb/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/raid_mirroring-816x345.jpg
|
||||
[2]: https://en.wikipedia.org/wiki/Disk_mirroring
|
||||
[3]: https://en.wikipedia.org/wiki/Disk_mirroring#Additional_benefits
|
||||
[4]: https://en.wikipedia.org/wiki/Bottleneck_(software)
|
||||
[5]: https://en.wikipedia.org/wiki/Data_corruption
|
||||
[6]: https://en.wikipedia.org/wiki/Data_degradation
|
||||
[7]: https://en.wikipedia.org/wiki/Journaling_file_system
|
||||
[8]: https://www.quora.com/What-is-the-difference-between-a-journaling-vs-a-log-structured-file-system
|
||||
[9]: https://en.wikipedia.org/wiki/File_verification
|
||||
[10]: https://en.wikipedia.org/wiki/ZFS#Summary_of_key_differentiating_features
|
||||
[11]: https://en.wikipedia.org/wiki/Snapshot_(computer_storage)#File_systems
|
||||
[12]: https://en.wikipedia.org/wiki/Business_continuance_volume
|
||||
[13]: https://fedoramagazine.org/managing-partitions-with-sgdisk/
|
||||
[14]: https://fedoramagazine.org/managing-raid-arrays-with-mdadm/
|
||||
[15]: https://fedoraproject.org/wiki/QA:Testcase_Sendmail
|
||||
[16]: https://en.wikipedia.org/wiki/Evolution_(software)
|
||||
[17]: https://dotancohen.com/howto/root_email.html
|
||||
[18]: https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/
|
||||
[19]: https://systemd.io/BOOT_LOADER_SPECIFICATION#technical-details
|
||||
[20]: https://docs.fedoraproject.org/en-US/Fedora/26/html/System_Administrators_Guide/sec-Changing_and_Resetting_the_Root_Password.html
|
||||
[21]: https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_the_GRUB_2_Boot_Loader/#sec-Making_Temporary_Changes_to_a_GRUB_2_Menu
|
||||
[22]: https://en.wikipedia.org/wiki/Wildcard_character#File_and_directory_patterns
|
||||
[23]: https://en.wikipedia.org/wiki/Race_condition
|
||||
[24]: https://wiki.centos.org/HowTos/SELinux#head-867ca18a09f3103705cdb04b7d2581b69cd74c55
|
||||
[25]: https://en.wikipedia.org/wiki/Power-on_self-test#Original_IBM_POST_beep_codes
|
@ -0,0 +1,107 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Say goodbye to boilerplate in Python with attrs)
|
||||
[#]: via: (https://opensource.com/article/19/5/python-attrs)
|
||||
[#]: author: (Moshe Zadka https://opensource.com/users/moshez/users/moshez)
|
||||
|
||||
Say goodbye to boilerplate in Python with attrs
|
||||
======
|
||||
Learn more about solving common Python problems in our series covering
|
||||
seven PyPI libraries.
|
||||
![Programming at a browser, orange hands][1]
|
||||
|
||||
Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.
|
||||
|
||||
In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. Today, we'll examine [**attrs**][4], a Python package that helps you write concise, correct code quickly.
|
||||
|
||||
### attrs
|
||||
|
||||
If you have been using Python for any length of time, you are probably used to writing code like:
|
||||
|
||||
|
||||
```
|
||||
class Book(object):
|
||||
|
||||
def __init__(self, isbn, name, author):
|
||||
self.isbn = isbn
|
||||
self.name = name
|
||||
self.author = author
|
||||
```
|
||||
|
||||
Then you write a **__repr__** function; otherwise, it would be hard to log instances of **Book** :
|
||||
|
||||
|
||||
```
|
||||
def __repr__(self):
|
||||
return f"Book({self.isbn}, {self.name}, {self.author})"
|
||||
```
|
||||
|
||||
Next, you write a nice docstring documenting the expected types. But you notice you forgot to add the **edition** and **published_year** attributes, so you have to modify them in five places.
|
||||
|
||||
What if you didn't have to?
|
||||
|
||||
|
||||
```
|
||||
import attr

@attr.s(auto_attribs=True)
|
||||
class Book(object):
|
||||
isbn: str
|
||||
name: str
|
||||
author: str
|
||||
published_year: int
|
||||
edition: int
|
||||
```
|
||||
|
||||
When you annotate the attributes using the new type annotation syntax, **attrs** detects the annotations and creates the class for you.
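
As a minimal sketch of what you get for free, here are the generated constructor and repr in action (the sample values are made up):

```
import attr

@attr.s(auto_attribs=True)
class Book(object):
    isbn: str
    name: str
    author: str
    published_year: int
    edition: int

book = Book('978-0-00-000000-0', 'A Book', 'An Author', 2019, 1)
print(book)
# Book(isbn='978-0-00-000000-0', name='A Book', author='An Author',
# published_year=2019, edition=1)
```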
|
||||
|
||||
ISBNs have a specific format. What if we want to enforce that format?
|
||||
|
||||
|
||||
```
|
||||
import re

import attr

@attr.s(auto_attribs=True)
|
||||
class Book(object):
|
||||
isbn: str = attr.ib()
|
||||
@isbn.validator
|
||||
def pattern_match(self, attribute, value):
|
||||
m = re.match(r"^(\d{3}-)\d{1,3}-\d{2,3}-\d{1,7}-\d$", value)
|
||||
if not m:
|
||||
raise ValueError("incorrect format for isbn", value)
|
||||
name: str
|
||||
author: str
|
||||
published_year: int
|
||||
edition: int
|
||||
```
|
||||
|
||||
The **attrs** library also has great support for [immutability-style programming][5]. Changing the first line to **@attr.s(auto_attribs=True, frozen=True)** means that **Book** is now immutable: trying to modify an attribute will raise an exception. Instead, we can get a _new_ instance with modification using **attr.evolve(old_book, published_year=old_book.published_year+1)** , for example, if we need to push publication forward by a year.
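
Here is a minimal sketch of the frozen pattern, assuming a pared-down **Book** with made-up sample values:

```
import attr

@attr.s(auto_attribs=True, frozen=True)
class Book(object):
    isbn: str
    published_year: int

book = Book('978-0-00-000000-0', 2019)

try:
    book.published_year = 2020  # mutation is blocked on a frozen class
except attr.exceptions.FrozenInstanceError:
    print('Book is immutable')

# Derive a new instance with the changed field instead.
next_edition = attr.evolve(book, published_year=book.published_year + 1)
print(next_edition.published_year)  # 2020
```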
|
||||
|
||||
In the next article in this series, we'll look at **singledispatch** , a library that allows you to add methods to Python libraries retroactively.
|
||||
|
||||
#### Review the previous articles in this series
|
||||
|
||||
* [Cython][6]
|
||||
* [Black][7]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/19/5/python-attrs
|
||||
|
||||
作者:[Moshe Zadka ][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://opensource.com/users/moshez/users/moshez
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_keyboard_orange_hands.png?itok=G6tJ_64Y (Programming at a browser, orange hands)
|
||||
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
|
||||
[3]: https://pypi.org/
|
||||
[4]: https://pypi.org/project/attrs/
|
||||
[5]: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures
|
||||
[6]: https://opensource.com/article/19/4/7-python-problems-solved-cython
|
||||
[7]: https://opensource.com/article/19/4/python-problems-solved-black
|
@ -0,0 +1,105 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (SuiteCRM: An Open Source CRM Takes Aim At Salesforce)
|
||||
[#]: via: (https://itsfoss.com/suitecrm-ondemand/)
|
||||
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
|
||||
|
||||
SuiteCRM: An Open Source CRM Takes Aim At Salesforce
|
||||
======
|
||||
|
||||
SuiteCRM is one of the most popular open source CRM (Customer Relationship Management) solutions available. With its uniquely priced managed CRM hosting service, SuiteCRM is aiming to challenge enterprise CRMs like Salesforce.
|
||||
|
||||
### SuiteCRM: An Open Source CRM Software
|
||||
|
||||
CRM stands for Customer Relationship Management. It is used by businesses to manage interactions with customers and to keep track of services, supplies, and other things that help the business serve its customers.
|
||||
|
||||
![][1]
|
||||
|
||||
[SuiteCRM][2] came into existence after the hugely popular [SugarCRM][3] decided to stop developing its open source version. The open source version of SugarCRM was then forked into SuiteCRM by UK-based [SalesAgility][4] team.
|
||||
|
||||
In just a couple of years, SuiteCRM became immensely popular and started to be considered the best open source CRM software out there. You can gauge its popularity from the fact that it’s nearing a million download and it has over 100,000 community members. There are around 4 million SuiteCRM users worldwide (a CRM software usually has more than one user) and it is available in several languages. It’s even used by National Health Service ([NHS][5]) in UK.
|
||||
|
||||
Since SuiteCRM is a free and open source software, you are free to download it and deploy it on your cloud server such as [UpCloud][6] (we at It’s FOSS use it), [DigitalOcean][7], [AWS][8] or any Linux server of our own.
|
||||
|
||||
But configuring the software, deploying it and managing it a tiresome job and requires certain skill level or a the services of a sysadmin. This is why business oriented open source software provide a hosted version of their software.
|
||||
|
||||
This enables you to enjoy the open source software without the additional headache and the team behind the software has a way to generate revenue and continue the development of their software.

### Suite:OnDemand – Cost effective managed hosting of SuiteCRM

So, recently, [SalesAgility][4] – the creators/maintainers of SuiteCRM – decided to challenge [Salesforce][9] and other enterprise CRMs by introducing [Suite:OnDemand][10], a hosted version of SuiteCRM.

Suggested read: [Papyrus: An Open Source Note Manager][11]

Normally, you will observe pricing plans on the basis of the number of users. But with SuiteCRM’s OnDemand cloud hosting plans, they are trying to give businesses an affordable solution on a “per-server” basis instead of making you pay for every user you add.

In other words, they want you to pay extra only for advanced features, not for more users.

Here’s what SalesAgility mentioned in their [press release][12]:

> Unlike Salesforce and other enterprise CRM vendors, the practice of pricing per user has been abandoned in favour of per-server hosting packages all of which will support unlimited users. In addition, there’s no increase in cost for access to advanced features. With Suite:OnDemand every feature and benefit is available with each hosting package.

Of course, “unlimited users” does not mean that you should abuse the term. So, there’s a recommended number of users for every hosting plan you opt for.

![Suitecrm Hosting][13]

The CEO of SalesAgility also described their goals for this step:

“_We want SuiteCRM to be available to all businesses and to all users within a business,_” said **Dale Murray**, CEO of **SalesAgility**.

In addition to that, they also mentioned that they want to revolutionize the way enterprise-class CRM is currently offered in order to make it more accessible to businesses and organizations:

> “Many organisations do not have the experience to run and support our product on-premise or it is not part of their technology strategy to do so. With Suite:OnDemand we are providing our customers with a quick and easy solution to access all the features of SuiteCRM without a per user cost. We’re also saying to Salesforce that enterprise-class CRM can be delivered, enhanced, maintained and supported without charging mouth-wateringly expensive monthly fees. Our aim is to transform the CRM market to enable users to make CRM pervasive within their organisations.”
>
> Dale Murray, CEO of SalesAgility

### Why is this a big deal?

This is a huge relief for small business owners and startups because other CRMs like Salesforce and SugarCRM charge $30-$40 per month per user. If you have 10 members in your team, this will increase the cost to $300-$400 per month.

Suggested read: [Winds Beautifully Combines Feed Reader and Podcast Player in One Single App][14]

This is also good news for the open source community: we now have an affordable alternative to Salesforce.

In addition, SuiteCRM is fully open source, meaning there are no license fees or vendor lock-in – as they mention. You are always free to use it on your own.

It is interesting to see the different strategies and solutions being applied by an open source CRM software to take aim at Salesforce directly.

What do you think? Let us know your thoughts in the comments below.

_With inputs from Abhishek Prakash._

--------------------------------------------------------------------------------

via: https://itsfoss.com/suitecrm-ondemand/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2019/05/suite-crm-800x450.png
[2]: https://suitecrm.com/
[3]: https://www.sugarcrm.com/
[4]: https://salesagility.com/
[5]: https://www.nhs.uk/
[6]: https://www.upcloud.com/register/?promo=itsfoss
[7]: https://m.do.co/c/d58840562553
[8]: https://aws.amazon.com/
[9]: https://www.salesforce.com
[10]: https://suitecrm.com/suiteondemand/
[11]: https://itsfoss.com/papyrus-open-source-note-manager/
[12]: https://suitecrm.com/sod-pr/
[13]: https://itsfoss.com/wp-content/uploads/2019/05/suitecrm-hosting-800x457.jpg
[14]: https://itsfoss.com/winds-podcast-feedreader/

@ -0,0 +1,85 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Tutanota Launches New Encrypted Tool to Support Press Freedom)
[#]: via: (https://itsfoss.com/tutanota-secure-connect/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

Tutanota Launches New Encrypted Tool to Support Press Freedom
======

A secure email provider has announced the release of a new product designed to help whistleblowers get their information to the media. The tool is free for journalists.

### Tutanota helps you protect your privacy

![][1]

[Tutanota][2] is a Germany-based company that provides the “world’s most secure email service, easy to use and private by design.” It offers end-to-end encryption for its [secure email service][3]. Recently, Tutanota announced a [desktop app for its email service][4].

It also makes use of two-factor authentication and [open sources the code][5] that it uses.

While you can get an account for free, you don’t have to worry about your information being sold or seeing ads. Tutanota makes money by charging for extra features and storage. It also offers solutions for non-profit organizations.

Tutanota has now launched a new service to further help journalists, social activists, and whistleblowers communicate securely.

Suggested read: [Purism’s New Offering is a Dream Come True for Privacy Concerned People][6]

### Secure Connect: An encrypted form for websites

![][7]

Tutanota has released a new piece of software named Secure Connect. Secure Connect is “an open source encrypted contact form for news sites”. The goal of the project is to create a way so that “whistleblowers can get in touch with journalists securely”. Tutanota picked the right day to launch: May 3rd is [World Press Freedom Day][8].

According to Tutanota, Secure Connect is designed to be easily added to websites, but it can also work on any blog, ensuring access for smaller news agencies. A whistleblower would access the Secure Connect app on a news site, preferably using Tor, and type in any information that they want to bring to light. The whistleblower would also be able to upload files. Once they submit the information, Secure Connect will assign a random address and password, “which lets the whistleblower re-access his sent message at a later stage and check for replies from the news site.”

![Secure Connect Encrypted Contact Form][9]

While Tutanota will be offering Secure Connect to journalists for free, they know that someone will have to foot the bill. They plan to pay for further development of the project by selling it to businesses, such as “lawyers, financial institutions, medical institutions, educational institutions, and the authorities”. Non-journalists will have to pay €24 per month.

You can see a demo of Secure Connect by clicking [here][10]. If you are a journalist interested in adding Secure Connect to your website or blog, you can contact them at [[email protected]][11]. Be sure to include a link to your website.

Suggested read: [8 Privacy Oriented Alternative Search Engines To Google in 2019][12]

### Final Thoughts on Secure Connect

I have read repeatedly about whistleblowers whose identities were accidentally exposed, either by themselves or by others. Tutanota’s project looks like it would remove that possibility by making it impossible for others to discover their identity. It also gives both parties an easy way to exchange information without having to worry about encryption or PGP keys.

I understand that it’s not the same as [Firefox Send][13], another encrypted file sharing program, from Mozilla. The only question I have is: whose servers will the whistleblowers’ information be sitting on?

Do you think that Tutanota’s Secure Connect will be a boon for whistleblowers and activists? Please let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News, or [Reddit][14].

--------------------------------------------------------------------------------

via: https://itsfoss.com/tutanota-secure-connect/

作者:[John Paul][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/john/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2018/02/tutanota-featured-800x450.png
[2]: https://tutanota.com/
[3]: https://itsfoss.com/tutanota-review/
[4]: https://itsfoss.com/tutanota-desktop/
[5]: https://tutanota.com/blog/posts/open-source-email
[6]: https://itsfoss.com/librem-one/
[7]: https://itsfoss.com/wp-content/uploads/2019/05/secure-communication.jpg
[8]: https://en.wikipedia.org/wiki/World_Press_Freedom_Day
[9]: https://itsfoss.com/wp-content/uploads/2019/05/secure-connect-encrypted-contact-form.png
[10]: https://secureconnect.tutao.de/contactform/demo
[11]: /cdn-cgi/l/email-protection
[12]: https://itsfoss.com/privacy-search-engines/
[13]: https://itsfoss.com/firefox-send/
[14]: http://reddit.com/r/linuxusersgroup

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Add methods retroactively in Python with singledispatch)
[#]: via: (https://opensource.com/article/19/5/python-singledispatch)
[#]: author: (Moshe Zadka https://opensource.com/users/moshez)

Add methods retroactively in Python with singledispatch
======

Learn more about solving common Python problems in our series covering seven PyPI libraries.

![][1]

Python is one of the most [popular programming languages][2] in use today—and for good reasons: it's open source, it has a wide range of uses (such as web programming, business applications, games, scientific programming, and much more), and it has a vibrant and dedicated community supporting it. This community is the reason we have such a large, diverse range of software packages available in the [Python Package Index][3] (PyPI) to extend and improve Python and solve the inevitable glitches that crop up.

In this series, we'll look at seven PyPI libraries that can help you solve common Python problems. Today, we'll examine [**singledispatch**][4], a library that allows you to add methods to Python libraries retroactively.

### singledispatch

Imagine you have a "shapes" library with a **Circle** class, a **Square** class, etc.

A **Circle** has a **radius**, a **Square** has a **side**, and a **Rectangle** has a **height** and a **width**. Our library already exists; we do not want to change it.

However, we do want to add an **area** calculation to our library. If we didn't share this library with anyone else, we could just add an **area** method so we could call **shape.area()** and not worry about what the shape is.

While it is possible to reach into a class and add a method, this is a bad idea: nobody expects their class to grow new methods, and things might break in weird ways.

Instead, the **singledispatch** function in **functools** can come to our rescue.

```
from functools import singledispatch

@singledispatch
def get_area(shape):
    raise NotImplementedError("cannot calculate area for unknown shape",
                              shape)
```

The "base" implementation for the **get_area** function fails. This makes sure that if we get a new shape, we will fail cleanly instead of returning a nonsense result.

```
import math

@get_area.register(Square)
def _get_area_square(shape):
    return shape.side ** 2

@get_area.register(Circle)
def _get_area_circle(shape):
    return math.pi * (shape.radius ** 2)
```

One nice thing about doing things this way is that if someone writes a _new_ shape that is intended to play well with our code, they can implement **get_area** themselves.

```
import math

import attr

from area_calculator import get_area

@attr.s(auto_attribs=True, frozen=True)
class Ellipse:
    horizontal_axis: float
    vertical_axis: float

@get_area.register(Ellipse)
def _get_area_ellipse(shape):
    return math.pi * shape.horizontal_axis * shape.vertical_axis
```

_Calling_ **get_area** is straightforward.

```
print(get_area(shape))
```

This means we can change a function that has a long **if isinstance()/elif isinstance()** chain to work this way, without changing the interface. The next time you are tempted to check **if isinstance**, try using **singledispatch**!
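
For comparison, here is a sketch of the kind of **isinstance** chain this replaces. The shape classes are stand-ins of my own (the article's shapes library is hypothetical), but the two functions compute the same results:

```
import math
from functools import singledispatch

class Square:
    def __init__(self, side):
        self.side = side

class Circle:
    def __init__(self, radius):
        self.radius = radius

# The old style: one function that must know every shape in advance.
def get_area_chain(shape):
    if isinstance(shape, Square):
        return shape.side ** 2
    elif isinstance(shape, Circle):
        return math.pi * (shape.radius ** 2)
    raise NotImplementedError("cannot calculate area for unknown shape", shape)

# The singledispatch style: same interface, but each shape's handler
# can be registered separately, even from other modules.
@singledispatch
def get_area(shape):
    raise NotImplementedError("cannot calculate area for unknown shape", shape)

@get_area.register(Square)
def _get_area_square(shape):
    return shape.side ** 2

@get_area.register(Circle)
def _get_area_circle(shape):
    return math.pi * (shape.radius ** 2)

assert get_area(Square(3)) == get_area_chain(Square(3)) == 9
```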

In the next article in this series, we'll look at **tox**, a tool for automating tests on Python code.

#### Review the previous articles in this series:

  * [Cython][5]
  * [Black][6]
  * [attrs][7]

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/python-singledispatch

作者:[Moshe Zadka][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/moshez
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_code_programming_laptop.jpg?itok=ormv35tV
[2]: https://opensource.com/article/18/5/numbers-python-community-trends
[3]: https://pypi.org/
[4]: https://pypi.org/project/singledispatch/
[5]: https://opensource.com/article/19/4/7-python-problems-solved-cython
[6]: https://opensource.com/article/19/4/python-problems-solved-black
[7]: https://opensource.com/article/19/4/python-problems-solved-attrs

@ -0,0 +1,93 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (May the fourth be with you: How Star Wars (and Star Trek) inspired real life tech)
[#]: via: (https://opensource.com/article/19/5/may-the-fourth-star-wars-trek)
[#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas)

May the fourth be with you: How Star Wars (and Star Trek) inspired real life tech
======

The technologies may have been fictional, but these two acclaimed sci-fi series have inspired open source tech.

![Triangulum galaxy, NASA][1]

Conventional wisdom says you can either be a fan of _Star Trek_ or of _Star Wars_, but mixing the two is like mixing matter and anti-matter. I'm not sure that's true, but even if the laws of physics cannot be changed, these two acclaimed sci-fi series have influenced the open source universe and created their own open source multi-verses.

For example, fans have used the original _Star Trek_ as "source code" to create fan-made films, cartoons, and games. One of the more notable fan creations was the web series _Star Trek Continues_, which faithfully adapted Gene Roddenberry's universe and redistributed it to the world.

"Eventually we realized that there is no more profound way in which people could express what _Star Trek_ has meant to them than by creating their own very personal _Star Trek_ things," [Roddenberry said][2]. However, due to copyright restrictions, this "open source" channel [has since been curtailed][3].

_Star Wars_ has a different approach to open sourcing its universe. [Jess Paguaga writes][4] on FanSided: "With a variety [of] fan film awards dating back to 2002, the _Star Wars_ brand has always supported and encouraged the creation of short films that help expand the universe of a galaxy far, far away."

But _Star Wars_ is not without its own copyright prime directives. In one case, a Darth Vader film by a YouTuber called Star Wars Theory has drawn a copyright claim from Disney. The claim does not stop production of the film, but it diverts monetary gains from it, [reports James Richards][5] on FanSided.

This could be one of the [Ferengi Rules of Acquisition][6], perhaps.

But if you can't watch your favorite fan film, you can still get your [_Star Wars_ fix right in the Linux terminal][7] by entering:

```
telnet towel.blinkenlights.nl
```

And _Star Trek_ fans can also interact with the Federation with the original text-based video game from 1971. While a high-school senior, Mike Mayfield ported the game from punch cards to HP BASIC. If you'd like to go old school and battle Klingons, the source code is available at the [Code Project][8].

### Real-life star tech

Both _Star Wars_ and _Star Trek_ have inspired real-life technologies. Although those technologies were fictional, many have become the practical, open technology we use today, and some of them inspired technologies that are still in development now.

In the early 1970s, Motorola engineer Martin Cooper was trying to beat AT&T at the car-phone game. He says he was watching Captain Kirk use a "communicator" on an episode of _Star Trek_ and had a eureka moment. His team went on to create the first portable cellular 800MHz phone prototype in 90 days.

In _Star Wars_, scout stormtroopers of the Galactic Empire rode the Aratech 74-Z Speeder Bike, and a real-life counterpart is the [Aero-X][9] being developed by California's Aerofex.

Perhaps the most visible _Star Wars_ tech to enter our lives is droids. We first encountered R2-D2 back in the 1970s, but now we have droids vacuuming our carpets and mowing our lawns, from Roombas to the [Worx Landroid][10] lawnmower.

And in _Star Wars_, Princess Leia appeared to Obi-Wan Kenobi as a hologram, while in _Star Trek: Voyager_, the ship's chief medical officer was an interactive hologram that could diagnose and treat patients. The technology to bring characters like these to "life" is still a ways off, but there are some interesting open source developments that hint of things to come. [OpenHolo][11], "an open source library containing algorithms and software implementations for holograms in various fields," is one such project.

### Where's the beef?

> "She handled… real meat… touched it, and cut it?" —Keiko O'Brien, _Star Trek: The Next Generation_

In the _Star Trek_ universe, crew members get their meals by simply ordering a replicator to produce whatever food they desire. That could one day become a reality thanks to a concept created by two German students for an open source "meat-printer" they call the [Cultivator][12]. It would use bio-printing to produce something that appears to be meat; the user could even select its mineral and fat content. Perhaps with more collaboration and development, the Cultivator could become the replicator in tomorrow's kitchen!

### The 501st

Cosplayers, people from all walks of life who dress as their favorite characters, are the "open source embodiment" of their favorite universes. The [501st Legion][13] is an all-volunteer _Star Wars_ fan organization "formed for the express purpose of bringing together costume enthusiasts under a collective identity within which to operate," according to its charter.

Jon Stallard, a member of Garrison Tyranus, the Central Virginia chapter of the 501st Legion, says, "Everybody wanted to be something else when they were a kid, right? Whether it was Neil Armstrong, Batman, or the Six Million Dollar Man. Every backyard playdate was some kind of make-believe. The 501st lets us participate in our fan communities while contributing to the community at large."

Are cosplayers really "open source characters"? Well, that depends. The copyright laws around cosplay and using unique props, costumes, and more are very complex, [writes Meredith Filak Rose][14] for _Public Knowledge_. "We're lucky to be living in a time where fandom generally enjoys a positive relationship with the creators whose work it admires," Rose concludes.

So, it is safe to say that stormtroopers, Ferengi, Vulcans, and Yoda are all here to stay for a long, long time, near, and far, far away.

Live long and prosper, you shall.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/may-the-fourth-star-wars-trek

作者:[Jeff Macharyas][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jeffmacharyas
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/triangulum_galaxy_nasa_stars.jpg?itok=NdS19A7m
[2]: https://fanlore.org/wiki/Gene_Roddenberry#His_Views_Regarding_Fanworks
[3]: https://trekmovie.com/2016/06/23/cbs-and-paramount-release-fan-film-guidelines/
[4]: https://dorksideoftheforce.com/2019/01/17/star-wars-fan-films/
[5]: https://dorksideoftheforce.com/2019/01/16/disney-claims-copyright-star-wars-theory/
[6]: https://en.wikipedia.org/wiki/Rules_of_Acquisition
[7]: https://itsfoss.com/star-wars-linux/
[8]: https://www.codeproject.com/Articles/28228/Star-Trek-1971-Text-Game
[9]: https://www.livescience.com/58943-real-life-star-wars-technology.html
[10]: https://www.digitaltrends.com/cool-tech/best-robot-lawnmowers/
[11]: http://openholo.org/
[12]: https://www.pastemagazine.com/articles/2016/05/the-future-is-vegan-according-to-star-trek.html
[13]: https://www.501st.com/
[14]: https://www.publicknowledge.org/news-blog/blogs/copyright-and-cosplay-working-with-an-awkward-fit

@ -0,0 +1,240 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using the force at the Linux command line)
[#]: via: (https://opensource.com/article/19/5/may-the-force-linux)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)

Using the force at the Linux command line
======

Like the Jedi Force, -f is powerful, potentially destructive, and very helpful when you know how to use it.

![Fireworks][1]

Sometime in recent history, sci-fi nerds began an annual celebration of everything [_Star Wars_ on May the 4th][2], a pun on the Jedi blessing, "May the Force be with you." Although most Linux users are probably not Jedi, they still have ways to use the force. Of course, the movie might not have been quite as exciting if Yoda had simply told Luke to type **man X-Wing fighter** or **man force**. Or if he'd said, "RTFM" (Read the Force Manual, of course).

Many Linux commands have an **-f** option, which stands for, you guessed it, force! Sometimes when you execute a command, it fails or prompts you for additional input. This may be an effort to protect the files you are trying to change or to inform you that a device is busy or a file already exists.

If you don't want to be bothered by prompts or don't care about errors, use the force!

Be aware that using a command's force option to override these protections is, generally, destructive. Therefore, you need to pay close attention and be sure that you know what you are doing. Using the force can have consequences!

Following are four Linux commands with a force option and a brief description of how and why you might want to use it.

### cp

The **cp** command is short for copy—it's used to copy (or duplicate) a file or directory. The [man page][3] describes the force option for **cp** as:

```
-f, --force
       if an existing destination file cannot be opened, remove it
       and try again
```

This example is for when you are working with read-only files:

```
[alan@workstation ~]$ ls -l
total 8
-rw-rw---- 1 alan alan 13 May 1 12:24 Hoth
-r--r----- 1 alan alan 14 May 1 12:23 Naboo
[alan@workstation ~]$ cat Hoth Naboo
Icy Planet

Green Planet
```

If you want to copy a file called _Hoth_ to _Naboo_, the **cp** command will not allow it, since _Naboo_ is read-only:

```
[alan@workstation ~]$ cp Hoth Naboo
cp: cannot create regular file 'Naboo': Permission denied
```

But by using the force, **cp** will not prompt. The contents and permissions of _Hoth_ will immediately be copied to _Naboo_:

```
[alan@workstation ~]$ cp -f Hoth Naboo
[alan@workstation ~]$ cat Hoth Naboo
Icy Planet

Icy Planet

[alan@workstation ~]$ ls -l
total 8
-rw-rw---- 1 alan alan 12 May 1 12:32 Hoth
-rw-rw---- 1 alan alan 12 May 1 12:38 Naboo
```

Oh no! I hope they have winter gear on Naboo.

### ln

The **ln** command is used to make links between files. The [man page][4] describes the force option for **ln** as:

```
-f, --force
       remove existing destination files
```

Suppose Princess Leia is maintaining a Java application server, and she has a directory where all Java versions are stored. Here is an example:

```
leia@workstation:/usr/lib/java$ ls -lt
total 28
lrwxrwxrwx 1 leia leia 12 Mar 5 2018 jdk -> jdk1.8.0_162
drwxr-xr-x 8 leia leia 4096 Mar 5 2018 jdk1.8.0_162
drwxr-xr-x 8 leia leia 4096 Aug 28 2017 jdk1.8.0_144
```

As you can see, there are several versions of the Java Development Kit (JDK) and a symbolic link pointing to the latest one. She uses a script with the following commands to install new JDK versions. However, it won't work without a force option, or unless the root user runs it:

```
tar xvzmf jdk1.8.0_181.tar.gz -C jdk1.8.0_181/
ln -vs jdk1.8.0_181 jdk
```

The **tar** command will extract the .gz file to the specified directory, but the **ln** command will fail to upgrade the link because one already exists. The result will be that the link no longer points to the latest JDK:

```
leia@workstation:/usr/lib/java$ ln -vs jdk1.8.0_181 jdk
ln: failed to create symbolic link 'jdk/jdk1.8.0_181': File exists
leia@workstation:/usr/lib/java$ ls -lt
total 28
drwxr-x--- 2 leia leia 4096 May 1 15:44 jdk1.8.0_181
lrwxrwxrwx 1 leia leia 12 Mar 5 2018 jdk -> jdk1.8.0_162
drwxr-xr-x 8 leia leia 4096 Mar 5 2018 jdk1.8.0_162
drwxr-xr-x 8 leia leia 4096 Aug 28 2017 jdk1.8.0_144
```

She can force **ln** to update the link correctly by passing the force option and one other, **-n**. The **-n** is needed because the link points to a directory. Now, the link again points to the latest JDK:

```
leia@workstation:/usr/lib/java$ ln -vsnf jdk1.8.0_181 jdk
'jdk' -> 'jdk1.8.0_181'
leia@workstation:/usr/lib/java$ ls -lt
total 28
lrwxrwxrwx 1 leia leia 12 May 1 16:13 jdk -> jdk1.8.0_181
drwxr-x--- 2 leia leia 4096 May 1 15:44 jdk1.8.0_181
drwxr-xr-x 8 leia leia 4096 Mar 5 2018 jdk1.8.0_162
drwxr-xr-x 8 leia leia 4096 Aug 28 2017 jdk1.8.0_144
```

A Java application can then be configured to find the JDK at the path **/usr/lib/java/jdk**, instead of having to change the path every time Java is updated.

### rm

The **rm** command is short for "remove" (which we often call delete, since some other operating systems have a **del** command for this action). The [man page][5] describes the force option for **rm** as:

```
-f, --force
       ignore nonexistent files and arguments, never prompt
```

If you try to delete a read-only file, you will be prompted by **rm**:

```
[alan@workstation ~]$ ls -l
total 4
-r--r----- 1 alan alan 16 May 1 11:38 B-wing
[alan@workstation ~]$ rm B-wing
rm: remove write-protected regular file 'B-wing'?
```

You must type either **y** or **n** to answer the prompt and allow the **rm** command to proceed. If you use the force option, **rm** will not prompt you and will immediately delete the file:

```
[alan@workstation ~]$ rm -f B-wing
[alan@workstation ~]$ ls -l
total 0
[alan@workstation ~]$
```

The most common use of force with **rm** is to delete a directory. The **-r** (recursive) option tells **rm** to remove a directory. When combined with the force option, it will remove the directory and all its contents without prompting.

The **rm** command with certain options can be disastrous. Over the years, online forums have filled with jokes and horror stories of users completely wiping their systems. The notorious usage is **rm -rf ***, which immediately deletes all files and directories, without any prompt, wherever it is used.

### userdel

The **userdel** command is short for user delete, which will delete a user. The [man page][6] describes the force option for **userdel** as:

```
-f, --force
       This option forces the removal of the user account, even if the
       user is still logged in. It also forces userdel to remove the
       user's home directory and mail spool, even if another user uses
       the same home directory or if the mail spool is not owned by the
       specified user. If USERGROUPS_ENAB is defined to yes in
       /etc/login.defs and if a group exists with the same name as the
       deleted user, then this group will be removed, even if it is
       still the primary group of another user.

       Note: This option is dangerous and may leave your system in an
       inconsistent state.
```

When Obi-Wan reached the castle on Mustafar, he knew what had to be done. He had to delete Darth's user account—but Darth was still logged in.

```
[root@workstation ~]# ps -fu darth
UID        PID  PPID  C STIME TTY          TIME CMD
darth     7663  7655  0 13:28 pts/3    00:00:00 -bash
[root@workstation ~]# userdel darth
userdel: user darth is currently used by process 7663
```

Since Darth is currently logged in, Obi-Wan has to use the force option with **userdel**. This will delete the user account even though it's logged in:

```
[root@workstation ~]# userdel -f darth
userdel: user darth is currently used by process 7663
[root@workstation ~]# finger darth
finger: darth: no such user.
[root@workstation ~]# ps -fu darth
error: user name does not exist
```

As you can see, the **finger** and **ps** commands confirm the user Darth has been deleted.

### Using force in shell scripts

Many other commands have a force option. One place force is very useful is in shell scripts. Since we use scripts in cron jobs and other automated operations, avoiding any prompts is crucial, or else these automated processes will not complete.

I hope the four examples I shared above help you understand how certain circumstances may require the use of force. You should have a strong understanding of the force option when using it at the command line or in creating automation scripts. Its misuse can have devastating effects—sometimes across your infrastructure, and not only on a single machine.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/5/may-the-force-linux

作者:[Alan Formy-Duval][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/alanfdoss
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fireworks_light_art_design.jpg?itok=hfx9i4By (Fireworks)
[2]: https://www.starwars.com/star-wars-day
[3]: http://man7.org/linux/man-pages/man1/cp.1.html
[4]: http://man7.org/linux/man-pages/man1/ln.1.html
[5]: http://man7.org/linux/man-pages/man1/rm.1.html
[6]: http://man7.org/linux/man-pages/man8/userdel.8.html

@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 – An Introduction To Hyperledger Project (HLP) [Part 8])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)

Blockchain 2.0 – An Introduction To Hyperledger Project (HLP) [Part 8]
======

![Introduction To Hyperledger Project][1]

Once a new technology platform reaches a threshold level of popularity in terms of active development and commercial interest, major global companies and smaller start-ups alike rush to catch a slice of the pie. **Linux** was one such platform back in the day. Once the ubiquity of its applications was realized, individuals, firms, and institutions started displaying their interest in it, and by 2000 the **Linux Foundation** was formed.

The Linux Foundation aims to standardize and develop Linux as a platform by sponsoring its development teams. The Linux Foundation is a non-profit organization that is supported by software and IT behemoths such as Microsoft, Oracle, Samsung, Cisco, IBM, and Intel, among others[1]. This is excluding the hundreds of individual developers who offer their services for the betterment of the platform. Over the years the Linux Foundation has taken many projects under its roof. The **Hyperledger Project** is its fastest growing one to date.

Consortium-led development of this kind has a lot of advantages when it comes to maturing technology into usable, useful forms. Developing the standards, libraries, and all the back-end protocols for large-scale projects is expensive and resource intensive, without generating a shred of income. Hence, it makes sense for companies to pool their resources to develop the common "boring" parts by supporting such organizations, and later, once work on these standard parts is complete, to simply plug in and customize their own products. Apart from the economics of the model, such collaborative efforts also yield standards that allow for easier use and integration into aspiring products and services.

Other major innovations that were once or are currently being developed following the said consortium model include standards for Wi-Fi (the Wi-Fi Alliance) and mobile telephony.

### Introduction to Hyperledger Project (HLP)

The Hyperledger Project was launched in December 2015 by the Linux Foundation and is currently among the fastest growing projects it has incubated. It is an umbrella organization for collaborative efforts to develop and advance tools and standards for [**blockchain**][2]-based distributed ledger technologies (DLT). Major industry players supporting the project include **IBM**, **Intel**, and **SAP Ariba**, among [**others**][3]. The HLP aims to create frameworks for individuals and companies to create shared as well as closed blockchains, as required to further their own requirements. The design principles include a strong tilt toward developing a globally deployable, scalable, robust platform with a focus on privacy and future auditability[2]. It is also worth noting that most of the blockchains proposed under the project are permissioned platforms, a point we will come back to below.

### Development goals and structure: Making it plug & play

Although enterprise-facing platforms exist from the likes of the Ethereum alliance, HLP is by definition business facing and supported by industry behemoths who contribute to and further development of the many modules that come under the HLP banner. The HLP incubates projects in development after their induction into the cause and, after finishing work on them and ironing out the kinks, rolls them out for the public. Members of the Hyperledger Project contribute their own work; for instance, IBM contributed its Fabric platform for collaborative development. The codebase is absorbed and developed in-house by the group in the project and rolled out for all members equally for their use.

Such processes make the modules in HLP highly flexible plug-in frameworks which will support rapid development and roll-outs in enterprise settings. Furthermore, other comparable platforms are open **permission-less blockchains**, or rather **public chains**, by default, and even though it is possible to adapt them to specific applications, HLP modules support the permissioned feature natively.

The differences and use cases of public and private blockchains are covered more [**here**][4], in this comparative primer on the same.

The Hyperledger Project's mission is four-fold, according to **Brian Behlendorf**, the executive director of the project.

They are:

  1. To create an enterprise-grade DLT framework and standards which anyone can port to suit their specific industrial or personal needs.
  2. To give rise to a robust open source community to aid the ecosystem.
  3. To promote and further the participation of industry members of the said ecosystem, such as member firms.
  4. To host a neutral, unbiased infrastructure for the HLP community to gather and share updates and developments.

The original document can be accessed [**here**][5].

### Structure of the HLP

The **HLP consists of 12 projects** that are classified as independent modules, each usually structured and working independently to develop its module. These are first studied for their capabilities and viability before being incubated. Proposals for additions can be made by any member of the organization. After a project is incubated, active development ensues, after which it is rolled out. The interoperability between these modules is given a high priority, hence regular communication between these groups is maintained by the community. Currently, 4 of these projects are categorized as active. The active tag implies these are ready for use but not ready for a major release yet. These 4 are arguably the most significant, or rather fundamental, modules to furthering the blockchain revolution. We'll look at the individual modules and their functionalities in detail at a later time. However, a brief description of the Hyperledger Fabric platform, arguably the most popular among them, follows.

### Hyperledger Fabric

**Hyperledger Fabric**[2] is a fully open source, permissioned (non-public) blockchain-based DLT platform that is designed keeping enterprise uses in mind. The platform provides features and is structured to fit the enterprise environment. It is highly modular, allowing its developers to choose from different consensus protocols, **chaincode protocols ([smart contracts][6])**, or identity management systems as they go along. **It is a permissioned blockchain-based platform** that makes use of an identity management system, meaning participants will be aware of each other's identities, which is required in an enterprise setting. Fabric allows for smart contract (_**"chaincode"** is the term the Hyperledger team uses_) development in a variety of mainstream programming languages, including **Java**, **JavaScript**, and **Go**. This allows institutions and enterprises to make use of their existing talent in the area without hiring or re-training developers to develop their own smart contracts. Fabric also uses an execute-order-validate system to handle smart contracts, for better reliability compared to the standard order-validate system used by other platforms providing smart contract functionality. Pluggable performance, identity management systems, DBMSs, and consensus platforms are other features of Fabric that keep it miles ahead of its competition.

### Conclusion

Projects such as the Hyperledger Fabric platform enable a faster rate of adoption of blockchain technology in mainstream use cases. The Hyperledger community structure itself supports open governance principles, and since all the projects are led as open source platforms, this improves the security and accountability that the teams exhibit in pushing out commitments.

Since major applications of such projects involve working with enterprises to further the development of platforms and standards, the Hyperledger Project is currently at a great position with respect to comparable projects by others.

**References:**

  * **[1] [Samsung takes a seat with Intel and IBM at the Linux Foundation | TheINQUIRER][7]**
  * **[2] E. Androulaki et al., "Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains," 2018.**

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/

作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Introduction-To-Hyperledger-Project-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[3]: https://www.hyperledger.org/members
[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
[5]: http://www.hitachi.com/rev/archive/2017/r2017_01/expert/index.html
[6]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[7]: https://www.theinquirer.net/inquirer/news/2182438/samsung-takes-seat-intel-ibm-linux-foundation

@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 – Public Vs Private Blockchain Comparison [Part 7])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)

Blockchain 2.0 – Public Vs Private Blockchain Comparison [Part 7]
======

![Public vs Private blockchain][1]

The previous part of the [**Blockchain 2.0**][2] series explored [**the state of smart contracts**][3] today. This post intends to throw some light on the different types of blockchains that can be created. Each of these is used for vastly different applications and, depending on the use case, the protocol followed by each of them differs. Now let us go ahead and learn about the **public vs private blockchain comparison**, alongside open source and proprietary technology.

The fundamental three-layer structure of a blockchain-based distributed ledger, as we know it, is as follows:

![][4]

Figure 1 – Fundamental structure of Blockchain-based ledgers

The differences between the types mentioned here are attributable primarily to the protocol that rests on the underlying blockchain. The protocol dictates the rules for the participants and the behavior of the blockchain in response to the said participation.

Remember to keep the following things in mind while reading through this article:

  * Platforms such as these are always created to solve a use-case requirement. There is no single direction that is best for the technology to take. Blockchains, for instance, have tremendous applications, and some of these might require dropping features that seem significant in other settings. **Decentralized storage** is a major example in this regard.
  * Blockchains are basically database systems that keep track of information by timestamping and organizing data in the form of blocks (see the sketch after this list). Creators of such blockchains can choose who has the right to make these blocks and perform alterations.
  * Blockchains can be “centralized” as well, and participation, to varying extents, can be limited to those whom this “central authority” deems eligible.
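
To make the "timestamped blocks" point concrete, here is a minimal sketch in Python. All names are illustrative and this is not code from any real blockchain implementation; it only shows how each block's hash covers the previous block's hash, which is what makes recorded data hard to alter silently:

```
import hashlib
import json
import time

def make_block(data, previous_hash):
    """Bundle data into a timestamped block linked to its predecessor."""
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    # The block's hash covers its contents *and* the previous hash, so
    # altering any earlier block changes every hash that follows it.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block("genesis", previous_hash="0" * 64)
block_1 = make_block("Alice pays Bob 5", previous_hash=genesis["hash"])
block_2 = make_block("Bob pays Carol 2", previous_hash=block_1["hash"])
print(block_2["previous_hash"] == block_1["hash"])  # True: the chain link
```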

Most blockchains are either **public** or **private**. Broadly speaking, public blockchains can be considered the equivalent of open source software, and most private blockchains can be seen as proprietary platforms derived from the public ones. The figure below should make the basic difference obvious to most of you.

![][5]

Figure 2 – Public vs Private blockchain comparison with Open source and Proprietary Technology

This is not to say that all private blockchains are derived from open public ones. The most popular ones, however, usually are.

### Public Blockchains

A public blockchain can be considered a **permission-less platform** or **network**. Anyone with the know-how and computing resources can participate in it. This has the following implications:

  * Anyone can join and participate in a public blockchain network. All the “participant” needs is a stable internet connection along with computing resources.
  * Participation includes reading, writing, verifying, and providing consensus during transactions. An example of participating individuals would be **Bitcoin miners**; in exchange for participating in the network, the miners are paid back in bitcoins in this case.
  * The platform is completely decentralized and fully redundant.
  * Because of the decentralized nature, no one entity has complete control over the data recorded in the ledger. To validate a block, all (or most) participants need to vet the data.
  * This means that once information is verified and recorded, it cannot be altered easily. Even if it is, it’s impossible not to leave marks.
  * The identity of participants remains anonymous by design in platforms such as **BITCOIN** and **LITECOIN**. These platforms by design aim to protect and secure user identities. This is primarily a feature provided by the overlying protocol stack.
  * Examples of public blockchain networks are **BITCOIN**, **LITECOIN**, **ETHEREUM**, etc.
  * Extensive decentralization means that gaining consensus on transactions might take a while compared to what is typically possible over private blockchain ledger networks, and throughput can be a challenge for large enterprises aiming to push a very high number of transactions every instant.
  * The open participation, and often the high number of participants in open chains such as Bitcoin, add up to considerable initial investments in computing equipment and energy costs.

### Private Blockchain

In contrast, a private blockchain is a **permissioned blockchain**. Meaning:

  * Permission to participate in the network is restricted and is presided over by the owner or institution overseeing the network. This means that even though an individual will be able to store data and transact (send and receive payments, for example), the validation and storage of these transactions will be done only by select participants.
  * Even once permission is given by the central authority, participation will be limited by terms. For instance, in the case of a private blockchain network run by a financial institution, not every customer will have access to the entire blockchain ledger, and even among those with permission, not everyone will be able to access everything. Permission to access select services will be given by the central figure in this case. This is often referred to as **“channeling”**.
  * Such systems have significantly larger throughput capabilities and also showcase much faster transaction speeds compared to their public counterparts, because a block of information only needs to be validated by a select few.
  * Security by design is something the public blockchains are renowned for. They achieve this by:
    * Anonymizing participants,
    * Distributed and redundant but encrypted storage on multiple nodes,
    * Mass consensus required for creating and altering data.

Private blockchains usually don’t feature any of these in their protocol. This makes the system only as secure as most cloud-based database systems currently in use.

### A note for the wise

An important point to note is that the fact that they’re named public or private (or open or closed) has nothing to do with the underlying code base. The code, or the literal foundations on which the platforms are based, may or may not be publicly available or developed in either of these cases. **R3** is a **DLT** (**D**istributed **L**edger **T**echnology) company that leads a public consortium of over 200 multinational institutions. Their aim is to further the development of blockchain and related distributed ledger technology in the domain of finance and commerce. **Corda** is the product of this joint effort. R3 defines Corda as a blockchain platform that is built specially for businesses. The codebase for the same is open source, and developers all over the world are encouraged to contribute to the project. However, given its business-facing nature and the needs it is meant to address, Corda would be categorized as a permissioned, closed blockchain platform. Meaning businesses can choose the participants of the network once it is deployed, and can choose the kind of information these participants can access through the use of natively available smart contract tools.

While it is a reality that public platforms like Bitcoin and Ethereum are responsible for the widespread awareness and development going on in the space, it can still be argued that private blockchains designed for specific use cases in enterprise or business settings are what will lead monetary investments in the short run. These are the platforms most of us will see implemented in practical ways in the near future.

Read the next guide about the Hyperledger project in this series:

  * [**Blockchain 2.0 – An Introduction To Hyperledger Project (HLP)**][6]

We are working on many interesting topics on Blockchain technology. Stay tuned!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/

作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Public-Vs-Private-Blockchain-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[3]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/
[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/blockchain-architecture.png
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/Public-vs-Private-blockchain-comparison.png
[6]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
@ -0,0 +1,83 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Blockchain 2.0 – What Is Ethereum [Part 9])
|
||||
[#]: via: (https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/)
|
||||
[#]: author: (editor https://www.ostechnix.com/author/editor/)
|
||||
|
||||
Blockchain 2.0 – What Is Ethereum [Part 9]
|
||||
======
|
||||
|
||||
![Ethereum][1]
|
||||
|
||||
In the previous guide of this series, we discussed about [**Hyperledger Project (HLP)**][2], a fastest growing product developed by **Linux Foundation**. In this guide, we are going to discuss about what is **Ethereum** and its features in detail. Many researchers opine that the future of the internet will be based on principles of decentralized computing. Decentralized computing was in fact among one of the broader objectives of having the internet in the first place. However, the internet took another turn owing to differences in computing capabilities available. While modern server capabilities make the case for server-side processing and execution, lack of decent mobile networks in large parts of the world make the case for the same on the client side. Modern smartphones now have **SoCs** (system on a chip or system on chip) capable of handling many such operations on the client side itself, however, limitations owing to retrieving and storing data securely still pushes developers to have server-side computing and data management. Hence, a bottleneck in regards to data transfer capabilities is currently observed.
|
||||
|
||||
All of that might soon change because of advancements in distributed data storage and program execution platforms. [**The blockchain**][3], for the first time in the history of the internet, basically allows for secure data management and program execution on a distributed network of users as opposed to central servers.
|
||||
|
||||
**Ethereum** is one such blockchain platform that gives developers access to frameworks and tools used to build and run applications on such a decentralized network. Though more popularly known in general for its cryptocurrency, Ethereum is more than just **ethers** (the cryptocurrency). It features a full **Turing complete programming language** that is designed to develop and deploy **DApps**, or **Distributed APPlications** [1]. We’ll look at DApps in more detail in one of the upcoming posts.
|
||||
|
||||
Ethereum is open source, supports a public (non-permissioned) blockchain by default, and features an extensive smart contract platform (with its **Solidity** language) underneath. Ethereum provides a virtual computing environment called the **Ethereum virtual machine** to run applications and [**smart contracts**][4] as well [2]. The Ethereum virtual machine runs on thousands of participating nodes all over the world, meaning the application data, while being secure, is almost impossible to tamper with or lose.
|
||||
|
||||
### Getting behind Ethereum: What sets it apart
|
||||
|
||||
In 2017, a group of more than 30 of the who’s who of the tech and financial world got together to leverage the Ethereum blockchain’s capabilities. Thus, the **Ethereum Enterprise Alliance (EEA)** was formed by a long list of supporting members including _Microsoft_, _JP Morgan_, _Cisco Systems_, _Deloitte_, and _Accenture_. JP Morgan already has **Quorum**, a decentralized computing platform for financial services based on Ethereum currently in operation, while Microsoft has Ethereum-based cloud services it markets through its Azure cloud business [3].
|
||||
|
||||
### What is ether and how is it related to Ethereum
|
||||
|
||||
Ethereum creator **Vitalik Buterin** understood the true value of a decentralized processing platform and the underlying blockchain tech that powered Bitcoin. He failed to gain majority agreement for his proposal that Bitcoin be developed to support running distributed applications (DApps) and programs (now referred to as smart contracts).
|
||||
|
||||
Hence in 2013, he proposed the idea of Ethereum in a white paper he published. The original white paper is still maintained and available for readers **[here][5]**. The idea was to develop a blockchain based platform to run smart contracts and applications designed to run on nodes and user devices instead of servers.
|
||||
|
||||
The Ethereum system is often mistaken to mean just the cryptocurrency ether; however, it has to be reiterated that Ethereum is a full-stack platform for developing applications as well as executing them, and has been so since its inception, whereas Bitcoin isn’t. **Ether is currently the second biggest cryptocurrency** by market capitalization and trades at an average of $170 per ether at the time of writing this article [4].
|
||||
|
||||
### Features and technicalities of the platform [5]
|
||||
|
||||
* As we’ve already mentioned, the cryptocurrency called ether is simply one of the things the platform features. The purpose of the system is more than taking care of financial transactions. In fact, the key difference between the Ethereum platform and Bitcoin is in their scripting capabilities. Ethereum is developed in a Turing complete programming language, which means it has scripting and application capabilities similar to other major programming languages. Developers require this feature to create DApps and complex smart contracts on the platform, a feature that Bitcoin lacks.
|
||||
* The “mining” process of ether is more stringent and complex. While specialized ASICs may be used to mine Bitcoin, the basic hashing algorithm used by Ethereum (**Ethash**) reduces the advantage that ASICs have in this regard.
|
||||
* The transaction fee itself, to be paid as an incentive to miners and node operators for running the network, is calculated using a computational token called **Gas**. Gas improves the system’s resilience and resistance to external hacks and attacks by requiring the initiator of a transaction to pay ethers proportionate to the amount of computational resources required to carry out that transaction. This is in contrast to other platforms such as Bitcoin, where the transaction fee is measured in tandem with the transaction size. As such, the average transaction cost in Ethereum is radically less than in Bitcoin. This also implies that applications running on the Ethereum virtual machine pay a fee depending directly on the computational problems that the application is meant to solve. Basically, the more complex an execution, the higher the fee (a rough worked example follows this list).
|
||||
* The block time for Ethereum is estimated to be around _**10-15 seconds**_. The block time is the average time required to timestamp and create a block on the blockchain network. Compared to the 10+ minutes the same transaction will take on the Bitcoin network, it becomes apparent that _**Ethereum is much faster**_ with respect to transactions and verification of blocks.
|
||||
* _It is also interesting to note that there is no hard cap on the amount of ether that can be mined or the rate at which ether can be mined leading to less radical system design than bitcoin._
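As a rough worked example of the Gas mechanism mentioned above (the numbers here are illustrative assumptions, not current prices): a plain ether transfer costs a fixed 21,000 units of gas, so at an assumed gas price of 2 gwei (1 gwei = 10^-9 ether) the fee comes to 21,000 × 2 = 42,000 gwei, i.e. 0.000042 ether, regardless of how much ether is being moved, since the fee tracks computation rather than transaction value.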
|
||||
|
||||
|
||||
|
||||
### Conclusion
|
||||
|
||||
While Ethereum is comparable to, and in many respects far outpaces, similar platforms, the platform itself lacked a definite path for development until the Ethereum Enterprise Alliance started pushing it. While the definite push for enterprise developments is made by the Ethereum platform, it has to be noted that Ethereum caters to small-time developers and individuals as well. As such, developing the platform for both end users and enterprises leaves a lot of use-case-specific functionality out of the loop for Ethereum. Also, the blockchain model proposed and developed by the Ethereum foundation is a public model, whereas the one proposed by projects such as the Hyperledger project is private and permissioned.
|
||||
|
||||
While only time can tell which platform among the ones put forward by Ethereum, Hyperledger, and R3 Corda, among others, will find the most fans in real-world use cases, such systems do prove the validity of the claim of a blockchain-powered future.
|
||||
|
||||
**References:**
|
||||
|
||||
* [1] [**Gabriel Nicholas, “Ethereum Is Coding’s New Wild West | WIRED,” Wired , 2017**][6].
|
||||
* [2] [**What is Ethereum? — Ethereum Homestead 0.1 documentation**][7].
|
||||
* [3] [**Ethereum, a Virtual Currency, Enables Transactions That Rival Bitcoin’s – The New York Times**][8].
|
||||
* [4] [**Cryptocurrency Market Capitalizations | CoinMarketCap**][9].
|
||||
* [5] [**Introduction — Ethereum Homestead 0.1 documentation**][10].
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
|
||||
|
||||
作者:[editor][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/editor/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Ethereum-720x340.png
|
||||
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
|
||||
[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
|
||||
[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
|
||||
[5]: https://github.com/ethereum/wiki/wiki/White-Paper
|
||||
[6]: https://www.wired.com/story/ethereum-is-codings-new-wild-west/
|
||||
[7]: http://www.ethdocs.org/en/latest/introduction/what-is-ethereum.html#ethereum-virtual-machine
|
||||
[8]: https://www.nytimes.com/2016/03/28/business/dealbook/ethereum-a-virtual-currency-enables-transactions-that-rival-bitcoins.html
|
||||
[9]: https://coinmarketcap.com/
|
||||
[10]: http://www.ethdocs.org/en/latest/introduction/index.html
|
@ -0,0 +1,261 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Duc – A Collection Of Tools To Inspect And Visualize Disk Usage)
|
||||
[#]: via: (https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
Duc – A Collection Of Tools To Inspect And Visualize Disk Usage
|
||||
======
|
||||
|
||||
![Duc - A Collection Of Tools To Inspect And Visualize Disk Usage][1]
|
||||
|
||||
**Duc** is a collection of tools that can be used to index, inspect and visualize disk usage on Unix-like operating systems. Don’t think of it as a simple CLI tool that merely displays a fancy graph of your disk usage. It is built to scale quite well on huge filesystems. Duc has been tested on systems that consisted of more than 500 million files and several petabytes of storage without any problems.
|
||||
|
||||
Duc is a quite fast and versatile tool. It stores your disk usage in an optimized database, so you can quickly find where your bytes are as soon as the index is completed. In addition, it comes with various user interfaces and back-ends to access the database and draw the graphs.
|
||||
|
||||
Here is the list of currently supported user interfaces (UI):
|
||||
|
||||
1. Command line interface (ls),
|
||||
2. Ncurses console interface (ui),
|
||||
3. X11 GUI (duc gui),
|
||||
4. OpenGL GUI (duc gui).
|
||||
|
||||
|
||||
|
||||
List of supported database back-ends:
|
||||
|
||||
* Tokyocabinet,
|
||||
* Leveldb,
|
||||
* Sqlite3.
|
||||
|
||||
|
||||
|
||||
Duc uses **Tokyocabinet** as the default database backend.
|
||||
|
||||
### Install Duc
|
||||
|
||||
Duc is available in the default repositories of Debian and its derivatives such as Ubuntu. So installing Duc on DEB-based systems is a piece of cake.
|
||||
|
||||
```
|
||||
$ sudo apt-get install duc
|
||||
```
|
||||
|
||||
On other Linux distributions, you may need to manually compile and install Duc from source as shown below.
|
||||
|
||||
Download the latest duc source .tar.gz file from the [**releases**][2] page on GitHub. As of writing this guide, the latest version was **1.4.4**.
|
||||
|
||||
```
|
||||
$ wget https://github.com/zevv/duc/releases/download/1.4.4/duc-1.4.4.tar.gz
|
||||
```
|
||||
|
||||
Then run the following commands one by one to install DUC.
|
||||
|
||||
```
|
||||
$ tar -xzf duc-1.4.4.tar.gz
|
||||
$ cd duc-1.4.4
|
||||
$ ./configure
|
||||
$ make
|
||||
$ sudo make install
|
||||
```
|
||||
|
||||
### Duc Usage
|
||||
|
||||
The typical usage of duc is:
|
||||
|
||||
```
|
||||
$ duc <subcommand> <options>
|
||||
```
|
||||
|
||||
You can view the list of general options and sub-commands by running the following command:
|
||||
|
||||
```
|
||||
$ duc help
|
||||
```
|
||||
|
||||
You can also view the usage of a specific subcommand as shown below.
|
||||
|
||||
```
|
||||
$ duc help <subcommand>
|
||||
```
|
||||
|
||||
To view the extensive list of all commands and their options, simply run:
|
||||
|
||||
```
|
||||
$ duc help --all
|
||||
```
|
||||
|
||||
Let us now see some practical use cases of the duc utility.
|
||||
|
||||
### Create Index (database)
|
||||
|
||||
First of all, you need to create an index file (database) of your filesystem. To create an index file, use the “duc index” command.
|
||||
|
||||
For example, to create an index of your **/home** directory, simply run:
|
||||
|
||||
```
|
||||
$ duc index /home
|
||||
```
|
||||
|
||||
The above command will create the index of your /home/ directory and save it in the **$HOME/.duc.db** file. If you add new files/directories in the /home directory in the future, just re-run the above command at any time to rebuild the index.
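If you prefer to keep the database somewhere other than the default **$HOME/.duc.db**, duc’s subcommands accept a database option. A minimal sketch (the path here is just an example):

```
$ duc index -d /tmp/duc-home.db /home    # index /home into a custom database file
$ duc ls -d /tmp/duc-home.db /home       # query that same database later
```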
|
||||
|
||||
### Query Index
|
||||
|
||||
Duc has various sub-commands to query and explore the index.
|
||||
|
||||
To view the list of available indexes, run:
|
||||
|
||||
```
|
||||
$ duc info
|
||||
```
|
||||
|
||||
**Sample output:**
|
||||
|
||||
```
|
||||
Date Time Files Dirs Size Path
|
||||
2019-04-09 15:45:55 3.5K 305 654.6M /home
|
||||
```
|
||||
|
||||
As you see in the above output, I have already indexed the /home directory.
|
||||
|
||||
To list all files and directories in the current working directory, you can do:
|
||||
|
||||
```
|
||||
$ duc ls
|
||||
```
|
||||
|
||||
To list files/directories in a specific directory, for example **/home/sk/Downloads** , just pass the path as argument like below.
|
||||
|
||||
```
|
||||
$ duc ls /home/sk/Downloads
|
||||
```
|
||||
|
||||
Similarly, run the **“duc ui”** command to open an **ncurses**-based console user interface for exploring the file system usage, and run **“duc gui”** to start a **graphical (X11)** interface to explore the file system.
|
||||
|
||||
To know more about a sub-command’s usage, simply refer to the help section.
|
||||
|
||||
```
|
||||
$ duc help ls
|
||||
```
|
||||
|
||||
The above command will display the help section of “ls” subcommand.
|
||||
|
||||
### Visualize Disk Usage
|
||||
|
||||
In the previous section, we have seen how to list files and directories using duc subcommands. In addition, you can even show the file sizes in a fancy graph.
|
||||
|
||||
To show the graph of a given path, use “ls” subcommand like below.
|
||||
|
||||
```
|
||||
$ duc ls -Fg /home/sk
|
||||
```
|
||||
|
||||
Sample output:
|
||||
|
||||
![][3]
|
||||
|
||||
Visualize disk usage using “duc ls” command
|
||||
|
||||
As you see in the above output, the “ls” subcommand queries the duc database and lists the inclusive size of all files and directories of the given path, i.e. **/home/sk/** in this case.
|
||||
|
||||
Here, the **“-F”** option is used to append a file type indicator (one of */) to entries, and the **“-g”** option is used to draw a graph with the relative size of each entry.
|
||||
|
||||
Please note that if no path is given, the current working directory is explored.
|
||||
|
||||
You can use **-R** option to view the disk usage result in [**tree**][4] structure.
|
||||
|
||||
```
|
||||
$ duc ls -R /home/sk
|
||||
```
|
||||
|
||||
![][5]
|
||||
|
||||
Visualize disk usage in tree structure
|
||||
|
||||
To query the duc database and open an **ncurses**-based console user interface for exploring the disk usage of a given path, use the **“ui”** subcommand like below.
|
||||
|
||||
```
|
||||
$ duc ui /home/sk
|
||||
```
|
||||
|
||||
![][6]
|
||||
|
||||
Similarly, we use the **“gui”** subcommand to query the duc database and start a **graphical (X11)** interface to explore the disk usage of the given path:
|
||||
|
||||
```
|
||||
$ duc gui /home/sk
|
||||
```
|
||||
|
||||
![][7]
|
||||
|
||||
As I already mentioned earlier, we can learn more about a subcommand’s usage like below.
|
||||
|
||||
```
|
||||
$ duc help <subcommand-name>
|
||||
```
|
||||
|
||||
I covered only the basic usage here. Refer to the man page for more details about the “duc” tool.
|
||||
|
||||
```
|
||||
$ man duc
|
||||
```
|
||||
|
||||
* * *
|
||||
|
||||
**Related read:**
|
||||
|
||||
* [**Filelight – Visualize Disk Usage On Your Linux System**][8]
|
||||
* [**Some Good Alternatives To ‘du’ Command**][9]
|
||||
* [**How To Check Disk Space Usage In Linux Using Ncdu**][10]
|
||||
* [**Agedu – Find Out Wasted Disk Space In Linux**][11]
|
||||
* [**How To Find The Size Of A Directory In Linux**][12]
|
||||
* [**The df Command Tutorial With Examples For Beginners**][13]
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
### Conclusion
|
||||
|
||||
Duc is a simple yet useful disk usage viewer. If you want to quickly and easily know which files/directories are eating up your disk space, Duc might be a good choice. What are you waiting for? Go get this tool already, scan your filesystem and get rid of unused files/directories.
|
||||
|
||||
And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
**Resource:**
|
||||
|
||||
* [**Duc website**][14]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/duc-a-collection-of-tools-to-inspect-and-visualize-disk-usage/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/duc-720x340.png
|
||||
[2]: https://github.com/zevv/duc/releases
|
||||
[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-1-1.png
|
||||
[4]: https://www.ostechnix.com/view-directory-tree-structure-linux/
|
||||
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-2.png
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-3.png
|
||||
[7]: http://www.ostechnix.com/wp-content/uploads/2019/04/duc-4.png
|
||||
[8]: https://www.ostechnix.com/filelight-visualize-disk-usage-on-your-linux-system/
|
||||
[9]: https://www.ostechnix.com/some-good-alternatives-to-du-command/
|
||||
[10]: https://www.ostechnix.com/check-disk-space-usage-linux-using-ncdu/
|
||||
[11]: https://www.ostechnix.com/agedu-find-out-wasted-disk-space-in-linux/
|
||||
[12]: https://www.ostechnix.com/find-size-directory-linux/
|
||||
[13]: https://www.ostechnix.com/the-df-command-tutorial-with-examples-for-beginners/
|
||||
[14]: https://duc.zevv.nl/
|
@ -0,0 +1,183 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Five Methods To Check Your Current Runlevel In Linux?)
|
||||
[#]: via: (https://www.2daygeek.com/check-current-runlevel-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Five Methods To Check Your Current Runlevel In Linux?
|
||||
======
|
||||
|
||||
A runlevel is an operating system state on a Linux system.
|
||||
|
||||
There are seven runlevels, numbered from zero to six.
|
||||
|
||||
A system can be booted into any of the given runlevels. Runlevels are identified by numbers.
|
||||
|
||||
Each runlevel designates a different system configuration and allows access to a different combination of processes.
|
||||
|
||||
By default Linux boots either to runlevel 3 or to runlevel 5.
|
||||
|
||||
Only one runlevel is active at a time on startup; runlevels are not executed one after another.
|
||||
|
||||
The default runlevel for a system is specified in the /etc/inittab file on SysVinit systems.
|
||||
|
||||
But systemd systems don’t read this file; they use the file `/etc/systemd/system/default.target` to get the default runlevel information.
|
||||
|
||||
We can check the Linux system’s current runlevel using the five methods below.
|
||||
|
||||
* **`runlevel Command:`** runlevel prints the previous and current runlevel of the system.
|
||||
* **`who Command:`** Print information about users who are currently logged in. It will print the runlevel information with the “-r” option.
|
||||
* **`systemctl Command:`** It controls the systemd system and service manager.
|
||||
* **`Using /etc/inittab File:`** The default runlevel for a system is specified in the /etc/inittab file for SysVinit System.
|
||||
* **`Using /etc/systemd/system/default.target File:`** The default runlevel for a system is specified in the /etc/systemd/system/default.target file for systemd System.
|
||||
|
||||
|
||||
|
||||
Detailed runlevel information is described in the table below.
|
||||
|
||||
**Runlevel** | **SysVinit System** | **systemd System**
|
||||
---|---|---
|
||||
0 | Shutdown or Halt the system | shutdown.target
|
||||
1 | Single user mode | rescue.target
|
||||
2 | Multiuser, without NFS | multi-user.target
|
||||
3 | Full multiuser mode | multi-user.target
|
||||
4 | unused | multi-user.target
|
||||
5 | X11 (Graphical User Interface) | graphical.target
|
||||
6 | reboot the system | reboot.target
|
||||
|
||||
The system will execute the programs/services based on the runlevel.
|
||||
|
||||
For SysVinit systems, the scripts will be executed from the following locations.
|
||||
|
||||
* Run level 0 – /etc/rc.d/rc0.d/
|
||||
* Run level 1 – /etc/rc.d/rc1.d/
|
||||
* Run level 2 – /etc/rc.d/rc2.d/
|
||||
* Run level 3 – /etc/rc.d/rc3.d/
|
||||
* Run level 4 – /etc/rc.d/rc4.d/
|
||||
* Run level 5 – /etc/rc.d/rc5.d/
|
||||
* Run level 6 – /etc/rc.d/rc6.d/
|
||||
|
||||
|
||||
|
||||
For systemd systems, the units will be pulled in from the following locations.
|
||||
|
||||
* runlevel1.target – /etc/systemd/system/rescue.target
|
||||
* runlevel2.target – /etc/systemd/system/multi-user.target.wants
|
||||
* runlevel3.target – /etc/systemd/system/multi-user.target.wants
|
||||
* runlevel4.target – /etc/systemd/system/multi-user.target.wants
|
||||
* runlevel5.target – /etc/systemd/system/graphical.target.wants
|
||||
|
||||
|
||||
|
||||
### 1) How To Check Your Current Runlevel In Linux Using runlevel Command?
|
||||
|
||||
runlevel prints the previous and current runlevel of the system.
|
||||
|
||||
```
|
||||
$ runlevel
|
||||
N 5
|
||||
```
|
||||
|
||||
* **`N:`** “N” indicates that the runlevel has not been changed since the system was booted.
|
||||
* **`5:`** “5” indicates the current runlevel of the system.
|
||||
|
||||
|
||||
|
||||
### 2) How To Check Your Current Runlevel In Linux Using who Command?
|
||||
|
||||
Print information about users who are currently logged in. It will print the runlevel information with the `-r` option.
|
||||
|
||||
```
|
||||
$ who -r
|
||||
run-level 5 2019-04-22 09:32
|
||||
```
|
||||
|
||||
### 3) How To Check Your Current Runlevel In Linux Using systemctl Command?
|
||||
|
||||
systemctl is used to control the systemd system and service manager. systemd is a system and service manager for Unix-like operating systems.
|
||||
|
||||
It can work as a drop-in replacement for the SysVinit system. systemd is the first process started by the kernel and holds PID 1.
|
||||
|
||||
systemd uses `.service` files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups, and you can see the system hierarchy by exploring the `/sys/fs/cgroup/systemd/` directory.
|
||||
|
||||
```
|
||||
$ systemctl get-default
|
||||
graphical.target
|
||||
```
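The same command family can also change the default target, which is the systemd counterpart of setting the default runlevel. A minimal sketch (here switching the default to the non-graphical multi-user target):

```
$ sudo systemctl set-default multi-user.target
```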
|
||||
|
||||
### 4) How To Check Your Current Runlevel In Linux Using /etc/inittab File?
|
||||
|
||||
The default runlevel for a system is specified in the /etc/inittab file on SysVinit systems, but systemd doesn’t read this file.
|
||||
|
||||
So, this method works only on SysVinit systems and not on systemd systems.
|
||||
|
||||
```
|
||||
$ cat /etc/inittab
|
||||
# inittab is only used by upstart for the default runlevel.
|
||||
#
|
||||
# ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.
|
||||
#
|
||||
# System initialization is started by /etc/init/rcS.conf
|
||||
#
|
||||
# Individual runlevels are started by /etc/init/rc.conf
|
||||
#
|
||||
# Ctrl-Alt-Delete is handled by /etc/init/control-alt-delete.conf
|
||||
#
|
||||
# Terminal gettys are handled by /etc/init/tty.conf and /etc/init/serial.conf,
|
||||
# with configuration in /etc/sysconfig/init.
|
||||
#
|
||||
# For information on how to write upstart event handlers, or how
|
||||
# upstart works, see init(5), init(8), and initctl(8).
|
||||
#
|
||||
# Default runlevel. The runlevels used are:
|
||||
# 0 - halt (Do NOT set initdefault to this)
|
||||
# 1 - Single user mode
|
||||
# 2 - Multiuser, without NFS (The same as 3, if you do not have networking)
|
||||
# 3 - Full multiuser mode
|
||||
# 4 - unused
|
||||
# 5 - X11
|
||||
# 6 - reboot (Do NOT set initdefault to this)
|
||||
#
|
||||
id:5:initdefault:
|
||||
```
|
||||
|
||||
### 5) How To Check Your Current Runlevel In Linux Using /etc/systemd/system/default.target File?
|
||||
|
||||
The default runlevel for a system is specified in the /etc/systemd/system/default.target file on systemd systems.
|
||||
|
||||
It doesn’t work on SysVinit systems.
|
||||
|
||||
```
|
||||
$ cat /etc/systemd/system/default.target
|
||||
# This file is part of systemd.
|
||||
#
|
||||
# systemd is free software; you can redistribute it and/or modify it
|
||||
# under the terms of the GNU Lesser General Public License as published by
|
||||
# the Free Software Foundation; either version 2.1 of the License, or
|
||||
# (at your option) any later version.
|
||||
|
||||
[Unit]
|
||||
Description=Graphical Interface
|
||||
Documentation=man:systemd.special(7)
|
||||
Requires=multi-user.target
|
||||
Wants=display-manager.service
|
||||
Conflicts=rescue.service rescue.target
|
||||
After=multi-user.target rescue.service rescue.target display-manager.service
|
||||
AllowIsolate=yes
|
||||
```
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/check-current-runlevel-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
@ -0,0 +1,209 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Create SSH Alias In Linux)
|
||||
[#]: via: (https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
How To Create SSH Alias In Linux
|
||||
======
|
||||
|
||||
![How To Create SSH Alias In Linux][1]
|
||||
|
||||
If you frequently access a lot of different remote systems via SSH, this trick will save you some time. You can create SSH aliases for frequently-accessed systems. This way you need not remember all the different usernames, hostnames, SSH port numbers and IP addresses. Additionally, it avoids the need to repetitively type the same username/hostname, IP address and port number whenever you SSH into a Linux server.
|
||||
|
||||
### Create SSH Alias In Linux
|
||||
|
||||
Before I knew this trick, I usually connected to a remote system over SSH using any one of the following ways.
|
||||
|
||||
Using IP address:
|
||||
|
||||
```
|
||||
$ ssh 192.168.225.22
|
||||
```
|
||||
|
||||
Or using port number, username and IP address:
|
||||
|
||||
```
|
||||
$ ssh -p 22 sk@192.168.225.22
|
||||
```
|
||||
|
||||
Or using port number, username and hostname:
|
||||
|
||||
```
|
||||
$ ssh -p 22 sk@server.example.com
|
||||
```
|
||||
|
||||
Here,
|
||||
|
||||
* **22** is the port number,
|
||||
* **sk** is the username of the remote system,
|
||||
* **192.168.225.22** is the IP of my remote system,
|
||||
* **server.example.com** is the hostname of remote system.
|
||||
|
||||
|
||||
|
||||
I believe most newbie Linux users and/or admins would SSH into a remote system this way. However, if you SSH into multiple different systems, remembering all the hostnames/IP addresses and usernames is a bit difficult, unless you write them down on paper or save them in a text file. No worries! This can be easily solved by creating an alias (or shortcut) for SSH connections.
|
||||
|
||||
We can create an alias for SSH commands using two methods.
|
||||
|
||||
##### Method 1 – Using SSH Config File
|
||||
|
||||
This is my preferred way of creating aliases.
|
||||
|
||||
We can use SSH default configuration file to create SSH alias. To do so, edit **~/.ssh/config** file (If this file doesn’t exist, just create one):
|
||||
|
||||
```
|
||||
$ vi ~/.ssh/config
|
||||
```
|
||||
|
||||
Add all of your remote hosts details like below:
|
||||
|
||||
```
|
||||
Host webserver
|
||||
HostName 192.168.225.22
|
||||
User sk
|
||||
|
||||
Host dns
|
||||
HostName server.example.com
|
||||
User root
|
||||
|
||||
Host dhcp
|
||||
HostName 192.168.225.25
|
||||
User ostechnix
|
||||
Port 2233
|
||||
```
|
||||
|
||||
![][2]
|
||||
|
||||
Create SSH Alias In Linux Using SSH Config File
|
||||
|
||||
Replace the values of **Host**, **Hostname**, **User** and **Port** with your own. Once you have added the details of all remote hosts, save and exit the file.
|
||||
|
||||
Now you can SSH into the systems with commands:
|
||||
|
||||
```
|
||||
$ ssh webserver
|
||||
|
||||
$ ssh dns
|
||||
|
||||
$ ssh dhcp
|
||||
```
|
||||
|
||||
It is as simple as that.
|
||||
|
||||
Have a look at the following screenshot.
|
||||
|
||||
![][3]
|
||||
|
||||
Access remote system using SSH alias
|
||||
|
||||
See? I only used the alias name (i.e **webserver** ) to access my remote system that has IP address **192.168.225.22**.
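If you ever want to double-check what OpenSSH actually resolves for an alias, recent OpenSSH versions can print the effective configuration with the `-G` option. A minimal sketch (output trimmed for illustration):

```
$ ssh -G webserver | grep -E '^(hostname|user|port) '
user sk
hostname 192.168.225.22
port 22
```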
|
||||
|
||||
Please note that this applies to the current user only. If you want to make the aliases available for all users (system wide), add the above lines to the **/etc/ssh/ssh_config** file.
|
||||
|
||||
You can also add plenty of other things in the SSH config file. For example, if you have [**configured SSH Key-based authentication**][4], mention the SSH keyfile location as below.
|
||||
|
||||
```
|
||||
Host ubuntu
|
||||
HostName 192.168.225.50
|
||||
User senthil
|
||||
IdentityFile ~/.ssh/id_rsa_remotesystem
|
||||
```
|
||||
|
||||
Make sure you have replaced the hostname, username and SSH keyfile path with your own.
|
||||
|
||||
Now connect to the remote server with command:
|
||||
|
||||
```
|
||||
$ ssh ubuntu
|
||||
```
|
||||
|
||||
This way, you can add as many remote hosts as you want to access over SSH and quickly access them using their alias names.
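The config file can express more advanced setups too. As a hedged illustration (the host names and addresses below are hypothetical), the `ProxyJump` directive available in newer OpenSSH releases lets an alias transparently hop through a bastion host:

```
Host internal
HostName 10.0.0.5
User sk
ProxyJump sk@bastion.example.com
```

With that in place, `ssh internal` first connects to the bastion and then on to the internal machine.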
|
||||
|
||||
##### Method 2 – Using Bash aliases
|
||||
|
||||
This is a quick and dirty way to create SSH aliases for faster communication. You can use the [**alias command**][5] to make this task much easier.
|
||||
|
||||
Open **~/.bashrc** or **~/.bash_profile** file:
|
||||
|
||||
Add aliases for each SSH connection one by one as shown below.
|
||||
|
||||
```
|
||||
alias webserver='ssh sk@192.168.225.22'
|
||||
alias dns='ssh root@server.example.com'
|
||||
alias dhcp='ssh ostechnix@192.168.225.25 -p 2233'
|
||||
alias ubuntu='ssh senthil@192.168.225.50 -i ~/.ssh/id_rsa_remotesystem'
|
||||
```
|
||||
|
||||
Again, make sure you have replaced the hostnames, usernames, port numbers and IP addresses with your own. Save the file and exit.
|
||||
|
||||
Then, apply the changes using command:
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
```
|
||||
|
||||
Or,
|
||||
|
||||
```
|
||||
$ source ~/.bash_profile
|
||||
```
|
||||
|
||||
In this method, you don’t even need to use the “ssh alias-name” command. Instead, just use the alias name like below.
|
||||
|
||||
```
|
||||
$ webserver
|
||||
$ dns
|
||||
$ dhcp
|
||||
$ ubuntu
|
||||
```
|
||||
|
||||
![][6]
|
||||
|
||||
These two methods are very simple, yet useful and much more convenient for those who often SSH into multiple different systems. Use whichever of the aforementioned methods suits you to quickly access your remote Linux systems over SSH.
|
||||
|
||||
* * *
|
||||
|
||||
**Suggested read:**
|
||||
|
||||
* [**Allow Or Deny SSH Access To A Particular User Or Group In Linux**][7]
|
||||
* [**How To SSH Into A Particular Directory On Linux**][8]
|
||||
* [**How To Stop SSH Session From Disconnecting In Linux**][9]
|
||||
* [**4 Ways To Keep A Command Running After You Log Out Of The SSH Session**][10]
|
||||
* [**SSLH – Share A Same Port For HTTPS And SSH**][11]
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/how-to-create-ssh-alias-in-linux/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/ssh-alias-720x340.png
|
||||
[2]: http://www.ostechnix.com/wp-content/uploads/2019/04/Create-SSH-Alias-In-Linux.png
|
||||
[3]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias.png
|
||||
[4]: https://www.ostechnix.com/configure-ssh-key-based-authentication-linux/
|
||||
[5]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/create-ssh-alias-1.png
|
||||
[7]: https://www.ostechnix.com/allow-deny-ssh-access-particular-user-group-linux/
|
||||
[8]: https://www.ostechnix.com/how-to-ssh-into-a-particular-directory-on-linux/
|
||||
[9]: https://www.ostechnix.com/how-to-stop-ssh-session-from-disconnecting-in-linux/
|
||||
[10]: https://www.ostechnix.com/4-ways-keep-command-running-log-ssh-session/
|
||||
[11]: https://www.ostechnix.com/sslh-share-port-https-ssh/
|
@ -0,0 +1,338 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Install/Uninstall Listed Packages From A File In Linux?)
|
||||
[#]: via: (https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-file-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How To Install/Uninstall Listed Packages From A File In Linux?
|
||||
======
|
||||
|
||||
In some cases, you may want to install the same list of packages from one server on another server.
|
||||
|
||||
For example, you have installed 15 packages on ServerA, and those packages need to be installed on ServerB, ServerC, etc.
|
||||
|
||||
We could manually install all the packages, but that’s a time-consuming process.
|
||||
|
||||
It can be done for one or two servers, but think about what happens if you have around 10 servers.
|
||||
|
||||
Manual installation doesn’t help you in that case, so what is the solution?
|
||||
|
||||
Don’t worry, we are here to help you out in this situation.
|
||||
|
||||
We have added four methods in this article to overcome this situation.
|
||||
|
||||
I hope this will help you fix your issue. I have tested these commands on CentOS 7 and Ubuntu 18.04 systems.
|
||||
|
||||
I hope this will work with other distributions too; just substitute your distribution’s official package manager command in place of ours.
|
||||
|
||||
Navigate to the following article if you want to **[check list of installed packages in Linux system][1]**.
|
||||
|
||||
For example, if you would like to create a package list on a RHEL-based system, use the following steps. Do the same for other distributions as well.
|
||||
|
||||
```
|
||||
# rpm -qa --last | head -15 | awk '{print $1}' > /tmp/pack1.txt
|
||||
|
||||
# cat /tmp/pack1.txt
|
||||
mariadb-server-5.5.60-1.el7_5.x86_64
|
||||
perl-DBI-1.627-4.el7.x86_64
|
||||
perl-DBD-MySQL-4.023-6.el7.x86_64
|
||||
perl-PlRPC-0.2020-14.el7.noarch
|
||||
perl-Net-Daemon-0.48-5.el7.noarch
|
||||
perl-IO-Compress-2.061-2.el7.noarch
|
||||
perl-Compress-Raw-Zlib-2.061-4.el7.x86_64
|
||||
mariadb-5.5.60-1.el7_5.x86_64
|
||||
perl-Data-Dumper-2.145-3.el7.x86_64
|
||||
perl-Compress-Raw-Bzip2-2.061-3.el7.x86_64
|
||||
httpd-2.4.6-88.el7.centos.x86_64
|
||||
mailcap-2.1.41-2.el7.noarch
|
||||
httpd-tools-2.4.6-88.el7.centos.x86_64
|
||||
apr-util-1.5.2-6.el7.x86_64
|
||||
apr-1.4.8-3.el7_4.1.x86_64
|
||||
```
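The example above is for an RPM-based system. As a hedged equivalent for Debian/Ubuntu systems (dpkg-query is the standard tool here; taking the first 15 entries is just an example, the selection criteria are up to you):

```
# dpkg-query -W -f='${binary:Package}\n' | head -15 > /tmp/pack1.txt
```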
|
||||
|
||||
### Method-1 : How To Install Listed Packages From A File In Linux With Help Of cat Command?
|
||||
|
||||
To achieve this, I would like to go with this first method, as it is very simple and straightforward.
|
||||
|
||||
To do so, just create a file and add the list of packages that you want to install.
|
||||
|
||||
For testing purposes, we are going to add only the three packages below to the following file.
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt
|
||||
|
||||
apache2
|
||||
mariadb-server
|
||||
nano
|
||||
```
|
||||
|
||||
Simply run the following **[apt command][2]** to install all the packages in a single shot from a file in Ubuntu/Debian systems.
|
||||
|
||||
```
|
||||
# apt -y install $(cat /tmp/pack1.txt)
|
||||
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
The following packages were automatically installed and are no longer required:
|
||||
libopts25 sntp
|
||||
Use 'sudo apt autoremove' to remove them.
|
||||
Suggested packages:
|
||||
apache2-doc apache2-suexec-pristine | apache2-suexec-custom spell
|
||||
The following NEW packages will be installed:
|
||||
apache2 mariadb-server nano
|
||||
0 upgraded, 3 newly installed, 0 to remove and 24 not upgraded.
|
||||
Need to get 339 kB of archives.
|
||||
After this operation, 1,377 kB of additional disk space will be used.
|
||||
Get:1 http://in.archive.ubuntu.com/ubuntu bionic-updates/main amd64 apache2 amd64 2.4.29-1ubuntu4.6 [95.1 kB]
|
||||
Get:2 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 nano amd64 2.9.3-2 [231 kB]
|
||||
Get:3 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 mariadb-server all 1:10.1.38-0ubuntu0.18.04.1 [12.9 kB]
|
||||
Fetched 339 kB in 19s (18.0 kB/s)
|
||||
Selecting previously unselected package apache2.
|
||||
(Reading database ... 290926 files and directories currently installed.)
|
||||
Preparing to unpack .../apache2_2.4.29-1ubuntu4.6_amd64.deb ...
|
||||
Unpacking apache2 (2.4.29-1ubuntu4.6) ...
|
||||
Selecting previously unselected package nano.
|
||||
Preparing to unpack .../nano_2.9.3-2_amd64.deb ...
|
||||
Unpacking nano (2.9.3-2) ...
|
||||
Selecting previously unselected package mariadb-server.
|
||||
Preparing to unpack .../mariadb-server_1%3a10.1.38-0ubuntu0.18.04.1_all.deb ...
|
||||
Unpacking mariadb-server (1:10.1.38-0ubuntu0.18.04.1) ...
|
||||
Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
|
||||
Setting up apache2 (2.4.29-1ubuntu4.6) ...
|
||||
Processing triggers for ureadahead (0.100.0-20) ...
|
||||
Processing triggers for install-info (6.5.0.dfsg.1-2) ...
|
||||
Setting up nano (2.9.3-2) ...
|
||||
update-alternatives: using /bin/nano to provide /usr/bin/editor (editor) in auto mode
|
||||
update-alternatives: using /bin/nano to provide /usr/bin/pico (pico) in auto mode
|
||||
Processing triggers for systemd (237-3ubuntu10.20) ...
|
||||
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
|
||||
Setting up mariadb-server (1:10.1.38-0ubuntu0.18.04.1) ...
|
||||
```
|
||||
|
||||
For removal, use the same format with the appropriate option.
|
||||
|
||||
```
|
||||
# apt -y remove $(cat /tmp/pack1.txt)
|
||||
Reading package lists... Done
|
||||
Building dependency tree
|
||||
Reading state information... Done
|
||||
The following packages were automatically installed and are no longer required:
|
||||
apache2-bin apache2-data apache2-utils galera-3 libaio1 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig-inifiles-perl libdbd-mysql-perl libdbi-perl libjemalloc1 liblua5.2-0
|
||||
libmysqlclient20 libopts25 libterm-readkey-perl mariadb-client-10.1 mariadb-client-core-10.1 mariadb-common mariadb-server-10.1 mariadb-server-core-10.1 mysql-common sntp socat
|
||||
Use 'apt autoremove' to remove them.
|
||||
The following packages will be REMOVED:
|
||||
apache2 mariadb-server nano
|
||||
0 upgraded, 0 newly installed, 3 to remove and 24 not upgraded.
|
||||
After this operation, 1,377 kB disk space will be freed.
|
||||
(Reading database ... 291046 files and directories currently installed.)
|
||||
Removing apache2 (2.4.29-1ubuntu4.6) ...
|
||||
Removing mariadb-server (1:10.1.38-0ubuntu0.18.04.1) ...
|
||||
Removing nano (2.9.3-2) ...
|
||||
update-alternatives: using /usr/bin/vim.tiny to provide /usr/bin/editor (editor) in auto mode
|
||||
Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
|
||||
Processing triggers for install-info (6.5.0.dfsg.1-2) ...
|
||||
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
|
||||
```
|
||||
|
||||
Use the following **[yum command][3]** to install listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
|
||||
|
||||
```
|
||||
# yum -y install $(cat /tmp/pack1.txt)
|
||||
```
|
||||
|
||||
Use the following format to uninstall listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
|
||||
|
||||
```
|
||||
# yum -y remove $(cat /tmp/pack1.txt)
|
||||
```
|
||||
|
||||
Use the following **[dnf command][4]** to install listed packages from a file on Fedora system.
|
||||
|
||||
```
|
||||
# dnf -y install $(cat /tmp/pack1.txt)
|
||||
```
|
||||
|
||||
Use the following format to uninstall listed packages from a file on Fedora system.
|
||||
|
||||
```
|
||||
# dnf -y remove $(cat /tmp/pack1.txt)
|
||||
```
|
||||
|
||||
Use the following **[zypper command][5]** to install listed packages from a file on openSUSE system.
|
||||
|
||||
```
|
||||
# zypper -n install $(cat /tmp/pack1.txt)
|
||||
```
|
||||
|
||||
Use the following format to uninstall listed packages from a file on openSUSE system.
|
||||
|
||||
```
|
||||
# zypper -n remove $(cat /tmp/pack1.txt)
|
||||
```
|
||||
|
||||
Use the following **[pacman command][6]** to install listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
|
||||
|
||||
```
|
||||
# pacman -S $(cat /tmp/pack1.txt)
|
||||
```
|
||||
|
||||
Use the following format to uninstall listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
|
||||
|
||||
```
|
||||
# pacman -Rs $(cat /tmp/pack1.txt)
|
||||
```
|
||||
|
||||
### Method-2 : How To Install Listed Packages From A File In Linux With Help Of cat And xargs Command?
|
||||
|
||||
I even prefer to go with this method, because it is a very simple and straightforward method.
|
||||
|
||||
Use the following apt command to install listed packages from a file on Debian based systems such as Debian, Ubuntu and Linux Mint.
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt | xargs apt -y install
|
||||
```
|
||||
|
||||
Use the following apt command to uninstall listed packages from a file on Debian based systems such as Debian, Ubuntu and Linux Mint.
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt | xargs apt -y remove
|
||||
```
|
||||
|
||||
Use the following yum command to install listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt | xargs yum -y install
|
||||
```
|
||||
|
||||
Use the following format to uninstall listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt | xargs yum -y remove
|
||||
```
|
||||
|
||||
Use the following dnf command to install listed packages from a file on Fedora system.
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt | xargs dnf -y install
|
||||
```
|
||||
|
||||
Use the following format to uninstall listed packages from a file on Fedora system.
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt | xargs dnf -y remove
|
||||
```
|
||||
|
||||
Use the following zypper command to install listed packages from a file on openSUSE system.
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt | xargs zypper -n install
|
||||
```
|
||||
|
||||
Use the following format to uninstall listed packages from a file on openSUSE system.
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt | xargs zypper -n remove
|
||||
```
|
||||
|
||||
Use the following pacman command to install listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt | xargs pacman -S
|
||||
```
|
||||
|
||||
Use the following format to uninstall listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
|
||||
|
||||
```
|
||||
# cat /tmp/pack1.txt | xargs pacman -Rs
|
||||
```
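As a side note, GNU xargs can read the list straight from the file with its `-a` option, which avoids the cat pipeline altogether. For example, on Debian-based systems:

```
# xargs -a /tmp/pack1.txt apt -y install
```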
|
||||
|
||||
### Method-3 : How To Install Listed Packages From A File In Linux With Help Of For Loop Command?
|
||||
|
||||
Alternatively, we can use a “For Loop” to achieve this.
|
||||
|
||||
To install packages in bulk, use the below format to run a “For Loop” as a single line.
|
||||
|
||||
```
|
||||
# for pack in `cat /tmp/pack1.txt` ; do apt -y install $pack; done
|
||||
```
|
||||
|
||||
To install packages in bulk with a shell script, use the following “For Loop”.
|
||||
|
||||
```
|
||||
# vi /opt/scripts/bulk-package-install.sh
|
||||
|
||||
#!/bin/bash
|
||||
for pack in `cat /tmp/pack1.txt`
|
||||
do apt -y install $pack
|
||||
done
|
||||
```
|
||||
|
||||
Set executable permission on the `bulk-package-install.sh` file.
|
||||
|
||||
```
|
||||
# chmod +x /opt/scripts/bulk-package-install.sh
|
||||
```
|
||||
|
||||
Finally run the script to achieve this.
|
||||
|
||||
```
|
||||
# sh /opt/scripts/bulk-package-install.sh
|
||||
```
|
||||
|
||||
### Method-4 : How To Install Listed Packages From A File In Linux With Help Of While Loop Command?
|
||||
|
||||
Alternatively, we can use a “While Loop” to achieve this.
|
||||
|
||||
To install packages in bulk, use the below format to run a “While Loop” as a single line.
|
||||
|
||||
```
|
||||
# file="/tmp/pack1.txt"; while read -r pack; do apt -y install $pack; done < "$file"
|
||||
```
|
||||
|
||||
To install packages in bulk with a shell script, use the following "While Loop".
|
||||
|
||||
```
|
||||
# vi /opt/scripts/bulk-package-install.sh
|
||||
|
||||
#!/bin/bash
|
||||
file="/tmp/pack1.txt"
|
||||
while read -r pack
|
||||
do apt -y install $pack
|
||||
done < "$file"
|
||||
```
|
||||
|
||||
Set executable permission on the `bulk-package-install.sh` file.
|
||||
|
||||
```
|
||||
# chmod +x /opt/scripts/bulk-package-install.sh
|
||||
```
|
||||
|
||||
Finally run the script to achieve this.
|
||||
|
||||
```
|
||||
# sh /opt/scripts/bulk-package-install.sh
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-file-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/check-installed-packages-in-rhel-centos-fedora-debian-ubuntu-opensuse-arch-linux/
|
||||
[2]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[3]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[5]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
||||
[6]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
@ -0,0 +1,350 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Navigate Directories Faster In Linux)
|
||||
[#]: via: (https://www.ostechnix.com/navigate-directories-faster-linux/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
How To Navigate Directories Faster In Linux
|
||||
======
|
||||
|
||||
![Navigate Directories Faster In Linux][1]
|
||||
|
||||
Today we are going to learn some command line productivity hacks. As you already know, we use the “cd” command to move between a stack of directories in Unix-like operating systems. In this guide I am going to teach you how to navigate directories faster without having to use the “cd” command often. There could be many ways, but I only know the following five methods right now! I will keep updating this guide whenever I come across any methods or utilities to achieve this task in the days to come.
|
||||
|
||||
### Five Different Methods To Navigate Directories Faster In Linux
|
||||
|
||||
##### Method 1: Using “Pushd”, “Popd” And “Dirs” Commands
|
||||
|
||||
This is the method I use most frequently every day to navigate between a stack of directories. The “Pushd”, “Popd”, and “Dirs” commands come pre-installed in most Linux distributions, so don’t bother with installation. This trio of commands is quite useful when you’re working in a deep directory structure or in scripts; a quick illustration follows the link below. For more details, check our guide in the link given below.
|
||||
|
||||
* **[How To Use Pushd, Popd And Dirs Commands For Faster CLI Navigation][2]**
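For a quick taste of how these three commands interact (the directories here are just examples):

```
$ pushd /etc        # remember the current directory and cd to /etc
$ pushd /var/log    # remember /etc and cd to /var/log
$ dirs -v           # list the numbered directory stack
$ popd              # drop /var/log from the stack and return to /etc
```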
|
||||
|
||||
|
||||
|
||||
##### Method 2: Using “bd” utility
|
||||
|
||||
The “bd” utility also helps you to quickly go back to a specific parent directory without having to repeatedly type “cd ../../..” in your Bash shell.
|
||||
|
||||
Bd is also available in the [**Debian extra**][3] and [**Ubuntu universe**][4] repositories. So, you can install it using “apt-get” package manager in Debian, Ubuntu and other DEB based systems as shown below:
|
||||
|
||||
```
|
||||
$ sudo apt-get update
|
||||
|
||||
$ sudo apt-get install bd
|
||||
```
|
||||
|
||||
For other distributions, you can install it as shown below.
|
||||
|
||||
```
|
||||
$ sudo wget --no-check-certificate -O /usr/local/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd
|
||||
|
||||
$ sudo chmod +rx /usr/local/bin/bd
|
||||
|
||||
$ echo 'alias bd=". bd -si"' >> ~/.bashrc
|
||||
|
||||
$ source ~/.bashrc
|
||||
```
|
||||
|
||||
To enable auto completion, run:
|
||||
|
||||
```
|
||||
$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd
|
||||
|
||||
$ source /etc/bash_completion.d/bd
|
||||
```
|
||||
|
||||
The Bd utility has now been installed. Let us see a few examples to understand how to quickly move through a stack of directories using this tool.
|
||||
|
||||
Create some directories.
|
||||
|
||||
```
|
||||
$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10
|
||||
```
|
||||
|
||||
The above command will create a hierarchy of directories. Let us check [**directory structure**][5] using command:
|
||||
|
||||
```
|
||||
$ tree dir1/
|
||||
dir1/
|
||||
└── dir2
|
||||
└── dir3
|
||||
└── dir4
|
||||
└── dir5
|
||||
└── dir6
|
||||
└── dir7
|
||||
└── dir8
|
||||
└── dir9
|
||||
└── dir10
|
||||
|
||||
9 directories, 0 files
|
||||
```
|
||||
|
||||
Alright, we now have 10 directories. Let us say you’re currently in the 7th directory, i.e. dir7.
|
||||
|
||||
```
|
||||
$ pwd
|
||||
/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7
|
||||
```
|
||||
|
||||
You want to move to dir3. Normally you would type:
|
||||
|
||||
```
|
||||
$ cd /home/sk/dir1/dir2/dir3
|
||||
```
|
||||
|
||||
Right? Yes! But it is not necessary! To go back to dir3, just type:
|
||||
|
||||
```
|
||||
$ bd dir3
|
||||
```
|
||||
|
||||
Now you will be in dir3.
|
||||
|
||||
![][6]
|
||||
|
||||
Navigate Directories Faster In Linux Using “bd” Utility
|
||||
|
||||
Easy, isn’t it? It supports auto completion, so you can just type the partial name of a directory and hit the Tab key to auto complete the full path.
|
||||
|
||||
To check the contents of a specific parent directory, you don’t need to be inside that particular directory. Instead, just type:
|
||||
|
||||
```
|
||||
$ ls `bd dir1`
|
||||
```
|
||||
|
||||
The above command will display the contents of dir1 from your current working directory.
|
||||
|
||||
For more details, check out the following GitHub page.
|
||||
|
||||
* [**bd GitHub repository**][7]
|
||||
|
||||
|
||||
|
||||
##### Method 3: Using “Up” Shell script
|
||||
|
||||
“Up” is a shell script that allows you to move quickly to a parent directory. It works well on many popular shells such as Bash, Fish, and Zsh. Installation is absolutely easy too!
|
||||
|
||||
To install “Up” on **Bash**, run the following commands one by one:
|
||||
|
||||
```
|
||||
$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
|
||||
|
||||
$ echo 'source ~/.config/up/up.sh' >> ~/.bashrc
|
||||
```
|
||||
|
||||
The up script registers the “up” function and some completion functions via your “.bashrc” file.
|
||||
|
||||
Update the changes using command:
|
||||
|
||||
```
|
||||
$ source ~/.bashrc
|
||||
```
|
||||
|
||||
On **zsh** :
|
||||
|
||||
```
|
||||
$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh
|
||||
|
||||
$ echo 'source ~/.config/up/up.sh' >> ~/.zshrc
|
||||
```
|
||||
|
||||
The up script registers the “up” function and some completion functions via your “.zshrc” file.
|
||||
|
||||
Update the changes using command:
|
||||
|
||||
```
|
||||
$ source ~/.zshrc
|
||||
```
|
||||
|
||||
On **fish** :
|
||||
|
||||
```
|
||||
$ curl --create-dirs -o ~/.config/up/up.fish https://raw.githubusercontent.com/shannonmoeller/up/master/up.fish
|
||||
|
||||
$ source ~/.config/up/up.fish
|
||||
```
|
||||
|
||||
The up script registers the “up” function and some completion functions via “funcsave”.
|
||||
|
||||
Now it is time to see some examples.
|
||||
|
||||
Let us create some directories.
|
||||
|
||||
```
|
||||
$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10
|
||||
```
|
||||
|
||||
Let us say you’re in the 7th directory, i.e. dir7.
|
||||
|
||||
```
|
||||
$ pwd
|
||||
/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7
|
||||
```
|
||||
|
||||
You want to move to dir3. Using “cd” command, we can do this by typing the following command:
|
||||
|
||||
```
|
||||
$ cd /home/sk/dir1/dir2/dir3
|
||||
```
|
||||
|
||||
But it is really easy to go back to dir3 using the “up” script:
|
||||
|
||||
```
|
||||
$ up dir3
|
||||
```
|
||||
|
||||
That’s it. Now you will be in dir3. To go one directory up, just type:
|
||||
|
||||
```
|
||||
$ up 1
|
||||
```
|
||||
|
||||
To go back two directories, type:
|
||||
|
||||
```
|
||||
$ up 2
|
||||
```
|
||||
|
||||
It’s that simple. Did I type the full path? Nope. It also supports tab completion, so just type the partial directory name and hit the Tab key to complete the full path.
|
||||
|
||||
For more details, check out the GitHub page.
|
||||
|
||||
* [**Up GitHub Repository**][8]
|
||||
|
||||
|
||||
|
||||
Please be mindful that the “bd” and “up” tools can only help you go backward, i.e. to a parent directory of the current working directory. You can’t move forward. If you want to switch to dir10 from dir5, you can’t! Instead, you need to use the “cd” command to switch to dir10. These two utilities are meant for quickly moving you to a parent directory!
|
||||
|
||||
##### Method 4: Using “Shortcut” tool
|
||||
|
||||
This is yet another handy method to switch between different directories quickly and easily. It is somewhat similar to the [**alias**][9] command. In this method, we create shortcuts to frequently used directories and use the shortcut name to go to the respective directory without having to type the path; a minimal sketch follows the link below. If you’re working in a deep directory structure and a stack of directories, this method will save you a lot of time. You can learn how it works in the guide given below.
|
||||
|
||||
* [**Create Shortcuts To The Frequently Used Directories In Your Shell**][10]
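As a bare-bones sketch of the idea (the alias name and path here are hypothetical):

```
$ echo "alias work='cd ~/ostechnix/Linux'" >> ~/.bashrc
$ source ~/.bashrc
$ work    # jumps straight to ~/ostechnix/Linux from anywhere
```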
|
||||
|
||||
|
||||
|
||||
##### Method 5: Using “CDPATH” Environment variable
|
||||
|
||||
This method doesn’t require any installation. **CDPATH** is an environment variable. It is somewhat similar to the **PATH** variable, which contains many different paths concatenated using **‘:’** (colon). The main difference between the PATH and CDPATH variables is that the PATH variable is usable with all commands, whereas CDPATH works only for the **cd** command.
|
||||
|
||||
I have the following directory structure.
|
||||
|
||||
![][11]
|
||||
|
||||
Directory structure
|
||||
|
||||
As you see, there are four child directories under a parent directory named “ostechnix”.
|
||||
|
||||
Now add this parent directory to CDPATH using command:
|
||||
|
||||
```
|
||||
$ export CDPATH=~/ostechnix
|
||||
```
|
||||
|
||||
You can now instantly cd into the sub-directories of the parent directory (i.e., **~/ostechnix** in our case) from anywhere in the filesystem.
|
||||
|
||||
For instance, currently I am in **/var/mail/** location.
|
||||
|
||||
![][12]
|
||||
|
||||
To cd into the **~/ostechnix/Linux/** directory, we no longer have to use the full path of the directory, like this:
|
||||
|
||||
```
|
||||
$ cd ~/ostechnix/Linux
|
||||
```
|
||||
|
||||
Instead, just mention the name of the sub-directory you want to switch to:
|
||||
|
||||
```
|
||||
$ cd Linux
|
||||
```
|
||||
|
||||
It will instantly cd to the **~/ostechnix/Linux** directory.
|
||||
|
||||
![][13]
|
||||
|
||||
As you can see in the above output, I didn’t use the “cd <full-path-of-subdir>” command. Instead, I just used “cd <subdir-name>”.
|
||||
|
||||
Please note that CDPATH only lets you quickly navigate to the immediate child directories of the parent directories set in the CDPATH variable. It doesn’t help much for navigating a whole stack of nested directories (directories inside sub-directories, of course).
|
||||
|
||||
To find the values of CDPATH variable, run:
|
||||
|
||||
```
|
||||
$ echo $CDPATH
|
||||
```
|
||||
|
||||
Sample output would be:
|
||||
|
||||
```
|
||||
/home/sk/ostechnix
|
||||
```
|
||||
|
||||
**Set multiple values to CDPATH**
|
||||
|
||||
Similar to the PATH variable, we can also set multiple values (more than one directory) for CDPATH, separated by colons (:).
|
||||
|
||||
```
|
||||
$ export CDPATH=.:~/ostechnix:/etc:/var:/opt
|
||||
```
|
||||
|
||||
**Make the changes persistent**
|
||||
|
||||
As you already know, the above command (export) will only keep the values of CDPATH for the current shell session. To permanently set the values of CDPATH, just add them to your **~/.bashrc** or **~/.bash_profile** file.
|
||||
|
||||
```
|
||||
$ vi ~/.bash_profile
|
||||
```
|
||||
|
||||
Add the values:
|
||||
|
||||
```
|
||||
export CDPATH=.:~/ostechnix:/etc:/var:/opt
|
||||
```
|
||||
|
||||
Hit **ESC** key and type **:wq** to save and exit.
|
||||
|
||||
Apply the changes using the command:
|
||||
|
||||
```
|
||||
$ source ~/.bash_profile
|
||||
```
|
||||
|
||||
**Clear CDPATH**
|
||||
|
||||
To clear the values of CDPATH, use **export CDPATH=""**. Or, simply delete the entire line from your **~/.bashrc** or **~/.bash_profile** file.
|
||||
|
||||
In this article, you have learned different ways to navigate a directory stack faster and more easily in Linux. As you can see, it’s not that difficult to browse through a pile of directories quickly. Now stop typing “cd ../../..” endlessly and use these tools instead. If you know of any other tool or method worth trying for navigating directories faster, feel free to let us know in the comment section below. I will review and add them to this guide.
|
||||
|
||||
And, that’s all for now. Hope this helps. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/navigate-directories-faster-linux/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2017/12/Navigate-Directories-Faster-In-Linux-720x340.png
|
||||
[2]: https://www.ostechnix.com/use-pushd-popd-dirs-commands-faster-cli-navigation/
|
||||
[3]: https://tracker.debian.org/pkg/bd
|
||||
[4]: https://launchpad.net/ubuntu/+source/bd
|
||||
[5]: https://www.ostechnix.com/view-directory-tree-structure-linux/
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2017/12/Navigate-Directories-Faster-1.png
|
||||
[7]: https://github.com/vigneshwaranr/bd
|
||||
[8]: https://github.com/shannonmoeller/up
|
||||
[9]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/
|
||||
[10]: https://www.ostechnix.com/create-shortcuts-frequently-used-directories-shell/
|
||||
[11]: http://www.ostechnix.com/wp-content/uploads/2018/12/tree-command-output.png
|
||||
[12]: http://www.ostechnix.com/wp-content/uploads/2018/12/pwd-command.png
|
||||
[13]: http://www.ostechnix.com/wp-content/uploads/2018/12/cdpath.png
|
@ -0,0 +1,156 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Kindd – A Graphical Frontend To dd Command)
|
||||
[#]: via: (https://www.ostechnix.com/kindd-a-graphical-frontend-to-dd-command/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
Kindd – A Graphical Frontend To dd Command
|
||||
======
|
||||
|
||||
![Kindd - A Graphical Frontend To dd Command][1]
|
||||
|
||||
A while ago we learned how to [**create bootable ISO using dd command**][2] in Unix-like systems. Please keep in mind that the dd command is one of the most dangerous and destructive commands. If you’re not sure what you are actually doing, you might accidentally wipe your hard drive in minutes. The dd command just takes bytes from **if** and writes them to **of**. It won’t care what it’s overwriting, and it won’t care if there’s a partition table in the way, or a boot sector, or a home folder, or anything important. It will simply do what it is told to do. If you’re a beginner, it is best to avoid using the dd command directly. Thankfully, there is a simple GUI utility for the dd command. Say hello to **“Kindd”**, a graphical frontend to the dd command. It is a free, open source tool written in **Qt Quick**. This tool can be very helpful for beginners and for those who are not comfortable with the command line in general.
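For context, this is roughly the kind of raw command that Kindd wraps. The ISO file name and the target device below are placeholders; writing to the wrong device will destroy its contents, so double-check the device name (for example, with lsblk) before running anything like this:

```
$ sudo dd if=ubuntu-18.04.iso of=/dev/sdX bs=4M status=progress
```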
|
||||
|
||||
The developer created this tool mainly to provide:
|
||||
|
||||
1. a modern, simple and safe graphical user interface for dd command,
|
||||
2. a graphical way to easily create bootable device without having to use Terminal.
|
||||
|
||||
|
||||
|
||||
### Installing Kindd
|
||||
|
||||
Kindd is available in the [**AUR**][3]. So if you’re an Arch user, install it using any AUR helper tool, for example [**Yay**][4].
|
||||
|
||||
To install Git version, run:
|
||||
|
||||
```
|
||||
$ yay -S kindd-git
|
||||
```
|
||||
|
||||
To install release version, run:
|
||||
|
||||
```
|
||||
$ yay -S kindd
|
||||
```
|
||||
|
||||
After installing, launch Kindd from the Menu or Application launcher.
|
||||
|
||||
For other distributions, you need to manually compile and install it from source as shown below.
|
||||
|
||||
Make sure you have installed the following prerequisites.
|
||||
|
||||
* git
|
||||
* coreutils
|
||||
* polkit
|
||||
* qt5-base
|
||||
* qt5-quickcontrols
|
||||
* qt5-quickcontrols2
|
||||
* qt5-graphicaleffects
|
||||
|
||||
|
||||
|
||||
Once all the prerequisites are installed, clone the Kindd repository:
|
||||
|
||||
```
|
||||
git clone https://github.com/LinArcX/Kindd/
|
||||
```
|
||||
|
||||
Go to the directory where you just cloned Kindd, and compile it:
|
||||
|
||||
```
|
||||
cd Kindd
|
||||
|
||||
qmake
|
||||
|
||||
make
|
||||
```
|
||||
|
||||
Finally run the following command to launch Kindd application:
|
||||
|
||||
```
|
||||
./kindd
|
||||
```
|
||||
|
||||
Kindd uses **pkexec** internally. The pkexec agent is installed by default in most desktop environments. But if you use **i3** (or maybe some other DE), you should install **polkit-gnome** first, and then paste the following line into your i3 config file:
|
||||
|
||||
```
|
||||
exec /usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1 &
|
||||
```
|
||||
|
||||
### Create bootable ISO using Kindd
|
||||
|
||||
To create a bootable USB from an ISO, plug in the USB drive. Then, launch Kindd either from the Menu or Terminal.
|
||||
|
||||
This is how the default Kindd interface looks:
|
||||
|
||||
![][5]
|
||||
|
||||
Kindd interface
|
||||
|
||||
As you can see, the Kindd interface is very simple and self-explanatory. There are just two sections, namely **List Devices**, which displays the list of available devices (HDD and USB) on your system, and **Create Bootable .iso**. You will be in the “Create Bootable .iso” section by default.
|
||||
|
||||
Enter the block size in the first column, select the path of the ISO file in the second column, and choose the correct device (the USB drive path) in the third column. Click the **Convert/Copy** button to start writing the ISO to the USB drive.
|
||||
|
||||
![][6]
|
||||
|
||||
Once the process is completed, you will see a success message.
|
||||
|
||||
![][7]
|
||||
|
||||
Now, unplug the USB drive and boot your system from the USB to check if it really works.
|
||||
|
||||
If you don’t know the actual device name (target path), just click on List Devices and check the USB drive name.
|
||||
|
||||
![][8]
|
||||
|
||||
* * *
|
||||
|
||||
**Related read:**
|
||||
|
||||
* [**Etcher – A Beautiful App To Create Bootable SD Cards Or USB Drives**][9]
|
||||
* [**Bootiso Lets You Safely Create Bootable USB Drive**][10]
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
Kindd is in its early development stage, so there may be bugs. If you find any, please report them on its GitHub page, linked at the end of this guide.
|
||||
|
||||
And, that’s all. Hope this was useful. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
**Resource:**
|
||||
|
||||
* [**Kindd GitHub Repository**][11]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/kindd-a-graphical-frontend-to-dd-command/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/kindd-720x340.png
|
||||
[2]: https://www.ostechnix.com/how-to-create-bootable-usb-drive-using-dd-command/
|
||||
[3]: https://aur.archlinux.org/packages/kindd-git/
|
||||
[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
|
||||
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-interface.png
|
||||
[6]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-1.png
|
||||
[7]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-2.png
|
||||
[8]: http://www.ostechnix.com/wp-content/uploads/2019/04/kindd-3.png
|
||||
[9]: https://www.ostechnix.com/etcher-beauitiful-app-create-bootable-sd-cards-usb-drives/
|
||||
[10]: https://www.ostechnix.com/bootiso-lets-you-safely-create-bootable-usb-drive/
|
||||
[11]: https://github.com/LinArcX/Kindd
|
@ -0,0 +1,202 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Linux Shell Script To Monitor Disk Space Usage And Send Email)
|
||||
[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and-send-email/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
Linux Shell Script To Monitor Disk Space Usage And Send Email
|
||||
======
|
||||
|
||||
There are numerous monitoring tools available on the market to monitor Linux systems, and they can send an email when the system reaches a given threshold.
|
||||
|
||||
They monitor everything, such as CPU utilization, memory utilization, swap utilization, disk space utilization and much more.
|
||||
|
||||
They are suitable for both small and large environments.
|
||||
|
||||
But if you have only a few systems, what would be the best approach?
|
||||
|
||||
Yup, we can write a **[shell script][1]** to achieve this.
|
||||
|
||||
In this tutorial we are going to write a shell script to monitor disk space usage on a system.
|
||||
|
||||
When the system reaches the given threshold, it will trigger a mail to the corresponding email address.
|
||||
|
||||
We have added a total of four shell scripts in this article, and each is used for a different purpose.
|
||||
|
||||
Later, we will come up with other shell scripts to monitor CPU, Memory and Swap utilization.
|
||||
|
||||
Before stepping into that, I would like to clarify one thing I noticed regarding the disk space usage shell script.
|
||||
|
||||
Many users have commented on multiple blogs saying they were getting the following error message when running the disk space usage script.
|
||||
|
||||
```
|
||||
# sh /opt/script/disk-usage-alert-old.sh
|
||||
|
||||
/dev/mapper/vg_2g-lv_root
|
||||
test-script.sh: line 7: [: /dev/mapper/vg_2g-lv_root: integer expression expected
|
||||
/ 9.8G
|
||||
```
|
||||
|
||||
Yes, that’s right. I too faced the same issue when I ran the script the first time. Later, I found the root cause.
|
||||
|
||||
When you use “df -h” or “df -H” in a shell script for disk space alerts on RHEL 5 & RHEL 6 based systems, you will end up with the above error message because the output is not in the proper format; see the output below.
|
||||
|
||||
To overcome this issue, we need to use “df -Ph” (POSIX output format); by default, “df -h” works fine on RHEL 7 based systems. Notice in the output below how the long device names wrap onto their own lines, which breaks the field positions the script expects.
|
||||
|
||||
```
|
||||
# df -h
|
||||
|
||||
Filesystem Size Used Avail Use% Mounted on
|
||||
/dev/mapper/vg_2g-lv_root
|
||||
10G 6.7G 3.4G 67% /
|
||||
tmpfs 7.8G 0 7.8G 0% /dev/shm
|
||||
/dev/sda1 976M 95M 830M 11% /boot
|
||||
/dev/mapper/vg_2g-lv_home
|
||||
5.0G 4.3G 784M 85% /home
|
||||
/dev/mapper/vg_2g-lv_tmp
|
||||
4.8G 14M 4.6G 1% /tmp
|
||||
```
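For comparison, here is the same data roughly as “df -Ph” would present it (reconstructed from the example above; exact column headers may vary slightly between df versions). Each filesystem now stays on a single line, which is what the scripts below rely on:

```
# df -Ph

Filesystem                 Size  Used Avail Capacity Mounted on
/dev/mapper/vg_2g-lv_root   10G  6.7G  3.4G      67% /
tmpfs                      7.8G     0  7.8G       0% /dev/shm
/dev/sda1                  976M   95M  830M      11% /boot
/dev/mapper/vg_2g-lv_home  5.0G  4.3G  784M      85% /home
/dev/mapper/vg_2g-lv_tmp   4.8G   14M  4.6G       1% /tmp
```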
|
||||
|
||||
### Method-1 : Linux Shell Script To Monitor Disk Space Usage And Send Email
|
||||
|
||||
You can use the following shell script to monitor disk space usage on Linux system.
|
||||
|
||||
It will send an email when the system reaches the given threshold limit. In this example, we set the threshold at 60% for testing purposes; you can change this limit as per your requirements.
|
||||
|
||||
It will send multiple mails if more than one filesystem reaches the given threshold, because the script uses a loop.
|
||||
|
||||
Also, replace our email address with yours to receive the alerts.
|
||||
|
||||
```
|
||||
# vi /opt/script/disk-usage-alert.sh
|
||||
|
||||
#!/bin/sh
|
||||
df -Ph | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read output;
|
||||
do
|
||||
echo $output
|
||||
used=$(echo $output | awk '{print $1}' | sed s/%//g)
|
||||
partition=$(echo $output | awk '{print $2}')
|
||||
if [ $used -ge 60 ]; then
|
||||
echo "The partition \"$partition\" on $(hostname) has used $used% at $(date)" | mail -s "Disk Space Alert: $used% Used On $(hostname)" [email protected]
|
||||
fi
|
||||
done
|
||||
```
|
||||
|
||||
**Output:** I got the following two email alerts.
|
||||
|
||||
```
|
||||
The partition "/dev/mapper/vg_2g-lv_home" on 2g.CentOS7 has used 85% at Mon Apr 29 06:16:14 IST 2019
|
||||
|
||||
The partition "/dev/mapper/vg_2g-lv_root" on 2g.CentOS7 has used 67% at Mon Apr 29 06:16:14 IST 2019
|
||||
```
|
||||
|
||||
Finally add a **[cronjob][2]** to automate this. It will run every 10 minutes.
|
||||
|
||||
```
|
||||
# crontab -e
|
||||
*/10 * * * * /bin/bash /opt/script/disk-usage-alert.sh
|
||||
```
|
||||
|
||||
### Method-2 : Linux Shell Script To Monitor Disk Space Usage And Send Email
|
||||
|
||||
Alternatively, you can use the following shell script. We have made a few changes to it compared with the above script.
|
||||
|
||||
```
|
||||
# vi /opt/script/disk-usage-alert-1.sh
|
||||
|
||||
#!/bin/sh
|
||||
df -Ph | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read output;
|
||||
do
|
||||
max=60%
|
||||
echo $output
|
||||
used=$(echo $output | awk '{print $1}')
|
||||
partition=$(echo $output | awk '{print $2}')
|
||||
if [ ${used%?} -ge ${max%?} ]; then
|
||||
echo "The partition \"$partition\" on $(hostname) has used $used at $(date)" | mail -s "Disk Space Alert: $used Used On $(hostname)" [email protected]
|
||||
fi
|
||||
done
|
||||
```
|
||||
|
||||
**Output:** I got the following two email alerts.
|
||||
|
||||
```
|
||||
The partition "/dev/mapper/vg_2g-lv_home" on 2g.CentOS7 has used 85% at Mon Apr 29 06:16:14 IST 2019
|
||||
|
||||
The partition "/dev/mapper/vg_2g-lv_root" on 2g.CentOS7 has used 67% at Mon Apr 29 06:16:14 IST 2019
|
||||
```
|
||||
|
||||
Finally add a **[cronjob][2]** to automate this. It will run every 10 minutes.
|
||||
|
||||
```
|
||||
# crontab -e
|
||||
*/10 * * * * /bin/bash /opt/script/disk-usage-alert-1.sh
|
||||
```
|
||||
|
||||
### Method-3 : Linux Shell Script To Monitor Disk Space Usage And Send Email
|
||||
|
||||
I would like to go with this method, since it works like a charm and you will get a single email for everything.
|
||||
|
||||
This is very simple and straightforward.
|
||||
|
||||
```
|
||||
*/10 * * * * df -Ph | sed s/%//g | awk '{ if($5 > 60) print $0;}' | mail -s "Disk Space Alert On $(hostname)" [email protected]
|
||||
```
|
||||
|
||||
**Output:** I got a single mail for all alerts.
|
||||
|
||||
```
|
||||
Filesystem Size Used Avail Use Mounted on
|
||||
/dev/mapper/vg_2g-lv_root 10G 6.7G 3.4G 67 /
|
||||
/dev/mapper/vg_2g-lv_home 5.0G 4.3G 784M 85 /home
|
||||
```
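Notice that the header line also slipped into the mail above: after “sed” strips the percent signs, awk compares the non-numeric “Use” field as a string, which happens to test true against 60. If you would rather drop the header, a variant along the same lines (a sketch, not extensively tested) could be:

```
*/10 * * * * df -Ph | sed s/%//g | awk 'NR>1 && $5 > 60' | mail -s "Disk Space Alert On $(hostname)" [email protected]
```

If nothing crosses the threshold, some mail implementations will still send an empty message every run, so you may want to guard it further.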
|
||||
|
||||
### Method-4 : Linux Shell Script To Monitor Disk Space Usage Of Particular Partition And Send Email
|
||||
|
||||
If anybody wants to monitor a particular partition, you can use the following shell script. Simply replace our filesystem name with yours.
|
||||
|
||||
```
|
||||
# vi /opt/script/disk-usage-alert-2.sh
|
||||
|
||||
#!/bin/bash
|
||||
used=$(df -Ph | grep '/dev/mapper/vg_2g-lv_dbs' | awk {'print $5'})
|
||||
max=80%
|
||||
if [ ${used%?} -ge ${max%?} ]; then
|
||||
echo "The Mount Point "/DB" on $(hostname) has used $used at $(date)" | mail -s "Disk space alert on $(hostname): $used used" [email protected]
|
||||
fi
|
||||
```
|
||||
|
||||
**Output:** I got the following email alerts.
|
||||
|
||||
```
|
||||
The mount point "/DB" on 2g.CentOS6 has used 82% at Mon Apr 29 06:16:14 IST 2019
|
||||
```
|
||||
|
||||
Finally add a **[cronjob][2]** to automate this. It will run every 10 minutes.
|
||||
|
||||
```
|
||||
# crontab -e
|
||||
*/10 * * * * /bin/bash /opt/script/disk-usage-alert-2.sh
|
||||
```
|
||||
|
||||
**Note:** You will get the email alert some minutes after the threshold is crossed, since the script is scheduled to run every 10 minutes (it is not exactly 10 minutes; it depends on the timing).
|
||||
|
||||
For example, if your system reaches the limit at 8:25 and the next scheduled run is at 8:30, you will get the email alert about 5 minutes later. Hope it’s clear now.
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and-send-email/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/category/shell-script/
|
||||
[2]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/
|
@ -0,0 +1,89 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Ping Multiple Servers And Show The Output In Top-like Text UI)
|
||||
[#]: via: (https://www.ostechnix.com/ping-multiple-servers-and-show-the-output-in-top-like-text-ui/)
|
||||
[#]: author: (sk https://www.ostechnix.com/author/sk/)
|
||||
|
||||
Ping Multiple Servers And Show The Output In Top-like Text UI
|
||||
======
|
||||
|
||||
![Ping Multiple Servers And Show The Output In Top-like Text UI][1]
|
||||
|
||||
A while ago, we wrote about the [**“Fping”**][2] utility, which enables us to ping multiple hosts at once. Unlike the traditional **“Ping”** utility, Fping doesn’t wait for one host’s timeout. It uses a round-robin method, meaning it sends an ICMP echo request to one host, moves on to the next host, and finally displays which hosts are up or down at a glance. Today, we are going to discuss a similar utility named **“Pingtop”**. As the name says, it pings multiple servers at a time and shows the results in a top-like terminal UI. It is a free and open source command line program written in **Python**.
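As a quick refresher, basic Fping usage looks like this (the host names and output are illustrative):

```
$ fping ostechnix.com google.com
ostechnix.com is alive
google.com is alive
```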
|
||||
|
||||
### Install Pingtop
|
||||
|
||||
Pingtop can be installed using Pip, a package manager used to install programs written in Python. Make sure you have installed Python 3.7.x and Pip on your Linux box.
|
||||
|
||||
To install Pip on Linux, refer to the following link.
|
||||
|
||||
* [**How To Manage Python Packages Using Pip**][3]
|
||||
|
||||
|
||||
|
||||
Once Pip is installed, run the following command to install Pingtop:
|
||||
|
||||
```
|
||||
$ pip install pingtop
|
||||
```
|
||||
|
||||
Now let us go ahead and ping multiple systems using Pingtop.
|
||||
|
||||
### Ping Multiple Servers And Show The Output In Top-like Terminal UI
|
||||
|
||||
To ping multiple hosts/systems, run:
|
||||
|
||||
```
|
||||
$ pingtop ostechnix.com google.com facebook.com twitter.com
|
||||
```
|
||||
|
||||
You will now see the result in a nice top-like Terminal UI as shown in the following output.
|
||||
|
||||
![][4]
|
||||
|
||||
Ping multiple servers using Pingtop
|
||||
|
||||
* * *
|
||||
|
||||
**Suggested read:**
|
||||
|
||||
* [**Some Alternatives To ‘top’ Command line Utility You Might Want To Know**][5]
|
||||
|
||||
|
||||
|
||||
* * *
|
||||
|
||||
I personally couldn’t find any use cases for the Pingtop utility at the moment. But I like the idea of showing the ping command’s output in a text user interface. Give it a try and see if it helps.
|
||||
|
||||
And, that’s all for now. More good stuff to come. Stay tuned!
|
||||
|
||||
Cheers!
|
||||
|
||||
**Resource:**
|
||||
|
||||
* [**Pingtop GitHub Repository**][6]
|
||||
|
||||
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.ostechnix.com/ping-multiple-servers-and-show-the-output-in-top-like-text-ui/
|
||||
|
||||
作者:[sk][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.ostechnix.com/author/sk/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/pingtop-720x340.png
|
||||
[2]: https://www.ostechnix.com/ping-multiple-hosts-linux/
|
||||
[3]: https://www.ostechnix.com/manage-python-packages-using-pip/
|
||||
[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/pingtop-1.gif
|
||||
[5]: https://www.ostechnix.com/some-alternatives-to-top-command-line-utility-you-might-want-to-know/
|
||||
[6]: https://github.com/laixintao/pingtop
|
@ -0,0 +1,121 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: ( )
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (apt-clone : Backup Installed Packages And Restore Those On Fresh Ubuntu System)
|
||||
[#]: via: (https://www.2daygeek.com/apt-clone-backup-installed-packages-and-restore-them-on-fresh-ubuntu-system/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
apt-clone : Backup Installed Packages And Restore Those On Fresh Ubuntu System
|
||||
======
|
||||
|
||||
Package installation becomes much easier on Ubuntu/Debian based systems when we use the apt-clone utility.
|
||||
|
||||
apt-clone will work for you if you want to build a few systems with the same set of packages.
|
||||
|
||||
It’s a time-consuming process to build and install the necessary packages manually on each system.
|
||||
|
||||
This can be achieved in many ways, and there are many utilities available in Linux.
|
||||
|
||||
We have already written an article about **[Aptik][1]** in the past.
|
||||
|
||||
It’s one of the utilities that allow Ubuntu users to back up and restore system settings and data.
|
||||
|
||||
### What Is apt-clone?
|
||||
|
||||
[apt-clone][2] allows you to create a backup of all installed packages on your Debian/Ubuntu systems, which can be restored on freshly installed systems (or containers) or into a directory.
|
||||
|
||||
This backup can be restored on multiple systems with same operating system version and architecture.
|
||||
|
||||
### How To Install apt-clone?
|
||||
|
||||
The apt-clone package is available in the official Ubuntu/Debian repository, so use the **[apt Package Manager][3]** or the **[apt-get Package Manager][4]** to install it.
|
||||
|
||||
Install apt-clone package using apt package manager.
|
||||
|
||||
```
|
||||
$ sudo apt install apt-clone
|
||||
```
|
||||
|
||||
Install apt-clone package using apt-get package manager.
|
||||
|
||||
```
|
||||
$ sudo apt-get install apt-clone
|
||||
```
|
||||
|
||||
### How To Backup Installed Packages Using apt-clone?
|
||||
|
||||
Once you have successfully installed the apt-clone package, simply give the location where you want to save the backup file.
|
||||
|
||||
We are going to save the installed packages backup under `/backup` directory.
|
||||
|
||||
The apt-clone utility will save the installed packages list into `apt-clone-state-Ubuntu18.2daygeek.com.tar.gz` file.
|
||||
|
||||
```
|
||||
$ sudo apt-clone clone /backup
|
||||
```
|
||||
|
||||
We can check it by running the ls command.
|
||||
|
||||
```
|
||||
$ ls -lh /backup/
|
||||
total 32K
|
||||
-rw-r--r-- 1 root root 29K Apr 20 19:06 apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
|
||||
```
|
||||
|
||||
Run the following command to view the details of the backup file.
|
||||
|
||||
```
|
||||
$ apt-clone info /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
|
||||
Hostname: Ubuntu18.2daygeek.com
|
||||
Arch: amd64
|
||||
Distro: bionic
|
||||
Meta: libunity-scopes-json-def-desktop, ubuntu-desktop
|
||||
Installed: 1792 pkgs (194 automatic)
|
||||
Date: Sat Apr 20 19:06:43 2019
|
||||
```
|
||||
|
||||
As per the above output, we have a total of 1792 packages in the backup file.
|
||||
|
||||
### How To Restore The Backup Which Was Taken Using apt-clone?
|
||||
|
||||
You can use any remote copy utility to copy the file to the remote server.
|
||||
|
||||
```
|
||||
$ scp /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz Destination-Server:/opt
|
||||
```
|
||||
|
||||
Once you have copied the file, perform the restore using the apt-clone utility.
|
||||
|
||||
Run the following command to restore it.
|
||||
|
||||
```
|
||||
$ sudo apt-clone restore /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
|
||||
```
|
||||
|
||||
Make a note: the restore will override your existing `/etc/apt/sources.list` and will install/remove packages. So be careful.
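Before restoring, it may be worth running the same info subcommand shown earlier against the copied archive on the target machine, to confirm that the distribution and architecture match:

```
$ apt-clone info /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
```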
|
||||
|
||||
If you want to restore all the packages into a folder instead of performing an actual restore, you can do so with the following command.
|
||||
|
||||
```
|
||||
$ sudo apt-clone restore /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz --destination /opt/oldubuntu
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/apt-clone-backup-installed-packages-and-restore-them-on-fresh-ubuntu-system/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/aptik-backup-restore-ppas-installed-apps-users-data/
|
||||
[2]: https://github.com/mvo5/apt-clone
|
||||
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
@ -0,0 +1,125 @@
|
||||
DomTerm 一款为 Linux 打造的终端模拟器
|
||||
======
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah)
|
||||
|
||||
[DomTerm][1] 是一款现代化的终端模拟器,它使用浏览器引擎作为 “GUI 工具包”。这使得一些漂亮的特性成为可能,例如可嵌入的图像和链接、HTML 富文本以及可折叠(显示/隐藏)的命令。除此以外,它看起来、用起来就像一个功能强大的终端模拟器,有着优秀的 xterm 兼容性(包括鼠标处理和 24 位色)和恰当的 “chrome”(菜单)。另外,它还内置支持会话管理和子窗口(如同 `tmux` 和 `GNU Screen` 中那样)、基本输入编辑(如同 `readline` 中那样)以及分页(如同 `less` 中那样)。
|
||||
|
||||
![](https://opensource.com/sites/default/files/u128651/domterm1.png)
|
||||
图 1: DomTerm 终端模拟器。查看大图
|
||||
|
||||
在以下部分我们将看一看这些特性。我们将假设你已经安装好了 `domterm`(如果你需要获取并搭建 DomTerm,请跳到本文最后)。开始之前先让我们概览一下这项技术。
|
||||
|
||||
### 前端 vs. 后端
|
||||
|
||||
DomTerm 大部分是用 JavaScript 写的,它运行在一个浏览器引擎中。这个引擎可以是一个桌面浏览器,例如 Chrome 或者 Firefox(见图三),也可以是一个内嵌的浏览器。使用一个通用的网页浏览器没有问题,但是用户体验却不够好(因为菜单是为通用的网页浏览而不是为了终端模拟器所打造),并且安全模型也会妨碍使用。因此使用内嵌的浏览器更好一些。
|
||||
|
||||
目前以下这些是支持的:
|
||||
|
||||
* `qdomterm`,使用了 Qt 工具包 和 `QtWebEngine`
|
||||
* 一个内嵌的 `[Electron][2]`(见图一)
|
||||
* `atom-domterm` 以 [Atom 文本编辑器][3](同样基于 Electron)包的形式运行 DomTerm,并和 Atom 面板系统集成在一起(见图二)
|
||||
* 一个为 JavaFX 的 `WebEngine` 包装器,这对 Java 编程十分有用(见图四)
|
||||
* 之前前端使用 [Firefox-XUL][4] 作为首选,但是 Mozilla 已经终止了 XUL
|
||||
|
||||
|
||||
|
||||
![在 Atom 编辑器中的 DomTerm 终端面板][6]
|
||||
|
||||
图二:在 Atom 编辑器中的 DomTerm 终端面板。[查看大图][7]
|
||||
|
||||
目前,Electron 前端可能是最佳选择,紧随其后的是 Qt 前端。如果你使用 Atom,`atom-domterm` 也工作得相当不错。
|
||||
|
||||
后端服务器是用 C 写的。它管理着伪终端(PTY)和会话,同时也是一个为前端提供 JavaScript 和其他文件的 HTTP 服务器。如果没有服务器在运行,`domterm` 命令会自行启动一个。前端与服务器之间的通讯通常是用 WebSockets(在服务器端是 [libwebsockets][8])完成的。然而,JavaFX 嵌入时既不用 WebSockets 也不用 DomTerm 服务器,相反,Java 应用直接通过 Java-JavaScript 桥接进行通讯。
|
||||
|
||||
### 一个稳健的可兼容 xterm 的终端模拟器
|
||||
|
||||
DomTerm 看上去、用起来都像一个现代的终端模拟器。它处理鼠标事件、24 位色、Unicode、倍宽字符(CJK)以及输入法。DomTerm 在 [vttest 测试套件][9] 上表现得十分出色。
|
||||
|
||||
不同寻常的特性包括:
|
||||
|
||||
**展示/隐藏按钮(“折叠”):** 小三角(如上图二)是隐藏/展示相应输出的按钮。仅需在[提示文字][11]中添加特定的[转义字符][10]就可以创建按钮。
|
||||
|
||||
**对 `readline` 和类似输入编辑器的鼠标点击支持:** 如果你点击(黄色)输入区域,DomTerm 会向应用发送正确的方向键按键序列。(提示符中的转义序列启用了这一特性,你也可以通过 Alt+点击 强制使用。)
|
||||
|
||||
**用 CSS 样式化终端:** 这通常是在 `~/.domterm/settings.ini` 里完成的,保存时会自动重载。例如在图二中,设置了终端专用的背景色。
|
||||
|
||||
### 一个更好的 REPL 控制台
|
||||
|
||||
一个经典的终端模拟器基于长方形的字符单元格工作。这在 REPL(命令行)上没问题,但是并不理想。这有些对通常在终端模拟器中不常见的 REPL 很有用的 DomTerm 特性:
|
||||
|
||||
**一个能“打印”图片、图表、数学公式或者一组可点击链接的命令:** 应用可以发送包含几乎任何 HTML 的转义序列。(发送的 HTML 会被清理,以移除 JavaScript 和其它危险特性。)
|
||||
|
||||
图三展示了来自 [`gnuplot`][12] 会话的一个片段。Gnuplot(2.1 或更高版本)支持将 `domterm` 作为终端类型。图形输出会被转换成 [SVG 图][13],然后打印到终端上。我的博客文章“[在 DomTerm 上的 Gnuplot 展示][14]”在这方面提供了更多信息。
|
||||
|
||||
![](https://opensource.com/sites/default/files/dt-gnuplot.png)
|
||||
图三: Gnuplot 截图。查看大图
|
||||
|
||||
[Kawa][15] 语言有一个创建并转换[几何图像值][16]的库。如果你将这样的图片值打印到 DomTerm 终端,图片就会被转换成 SVG 形式并嵌入进输出中。
|
||||
|
||||
![](https://opensource.com/sites/default/files/dt-kawa1.png)
|
||||
图四: Kawa 中可计算的几何形状。查看大图
|
||||
|
||||
**输出中的富文本:** 有着 HTML 样式的帮助信息更加便于阅读,看上去也更漂亮。图一的下部面板展示了 `domterm help` 的输出。(如果没在 DomTerm 下运行,输出的则是普通文本。)注意自带分页器的 `PAUSED` 消息。
|
||||
|
||||
**包含可点击链接的错误消息:** DomTerm 识别 `filename:line:column` 这种语法,并将其转化成一个链接,点击后会在可配置的文本编辑器中打开对应文件并定位到该行。(如果你用 `PROMPT_COMMAND` 或类似手段跟踪目录,这同样适用于相对文件名。)
|
||||
|
||||
一个编译器可以侦测到它在 DomTerm 下运行,并直接用转义序列发出文件链接。这比依赖 DomTerm 的模式匹配要稳健得多,因为它可以处理空格和其他字符,并且无需依赖目录跟踪。在图四中,你可以看到来自 [Kawa 编译器][15] 的错误消息。悬停在文件位置上会使其显示下划线,`file:` URL 会出现在 `atom-domterm` 的消息栏(窗口底部)中。(当不使用 `atom-domterm` 时,这样的消息会显示在一个覆盖框中,如图一中的 `PAUSED` 消息所示。)
|
||||
|
||||
点击链接时的动作是可以配置的。默认对于带有 `#position` 后缀的 `file:` 链接的动作是在文本编辑器中打开那个文件。
|
||||
|
||||
**内建的 Lisp 风格优美打印:** 你可以在输出中包含优美打印指令(比如分组),这样换行位置会随着窗口大小调整而重新计算。查看我的文章“[DomTerm 中的动态优美打印][17]”以深入探讨。
|
||||
|
||||
**基本的、带有历史记录的内建行编辑**(像 `GNU readline` 一样):它使用浏览器自带的编辑器,因此有着优秀的鼠标和选择处理机制。你可以在字符模式(大多数输入的字符被直接送往进程)与行模式(控制字符触发编辑动作,回车键把编辑好的行发送给进程,普通字符则被插入)之间切换。默认是自动模式:DomTerm 根据 PTY 处于原始模式还是规范模式,自动在字符模式与行模式间切换。
|
||||
|
||||
**自带的分页器**(类似简化版的 `less`):键盘快捷键控制滚动。在“页模式”中,输出在每个新的满屏(或者每一行,如果你一行行地向前移动)之后暂停;页模式对用户输入有简单的智能处理,因此(如果你愿意)你可以在不妨碍交互式程序的情况下保持它开启。
|
||||
|
||||
### 多路传输和会话
|
||||
|
||||
**标签和平铺:** 你不仅可以创建多个终端标签,也可以平铺它们。你可以使用鼠标或键盘快捷键来创建或者切换面板和标签,它们还可以用鼠标重新排列并调整大小。这是通过 [GoldenLayout][18] JavaScript 库实现的。[图一][19]展示了一个有着两个面板的窗口:上面的面板有两个标签,其中一个运行着 [Midnight Commander][20];下面的面板以 HTML 形式展示了 `domterm help` 的输出。而在 Atom 中,我们则使用其自带的可拖拽面板和标签,你可以在图二中看到这一点。
|
||||
|
||||
**分离或重接会话:** 与 `tmux` 和 GNU `screen` 类似,DomTerm 支持会话管理。你甚至可以给同一个会话接上多个窗口或面板,这支持多用户会话共享和远程连接。(为了安全,同一个服务器的所有会话都需要能够读取一个 Unix 域套接字和一个包含随机密钥的本地文件。当我们有了良好、安全的远程连接后,这个限制将会放宽。)
|
||||
|
||||
**`domterm` 命令** 与 `tmux` 和 GNU `screen` 类似的地方还在于,它提供了多个用于控制服务器、打开单个或多个会话的选项。主要的差别在于:如果它没在 DomTerm 下运行,`domterm` 命令会创建一个新的顶层窗口,而不是在现有的终端中运行。
|
||||
|
||||
与 `tmux` 和 `git` 类似,`domterm` 命令有许多子命令。一些子命令创建窗口或者会话,另一些(例如“打印”一张图片)则仅在现有的 DomTerm 会话中起作用。
|
||||
|
||||
命令 `domterm browse` 打开一个窗口或者面板以浏览一个指定的 URL,例如浏览文档的时候。
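例如(URL 仅作示意):

```
$ domterm browse http://domterm.org/
```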
|
||||
|
||||
### 获取并安装 DomTerm
|
||||
|
||||
DomTerm 可以从其 [Github 仓库][21]获取。目前没有预先构建好的包,但是有[详细的构建指导][22]。所有的依赖在 Fedora 27 上都可以获取,这使得它在 Fedora 27 上搭建起来特别容易。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/1/introduction-domterm-terminal-emulator
|
||||
|
||||
作者:[Per Bothner][a]
|
||||
译者:[tomjlw](https://github.com/tomjlw)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/perbothner
|
||||
[1]:http://domterm.org/
|
||||
[2]:https://electronjs.org/
|
||||
[3]:https://atom.io/
|
||||
[4]:https://en.wikipedia.org/wiki/XUL
|
||||
[5]:/file/385346
|
||||
[6]:https://opensource.com/sites/default/files/images/dt-atom1.png (DomTerm terminal panes in Atom editor)
|
||||
[7]:https://opensource.com/sites/default/files/images/dt-atom1.png
|
||||
[8]:https://libwebsockets.org/
|
||||
[9]:http://invisible-island.net/vttest/
|
||||
[10]:http://domterm.org/Wire-byte-protocol.html
|
||||
[11]:http://domterm.org/Shell-prompts.html
|
||||
[12]:http://www.gnuplot.info/
|
||||
[13]:https://developer.mozilla.org/en-US/docs/Web/SVG
|
||||
[14]:http://per.bothner.com/blog/2016/gnuplot-in-domterm/
|
||||
[15]:https://www.gnu.org/software/kawa/
|
||||
[16]:https://www.gnu.org/software/kawa/Composable-pictures.html
|
||||
[17]:http://per.bothner.com/blog/2017/dynamic-prettyprinting/
|
||||
[18]:https://golden-layout.com/
|
||||
[19]:https://opensource.com/sites/default/files/u128651/domterm1.png
|
||||
[20]:https://midnight-commander.org/
|
||||
[21]:https://github.com/PerBothner/DomTerm
|
||||
[22]:http://domterm.org/Downloading-and-building.html
|
||||
|
@ -0,0 +1,146 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (geekpi)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How to use autofs to mount NFS shares)
|
||||
[#]: via: (https://opensource.com/article/18/6/using-autofs-mount-nfs-shares)
|
||||
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
|
||||
|
||||
如何使用 autofs 挂载 NFS 共享
|
||||
======
|
||||
|
||||
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
|
||||
|
||||
大多数 Linux 文件系统在引导时挂载,并在系统运行时保持挂载状态。对于已在 `fstab` 中配置的任何远程文件系统也是如此。但是,有时你可能希望仅按需挂载远程文件系统 - 例如,通过减少网络带宽使用来提高性能,或出于安全原因隐藏或混淆某些目录。[autofs][1] 软件包提供此功能。在本文中,我将介绍如何配置基本的自动挂载。
|
||||
|
||||
首先做点假设:假设有台 NFS 服务器 `tree.mydatacenter.net` 已经启动并运行。另外假设一个名为 `ourfiles` 的数据目录还有供 Carl 和 Sarah 使用的用户目录,它们都由服务器共享。
|
||||
|
||||
一些最佳实践可以使这一切工作得更好:服务器上的用户和任何客户端工作站上的帐号应有相同的用户 ID;此外,你的工作站和服务器应有相同的域名。检查相关配置文件可以确认这一点。
|
||||
|
||||
```
|
||||
alan@workstation1:~$ sudo getent passwd carl sarah
|
||||
|
||||
[sudo] password for alan:
|
||||
|
||||
carl:x:1020:1020:Carl,,,:/home/carl:/bin/bash
|
||||
|
||||
sarah:x:1021:1021:Sarah,,,:/home/sarah:/bin/bash
|
||||
|
||||
|
||||
|
||||
alan@workstation1:~$ sudo getent hosts
|
||||
|
||||
127.0.0.1 localhost
|
||||
|
||||
127.0.1.1 workstation1.mydatacenter.net workstation1
|
||||
|
||||
10.10.1.5 tree.mydatacenter.net tree
|
||||
|
||||
```
|
||||
|
||||
如你所见,客户端工作站和 NFS 服务器都在 `hosts` 中配置。我假设一个基本的家庭甚至小型办公室网络,可能缺乏适合的内部域名服务(即 DNS)。
|
||||
|
||||
### 安装软件包
|
||||
|
||||
你只需要安装两个软件包:用于 NFS 客户端的 `nfs-common` 和提供自动挂载的 `autofs`。
|
||||
```
|
||||
alan@workstation1:~$ sudo apt-get install nfs-common autofs
|
||||
|
||||
```
|
||||
|
||||
你可以验证 autofs 的配置文件是否已放在 `/etc` 目录中:
|
||||
```
|
||||
alan@workstation1:~$ cd /etc; ll auto*
|
||||
|
||||
-rw-r--r-- 1 root root 12596 Nov 19 2015 autofs.conf
|
||||
|
||||
-rw-r--r-- 1 root root 857 Mar 10 2017 auto.master
|
||||
|
||||
-rw-r--r-- 1 root root 708 Jul 6 2017 auto.misc
|
||||
|
||||
-rwxr-xr-x 1 root root 1039 Nov 19 2015 auto.net*
|
||||
|
||||
-rwxr-xr-x 1 root root 2191 Nov 19 2015 auto.smb*
|
||||
|
||||
alan@workstation1:/etc$
|
||||
|
||||
```
|
||||
|
||||
### 配置 autofs
|
||||
|
||||
现在你需要编辑其中几个文件并添加 `auto.home` 文件。首先,将以下两行添加到文件 `auto.master` 中:
|
||||
```
|
||||
/mnt/tree /etc/auto.misc
|
||||
|
||||
/home/tree /etc/auto.home
|
||||
|
||||
```
|
||||
|
||||
每行以挂载 NFS 共享的目录开头。继续创建这些目录:
|
||||
```
|
||||
alan@workstation1:/etc$ sudo mkdir /mnt/tree /home/tree
|
||||
|
||||
```
|
||||
|
||||
接下来,将以下行添加到文件 `auto.misc`:
|
||||
```
|
||||
ourfiles -fstype=nfs tree:/share/ourfiles
|
||||
|
||||
```
|
||||
|
||||
该行表示 autofs 将挂载 `auto.master` 文件中匹配 `auto.misc` 的 `ourfiles` 共享。如上所示,这些文件将在 `/mnt/tree/ourfiles` 目录中。
|
||||
|
||||
第三步,使用以下行创建文件 `auto.home`:
|
||||
```
|
||||
* -fstype=nfs tree:/home/&
|
||||
|
||||
```
|
||||
|
||||
该行表示 autofs 将挂载 `auto.master` 文件中匹配 `auto.home` 的用户共享。在这种情况下,Carl 和 Sarah 的文件将分别位于目录 `/home/tree/carl` 和 `/home/tree/sarah` 中。星号(即通配符)使每个用户的共享可以在登录时自动挂载;`&` 符号同样是通配符,表示服务器端对应的用户目录。他们的主目录会相应地根据 `passwd` 文件进行映射。如果你更喜欢本地主目录,则无需执行此操作;相反,用户可以将其用作特定文件的简单远程存储。
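举例来说(目录名沿用上文的假设),通配符规则的效果大致如下:

```
$ cd /home/tree/carl    # 自动挂载 tree:/home/carl
$ cd /home/tree/sarah   # 自动挂载 tree:/home/sarah
```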
|
||||
|
||||
最后,重启 `autofs` 守护进程,以便识别并加载这些配置的更改。
|
||||
```
|
||||
alan@workstation1:/etc$ sudo service autofs restart
|
||||
|
||||
```
|
||||
|
||||
### 测试 autofs
|
||||
|
||||
如果进入 `auto.master` 文件中列出的某个目录并运行 `ls` 命令,那么不会立即看到任何内容。例如,`cd` 到目录 `/mnt/tree`。此时 `ls` 的输出不会显示任何内容,但在运行 `cd ourfiles` 之后,`ourfiles` 共享目录会被自动挂载,`cd` 命令也会成功执行,你将进入新挂载的目录中。
|
||||
```
|
||||
carl@workstation1:~$ cd /mnt/tree
|
||||
|
||||
carl@workstation1:/mnt/tree$ ls
|
||||
|
||||
carl@workstation1:/mnt/tree$ cd ourfiles
|
||||
|
||||
carl@workstation1:/mnt/tree/ourfiles$
|
||||
|
||||
```
|
||||
|
||||
为了进一步确认其正常工作,`mount` 命令会显示已挂载共享的细节。
|
||||
```
|
||||
carl@workstation1:~$ mount
|
||||
|
||||
tree:/mnt/share/ourfiles on /mnt/tree/ourfiles type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.1.22,local_lock=none,addr=10.10.1.5)
|
||||
|
||||
```
|
||||
|
||||
对于 Carl 和 Sarah,`/home/tree` 目录的工作方式相同。
|
||||
|
||||
我发现在我的文件管理器中添加这些目录的书签很有用,可以用来快速访问。
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares
|
||||
|
||||
作者:[Alan Formy-Duval][a]
|
||||
选题:[lujun9972](https://github.com/lujun9972)
|
||||
译者:[geekpi](https://github.com/geekpi)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]:https://opensource.com/users/alanfdoss
|
||||
[1]:https://wiki.archlinux.org/index.php/autofs
|
@ -7,84 +7,84 @@
|
||||
[#]: via: (https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
How To Install And Configure Chrony As NTP Client?
|
||||
如何正确安装和配置Chrony作为NTP客户端?
|
||||
======
|
||||
|
||||
The NTP server and NTP client allow us to sync the clock across the network.
|
||||
NTP服务器和NTP客户端允许我们通过网络来同步时钟。
|
||||
|
||||
We had written an article about **[NTP server and NTP client installation and configuration][1]** in the past.
|
||||
在过去,我们已经撰写了一篇关于 **[NTP服务器和NTP客户端的安装与配置][1]** 的文章。
|
||||
|
||||
If you would like to check these, navigate to the above URL.
|
||||
如果你想看这些内容,点击上述的URL访问。
|
||||
|
||||
### What Is Chrony Client?
|
||||
### 什么是Chrony客户端?
|
||||
|
||||
Chrony is replacement of NTP client.
|
||||
Chrony是NTP客户端的替代品。
|
||||
|
||||
It can synchronize the system clock faster with better time accuracy and it can be particularly useful for the systems which are not online all the time.
|
||||
它能以更精确的时间和更快的速度同步时钟,并且它对于那些不是全天候在线的系统非常有用。
|
||||
|
||||
chronyd is smaller, it uses less memory and it wakes up the CPU only when necessary, which is better for power saving.
|
||||
chronyd更小、更省电,它占用更少的内存且仅当需要时它才唤醒CPU。
|
||||
|
||||
It can perform well even when the network is congested for longer periods of time.
|
||||
即使网络拥塞较长时间,它也能很好地运行。
|
||||
|
||||
It supports hardware timestamping on Linux, which allows extremely accurate synchronization on local networks.
|
||||
它支持Linux上的硬件时间戳,允许在本地网络进行极其准确的同步。
|
||||
|
||||
It offers following two services.
|
||||
它提供下列两个服务。
|
||||
|
||||
* **`chronyc:`** Command line interface for chrony.
|
||||
* **`chronyd:`** Chrony daemon service.
|
||||
* **`chronyc:`** Chrony的命令行接口。
|
||||
* **`chronyd:`** Chrony守护进程服务。
|
||||
|
||||
|
||||
|
||||
### How To Install And Configure Chrony In Linux?
|
||||
### 如何在Linux上安装和配置Chrony?
|
||||
|
||||
Since the package is available in most of the distributions official repository. So, use the package manager to install it.
|
||||
由于安装包在大多数发行版的官方仓库中可用,因此直接使用包管理器去安装它。
|
||||
|
||||
For **`Fedora`** system, use **[DNF Command][2]** to install chrony.
|
||||
对于 **`Fedora`** 系统, 使用 **[DNF 命令][2]** 去安装chrony.
|
||||
|
||||
```
|
||||
$ sudo dnf install chrony
|
||||
```
|
||||
|
||||
For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install chrony.
|
||||
对于 **`Debian/Ubuntu`** 系统, 使用 **[APT-GET 命令][3]** 或者 **[APT 命令][4]** 去安装chrony.
|
||||
|
||||
```
|
||||
$ sudo apt install chrony
|
||||
```
|
||||
|
||||
For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install chrony.
|
||||
对基于 **`Arch Linux`** 的系统, 使用 **[Pacman 命令][5]** 去安装chrony.
|
||||
|
||||
```
|
||||
$ sudo pacman -S chrony
|
||||
```
|
||||
|
||||
For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install chrony.
|
||||
对于 **`RHEL/CentOS`** 系统, 使用 **[YUM 命令][6]** 去安装chrony.
|
||||
|
||||
```
|
||||
$ sudo yum install chrony
|
||||
```
|
||||
|
||||
For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install chrony.
|
||||
对于**`openSUSE Leap`** 系统, 使用 **[Zypper 命令][7]** 去安装chrony.
|
||||
|
||||
```
|
||||
$ sudo zypper install chrony
|
||||
```
|
||||
|
||||
In this article, we are going to use the following setup to test this.
|
||||
在这篇文章中,我们将使用下列设置去测试。
|
||||
|
||||
* **`NTP Server:`** HostName: CentOS7.2daygeek.com, IP:192.168.1.5, OS:CentOS 7
|
||||
* **`Chrony Client:`** HostName: Ubuntu18.2daygeek.com, IP:192.168.1.3, OS:Ubuntu 18.04
|
||||
* **`NTP服务器:`** 主机名: CentOS7.2daygeek.com, IP:192.168.1.5, OS:CentOS 7
|
||||
* **`Chrony客户端:`** 主机名: Ubuntu18.2daygeek.com, IP:192.168.1.3, OS:Ubuntu 18.04
|
||||
|
||||
|
||||
导航到 **[在Linux上安装和配置NTP服务器][1]** 的URL。
|
||||
|
||||
Navigate to the following URL for **[NTP server installation and configuration in Linux][1]**.
|
||||
|
||||
I have installed and configured the NTP server on `CentOS7.2daygeek.com` so, append the same into all the client machines. Also, include the other required information on it.
|
||||
我已经在`CentOS7.2daygeek.com`这台主机上安装和配置了NTP服务器,因此,把这台服务器添加到所有客户端机器的配置中,并在其中加入其他所需的信息。
|
||||
|
||||
The `chrony.conf` file will be placed in the different locations based on your distribution.
|
||||
`chrony.conf`文件的位置根据你的发行版不同而不同。
|
||||
|
||||
For RHEL based systems, it’s located at `/etc/chrony.conf`.
|
||||
对基于RHEL的系统,它位于`/etc/chrony.conf`。
|
||||
|
||||
For Debian based systems, it’s located at `/etc/chrony/chrony.conf`.
|
||||
对基于Debian的系统,它位于`/etc/chrony/chrony.conf`。
|
||||
|
||||
```
|
||||
# vi /etc/chrony/chrony.conf
|
||||
@ -98,27 +98,28 @@ makestep 1 3
|
||||
cmdallow 192.168.1.0/24
|
||||
```
|
||||
|
||||
Bounce the Chrony service once you update the configuration.
|
||||
更新配置后需要重启Chrony服务。
|
||||
|
||||
For sysvinit systems. For RHEL based system we need to run `chronyd` instead of chrony.
|
||||
对于sysvinit系统。基于RHEL的系统需要去运行`chronyd`而不是chrony。
|
||||
|
||||
```
|
||||
# service chrony restart
|
||||
# service chronyd restart
|
||||
|
||||
# chkconfig chrony on
|
||||
# chkconfig chronyd on
|
||||
```
|
||||
|
||||
For systemctl systems. For RHEL based system we need to run `chronyd` instead of chrony.
|
||||
对于systemctl系统。 基于RHEL的系统需要去运行`chronyd`而不是chrony。
|
||||
|
||||
```
|
||||
# systemctl restart chrony
|
||||
# systemctl restart chronyd
|
||||
|
||||
# systemctl enable chrony
|
||||
# systemctl enable chronyd
|
||||
```
|
||||
|
||||
Use the following commands like tacking, sources and sourcestats to check chrony synchronization details.
|
||||
使用像 tracking、sources 和 sourcestats 这样的命令去检查 chrony 的同步细节。
|
||||
|
||||
去检查chrony的跟踪状态。
|
||||
|
||||
To check chrony tracking status.
|
||||
|
||||
```
|
||||
# chronyc tracking
|
||||
@ -137,7 +138,7 @@ Update interval : 2.0 seconds
|
||||
Leap status : Normal
|
||||
```
|
||||
|
||||
Run the sources command to displays information about the current time sources.
|
||||
运行sources命令去显示当前时间源的信息。
|
||||
|
||||
```
|
||||
# chronyc sources
|
||||
@ -147,7 +148,7 @@ MS Name/IP address Stratum Poll Reach LastRx Last sample
|
||||
^* CentOS7.2daygeek.com 2 6 17 62 +36us[+1230us] +/- 1111ms
|
||||
```
|
||||
|
||||
The sourcestats command displays information about the drift rate and offset estimation process for each of the sources currently being examined by chronyd.
|
||||
sourcestats命令显示有关chronyd当前正在检查的每个源的漂移率和偏移估计过程的信息。
|
||||
|
||||
```
|
||||
# chronyc sourcestats
|
||||
@ -157,7 +158,7 @@ Name/IP Address NP NR Span Frequency Freq Skew Offset Std Dev
|
||||
CentOS7.2daygeek.com 5 3 71 -97.314 78.754 -469us 441us
|
||||
```
|
||||
|
||||
When chronyd is configured as an NTP client or peer, you can have the transmit and receive timestamping modes and the interleaved mode reported for each NTP source by the chronyc ntpdata command.
|
||||
当 chronyd 配置为 NTP 客户端或对等端时,你可以通过 chronyc ntpdata 命令报告每个 NTP 源的发送和接收时间戳模式以及交错模式。
|
||||
|
||||
```
|
||||
# chronyc ntpdata
|
||||
@ -190,13 +191,14 @@ Total RX : 46
|
||||
Total valid RX : 46
|
||||
```
|
||||
|
||||
Finally run the `date` command.
|
||||
最后运行`date`命令。
|
||||
|
||||
```
|
||||
# date
|
||||
Thu Mar 28 03:08:11 CDT 2019
|
||||
```
|
||||
|
||||
为了立即步进(step)系统时钟,绕过任何正在进行的渐变(slew)调整,请以 root 身份发出以下命令(手动调整系统时钟)。
|
||||
To step the system clock immediately, bypassing any adjustments in progress by slewing, issue the following command as root (To adjust the system clock manually).
|
||||
|
||||
```
|
||||
@ -209,7 +211,7 @@ via: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[译者ID](https://github.com/译者ID)
|
||||
译者:[arrowfeng](https://github.com/arrowfeng)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
@ -1,262 +0,0 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (arrowfeng)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (How To Install And Configure NTP Server And NTP Client In Linux?)
|
||||
[#]: via: (https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/)
|
||||
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
|
||||
|
||||
如何在Linux上安装、配置NTP服务和NTP客户端?
|
||||
======
|
||||
|
||||
你也许听说过这个词很多次或者你可能已经在使用它了。
|
||||
但是,在这篇文章中我将会清晰的告诉你NTP服务和NTP客户端的安装。
|
||||
|
||||
之后我们将会了解 **[Chrony NTP 客户端的安装][1]**。
|
||||
|
||||
|
||||
### 什么是NTP服务?
|
||||
|
||||
NTP 表示为网络时间协议。
|
||||
|
||||
它是通过网络在电脑系统之间进行时钟同步的网络协议。
|
||||
|
||||
另一方面,我可以说,它可以让那些通过NTP或者Chrony客户端连接到NTP服务的系统保持时间上的一致(它能保持一个精确的时间)。
|
||||
|
||||
|
||||
NTP在公共互联网上通常能够保持时间延迟在几十毫秒以内的精度,并在理想条件下,它能在局域网下达到优于一毫秒的延迟精度。
|
||||
|
||||
它使用用户数据报协议(UDP)在端口123上发送和接受时间戳。它是C/S架构的应用程序。
|
||||
|
||||
|
||||
|
||||
### 什么是NTP客户端?
|
||||
|
||||
NTP客户端将其时钟与网络时间服务器同步。
|
||||
|
||||
### 什么是Chrony客户端?
|
||||
Chrony是NTP客户端的替代品。它能以更精确的时间更快的同步系统时钟,并且它对于那些不总是在线的系统很有用。
|
||||
|
||||
### 为什么我们需要NTP服务?
|
||||
|
||||
为了使你组织中的所有服务器与基于时间的作业保持精确的时间同步。
|
||||
|
||||
为了说明这点,我将告诉你一个场景。比如说,我们有两个服务器(服务器1和服务器2)。服务器1通常在10:55完成离线作业,然后服务器2在11:00需要基于服务器1完成的作业报告去运行其他作业。
|
||||
|
||||
如果两个服务器正在使用不同的时间(如果服务器2时间比服务器1提前,服务器1的时间就落后于服务器2),然后我们就不能去执行这个作业。为了达到时间一致,我们应该安装NTP。
|
||||
希望上述能清除你对于NTP的疑惑。
|
||||
|
||||
|
||||
在这篇文章中,我们将使用下列设置去测试。
|
||||
|
||||
* **`NTP Server:`** HostName: CentOS7.2daygeek.com, IP:192.168.1.8, OS:CentOS 7
|
||||
* **`NTP Client:`** HostName: Ubuntu18.2daygeek.com, IP:192.168.1.5, OS:Ubuntu 18.04
|
||||
|
||||
|
||||
|
||||
### NTP服务端: 如何在Linux上安装NTP?
|
||||
|
||||
因为它是c/s架构,所以NTP服务端和客户端的安装包没有什么不同。在发行版的官方仓库中都有NTP安装包,因此可以使用发行版的包管理器安装它。
|
||||
|
||||
对于 **`Fedora`** 系统, 使用 **[DNF 命令][2]** 去安装ntp.
|
||||
|
||||
```
|
||||
$ sudo dnf install ntp
|
||||
```
|
||||
|
||||
对于 **`Debian/Ubuntu`** 系统, 使用 **[APT-GET 命令][3]** 或者 **[APT 命令][4]** 去安装 ntp.
|
||||
|
||||
```
|
||||
$
|
||||
```
|
||||
|
||||
对基于 **`Arch Linux`** 的系统, 使用 **[Pacman 命令][5]** 去安装 ntp.
|
||||
|
||||
```
|
||||
$ sudo pacman -S ntp
|
||||
```
|
||||
|
||||
对 **`RHEL/CentOS`** 系统, 使用 **[YUM 命令][6]** 去安装 ntp.
|
||||
|
||||
```
|
||||
$ sudo yum install ntp
|
||||
```
|
||||
|
||||
对于 **`openSUSE Leap`** 系统, 使用 **[Zypper 命令][7]** 去安装 ntp.
|
||||
|
||||
```
|
||||
$ sudo zypper install ntp
|
||||
```
|
||||
|
||||
### 如何在Linux上配置NTP服务?
|
||||
|
||||
安装NTP软件包后,请确保在服务器端的`/etc/ntp.conf`文件中,必须取消以下配置的注释。
|
||||
|
||||
默认情况下,NTP服务器配置依赖于`X.distribution_name.pool.ntp.org`。 如果有必要,可以使用默认配置,也可以访问<https://www.ntppool.org/zone/@>站点,根据你所在的位置(特定国家/地区)进行更改。
|
||||
|
||||
比如说如果你在印度,然后你的NTP服务器将是`0.in.pool.ntp.org`,并且这个地址适用于大多数国家。
|
||||
|
||||
```
|
||||
# vi /etc/ntp.conf
|
||||
|
||||
restrict default kod nomodify notrap nopeer noquery
|
||||
restrict -6 default kod nomodify notrap nopeer noquery
|
||||
restrict 127.0.0.1
|
||||
restrict -6 ::1
|
||||
server 0.asia.pool.ntp.org
|
||||
server 1.asia.pool.ntp.org
|
||||
server 2.asia.pool.ntp.org
|
||||
server 3.asia.pool.ntp.org
|
||||
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
|
||||
driftfile /var/lib/ntp/drift
|
||||
keys /etc/ntp/keys
|
||||
```
|
||||
|
||||
我们仅允许`192.168.1.0/24`子网的客户端访问NTP服务器。
|
||||
|
||||
由于默认情况下基于RHEL7的发行版的防火墙是打开的,因此允许ntp服务通过。
|
||||
|
||||
```
|
||||
# firewall-cmd --add-service=ntp --permanent
|
||||
# firewall-cmd --reload
|
||||
```
|
||||
|
||||
更新配置后重启服务。
|
||||
|
||||
对于基于Debian的sysvinit系统,我们需要去运行`ntp`而不是`ntpd`。
|
||||
|
||||
```
|
||||
# service ntpd restart
|
||||
|
||||
# chkconfig ntpd on
|
||||
```
|
||||
对于基于Debian的systemctl系统,我们需要去运行`ntp`和`ntpd`。
|
||||
|
||||
```
|
||||
# systemctl restart ntpd
|
||||
|
||||
# systemctl enable ntpd
|
||||
```
|
||||
|
||||
### NTP客户端:如何在Linux上安装NTP客户端?
|
||||
|
||||
正如我在这篇文章中前面所说的。NTP服务端和客户端的安装包没有什么不同。因此在客户端上也安装同样的软件包。
|
||||
|
||||
对于 **`Fedora`** 系统, 使用 **[DNF 命令][2]** 去安装ntp.
|
||||
|
||||
```
|
||||
$ sudo dnf install ntp
|
||||
```
|
||||
|
||||
对于 **`Debian/Ubuntu`** 系统, 使用 **[APT-GET 命令][3]** 或者 **[APT 命令][4]** 去安装 ntp.
|
||||
|
||||
```
|
||||
$
|
||||
```
|
||||
|
||||
对基于 **`Arch Linux`** 的系统, 使用 **[Pacman 命令][5]** 去安装 ntp.
|
||||
|
||||
```
|
||||
$ sudo pacman -S ntp
|
||||
```
|
||||
|
||||
对 **`RHEL/CentOS`** 系统, 使用 **[YUM 命令][6]** 去安装 ntp.
|
||||
|
||||
```
|
||||
$ sudo yum install ntp
|
||||
```
|
||||
|
||||
对于 **`openSUSE Leap`** 系统, 使用 **[Zypper 命令][7]** 去安装 ntp.
|
||||
|
||||
```
|
||||
$ sudo zypper install ntp
|
||||
```
|
||||
|
||||
我已经在`CentOS7.2daygeek.com`这台主机上安装和配置了NTP服务器,因此将其附加到所有的客户端机器上。
|
||||
|
||||
```
|
||||
# vi /etc/ntp.conf
|
||||
|
||||
restrict default kod nomodify notrap nopeer noquery
|
||||
restrict -6 default kod nomodify notrap nopeer noquery
|
||||
restrict 127.0.0.1
|
||||
restrict -6 ::1
|
||||
server CentOS7.2daygeek.com prefer iburst
|
||||
driftfile /var/lib/ntp/drift
|
||||
keys /etc/ntp/keys
|
||||
```
|
||||
|
||||
更新配置后重启服务。
|
||||
|
||||
对于基于Debian的sysvinit系统,我们需要去运行`ntp`而不是`ntpd`。
|
||||
|
||||
```
|
||||
# service ntpd restart
|
||||
|
||||
# chkconfig ntpd on
|
||||
```
|
||||
对于基于Debian的systemctl系统,我们需要去运行`ntp`和`ntpd`。
|
||||
|
||||
```
|
||||
# systemctl restart ntpd
|
||||
|
||||
# systemctl enable ntpd
|
||||
```
|
||||
|
||||
重新启动NTP服务后等待几分钟以便从NTP服务器获取同步的时间。
|
||||
|
||||
在Linux上运行下列命令去验证NTP服务的同步状态。
|
||||
|
||||
```
|
||||
# ntpq –p
|
||||
或
|
||||
# ntpq -pn
|
||||
|
||||
remote refid st t when poll reach delay offset jitter
|
||||
==============================================================================
|
||||
*CentOS7.2daygee 133.243.238.163 2 u 14 64 37 0.686 0.151 16.432
|
||||
```
|
||||
|
||||
运行下列命令去得到ntpd的当前状态。
|
||||
|
||||
```
|
||||
# ntpstat
|
||||
synchronised to NTP server (192.168.1.8) at stratum 3
|
||||
time correct to within 508 ms
|
||||
polling server every 64 s
|
||||
```
|
||||
|
||||
最后运行`date`命令。
|
||||
|
||||
```
|
||||
# date
|
||||
Tue Mar 26 23:17:05 CDT 2019
|
||||
```
|
||||
|
||||
如果你观察到NTP中输出的偏移很大。运行下列命令从NTP服务器手动同步时钟。当你执行下列命令的时候,确保你的NTP客户端应该为未激活状态。
|
||||
|
||||
```
|
||||
# ntpdate –uv CentOS7.2daygeek.com
|
||||
```
|
||||
|
||||
--------------------------------------------------------------------------------
|
||||
|
||||
via: https://www.2daygeek.com/install-configure-ntp-server-ntp-client-in-linux/
|
||||
|
||||
作者:[Magesh Maruthamuthu][a]
|
||||
选题:[lujun9972][b]
|
||||
译者:[arrowfeng](https://github.com/arrowfeng)
|
||||
校对:[校对者ID](https://github.com/校对者ID)
|
||||
|
||||
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
|
||||
|
||||
[a]: https://www.2daygeek.com/author/magesh/
|
||||
[b]: https://github.com/lujun9972
|
||||
[1]: https://www.2daygeek.com/configure-ntp-client-using-chrony-in-linux/
|
||||
[2]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
|
||||
[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
|
||||
[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
|
||||
[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
|
||||
[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
|
@ -0,0 +1,574 @@
|
||||
[#]: collector: (lujun9972)
|
||||
[#]: translator: (FSSlc)
|
||||
[#]: reviewer: ( )
|
||||
[#]: publisher: ( )
|
||||
[#]: url: ( )
|
||||
[#]: subject: (Inter-process communication in Linux: Using pipes and message queues)
|
||||
[#]: via: (https://opensource.com/article/19/4/interprocess-communication-linux-channels)
|
||||
[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu)
|
||||
|
||||
Linux 下的进程间通信:使用管道和消息队列
|
||||
======
|
||||
学习在 Linux 中进程是如何与其他进程进行同步的。
|
||||
![Chat bubbles][1]
|
||||
|
||||
本篇是 Linux 下[进程间通信][2](IPC)系列的第二篇文章。[第一篇文章][3] 聚焦于通过共享文件和共享内存段这样的共享存储来进行 IPC。这篇文章的重点将转向管道,它是连接需要通信的进程之间的通道。管道拥有一个 _写端_ 用于写入字节数据,还有一个 _读端_ 用于按照先入先出的顺序读取这些字节数据。而这些字节数据可能代表任何东西:数字、数字电影等等。
|
||||
|
||||
管道有两种类型,命名管道和无名管道,都可以交互式地在命令行或程序中使用;相关的例子在下面展示。这篇文章也将介绍内存队列,尽管它们有些过时了,但它们不应该受到这样的待遇。
|
||||
|
||||
在本系列的第一篇文章中的示例代码承认了在 IPC 中可能受到竞争条件(不管是基于文件的还是基于内存的)的威胁。自然地我们也会考虑基于管道的 IPC 的安全并发问题,这个也将在本文中提及。针对管道和内存队列的例子将会使用 POSIX 推荐使用的 API, POSIX 的一个核心目标就是线程安全。
|
||||
|
||||
考虑查看 [**mq_open** 函数的 man 页][4],这个函数属于内存队列的 API。这个 man 页中有一个章节是有关 [特性][5] 的小表格:
|
||||
|
||||
Interface | Attribute | Value
---|---|---
mq_open() | Thread safety | MT-Safe
|
||||
|
||||
上面的 **MT-Safe**(**MT** 指的是 multi-threaded,多线程)意味着 **mq_open** 函数是线程安全的,进而暗示是进程安全的:一个进程的执行和它的一个线程执行的过程类似,假如竞争条件不会发生在处于 _相同_ 进程的线程中,那么这样的条件也不会发生在处于不同进程的线程中。**MT-Safe** 特性保证了调用 **mq_open** 时不会出现竞争条件。一般来说,基于通道的 IPC 是并发安全的,尽管下面的例子中会有一个相关的警告说明。
|
||||
|
||||
### 无名管道
|
||||
|
||||
首先让我们通过一个特意构造的命令行例子来展示无名管道是如何工作的。在所有的现代系统中,符号 **|** 在命令行中都代表一个无名管道。假设我们的命令行提示符为 **%**,接下来考虑下面的命令:
|
||||
|
||||
```shell
|
||||
% sleep 5 | echo "Hello, world!" ## writer to the left of |, reader to the right
|
||||
```
|
||||
|
||||
_sleep_ 和 _echo_ 程序以不同的进程执行,无名管道允许它们进行通信。但是上面的例子被特意设计为没有通信发生。问候语 _Hello, world!_ 出现在屏幕中,然后过了 5 秒后,命令行返回,暗示 _sleep_ 和 _echo_ 进程都已经结束了。这期间发生了什么呢?
|
||||
|
||||
在命令行中的竖线 **|** 的语法中,左边的进程(_sleep_)是写入方,右边的进程(_echo_)为读取方。默认情况下,读入方将会堵塞,直到从通道中能够读取字节数据,而写入方在写完它的字节数据后,将发送 流已终止 的标志。(即便写入方永久地停止了,一个流已终止的标志还是会发给读取方。)无名管道将保持到写入方和读取方都停止的那个时刻。
|
||||
|
||||
在上面的例子中,_sleep_ 进程并没有向通道写入任何的字节数据,但在 5 秒后就停止了,这时将向通道发送一个流已终止的标志。与此同时,_echo_ 进程立即向标准输出(屏幕)写入问候语,因为这个进程并不从通道中读入任何字节,所以它并没有等待。一旦 _sleep_ 和 _echo_ 进程都终止了,不会再用作通信的无名管道将会消失然后返回命令行提示符。
|
||||
|
||||
下面这个更加实用示例将使用两个无名管道。我们假定文件 _test.dat_ 的内容如下:
|
||||
|
||||
```
|
||||
this
|
||||
is
|
||||
the
|
||||
way
|
||||
the
|
||||
world
|
||||
ends
|
||||
```
|
||||
|
||||
下面的命令
|
||||
|
||||
```
|
||||
% cat test.dat | sort | uniq
|
||||
```
|
||||
|
||||
会将 _cat_(concatenate 的缩写)进程的输出通过管道传给 _sort_ 进程以生成排序后的输出,然后将排序后的输出通过管道传给 _uniq_ 进程以消除重复的记录(在本例中,会将两次出现的 **the** 缩减为一个):
|
||||
|
||||
```
|
||||
ends
|
||||
is
|
||||
the
|
||||
this
|
||||
way
|
||||
world
|
||||
```
|
||||
|
||||
下面展示的情景展示的是一个带有两个进程的程序通过一个无名管道通信来进行通信。
|
||||
|
||||
#### 示例 1. 两个进程通过一个无名管道来进行通信
|
||||
|
||||
|
||||
```c
|
||||
#include <sys/wait.h> /* wait */
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h> /* exit functions */
|
||||
#include <unistd.h> /* read, write, pipe, _exit */
|
||||
#include <string.h>
|
||||
|
||||
#define ReadEnd 0
|
||||
#define WriteEnd 1
|
||||
|
||||
void report_and_exit(const char* msg) {
|
||||
[perror][6](msg);
|
||||
[exit][7](-1); /** failure **/
|
||||
}
|
||||
|
||||
int main() {
|
||||
int pipeFDs[2]; /* two file descriptors */
|
||||
char buf; /* 1-byte buffer */
|
||||
const char* msg = "Nature's first green is gold\n"; /* bytes to write */
|
||||
|
||||
if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
|
||||
pid_t cpid = fork(); /* fork a child process */
|
||||
if (cpid < 0) report_and_exit("fork"); /* check for failure */
|
||||
|
||||
if (0 == cpid) { /*** child ***/ /* child process */
|
||||
close(pipeFDs[WriteEnd]); /* child reads, doesn't write */
|
||||
|
||||
while (read(pipeFDs[ReadEnd], &buf, 1) > 0) /* read until end of byte stream */
|
||||
write(STDOUT_FILENO, &buf, sizeof(buf)); /* echo to the standard output */
|
||||
|
||||
close(pipeFDs[ReadEnd]); /* close the ReadEnd: all done */
|
||||
_exit(0); /* exit and notify parent at once */
|
||||
}
|
||||
else { /*** parent ***/
|
||||
close(pipeFDs[ReadEnd]); /* parent writes, doesn't read */
|
||||
|
||||
write(pipeFDs[WriteEnd], msg, [strlen][8](msg)); /* write the bytes to the pipe */
|
||||
close(pipeFDs[WriteEnd]); /* done writing: generate eof */
|
||||
|
||||
wait(NULL); /* wait for child to exit */
|
||||
[exit][7](0); /* exit normally */
|
||||
}
|
||||
return 0;
|
||||
}
|
||||
```
|
||||
|
||||
上面名为 _pipeUN_ 的程序使用系统函数 **fork** 来创建一个进程。尽管这个程序只有一个单一的源文件,在它正确执行的情况下将会发生多进程的情况。下面的内容是对库函数 **fork** 如何工作的一个简要回顾:
|
||||
|
||||
* **fork** 函数由 _父_ 进程调用,在失败时返回 **-1** 给父进程。在 _pipeUN_ 这个例子中,相应的调用是
|
||||
|
||||
```c
|
||||
pid_t cpid = fork(); /* called in parent */
|
||||
```
|
||||
|
||||
函数,调用后的返回值也被保存下来了。在这个例子中,保存在整数类型 **pid_t** 的变量 **cpid** 中。(每个进程有它自己的 _进程 ID_,一个非负的整数,用来标记进程)。复刻一个新的进程可能会因为多种原因而失败。最终它们将会被包括进一个完整的 _进程表_,这个结构由系统维持,以此来追踪进程状态。明确地说,僵尸进程假如没有被处理掉,将可能引起一个进程表被填满。
|
||||
* 假如 **fork** 调用成功,则它将创建一个新的子进程,向父进程返回一个值,向子进程返回另外的一个值。在调用 **fork** 后父进程和子进程都将执行相同的代码。(子进程继承了到此为止父进程中声明的所有变量的拷贝),特别地,一次成功的 **fork** 调用将返回如下的东西:
|
||||
* 向子进程返回 0
|
||||
* 向父进程返回子进程的进程 ID
|
||||
* 在依次成功的 **fork** 调用后,一个 _if/else_ 或等价的结构将会被用来隔离针对父进程和子进程的代码。在这个例子中,相应的声明为:
|
||||
|
||||
```c
|
||||
if (0 == cpid) { /*** child ***/
|
||||
...
|
||||
}
|
||||
else { /*** parent ***/
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
假如成功地复刻出了一个子进程,_pipeUN_ 程序将像下面这样去执行。存在一个整数的数列
|
||||
|
||||
```c
|
||||
int pipeFDs[2]; /* two file descriptors */
|
||||
```
|
||||
|
||||
来保存两个文件描述符,一个用来向管道中写入,另一个从管道中写入。(数组元素 **pipeFDs[0]** 是读端的文件描述符,元素 **pipeFDs[1]** 是写端的文件描述符。)在调用 **fork** 之前,对系统 **pipe** 函数的成功调用,将立刻使得这个数组获得两个文件描述符:
|
||||
|
||||
```c
|
||||
if (pipe(pipeFDs) < 0) report_and_exit("pipeFD");
|
||||
```
|
||||
|
||||
父进程和子进程现在都有了文件描述符的副本。但 _分离关注点_ 模式意味着每个进程恰好只需要一个描述符。在这个例子中,父进程负责写入,而子进程负责读取,尽管这样的角色分配可以反过来。在 _if_ 子句中的第一个语句将用于关闭管道的读端:
|
||||
|
||||
```c
|
||||
close(pipeFDs[WriteEnd]); /* called in child code */
|
||||
```
|
||||
|
||||
在父进程中的 _else_ 子句将会关闭管道的读端:
|
||||
|
||||
```c
|
||||
close(pipeFDs[ReadEnd]); /* called in parent code */
|
||||
```
|
||||
|
||||
然后父进程将向无名管道中写入某些字节数据(ASCII 代码),子进程读取这些数据,然后向标准输出中回放它们。
|
||||
|
||||
在这个程序中还需要澄清的一点是在父进程代码中的 **wait** 函数。一旦被创建后,子进程很大程度上独立于它的父进程,正如简短的 _pipeUN_ 程序所展示的那样。子进程可以执行任意的代码,而它们可能与父进程完全没有关系。但是,假如当子进程终止时,系统将会通过一个信号来通知父进程。
|
||||
|
||||
要是父进程在子进程之前终止又该如何呢?在这种情形下,除非采取了预防措施,子进程将会变成在进程表中的一个 _僵尸_ 进程。预防措施有两大类型:第一种是让父进程去通知系统,告诉系统它对子进程的终止没有任何兴趣:
|
||||
|
||||
```c
|
||||
signal(SIGCHLD, SIG_IGN); /* in parent: ignore notification */
|
||||
```
|
||||
|
||||
第二种方法是在子进程终止时,让父进程执行一个 **wait**。这样就确保了父进程可以独立于子进程而存在。在 _pipeUN_ 程序中使用了第二种方法,其中父进程的代码使用的是下面的调用:
|
||||
|
||||
```c
|
||||
wait(NULL); /* called in parent */
|
||||
```
|
||||
|
||||
这个对 **wait** 的调用意味着 _一直等待直到任意一个子进程的终止发生_,因此在 _pipeUN_ 程序中,只有一个子进程。(其中的 **NULL** 参数可以被替换为一个保存有子程序退出状态的整数变量的地址。)对于更细颗粒度的控制,还可以使用更灵活的 **waitpid** 函数,例如特别指定多个子进程中的某一个。
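
下面是一个最小的 **waitpid** 用法示意(假设在 POSIX 环境下编译运行),它只等待某一个特定的子进程,并取出该子进程的退出状态:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main() {
  pid_t cpid = fork();
  if (cpid < 0) return -1;
  if (0 == cpid) _exit(7);                 /* 子进程:立刻以状态 7 退出 */

  int status;
  pid_t done = waitpid(cpid, &status, 0);  /* 只等待 cpid 这一个子进程 */
  if (done == cpid && WIFEXITED(status))   /* 子进程正常退出时取出退出状态 */
    printf("child exited with status %d\n", WEXITSTATUS(status));
  return 0;
}
```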

_pipeUN_ 还采取了另一个预防措施。当父进程结束等待后,它调用常规的 **exit** 函数退出。与之相对,子进程调用的是 **_exit** 变种,它会加速处理终止相关的通知。实际效果是,子进程告诉系统立刻去通知父进程:这个子进程已经终止了。

假如两个进程向相同的无名管道中写入内容,字节数据会交错吗?例如,假如进程 P1 向管道写入内容:

```
foo bar
```

同时进程 P2 并发地写入:

```
baz baz
```

到相同的管道,看起来管道中的内容可能会被任意交错,例如像这样:

```
baz foo baz bar
```

只要没有单次写操作超过 **PIPE_BUF** 字节,POSIX 标准就确保写操作不会交错。在 Linux 系统中,**PIPE_BUF** 的大小是 4096 字节。对于管道,我更倾向于只有一个写方和一个读方,从而绕开这个问题。
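
下面是一个小的示意程序(假设在 POSIX 环境下编译运行),用 **fpathconf** 在运行时查询某个具体管道的原子写入上限,并与头文件中的 **PIPE_BUF** 常量对比;在 Linux 上,这两个值通常都是 4096:

```c
#include <limits.h>   /* PIPE_BUF 常量 */
#include <stdio.h>
#include <unistd.h>   /* pipe、fpathconf、close */

int main() {
  int pipeFDs[2];     /* [0] 为读端,[1] 为写端 */
  if (pipe(pipeFDs) < 0) return -1;

  /* _PC_PIPE_BUF:该管道上可保证原子(不交错)写入的最大字节数 */
  long atomicMax = fpathconf(pipeFDs[1], _PC_PIPE_BUF);
  printf("PIPE_BUF = %d, fpathconf = %ld\n", PIPE_BUF, atomicMax);

  close(pipeFDs[0]);
  close(pipeFDs[1]);
  return 0;
}
```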

### 命名管道

无名管道没有备份文件:系统只维护一个内存缓冲区,用来将字节数据从写方传给读方。一旦写方和读方终止,这个缓冲区将会被回收,无名管道也随之消失。相反,命名管道有备份文件和一套不同的 API。

下面让我们通过另一个命令行示例来了解命名管道的要点。下面是具体的步骤:

* 开启两个终端。这两个终端的工作目录应该相同。
* 在其中一个终端中,键入下面的两个命令(命令行提示符仍然是 **%**,我的注释以 **##** 打头):

```shell
% mkfifo tester ## creates a backing file named tester
% cat tester ## type the pipe's contents to stdout
```

在最开始,没有任何东西会出现在终端中,因为到现在为止还没有向命名管道中写入任何东西。

* 在第二个终端中输入下面的命令:

```shell
% cat > tester ## redirect keyboard input to the pipe
hello, world! ## then hit Return key
bye, bye ## ditto
<Control-C> ## terminate session with a Control-C
```

无论在这个终端中输入什么,它都会在另一个终端中显示出来。一旦键入 **Ctrl+C**,就会回到正常的命令行提示符,因为管道已经被关闭了。

* 通过移除实现命名管道的文件来进行清理:

```shell
% unlink tester
```

正如 _mkfifo_ 程序的名字所暗示的那样,命名管道也被叫做 FIFO,因为第一个进入的字节就会第一个出来,其他的以此类推。有一个名为 **mkfifo** 的库函数,用它可以在程序中创建一个命名管道,它将用在下一个示例中,该示例由两个进程组成:一个向命名管道写入,而另一个从该管道读取。

#### 示例 2. _fifoWriter_ 程序

```c
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <stdlib.h>
#include <stdio.h>

#define MaxLoops 12000 /* outer loop */
#define ChunkSize 16 /* how many written at a time */
#define IntsPerChunk 4 /* four 4-byte ints per chunk */
#define MaxZs 250 /* max microseconds to sleep */

int main() {
  const char* pipeName = "./fifoChannel";
  mkfifo(pipeName, 0666); /* read/write for user/group/others */
  int fd = open(pipeName, O_CREAT | O_WRONLY); /* open as write-only */
  if (fd < 0) return -1; /** error **/

  int i;
  for (i = 0; i < MaxLoops; i++) { /* write MaxWrites times */
    int j;
    for (j = 0; j < ChunkSize; j++) { /* each time, write ChunkSize bytes */
      int k;
      int chunk[IntsPerChunk];
      for (k = 0; k < IntsPerChunk; k++)
        chunk[k] = rand();
      write(fd, chunk, sizeof(chunk));
    }
    usleep((rand() % MaxZs) + 1); /* pause a bit for realism */
  }

  close(fd); /* close pipe: generates an end-of-stream marker */
  unlink(pipeName); /* unlink from the implementing file */
  printf("%i ints sent to the pipe.\n", MaxLoops * ChunkSize * IntsPerChunk);

  return 0;
}
```

上面的 _fifoWriter_ 程序可以总结为如下几点:

* 首先程序创建了一个命名管道用来写入数据:

```c
mkfifo(pipeName, 0666); /* read/write perms for user/group/others */
int fd = open(pipeName, O_CREAT | O_WRONLY);
```

其中的 **pipeName** 是备份文件的名字,它被作为第一个参数传递给 **mkfifo**。接着,命名管道通过我们熟悉的 **open** 函数调用被打开,而这个函数将会返回一个文件描述符。

* 在实现层面上,_fifoWriter_ 不会一次性将所有的数据都写入,而是写入一个块,然后休息随机数目的微秒,接着再循环往复。总的来说,有 768000 个 4 字节的整数值被写入到命名管道中。
* 在关闭命名管道后,_fifoWriter_ 还会解除与备份文件的关联(unlink):

```c
close(fd); /* close pipe: generates end-of-stream marker */
unlink(pipeName); /* unlink from the implementing file */
```

一旦连接到管道的每个进程都执行了 unlink 操作后,系统将回收备份文件。在这个例子中,只有两个这样的进程:_fifoWriter_ 和 _fifoReader_,它们都会执行 _unlink_ 操作。

这两个程序应该在相同工作目录下的不同终端中执行。但是 _fifoWriter_ 应该在 _fifoReader_ 之前启动,因为需要 _fifoWriter_ 去创建管道。然后 _fifoReader_ 才能够获取到刚被创建的命名管道。

#### 示例 3. _fifoReader_ 程序

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

unsigned is_prime(unsigned n) { /* not pretty, but gets the job done efficiently */
  if (n <= 3) return n > 1;
  if (0 == (n % 2) || 0 == (n % 3)) return 0;

  unsigned i;
  for (i = 5; (i * i) <= n; i += 6)
    if (0 == (n % i) || 0 == (n % (i + 2))) return 0;

  return 1; /* found a prime! */
}

int main() {
  const char* file = "./fifoChannel";
  int fd = open(file, O_RDONLY);
  if (fd < 0) return -1; /* no point in continuing */
  unsigned count = 0, total = 0, primes_count = 0;

  while (1) {
    int next;
    int i;
    ssize_t count = read(fd, &next, sizeof(int));

    if (0 == count) break; /* end of stream */
    else if (count == sizeof(int)) { /* read a 4-byte int value */
      total++;
      if (is_prime(next)) primes_count++;
    }
  }

  close(fd); /* close pipe from read end */
  unlink(file); /* unlink from the underlying file */
  printf("Received ints: %u, primes: %u\n", total, primes_count);

  return 0;
}
```

上面的 _fifoReader_ 的内容可以总结为如下几点:

* 因为 _fifoWriter_ 已经创建了命名管道,所以 _fifoReader_ 只需要利用标准的 **open** 调用,通过备份文件来访问管道:

```c
const char* file = "./fifoChannel";
int fd = open(file, O_RDONLY);
```

这个文件是以只读方式打开的。

* 然后这个程序进入一个潜在的无限循环,每次循环时尝试读取一个 4 字节的块。**read** 调用:

```c
ssize_t count = read(fd, &next, sizeof(int));
```

返回 0 表示流已结束。在这种情况下,_fifoReader_ 跳出循环,关闭命名管道,并在终止前 unlink 备份文件。

* 在读入一个 4 字节整数后,_fifoReader_ 检查这个数是否为质数。这个操作代表了一个生产级别的读取器可能对接收到的字节数据执行的业务逻辑。在一次示例运行中,接收到的 768000 个整数中有 37682 个质数。

在重复运行示例时,_fifoReader_ 都能成功地读取 _fifoWriter_ 写入的所有字节。这并不让人惊讶。这两个进程在相同的机器上执行,不用考虑网络相关的问题。命名管道是一种可信且高效的 IPC 机制,因而被广泛使用。

下面是这两个程序的输出,它们在不同的终端中启动,但处于相同的工作目录:

```shell
% ./fifoWriter
768000 ints sent to the pipe.
###
% ./fifoReader
Received ints: 768000, primes: 37682
```

### 消息队列

管道有着严格的先入先出行为:第一个被写入的字节将第一个被读出,第二个写入的字节将第二个被读出,以此类推。消息队列可以有同样的行为,但它又足够灵活,使得字节块可以不按先入先出的次序被接收。

正如它的名字所暗示的那样,消息队列是一系列的消息,每个消息包含两部分:

* 荷载,一个字节序列(在 C 中是 **char**)
* 类型,以一个正整数值的形式给定;类型用来对消息分类,以便更灵活地取回消息

考虑下面对一个消息队列的描述,每个消息都被一个整数类型标记:

```
          +-+    +-+    +-+    +-+
sender--->|3|--->|2|--->|2|--->|1|--->receiver
          +-+    +-+    +-+    +-+
```

在上面展示的 4 个消息中,标记为 1 的消息在最前头,即最接近接收端;跟着是两个标记为 2 的消息,最后是一个标记为 3 的消息。假如按照严格的 FIFO 行为,消息将会以 1-2-2-3 的次序被接收。但是消息队列允许按其他次序取回消息。例如,接收方可以以 3-2-1-2 的次序接收它们。

_mqueue_ 示例包含两个程序,_sender_ 将向消息队列中写入数据,而 _receiver_ 将从这个队列中读取数据。这两个程序都包含下面展示的头文件 _queue.h_:

#### 示例 4. 头文件 _queue.h_

```c
#define ProjectId 123
#define PathName "queue.h" /* any existing, accessible file would do */
#define MsgLen 4
#define MsgCount 6

typedef struct {
  long type; /* must be of type long */
  char payload[MsgLen + 1]; /* bytes in the message */
} queuedMessage;
```

上面的头文件定义了一个名为 **queuedMessage** 的结构类型,它带有 **payload**(字节数组)和 **type**(整数)这两个域。该文件也定义了一些符号常数(使用 **#define** 语句)。前两个常数被用来生成一个 key,而这个 key 反过来被用来获取消息队列的 ID。**ProjectId** 可以是任何正整数值,而 **PathName** 必须是一个存在的、可访问的文件,在这个示例中,指的是文件 _queue.h_。在 _sender_ 和 _receiver_ 中都有的设置语句为:

```c
key_t key = ftok(PathName, ProjectId); /* generate key */
int qid = msgget(key, 0666 | IPC_CREAT); /* use key to get queue id */
```

ID **qid** 在效果上是消息队列文件描述符的对应物。
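
顺带一提,拿到 **qid** 之后,还可以用 **msgctl** 的 **IPC_STAT** 命令查看队列的当前状态。下面是一个简单的示意程序(假设 _queue.h_ 在同一目录下,且按上述方式获取 key 和 qid):

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include "queue.h"   /* 提供 PathName 和 ProjectId */

int main() {
  key_t key = ftok(PathName, ProjectId);
  int qid = msgget(key, 0666 | IPC_CREAT);
  if (qid < 0) return -1;

  struct msqid_ds info;                   /* 队列的状态信息 */
  if (msgctl(qid, IPC_STAT, &info) == 0)  /* IPC_STAT:只读查询,不修改队列 */
    printf("queue %i holds %lu message(s)\n", qid, (unsigned long) info.msg_qnum);
  return 0;
}
```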

#### 示例 5. _sender_ 程序

```c
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdlib.h>
#include <string.h>
#include "queue.h"

void report_and_exit(const char* msg) {
  perror(msg);
  exit(-1); /* EXIT_FAILURE */
}

int main() {
  key_t key = ftok(PathName, ProjectId);
  if (key < 0) report_and_exit("couldn't get key...");

  int qid = msgget(key, 0666 | IPC_CREAT);
  if (qid < 0) report_and_exit("couldn't get queue id...");

  char* payloads[] = {"msg1", "msg2", "msg3", "msg4", "msg5", "msg6"};
  int types[] = {1, 1, 2, 2, 3, 3}; /* each must be > 0 */
  int i;
  for (i = 0; i < MsgCount; i++) {
    /* build the message */
    queuedMessage msg;
    msg.type = types[i];
    strcpy(msg.payload, payloads[i]);

    /* send the message */
    msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT); /* don't block */
    printf("%s sent as type %i\n", msg.payload, (int) msg.type);
  }
  return 0;
}
```

上面的 _sender_ 程序将发送 6 个消息,每两个为一个类型:前两个是类型 1,接着的两个是类型 2,最后的两个为类型 3。发送的语句:

```c
msgsnd(qid, &msg, sizeof(msg), IPC_NOWAIT);
```

被配置为非阻塞的(**IPC_NOWAIT** 标志),因为这里的消息体量都很小。唯一的危险在于,假如队列满了(在这个例子中不太可能),发送将会失败。下面的 _receiver_ 程序也将使用 **IPC_NOWAIT** 标志来接收消息。

#### 示例 6. _receiver_ 程序

```c
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <stdlib.h>
#include "queue.h"

void report_and_exit(const char* msg) {
  perror(msg);
  exit(-1); /* EXIT_FAILURE */
}

int main() {
  key_t key = ftok(PathName, ProjectId); /* key to identify the queue */
  if (key < 0) report_and_exit("key not gotten...");

  int qid = msgget(key, 0666 | IPC_CREAT); /* access if created already */
  if (qid < 0) report_and_exit("no access to queue...");

  int types[] = {3, 1, 2, 1, 3, 2}; /* different than in sender */
  int i;
  for (i = 0; i < MsgCount; i++) {
    queuedMessage msg; /* defined in queue.h */
    if (msgrcv(qid, &msg, sizeof(msg), types[i], MSG_NOERROR | IPC_NOWAIT) < 0)
      puts("msgrcv trouble...");
    printf("%s received as type %i\n", msg.payload, (int) msg.type);
  }

  /** remove the queue **/
  if (msgctl(qid, IPC_RMID, NULL) < 0) /* NULL = 'no flags' */
    report_and_exit("trouble removing queue...");

  return 0;
}
```

这个 _receiver_ 程序不会创建消息队列,尽管 API 看起来像是那样。在 _receiver_ 中,对

```c
int qid = msgget(key, 0666 | IPC_CREAT);
```

的调用可能因为带有 **IPC_CREAT** 标志而具有误导性,但是这个标志的真实意义是 _如果需要就创建,否则直接获取_。_sender_ 程序调用 **msgsnd** 来发送消息,而 _receiver_ 调用 **msgrcv** 来接收它们。在这个例子中,_sender_ 以 1-1-2-2-3-3 的次序发送消息,但 _receiver_ 接收它们的次序为 3-1-2-1-3-2,这表明消息队列并不受严格的 FIFO 行为约束:

```shell
% ./sender
msg1 sent as type 1
msg2 sent as type 1
msg3 sent as type 2
msg4 sent as type 2
msg5 sent as type 3
msg6 sent as type 3

% ./receiver
msg5 received as type 3
msg1 received as type 1
msg3 received as type 2
msg2 received as type 1
msg6 received as type 3
msg4 received as type 2
```

上面的输出显示,_sender_ 和 _receiver_ 可以在同一个终端中先后启动。输出也显示消息队列是持久的:即便 _sender_ 进程在创建队列、向队列写入数据然后退出之后,队列仍然存在。只有在 _receiver_ 进程显式地调用 **msgctl** 移除该队列之后,这个队列才会消失:

```c
if (msgctl(qid, IPC_RMID, NULL) < 0) /* remove queue */
```
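
在命令行上也能观察到这种持久性:System V 消息队列可以用 `ipcs` 列出,必要时可以用 `ipcrm` 手动删除。下面的命令只是一个示意(假设在 Linux 上运行,`<队列ID>` 是 `ipcs` 输出中实际的队列 ID):

```shell
% ipcs -q            ## 列出当前存在的消息队列
% ipcrm -q <队列ID>  ## 按队列 ID 删除某个消息队列
```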

### 总结

管道和消息队列的 API 在根本上都是单向的:一个进程写,另一个进程读。当然也存在双向命名管道的实现,但我认为这种 IPC 机制在最简单的用法下反而是最佳的。正如前面提到的那样,消息队列已经不太流行了,尽管并没有什么好的理由;它仍然是 IPC 工具箱中的一件工具。这个快速的 IPC 工具箱之旅将以第 3 部分(通过套接字和信号来展示 IPC)来结束。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/interprocess-communication-linux-channels

作者:[Marty Kalin][a]
选题:[lujun9972][b]
译者:[FSSlc](https://github.com/FSSlc)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/mkalindepauledu
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles)
[2]: https://en.wikipedia.org/wiki/Inter-process_communication
[3]: https://opensource.com/article/19/4/interprocess-communication-ipc-linux-part-1
[4]: http://man7.org/linux/man-pages/man2/mq_open.2.html
[5]: http://man7.org/linux/man-pages/man2/mq_open.2.html#ATTRIBUTES
[6]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html
[7]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html
[8]: http://www.opengroup.org/onlinepubs/009695399/functions/strlen.html
[9]: http://www.opengroup.org/onlinepubs/009695399/functions/rand.html
[10]: http://www.opengroup.org/onlinepubs/009695399/functions/printf.html
[11]: http://www.opengroup.org/onlinepubs/009695399/functions/strcpy.html
[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html
@ -0,0 +1,290 @@

[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Building scalable social media sentiment analysis services in Python)
[#]: via: (https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable)
[#]: author: (Michael McCune https://opensource.com/users/elmiko/users/jschlessman)

使用 Python 构建可扩展的社交媒体情感分析服务
======
学习如何使用 spaCy、vaderSentiment、Flask 和 Python 来为你的工作添加情感分析能力。
![Tall building with windows][1]

本系列的[第一部分][2]提供了情感分析工作原理的一些背景知识,现在让我们研究如何将这些功能添加到你的设计中。

### 探索 Python 库 spaCy 和 vaderSentiment

#### 前提条件

* 一个终端 shell
* shell 中的 Python 语言二进制文件(3.4+ 版本)
* 用于安装 Python 包的 **pip** 命令
* (可选)一个 [Python 虚拟环境][3],使你的工作与系统隔离开来

#### 配置环境

在开始编写代码之前,你需要安装 [spaCy][4] 和 [vaderSentiment][5] 包来设置 Python 环境,同时下载一个语言模型来帮助你分析。幸运的是,大部分操作都容易在命令行中完成。

在 shell 中,输入以下命令来安装 spaCy 和 vaderSentiment 包:

```
pip install spacy vaderSentiment
```

包安装完成后,再安装 spaCy 可用于文本分析的语言模型。以下命令将使用 spaCy 模块下载并安装英语[模型][6]:

```
python -m spacy download en_core_web_sm
```

安装了这些库和模型之后,就可以开始编码了。

#### 一个简单的文本分析

使用 [Python 解释器交互模式][7]编写一些代码来分析单个文本片段。首先启动 Python 环境:

```
$ python
Python 3.6.8 (default, Jan 31 2019, 09:38:34)
[GCC 8.2.1 20181215 (Red Hat 8.2.1-6)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
```

_(你的 Python 解释器版本打印可能与此不同。)_

1. 导入所需模块:
```
>>> import spacy
>>> from vaderSentiment import vaderSentiment
```
2. 从 spaCy 加载英语语言模型:
```
>>> english = spacy.load("en_core_web_sm")
```
3. 处理一段文本。本例展示了一个非常简单的句子,我们希望它能给我们带来些许积极的情感:
```
>>> result = english("I like to eat applesauce with sugar and cinnamon.")
```
4. 从处理后的结果中收集句子。SpaCy 已识别并处理短语中的实体,这一步为每个句子生成情感(即使在本例中只有一个句子):
```
>>> sentences = [str(s) for s in result.sents]
```
5. 使用 vaderSentiment 创建一个分析器:
```
>>> analyzer = vaderSentiment.SentimentIntensityAnalyzer()
```
6. 对句子进行情感分析:
```
>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
```

`sentiment` 变量现在包含例句的极性分数。打印出这个值,看看它是如何分析这个句子的。

```
>>> print(sentiment)
[{'neg': 0.0, 'neu': 0.737, 'pos': 0.263, 'compound': 0.3612}]
```

这个结构是什么意思?

表面上,这是一个只有一个字典对象的数组。如果有多个句子,那么每个句子都会对应一个字典对象。字典中有四个键,对应不同类型的情感。**neg** 键表示负面情感,因为在本例中没有报告任何负面情感,**0.0** 值证明了这一点。**neu** 键表示中性情感,它的得分相当高,为 **0.737**(最高为 **1.0**)。**pos** 键代表积极情感,得分适中,为 **0.263**。最后,**compound** 键代表文本的总体得分,它的取值可以从负数到正数,**0.3612** 表示总体上偏积极。
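
为了更直观,你可以基于 **compound** 得分给每句话贴上一个粗略的标签。下面是一个小示意(阈值 ±0.05 是 vaderSentiment 文档建议的常用取值,属于可按需调整的假设):

```
def label(score, pos_threshold=0.05, neg_threshold=-0.05):
    """根据 compound 得分返回一个粗略的情感标签。"""
    if score["compound"] >= pos_threshold:
        return "positive"
    if score["compound"] <= neg_threshold:
        return "negative"
    return "neutral"

print([label(s) for s in sentiment])  # 对上面的例句,应输出 ['positive']
```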

要查看这些值可能如何变化,你可以使用已经输入的代码做一个小实验。以下代码块显示了如何对一个类似句子的情感评分进行评估。

```
>>> result = english("I love applesauce!")
>>> sentences = [str(s) for s in result.sents]
>>> sentiment = [analyzer.polarity_scores(str(s)) for s in sentences]
>>> print(sentiment)
[{'neg': 0.0, 'neu': 0.182, 'pos': 0.818, 'compound': 0.6696}]
```

你可以看到,通过将例句改为非常积极的句子,`sentiment` 的值发生了巨大变化。
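
反过来也一样:换一个明显负面的句子,**neg** 和 **compound** 的值就会朝另一个方向移动(具体分数以你本地运行的结果为准):

```
>>> result = english("I hate washing dishes!")
>>> sentences = [str(s) for s in result.sents]
>>> print([analyzer.polarity_scores(str(s)) for s in sentences])
```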

### 建立一个情感分析服务

现在你已经为情感分析组装好了基本的代码块,让我们将这些东西转化为一个简单的服务。

在这个演示中,你将使用 Python [Flask 包][9] 创建一个 [RESTful][8] HTTP 服务器。此服务将接受英文文本数据并返回情感分析结果。请注意,此示例服务是用于学习所涉及的技术的,而不是可以直接投入生产的东西。

#### 前提条件

* 一个终端 shell
* shell 中的 Python 语言二进制文件(3.4+ 版本)
* 用于安装 Python 包的 **pip** 命令
* **curl** 命令
* 一个文本编辑器
* (可选)一个 [Python 虚拟环境][3],使你的工作与系统隔离开来

#### 配置环境

这个环境几乎与上一节中的环境相同,唯一的区别是在 Python 环境中添加了 Flask 包。

1. 安装所需依赖项:
```
pip install spacy vaderSentiment flask
```
2. 安装 spaCy 的英语语言模型:
```
python -m spacy download en_core_web_sm
```

#### 创建应用程序文件

打开编辑器,创建一个名为 **app.py** 的文件。添加以下内容 _(不用担心,我们将解释每一行)_:

```
import flask
import spacy
import vaderSentiment.vaderSentiment as vader

app = flask.Flask(__name__)
analyzer = vader.SentimentIntensityAnalyzer()
english = spacy.load("en_core_web_sm")

def get_sentiments(text):
    result = english(text)
    sentences = [str(sent) for sent in result.sents]
    sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
    return sentiments

@app.route("/", methods=["POST", "GET"])
def index():
    if flask.request.method == "GET":
        return "To access this service send a POST request to this URL with" \
               " the text you want analyzed in the body."
    body = flask.request.data.decode("utf-8")
    sentiments = get_sentiments(body)
    return flask.json.dumps(sentiments)
```

虽然这个源文件不是很大,但它的内容非常密集。让我们来看看这个应用程序的各个部分,并解释它们在做什么。

```
import flask
import spacy
import vaderSentiment.vaderSentiment as vader
```

前三行引入了执行语言分析和提供 HTTP 框架所需的包。

```
app = flask.Flask(__name__)
analyzer = vader.SentimentIntensityAnalyzer()
english = spacy.load("en_core_web_sm")
```

接下来的三行代码创建了一些全局变量。第一个变量 **app** 是 Flask 用于创建 HTTP 路由的主要入口点。第二个变量 **analyzer** 与上一个示例中使用的类型相同,它将用于生成情感分数。最后一个变量 **english** 也与上一个示例中使用的类型相同,它将用于注释和标记初始文本输入。

你可能想知道为什么要全局声明这些变量。对于 **app** 变量,这是许多 Flask 应用程序的标准做法。但是,对于 **analyzer** 和 **english** 变量,将它们设置为全局变量的决定是基于相关类的加载时间。虽然加载时间看起来很短,但如果在 HTTP 服务器的上下文中每次请求都要重新加载,这些延迟就会对性能产生负面影响。

```
def get_sentiments(text):
    result = english(text)
    sentences = [str(sent) for sent in result.sents]
    sentiments = [analyzer.polarity_scores(str(s)) for s in sentences]
    return sentiments
```

这部分是服务的核心:一个用于从一串文本生成情感值的函数。你可以看到此函数中的操作对应于你之前在 Python 解释器中运行的命令。这里它们被封装在一个函数定义中,源文本作为 **text** 变量传入,最后 **sentiments** 变量返回给调用者。

```
@app.route("/", methods=["POST", "GET"])
def index():
    if flask.request.method == "GET":
        return "To access this service send a POST request to this URL with" \
               " the text you want analyzed in the body."
    body = flask.request.data.decode("utf-8")
    sentiments = get_sentiments(body)
    return flask.json.dumps(sentiments)
```

源文件的最后一个函数包含了指导 Flask 如何为服务配置 HTTP 服务器的逻辑。它从一行开始,该行将 HTTP 路由 **/** 与请求方法 **POST** 和 **GET** 相关联。

在函数定义行之后,**if** 子句将检测请求方法是否为 **GET**。如果用户向服务发送此请求,那么下面的行将返回一条指示如何访问服务器的文本消息。这主要是为了方便最终用户。

下一行使用 **flask.request** 对象来获取请求的主体,该主体应包含要处理的文本字符串。**decode** 函数将字节数组转换为可用的格式化字符串。经过解码的文本消息被传递给 **get_sentiments** 函数以生成情感分数。最后,分数通过 HTTP 框架返回给用户。

你现在应该保存文件(如果尚未保存),然后返回 shell。

#### 运行情感服务

一切就绪后,使用 Flask 的内置调试服务器运行服务非常简单。要启动该服务,请在源文件所在的目录中输入以下命令:

```
FLASK_APP=app.py flask run
```

现在,你将在 shell 中看到来自服务器的一些输出,并且服务器将处于运行状态。要测试服务器是否正在运行,你需要打开第二个 shell 并使用 **curl** 命令。

首先,输入以下命令检查是否打印了指令信息:

```
curl http://localhost:5000
```

你应该看到说明消息:

```
To access this service send a POST request to this URL with the text you want analyzed in the body.
```

接下来,运行以下命令发送测试消息,查看情感分析:

```
curl http://localhost:5000 --header "Content-Type: application/json" --data "I love applesauce!"
```

你从服务器获得的响应应类似于以下内容:

```
[{"compound": 0.6696, "neg": 0.0, "neu": 0.182, "pos": 0.818}]
```

恭喜!你现在已经实现了一个 RESTful HTTP 情感分析服务。你可以在 [GitHub 上找到此服务的参考实现和本文中的所有代码][10]。

### 继续探索

现在你已经了解了自然语言处理和情感分析背后的原理和机制,下面是进一步探索该主题的一些方法。

#### 在 OpenShift 上创建流式情感分析器

虽然创建本地应用程序来研究情感分析很方便,但接下来需要能够部署应用程序以实现更广泛的用途。按照 [Radanalytics.io][11] 提供的指导和代码进行操作,你将学习如何创建一个可以容器化并部署到 Kubernetes 平台的情感分析器。你还将了解如何将 Apache Kafka 用作事件驱动消息传递的框架,以及如何将 Apache Spark 用作情感分析的分布式计算平台。

#### 使用 Twitter API 发现实时数据

虽然 [Radanalytics.io][12] 实验室可以生成合成推文流,但你并不受限于合成数据。事实上,拥有 Twitter 账户的任何人都可以使用 [Tweepy Python][13] 包访问 Twitter 流媒体 API,对推文进行情感分析。
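
下面是一个未经完整验证的小示意(基于本文发表时的 Tweepy 3.x API,新版本 Tweepy 的接口已有变化;凭据字符串只是占位符,需要换成你自己的 Twitter 开发者凭据),它把搜索到的推文逐条交给前文的分析器:

```
import tweepy
from vaderSentiment import vaderSentiment

# 占位符凭据:请替换为你自己的 Twitter API 密钥
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

analyzer = vaderSentiment.SentimentIntensityAnalyzer()

# Tweepy 3.x:api.search 按关键词搜索近期推文
for tweet in tweepy.Cursor(api.search, q="climate", lang="en").items(10):
    print(tweet.text, analyzer.polarity_scores(tweet.text))
```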

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-scalable

作者:[Michael McCune][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/elmiko/users/jschlessman
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/windows_building_sky_scale.jpg?itok=mH6CAX29 (Tall building with windows)
[2]: https://opensource.com/article/19/4/social-media-sentiment-analysis-python-part-1
[3]: https://virtualenv.pypa.io/en/stable/
[4]: https://pypi.org/project/spacy/
[5]: https://pypi.org/project/vaderSentiment/
[6]: https://spacy.io/models
[7]: https://docs.python.org/3.6/tutorial/interpreter.html
[8]: https://en.wikipedia.org/wiki/Representational_state_transfer
[9]: http://flask.pocoo.org/
[10]: https://github.com/elmiko/social-moments-service
[11]: https://github.com/radanalyticsio/streaming-lab
[12]: http://Radanalytics.io
[13]: https://github.com/tweepy/tweepy
@ -1,80 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (This is how System76 does open hardware)
[#]: via: (https://opensource.com/article/19/4/system76-hardware)
[#]: author: (Don Watkins https://opensource.com/users/don-watkins)

这就是 System76 打造开源硬件的方式
================================
是什么让新的 Thelio 台式机系列与众不同。
![在计算机上显示度量和数据][1]

大多数人对他们电脑的硬件一无所知。作为一个长期的 Linux 用户,每当想让无线网卡、显卡、显示器和其他硬件与我选择的发行版一起正常工作时,我也有过同样的沮丧。专有硬件通常使人很难找出问题所在:为什么以太网驱动、无线驱动或者鼠标驱动的表现和我们预期的不太一样?随着 Linux 发行版日渐成熟,这些可能都不再是问题,但是我们仍会发现触控板和其他外部设备的怪异行为,尤其是在我们对底层硬件知之甚少的时候。

像 [System76][2] 这样的公司的目标就是解决这些问题,以提升 Linux 用户体验。System76 生产一系列 Linux 笔记本、台式机和服务器,甚至提供了它自己的 Linux 发行版 [Pop! OS][3] 供客户选择。最近我有幸参观了 System76 在 Denver 的工厂,并参加了其新台式机产品线 [Thelio][5] 的[发布活动][4]。

### 关于 Thelio

System76 宣称,Thelio 在市场上的独特之处之一是它的开源硬件子板 Thelio Io,这个名字取自木星的第五颗卫星。Thelio Io 获得了 [OSHWA #us000145][6] 认证,它带有 4 个用于存储的 SATA 端口,以及一个控制风扇和电源按钮的嵌入式控制器。Thelio Io SAS 获得了 [OSHWA #us000146][7] 认证,它带有 4 个用于存储的 U.2 端口,没有嵌入式控制器。在演示时,System76 展示了这些组件如何引导气流穿过机箱,从而优化部件的性能。

该控制器还管理电源键,以及围绕电源键的 LED 光环,当被按下时,它会以 100% 的亮度发光。这提供了触觉和视觉上的确认:主机已经通电了。当电脑在使用中时,该按钮被设置为 35% 的亮度;当电脑处于睡眠模式时,它的亮度在 2.35% 和 25% 之间脉动。当计算机关闭后,LED 仍保持微弱的亮度,因此你能够在黑暗的房间里找到电源开关。

Thelio 的嵌入式控制器是一个低功耗的 [ATmega32U4][8] 微控制器,控制器的设置可以用 Arduino Micro 进行原型设计。你会得到多少块 Thelio Io 主板,取决于你购买哪种 Thelio 型号。

Thelio 可能是我见过的设计得最好的电脑机箱和系统。如果你曾经亲身体验过在一台常规 PC 的机箱内部动手操作,你大概会同意我的观点。我已经做过很多次了,因此我能以自己过往的糟糕经历来证明这一点。

### 为什么做开源硬件?

该主板是用 [KiCAD][9] 设计的,你可以在 [GitHub][10] 上以 GPL 许可证访问 Thelio 所有的设计文件。那么,为什么一个与其他 PC 制造商竞争的公司会设计一个独特的接口,然后开放授权呢?可能是因为该公司认识到了开源设计的价值,以及让人们可以分享并根据自己的需要调整一块 I/O 主板的能力,即便他们是市场上的竞争者。

![在 Thelio 发布会上 Don Watkins 与 System76 的 CEO Carl Richell 交谈][11]

在 [Thelio 发布会][12]上,Don Watkins 与 System76 的 CEO Carl Richell 交谈。

我问 System76 的创始人和 CEO [Carl Richell][13],公司是否担心开放它的硬件设计意味着有人会拿走它的独特设计,并用它把 System76 挤出市场。他说:

> 开源硬件对我们所有人都有益。这是我们未来提升技术、让每个人都更容易获取技术的方式。我们欢迎任何想要改进 Thelio 设计的人这么做。开源硬件不仅能帮助我们更快地改进我们的电脑,还能够使我们的客户 100% 信任他们的设备。我们的目标是尽可能地减少专有的功能,同时仍然能为消费者提供一台有竞争力的 Linux 主机。
>
> 我们已经与 Linux 社区一起工作了超过 13 年,为我们的笔记本、台式机和服务器创造一种完美顺滑的体验。我们对 Linux 社区的长期专注、为客户提供的高水准服务,以及我们的个性,使 System76 与众不同。

我还问 Carl,为什么开源硬件对 System76 和整个 PC 市场都是有意义的。他回复道:

> System76 创立之初的想法是:技术应该对每个人开放,并且可以为每个人所获取。我们还没有达到用 100% 开源的方式打造一台电脑的程度,但有了开源硬件,我们向这个目标迈出了必不可少的一大步。
>
> 我们生活在技术变成工具的时代。计算机是各级教育和很多行业中人们的工具。由于每个人都有特定的需要,每个人对于如何改进作为自己主要工具的电脑和软件都有自己的想法。开源我们的计算机能让这些想法得以实现,反过来又促使技术成为一个更强大的工具。在开源环境中,我们会持续迭代,造出更好的 PC。这挺酷的。

我们的对话最后谈到了 System76 的技术路线图,其中包括开源硬件的 mini PC,最终还会有笔记本。目前以 System76 品牌出售的 mini PC 和笔记本是由其他供应商制造的,并不基于开源硬件(当然,它们运行的 Linux 软件是开源的)。

设计并支持开源硬件是 PC 行业中改变游戏规则的做法,也正是它造就了 System76 新的 Thelio 台式机产品线的与众不同。

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/system76-hardware

作者:[Don Watkins][a]
选题:[lujun9972][b]
译者:[warmfrog](https://github.com/warmfrog)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/don-watkins
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen)
[2]: https://system76.com/
[3]: https://opensource.com/article/18/1/behind-scenes-popos-linux
[4]: /article/18/11/system76-thelio-desktop-computer
[5]: https://system76.com/desktops
[6]: https://certification.oshwa.org/us000145.html
[7]: https://certification.oshwa.org/us000146.html
[8]: https://www.microchip.com/wwwproducts/ATmega32u4
[9]: http://kicad-pcb.org/
[10]: https://github.com/system76/thelio-io
[11]: https://opensource.com/sites/default/files/uploads/don_system76_ceo.jpg (Don Watkins speaks with System76 CEO Carl Richell at the Thelio launch event.)
[12]: https://trevgstudios.smugmug.com/System76/121418-Thelio-Press-Event/i-FKWFxFv
[13]: https://www.linkedin.com/in/carl-richell-9435781
@ -1,65 +0,0 @@

[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (8 environment-friendly open software projects you should know)
[#]: via: (https://opensource.com/article/19/4/environment-projects)
[#]: author: (Laura Hilliger https://opensource.com/users/laurahilliger)

8 个你应该了解的环保开源项目
======
通过给这些致力于改善环境的项目做贡献来庆祝地球日。
![][1]

在过去的几年里,我一直在帮助 [Greenpeace][2](绿色和平组织)建立其第一个完全开源的项目 Planet 4。[Planet 4][3] 是一个全球参与平台,Greenpeace 的支持者和活动家可以在这里互动并参与组织活动,它的目标是让人们为我们的星球采取行动。我们希望邀请人们参与进来,利用人的力量来应对气候变化和塑料污染等全球性问题。开发者、设计师、作者、贡献者,以及其他通过开源支持环保主义的人,都非常欢迎[参与进来][4]!

Planet 4 远非唯一关注环境的开源项目。值此地球日,我再分享七个关注我们星球的开源项目。

**[Eco Hacker Farm][5]** 致力于支持可持续社区。它建议并支持那些将黑客空间/黑客基地和永续农业生活结合在一起的项目。该组织还有一些线上项目。访问其 [wiki][6] 或 [Twitter][7],可以了解更多 Eco Hacker Farm 正在做的事情。

**[Public Lab][8]** 是一个开放社区和非营利组织,致力于将科学掌握在公民手中。它形成于 2010 年 BP 石油泄漏灾难之后,借助开源来协助环境勘探和调查。它是一个多元化的社区,有很多种[贡献][9]的方式。

不久前,Opensource.com 的版主 Don Watkins 撰写了一篇介绍 **[Open Climate Workbench][10]** 的文章,该项目来自 Apache 基金会。[OCW][11] 提供了进行气候建模和评估的软件,可用于各种应用场景。

**[Open Source Ecology][12]** 是一个旨在改善经济运作方式的项目。该项目着眼于环境再生和社会公正,旨在重新定义我们的一些肮脏的生产和分配技术,以创造一个更可持续的文明。

**[Pangeo][13]** 促进开源和大数据工具之间的协作,以实现对海洋、大气、陆地和气候的研究,它是“一个推广开放、可重复和可扩展科学的社区”。大数据可以改变世界!

**[Leaflet][14]** 是一个著名的开源 JavaScript 库。它可以用来做各种各样的事情,包括像 [Arctic Web Map][15] 这样的环保项目:它能让科学家准确地可视化和分析北极地区,这是气候研究的一项关键能力。

当然,要是不提我在 Mozilla 的朋友们,这个列表就不完整!**[Mozilla Science Lab][16]** 社区像所有 Mozilla 项目一样非常开放,它致力于将开源原则带给科学界。它的项目和社区使科学家能够进行我们这个世界所需要的各种研究,以解决一些最普遍的环境问题。

### 如何贡献

在这个地球日,不妨做一个为期六个月的承诺:将一些时间贡献给一个有助于应对气候变化、或以其他方式鼓励人们保护地球母亲的开源项目。肯定还有许多其他关注环境的开源项目,欢迎在评论中分享!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/4/environment-projects

作者:[Laura Hilliger][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/laurahilliger
[b]: https://github.com/lujun9972
[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/world_hands_diversity.png?itok=zm4EDxgE
[2]: http://www.greenpeace.org
[3]: http://medium.com/planet4
[4]: https://planet4.greenpeace.org/community/#partners-open-sourcers
[5]: https://wiki.ecohackerfarm.org/start
[6]: https://wiki.ecohackerfarm.org/
[7]: https://twitter.com/EcoHackerFarm
[8]: https://publiclab.org/
[9]: https://publiclab.org/contribute
[10]: https://opensource.com/article/17/1/apache-open-climate-workbench
[11]: https://climate.apache.org/
[12]: https://wiki.opensourceecology.org/wiki/Project_needs
[13]: http://pangeo.io/
[14]: https://leafletjs.com/
[15]: https://webmap.arcticconnect.ca/#ac_3573/2/20.8/-65.5
[16]: https://science.mozilla.org/