Merge remote-tracking branch 'LCTT/master'

This commit is contained in:
Xingyu.Wang 2019-05-05 13:21:41 +08:00
commit 2e5598d026
18 changed files with 2245 additions and 252 deletions


**`/dev/urandom` is insecure. For cryptographic purposes you must use `/dev/random`.**
*Fact*: `/dev/urandom` is the preferred source of cryptographic seed material on Unix-like operating systems.
**`/dev/urandom` is a pseudo random number generator (PRNG), while `/dev/random` is a "true" random number generator.**
*Fact*: Both are essentially fed by the same CSPRNG (a cryptographically secure pseudorandom number generator). The minor differences between them have nothing whatsoever to do with "true" randomness. (See the section "The structure of Linux's random number generator".)
**`/dev/random` is always the better choice for cryptographic applications. Even if `/dev/urandom` were equally secure, we still shouldn't use it.**
*Fact*: `/dev/random` has a very nasty problem: it blocks. (See the section "What's wrong with blocking?") (LCTT translator's note: meaning requests are served one after another, each waiting for the previous one to complete.)
**But blocking is good! `/dev/random` only hands out as much randomness as the entropy the computer has gathered can support. `/dev/urandom` will keep spewing insecure random numbers at you even after the entropy is used up.**
*Fact*: That's a misconception. Even leaving aside how applications use the seed material downstream, the very notion of "using up the entropy pool" doesn't hold. Just 256 bits of entropy are enough to generate computationally secure random numbers for a very, very long time. (See the section "What about when the entropy pool runs low?")
And the crux of the problem is still to come: how does `/dev/random` even know *how much* entropy the system has available? Keep reading!
**But cryptographers are always talking about re-seeding. Doesn't that contradict the previous point?**
*Fact*: You've got a point! Sort of. Yes, the random number generator keeps re-seeding itself from the system's entropy state. But it does so (in part) for other reasons. (See the section "Re-seeding".)
Let me put it this way: I'm not saying that injecting new entropy is bad. More entropy is certainly better. I'm only saying that blocking when the entropy estimate is low is unnecessary.
**Fine, suppose everything you say is true. But the man pages for `/dev/(u)random` disagree with you! Are there even any experts who agree with all this?**
*Fact*: The man pages don't actually contradict what I'm saying. They may seem to say that `/dev/urandom` is insecure for cryptographic use, but once you understand the cryptographic jargon, you'll see that's not what they mean. (See the section "The man pages for random and urandom".)
The man pages do recommend `/dev/random` in some cases (which is fine by me, though by no means necessary), but they also recommend `/dev/urandom` for most "ordinary" cryptographic uses.
Although appealing to authority is generally frowned upon, in matters as serious as cryptography it's important to be in agreement with the experts.
So yes, there really are *experts* who agree with me on this point: `/dev/urandom` should be the preferred source of cryptographic randomness on UNIX-like operating systems. Obviously, it was their views that convinced me, not the other way around. (See the section "The right way".)
------
I'll try not to get too technical, but two points need to be made before we can get to the main argument.
First and foremost: *what is randomness*, or more precisely, what kind of randomness are we talking about? (See the section "True randomness".)
Another important point: I'm *not trying to lecture* anyone here. I wrote this article so that later on I can point people to it in discussions. Longer than 140 characters (LCTT translator's note: the length of a tweet). That way I don't have to repeat my arguments over and over again. And honing an argument into a full article is itself very helpful for future discussions. (See the section "Are you saying I'm stupid?!")
And I'm more than happy to hear differing views. But I don't think merely claiming "`/dev/urandom` is bad" is enough. You have to pinpoint exactly what the problems are and analyze them.
Absolutely not!
In fact, I myself believed "`/dev/urandom` is insecure" for years. And that's hardly our fault, because so many respected people repeated it to us on Usenet, in forums, and on Twitter. Even *the man pages* seem to say so. Who were we to dismiss such a convincing-sounding argument as "the entropy is too low"? (See the section "The man pages for random and urandom".)
The myth is so widespread not because people are stupid, but because anyone with some notion of entropy and cryptography will find it plausible. Our intuition seems to tell us the myth makes sense. Unfortunately, intuition is usually wrong in cryptography, and this is no exception.
### True randomness
What does it mean for random numbers to be "truly random"?
I don't want to get so complicated that this slides into philosophy. That kind of discussion easily leads nowhere, because everyone has their own model of randomness and the debate quickly becomes meaningless.
In my view, the "litmus test" for true randomness is quantum effects. A photon passing, or not passing, through a semi-transparent mirror. Or observing the decay of a radioactive particle. These are the closest things the real world has to true randomness. Of course, some people don't believe such processes are truly random, or that any randomness exists in this world at all. That's open to debate, and I won't go there.
Cryptographers generally sidestep this philosophical debate by not discussing what "true" randomness is. What they care about is unpredictability. As long as there is no way *whatsoever* to guess the next random number, it's fine. So when you're judging whether a random number is good for cryptographic use, in my view that's what matters most.
In any case, I don't much care about "philosophically secure" random numbers, which includes what other people call "true" random numbers.
### Two kinds of security, one of them useful
But let's take a step back and say you have one of those "truly" random numbers. What do you do with it next?
You print it out and hang it on your wall, to behold the beauty and harmony of the quantum universe? Awesome! I'm with you.
But wait, you say you want to *use* it? For cryptography? Well, that ruins everything, because things now get a little complicated.
The thing is, your truly random, quantum-blessed random numbers are about to be fed into less-than-ideal real-world algorithms.
Because almost none of the algorithms we use are information-theoretically secure. They "only" provide **computational security**. The only exceptions I can think of are Shamir secret sharing and the one-time pad (OTP). And while the former is a genuine exception (if you actually plan to use it), the latter is utterly impractical.
But all the celebrated cryptographic algorithms (AES, RSA, Diffie-Hellman, elliptic curves) and all the cryptographic packages (OpenSSL, GnuTLS, Keyczar, your operating system's crypto API) are only computationally secure.
So what's the difference? An information-theoretically secure algorithm is secure, period; all the others could in principle be broken by an attacker with unlimited computing power doing an exhaustive search. We still happily use them, because all the computers in the world combined couldn't crack them within the age of the universe, at least not for now. And that is the kind of "insecurity" this article is talking about.
Unless some clever person breaks the algorithm itself, needing far less computing power, an amount achievable today. That is every cryptographer's holy grail: breaking AES itself, breaking RSA itself, and so on.
So now we come to the lower layer: the random number generator, where you insisted on "true randomness" rather than "pseudo randomness". But a moment later your true random numbers are being fed into exactly the kind of pseudorandom algorithms you so despise!
The truth is, if our state-of-the-art hash algorithms were ever broken, or our state-of-the-art block ciphers were ever broken, the "philosophically insecure" random numbers you obtained wouldn't even matter, because you'd have no secure way to use them anyway.
So just feed your merely computationally secure algorithms computationally secure random numbers. In other words: use `/dev/urandom`.
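As a small illustration (my example, not from the original article), grabbing 256 bits from `/dev/urandom` is a shell one-liner:
```
# Read 32 bytes (256 bits) from the kernel CSPRNG and print them base64-encoded
$ head -c 32 /dev/urandom | base64
```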
![image: mythical structure of the kernel's random number generator][1]
"True randomness", flawed as it may be, enters the operating system, and its entropy is immediately added to an internal entropy counter. After "debiasing" and "whitening", it enters the kernel's entropy pool, from which `/dev/random` and `/dev/urandom` then produce random numbers.
The "true" random number generator, `/dev/random`, picks its random numbers straight from the pool if the entropy counter says enough is available for the requested amount; it emits the numbers and decreases the entropy counter. If there isn't enough, it blocks the program until enough entropy has entered the system.
![image: actual structure of the kernel's random number generator before Linux 4.8][2]
> This is a pretty rough simplification. In fact there isn't just one entropy pool, but three: a primary pool, one for `/dev/random`, and one for `/dev/urandom`, the latter two drawing entropy from the primary pool. All three pools have their own entropy counters, but the counters of the secondary pools (the latter two) stay close to zero, and "fresh" entropy flows over from the primary pool as needed. There is also a lot of mixing and feeding back into the system going on at the same time. All of this is far too detailed for this document, so we'll skip it.
Do you see the big difference? The CSPRNG is not running alongside the random number generator, topping up `/dev/urandom` when it needs output but the entropy is short. The CSPRNG is an integral component of the whole random number generation process. `/dev/random` never hands out pure, unadulterated randomness straight from the pool. Input from every randomness source is thoroughly mixed and hashed inside the CSPRNG before it ever becomes an actual random number emitted by `/dev/urandom` or `/dev/random`.
Another important difference is that there is no entropy *counting* here, only *estimation*. The amount of entropy a source gives you is not some well-defined number you can simply read off. You have to estimate it. Note that if your estimate is too optimistic, `/dev/random`'s most important property, handing out only as much randomness as the entropy allows, evaporates. Unfortunately, estimating the amount of entropy is hard.
The Linux kernel estimates entropy purely from event arrival times. Following a model, it uses polynomial interpolation to estimate how "surprising" an actual arrival time was. Whether this polynomial-interpolation approach is a good way to estimate entropy is itself an open question. So is the question of whether hardware can influence arrival times in some particular way. And the sampling rates of all kinds of hardware are an issue too, since they essentially dictate the granularity of the random arrival timestamps.
All in all, at least for now, the kernel's entropy estimate seems to be pretty good. Which is to say, it is conservative. Some people will argue about exactly how good it is, and that is beyond me. Even so, if you insist on never handing out random numbers without enough entropy, you might feel a little nervous at this point. I sleep just fine, because I don't care much about the entropy estimate.
And to drive the final point home: `/dev/random` and `/dev/urandom` are fed by the same CSPRNG. Only their behavior differs when their respective pools run out of entropy (according to some estimate): `/dev/random` blocks, `/dev/urandom` does not.
##### From Linux 4.8 onward
In Linux 4.8, the equivalence between `/dev/random` and `/dev/urandom` was given up. Now `/dev/urandom`'s output no longer comes from an entropy pool, but directly from the CSPRNG.
![image: actual structure of the kernel's random number generator from Linux 4.8 onward][3]
*We will soon see* why this is not a security problem. (See the section "CSPRNGs are fine".)
### What's wrong with blocking?
These are real problems. Blocking inherently reduces availability. In other words, your system doesn't do what you built it to do. Needless to say, that's bad. Why did you build it if it doesn't work?
> I've worked on safety-related systems in factory automation. Guess what the main cause of safety-system failures is? Operation issues. Plain and simple. Many safety procedures annoy the workers by taking too long or being too inconvenient. And people are very good at finding shortcuts to "solve" such problems.
But there's an even deeper problem: people don't like to be interrupted. They will find ways around the blocking, wiring weird things together just to make it work. People who know nothing about cryptography, at least ordinary people.
In the end, if things are too hard to use, your users will be pushed into doing things that lower the system's security, and you won't even know what they are.
It's easy to dismiss the importance of usability and the like. Security first, right? So rather than sacrifice security, better to be unavailable, unusable, and inconvenient?
That kind of either-or thinking is wrong. Blocking doesn't automatically make things secure. As we've seen, `/dev/urandom` gives you random numbers just as good, straight from a CSPRNG. Use it!
### CSPRNGs are fine
Now things sound bleak. If even the high-quality randomness of `/dev/random` comes out of a CSPRNG, how can we dare use it where high security is required?
It turns out that "looking random" is the basic requirement for most of our existing cryptographic building blocks. If you look at the output of a cryptographic hash, it must be indistinguishable from a random string for cryptographers to accept the algorithm. If you instantiate a block cipher, its output (without knowing the key) must likewise be indistinguishable from random data.
If anyone can break one of our encryption schemes more efficiently than by brute force, say by exploiting some weakness in a CSPRNG's pseudorandomness, then it's the same old story: everything is lost, and there's nothing left to discuss. Block ciphers, hashes, everything rests on the same mathematical foundations as CSPRNGs. So don't be afraid; it all comes down to the same thing.
### What about when the entropy pool runs low?
It doesn't matter.
The underlying cryptography is built on the premise that an attacker cannot predict the output, as long as there was enough randomness (entropy) at the very start. The lower bound for "enough" can be 256 bits; no more is needed.
Since we've been using the concept of "entropy" rather loosely, I quantify randomness here in "bits" and hope the reader won't mind the details too much. As we discussed earlier, the kernel's random number generator cannot even know precisely how much entropy enters the system. It only has an estimate. And nobody really knows how accurate that estimate is.
Let's get back to the man pages saying "use `/dev/random`". We already know that although `/dev/urandom` doesn't block, its random numbers come from the very same CSPRNG as `/dev/random`'s.
If you truly need information-theoretically secure random numbers (and you don't, believe me), only then could that be a reason to wait for enough entropy to enter the CSPRNG. And even then you couldn't use `/dev/random`.
The man pages are toxic, that's all there is to it. But at least they redeem themselves a little:
### The right way
The views in this article are admittedly a "minority" position on the internet. But ask a real cryptographer, and you'll be hard-pressed to find one who agrees with blocking `/dev/random`.
Take, for instance, the view of [Daniel Bernstein][5] (the famous djb):
>
> For cryptographers, this isn't even funny anymore.
Or the view of [Thomas Pornin][6], one of the most helpful people I've ever met on Stack Exchange:
> Simply said, yes. Expanded, the answer is still the same. The data generated by `/dev/urandom` is, one might say, completely indistinguishable from true randomness, at least with current technology. There is no point in using "better" randomness than what `/dev/urandom` provides, unless you're using one of the exceedingly rare "information-theoretically secure" encryption algorithms. That's certainly not your case, or you would have said so.
FreeBSD behaves more correctly: there, `/dev/random` and `/dev/urandom` are identical; at system startup `/dev/random` blocks until enough entropy is available, and then neither of them ever blocks again.
> In the meantime, Linux has implemented a new syscall, originally introduced by OpenBSD as `getentropy(2)` and called `getrandom(2)` on Linux. This syscall has exactly the correct behavior described above: it blocks until enough entropy is available, and never blocks again afterward. Of course it is a system call rather than a character device (LCTT translator's note: it doesn't live under `/dev/`), so it isn't as easy to reach from shell scripts or other scripting languages. This syscall has existed since Linux 3.17.
On Linux this whole issue isn't all that bad, because Linux distributions save some random numbers during the boot process (this happens after some entropy is already available, since the startup scripts don't run the very instant you press the power button) into a seed file that is read on the next boot. So every boot carries a little randomness over from the previous session.
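As an illustration (an assumption on my part: the exact path varies by distribution and init system), on a systemd-based system you can look at the saved seed:
```
# Typical location of the boot-time random seed on systemd-based distributions
$ ls -l /var/lib/systemd/random-seed
```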
Obviously that's not as good as writing a random seed in the shutdown scripts, since the system would clearly have had much more entropy to work with by then. But the obvious benefit is that it doesn't have to care whether the system shut down correctly, for instance if it crashed.
And it doesn't help you on the very first boot of a system either, but fortunately Linux system installers usually save a seed file as well, so it's mostly a non-issue.
Virtual machines are another layer of the problem, since users like to clone them or roll them back to an earlier state. In those cases the seed file can't help you at all.


[#]: collector: (lujun9972)
[#]: translator: (warmfrog)
[#]: reviewer: (wxy)
[#]: publisher: (wxy)
[#]: url: (https://linux.cn/article-10815-1.html)
[#]: subject: (How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?)
[#]: via: (https://www.2daygeek.com/check-monitor-disk-io-in-linux-using-iotop-iostat-command/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Monitor Disk I/O Activity Using iotop And iostat Commands In Linux?
======
Do you know what tools we can use to troubleshoot and monitor real-time disk activity in Linux? If [Linux system performance][1] becomes slow, we use the [top command][2] to check the system performance. It is used to check which processes have high utilization on the server; it's common for most Linux system administrators and widely used by them in the real world.
If you don't see much difference in the process output, you still have the option to check other things. I would suggest checking the `wa` status in the `top` output, because most of the time server performance degrades due to heavy I/O reads and writes on the disk. If it is high or fluctuating, that is probably the cause. So we need to check the I/O activity on the disk.
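For example (a quick sketch of that check using standard `top` options), you can grab one batch-mode sample and look at the `wa` value in the `%Cpu(s)` line:
```
# One non-interactive sample; "wa" in the %Cpu(s) line is the I/O-wait percentage
$ top -b -n 1 | head -5
```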
We can monitor the disk I/O statistics of all disks and file systems in Linux using the `iotop` and `iostat` commands.
### What is iotop?
`iotop` is a top-like utility for displaying real-time disk activity.
`iotop` watches I/O usage information output by the Linux kernel and displays a table of current I/O usage by processes or threads on the system.
It displays the I/O bandwidth read and written by each process/thread. It also displays the percentage of time the thread/process spent while swapping in and while waiting on I/O.
`Total DISK READ` and `Total DISK WRITE` represent the total read and write bandwidth between processes and kernel threads on one side, and the kernel block device subsystem on the other.
`Actual DISK READ` and `Actual DISK WRITE` represent the actual disk I/O bandwidth between the kernel block device subsystem and the underlying hardware (HDD, SSD, etc.).
### How to install iotop in Linux?
We can easily install it with the help of the package manager, since the package is available in the repositories of all Linux distributions.
For Fedora systems, use the [DNF command][3] to install `iotop`.
```
$ sudo dnf install iotop
```
For Debian/Ubuntu systems, use the [APT-GET command][4] or [APT command][5] to install `iotop`.
```
$ sudo apt install iotop
```
For Arch Linux based systems, use the [Pacman command][6] to install `iotop`.
```
$ sudo pacman -S iotop
```
For RHEL/CentOS systems, use the [YUM command][7] to install `iotop`.
```
$ sudo yum install iotop
```
For openSUSE Leap systems, use the [Zypper command][8] to install `iotop`.
```
$ sudo zypper install iotop
```
### How to monitor disk I/O activity/statistics in Linux using the iotop command?
The `iotop` command has many parameters to check the varying disk I/O information:
```
# iotop
```
![][10]
If you want to check which process is actually doing I/O, run the `iotop` command with the `-o` or `--only` option.
```
# iotop --only
```
![][11]
Details:
* `IO`: It shows the I/O utilization of each process, which includes disk and swap.
* `SWAPIN`: It shows only the swap usage of each process.
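For unattended logging (a sketch using standard `iotop` options, not a command from the original article), batch mode can record only the processes that are actually doing I/O:
```
# Batch mode (-b), only active I/O (-o), 3 samples (-n 3), 5 seconds apart (-d 5)
$ sudo iotop -b -o -n 3 -d 5
```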
### What is iostat?
`iostat` is used to report Central Processing Unit (CPU) statistics and input/output statistics for devices and partitions.
The `iostat` command monitors system input/output device load by observing the time the devices are active relative to their average transfer rates.
The reports generated by the `iostat` command can be used to tune the system configuration to better balance the input/output load between physical disks.
All statistics are reported each time the `iostat` command is run. The report consists of a CPU header row followed by a row of CPU statistics.
On multiprocessor systems, CPU statistics are calculated system-wide as averages among all processors. A device header row is displayed, followed by a line of statistics for each configured device.
The `iostat` command generates two types of reports: the CPU utilization report and the device utilization report.
### How to install iostat in Linux?
The `iostat` tool is part of the `sysstat` package, so we can easily install it with the help of the package manager, since the package is available in the repositories of all Linux distributions.
For Fedora systems, use the [DNF command][3] to install `sysstat`.
```
$ sudo dnf install sysstat
```
For Debian/Ubuntu systems, use the [APT-GET command][4] or [APT command][5] to install `sysstat`.
```
$ sudo apt install sysstat
```
For Arch Linux based systems, use the [Pacman command][6] to install `sysstat`.
```
$ sudo pacman -S sysstat
```
For RHEL/CentOS systems, use the [YUM command][7] to install `sysstat`.
```
$ sudo yum install sysstat
```
For openSUSE Leap systems, use the [Zypper command][8] to install `sysstat`.
```
$ sudo zypper install sysstat
```
### How to monitor disk I/O activity/statistics in Linux using the iostat command?
There are many parameters in the `iostat` command to check the varying statistics about I/O and CPU.
Run the `iostat` command without any parameters to see the complete system statistics.
```
# iostat
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0
```
Run the `iostat` command with the `-d` option to see the I/O statistics of all devices.
```
# iostat -d
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0
```
Run the `iostat` command with the `-p` option to see the I/O statistics of all devices and partitions.
```
# iostat -p
loop1 0.00 0.00 0.00 0.00 1093 0 0
loop2 0.00 0.00 0.00 0.00 1077 0 0
```
Run the `iostat` command with the `-x` option to display detailed I/O statistics for all devices.
```
# iostat -x
loop1 0.00 0.00 0.00 0.00 0.40 12.86 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
loop2 0.00 0.00 0.00 0.00 0.38 19.58 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
```
Run the `iostat` command with the `-p [Device_Name]` option to see the I/O statistics of a specific device and its partitions.
```
# iostat -p [Device_Name]
sda2 0.18 6.76 80.21 0.00 3112916 36924 0
sda1 0.00 0.01 0.00 0.00 3224 0 0
```
Run the `iostat` command with the `-m` option to view the statistics of all devices in MB instead of KB. The output is shown in KB by default.
```
# iostat -m
loop1 0.00 0.00 0.00 0.00 1 0 0
loop2 0.00 0.00 0.00 0.00 1 0 0
```
Run the `iostat` command with a specific interval using the following format. In this example, we are going to capture two reports at five-second intervals.
```
# iostat [Interval] [Number Of Reports]
loop1 0.00 0.00 0.00 0.00 0 0 0
loop2 0.00 0.00 0.00 0.00 0 0 0
```
Run the `iostat` command with the `-N` option to view the LVM disk I/O statistics report.
```
# iostat -N
sdc 0.01 0.12 0.00 2108 0
2g-2gvol1 0.00 0.07 0.00 1204 0
```
Run the `nfsiostat` command to view the I/O statistics of the Network File System (NFS).
```
# nfsiostat
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/check-monitor-disk-io-in-linux-using-iotop-iostat-command/
Author: [Magesh Maruthamuthu][a]
Selected by: [lujun9972][b]
Translator: [warmfrog](https://github.com/warmfrog)
Proofreader: [wxy](https://github.com/wxy)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).


[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use autofs to mount NFS shares)
[#]: via: (https://opensource.com/article/18/6/using-autofs-mount-nfs-shares)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
How to use autofs to mount NFS shares
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
Most Linux file systems are mounted at boot and remain mounted while the system is running. This is also true of any remote file systems that have been configured in the `fstab` file. However, there may be times when you prefer to have a remote file system mount only on demand—for example, to boost performance by reducing network bandwidth usage, or to hide or obfuscate certain directories for security reasons. The package [autofs][1] provides this feature. In this article, I'll describe how to get a basic automount configuration up and running.
First, a few assumptions: Assume the NFS server named `tree.mydatacenter.net` is up and running. Also assume a data directory named `ourfiles` and two user directories, for Carl and Sarah, are being shared by this server.
A few best practices will make things work a bit better: It is a good idea to use the same user ID for your users on the server and any client workstations where they have an account. Also, your workstations and server should have the same domain name. Checking the relevant configuration files should confirm this.
```
alan@workstation1:~$ sudo getent passwd carl sarah
[sudo] password for alan:
carl:x:1020:1020:Carl,,,:/home/carl:/bin/bash
sarah:x:1021:1021:Sarah,,,:/home/sarah:/bin/bash
alan@workstation1:~$ sudo getent hosts
127.0.0.1       localhost
127.0.1.1       workstation1.mydatacenter.net workstation1
10.10.1.5       tree.mydatacenter.net tree
```
As you can see, both the client workstation and the NFS server are configured in the `hosts` file. I'm assuming a basic home or even small office network that might lack proper internal domain name service (i.e., DNS).
### Install the packages
You need to install only two packages: `nfs-common` for NFS client functions, and `autofs` to provide the automount function.
```
alan@workstation1:~$ sudo apt-get install nfs-common autofs
```
You can verify that the autofs files have been placed in the `etc` directory:
```
alan@workstation1:~$ cd /etc; ll auto*
-rw-r--r-- 1 root root 12596 Nov 19  2015 autofs.conf
-rw-r--r-- 1 root root   857 Mar 10  2017 auto.master
-rw-r--r-- 1 root root   708 Jul  6  2017 auto.misc
-rwxr-xr-x 1 root root  1039 Nov 19  2015 auto.net*
-rwxr-xr-x 1 root root  2191 Nov 19  2015 auto.smb*
alan@workstation1:/etc$
```
### Configure autofs
Now you need to edit several of these files and add the file `auto.home`. First, add the following two lines to the file `auto.master`:
```
/mnt/tree  /etc/auto.misc
/home/tree  /etc/auto.home
```
Each line begins with the directory where the NFS shares will be mounted. Go ahead and create those directories:
```
alan@workstation1:/etc$ sudo mkdir /mnt/tree /home/tree
```
Second, add the following line to the file `auto.misc`:
```
ourfiles        -fstype=nfs     tree:/share/ourfiles
```
This line instructs autofs to mount the `ourfiles` share at the location matched in the `auto.master` file for `auto.misc`. As shown above, these files will be available in the directory `/mnt/tree/ourfiles`.
Third, create the file `auto.home` with the following line:
```
*               -fstype=nfs     tree:/home/&
```
This line instructs autofs to mount the user's share at the location matched in the `auto.master` file for `auto.home`. In this case, Carl and Sarah's files will be available in the directories `/home/tree/carl` or `/home/tree/sarah`, respectively. The asterisk (referred to as a wildcard) makes it possible for each user's share to be automatically mounted when they log in. The ampersand also works as a wildcard representing the user's directory on the server side. Their home directory should be mapped accordingly in the `passwd` file. This doesn't have to be done if you prefer a local home directory; instead, the user could use this as simple remote storage for specific files.
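To make the wildcard mapping concrete (an illustrative expansion of mine, not an extra file you need to create):
```
# Given the auto.home map line:   *   -fstype=nfs   tree:/home/&
# a lookup of /home/tree/carl behaves as if the map contained:
#   carl   -fstype=nfs   tree:/home/carl
```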
Finally, restart the `autofs` daemon so it will recognize and load these configuration file changes.
```
alan@workstation1:/etc$ sudo service autofs restart
```
### Testing autofs
If you change to one of the directories listed in the file `auto.master` and run the `ls` command, you won't see anything immediately. For example, change directory (`cd`) to `/mnt/tree`. At first, the output of `ls` won't show anything, but after running `cd ourfiles`, the `ourfiles` share directory will be automatically mounted. The `cd` command will also be executed and you will be placed into the newly mounted directory.
```
carl@workstation1:~$ cd /mnt/tree
carl@workstation1:/mnt/tree$ ls
carl@workstation1:/mnt/tree$ cd ourfiles
carl@workstation1:/mnt/tree/ourfiles$
```
To further confirm that things are working, the `mount` command will display the details of the mounted share.
```
carl@workstation1:~$ mount
tree:/mnt/share/ourfiles on /mnt/tree/ourfiles type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.1.22,local_lock=none,addr=10.10.1.5)
```
The `/home/tree` directory will work the same way for Carl and Sarah.
I find it useful to bookmark these directories in my file manager for quicker access.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares
Author: [Alan Formy-Duval][a]
Selected by: [lujun9972](https://github.com/lujun9972)
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]:https://opensource.com/users/alanfdoss
[1]:https://wiki.archlinux.org/index.php/autofs


[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )


[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Automate backups with restic and systemd)
[#]: via: (https://fedoramagazine.org/automate-backups-with-restic-and-systemd/)
[#]: author: (Link Dupont https://fedoramagazine.org/author/linkdupont/)
Automate backups with restic and systemd
======
![][1]
Timely backups are important. So much so that [backing up software][2] is a common topic of discussion, even [here on the Fedora Magazine][3]. This article demonstrates how to automate backups with **restic** using only systemd unit files.
For an introduction to restic, be sure to check out our article [Use restic on Fedora for encrypted backups][4]. Then read on for more details.
Two systemd services are required to run in order to automate taking snapshots and keeping data pruned. The first service runs the _backup_ command at a regular frequency. The second service takes care of data pruning.
If you're not familiar with systemd at all, there's never been a better time to learn. Check out [the series on systemd here at the Magazine][5], starting with this primer on unit files:
> [systemd unit file basics][6]
If you haven't installed restic already, note it's in the official Fedora repositories. To install use this command [with sudo][7]:
```
$ sudo dnf install restic
```
### Backup
First, create the _~/.config/systemd/user/restic-backup.service_ file. Copy and paste the text below into the file for best results.
```
[Unit]
Description=Restic backup service
[Service]
Type=oneshot
ExecStart=restic backup --verbose --one-file-system --tag systemd.timer $BACKUP_EXCLUDES $BACKUP_PATHS
ExecStartPost=restic forget --verbose --tag systemd.timer --group-by "paths,tags" --keep-daily $RETENTION_DAYS --keep-weekly $RETENTION_WEEKS --keep-monthly $RETENTION_MONTHS --keep-yearly $RETENTION_YEARS
EnvironmentFile=%h/.config/restic-backup.conf
```
This service references an environment file in order to load secrets (such as _RESTIC_PASSWORD_ ). Create the _~/.config/restic-backup.conf_ file. Copy and paste the content below for best results. This example uses BackBlaze B2 buckets. Adjust the ID, key, repository, and password values accordingly.
```
BACKUP_PATHS="/home/rupert"
BACKUP_EXCLUDES="--exclude-file /home/rupert/.restic_excludes --exclude-if-present .exclude_from_backup"
RETENTION_DAYS=7
RETENTION_WEEKS=4
RETENTION_MONTHS=6
RETENTION_YEARS=3
B2_ACCOUNT_ID=XXXXXXXXXXXXXXXXXXXXXXXXX
B2_ACCOUNT_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
RESTIC_REPOSITORY=b2:XXXXXXXXXXXXXXXXXX:/
RESTIC_PASSWORD=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
```
Now that the service is installed, reload systemd: _systemctl --user daemon-reload_. Try running the service manually to create a backup: _systemctl --user start restic-backup_.
Because the service is a _oneshot_, it will run once and exit. After verifying that the service runs and creates snapshots as desired, set up a timer to run this service regularly. For example, to run the _restic-backup.service_ daily, create _~/.config/systemd/user/restic-backup.timer_ as follows. Again, copy and paste this text:
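One way to verify the snapshots (a sketch, assuming you reuse the same environment file the service loads; _restic snapshots_ is restic's standard listing command):
```
# Export the same variables the service uses, then list existing snapshots
$ set -a; source ~/.config/restic-backup.conf; set +a
$ restic snapshots
```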
```
[Unit]
Description=Backup with restic daily
[Timer]
OnCalendar=daily
Persistent=true
[Install]
WantedBy=timers.target
```
Enable it by running this command:
```
$ systemctl --user enable --now restic-backup.timer
```
### Prune
While the main service runs the _forget_ command to only keep snapshots within the keep policy, the data is not actually removed from the restic repository. The _prune_ command inspects the repository and current snapshots, and deletes any data not associated with a snapshot. Because _prune_ can be a time-consuming process, it is not necessary to run every time a backup is run. This is the perfect scenario for a second service and timer. First, create the file _~/.config/systemd/user/restic-prune.service_ by copying and pasting this text:
```
[Unit]
Description=Restic backup service (data pruning)
[Service]
Type=oneshot
ExecStart=restic prune
EnvironmentFile=%h/.config/restic-backup.conf
```
Similarly to the main _restic-backup.service_ , _restic-prune_ is a oneshot service and can be run manually. Once the service has been set up, create and enable a corresponding timer at _~/.config/systemd/user/restic-prune.timer_ :
```
[Unit]
Description=Prune data from the restic repository monthly
[Timer]
OnCalendar=monthly
Persistent=true
[Install]
WantedBy=timers.target
```
That's it! Restic will now run daily and prune data monthly.
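To confirm both timers are scheduled (my own check, not part of the original steps):
```
# Show next and last trigger times for the user timers
$ systemctl --user list-timers restic-backup.timer restic-prune.timer
```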
* * *
_Photo by [Samuel Zeller][8] on [Unsplash][9]._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/automate-backups-with-restic-and-systemd/
Author: [Link Dupont][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://fedoramagazine.org/author/linkdupont/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/restic-systemd-816x345.jpg
[2]: https://restic.net/
[3]: https://fedoramagazine.org/?s=backup
[4]: https://fedoramagazine.org/use-restic-encrypted-backups/
[5]: https://fedoramagazine.org/series/systemd-series/
[6]: https://fedoramagazine.org/systemd-getting-a-grip-on-units/
[7]: https://fedoramagazine.org/howto-use-sudo/
[8]: https://unsplash.com/photos/JuFcQxgCXwA?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[9]: https://unsplash.com/search/photos/archive?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText


[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Awk utility in Fedora)
[#]: via: (https://fedoramagazine.org/awk-utility-in-fedora/)
[#]: author: (Stephen Snow https://fedoramagazine.org/author/jakfrost/)
Awk utility in Fedora
======
![][1]
Fedora provides _awk_ as part of its default installation, in all its editions, including the immutable ones like Silverblue. But you may be asking, what is _awk_ and why would you need it?
_Awk_ is a data driven programming language that acts when it matches a pattern. On Fedora, and most other distributions, GNU _awk_ or _gawk_ is used. Read on for more about this language and how to use it.
### A brief history of awk
_Awk_ began at Bell Labs in 1977. Its name is an acronym from the initials of the designers: Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan.
> The specification for _awk_ in the POSIX Command Language and Utilities standard further clarified the language. Both the _gawk_ designers and the original _awk_ designers at Bell Laboratories provided feedback for the POSIX specification.
>
> From [The GNU Awk User's Guide][2]
For a more in-depth look at how _awk/gawk_ ended up being as powerful and useful as it is, follow the link above. Numerous individuals have contributed to the current state of _gawk_. Among those are:
* Arnold Robbins and David Trueman, the creators of _gawk_
* Michael Brennan, the creator of _mawk_ , which later was merged with _gawk_
* Jurgen Kahrs, who added networking capabilities to _gawk_ in 1997
* John Hague, who rewrote the _gawk_ internals and added an _awk_ -level debugger in 2011
### Using awk
The following sections show various ways of using _awk_ in Fedora.
#### At the command line
The simplest way to invoke _awk_ is at the command line. You can search a text file for a particular pattern, and if found, print out the line(s) of the file that match the pattern anywhere. As an example, use _cat_ to take a look at the command history file in your home directory:
```
$ cat ~/.bash_history
```
There are probably many lines scrolling by right now.
_Awk_ helps with this type of file quite easily. Instead of printing the entire file out to the terminal like _cat_ , you can use _awk_ to find something of specific interest. For this example, type the following at the command line if you're running a standard Fedora edition:
```
$ awk '/dnf/' ~/.bash_history
```
If you're running Silverblue, try this instead:
```
$ awk '/rpm-ostree/' ~/.bash_history
```
In both cases, more data likely appears than what you really want. That's no problem for _awk_ since it can accept regular expressions. Using the previous example, you can change the pattern to more closely match search requirements of wanting to know about installs only. Try changing the search pattern to one of these:
```
$ awk '/rpm-ostree install/' ~/.bash_history
$ awk '/dnf install/' ~/.bash_history
```
All the entries of your bash command line history that have the specified pattern at any position along the line appear. Awk works on one line of a data file at a time. It matches a pattern, then performs an action, then moves to the next line, until the end of file (EOF) is reached.
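That pattern-then-action structure can be made explicit (a sketch of mine; _NR_ is awk's built-in record counter):
```
# The pattern /dnf install/ selects lines; the action prints the line number and text
$ awk '/dnf install/ { print NR ": " $0 }' ~/.bash_history
```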
#### From an _awk_ program
Using awk at the command line as above is not much different than piping output to _grep_ , like this:
```
$ cat .bash_history | grep 'dnf install'
```
The end result of printing to standard output ( _stdout_ ) is the same with both methods.
Awk is a programming language, and the command _awk_ is an interpreter of that language. The real power and flexibility of _awk_ is that you can make programs with it, and combine them with shell scripts to create even more powerful programs. For more feature-rich development with _awk_ , you can also incorporate C or C++ code using [Dynamic-Extensions][3].
Next, to show the power of _awk_ , let's make a couple of program files to print the header and draw five numbers for the first row of a bingo card. To do this we'll create two awk program files.
The first file prints out the header of the bingo card. For this example it is called _bingo-title.awk_. Use your favorite editor to save this text as that file name:
```
BEGIN {
print "B\tI\tN\tG\tO"
}
```
Now the title program is ready. You could try it out with this command:
```
$ awk -f bingo-title.awk
```
The program prints the word BINGO, with a tab space ( _\t_ ) between the characters. For the number selection, let's use one of awk's built-in numeric functions called _rand()_ and two of the control statements, _for_ and _switch_. (Except the editor changed my program, so no switch statement is used this time.)
The title of the second awk program is _bingo-num.awk_. Enter the following into your favorite editor and save with that file name:
```
@include "bingo-title.awk"
BEGIN {
for (i = 1; i <= 5; i++) {
b = int(rand() * 15) + (15*(i-1))
printf "%s\t", b
}
print
}
```
The _@include_ statement in the file tells the interpreter to process the included file first. In this case the interpreter processes the _bingo-title.awk_ file so the title prints out first.
#### Running the test program
Now enter the command to pick a row of bingo numbers:
```
$ awk -f bingo-num.awk
```
Output appears similar to the following. Note that the _rand()_ function in _awk_ is not ideal for truly random numbers. It's used here only for example purposes.
```
$ awk -f bingo-num.awk
B I N G O
13 23 34 53 71
```
In the example, we created two programs with only beginning sections that used actions to manipulate data generated from within the awk program. In order to satisfy the rules of Bingo, more work is needed to achieve the desired results. The reader is encouraged to fix the programs so they can reliably pick bingo numbers; a look at the awk function _srand()_ may suggest how that could be done.
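One possible direction (a hedged sketch of mine, not the article's solution): seed the generator once per run with _srand()_ so each invocation draws different numbers.
```
# srand() with no argument seeds from the current time, so runs differ;
# the "+ 1" keeps each column in the conventional 1-75 bingo range
$ awk 'BEGIN { srand(); for (i = 1; i <= 5; i++) printf "%d\t", int(rand() * 15) + 15 * (i - 1) + 1; print "" }'
```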
### Final examples
_Awk_ can be useful even for mundane daily search tasks that you encounter, like listing all _flatpaks_ on the _Flathub_ repository from _org.gnome_ (provided you have the Flathub repository set up). The command to do that would be:
```
$ flatpak remote-ls flathub --system | awk /org.gnome/
```
A listing appears that shows all output from _remote-ls_ that matches the _org.gnome_ pattern. To see flatpaks already installed from org.gnome, enter this command:
```
$ flatpak list --system | awk /org.gnome/
```
Awk is a powerful and flexible programming language that fills a niche with text file manipulation exceedingly well.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/awk-utility-in-fedora/
Author: [Stephen Snow][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://fedoramagazine.org/author/jakfrost/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/awk-816x345.jpg
[2]: https://www.gnu.org/software/gawk/manual/gawk.html#Foreword3
[3]: https://www.gnu.org/software/gawk/manual/gawk.html#Dynamic-Extensions


[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Upgrading Fedora 29 to Fedora 30)
[#]: via: (https://fedoramagazine.org/upgrading-fedora-29-to-fedora-30/)
[#]: author: (Ryan Lerch https://fedoramagazine.org/author/ryanlerch/)
Upgrading Fedora 29 to Fedora 30
======
![][1]
Fedora 30 [is available now][2]. You'll likely want to upgrade your system to the latest version of Fedora. Fedora Workstation has a graphical upgrade method. Alternatively, Fedora offers a command-line method for upgrading Fedora 29 to Fedora 30.
### Upgrading Fedora 29 Workstation to Fedora 30
Soon after release time, a notification appears to tell you an upgrade is available. You can click the notification to launch the **GNOME Software** app. Or you can choose Software from GNOME Shell.
Choose the _Updates_ tab in GNOME Software and you should see a screen informing you that Fedora 30 is Now Available.
If you don't see anything on this screen, try using the reload button at the top left. It may take some time after release for all systems to be able to see an upgrade available.
Choose _Download_ to fetch the upgrade packages. You can continue working until you reach a stopping point, and the download is complete. Then use GNOME Software to restart your system and apply the upgrade. Upgrading takes time, so you may want to grab a coffee and come back to the system later.
### Using the command line
If you've upgraded from past Fedora releases, you are likely familiar with the _dnf upgrade_ plugin. This method is the recommended and supported way to upgrade from Fedora 29 to Fedora 30. Using this plugin will make your upgrade to Fedora 30 simple and easy.
##### 1. Update software and back up your system
Before you do anything, you will want to make sure you have the latest software for Fedora 29 before beginning the upgrade process. To update your software, use _GNOME Software_ or enter the following command in a terminal.
```
sudo dnf upgrade --refresh
```
Additionally, make sure you back up your system before proceeding. For help with taking a backup, see [the backup series][3] on the Fedora Magazine.
##### 2. Install the DNF plugin
Next, open a terminal and type the following command to install the plugin:
```
sudo dnf install dnf-plugin-system-upgrade
```
##### 3. Start the update with DNF
Now that your system is up-to-date, backed up, and you have the DNF plugin installed, you can begin the upgrade by using the following command in a terminal:
```
sudo dnf system-upgrade download --releasever=30
```
This command will begin downloading all of the upgrades for your machine locally to prepare for the upgrade. If you have issues when upgrading because of packages without updates, broken dependencies, or retired packages, add the _--allowerasing_ flag when typing the above command. This will allow DNF to remove packages that may be blocking your system upgrade.
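In that case the download step would look like this (the same command as above, plus the flag):
```
sudo dnf system-upgrade download --releasever=30 --allowerasing
```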
##### 4. Reboot and upgrade
Once the previous command finishes downloading all of the upgrades, your system will be ready for rebooting. To boot your system into the upgrade process, type the following command in a terminal:
```
sudo dnf system-upgrade reboot
```
Your system will restart after this. Many releases ago, the _fedup_ tool would create a new option on the kernel selection / boot screen. With the _dnf-plugin-system-upgrade_ package, your system reboots into the current kernel installed for Fedora 29; this is normal. Shortly after the kernel selection screen, your system begins the upgrade process.
Now might be a good time for a coffee break! Once it finishes, your system will restart and you'll be able to log in to your newly upgraded Fedora 30 system.
![][4]
### Resolving upgrade problems
On occasion, there may be unexpected issues when you upgrade your system. If you experience any issues, please visit the [DNF system upgrade wiki page][5] for more information on troubleshooting in the event of a problem.
If you are having issues upgrading and have third-party repositories installed on your system, you may need to disable these repositories while you are upgrading. For support with repositories not provided by Fedora, please contact the providers of the repositories.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/upgrading-fedora-29-to-fedora-30/
Author: [Ryan Lerch][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://fedoramagazine.org/author/ryanlerch/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/29-30-816x345.jpg
[2]: https://fedoramagazine.org/announcing-fedora-30/
[3]: https://fedoramagazine.org/taking-smart-backups-duplicity/
[4]: https://cdn.fedoramagazine.org/wp-content/uploads/2016/06/Screenshot_f23-ws-upgrade-test_2016-06-10_110906-1024x768.png
[5]: https://fedoraproject.org/wiki/DNF_system_upgrade#Resolving_post-upgrade_issues


[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (3 apps to manage personal finances in Fedora)
[#]: via: (https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/)
[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/)
3 apps to manage personal finances in Fedora
======
![][1]
There are numerous services available on the web for managing your personal finances. Although they may be convenient, they also often mean leaving your most valuable personal data with a company you can't monitor. Some people are comfortable with this level of trust.
Whether you are or not, you might be interested in an app you can maintain on your own system. This means your data never has to leave your own computer if you don't want it to. One of these three apps might be what you're looking for.
### HomeBank
HomeBank is a fully featured way to manage multiple accounts. It's easy to set up and keep updated. It has multiple ways to categorize and graph income and liabilities so you can see where your money goes. It's available through the official Fedora repositories.
![A simple account set up in HomeBank with a few transactions.][2]
To install HomeBank, open the _Software_ app, search for _HomeBank_ , and select the app. Then click _Install_ to add it to your system. HomeBank is also available via a Flatpak.
### KMyMoney
The KMyMoney app is a mature app that has been around for a long while. It has a robust set of features to help you manage multiple accounts, including assets, liabilities, taxes, and more. KMyMoney includes a full set of tools for managing investments and making forecasts. It also sports a huge set of reports for seeing how your money is doing.
![A subset of the many reports available in KMyMoney.][3]
To install, use a software center app, or use the command line:
```
$ sudo dnf install kmymoney
```
### GnuCash
One of the most venerable free GUI apps for personal finance is GnuCash. GnuCash is not just for personal finances. It also has functions for managing income, assets, and liabilities for a business. That doesn't mean you can't use it for managing just your own accounts. Check out [the online tutorial and guide][4] to get started.
![Checking account records shown in GnuCash.][5]
Open the _Software_ app, search for _GnuCash_ , and select the app. Then click _Install_ to add it to your system. Or use _dnf install_ as above to install the _gnucash_ package.
It's now available via Flathub, which makes installation easy. If you don't have Flathub support, check out [this article on the Fedora Magazine][6] for how to use it. Then you can also use the _flatpak install GnuCash_ command with a terminal.
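Spelled out (assuming the Flathub remote is configured, and assuming the app ID on Flathub is _org.gnucash.GnuCash_, which you should verify), the command looks like:
```
$ flatpak install flathub org.gnucash.GnuCash
```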
* * *
_Photo by [Fabian Blank][7] on [Unsplash][8]._
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/3-apps-to-manage-personal-finances-in-fedora/
Author: [Paul W. Frields][a]
Selected by: [lujun9972][b]
Translator: [译者ID](https://github.com/译者ID)
Proofreader: [校对者ID](https://github.com/校对者ID)
This article was originally compiled by [LCTT](https://github.com/LCTT/TranslateProject) and proudly presented by [Linux中国](https://linux.cn/).
[a]: https://fedoramagazine.org/author/pfrields/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/personal-finance-3-apps-816x345.jpg
[2]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-16-16-1024x637.png
[3]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-27-10-1-1024x649.png
[4]: https://www.gnucash.org/viewdoc.phtml?rev=3&lang=C&doc=guide
[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/Screenshot-from-2019-04-28-16-41-27-1024x631.png
[6]: https://fedoramagazine.org/install-flathub-apps-fedora/
[7]: https://unsplash.com/photos/pElSkGRA2NU?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText
[8]: https://unsplash.com/search/photos/money?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText


[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Mirror your System Drive using Software RAID)
[#]: via: (https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/)
[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/)
Mirror your System Drive using Software RAID
======
![][1]
Nothing lasts forever. When it comes to the hardware in your PC, most of it can easily be replaced. There is, however, one special-case hardware component in your PC that is not as easy to replace as the rest — your hard disk drive.
### Drive Mirroring
Your hard drive stores your personal data. Some of your data can be backed up automatically by scheduled backup jobs. But those jobs scan the files to be backed up for changes, and trying to scan an entire drive would be very resource intensive. Also, anything that you've changed since your last backup will be lost if your drive fails. [Drive mirroring][2] is a better way to maintain a secondary copy of your entire hard drive. With drive mirroring, a secondary copy of _all the data_ on your hard drive is maintained _in real time_.
An added benefit of live mirroring your hard drive to a secondary hard drive is that it can [increase your computer's performance][3]. Because disk I/O is one of your computer's main performance [bottlenecks][4], the performance improvement can be quite significant.
Note that a mirror is not a backup. It only protects your data from being lost if one of your physical drives fails. Types of failures that drive mirroring, by itself, does not protect against include:
* [File System Corruption][5]
* [Bit Rot][6]
* Accidental File Deletion
* Simultaneous Failure of all Mirrored Drives (highly unlikely)
Some of the above can be addressed by other file system features that can be used in conjunction with drive mirroring. File system features that address the above types of failures include:
* Using a [Journaling][7] or [Log-Structured][8] file system
* Using [Checksums][9] ([ZFS][10] , for example, does this automatically and transparently)
* Using [Snapshots][11]
* Using [BCVs][12]
This guide will demonstrate one method of mirroring your system drive using the Multiple Disk and Device Administration (mdadm) toolset. Just for fun, this guide will show how to do the conversion without using any extra boot media (CDs, USB drives, etc). For more about the concepts and terminology related to the multiple device driver, you can skim the _md_ man page:
```
$ man md
```
### The Procedure
1. **Use** [**sgdisk**][13] **to (re)partition the _extra_ drive that you have added to your computer** :
```
$ sudo -i
# MY_DISK_1=/dev/sdb
# sgdisk --zap-all $MY_DISK_1
# test -d /sys/firmware/efi/efivars || sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub_1 $MY_DISK_1
# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot_1 $MY_DISK_1
# sgdisk -n 0:0:+4GiB -t 0:fd00 -c 0:swap_1 $MY_DISK_1
# sgdisk -n 0:0:0 -t 0:fd00 -c 0:root_1 $MY_DISK_1
```
If the drive that you will be using for the second half of the mirror in step 12 is smaller than this drive, then you will need to adjust down the size of the last partition so that the total size of all the partitions is not greater than the size of your second drive.
A few of the commands in this guide are prefixed with a test for the existence of an _efivars_ directory. This is necessary because those commands are slightly different depending on whether your computer is BIOS-based or UEFI-based.
2. **Use** [**mdadm**][14] **to create RAID devices that use the new partitions to store their data** :
```
# mdadm --create /dev/md/boot --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/boot_1 missing
# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/swap_1 missing
# mdadm --create /dev/md/root --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/root_1 missing
# cat << END > /etc/mdadm.conf
MAILADDR root
AUTO +all
DEVICE partitions
END
# mdadm --detail --scan >> /etc/mdadm.conf
```
The _missing_ parameter tells mdadm to create an array with a missing member. You will add the other half of the mirror in step 14.
You should configure [sendmail][15] so you will be notified if a drive fails.
You can configure [Evolution][16] to [monitor a local mail spool][17].
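At this point you can sanity-check the arrays (an optional check of my own, not one of the numbered steps); each mirror should show up as degraded, with one member missing:
```
# cat /proc/mdstat
# mdadm --detail /dev/md/root
```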
3. **Use** [**dracut**][18] **to update the initramfs** :
```
# dracut -f --add mdraid --add-drivers xfs
```
Dracut will include the /etc/mdadm.conf file you created in the previous section in your initramfs _unless_ you build your initramfs with the _hostonly_ option set to _no_. If you build your initramfs with the hostonly option set to no, then you should either manually include the /etc/mdadm.conf file, manually specify the UUIDs of the RAID arrays to assemble at boot time with the _rd.md.uuid_ kernel parameter, or specify the _rd.auto_ kernel parameter to have all RAID arrays automatically assembled and started at boot time. This guide will demonstrate the _rd.auto_ option since it is the most generic.
4. **Format the RAID devices** :
```
# mkfs -t vfat /dev/md/boot
# mkswap /dev/md/swap
# mkfs -t xfs /dev/md/root
```
The new [Boot Loader Specification][19] states “if the OS is installed on a disk with GPT disk label, and no ESP partition exists yet, a new suitably sized (let's say 500MB) ESP should be created and should be used as $BOOT” and “$BOOT must be a VFAT (16 or 32) file system”.
5. **Reboot and set the _rd.auto_ , _rd.break_ and _single_ kernel parameters** :
```
# reboot
```
You may need to [set your root password][20] before rebooting so that you can get into _single-user mode_ in step 7.
See “[Making Temporary Changes to a GRUB 2 Menu][21]” for directions on how to set kernel parameters on computers that use the GRUB 2 boot loader.
6. **Use** [**the dracut shell**][18] **to copy the root file system** :
```
# mkdir /newroot
# mount /dev/md/root /newroot
# shopt -s dotglob
# cp -ax /sysroot/* /newroot
# rm -rf /newroot/boot/*
# umount /newroot
# exit
```
The _dotglob_ flag is set for this bash session so that the [wildcard character][22] will match hidden files.
Files are removed from the _boot_ directory because they will be copied to a separate partition in the next step.
This copy operation is being done from the dracut shell to ensure that no processes are accessing the files while they are being copied.
7. **Use _single-user mode_ to copy the non-root file systems** :
```
# mkdir /newroot
# mount /dev/md/root /newroot
# mount /dev/md/boot /newroot/boot
# shopt -s dotglob
# cp -Lr /boot/* /newroot/boot
# test -d /newroot/boot/efi/EFI && mv /newroot/boot/efi/EFI/* /newroot/boot/efi && rmdir /newroot/boot/efi/EFI
# test -d /sys/firmware/efi/efivars && ln -sfr /newroot/boot/efi/fedora/grub.cfg /newroot/etc/grub2-efi.cfg
# cp -ax /home/* /newroot/home
# exit
```
It is OK to run these commands in the dracut shell shown in the previous section instead of doing it from single-user mode. I've demonstrated using single-user mode to avoid having to explain how to mount the non-root partitions from the dracut shell.
The parameters being passed to the _cp_ command for the _boot_ directory are a little different because the VFAT file system doesn't support symbolic links or Unix-style file permissions.
In rare cases, the _rd.auto_ parameter is known to cause LVM to fail to assemble due to a [race condition][23]. If you see errors about your _swap_ or _home_ partition failing to mount when entering single-user mode, simply try again by repeating step 5 but omitting the _rd.break_ parameter so that you will go directly to single-user mode.
8. **Update _fstab_ on the new drive** :
```
# cat << END > /newroot/etc/fstab
/dev/md/root / xfs defaults 0 0
/dev/md/boot /boot vfat defaults 0 0
/dev/md/swap swap swap defaults 0 0
END
```
9. **Configure the boot loader on the new drive** :
```
# NEW_GRUB_CMDLINE_LINUX=$(cat /etc/default/grub | sed -n 's/^GRUB_CMDLINE_LINUX="\(.*\)"/\1/ p')
# NEW_GRUB_CMDLINE_LINUX=${NEW_GRUB_CMDLINE_LINUX//rd.lvm.*([^ ])}
# NEW_GRUB_CMDLINE_LINUX=${NEW_GRUB_CMDLINE_LINUX//resume=*([^ ])}
# NEW_GRUB_CMDLINE_LINUX+=" selinux=0 rd.auto"
# sed -i "/^GRUB_CMDLINE_LINUX=/s/=.*/=\"$NEW_GRUB_CMDLINE_LINUX\"/" /newroot/etc/default/grub
```
You can re-enable selinux after this procedure is complete. But you will have to [relabel your file system][24] first.
10. **Install the boot loader on the new drive** :
```
# sed -i '/^GRUB_DISABLE_OS_PROBER=.*/d' /newroot/etc/default/grub
# echo "GRUB_DISABLE_OS_PROBER=true" >> /newroot/etc/default/grub
# MY_DISK_1=$(mdadm --detail /dev/md/boot | grep active | grep -m 1 -o "/dev/sd.")
# for i in dev dev/pts proc sys run; do mount -o bind /$i /newroot/$i; done
# chroot /newroot env MY_DISK_1=$MY_DISK_1 bash --login
# test -d /sys/firmware/efi/efivars || MY_GRUB_DIR=/boot/grub2
# test -d /sys/firmware/efi/efivars && MY_GRUB_DIR=$(find /boot/efi -type d -name 'fedora' -print -quit)
# test -e /usr/sbin/grub2-switch-to-blscfg && grub2-switch-to-blscfg --grub-directory=$MY_GRUB_DIR
# grub2-mkconfig -o $MY_GRUB_DIR/grub.cfg
# test -d /sys/firmware/efi/efivars && test /boot/grub2/grubenv -nt $MY_GRUB_DIR/grubenv && cp /boot/grub2/grubenv $MY_GRUB_DIR/grubenv
# test -d /sys/firmware/efi/efivars || grub2-install "$MY_DISK_1"
# logout
# for i in run sys proc dev/pts dev; do umount /newroot/$i; done
# test -d /sys/firmware/efi/efivars && efibootmgr -c -d "$MY_DISK_1" -p 1 -l "$(find /newroot/boot -name shimx64.efi -printf '/%P\n' -quit | sed 's!/!\\!g')" -L "Fedora RAID Disk 1"
```
The _grub2-switch-to-blscfg_ command is optional. It is only supported on Fedora 29+.
The _cp_ command above should not be necessary, but there appears to be a bug in the current version of grub which causes it to write to $BOOT/grub2/grubenv instead of $BOOT/efi/fedora/grubenv on UEFI systems.
You can use the following command to verify the contents of the _grub.cfg_ file right after running the _grub2-mkconfig_ command above:
```
# sed -n '/BEGIN .*10_linux/,/END .*10_linux/ p' $MY_GRUB_DIR/grub.cfg
```
You should see references to _mdraid_ and _mduuid_ in the output from the above command if the RAID array was detected properly.
11. **Boot off of the new drive** :
```
# reboot
```
How to select the new drive is system-dependent. It usually requires pressing one of the **F12**, **F10**, **Esc** or **Del** keys when you hear the [System OK BIOS beep code][25].
On UEFI systems the boot loader on the new drive should be labeled “Fedora RAID Disk 1”.
12. **Remove all the volume groups and partitions from your old drive** :
```
# MY_DISK_2=/dev/sda
# MY_VOLUMES=$(pvs | grep $MY_DISK_2 | awk '{print $2}' | tr "\n" " ")
# test -n "$MY_VOLUMES" && vgremove $MY_VOLUMES
# sgdisk --zap-all $MY_DISK_2
```
**WARNING** : You want to make certain that everything is working properly on your new drive before you do this. A good way to verify that your old drive is no longer being used is to try booting your computer once without the old drive connected.
You can add another new drive to your computer instead of erasing your old one if you prefer.
13. **Create new partitions on your old drive to match the ones on your new drive** :
```
# test -d /sys/firmware/efi/efivars || sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub_2 $MY_DISK_2
# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot_2 $MY_DISK_2
# sgdisk -n 0:0:+4GiB -t 0:fd00 -c 0:swap_2 $MY_DISK_2
# sgdisk -n 0:0:0 -t 0:fd00 -c 0:root_2 $MY_DISK_2
```
It is important that the partitions match in size and type. I prefer to use the _parted_ command to display the partition table because it supports setting the display unit:
```
# parted /dev/sda unit MiB print
# parted /dev/sdb unit MiB print
```
14. **Use mdadm to add the new partitions to the RAID devices** :
```
# mdadm --manage /dev/md/boot --add /dev/disk/by-partlabel/boot_2
# mdadm --manage /dev/md/swap --add /dev/disk/by-partlabel/swap_2
# mdadm --manage /dev/md/root --add /dev/disk/by-partlabel/root_2
```
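mdadm starts copying data onto the newly added members right away. You can watch the rebuild progress of each array (a recovery percentage is shown per array) with:
```
# watch -d cat /proc/mdstat
```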
15. **Install the boot loader on your old drive** :
```
# test -d /sys/firmware/efi/efivars || grub2-install "$MY_DISK_2"
# test -d /sys/firmware/efi/efivars && efibootmgr -c -d "$MY_DISK_2" -p 1 -l "$(find /boot -name shimx64.efi -printf "/%P\n" -quit | sed 's!/!\\!g')" -L "Fedora RAID Disk 2"
```
16. **Use mdadm to test that email notifications are working** :
```
# mdadm --monitor --scan --oneshot --test
```
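If no test message arrives, two things worth checking (assumptions about a typical Fedora setup, not part of the original steps) are that the mdmonitor service is running and that a MAILADDR line is present in _/etc/mdadm.conf_:
```
# systemctl status mdmonitor.service
# grep MAILADDR /etc/mdadm.conf
```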
As soon as your drives have finished synchronizing, you should be able to select either drive when restarting your computer and you will receive the same live-mirrored operating system. If either drive fails, mdmonitor will send an email notification. Recovering from a drive failure is now simply a matter of swapping out the bad drive with a new one and running a few _sgdisk_ and _mdadm_ commands to re-create the mirrors (steps 13 through 15). You will no longer have to worry about losing any data if a drive fails!
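As a hypothetical sketch of that recovery, if the second drive failed you would mark its members as failed and remove them from each array before physically swapping the disk; the device names below are assumptions based on the partition labels used in this guide:
```
# mdadm --manage /dev/md/root --fail /dev/disk/by-partlabel/root_2
# mdadm --manage /dev/md/root --remove /dev/disk/by-partlabel/root_2
```
Repeat for the _boot_ and _swap_ arrays, then follow steps 13 through 15 with the replacement drive.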
### Video Demonstrations
Converting a UEFI PC to RAID1
Converting a BIOS PC to RAID1
* TIP: Set the quality to 720p on the above videos for best viewing.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/
作者:[Gregory Bartholomew][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://fedoramagazine.org/author/glb/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/raid_mirroring-816x345.jpg
[2]: https://en.wikipedia.org/wiki/Disk_mirroring
[3]: https://en.wikipedia.org/wiki/Disk_mirroring#Additional_benefits
[4]: https://en.wikipedia.org/wiki/Bottleneck_(software)
[5]: https://en.wikipedia.org/wiki/Data_corruption
[6]: https://en.wikipedia.org/wiki/Data_degradation
[7]: https://en.wikipedia.org/wiki/Journaling_file_system
[8]: https://www.quora.com/What-is-the-difference-between-a-journaling-vs-a-log-structured-file-system
[9]: https://en.wikipedia.org/wiki/File_verification
[10]: https://en.wikipedia.org/wiki/ZFS#Summary_of_key_differentiating_features
[11]: https://en.wikipedia.org/wiki/Snapshot_(computer_storage)#File_systems
[12]: https://en.wikipedia.org/wiki/Business_continuance_volume
[13]: https://fedoramagazine.org/managing-partitions-with-sgdisk/
[14]: https://fedoramagazine.org/managing-raid-arrays-with-mdadm/
[15]: https://fedoraproject.org/wiki/QA:Testcase_Sendmail
[16]: https://en.wikipedia.org/wiki/Evolution_(software)
[17]: https://dotancohen.com/howto/root_email.html
[18]: https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/
[19]: https://systemd.io/BOOT_LOADER_SPECIFICATION#technical-details
[20]: https://docs.fedoraproject.org/en-US/Fedora/26/html/System_Administrators_Guide/sec-Changing_and_Resetting_the_Root_Password.html
[21]: https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_the_GRUB_2_Boot_Loader/#sec-Making_Temporary_Changes_to_a_GRUB_2_Menu
[22]: https://en.wikipedia.org/wiki/Wildcard_character#File_and_directory_patterns
[23]: https://en.wikipedia.org/wiki/Race_condition
[24]: https://wiki.centos.org/HowTos/SELinux#head-867ca18a09f3103705cdb04b7d2581b69cd74c55
[25]: https://en.wikipedia.org/wiki/Power-on_self-test#Original_IBM_POST_beep_codes


@ -0,0 +1,105 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (SuiteCRM: An Open Source CRM Takes Aim At Salesforce)
[#]: via: (https://itsfoss.com/suitecrm-ondemand/)
[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
SuiteCRM: An Open Source CRM Takes Aim At Salesforce
======
SuiteCRM is one of the most popular open source CRM (Customer Relationship Management) software available. With its unique-priced managed CRM hosting service, SuiteCRM is aiming to challenge enterprise CRMs like Salesforce.
### SuiteCRM: An Open Source CRM Software
CRM stands for Customer Relationship Management. It is used by businesses to manage the interaction with customers, keep track of services, supplies and other things that help the business manage their customers.
![][1]
[SuiteCRM][2] came into existence after the hugely popular [SugarCRM][3] decided to stop developing its open source version. The open source version of SugarCRM was then forked into SuiteCRM by UK-based [SalesAgility][4] team.
In just a couple of years, SuiteCRM became immensely popular and started to be considered the best open source CRM software out there. You can gauge its popularity from the fact that its nearing a million downloads and has over 100,000 community members. There are around 4 million SuiteCRM users worldwide (a CRM software usually has more than one user) and it is available in several languages. Its even used by the National Health Service ([NHS][5]) in the UK.
Since SuiteCRM is a free and open source software, you are free to download it and deploy it on a cloud server such as [UpCloud][6] (we at Its FOSS use it), [DigitalOcean][7], [AWS][8] or any Linux server of your own.
But configuring the software, deploying it and managing it is a tiresome job that requires a certain skill level or the services of a sysadmin. This is why business-oriented open source software often provides a hosted version of the software.
This enables you to enjoy the open source software without the additional headache, and the team behind the software gets a way to generate revenue and continue its development.
### Suite:OnDemand Cost-effective managed hosting of SuiteCRM
So, recently, [SalesAgility][4] the creators/maintainers of SuiteCRM decided to challenge [Salesforce][9] and other enterprise CRMs by introducing [Suite:OnDemand][10], a hosted version of SuiteCRM.
Normally, you will observe pricing plans based on the number of users. But with SuiteCRMs OnDemand cloud hosting plans, they are trying to give businesses an affordable solution on a “per-server” basis instead of making you pay for every user you add.
In other words, they want you to pay extra only for advanced features, not for more users.
Heres what SalesAgility mentioned in their [press release][12]:
> Unlike Salesforce and other enterprise CRM vendors, the practice of pricing per user has been abandoned in favour of per-server hosting packages all of which will support unlimited users. In addition, theres no increase in cost for access to advanced features. With Suite:OnDemand every feature and benefit is available with each hosting package.
Of course, “unlimited users” does not mean that the term should be abused. So, theres a recommended number of users for every hosting plan you opt for.
![Suitecrm Hosting][13]
The CEO of SalesAgility also described their goals for this step:
_“We want SuiteCRM to be available to all businesses and to all users within a business,”_ said **Dale Murray**, CEO of **SalesAgility**.
In addition to that, they also mentioned that they want to revolutionize the way enterprise-class CRM is currently offered, in order to make it more accessible to businesses and organizations:
> “Many organisations do not have the experience to run and support our product on-premise or it is not part of their technology strategy to do so. With Suite:OnDemand we are providing our customers with a quick and easy solution to access all the features of SuiteCRM without a per user cost. Were also saying to Salesforce that enterprise-class CRM can be delivered, enhanced, maintained and supported without charging mouth-wateringly expensive monthly fees. Our aim is to transform the CRM market to enable users to make CRM pervasive within their organisations.”
>
> Dale Murray, CEO of SalesAgility
### Why is this a big deal?
This is a huge relief for small business owners and startups because other CRMs like Salesforce and SugarCRM charge $30-$40 per month per user. If you have 10 members in your team, this will increase the cost to $300-$400 per month.
This is also good news for the open source community: we will have an affordable alternative to Salesforce.
In addition to this, SuiteCRM is fully open source, meaning there are no license fees or vendor lock-in, as they mention. You are always free to use it on your own.
It is interesting to see the different strategies and solutions being applied for an open source CRM software to take aim at Salesforce directly.
What do you think? Let us know your thoughts in the comments below.
_With inputs from Abhishek Prakash._
--------------------------------------------------------------------------------
via: https://itsfoss.com/suitecrm-ondemand/
作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/wp-content/uploads/2019/05/suite-crm-800x450.png
[2]: https://suitecrm.com/
[3]: https://www.sugarcrm.com/
[4]: https://salesagility.com/
[5]: https://www.nhs.uk/
[6]: https://www.upcloud.com/register/?promo=itsfoss
[7]: https://m.do.co/c/d58840562553
[8]: https://aws.amazon.com/
[9]: https://www.salesforce.com
[10]: https://suitecrm.com/suiteondemand/
[11]: https://itsfoss.com/papyrus-open-source-note-manager/
[12]: https://suitecrm.com/sod-pr/
[13]: https://itsfoss.com/wp-content/uploads/2019/05/suitecrm-hosting-800x457.jpg
[14]: https://itsfoss.com/winds-podcast-feedreader/


@ -0,0 +1,88 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 An Introduction To Hyperledger Project (HLP) [Part 8])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)
Blockchain 2.0 An Introduction To Hyperledger Project (HLP) [Part 8]
======
![Introduction To Hyperledger Project][1]
Once a new technology platform reaches a threshold level of popularity in terms of active development and commercial interest, major global companies and smaller start-ups alike rush to catch a slice of the pie. **Linux** was one such platform back in the day. Once the ubiquity of its applications was realized, individuals, firms, and institutions started displaying their interest in it, and by 2000 the **Linux Foundation** was formed.
The Linux Foundation aims to standardize and develop Linux as a platform by sponsoring its development team. The Linux Foundation is a non-profit organization that is supported by software and IT behemoths such as Microsoft, Oracle, Samsung, Cisco, IBM, and Intel, among others[1]. This is excluding the hundreds of individual developers who offer their services for the betterment of the platform. Over the years the Linux Foundation has taken many projects under its roof. The **Hyperledger Project** is its fastest-growing one to date.
Such consortium-led development has a lot of advantages when it comes to furthering tech into usable, useful forms. Developing the standards, libraries, and back-end protocols for large-scale projects is expensive and resource-intensive, without a shred of income being generated from it. Hence, it makes sense for companies to pool their resources to develop the common “boring” parts by supporting such organizations, and later, upon completing work on these standard parts, to simply plug and play and customize their products. Apart from the economics of the model, such collaborative efforts also yield standards allowing for easier use and integration into aspiring products and services.
Other major innovations that were once or are currently being developed following this consortium model include standards for Wi-Fi (the Wi-Fi Alliance), mobile telephony, etc.
### Introduction to Hyperledger Project (HLP)
The Hyperledger Project was launched in December 2015 by the Linux Foundation and is currently among the fastest-growing projects it has incubated. Its an umbrella organization for collaborative efforts into developing and advancing tools and standards for [**blockchain**][2]-based distributed ledger technologies (DLT). Major industry players supporting the project include **IBM**, **Intel** and **SAP Ariba**, among [**others**][3]. The HLP aims to create frameworks for individuals and companies to create shared as well as closed blockchains as required to further their own requirements. The design principles include a strong tilt toward developing a globally deployable, scalable, robust platform with a focus on privacy and future auditability[2]. It is also important to note that most of the blockchains proposed under the HLP, and the frameworks around them, are permissioned (non-public) platforms.
### Development goals and structure: Making it plug & play
Although enterprise-facing platforms exist from the likes of the Ethereum alliance, HLP is by definition business-facing and supported by industry behemoths who contribute to and further development of the many modules that come under the HLP banner. The HLP incubates projects in development after their induction into the cause and, after finishing work on them and ironing out the kinks, rolls them out for the public. Members of the Hyperledger Project contribute their own work; for example, IBM contributed its Fabric platform for collaborative development. The codebase is absorbed and developed in-house by the project group and rolled out to all members equally for their use.
Such processes make the modules in HLP highly flexible plug-in frameworks which will support rapid development and roll-outs in enterprise settings. Furthermore, other comparable platforms are open **permission-less blockchains**, or rather **public chains**, by default, and even though it is possible to adapt them to specific applications, HLP modules support that feature natively.
The differences and use cases of public & private blockchains are covered more [**here**][4] in this comparative primer on the same.
The Hyperledger Projects mission is four-fold, according to **Brian Behlendorf**, the executive director of the project.
They are:
1. To create an enterprise grade DLT framework and standards which anyone can port to suit their specific industrial or personal needs.
2. To give rise to a robust open source community to aid the ecosystem.
3. To promote and further participation of industry members of the said ecosystem such as member firms.
4. To host a neutral unbiased infrastructure for the HLP community to gather and share updates and developments regarding the same.
The original document can be accessed [**here**][5].
### Structure of the HLP
The **HLP consists of 12 projects** that are classified as independent modules, each usually structured and working independently to develop its module. These are first studied for their capabilities and viability before being incubated. Proposals for additions can be made by any member of the organization. After a project is incubated, active development ensues, after which it is rolled out. The interoperability between these modules is given a high priority, hence regular communication between these groups is maintained by the community. Currently, 4 of these projects are categorized as active. The active tag implies these are ready for use but not ready for a major release yet. These 4 are arguably the most significant, or rather fundamental, modules for furthering the blockchain revolution. Well look at the individual modules and their functionalities in detail at a later time. However, a brief description of the Hyperledger Fabric platform, arguably the most popular among them, follows.
### Hyperledger Fabric
The **Hyperledger Fabric** [2] is a fully open-source, permissioned (non-public) blockchain-based DLT platform that is designed with enterprise uses in mind. The platform provides features and is structured to fit the enterprise environment. It is highly modular, allowing its developers to choose from different consensus protocols, **chaincode protocols ([smart contracts][6])**, identity management systems, etc., as they go along. **It is a permissioned blockchain-based platform** that makes use of an identity management system, meaning participants will be aware of each others identities, which is required in an enterprise setting. Fabric allows for smart contract (_**“chaincode” is the term that the Hyperledger team uses**_) development in a variety of mainstream programming languages including **Java**, **JavaScript**, **Go**, etc. This allows institutions and enterprises to make use of their existing talent in the area without hiring or re-training developers to develop their own smart contracts. Fabric also uses an execute-order-validate system to handle smart contracts, for better reliability compared to the standard order-execute system used by other platforms providing smart contract functionality. Pluggable performance, identity management systems, DBMSs, consensus platforms, etc. are other features of Fabric that keep it miles ahead of its competition.
### Conclusion
Projects such as the Hyperledger Fabric platform enable a faster rate of adoption of blockchain technology in mainstream use cases. The Hyperledger community structure itself supports open governance principles, and since all the projects are led as open source platforms, this improves the security and accountability that the teams exhibit in pushing out commitments.
Since major applications of such projects involve working with enterprises to further the development of platforms and standards, the Hyperledger Project is currently in a great position with respect to comparable projects by others.
**References:**
* **[1][Samsung takes a seat with Intel and IBM at the Linux Foundation | TheINQUIRER][7]**
* **[2] E. Androulaki et al., “Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains,” 2018.**
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Introduction-To-Hyperledger-Project-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[3]: https://www.hyperledger.org/members
[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
[5]: http://www.hitachi.com/rev/archive/2017/r2017_01/expert/index.html
[6]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[7]: https://www.theinquirer.net/inquirer/news/2182438/samsung-takes-seat-intel-ibm-linux-foundation


@ -0,0 +1,106 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 Public Vs Private Blockchain Comparison [Part 7])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)
Blockchain 2.0 Public Vs Private Blockchain Comparison [Part 7]
======
![Public vs Private blockchain][1]
The previous part of the [**Blockchain 2.0**][2] series explored [**the state of smart contracts**][3] now. This post intends to throw some light on the different types of blockchains that can be created. Each of these is used for vastly different applications, and depending on the use cases, the protocol followed by each of them differs. Now let us go ahead and learn about the **public vs private blockchain comparison** with open source and proprietary technology.
The fundamental three-layer structure of a blockchain based distributed ledger as we know is as follows:
![][4]
Figure 1 Fundamental structure of Blockchain-based ledgers
The differences between the types mentioned here are attributable primarily to the protocol that rests on the underlying blockchain. The protocol dictates rules for the participants and the behavior of the blockchain in response to the said participation.
Remember to keep the following things in mind while reading through this article:
* Platforms such as these are always created to solve a use-case requirement. There is no one best direction that the technology should take. Blockchains, for instance, have tremendous applications, and some of these might require dropping features that seem significant in other settings. **Decentralized storage** is a major example in this regard.
* Blockchains are basically database systems keeping track of information by timestamping and organizing data in the form of blocks. Creators of such blockchains can choose who has the right to make these blocks and perform alterations.
* Blockchains can be “centralized” as well, and participation can be limited to varying extents to those whom this “central authority” deems eligible.
Most blockchains are either **public** or **private**. Broadly speaking, public blockchains can be considered as being the equivalent of open source software and most private blockchains can be seen as proprietary platforms deriving from the public ones. The figure below should make the basic difference obvious to most of you.
![][5]
Figure 2 Public vs Private blockchain comparison with Open source and Proprietary Technology
This is not to say that all private blockchains are derived from open public ones. However, the most popular ones usually are.
### Public Blockchains
A public blockchain can be considered as a **permission-less platform** or **network**. Anyone with the knowhow and computing resources can participate in it. This will have the following implications:
* Anyone can join and participate in a public blockchain network. All a “participant” needs is a stable internet connection along with computing resources.
* Participation will include reading, writing, verifying, and providing consensus during transactions. An example for participating individuals would be **Bitcoin miners**. In exchange for participating in the network the miners are paid back in Bitcoins in this case.
* The platform is completely decentralized and fully redundant.
* Because of the decentralized nature, no one entity has complete control over the data recorded in the ledger. To validate a block all (or most) participants need to vet the data.
* This means that once information is verified and recorded, it cannot be altered easily. Even if it is, it is impossible not to leave marks.
* The identity of participants remains anonymous by design in platforms such as **BITCOIN** and **LITECOIN**. These platforms by design aim for protecting and securing user identities. This is primarily a feature provided by the overlying protocol stack.
* Examples for public blockchain networks are **BITCOIN** , **LITECOIN** , **ETHEREUM** etc.
* Extensive decentralization means that gaining consensus on transactions might take a while compared to what is typically possible over permissioned blockchain networks, and throughput can be a challenge for large enterprises aiming to push a very high number of transactions every instant.
* The open participation, and often the high number of participants, in open chains such as Bitcoin adds up to considerable initial investments in computing equipment and energy costs.
### Private Blockchain
In contrast, a private blockchain is a **permissioned blockchain**. Meaning:
* Permission to participate in the network is restricted and is presided over by the owner or institution overseeing the network. This means that even though an individual will be able to store data and transact (send and receive payments, for example), the validation and storage of these transactions will be done only by select participants.
* Even once permission is given by the central authority, participation will be limited by terms. For instance, in the case of a private blockchain network run by a financial institution, not every customer will have access to the entire blockchain ledger, and even among those with permission, not everyone will be able to access everything. Permissions to access select services will be given by the central figure in this case. This is often referred to as **“channeling”**.
* Such systems have significantly larger throughput capabilities and also showcase much faster transaction speeds compared to their public counterparts because a block of information only needs to be validated by a select few.
* Security by design is something the public blockchains are renowned for. They achieve this by:
* Anonymizing participants,
* Distributed & redundant but encrypted storage on multiple nodes,
* Mass consensus required for creating and altering data.
Private blockchains usually dont feature any of these in their protocol. This makes the system only as secure as most cloud-based database systems currently in use.
### A note for the wise
An important point to note is this: the fact that theyre named public or private (or open or closed) has nothing to do with the underlying code base. The code, or the literal foundations on which the platforms are based, may or may not be publicly available and/or developed in either of these cases. **R3** is a **DLT** (**D**istributed **L**edger **T**echnology) company that leads a public consortium of over 200 multinational institutions. Their aim is to further the development of blockchain and related distributed ledger technology in the domain of finance and commerce. **Corda** is the product of this joint effort. R3 defines Corda as a blockchain platform that is built specifically for businesses. The codebase for the same is open source, and developers all over the world are encouraged to contribute to the project. However, given its business-facing nature and the needs it is meant to address, Corda would be categorized as a permissioned, closed blockchain platform. This means businesses can choose the participants of the network once it is deployed and choose the kind of information these participants can access through the use of natively available smart contract tools.
While it is a reality that public platforms like Bitcoin and Ethereum are responsible for the widespread awareness and development going on in the space, it can still be argued that private blockchains designed for specific use cases in enterprise or business settings are what will lead monetary investments in the short run. These are the platforms most of us will see implemented in practical ways in the near future.
Read the next guide about Hyperledger project in this series.
* [**Blockchain 2.0 An Introduction To Hyperledger Project (HLP)**][6]
We are working on many interesting topics on Blockchain technology. Stay tuned!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/
作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Public-Vs-Private-Blockchain-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[3]: https://www.ostechnix.com/blockchain-2-0-ongoing-projects-the-state-of-smart-contracts-now/
[4]: http://www.ostechnix.com/wp-content/uploads/2019/04/blockchain-architecture.png
[5]: http://www.ostechnix.com/wp-content/uploads/2019/04/Public-vs-Private-blockchain-comparison.png
[6]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/


@ -0,0 +1,83 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Blockchain 2.0 What Is Ethereum [Part 9])
[#]: via: (https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/)
[#]: author: (editor https://www.ostechnix.com/author/editor/)
Blockchain 2.0 What Is Ethereum [Part 9]
======
![Ethereum][1]
In the previous guide of this series, we discussed the [**Hyperledger Project (HLP)**][2], one of the fastest-growing projects developed by the **Linux Foundation**. In this guide, we are going to discuss what **Ethereum** is and its features in detail. Many researchers opine that the future of the internet will be based on principles of decentralized computing. Decentralized computing was in fact one of the broader objectives of having the internet in the first place. However, the internet took another turn owing to differences in available computing capabilities. While modern server capabilities make the case for server-side processing and execution, the lack of decent mobile networks in large parts of the world makes the case for the same on the client side. Modern smartphones now have **SoCs** (systems on a chip) capable of handling many such operations on the client side itself; however, limitations in retrieving and storing data securely still push developers toward server-side computing and data management. Hence, a bottleneck in regard to data transfer capabilities is currently observed.
All of that might soon change because of advancements in distributed data storage and program execution platforms. [**The blockchain**][3], for the first time in the history of the internet, basically allows for secure data management and program execution on a distributed network of users as opposed to central servers.
**Ethereum** is one such blockchain platform that gives developers access to frameworks and tools used to build and run applications on such a decentralized network. Though more popularly known for its cryptocurrency, Ethereum is more than just **ether** (the cryptocurrency). It features a full **Turing-complete programming language** that is designed to develop and deploy **DApps**, or **Distributed APPlications**[1]. Well look at DApps in more detail in one of the upcoming posts.
Ethereum is open source, supports a public (non-permissioned) blockchain by default, and features an extensive smart contract language (**Solidity**) underneath. Ethereum provides a virtual computing environment called the **Ethereum virtual machine** to run applications and [**smart contracts**][4] as well[2]. The Ethereum virtual machine runs on thousands of participating nodes all over the world, meaning the application data, while being secure, is almost impossible to tamper with or lose.
### Getting behind Ethereum: What sets it apart
In 2017, a group of 30-plus of the whos who of the tech and financial world got together to leverage the Ethereum blockchains capabilities. Thus, the **Ethereum Enterprise Alliance (EEA)** was formed by a long list of supporting members including _Microsoft_, _JP Morgan_, _Cisco Systems_, _Deloitte_, and _Accenture_. JP Morgan already has **Quorum**, a decentralized computing platform for financial services based on Ethereum, currently in operation, while Microsoft has Ethereum-based cloud services it markets through its Azure cloud business[3].
### What is ether and how is it related to Ethereum
Ethereum creator **Vitalik Buterin** understood the true value of a decentralized processing platform and the underlying blockchain tech that powered Bitcoin. He failed to gain majority agreement for his proposal that Bitcoin be developed to support running distributed applications (DApps) and programs (now referred to as smart contracts).
Hence in 2013, he proposed the idea of Ethereum in a white paper he published. The original white paper is still maintained and available for readers **[here][5]**. The idea was to develop a blockchain based platform to run smart contracts and applications designed to run on nodes and user devices instead of servers.
The Ethereum system is often mistaken to mean just the cryptocurrency ether; however, it has to be reiterated that Ethereum is a full-stack platform for developing applications as well as executing them, and has been so since inception, whereas Bitcoin is not. **Ether is currently the second biggest cryptocurrency** by market capitalization and trades at an average of $170 per ether at the time of writing this article[4].
### Features and technicalities of the platform[5]
* As weve already mentioned, the cryptocurrency called ether is simply one of the things the platform features. The purpose of the system is more than taking care of financial transactions. In fact, the key difference between the Ethereum platform and Bitcoin is in their scripting capabilities. Ethereum supports a Turing-complete programming language, which means it has scripting and application capabilities similar to other major programming languages. Developers require this feature to create DApps and complex smart contracts on the platform, a feature that Bitcoin lacks.
* The “mining” process of ether is more stringent and complex. While specialized ASICs may be used to mine Bitcoin, the basic hashing algorithm used by Ethereum (**Ethash**) reduces the advantage that ASICs have in this regard.
* The transaction fee itself, paid as an incentive to miners and node operators for running the network, is calculated using a computational token called **Gas**. Gas improves the systems resilience and resistance to external hacks and attacks by requiring the initiator of the transaction to pay ethers proportionate to the amount of computational resources required to carry out that transaction. This is in contrast to other platforms such as Bitcoin, where the transaction fee is measured in tandem with the transaction size. As such, the average transaction cost in Ethereum is radically lower than in Bitcoin. This also implies that applications running on the Ethereum virtual machine require a fee that depends directly on the computational problem that the application is meant to solve. Basically, the more complex an execution, the higher the fee (see the worked example after this list).
* The block time for Ethereum is estimated to be around _**10-15 seconds**_. The block time is the average time that is required to timestamp and create a block on the blockchain network. Compared to the 10+ minutes the same transaction will take on the bitcoin network, it becomes apparent that _**Ethereum is much faster**_ with respect to transactions and verification of blocks.
* _It is also interesting to note that there is no hard cap on the amount of ether that can be mined or the rate at which ether can be mined leading to less radical system design than bitcoin._
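To make the gas model concrete, here is a rough worked example. A plain ether transfer costs a fixed 21,000 gas; the gas price is set by the sender, and the 2 gwei (2/10^9 ether) used here is an assumed figure, not a quote. The fee is simply gas used times gas price:
```
$ # fee in ether = gas used (21000) x assumed gas price (2 gwei = 2/10^9 ether)
$ echo "21000 * 2 / 10^9" | bc -l
.00004200000000000000
```
So this transfer would cost 0.000042 ether regardless of how much ether is being moved, whereas a complex smart contract call consuming more gas would cost proportionally more.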
### Conclusion
While Ethereum is comparable to and in many respects far outpaces similar platforms, the platform itself lacked a definite path for development until the Ethereum Enterprise Alliance started pushing it. While the definite push for enterprise development is being made by the Ethereum platform, it has to be noted that Ethereum also caters to small-time developers and individuals. As such, developing the platform for both end users and enterprises leaves a lot of specific functionality out of the loop for Ethereum. Also, the blockchain model proposed and developed by the Ethereum foundation is a public model, whereas the one proposed by projects such as the Hyperledger project is private and permissioned.
While only time can tell which platform among the ones put forward by Ethereum, Hyperledger, and R3 Corda among others will find the most fans in real-world use cases, such systems do prove the validity behind the claim of a blockchain powered future.
**References:**
* [1] [**Gabriel Nicholas, “Ethereum Is Codings New Wild West | WIRED,” Wired , 2017**][6].
* [2] [**What is Ethereum? — Ethereum Homestead 0.1 documentation**][7].
* [3] [**Ethereum, a Virtual Currency, Enables Transactions That Rival Bitcoins The New York Times**][8].
* [4] [**Cryptocurrency Market Capitalizations | CoinMarketCap**][9].
* [5] [**Introduction — Ethereum Homestead 0.1 documentation**][10].
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/
作者:[editor][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.ostechnix.com/author/editor/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Ethereum-720x340.png
[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/
[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/
[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/
[5]: https://github.com/ethereum/wiki/wiki/White-Paper
[6]: https://www.wired.com/story/ethereum-is-codings-new-wild-west/
[7]: http://www.ethdocs.org/en/latest/introduction/what-is-ethereum.html#ethereum-virtual-machine
[8]: https://www.nytimes.com/2016/03/28/business/dealbook/ethereum-a-virtual-currency-enables-transactions-that-rival-bitcoins.html
[9]: https://coinmarketcap.com/
[10]: http://www.ethdocs.org/en/latest/introduction/index.html


@ -0,0 +1,183 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Five Methods To Check Your Current Runlevel In Linux?)
[#]: via: (https://www.2daygeek.com/check-current-runlevel-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Five Methods To Check Your Current Runlevel In Linux?
======
A runlevel is an operating system state on a Linux system.
There are seven runlevels, numbered from zero to six.
A system can be booted into any of the given runlevels. Runlevels are identified by numbers.
Each runlevel designates a different system configuration and allows access to a different combination of processes.
By default, Linux boots either to runlevel 3 or to runlevel 5.
Only one runlevel is executed at a time on startup; they dont execute one after another.
The default runlevel for a system is specified in the /etc/inittab file on SysVinit systems.
But systemd systems dont read this file; they use the `/etc/systemd/system/default.target` file to get the default runlevel information.
We can check the current runlevel of a Linux system using the five methods below.
* **`runlevel Command:`** runlevel prints the previous and current runlevel of the system.
* **`who Command:`** Print information about users who are currently logged in. It will print the runlevel information with “-r” option.
* **`systemctl Command:`** It controls the systemd system and service manager.
* **`Using /etc/inittab File:`** The default runlevel for a system is specified in the /etc/inittab file for SysVinit System.
* **`Using /etc/systemd/system/default.target File:`** The default runlevel for a system is specified in the /etc/systemd/system/default.target file for systemd System.
Detailed runlevels information is described in the below table.
**Runlevel** | **SysVinit System** | **systemd System**
---|---|---
0 | Shutdown or Halt the system | shutdown.target
1 | Single user mode | rescue.target
2 | Multiuser, without NFS | multi-user.target
3 | Full multiuser mode | multi-user.target
4 | unused | multi-user.target
5 | X11 (Graphical User Interface) | graphical.target
6 | reboot the system | reboot.target
The system will execute programs/services based on the runlevel.
For SysVinit systems, they will be executed from the following locations.
* Run level 0 /etc/rc.d/rc0.d/
* Run level 1 /etc/rc.d/rc1.d/
* Run level 2 /etc/rc.d/rc2.d/
* Run level 3 /etc/rc.d/rc3.d/
* Run level 4 /etc/rc.d/rc4.d/
* Run level 5 /etc/rc.d/rc5.d/
* Run level 6 /etc/rc.d/rc6.d/
For systemd systems, they will be executed from the following locations.
* runlevel1.target /etc/systemd/system/rescue.target
* runlevel2.target /etc/systemd/system/multi-user.target.wants
* runlevel3.target /etc/systemd/system/multi-user.target.wants
* runlevel4.target /etc/systemd/system/multi-user.target.wants
* runlevel5.target /etc/systemd/system/graphical.target.wants
### 1) How To Check Your Current Runlevel In Linux Using runlevel Command?
runlevel prints the previous and current runlevel of the system.
```
$ runlevel
N 5
```
* **`N:`** “N” indicates that the runlevel has not been changed since the system was booted.
* **`5:`** “5” indicates the current runlevel of the system.
### 2) How To Check Your Current Runlevel In Linux Using who Command?
Print information about users who are currently logged in. It will print the runlevel information with `-r` option.
```
$ who -r
run-level 5 2019-04-22 09:32
```
### 3) How To Check Your Current Runlevel In Linux Using systemctl Command?
systemctl is used to control the systemd system and service manager. systemd is a system and service manager for Unix-like operating systems.
It can work as a drop-in replacement for the SysVinit system. systemd is the first process started by the kernel and holds PID 1.
systemd uses `.service` files instead of the bash scripts that SysVinit uses. systemd sorts all daemons into their own Linux cgroups, and you can see the system hierarchy by exploring the `/sys/fs/cgroup/systemd` directory.
```
$ systemctl get-default
graphical.target
```
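As an aside, systemctl is also the way to change the runlevel on systemd systems: `set-default` changes it persistently, while `isolate` switches the running system immediately (roughly what `telinit` did on SysVinit systems). For example, to move to the non-graphical multi-user target:
```
# systemctl set-default multi-user.target
# systemctl isolate multi-user.target
```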
### 4) How To Check Your Current Runlevel In Linux Using /etc/inittab File?
The default runlevel for a system is specified in the /etc/inittab file on SysVinit systems, but systemd doesnt read this file.
So, this method will work only on SysVinit systems and not on systemd systems.
```
$ cat /etc/inittab
# inittab is only used by upstart for the default runlevel.
#
# ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.
#
# System initialization is started by /etc/init/rcS.conf
#
# Individual runlevels are started by /etc/init/rc.conf
#
# Ctrl-Alt-Delete is handled by /etc/init/control-alt-delete.conf
#
# Terminal gettys are handled by /etc/init/tty.conf and /etc/init/serial.conf,
# with configuration in /etc/sysconfig/init.
#
# For information on how to write upstart event handlers, or how
# upstart works, see init(5), init(8), and initctl(8).
#
# Default runlevel. The runlevels used are:
# 0 - halt (Do NOT set initdefault to this)
# 1 - Single user mode
# 2 - Multiuser, without NFS (The same as 3, if you do not have networking)
# 3 - Full multiuser mode
# 4 - unused
# 5 - X11
# 6 - reboot (Do NOT set initdefault to this)
#
id:5:initdefault:
```
### 5) How To Check Your Current Runlevel In Linux Using /etc/systemd/system/default.target File?
The default runlevel for a system is specified in the /etc/systemd/system/default.target file on systemd systems.
This method doesnt work on SysVinit systems.
```
$ cat /etc/systemd/system/default.target
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
Wants=display-manager.service
Conflicts=rescue.service rescue.target
After=multi-user.target rescue.service rescue.target display-manager.service
AllowIsolate=yes
```
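Since this file is normally just a symbolic link to the real target unit, you can also resolve it directly instead of reading its contents; on a typical Fedora/RHEL-style layout it resolves as follows:
```
$ readlink -f /etc/systemd/system/default.target
/usr/lib/systemd/system/graphical.target
```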
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/check-current-runlevel-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972


@ -0,0 +1,338 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Install/Uninstall Listed Packages From A File In Linux?)
[#]: via: (https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-file-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
How To Install/Uninstall Listed Packages From A File In Linux?
======
In some cases, you may want to install a list of packages from one server on another server.
For example, you have installed 15 packages on ServerA, and those packages need to be installed on ServerB, ServerC, etc.
We can manually install all the packages, but it is a time-consuming process.
It can be done for one or two servers, but think about what happens if you have around 10 servers.
Manual installation doesnt help in this case. So, what is the solution?
Dont worry, we are here to help you out in this situation or scenario.
We have added four methods in this article to overcome this situation.
I hope this will help you to fix your issue. I have tested these commands on CentOS 7 and Ubuntu 18.04 systems.
I hope this will work with other distributions too. Just replace the commands shown here with your distributions official package manager commands.
Navigate to the following article if you want to **[check list of installed packages in Linux system][1]**.
For example, if you would like to create a package list from an RHEL-based system, use the following steps. Do the same for other distributions as well.
```
# rpm -qa --last | head -15 | awk '{print $1}' > /tmp/pack1.txt
# cat /tmp/pack1.txt
mariadb-server-5.5.60-1.el7_5.x86_64
perl-DBI-1.627-4.el7.x86_64
perl-DBD-MySQL-4.023-6.el7.x86_64
perl-PlRPC-0.2020-14.el7.noarch
perl-Net-Daemon-0.48-5.el7.noarch
perl-IO-Compress-2.061-2.el7.noarch
perl-Compress-Raw-Zlib-2.061-4.el7.x86_64
mariadb-5.5.60-1.el7_5.x86_64
perl-Data-Dumper-2.145-3.el7.x86_64
perl-Compress-Raw-Bzip2-2.061-3.el7.x86_64
httpd-2.4.6-88.el7.centos.x86_64
mailcap-2.1.41-2.el7.noarch
httpd-tools-2.4.6-88.el7.centos.x86_64
apr-util-1.5.2-6.el7.x86_64
apr-1.4.8-3.el7_4.1.x86_64
```
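Note that the list above contains full name-version-release-architecture strings, which may not resolve on a server that carries different package versions. If you want a more portable, name-only list, a sketch like the following should work (the first command is for RPM-based systems, the second for Debian-based systems):
```
# rpm -qa --qf "%{NAME}\n" | sort > /tmp/pack1.txt
# dpkg-query -W -f '${binary:Package}\n' | sort > /tmp/pack1.txt
```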
### Method-1 : How To Install Listed Packages From A File In Linux With Help Of cat Command?
To achieve this, I would like to go with this first method, as it is very simple and straightforward.
To do so, just create a file and add the list of packages that you want to install.
For testing purpose, we are going to add only the below three packages into the following file.
```
# cat /tmp/pack1.txt
apache2
mariadb-server
nano
```
Simply run the following **[apt command][2]** to install all the packages in a single shot from a file in Ubuntu/Debian systems.
```
# apt -y install $(cat /tmp/pack1.txt)
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
libopts25 sntp
Use 'sudo apt autoremove' to remove them.
Suggested packages:
apache2-doc apache2-suexec-pristine | apache2-suexec-custom spell
The following NEW packages will be installed:
apache2 mariadb-server nano
0 upgraded, 3 newly installed, 0 to remove and 24 not upgraded.
Need to get 339 kB of archives.
After this operation, 1,377 kB of additional disk space will be used.
Get:1 http://in.archive.ubuntu.com/ubuntu bionic-updates/main amd64 apache2 amd64 2.4.29-1ubuntu4.6 [95.1 kB]
Get:2 http://in.archive.ubuntu.com/ubuntu bionic/main amd64 nano amd64 2.9.3-2 [231 kB]
Get:3 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 mariadb-server all 1:10.1.38-0ubuntu0.18.04.1 [12.9 kB]
Fetched 339 kB in 19s (18.0 kB/s)
Selecting previously unselected package apache2.
(Reading database ... 290926 files and directories currently installed.)
Preparing to unpack .../apache2_2.4.29-1ubuntu4.6_amd64.deb ...
Unpacking apache2 (2.4.29-1ubuntu4.6) ...
Selecting previously unselected package nano.
Preparing to unpack .../nano_2.9.3-2_amd64.deb ...
Unpacking nano (2.9.3-2) ...
Selecting previously unselected package mariadb-server.
Preparing to unpack .../mariadb-server_1%3a10.1.38-0ubuntu0.18.04.1_all.deb ...
Unpacking mariadb-server (1:10.1.38-0ubuntu0.18.04.1) ...
Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
Setting up apache2 (2.4.29-1ubuntu4.6) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for install-info (6.5.0.dfsg.1-2) ...
Setting up nano (2.9.3-2) ...
update-alternatives: using /bin/nano to provide /usr/bin/editor (editor) in auto mode
update-alternatives: using /bin/nano to provide /usr/bin/pico (pico) in auto mode
Processing triggers for systemd (237-3ubuntu10.20) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up mariadb-server (1:10.1.38-0ubuntu0.18.04.1) ...
```
For removal, use the same format with appropriate option.
```
# apt -y remove $(cat /tmp/pack1.txt)
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
apache2-bin apache2-data apache2-utils galera-3 libaio1 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libconfig-inifiles-perl libdbd-mysql-perl libdbi-perl libjemalloc1 liblua5.2-0
libmysqlclient20 libopts25 libterm-readkey-perl mariadb-client-10.1 mariadb-client-core-10.1 mariadb-common mariadb-server-10.1 mariadb-server-core-10.1 mysql-common sntp socat
Use 'apt autoremove' to remove them.
The following packages will be REMOVED:
apache2 mariadb-server nano
0 upgraded, 0 newly installed, 3 to remove and 24 not upgraded.
After this operation, 1,377 kB disk space will be freed.
(Reading database ... 291046 files and directories currently installed.)
Removing apache2 (2.4.29-1ubuntu4.6) ...
Removing mariadb-server (1:10.1.38-0ubuntu0.18.04.1) ...
Removing nano (2.9.3-2) ...
update-alternatives: using /usr/bin/vim.tiny to provide /usr/bin/editor (editor) in auto mode
Processing triggers for ufw (0.36-0ubuntu0.18.04.1) ...
Processing triggers for install-info (6.5.0.dfsg.1-2) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
```
Use the following **[yum command][3]** to install listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
```
# yum -y install $(cat /tmp/pack1.txt)
```
Use the following format to uninstall listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
```
# yum -y remove $(cat /tmp/pack1.txt)
```
Use the following **[dnf command][4]** to install listed packages from a file on Fedora system.
```
# dnf -y install $(cat /tmp/pack1.txt)
```
Use the following format to uninstall listed packages from a file on Fedora system.
```
# dnf -y remove $(cat /tmp/pack1.txt)
```
Use the following **[zypper command][5]** to install listed packages from a file on openSUSE system.
```
# zypper -n install $(cat /tmp/pack1.txt)
```
Use the following format to uninstall listed packages from a file on openSUSE system.
```
# zypper -n remove $(cat /tmp/pack1.txt)
```
Use the following **[pacman command][6]** to install listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
```
# pacman -S $(cat /tmp/pack1.txt)
```
Use the following format to uninstall listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
```
# pacman -Rs $(cat /tmp/pack1.txt)
```
### Method-2 : How To Install Listed Packages From A File In Linux With Help Of cat And xargs Command?
I even prefer to go with this method, because it is a very simple and straightforward method.
Use the following apt command to install listed packages from a file on Debian based systems such as Debian, Ubuntu and Linux Mint.
```
# cat /tmp/pack1.txt | xargs apt -y install
```
Use the following apt command to uninstall listed packages from a file on Debian based systems such as Debian, Ubuntu and Linux Mint.
```
# cat /tmp/pack1.txt | xargs apt -y remove
```
Use the following yum command to install listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
```
# cat /tmp/pack1.txt | xargs yum -y install
```
Use the following format to uninstall listed packages from a file on RHEL based systems such as CentOS, RHEL (Redhat) and OEL (Oracle Enterprise Linux).
```
# cat /tmp/pack1.txt | xargs yum -y remove
```
Use the following dnf command to install listed packages from a file on Fedora system.
```
# cat /tmp/pack1.txt | xargs dnf -y install
```
Use the following format to uninstall listed packages from a file on Fedora system.
```
# cat /tmp/pack1.txt | xargs dnf -y remove
```
Use the following zypper command to install listed packages from a file on openSUSE system.
```
# cat /tmp/pack1.txt | xargs zypper -n install
```
Use the following format to uninstall listed packages from a file on openSUSE system.
```
# cat /tmp/pack1.txt | xargs zypper -n remove
```
Use the following pacman command to install listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
```
# cat /tmp/pack1.txt | xargs pacman -S
```
Use the following format to uninstall listed packages from a file on Arch Linux based systems such as Manjaro and Antergos system.
```
# cat /tmp/pack1.txt | xargs pacman -Rs
```
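As a small aside, GNU xargs can also read the list straight from the file with its `-a` option, avoiding the `cat` entirely. For example, on Debian-based systems:
```
# xargs -a /tmp/pack1.txt apt -y install
```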
### Method-3 : How To Install Listed Packages From A File In Linux With Help Of For Loop Command?
Alternatively we can use the “For Loop” command to achieve this.
To install packages in bulk, use the below format to run a “For Loop” in a single line.
```
# for pack in `cat /tmp/pack1.txt` ; do apt -y install $pack; done
```
To install bulk packages with shell script use the following “For Loop”.
```
# vi /opt/scripts/bulk-package-install.sh
#!/bin/bash
for pack in `cat /tmp/pack1.txt`
do apt -y install $pack
done
```
Set executable permission on the `bulk-package-install.sh` file.
```
# chmod +x /opt/scripts/bulk-package-install.sh
```
Finally run the script to achieve this.
```
# sh /opt/scripts/bulk-package-install.sh
```
### Method-4 : How To Install Listed Packages From A File In Linux With Help Of While Loop Command?
Alternatively we can use the “While Loop” command to achieve this.
To install packages in bulk, use the below format to run a “While Loop” in a single line.
```
# file="/tmp/pack1.txt"; while read -r pack; do apt -y install $pack; done < "$file"
```
To install bulk packages with shell script use the following "While Loop".
```
# vi /opt/scripts/bulk-package-install.sh
#!/bin/bash
file="/tmp/pack1.txt"
while read -r pack
do apt -y install $pack
done < "$file"
```
Set executable permission on the `bulk-package-install.sh` file.
```
# chmod +x bulk-package-install.sh
```
Finally, run the script.
```
# sh bulk-package-install.sh
```
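One caveat with the “While Loop” approach: depending on the packages, apt (or dpkg underneath) can read from standard input, for example for configuration prompts, and that would silently consume the rest of the package list. A defensive sketch of the same script (the file name is my addition), with apt's stdin redirected away from the list file:
```
# vi /opt/scripts/bulk-package-install-safe.sh
#!/bin/bash
file="/tmp/pack1.txt"
while read -r pack
do
    # "< /dev/null" stops apt (or dpkg prompts) from reading, and eating, the package list
    apt -y install $pack < /dev/null
done < "$file"
```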
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/how-to-install-uninstall-listed-packages-from-a-file-in-linux/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/check-installed-packages-in-rhel-centos-fedora-debian-ubuntu-opensuse-arch-linux/
[2]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[3]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[5]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
[6]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/

View File

@@ -0,0 +1,202 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Linux Shell Script To Monitor Disk Space Usage And Send Email)
[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and-send-email/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
Linux Shell Script To Monitor Disk Space Usage And Send Email
======
There are numerous monitoring tools available in the market to monitor Linux systems, and they will send an email when the system reaches a given threshold limit.
They monitor everything such as CPU utilization, memory utilization, swap utilization, disk space utilization and much more.
They are suitable for both small and large environments.
But think about it: if you have only a few systems, what would be the best approach?
Yup, we want to write a **[shell script][1]** to achieve this.
In this tutorial, we are going to write a shell script to monitor disk space usage on a system.
When the system reaches the given threshold then it will trigger a mail to corresponding email id.
We have added a total of four shell scripts in this article, and each is used for a different purpose.
Later, we will come up with other shell scripts to monitor CPU, Memory and Swap utilization.
Before stepping into that, I would like to clarify one thing I noticed regarding the disk space usage shell script.
Many users commented on multiple blogs saying they were getting the following error message when running the disk space usage script.
```
# sh /opt/script/disk-usage-alert-old.sh
/dev/mapper/vg_2g-lv_root
test-script.sh: line 7: [: /dev/mapper/vg_2g-lv_root: integer expression expected
/ 9.8G
```
Yes, that's right. I faced the same issue the first time I ran the script; later, I found the root cause.
When you use “df -h” or “df -H” in a shell script for disk space alerts on RHEL 5 and RHEL 6 based systems, you will end up with the above error message, because the output is not in the proper format; see the below output.
To overcome this issue, we need to use “df -Ph” (POSIX output format); by default, “df -h” works fine on RHEL 7 based systems.
```
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_2g-lv_root
10G 6.7G 3.4G 67% /
tmpfs 7.8G 0 7.8G 0% /dev/shm
/dev/sda1 976M 95M 830M 11% /boot
/dev/mapper/vg_2g-lv_home
5.0G 4.3G 784M 85% /home
/dev/mapper/vg_2g-lv_tmp
4.8G 14M 4.6G 1% /tmp
```
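For comparison, here is a hypothetical illustration (based on the same root filesystem shown above) of how the POSIX output format keeps each filesystem on a single line, which is what makes the fields safe to parse:
```
# df -Ph | grep lv_root
/dev/mapper/vg_2g-lv_root  10G  6.7G  3.4G  67%  /
```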
### Method-1 : Linux Shell Script To Monitor Disk Space Usage And Send Email
You can use the following shell script to monitor disk space usage on a Linux system.
It will send an email when the system reaches the given threshold limit. In this example, we set the threshold at 60% for testing purposes, and you can change this limit as per your requirements.
It will send multiple mails if more than one file system reaches the given threshold limit, because the script uses a loop.
Also, replace our email id with yours to get these alerts.
```
# vi /opt/script/disk-usage-alert.sh
#!/bin/sh
df -Ph | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read output;
do
echo $output
used=$(echo $output | awk '{print $1}' | sed s/%//g)
partition=$(echo $output | awk '{print $2}')
if [ $used -ge 60 ]; then
echo "The partition \"$partition\" on $(hostname) has used $used% at $(date)" | mail -s "Disk Space Alert: $used% Used On $(hostname)" [email protected]
fi
done
```
**Output:** I got the following two email alerts.
```
The partition "/dev/mapper/vg_2g-lv_home" on 2g.CentOS7 has used 85% at Mon Apr 29 06:16:14 IST 2019
The partition "/dev/mapper/vg_2g-lv_root" on 2g.CentOS7 has used 67% at Mon Apr 29 06:16:14 IST 2019
```
Finally add a **[cronjob][2]** to automate this. It will run every 10 minutes.
```
# crontab -e
*/10 * * * * /bin/bash /opt/script/disk-usage-alert.sh
```
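It may also be worth running the script once by hand first, to confirm that mail delivery works on your system:
```
# sh /opt/script/disk-usage-alert.sh
```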
### Method-2 : Linux Shell Script To Monitor Disk Space Usage And Send Email
Alternatively, you can use the following shell script. We have made a few changes in this one compared with the above script.
```
# vi /opt/script/disk-usage-alert-1.sh
#!/bin/sh
df -Ph | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5,$1 }' | while read output;
do
max=60%
echo $output
used=$(echo $output | awk '{print $1}')
partition=$(echo $output | awk '{print $2}')
if [ ${used%?} -ge ${max%?} ]; then
echo "The partition \"$partition\" on $(hostname) has used $used at $(date)" | mail -s "Disk Space Alert: $used Used On $(hostname)" [email protected]
fi
done
```
**Output:** I got the following two email alerts.
```
The partition "/dev/mapper/vg_2g-lv_home" on 2g.CentOS7 has used 85% at Mon Apr 29 06:16:14 IST 2019
The partition "/dev/mapper/vg_2g-lv_root" on 2g.CentOS7 has used 67% at Mon Apr 29 06:16:14 IST 2019
```
Finally add a **[cronjob][2]** to automate this. It will run every 10 minutes.
```
# crontab -e
*/10 * * * * /bin/bash /opt/script/disk-usage-alert-1.sh
```
### Method-3 : Linux Shell Script To Monitor Disk Space Usage And Send Email
I would like to go with this method, since it works like a charm and you get a single email covering everything.
This is very simple and straightforward. Note that the `%` character is special in a crontab entry, so it has to be escaped as `\%`.
```
*/10 * * * * df -Ph | sed s/\%//g | awk '{ if($5 > 60) print $0;}' | mail -s "Disk Space Alert On $(hostname)" [email protected]
```
**Output:** I got a single mail for all alerts.
```
Filesystem Size Used Avail Use Mounted on
/dev/mapper/vg_2g-lv_root 10G 6.7G 3.4G 67 /
/dev/mapper/vg_2g-lv_home 5.0G 4.3G 784M 85 /home
```
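One side effect of this one-liner is that cron mails the output every 10 minutes even when nothing crosses the limit, because the header line is always printed. A minimal wrapper sketch (the script name `disk-usage-alert-3.sh` is my addition) that skips the header and only mails when at least one filesystem is above 60%:
```
# vi /opt/script/disk-usage-alert-3.sh
#!/bin/bash
# Hypothetical wrapper: send mail only when at least one filesystem is above 60%.
out=$(df -Ph | sed 's/%//g' | awk 'NR>1 && $5 > 60')
if [ -n "$out" ]; then
    echo "$out" | mail -s "Disk Space Alert On $(hostname)" [email protected]
fi
```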
### Method-4 : Linux Shell Script To Monitor Disk Space Usage Of Particular Partition And Send Email
If you want to monitor a particular partition, you can use the following shell script. Simply replace our filesystem name with yours.
```
# vi /opt/script/disk-usage-alert-2.sh
#!/bin/bash
used=$(df -Ph | grep '/dev/mapper/vg_2g-lv_dbs' | awk '{print $5}')
max=80%
if [ ${used%?} -ge ${max%?} ]; then
echo "The Mount Point "/DB" on $(hostname) has used $used at $(date)" | mail -s "Disk space alert on $(hostname): $used used" [email protected]
fi
```
**Output:** I got the following email alerts.
```
The Mount Point "/DB" on 2g.CentOS6 has used 82% at Mon Apr 29 06:16:14 IST 2019
```
Finally add a **[cronjob][2]** to automate this. It will run every 10 minutes.
```
# crontab -e
*/10 * * * * /bin/bash /opt/script/disk-usage-alert-2.sh
```
**Note:** You will get the email alert up to 10 minutes later, since the script is scheduled to run every 10 minutes (it is not exactly 10 minutes; it depends on the timing).
For example, if your system reaches the limit at 8:25, you will get the email alert about 5 minutes later, at the 8:30 run. I hope it's clear now.
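If you monitor several dedicated partitions this way, a hypothetical parameterised variant of the same script (the script name and argument handling are my additions) avoids keeping one copy per mount point:
```
# vi /opt/script/disk-usage-alert-4.sh
#!/bin/bash
# Hypothetical parameterised variant: pass the filesystem and the threshold as arguments.
# Usage: sh disk-usage-alert-4.sh /dev/mapper/vg_2g-lv_dbs 80
fs=$1
max=${2:-80}
used=$(df -Ph "$fs" | awk 'NR==2 {print $5}' | sed 's/%//')
if [ "$used" -ge "$max" ]; then
    echo "The filesystem \"$fs\" on $(hostname) has used $used% at $(date)" | mail -s "Disk Space Alert: $used% Used On $(hostname)" [email protected]
fi
```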
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/linux-shell-script-to-monitor-disk-space-usage-and-send-email/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/category/shell-script/
[2]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/

View File

@@ -0,0 +1,121 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (apt-clone : Backup Installed Packages And Restore Those On Fresh Ubuntu System)
[#]: via: (https://www.2daygeek.com/apt-clone-backup-installed-packages-and-restore-them-on-fresh-ubuntu-system/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
apt-clone : Backup Installed Packages And Restore Those On Fresh Ubuntu System
======
Package installation becomes much easier on Ubuntu/Debian based systems when we use the apt-clone utility.
apt-clone will work for you if you want to build a few systems with the same set of packages.
It is a time-consuming process to build and install the necessary packages manually on each system.
This can be achieved in many ways, and there are many utilities available in Linux.
We have already written an article about **[Aptik][1]** in the past.
It is one of the utilities that allow Ubuntu users to back up and restore system settings and data.
### What Is apt-clone?
[apt-clone][2] allows you to create a backup of all the installed packages on your Debian/Ubuntu systems, which can be restored on freshly installed systems (or containers) or into a directory.
This backup can be restored on multiple systems with the same operating system version and architecture.
### How To Install apt-clone?
The apt-clone package is available in the official Ubuntu/Debian repositories, so use **[apt Package Manager][3]** or **[apt-get Package Manager][4]** to install it.
Install apt-clone package using apt package manager.
```
$ sudo apt install apt-clone
```
Install apt-clone package using apt-get package manager.
```
$ sudo apt-get install apt-clone
```
### How To Backup Installed Packages Using apt-clone?
Once you have successfully installed the apt-clone package, simply give a location where you want to save the backup file.
We are going to save the installed packages backup under `/backup` directory.
The apt-clone utility will save the installed packages list into `apt-clone-state-Ubuntu18.2daygeek.com.tar.gz` file.
```
$ sudo apt-clone clone /backup
```
We can check it by running the ls command.
```
$ ls -lh /backup/
total 32K
-rw-r--r-- 1 root root 29K Apr 20 19:06 apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
```
Run the following command to view the details of the backup file.
```
$ apt-clone info /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
Hostname: Ubuntu18.2daygeek.com
Arch: amd64
Distro: bionic
Meta: libunity-scopes-json-def-desktop, ubuntu-desktop
Installed: 1792 pkgs (194 automatic)
Date: Sat Apr 20 19:06:43 2019
```
As per the above output, we have a total of 1792 packages in the backup file.
### How To Restore The Backup Which Was Taken Using apt-clone?
You can use any remote copy utility to copy the file to the remote server.
```
$ scp /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz Destination-Server:/opt
```
Once you have copied the file, perform the restore using the apt-clone utility.
Run the following command to restore it.
```
$ sudo apt-clone restore /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
```
Make a note: the restore will override your existing `/etc/apt/sources.list` and will install/remove packages, so be careful.
If you want to restore all the packages into a folder instead of performing an actual restore, you can do so using the following command.
```
$ sudo apt-clone restore /opt/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz --destination /opt/oldubuntu
```
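Since the backup is an ordinary gzip-compressed tar archive, you can also peek inside it with tar to see what apt-clone stored (the exact member names may vary between apt-clone versions):
```
$ tar -tzf /backup/apt-clone-state-Ubuntu18.2daygeek.com.tar.gz
```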
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/apt-clone-backup-installed-packages-and-restore-them-on-fresh-ubuntu-system/
作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/aptik-backup-restore-ppas-installed-apps-users-data/
[2]: https://github.com/mvo5/apt-clone
[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[4]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/

View File

@@ -0,0 +1,146 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to use autofs to mount NFS shares)
[#]: via: (https://opensource.com/article/18/6/using-autofs-mount-nfs-shares)
[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss)
如何使用 autofs 挂载 NFS 共享
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)
大多数 Linux 文件系统在引导时挂载,并在系统运行时保持挂载状态。对于已在 `fstab` 中配置的任何远程文件系统也是如此。但是,有时你可能希望仅按需挂载远程文件系统 - 例如,通过减少网络带宽使用来提高性能,或出于安全原因隐藏或混淆某些目录。[autofs][1] 软件包提供此功能。在本文中,我将介绍如何配置基本的自动挂载。
首先做点假设:假设有台 NFS 服务器 `tree.mydatacenter.net` 已经启动并运行。另外假设一个名为 `ourfiles` 的数据目录还有供 Carl 和 Sarah 使用的用户目录,它们都由服务器共享。
一些最佳实践可以让这套方案工作得更好:服务器上的用户和任何客户端工作站上的帐号应有相同的用户 ID此外你的工作站和服务器应有相同的域名。可以通过检查相关配置文件来确认这一点
```
alan@workstation1:~$ sudo getent passwd carl sarah
[sudo] password for alan:
carl:x:1020:1020:Carl,,,:/home/carl:/bin/bash
sarah:x:1021:1021:Sarah,,,:/home/sarah:/bin/bash
alan@workstation1:~$ sudo getent hosts
127.0.0.1       localhost
127.0.1.1       workstation1.mydatacenter.net workstation1
10.10.1.5       tree.mydatacenter.net tree
```
如你所见,客户端工作站和 NFS 服务器都在 `hosts` 文件中配置好了。这里我假设的是一个基本的家庭网络甚至小型办公室网络,它们可能缺乏合适的内部域名服务(即 DNS
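作为一个补充的小检查(下面的命令和输出只是基于上文 carl 帐号信息的假设示例),也可以用 `id` 命令快速核对某个用户在客户端上的 UID
```
alan@workstation1:~$ id carl
uid=1020(carl) gid=1020(carl) groups=1020(carl)
```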
### 安装软件包
你只需要安装两个软件包:用于 NFS 客户端的 `nfs-common` 和提供自动挂载的 `autofs`
```
alan@workstation1:~$ sudo apt-get install nfs-common autofs
```
你可以验证 autofs 是否已放在 `etc` 目录中:
```
alan@workstation1:~$ cd /etc; ll auto*
-rw-r--r-- 1 root root 12596 Nov 19  2015 autofs.conf
-rw-r--r-- 1 root root   857 Mar 10  2017 auto.master
-rw-r--r-- 1 root root   708 Jul  6  2017 auto.misc
-rwxr-xr-x 1 root root  1039 Nov 19  2015 auto.net*
-rwxr-xr-x 1 root root  2191 Nov 19  2015 auto.smb*
alan@workstation1:/etc$
```
### 配置 autofs
现在你需要编辑其中几个文件并添加 `auto.home` 文件。首先,将以下两行添加到文件 `auto.master` 中:
```
/mnt/tree  /etc/auto.misc
/home/tree  /etc/auto.home
```
每行以挂载 NFS 共享的目录开头。继续创建这些目录:
```
alan@workstation1:/etc$ sudo mkdir /mnt/tree /home/tree
```
接下来,将以下行添加到文件 `auto.misc`
```
ourfiles        -fstype=nfs     tree:/share/ourfiles
```
该行表示 autofs 将挂载 `auto.master` 文件中匹配 `auto.misc``ourfiles` 共享。如上所示,这些文件将在 `/mnt/tree/ourfiles` 目录中。
第三步,使用以下行创建文件 `auto.home`
```
*               -fstype=nfs     tree:/home/&
```
该行表示 autofs 将挂载 `auto.master` 文件中匹配 `auto.home` 的用户共享。在这种情况下Carl 和 Sarah 的文件将分别在目录 `/home/tree/carl` 和 `/home/tree/sarah` 中。星号(称为通配符)使每个用户的共享可以在登录时自动挂载。`&` 符号也可以作为表示服务器端用户目录的通配符。它们的主目录会相应地根据 `passwd` 文件映射。如果你更喜欢本地主目录,则无需执行此操作;相反,用户可以将其用作特定文件的简单远程存储。
最后,重启 `autofs` 守护进程,以便识别并加载这些配置的更改。
```
alan@workstation1:/etc$ sudo service autofs restart
```
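在使用 systemd 的发行版上,也可以用等价的 systemctl 写法重启该服务(这是一个假设的替代命令,视发行版而定):
```
alan@workstation1:/etc$ sudo systemctl restart autofs
```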
### 测试 autofs
如果切换到文件 `auto.master` 中列出的某个目录并运行 `ls` 命令,那么不会立即看到任何内容。例如,`cd` 到目录 `/mnt/tree`。首先,`ls` 的输出不会显示任何内容,但在运行 `cd ourfiles` 之后,将自动挂载 `ourfiles` 共享目录,`cd` 命令也会被执行,你将进入新挂载的目录中。
```
carl@workstation1:~$ cd /mnt/tree
carl@workstation1:/mnt/tree$ ls
carl@workstation1:/mnt/tree$ cd ourfiles
carl@workstation1:/mnt/tree/ourfiles$
```
为了进一步确认一切正常工作,可以用 `mount` 命令查看已挂载共享的细节。
```
carl@workstation1:~$ mount
tree:/mnt/share/ourfiles on /mnt/tree/ourfiles type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.1.22,local_lock=none,addr=10.10.1.5)
```
对于 Carl 和 Sarah`/home/tree` 目录的工作方式相同。
我发现在我的文件管理器中添加这些目录的书签很有用,可以用来快速访问。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares
作者:[Alan Formy-Duval][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/alanfdoss
[1]:https://wiki.archlinux.org/index.php/autofs