Merge pull request #1 from LCTT/master

merge
This commit is contained in:
Kevin Sicong Jiang 2018-05-22 16:52:02 -05:00 committed by GitHub
commit 9153d00805
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
87 changed files with 4930 additions and 2891 deletions


@ -0,0 +1,106 @@
探秘“栈”之旅
==============
早些时候,我们探索了 [“内存中的程序之秘”][2],了解了我们的程序是如何在一台电脑中运行起来的。今天,我们去探索*调用栈*,它在大多数编程语言和虚拟机中都默默地存在。在此过程中,我们将接触到一些平时很难见到的东西,像<ruby>闭包<rt>closure</rt></ruby>、递归、以及缓冲区溢出等等。但是,我们首先要做的事情是,描绘出栈是如何运作的。
栈非常重要,因为它追踪着一个程序中运行的*函数*,而函数又是一个软件的重要组成部分。事实上,程序的内部操作都是非常简单的。它大部分是由函数向栈中推入数据或者从栈中弹出数据的相互调用组成的,而在堆上为数据分配内存才能在跨函数的调用中保持数据。不论是低级的 C 软件还是像 JavaScript 和 C# 这样的基于虚拟机的语言,它们都是这样的。而对这些行为的深刻理解,对排错、性能调优以及大概了解究竟发生了什么是非常重要的。
当一个函数被调用时,将会创建一个<ruby>栈帧<rt>stack frame</rt></ruby>去支持函数的运行。这个栈帧包含函数的*局部变量*和调用者传递给它的*参数*。这个栈帧也包含了允许被调用的函数(*callee*)安全返回给其调用者的内部事务信息。栈帧的精确内容和结构因处理器架构和函数调用规则而不同。在本文中我们以 Intel x86 架构和使用 C 风格的函数调用(`cdecl`)的栈为例。下图是一个处于栈顶部的一个单个栈帧:
![](https://manybutfinite.com/img/stack/stackIntro.png)
在图上的场景中,有三个与栈相关的 CPU 寄存器。<ruby>栈指针<rt>stack pointer</rt></ruby> `esp`LCTT 译注:扩展栈指针寄存器)指向栈的顶部。栈的顶部总是被*最后一个推入到栈且还没有弹出*的东西所占据,就像现实世界中堆在一起的一叠盘子或者 100 美元大钞一样。
保存在 `esp` 中的地址始终在变化着,因为栈中的东西不停被推入和弹出,而它总是指向栈中最后推入的东西。许多 CPU 指令的一个副作用就是自动更新 `esp`,如果没有这样一个寄存器,栈也就无从使用了。
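栈的这种“后进先出”行为,可以用一个极简的 Python 列表来模拟(这只是帮助理解的补充示例,并非原文内容):

```python
# 用 Python 列表模拟栈:列表末尾相当于“栈顶”esp 指向的位置)
stack = []

stack.append("saved ebp")     # 推入push栈顶随之变化
stack.append("local_buffer")  # 再推入一个,它现在占据栈顶
print(stack[-1])              # → local_buffer栈顶总是最后推入的东西

stack.pop()                   # 弹出pop栈顶退回到上一个元素
print(stack[-1])              # → saved ebp
```
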
在 Intel 的架构中,绝大多数情况下,栈的增长是向着*低位内存地址*的方向。因此,这个“顶部”是栈中所含数据里内存地址最低的部分(在这种情况下,是 `local_buffer`)。注意,从 `esp` 指向 `local_buffer` 的箭头不是随意画的。这个箭头是认真的:它*专门*指向 `local_buffer` 所拥有的*第一个字节*,因为那正是保存在 `esp` 中的精确地址。
跟踪栈的第二个寄存器是 `ebp`LCTT 译注:扩展基址指针寄存器),它包含一个<ruby>基指针<rt>base pointer</rt></ruby>或者称为<ruby>帧指针<rt>frame pointer</rt></ruby>。它指向*当前运行*的函数的栈帧内的一个固定位置,并且它为参数和局部变量的访问提供一个稳定的参考点(基址)。仅当开始或者结束调用一个函数时,`ebp` 的内容才会发生变化。因此,我们可以很容易地以 `ebp` 为基准、按偏移量找到栈中的每个东西。如图所示。
不像 `esp``ebp` 大多数情况下是在程序代码中维护的,只花费很少的 CPU 开销。有时候,完全抛弃 `ebp` 会带来一些性能优势,可以通过 [编译标志][3] 来做到这一点。Linux 内核就是一个这样做的示例。
最后,按照惯例,`eax`LCTT 译注:扩展的 32 位通用数据寄存器)寄存器被用来向调用者传递大多数 C 数据类型的返回值。
现在,我们来看一下栈帧中的数据。下图按字节清晰地展示了栈帧的内容,就像你在调试器中所看到的那样,内存地址是从左到右、从顶部至底部增长的,如下图所示:
![](https://manybutfinite.com/img/stack/frameContents.png)
局部变量 `local_buffer` 是一个字节数组,包含一个以 null 结尾的 ASCII 字符串,这是 C 程序中的一个基本元素。这个字符串可以读取自任意地方,例如,从键盘输入或者来自一个文件,它只有 7 个字节的长度。因为 `local_buffer` 能保存 8 个字节,所以还剩下 1 个未使用的字节。*这个字节的内容是未知的*因为栈不断地推入和弹出而 C 编译器并不为栈帧初始化内存,所以除了你自己写入的之外,你根本不会知道内存中保存了什么。这使得一些人对此很困惑。
再往上走,`local1` 是一个 4 字节的整数,并且你可以看到每个字节的内容。它看上去像是一个很大的数字(在 8 后面跟着的都是零),在这里可能会误导你。
Intel 处理器是<ruby>小端<rt>little endian</rt></ruby>机器,这表示在内存中的数字是从小的一端开始存放的。因此,在一个多字节数字中,最不重要的部分在内存中处于最低的地址。因为一般情况下数字是从左边开始显示的,这背离了我们通常的数字表示方式。这种从小到大的机制使我想起《格列佛游记》:就像小人国的人们吃鸡蛋是从小头开始的一样Intel 处理器存放它们的数字也是从小端字节开始的。
因此,`local1` 事实上只保存了数字 8和章鱼的腿数量一样。然而`param1` 在第二个字节的位置有一个值 2因此它的数学上的值是 `2 * 256 = 512`(我们与 256 相乘,是因为每个位置的取值范围都是从 0 到 255。同时`param2` 承载的数值是 `1 * 256 * 256 = 65536`
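上面这种按字节取值的算法,可以用 Python 的 `struct` 模块验证一下(补充示例,并非原文内容):

```python
import struct

def le_bytes(n):
    """按小端字节序把一个 32 位无符号整数拆成 4 个字节"""
    return list(struct.pack("<I", n))

print(le_bytes(8))      # → [8, 0, 0, 0]local1最低地址的字节是 8
print(le_bytes(512))    # → [0, 2, 0, 0]param1第二个字节是 22 * 256 = 512
print(le_bytes(65536))  # → [0, 0, 1, 0]param2第三个字节是 11 * 256 * 256 = 65536
```
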
这个栈帧的内部数据是由两个重要的部分组成:*前一个*栈帧的地址(保存的 `ebp` 值)和函数退出才会运行的指令的地址(返回地址)。它们一起确保了函数能够正常返回,从而使程序可以继续正常运行。
现在,我们来看一下栈帧是如何产生的,并建立一张它们如何协同工作的蓝图。首先,栈的增长方式是很令人困惑的,因为它与你预期的方式相反。例如,要在栈上分配 8 个字节,就要从 `esp` 减去 8用减法来“增长”实在是一种奇怪的方式。
我们来看一个简单的 C 程序:
```
/* Simple Add Program - add.c */
int add(int a, int b)
{
int result = a + b;
return result;
}
int main(int argc)
{
int answer;
answer = add(40, 2);
}
```
*简单的加法程序 - add.c*
假设我们在 Linux 中不使用命令行参数去运行它。当你运行一个 C 程序时,实际运行的第一行代码是在 C 运行时库里,由它来调用我们的 `main` 函数。下图展示了程序运行时每一步都发生了什么。每个图链接的 GDB 输出展示了内存和寄存器的状态。你也可以看到所使用的 [GDB 命令][4],以及整个 [GDB 输出][5]。如下:
![](https://manybutfinite.com/img/stack/mainProlog.png)
第 2 步和第 3 步,以及下面的第 4 步,都只是函数的<ruby>序言<rt>prologue</rt></ruby>,几乎所有的函数都是这样的:`ebp` 的当前值被保存到了栈的顶部,然后,将 `esp` 的内容拷贝到 `ebp`,以建立一个新的栈帧。`main` 的序言和其它函数一样,但是,不同之处在于,当程序启动时 `ebp` 被清零。
如果你去检查栈中位于整型变量 `argc` 下方(右边)的内容,你将找到更多的数据,包括指向程序名和命令行参数(传统的 C 的 `argv`)的指针、以及指向 Unix 环境变量及其实际内容的指针。但是,在这里这些并不是重点,因此,继续向前调用 `add()`
![](https://manybutfinite.com/img/stack/callAdd.png)
`main``esp` 减去 12 得到它所需的栈空间,然后它为 `a``b` 设置值。内存中的值以十六进制展示,并且是小端格式,与你在调试器中看到的一样。一旦设置好参数值,`main` 就调用 `add`,并且开始运行:
![](https://manybutfinite.com/img/stack/addProlog.png)
现在,有一点小激动!我们进入了另一个函数序言,这次你可以清楚地看到栈帧是如何通过 `ebp` 在栈中串成一个链表的。这就是调试器和高级语言中的 `Exception` 对象对它们的栈进行跟踪的方法。当一个新帧产生时,你也可以再次看到 `ebp` 捕获 `esp` 这种典型操作。我们再次从 `esp` 中做减法得到更多的栈空间。
`ebp` 寄存器的值拷贝到内存时,这里也有一个稍微有些怪异的字节逆转。在这里发生的奇怪事情是,寄存器其实并没有字节顺序:因为对于内存,没有像寄存器那样的“增长的地址”。因此,惯例上调试器以对人类来说最自然的格式展示了寄存器的值:数位从最重要的到最不重要。因此,这个在小端机器中的副本的结果,与内存中常用的从左到右的标记法正好相反。我想用图去展示你将会看到的东西,因此有了下面的图。
在比较难懂的部分,我们增加了注释:
![](https://manybutfinite.com/img/stack/doAdd.png)
这是一个临时寄存器,用于帮你做加法,因此没有什么警报或者惊喜。对于加法这样的作业,栈的动作正好相反,我们留到下次再讲。
对于任何读到这里的人都应该有一个小礼物,因此,我做了一个大的图表展示了 [组合到一起的所有步骤][6]。
一旦把它们全部布置好了,看上去似乎很乏味,但这些小方框给我们提供了很多帮助。事实上,在计算机科学中,这些小方框是主要的展示工具。我希望这些图片和寄存器的移动能够提供一种直观的构想,将栈的增长和内存的内容整合到一起。从底层运作来看,我们的软件与一个简单的图灵机差不多。
这就是我们栈探秘的第一部分,再讲一些内容之后,我们将看到构建在这个基础上的高级编程的概念。下周见!
--------------------------------------------------------------------------------
via: https://manybutfinite.com/post/journey-to-the-stack/
作者:[Gustavo Duarte][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://duartes.org/gustavo/blog/about/
[1]:https://manybutfinite.com/post/journey-to-the-stack/
[2]:https://linux.cn/article-9255-1.html
[3]:http://stackoverflow.com/questions/14666665/trying-to-understand-gcc-option-fomit-frame-pointer
[4]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-commands.txt
[5]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-output.txt
[6]:https://manybutfinite.com/img/stack/callSequence.png


@ -1,31 +1,29 @@
成为你所在社区的美好力量
============================================================
>明白如何传递美好,了解积极意愿的力量,以及更多。
> 明白如何传递美好,了解积极意愿的力量,以及更多。
![Be a force for good in your community](https://opensource.com/sites/default/files/styles/image-full-size/public/images/life/people_remote_teams_world.png?itok=wI-GW8zX "Be a force for good in your community")
![Be a force for good in your community](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/people_remote_teams_world.png?itok=_9DCHEel)
> 图片来自opensource.com
激烈的争论是开源社区和开放组织的标志特征之一。在我们最好的日子里,这些争论充满活力和建设性。他们面红耳赤的背后其实是幽默和善意。各方实事求是,共同解决问题,推动持续改进。对我们中的许多人来说,他们只是单纯的娱乐而已。
激烈的争论是开源社区和开放组织的标志特征之一。在好的时候,这些争论充满活力和建设性。他们面红耳赤的背后其实是幽默和善意。各方实事求是,共同解决问题,推动持续改进。对我们中的许多人来说,他们只是单纯的喜欢而已。
然而在我们最糟糕的日子里,这些争论演变成了对旧话题的反复争吵。或者我们用各种方式来传递伤害和相互攻击,或是使用卑劣的手段,而这些侵蚀着我们社区的激情、信任和生产力。
然而在那些不好的日子里,这些争论演变成了对旧话题的反复争吵。或者我们用各种方式来传递伤害和相互攻击,或是使用卑劣的手段,而这些侵蚀着我们社区的激情、信任和生产力。
我们茫然四顾,束手无策,因为社区的对话开始变得有毒。然而,正如 [DeLisa Alexander最近的分享][1],我们每个人都有很多方法可以成为我们社区的一种力量。
我们茫然四顾,束手无策,因为社区的对话开始变得有毒。然而,正如 [DeLisa Alexander 最近的分享][1],我们每个人都有很多方法可以成为我们社区的一种力量。
在这个“开源文化”系列的第一篇文章中,我将分享一些策略,教你如何在这个关键时刻进行干预,引导每个人走向更积极、更有效率的方向。
### 不要将人推开,而是将人推向前方
最近,我和我的朋友和同事 [Mark Rumbles][2] 一起吃午饭。多年来,我们在许多支持开源文化和引领 Red Hat 的项目中合作。在这一天,马克问我是怎么坚持的,当我看到辩论变得越来越丑陋的时候,他看到我最近介入了一个邮件列表的对话。
最近,我和我的朋友和同事 [Mark Rumbles][2] 一起吃午饭。多年来,我们在许多支持开源文化和引领 Red Hat 的项目中合作。在这一天,Mark 问我,他看到我最近介入了一个邮件列表的对话,当其中的辩论越来越过分时我是怎么坚持的
幸运的是,这事早已尘埃落定,事实上我几乎忘记了谈话的内容。然而,它让我们开始讨论如何在一个拥有数千名成员的社区里,公开和坦率的辩论。
>在我们的社区里,我们成为一种美好力量的最好的方法之一就是:在回应冲突时,以一种迫使每个人提升他们的行为,而不是使冲突升级的方式。
Mark 说了一些让我印象深刻的话。他说:“你知道,作为一个社区,我们真的很擅长将人推开。但我想看到的是,我们更多的是互相扶持 _向前_ 。”
Mark 是绝对正确的。在我们的社区里,我们成为一种美好力量的最好的方法之一就是:在回应冲突时,以一种迫使每个人提升他们的行为,而不是使冲突升级的方式。
Mark 是绝对正确的。在我们的社区里,我们成为一种美好力量的最好的方法之一就是:以一种迫使每个人提升他们的行为的方式回应冲突,而不是使冲突升级的方式。
### 积极意愿假想
@ -33,9 +31,9 @@ Mark 是绝对正确的。在我们的社区里,我们成为一种美好力量
诚然,这不是一件容易的事情。当我看到一场辩论出现变得肮脏的迹象时,我会停下来,问自己史蒂芬·科维Steven Covey所说的“人性化问题”
“为什么一个理性、正直的人会做这样的事情?”
> “为什么一个理性、正直的人会做这样的事情?”
现在,如果他是你的一个“普通的观察对象”——一个有消极行为倾向的社区成员——也许你的第一个想法是,“嗯,也许这个人是个不靠谱,不理智的人”
现在,如果他是你的一个“普通的观察对象”—— 一个有消极行为倾向的社区成员——也许你的第一个想法是,“嗯,也许这个人是个不靠谱,不理智的人”
回过头来说,我并不是要让你自欺欺人。这之所以被称为人性化的问题,不仅是因为它让你理解别人的立场,还因为它让你自己变得人性化。
@ -51,7 +49,7 @@ Mark 是绝对正确的。在我们的社区里,我们成为一种美好力量
一个简单的积极意愿假想,我们可以适用于几乎所有的不良行为,其实就是那个人想要被聆听,被尊重,或被理解。我想这是相当合理的。
通过站在这个更客观、更有同情心的角度,我们可以看到他们的行为几乎肯定 **_不_** 帮助他们得到他们想要的东西,而社区也会因此而受到影响。如果没有我们的帮助的话。
通过站在这个更客观、更有同情心的角度,我们可以看到他们的行为几乎肯定 **_不_** 帮助他们得到他们想要的东西,而社区也会因此而受到影响。如果没有我们的帮助的话。
对我来说,这激发了一个愿望:帮助每个人从我们所处的这个丑陋的地方“摆脱困境”。
@ -60,7 +58,7 @@ Mark 是绝对正确的。在我们的社区里,我们成为一种美好力量
容易想到的例子包括:
* 他们担心我们错过了一些重要的东西,或者我们犯了一个错误,没有人能看到它。
* 他们想为自己的贡献感到有价值。
* 他们想感受到自己的贡献的价值。
* 他们精疲力竭,因为在社区里工作过度或者在他们的个人生活中发生了一些事情。
* 他们讨厌一些东西被破坏,并感到沮丧,因为没有人能看到造成的伤害或不便。
* ……诸如此类。
@ -69,11 +67,11 @@ Mark 是绝对正确的。在我们的社区里,我们成为一种美好力量
### 传递美好,挣脱泥潭
什么是 an out类似与佛家“解脱法门”的意思把它想象成一个逃跑的门。这是一种退出对话的方式或者放弃不良的行为恢复表现得像一个体面的人而不是丢面子。是叫某人振作向上而不是叫他走开。
什么是 an outLCTT 译注:类似于佛家“解脱法门”的意思)?把它想象成一个逃跑的门。这是一种退出对话的方式,或者放弃不良的行为,恢复表现得像一个体面的人,而不至于丢面子。是叫某人振作向上,而不是叫他走开。
你可能经历过这样的事情,在你的生活中,当 _你_ 在一次谈话中表现不佳时,咆哮着,大喊大叫,对某事大惊小怪,而有人慷慨地给 _你_ 提供了一个台阶下。也许他们选择不去和你“抬杠”,相反,他们说了一些表明他们相信你是一个理性、正直的人,他们采用积极意愿假想,比如:
> _所以,嗯,我听到的是你真的很担心,你很沮丧,因为似乎没有人在听。或者你担心我们忽略了它的重要性。是这样对吧?_
> 所以,嗯,我听到的是你真的很担心,你很沮丧,因为似乎没有人在听。或者你担心我们忽略了它的重要性。是这样对吧
于是乎:即使这不是完全正确的(也许你的意图不那么高尚),在那一刻,你可能抓住了他们提供给你的台阶,并欣然接受了重新定义你的不良行为的机会。你几乎可以肯定地转向一个更富有成效的角度,甚至你自己可能都没有意识到。
@ -85,13 +83,13 @@ Mark 是绝对正确的。在我们的社区里,我们成为一种美好力量
### 坏行为还是坏人?
如果这个人特别激动,他们可能不会听到或者接受你给出的第一台阶。没关系。最可能的是,他们迟钝的大脑已经被史前曾经对人类生存至关重要的杏仁接管了,他们需要更多的时间来认识到你并不是一个威胁。只是需要你保持温和的态度,坚定地对待他们,就好像他们 _曾经是_ 一个理性、正直的人,看看会发生什么。
如果这个人特别激动,他们可能不会听到或者接受你给出的第一个台阶。没关系。最可能的是,他们迟钝的大脑已经被史前时代曾经对人类生存至关重要的杏仁核接管了,他们需要更多的时间来认识到你并不是一个威胁。只是需要你保持温和的态度,坚定地对待他们,就好像他们 _是_ 一个理性、正直的人,看看会发生什么。
根据我的经验,这些社区干预以三种方式结束:
大多数情况下,这个人实际上 _是_ 一个理性的人,很快,他们就感激地接受了这个事实。在这个过程中,每个人都跳出了“黑与白”,“赢或输”的心态。人们开始思考创造性的选择和“双赢”的结果,每个人都将受益。
> 为什么一个理性、正直的人会做这样的事呢?
> 为什么一个理性、正直的人会做这样的事呢
有时候,这个人天生不是特别理性或正直的,但当他被你以如此一致的、不知疲倦的、耐心的慷慨和善良的对待的时候,他们就会羞愧地从谈话中撤退。这听起来像是,“嗯,我想我已经说了所有要说的了。谢谢你听我的意见”。或者,对于不那么开明的人来说,“嗯,我厌倦了这种谈话。让我们结束吧。”(好的,谢谢)。
@ -99,7 +97,7 @@ Mark 是绝对正确的。在我们的社区里,我们成为一种美好力量
这就是积极意愿假想的力量。通过对愤怒和充满敌意的言辞做出回应,优雅而有尊严地回应,你就能化解一场战争,理清混乱,解决棘手的问题,而且在这个过程中很有可能会交到一个新朋友。
我每次应用这个原则都成功吗?见鬼,不。但我从不后悔选择了积极意愿。但是我能生动的回想起,当我采用消极意愿假想时,将问题变得更糟糕的场景。
我每次应用这个原则都成功吗?见鬼,不。但我从不后悔选择积极意愿。而我能生动地回想起一些场景:当我采用消极意愿假想时,问题变得更加糟糕。
现在轮到你了。我很乐意听到你提出的一些策略和原则,当你的社区里的对话变得激烈的时候,要成为一股好力量。在下面的评论中分享你的想法。
@ -111,7 +109,7 @@ Mark 是绝对正确的。在我们的社区里,我们成为一种美好力量
![](https://opensource.com/sites/default/files/styles/profile_pictures/public/pictures/headshot-square_0.jpg?itok=FS97b9YD)
丽贝卡·费尔南德斯Rebecca Fernandez是红帽公司Red Hat的首席就业品牌 + 通讯专家,是《开组织》书籍的贡献者也是开源决策框架的维护者。她的兴趣是开源和业务管理模型的开源方式。Twitter@ruhbehka
丽贝卡·费尔南德斯Rebecca Fernandez是红帽公司Red Hat的首席就业品牌 + 传播专家,是《开放组织》一书的贡献者,也是开源决策框架的维护者。她的兴趣是开源以及用开源的方式进行业务管理。Twitter@ruhbehka
--------------------------------------------------------------------------------
@ -119,7 +117,7 @@ via: https://opensource.com/open-organization/17/1/force-for-good-community
作者:[Rebecca Fernandez][a]
译者:[chao-zhi](https://github.com/chao-zhi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -1,65 +1,67 @@
Taskwarrior 入门
基于命令行的任务管理器 Taskwarrior
=====
Taskwarrior 是一个灵活的[命令行任务管理程序][1],用他们[自己的话说][2]
Taskwarrior 是从你的命令行管理你的 TODO 列表。它灵活,快速,高效,不显眼,它默默做自己的事情让你避免自己管理。
> Taskwarrior 在命令行里管理你的 TODO 列表。它灵活,快速,高效,不显眼,它默默做自己的事情让你避免自己管理。
Taskwarrior 是高度可定制的,但也可以“立即使用”。在本文中,我们将向你展示添加和完成任务的基本命令,然后我们将介绍几个更高级的命令。最后,我们将向你展示一些基本的配置设置,以开始自定义你的设置。
### 安装 Taskwarrior
Taskwarrior 在 Fedora 仓库中是可用的,所以安装它很容易:
```
sudo dnf install task
```
一旦完成安装,运行 `task`。第一次运行将会创建一个 `~/.taskrc` 文件。
一旦完成安装,运行 `task` 命令。第一次运行将会创建一个 `~/.taskrc` 文件。
```
$ **task**
$ task
A configuration file could not be found in ~
Would you like a sample /home/link/.taskrc created, so Taskwarrior can proceed? (yes/no) yes
[task next]
No matches.
```
### 添加任务
添加任务快速而不显眼。
```
$ **task add Plant the wheat**
$ task add Plant the wheat
Created task 1.
```
运行 `task` 或者 `task list` 来显示即将来临的任务。
```
$ **task list**
$ task list
ID Age Description Urg
1 8s Plant the wheat 0
1 task
```
让我们添加一些任务来完成这个示例。
```
$ **task add Tend the wheat**
Created task 2.
$ **task add Cut the wheat**
Created task 3.
$ **task add Take the wheat to the mill to be ground into flour**
Created task 4.
$ **task add Bake a cake**
Created task 5.
```
$ task add Tend the wheat
Created task 2.
$ task add Cut the wheat
Created task 3.
$ task add Take the wheat to the mill to be ground into flour
Created task 4.
$ task add Bake a cake
Created task 5.
```
再次运行 `task` 来查看列表。
```
[task next]
@ -71,84 +73,83 @@ ID Age Description Urg
5 2s Bake a cake 0
5 tasks
```
### 完成任务
要将一个任务标记为完成,查找其 ID 并运行:
```
$ **task 1 done**
$ task 1 done
Completed task 1 'Plant the wheat'.
Completed 1 task.
```
你也可以用它的描述来标记一个任务已完成。
```
$ **task 'Tend the wheat' done**
$ task 'Tend the wheat' done
Completed task 1 'Tend the wheat'.
Completed 1 task.
```
通过使用 `add`, `list``done`,你可以说已经入门了。
通过使用 `add`、`list``done`,你可以说已经入门了。
### 设定截止日期
很多任务不需要一个截止日期:
```
task add Finish the article on Taskwarrior
```
但是有时候,设定一个截止日期正是你需要提高效率的动力。在添加任务时使用 `due` 修饰符来设置特定的截止日期。
```
task add Finish the article on Taskwarrior due:tomorrow
```
`due` 非常灵活。它接受特定日期 ("2017-02-02") 或 ISO-8601 ("2017-02-02T20:53:00Z"),甚至相对时间 ("8hrs")。可以查看所有示例的 [Date & Time][3] 文档。
`due` 非常灵活。它接受特定日期 (`2017-02-02`) 或 ISO-8601 (`2017-02-02T20:53:00Z`),甚至相对时间 (`8hrs`)。可以查看所有示例的 [Date & Time][3] 文档。
日期也不只有截止日期Taskwarrior 有 `scheduled`, `wait``until` 选项。
日期的用途不只是截止日期Taskwarrior 还有 `scheduled`、`wait``until` 选项。
```
task add Proof the article on Taskwarrior scheduled:thurs
```
一旦过了该日期(本例中的星期四),该任务就会被打上 `READY` 虚拟标记。它会显示在 `ready` 报告中。
```
$ **task ready**
$ task ready
ID Age S Description Urg
1 2s 1d Proof the article on Taskwarrior 5
```
要移除一个日期,使用空白值来 `modify` 任务:
```
$ task 1 modify scheduled:
```
### 查找任务
如果没有使用正则表达式搜索的能力,任务列表是不完整的,对吧?
```
$ **task '/.* the wheat/' list**
$ task '/.* the wheat/' list
ID Age Project Description Urg
2 42min Take the wheat to the mill to be ground into flour 0
1 42min Home Cut the wheat 1
2 tasks
```
### 自定义 Taskwarrior
记得我们在开头创建的文件 (`~/.taskrc`)吗?让我们来看看默认设置:
```
# [Created by task 2.5.1 2/9/2017 16:39:14]
# Taskwarrior program configuration file.
@ -180,41 +181,40 @@ data.location=~/.task
#include /usr//usr/share/task/solarized-dark-256.theme
#include /usr//usr/share/task/solarized-light-256.theme
#include /usr//usr/share/task/no-color.theme
```
现在唯一有效的选项是 `data.location=~/.task`。要查看活动配置设置(包括内置的默认设置),运行 `show`
现在唯一生效的选项是 `data.location=~/.task`。要查看活动配置设置(包括内置的默认设置),运行 `show`
```
task show
```
改变设置,使用 `config`
要改变设置,使用 `config`
```
$ **task config displayweeknumber no**
$ task config displayweeknumber no
Are you sure you want to add 'displayweeknumber' with a value of 'no'? (yes/no) yes
Config file /home/link/.taskrc modified.
```
### 示例
这些只是你可以用 Taskwarrior 做的一部分事情。
为你的任务分配一个项目:
将你的任务分配到一个项目:
```
task 'Fix leak in the roof' modify project:Home
```
使用 `start` 来标记你正在做的事情,这可以帮助你在周末过后回忆起之前在做什么:
```
task 'Fix bug #141291' start
```
使用相关的标签:
```
task add 'Clean gutters' +weekend +house
@ -229,7 +229,7 @@ via: https://fedoramagazine.org/getting-started-taskwarrior/
作者:[Link Dupont][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -0,0 +1,120 @@
在 KVM 中测试 IPv6 网络:第 2 部分
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_4.png?itok=yZBHylwd)
我们又见面了,在上一篇 [在 KVM 中测试 IPv6 网络:第 1 部分][1] 中,我们学习了有关 IPv6 私有地址的内容。今天,我们将使用 KVM 创建一个网络,去测试上一星期学习的 IPv6 的内容。
如果你想重新温习如何使用 KVM可以查看 [在 KVM 中创建虚拟机:第 1 部分][2] 和 [在 KVM 中创建虚拟机:第 2 部分— 网络][3]。
### 在 KVM 中创建网络
在 KVM 中你至少需要两个虚拟机。当然了,如果你愿意,也可以创建更多的虚拟机。在我的系统中有 Fedora、Ubuntu 以及 openSUSE。要创建一个新的 IPv6 网络,请在虚拟机管理器主窗口中打开 “Edit > Connection Details > Virtual Networks”。点击左下角的绿色十字按钮创建一个新的网络图 1
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kvm-fig-1_0.png?itok=ruqjPXxd)
*图 1创建一个网络*
给新网络输入一个名字,然后,点击 “Forward” 按钮。如果你愿意,也可以不创建 IPv4 网络。当你创建一个新的 IPv4 网络时,虚拟机管理器将不让你创建重复网络,或者是使用了一个无效地址。在我的宿主机 Ubuntu 系统上,有效的地址是以绿色高亮显示的,而无效地址是使用高亮的玫瑰红色调。在我的 openSUSE 机器上没有高亮颜色。启用或不启用 DHCP以及创建或不创建一个静态路由然后进入下一个窗口。
选中 “Enable IPv6 network address space definition”然后输入你的私有地址范围。你可以使用任何你希望的 IPv6 地址类,但是要注意,不能将你的实验网络泄漏到公网上去。我们将使用非常好用的 IPv6 唯一本地地址ULA并且使用 [Simple DNS Plus][4] 上的在线地址生成器来创建我们的网络地址。拷贝 “Combined/CID” 地址到网络框中(图 2
![network address][6]
*图 2拷贝 "Combined/CID" 地址到网络框中*
虚拟机管理器认为我的地址是无效的,因为它显示了高亮的玫瑰红色。它做得对吗?我们使用 `ipv6calc` 去验证一下:
```
$ ipv6calc -qi fd7d:844d:3e17:f3ae::/64
Address type: unicast, unique-local-unicast, iid, iid-local
Registry for address: reserved(RFC4193#3.1)
Address type has SLA: f3ae
Interface identifier: 0000:0000:0000:0000
Interface identifier is probably manual set
```
`ipv6calc` 认为没有问题。如果感兴趣,你可以把其中一个数字改成无效的东西,比如字母 `g`,然后再试一次。(提出“如果……会怎样?”的问题,并动手试错,是最好的学习方法。)
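如果手边没有 `ipv6calc`,也可以用 Python 标准库的 `ipaddress` 模块做类似的验证ULA 都落在 RFC 4193 保留的 `fc00::/7` 范围内;这只是一个补充示例,并非原文内容):

```python
import ipaddress

net = ipaddress.ip_network("fd7d:844d:3e17:f3ae::/64")

# ULA唯一本地地址都在 fc00::/7 这个保留范围内
print(net.subnet_of(ipaddress.ip_network("fc00::/7")))  # → True
print(net.is_private)                                   # → True

# 把其中一个数字改成无效的字母 g解析会直接报错
try:
    ipaddress.ip_network("fd7g:844d:3e17:f3ae::/64")
except ValueError:
    print("无效地址")
```
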
我们继续进行,启用 DHCPv6图 3。你可以接受缺省值或者输入一个你自己的设置值。
![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/kvm-fig-3.png?itok=F-oAAtN9)
*图 3 启用 DHCPv6*
我们将跳过缺省路由定义这一步,继续进入下一屏,在那里我们将启用 “Isolated Virtual Network” 和 “Enable IPv6 internal routing/networking”。
### 虚拟机网络选择
现在,你可以配置你的虚拟机去使用新的网络。打开你的虚拟机,然后点击顶部左侧的 “i” 按钮去打开 “Show virtual hardware details” 屏幕。在 “Add Hardware” 列点击 “NIC” 按钮去打开网络选择器,然后选择你喜欢的新的 IPv6 网络。点击 “Apply”然后重新启动。或者使用你喜欢的方法去重新启动网络或者更新你的 DHCP 租期。)
### 测试
`ifconfig` 会告诉我们什么?
```
$ ifconfig
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.30.207 netmask 255.255.255.0
broadcast 192.168.30.255
inet6 fd7d:844d:3e17:f3ae::6314
prefixlen 128 scopeid 0x0
inet6 fe80::4821:5ecb:e4b4:d5fc
prefixlen 64 scopeid 0x20
```
这是我们新的 ULA`fd7d:844d:3e17:f3ae::6314`,而 `fe80` 开头的那个是自动生成的本地链路地址。如果你有兴趣,可以 ping 一下网络上的其它虚拟机:
```
vm1 ~$ ping6 -c2 fd7d:844d:3e17:f3ae::2c9f
PING fd7d:844d:3e17:f3ae::2c9f(fd7d:844d:3e17:f3ae::2c9f) 56 data bytes
64 bytes from fd7d:844d:3e17:f3ae::2c9f: icmp_seq=1 ttl=64 time=0.635 ms
64 bytes from fd7d:844d:3e17:f3ae::2c9f: icmp_seq=2 ttl=64 time=0.365 ms
vm2 ~$ ping6 -c2 fd7d:844d:3e17:f3ae:a:b:c:6314
PING fd7d:844d:3e17:f3ae:a:b:c:6314(fd7d:844d:3e17:f3ae:a:b:c:6314) 56 data bytes
64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=1 ttl=64 time=0.744 ms
64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=2 ttl=64 time=0.364 ms
```
当你努力去理解子网时,这是一个可以让你快速测试不同地址能否正常工作的简便方法。你可以给单个接口分配多个 IP 地址,然后 ping 它们看一下会发生什么。在一个 ULA 中,接口(或者说主机)部分是 IP 地址的最后四个段,因此,只要处于同一个子网(本例中是 `f3ae`),你可以在那里做任何事情。在我的其中一个虚拟机上,我只改变了这个示例的接口 ID以展示对这四个段你可以做任何你想做的事情
```
vm1 ~$ sudo /sbin/ip -6 addr add fd7d:844d:3e17:f3ae:a:b:c:6314 dev ens3
vm2 ~$ ping6 -c2 fd7d:844d:3e17:f3ae:a:b:c:6314
PING fd7d:844d:3e17:f3ae:a:b:c:6314(fd7d:844d:3e17:f3ae:a:b:c:6314) 56 data bytes
64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=1 ttl=64 time=0.744 ms
64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=2 ttl=64 time=0.364 ms
```
现在,尝试使用不同的子网,在下面的示例中使用了 `f4ae` 代替 `f3ae`
```
$ ping6 -c2 fd7d:844d:3e17:f4ae:a:b:c:6314
PING fd7d:844d:3e17:f4ae:a:b:c:6314(fd7d:844d:3e17:f4ae:a:b:c:6314) 56 data bytes
From fd7d:844d:3e17:f3ae::1 icmp_seq=1 Destination unreachable: No route
From fd7d:844d:3e17:f3ae::1 icmp_seq=2 Destination unreachable: No route
```
这也是练习路由的好机会。以后我们将专门做一期,讲如何在不使用 DHCP 的情况下实现自动寻址。
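上面“同一子网可达、不同子网无路由”的现象,也可以用 `ipaddress` 模块直观地检查(补充示例,并非原文内容):

```python
import ipaddress

net = ipaddress.ip_network("fd7d:844d:3e17:f3ae::/64")

a = ipaddress.ip_address("fd7d:844d:3e17:f3ae:a:b:c:6314")  # 子网仍是 f3ae
b = ipaddress.ip_address("fd7d:844d:3e17:f4ae:a:b:c:6314")  # 子网换成 f4ae

print(a in net)  # → True同一子网链路内直接可达
print(b in net)  # → False不同子网需要路由器转发
```
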
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-2
作者:[CARLA SCHRODER][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://linux.cn/article-9594-1.html
[2]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-1
[3]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-2-networking
[4]:http://simpledns.com/private-ipv6.aspx
[5]:/files/images/kvm-fig-2.png
[6]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/kvm-fig-2.png?itok=gncdPGj- "network address"
[7]:https://www.linux.com/licenses/category/used-permission


@ -0,0 +1,74 @@
从专有到开源的十个简单步骤
======
> 这个共同福利并不适用于专有软件:保持隐藏的东西是不能照亮或丰富这个世界的。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_abstract_pieces.jpg?itok=tGR1d2MU)
“开源软件肯定不太安全,因为每个人都能看到它的源代码,而且他们能重新编译它,用他们自己写的不好的东西进行替换。”举手示意:谁之前听说过这个说法?^注1
当我和客户交谈的时候(是的,他们有时候会让我和客户交谈),对于这个领域^注2 的人来说,这种问题是很常见的。在前一篇文章“[许多只眼睛的审查并不一定能防止错误代码][1]”中,我谈论的是开源软件(尤其是安全软件)并不能神奇地比专有软件更安全,但是和专有软件比起来,我每次还是比较青睐开源软件。而我听到“开源软件不是很安全”这种问题,表明了有时候只是说“开源需要参与”是不够的,我们也需要积极地参与辩护^注3 。
我并不期望能够达到牛顿或者维特根斯坦的逻辑水平,但是我会尽我所能,而且我会在结尾做个总结,如果你感兴趣的话,可以去快速地浏览一下。
### 关键因素
首先,我们必须明白没有任何软件是完美的^注6 。无论是专有软件还是开源软件。第二,我们应该承认确实还是存在一些很不错的专有软件的,以及第三,也存在一些糟糕的开源软件。第四,有很多非常聪明的,很有天赋的,专业的架构师、设计师和软件工程师设计开发了专有软件。
但也有些问题:第五,从事专有软件的人员是有限的,而且你不可能总是能够雇佣到最好的员工。即使在政府部门或者公共组织 —— 他们拥有丰富的人才资源池,但在安全应用这块,他们的人才也是有限的。第六,可以查看、测试、改进、拆解、再次改进和发布开源软件的人总是无限的,而且还包含最好的人才。第七(也是我最喜欢的一条),这群人也包含很多编写专有软件的人才。第八,许多政府或者公共组织开发的软件也都逐渐开源了。
第九,如果你担心你正在运行的软件没有支持或者来源不明,好消息是:有一批组织^注7 会检查软件代码的来源,提供支持、维护和补丁更新。他们会像专有软件厂商那样运营开源软件,你也可以确保你从他们那里得到的软件是正确的软件:他们的技术标准就是对软件包进行签名,以便你可以验证你正在运行的开源软件不是来源不明或者恶意的软件。
第十(也是这篇文章的重点),当你运行开源软件、测试它、对问题进行反馈、发现问题并且报告的时候,你就是在为<ruby>共同福利<rt>commonwealth</rt></ruby>贡献知识、专业技能以及经验,这就是开源,它因为你所做的这些而变得更好。如果你是以个人身份或者通过支持开源软件的商业组织之一^注8 参与的,你已经成为了这个共同福利的一部分了。开源让软件变得越来越好,你可以看到它们的变化。没有什么是隐藏封闭的,它是完全开放的。事情会变坏吗?是的,但是我们能够及时发现问题并且修复。
这个共同福利并不适用于专有软件:保持隐藏的东西是不能照亮或丰富这个世界的。
我知道,作为一个英国人,在使用<ruby>共同福利<rt>commonwealth</rt></ruby>这个词的时候要小心谨慎;它带着与帝国的联系,但我所表达的不是这个意思。它也不是克伦威尔^注9 使用这个词时所表述的意思,无论如何,他是一个有争议的历史人物。我所表达的意思是,这个词是“共同”和“福利”的结合,这里的福利不是指钱,而是指全人类都能拥有的福利。
我真的很相信这一点。如果你想从这篇文章中得到一点虔诚的信息的话,那应该是第十条^注10 :共同福利是我们的遗产、我们的经验、我们的知识、我们的责任。共同福利是全人类都能拥有的。我们共同拥有它,而且它是一笔无法估量的财富。
### 便利贴
1. (几乎)没有一款软件是完美无缺的。
2. 有很好的专有软件。
3. 有不好的专有软件。
4. 有聪明、有才能、专注的人开发专有软件。
5. 从事开发完善专有软件的人是有限的,即使在政府或者公共组织也是如此。
6. 相对来说从事开源软件的人是无限的。
7. …而且包括很多从事专有软件的人才。
8. 政府和公共组织的人经常开源它们的软件。
9. 有商业组织会为你的开源软件提供支持。
10. 为开源软件做贡献 —— 即使只是使用它,也是一种贡献。
### 脚注
- 注1好的 —— 你现在可以放下手了。
- 注2这应该大写吗有特定的领域吗或者它是如何工作的我不确定。
- 注3我有一个英语文学和神学的学位 —— 这可能不会让我的文章的普通读者^注4 感到惊讶。
- 注4我希望不是因为我说了太多神学了^注5 。但是它经常充满了冗余的、无关紧要的人文科学引用。
- 注5Emacs每一次。
- 注6甚至是 Emacs。而且我知道有技术能够去证明一些软件的正确性。我怀疑 Emacs 不能全部通过它们……
- 注7注意这里我被他们其中之一 [Red Hat][3] 所雇佣,去查看一下 —— 它是一个有趣的工作地方,而且[我们经常在招聘][4]。
- 注8假设他们完全遵守他们正在使用的开源软件许可证。
- 注9昔日的“英格兰、苏格兰、爱尔兰的上帝守护者” —— 就是那位克伦威尔。
- 注10很明显我选择 Emacs 而不是 Vi 变体。
这篇文章原载于 [Alice, Eve, and Bob - a security blog][5],并且已经被授权重新发布。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/commonwealth-open-source
作者:[Mike Bursell][a]
译者:[FelixYFZ](https://github.com/FelixYFZ)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mikecamel
[1]:https://opensource.com/article/17/10/many-eyes
[2]:https://en.wikipedia.org/wiki/Apologetics
[3]:https://www.redhat.com/
[4]:https://www.redhat.com/en/jobs
[5]:https://aliceevebob.com/2017/10/24/the-commonwealth-of-open-source/


@ -0,0 +1,84 @@
如何创建适合移动设备的文档
=====
> 帮助用户在智能手机或平板上快速轻松地找到他们所需的信息。
![配图](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd)
我并不是完全相信[移动为先][1]的理念,但是我确实发现更多的人使用智能手机和平板电脑等移动设备来获取信息。这包括在线的软件和硬件文档,但它们大部分都是冗长的,不适合小屏幕。通常情况下,它的伸缩性不太好,而且很难导航。
当用户使用移动设备访问文档时,他们通常需要迅速获取信息以了解如何执行任务或解决问题,他们不想通过看似无尽的页面来寻找他们需要的特定信息。幸运的是,解决这个问题并不难。以下是一些技巧,可以帮助你构建文档以满足移动阅读器的需求。
### 简短一点
这意味着简短的句子,简短的段落和简短的流程。你不是在写一部长篇小说或一段长新闻。使你的文档简洁。尽可能使用少量的语言来获得想法和信息。
以广播新闻报道为示范:关注关键要素,用简单直接的语言对其进行解释。不要让你的读者在屏幕上看到冗长的文字。
另外,直接切入重点。关注读者需要的信息。在线发布的文档不应该像以前厚厚的手册一样。不要把所有东西都放在一个页面上,把你的信息分成更小的块。接下来是怎样做到这一点:
### 主题
在技术写作的世界里,主题是自成一体的、独立的信息块。每个主题都是你网站上的单个页面。读者应该能从特定的主题中获取他们需要的信息 —— 并且只是那些信息。要做到这一点,先选择哪些主题要包含在文档中,再决定如何组织它们:
### DITA
<ruby>[达尔文信息类型化体系结构][2]<rt>Darwin Information Typing Architecture</rt></ruby>DITA是一个用于撰写和发布的 XML 模型。它在技术写作中被[广泛采用][3],特别是用于较长的文档集。
我并不是建议你将文档转换为 XML除非你真的想。相反考虑将 DITA 的不同类型主题的概念应用到你的文档中:
* 一般:概述信息
* 任务:分步骤的流程
* 概念:背景或概念信息
* 参考API 参考或数据字典等专用信息
* 术语表:定义术语
* 故障排除:有关用户可能遇到的问题以及如何解决问题的信息
你会得到很多单独的页面。要连接这些页面:
### 链接
许多内容管理系统、维基和发布框架都包含某种形式的导航 —— 通常是目录或[面包屑导航][4],这是一种在移动设备上逐渐消失的导航。
为了加强导航,在主题之间添加明确的链接。将这些链接放在每个主题末尾的 **另请参阅****相关主题** 标题之下。每个部分应包含两到五个链接,指向与当前主题相关的概述、概念和参考主题。
如果你需要指向文档集之外的信息,请确保链接在浏览器的新标签页中打开。这样既把读者送到了另一个网站,又把读者留在了你的网站上。
这解决了文本问题,那么图片呢?
### 少用图片
除少数情况之外,不应该加太多图片到文档中。仔细查看文档中的每个图片,然后问自己:
* 它有用吗?
* 它是否增强了文档?
* 如果删除它,读者会错过这张图片吗?
如果回答否,那么移除图片。
另一方面,如果你绝对不能没有图片,就让它变成[响应式的][5]。这样,图片就会自动调整以适应更小的屏幕。
如果你仍然不确定图片是否应该出现Opensource.com 社区版主 Ben Cotton 提供了一个关于在文档中使用屏幕截图的[极好的解释][6]。
### 最后的想法
通过少量努力,你就可以构建适合移动设备用户的文档。此外,这些更改也改进了桌面计算机和笔记本电脑用户的文档体验。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/12/think-mobile
作者:[Scott Nesbitt][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/chrisshort
[1]:https://www.uxmatters.com/mt/archives/2012/03/mobile-first-what-does-it-mean.php
[2]:https://en.wikipedia.org/wiki/Darwin_Information_Typing_Architecture
[3]:http://dita.xml.org/book/list-of-organizations-using-dita
[4]:https://en.wikipedia.org/wiki/Breadcrumb_(navigation)
[5]:https://en.wikipedia.org/wiki/Responsive_web_design
[6]:https://opensource.com/business/15/9/when-does-your-documentation-need-screenshots


@ -0,0 +1,122 @@
两款 Linux 桌面端可用的科学计算器
======
> 如果你想找个高级的桌面计算器的话,你可以看看开源软件,以及一些其它有趣的工具。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76)
每个 Linux 桌面环境都至少带有一个功能简单的桌面计算器,但大多数计算器只能进行一些简单的计算。
幸运的是,还是有例外的。这里将介绍两款强大的计算器,它们能做的远不止开平方根和一些三角函数,而且都很容易上手,外加一大堆额外的好工具。
### SpeedCrunch
[SpeedCrunch][1] 是一款高精度科学计算器,有着简明的 Qt5 图像界面,并且强烈依赖键盘。
![SpeedCrunch graphical interface][3]
*SpeedCrunch 运行中*
它支持单位,并且可用在所有函数中。
例如,
```
2 * 10^6 newton / (meter^2)
```
你可以得到:
```
= 2000000 pascal
```
SpeedCrunch 默认会将结果转换为国际标准单位,但还是可以用 `in` 命令转换:
例如:
```
3*10^8 meter / second in kilo meter / hour
```
结果是:
```
= 1080000000 kilo meter / hour
```
`F5` 键可以将所有结果转为科学计数法(`1.08e9 kilo meter / hour``F2` 键可以只将那些很大或很小的数转为科学计数法。更多选项可以在配置页面找到。
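上面的单位换算也可以手工验证一下1 m/s 正好等于 3.6 km/h补充示例并非原文内容

```python
# 米每秒换算成千米每小时:乘 3600 秒/小时,再除以 1000 米/千米
def mps_to_kmh(v):
    return v * 3600 / 1000

c = 3 * 10**8                  # 示例中的速度3*10^8 m/s
print(mps_to_kmh(c))           # → 1080000000.0,与 SpeedCrunch 的结果一致
print(f"{mps_to_kmh(c):.2e}")  # → 1.08e+09,即按下 F5 后的科学计数法
```
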
可用函数的列表看上去非常壮观。它可以运行在 Linux、Windows 和 macOS 上。许可证是 GPLv2你可以在 [Bitbucket][4] 上得到它的源码。
### Qalculate!
[Qalculate!][5](有感叹号)有一段长而复杂的历史。
这个项目给了我们一个强大的库,这个库也可以被其它程序使用(在 Plasma 桌面中krunner 就用它来计算),以及一个用 GTK3 搭建的图形界面。它允许你转换单位、处理物理常量、绘制图形、使用复数、矩阵以及向量、选择任意精度,等等。
![Qalculate! Interface][7]
*在 Qalculate! 中查看物理常量*
在单位的使用方面Qalculate! 会比 SpeedCrunch 更加直观,而且可以识别一些常用前缀。你有听说过 exapascal 这个压力单位吗?反正我没有(太阳的中心大概是 `~26 PPa`),但 Qalculate! 可以准确理解 `1 EPa` 的意思。同时Qalculate! 对语法的处理也更加宽容,你不需要担心打括号的问题如果没有歧义Qalculate! 会直接给出正确答案。
有一段时间这个项目看上去被遗弃了。但在 2016 年,它又恢复了活力,在一年里更新了 10 个版本。它的许可证是 GPLv2源码在 [GitHub][8] 上提供),有 Linux、Windows 和 macOS 的版本。
### 更多计算器
#### ConvertAll
好吧,这不是“计算器”,但这个程序非常好用。
大部分单位转换器只是一个大的基本单位列表加上一大堆基本组合,但 [ConvertAll][9] 与它们不一样。有试过把“天文单位每年”转换为“英寸每秒”吗?不管说不说得通,只要你想转换任何种类的单位ConvertAll 就是你要的工具。
只需要在相应的输入框内输入转换前和转换后的单位:如果单位相容,你会直接得到答案。
主程序是在 PyQt5 上搭建的,但也有 [JavaScript 的在线版本][10]。
#### 带有单位包的 (wx)Maxima
有时候(好吧,很多时候)一款桌面计算器是不够你用的,这时你需要更多的“原力”。
[Maxima][11] 是一款计算机代数系统LCTT 译注:进行符号运算的软件,这种系统的要件是数学表达式的符号运算),你可以用它计算导数、积分、方程、特征值和特征向量、泰勒级数、拉普拉斯变换与傅立叶变换,进行任意精度的数字计算、绘制二维或三维图像……列出这些都够我们写几页纸的了。
[wxMaxima][12] 是一个设计精湛的 Maxima 图形前端,它简化了许多 Maxima 选项的使用,但并不影响其它功能。在 Maxima 的基础上wxMaxima 还允许你创建“笔记本”,你可以在上面写一些笔记、保存你的图像等。(wx)Maxima 最惊艳的功能之一是它可以处理量纲单位。
在提示符只需要输入:
```
load("unit")
```
`Shift+Enter`,等几秒钟的时间,然后你就可以开始了。
默认情况下,单位包使用基本的 MKS 单位,但如果你愿意,例如,你可以使用 `N` 作为单位,而不是 `kg*m/s^2`,只需要输入:`setunits(N)`。
Maxima 的帮助(也可以在 wxMaxima 的帮助菜单中找到)会给你更多信息。
你使用这些程序吗?你知道还有其它好的科学、工程用途的桌面计算器或者其它相关的计算器吗?在评论区里告诉我们吧!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/scientific-calculators-linux
作者:[Ricardo Berlasso][a]
译者:[zyk2290](https://github.com/zyk2290)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rgb-es
[1]:http://speedcrunch.org/index.html
[2]:/file/382511
[3]:https://opensource.com/sites/default/files/u128651/speedcrunch.png "SpeedCrunch graphical interface"
[4]:https://bitbucket.org/heldercorreia/speedcrunch
[5]:https://qalculate.github.io/
[6]:/file/382506
[7]:https://opensource.com/sites/default/files/u128651/qalculate-600.png "Qalculate! Interface"
[8]:https://github.com/Qalculate
[9]:http://convertall.bellz.org/
[10]:http://convertall.bellz.org/js/
[11]:http://maxima.sourceforge.net/
[12]:https://andrejv.github.io/wxmaxima/


@ -1,23 +1,25 @@
为什么建设一个社区值得额外的努力
======
> 建立 NethServer 社区是有风险的。但是我们从这些激情的人们所带来的力量当中学习到了很多。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_brandbalance.png?itok=XSQ1OU16)
当我们在 2003 年推出 [Nethesis][1] 时,我们还只是系统集成商。我们只使用有的开源项目。我们的业务模式非常明确:为这些项目增加多种形式的价值:实践知识、针对意大利市场的文档、额外模块、专业支持和培训课程。我们还通过向上游贡献代码并参与其社区来回馈上游项目。
当我们在 2003 年推出 [Nethesis][1] 时,我们还只是系统集成商。我们只使用现有的开源项目。我们的业务模式非常明确:为这些项目增加多种形式的价值:实践知识、针对意大利市场的文档、额外模块、专业支持和培训课程。我们还通过向上游贡献代码并参与其社区来回馈上游项目。
那时时代不同。我们不能太张扬地使用“开源”这个词。人们将它与诸如“书呆子”,“没有价值”而最糟糕的是“自由”这些词联系起来。这些不太适合生意。
那时时代不同。我们不能太张扬地使用“开源”这个词。人们将它与诸如“书呆子”,“没有价值”以及最糟糕的“免费”这些词联系起来。这些不太适合生意。
在 2010 年的一个星期六Nethesis 的工作人员,他们手中拿着馅饼和浓咖啡,正在讨论如何推进事情发展(嘿,我们喜欢在创新的同时吃喝东西!)。尽管势头对我们不利,但我们决定不改变方向。事实上,我们决定加大力度 - 去做开源和开放的工作方式,这是一个成功运营企业的模式。
在 2010 年的一个星期六Nethesis 的工作人员,他们手中拿着馅饼和浓咖啡,正在讨论如何推进事情发展(嘿,我们喜欢在创新的同时吃喝东西!)。尽管势头对我们不利,但我们决定不改变方向。事实上,我们决定加大力度 —— 去做开源和开放的工作方式,这是一个成功运营企业的模式。
多年来,我们已经证明了该模型的潜力。有一件事是我们成功的关键:社区。
在这个由三部分组成的系列文章中,我将解释社区在开放组织的存在中扮演的重要角色。我将探讨为什么一个组织希望建立一个社区,并讨论如何建立一个社区 - 因为我确实认为这是如今产生新创新的最佳方式。
在这个由三部分组成的系列文章中,我将解释社区在开放组织的存在中扮演的重要角色。我将探讨为什么一个组织希望建立一个社区,并讨论如何建立一个社区 —— 因为我确实认为这是如今产生新创新的最佳方式。
### 这个疯狂的想法
与 Nethesis 伙伴一起,我们决定构建自己的开源项目:我们自己的操作系统,它建立在 CentOS 之上(因为我们不想重新发明轮子)。我们假设我们拥有实现它的经验、实践知识和人力。我们感到很勇敢。
我们非常希望构建一个名为 [NethServer][2] 的操作系统,其使命是:通过开源使系统管理员的生活更轻松。我们知道我们可以为服务器创建一个 Linux 发行版,与当前提供的任何东西相比,这些发行版更容易获取、更易于使用,并且更易于理解。
我们非常希望构建一个名为 [NethServer][2] 的操作系统,其使命是:通过开源使系统管理员的生活更轻松。我们知道我们可以为服务器创建一个 Linux 发行版,与当前已有的相比,它更容易使用、更易于部署,并且更易于理解。
不过最重要的是我们决定创建一个真正的100 开放的项目,其主要规则有三条:
@ -25,13 +27,11 @@
* 开发公开
* 社区驱动
最后一个很重要。我们是一家公司。我们能够自己开发它。如果我们在内部完成这项工作,我们将会更有效(并且做出更快的决定)。与其他任何意大利公司一样,这将非常简单。
但是我们已经如此深入到开源文化中,所以我们选择了不同的路径。
我们确实希望有尽可能多的人围绕着我们、围绕着产品、围绕着公司周围。我们希望对工作有尽可能多的视角。我们意识到:独自一人,你可以走得快 - 但是如果你想走很远,你需要一起走。
我们确实希望有尽可能多的人围绕着我们、围绕着产品、围绕着公司周围。我们希望对工作有尽可能多的视角。我们意识到:独自一人,你可以走得快 —— 但是如果你想走很远,你需要一起走。
所以我们决定建立一个社区。
@ -41,11 +41,11 @@
但是很快就出现了这样一个问题:我们如何建立一个社区?我们不知道如何实现这一点。我们参加了很多社区,但我们从未建立过一个社区。
我们擅长编码 - 而不是人。我们是一家公司,是一个有非常具体优先事项的组织。那么我们如何建立一个社区,并在公司和社区之间建立良好的关系呢?
我们擅长编码 —— 而不是人。我们是一家公司,是一个有非常具体优先事项的组织。那么我们如何建立一个社区,并在公司和社区之间建立良好的关系呢?
我们做了你必须做的第一件事:学习。我们从专家、博客和许多书中学到了知识。我们进行了实验。我们失败了多次,从结果中收集数据,并再次进行测试。
最终我们学到了社区管理的黄金法则:没有社区管理的黄金法则。
最终我们学到了社区管理的黄金法则:**没有社区管理的黄金法则。**
人们太复杂了,社区无法用一条规则来“统治他们”。
@ -57,7 +57,7 @@ via: https://opensource.com/open-organization/18/1/why-build-community-1
作者:[Alessio Fattorini][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


@ -2,13 +2,14 @@ Linux 局域网路由新手指南:第 1 部分
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/traffic_warder.jpeg?itok=hZxS_PB4)
前面我们学习了 [IPv6 路由][1]。现在我们继续深入学习 Linux 中的 IPv4 路由的基础知识。我们从硬件概述、操作系统和 IPv4 地址的基础知识开始,下周我们将继续学习它们如何配置,以及测试路由。
### 局域网路由器硬件
Linux 实际上是一个网络操作系统,一直都是,从一开始它就有内置的网络功能。将你的局域网连入因特网,构建一个局域网路由器比起构建网关路由器要简单的多。你不要太过于执念安全或者防火墙规则,对于处理 NAT 它还是比较复杂的,网络地址转换是 IPv4 的一个痛点。我们为什么不放弃 IPv4 去转到 IPv6 呢?这样将使网络管理员的工作更加简单。
Linux 实际上是一个网络操作系统,一直都是,从一开始它就有内置的网络功能。将你的局域网连入因特网,构建一个局域网路由器比起构建网关路由器要简单的多。你不要太过于执念安全或者防火墙规则,对于处理网络地址转换NAT它还是比较复杂的NAT是 IPv4 的一个痛点。我们为什么不放弃 IPv4 去转到 IPv6 呢?这样将使网络管理员的工作更加简单。
有点跑题了。从理论上讲,你的 Linux 路由器是一个至少有两个网络接口的小型机器。Linux Gizmos 是一个单片机的综合体:[98 个开放规格的目录,黑客友好的 SBCs][2]。你能够使用一个很老的笔记本电脑或者台式计算机。你也可以使用一个精简版计算机,像 ZaReason Zini 或者 System76 Meerkat 一样,虽然这些有点贵,差不多要 $600。但是它们又结实又可靠并且你不用在 Windows 许可证上浪费钱。
有点跑题了。从理论上讲,你的 Linux 路由器是一个至少有两个网络接口的小型机器。Linux Gizmos 有一个很大的单板机名单:[98 个开放规格、适于黑客的 SBC 的目录][2]。你能够使用一个很老的笔记本电脑或者台式计算机。你也可以使用一个紧凑型计算机,像 ZaReason Zini 或者 System76 Meerkat 一样,虽然这些有点贵,差不多要 $600。但是它们又结实又可靠并且你不用在 Windows 许可证上浪费钱。
如果对路由器的要求不高,使用树莓派 3 Model B 作为路由器是一个非常好的选择。它有一个 10/100 以太网端口,板载 2.4GHz 的 802.11n 无线网卡,并且它还有四个 USB 端口,因此你可以插入多个 USB 网卡。USB 2.0 和低速的板载网卡可能会成为你的网络上的瓶颈,但是,你不能对它期望太高(毕竟它只卖 $35还不包括存储卡和电源。它支持很多种风格的 Linux因此你可以选择使用你喜欢的版本。基于 Debian 的树莓派系统是我的最爱。
@ -16,63 +17,63 @@ Linux 实际上是一个网络操作系统,一直都是,从一开始它就
你可以在你选择的硬件上安装将你喜欢的 Linux 的简化版,因为定制的路由器操作系统,比如 OpenWRT、 Tomato、DD-WRT、Smoothwall、Pfsense 等等,都有它们自己的非标准界面。我的观点是,没有必要这么麻烦,它们对你并没有什么帮助。尽量使用标准的 Linux 工具,因为你只需要学习它们一次就够了。
Debian 的网络安装镜像大约有 300MB 大小,并且支持多种架构,包括 ARM、i386、amd64 和 armhf。Ubuntu 的服务器网络安装镜像也小于 50MB这样你就可以控制你要安装哪些包。Fedora、Mageia、和 openSUSE 都提供精简的网络安装镜像。如果你需要创意,你可以浏览 [Distrowatch][3]。
### 路由器能做什么
我们需要网络路由器做什么?一个路由器连接不同的网络。如果没有路由,那么每个网络都是相互隔离的,所有的悲伤和孤独都没有人与你分享,所有节点只能孤独终老。假设你有一个 192.168.1.0/24 和一个 192.168.2.0/24 网络。如果没有路由器,你的两个网络之间不能相互沟通。这些都是 C 类的私有地址,它们每个都有 254 个可用网络地址。使用 `ipcalc` 可以非常容易地得到它们的这些信息:
```
$ ipcalc 192.168.1.0/24
Address: 192.168.1.0 11000000.10101000.00000001. 00000000
Netmask: 255.255.255.0 = 24 11111111.11111111.11111111. 00000000
Wildcard: 0.0.0.255 00000000.00000000.00000000. 11111111
=>
Network: 192.168.1.0/24 11000000.10101000.00000001. 00000000
HostMin: 192.168.1.1 11000000.10101000.00000001. 00000001
HostMax: 192.168.1.254 11000000.10101000.00000001. 11111110
Broadcast: 192.168.1.255 11000000.10101000.00000001. 11111111
Hosts/Net: 254 Class C, Private Internet
```
我喜欢 `ipcalc` 的二进制输出信息,它更加可视地表示了掩码是如何工作的。前三个八位组表示了网络地址,第四个八位组是主机地址,因此,当你分配主机地址时,你将 “掩盖” 掉网络地址部分,只使用剩余的主机部分。你的两个网络有不同的网络地址,而这就是如果两个网络之间没有路由器它们就不能互相通讯的原因。
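这个“掩盖”的过程也可以用 shell 的位运算直观地演示一下(以下只是一个示意性的小脚本,地址 192.168.1.77 和前缀 /24 均为假设,并非 `ipcalc` 的实现):

```
# 把点分十进制地址转换为 32 位整数
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
# 把 32 位整数转换回点分十进制
int_to_ip() {
  echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"
}

prefix=24
# /24 的掩码:高 24 位全为 1
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
# 主机地址与掩码按位与,“掩盖”掉主机部分,剩下的就是网络地址
net=$(( $(ip_to_int 192.168.1.77) & mask ))
int_to_ip "$net"    # 输出192.168.1.0
```

可以看到192.168.1.77 与掩码 255.255.255.0 按位与之后,得到的网络地址正是 192.168.1.0。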
每个八位组一共有 256 个值,但是它们并不能提供 256 个主机地址,因为第一个和最后一个值,也就是 0 和 255是被保留的。0 是网络标识,而 255 是广播地址,因此,只有 254 个主机地址。`ipcalc` 可以帮助你很容易地计算出这些。
当然,这并不意味着你不能有一个结尾是 0 或者 255 的主机地址。假设你有一个 16 位的前缀:
```
$ ipcalc 192.168.0.0/16
Address: 192.168.0.0 11000000.10101000. 00000000.00000000
Netmask: 255.255.0.0 = 16 11111111.11111111. 00000000.00000000
Wildcard: 0.0.255.255 00000000.00000000. 11111111.11111111
=>
Network: 192.168.0.0/16 11000000.10101000. 00000000.00000000
HostMin: 192.168.0.1 11000000.10101000. 00000000.00000001
HostMax: 192.168.255.254 11000000.10101000. 11111111.11111110
Broadcast: 192.168.255.255 11000000.10101000. 11111111.11111111
Hosts/Net: 65534 Class C, Private Internet
```
`ipcalc` 列出了你的第一个和最后一个主机地址,它们是 192.168.0.1 和 192.168.255.254。你是可以有以 0 或者 255 结尾的主机地址的,例如 192.168.1.0 和 192.168.0.255,因为它们都在最小主机地址和最大主机地址之间。
不论你的地址块是私有的还是公共的,这个原则同样都是适用的。不要羞于使用 `ipcalc` 来帮你计算地址。
### CIDR
CIDR无类域间路由就是通过可变长度的子网掩码来扩展 IPv4 的。CIDR 允许对网络空间进行更精细地分割。我们使用 `ipcalc` 来演示一下:
```
$ ipcalc 192.168.1.0/22
Address: 192.168.1.0 11000000.10101000.000000 01.00000000
Netmask: 255.255.252.0 = 22 11111111.11111111.111111 00.00000000
Wildcard: 0.0.3.255 00000000.00000000.000000 11.11111111
=>
Network: 192.168.0.0/22 11000000.10101000.000000 00.00000000
HostMin: 192.168.0.1 11000000.10101000.000000 00.00000001
HostMax: 192.168.3.254 11000000.10101000.000000 11.11111110
Broadcast: 192.168.3.255 11000000.10101000.000000 11.11111111
Hosts/Net: 1022 Class C, Private Internet
```
网络掩码并不局限于整个八位组,它可以跨越第三和第四个八位组,并且子网部分的范围可以是从 0 到 3而不是非得从 0 到 255。可用主机地址的数量并不一定是 8 的倍数,因为它不是由整个八位组定义的。
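以上面的 /22 为例,可以用位运算验证掩码确实跨越了八位组边界(仅作示意的小脚本,并非 `ipcalc` 的实现):

```
prefix=22
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
# 逐字节打印掩码:第三个八位组是 252既不是 0 也不是 255
echo "$(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))"
# 可用主机地址数2^10 - 2 = 1022
echo $(( (1 << (32 - prefix)) - 2 ))
```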
从 [理解 IP 地址和 CIDR 图表][4]、[IPv4 私有地址空间和过滤][5]、以及 [IANA IPv4 地址空间注册][6] 开始。接下来的我们将学习如何创建和管理路由器。
通过来自 Linux 基金会和 edX 的免费课程 [“Linux 入门”][7]学习更多 Linux 知识。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/2/linux-lan-routing-beginne
作者:[Carla Schroder][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


使用树莓派和 projectx/os 托管你自己的电子邮件
======
> 这个开源项目可以通过低成本的服务器设施帮助你保护你的数据隐私和所有权。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2)
现在有大量的理由,不能再将存储你的数据的任务委以他人之手,也不能在第三方公司运行你的服务;隐私、所有权以及防范任何人拿你的数据去“赚钱”。但是对于大多数人来说,自己去运行一个服务器,是件既费时间又需要太多的专业知识的事情。不得已,我们只能妥协。抛开这些顾虑,使用某些公司的云服务,随之而来的就是广告、数据挖掘和售卖、以及其它可能的任何东西。
[projectx/os][1] 项目就是要去除这种顾虑,它可以在家里毫不费力地做服务托管,并且可以很容易地创建一个类似于 Gmail 的帐户。实现上述目标,你只需一个 $35 的树莓派 3 和一个基于 Debian 的操作系统镜像 —— 并且不需要很多的专业知识。仅需要四步就可以实现:
1. 解压缩一个 ZIP 文件到 SD 存储卡中。
2. 编辑 SD 上的一个文本文件以便于它连接你的 WiFi如果你不使用有线网络的话
3. 将这个 SD 卡插到树莓派 3 中。
4. 使用你的智能手机在树莓派 3 上安装 “email 服务器” 应用并选择一个二级域。
服务器应用程序(比如电子邮件服务器)被分解到多个容器中,它们中的每个都只能够使用指定的方式与外界通讯,它们使用了管理粒度非常细的隔离措施以提高安全性。例如,入站 SMTP、[SpamAssassin][2](防垃圾邮件平台)、[Dovecot][3](安全 IMAP 服务器)以及 webmail 都使用了独立的容器,它们之间相互不能看到对方的数据,因此,单个守护进程出现问题不会波及其它的进程。
另外,它们都是无状态容器,比如 SpamAssassin 和入站 SMTP每次收到电子邮件之后它们的连接都会被拆除并重建因此即便是有人找到了 bug 并利用了它,他们也不能访问以前的电子邮件或者接下来的电子邮件;他们只能访问他们自己挖掘出漏洞的那封电子邮件。幸运的是,大多数对外发布的、最容易受到攻击的服务都是隔离的和无状态的。
所有存储的数据都使用 [dm-crypt][4] 进行加密。非公开的服务,比如 DovecotIMAP或者 webmail都是在内部监听并使用 [ZeroTier One][5] 所提供的私有的加密层叠网络,因此只有你的设备(智能手机、笔记本电脑、平板等等)才能访问它们。
虽然电子邮件并不是端到端加密的(除非你使用了 [PGP][6]),但是非加密的电子邮件绝不会跨越网络,并且也不会存储在磁盘上。现在明文的电子邮件只存在于双方的私有邮件服务器上,它们都在他们的家中受到很好的安全保护并且只能通过他们的客户端访问(智能手机、笔记本电脑、平板等等)。
### 展望
电子邮件是我使用 project/os 项目打包的第一个应用程序。想像一下,一个应用程序商店有全部的服务器软件,打包起来易于安装和使用。想要一个博客?添加一个 WordPress 应用程序!想替换安全的 Dropbox ?添加一个 [Seafile][8] 应用程序或者一个 [Syncthing][9] 后端应用程序。 [IPFS][10] 节点? [Mastodon][11] 实例GitLab 服务器?各种家庭自动化/物联网后端服务?这里有大量的非常好的开源服务器软件 ,它们都非常易于安装,并且可以使用它们来替换那些有专利的云服务。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/host-your-own-email
作者:[Nolan Leake][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


如何使用 Ansible 打补丁以及安装应用
======
> 使用 Ansible IT 自动化引擎节省更新的时间。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_osyearbook2016_sysadmin_cc.png?itok=Y1AHCKI4)
你有没有想过,如何打补丁、重启系统,然后继续工作?
如果你的回答是肯定的,那就需要了解一下 [Ansible][1] 了。它是一个配置管理工具,对于一些复杂的有时候需要几个小时才能完成的系统管理任务,又或者对安全性有比较高要求的时候,使用 Ansible 能够大大简化工作流程。
以我作为系统管理员的经验,打补丁是一项最有难度的工作。每次遇到<ruby>公共漏洞批露<rt>Common Vulnearbilities and Exposure</rt></ruby>CVE通知或者<ruby>信息保障漏洞预警<rt>Information Assurance Vulnerability Alert</rt></ruby>IAVA时都必须要高度关注安全漏洞否则安全部门将会严肃追究自己的责任。
使用 Ansible 可以通过运行[封装模块][2]以缩短打补丁的时间,下面以 [yum 模块][3]更新系统为例,使用 Ansible 可以执行安装、更新、删除、从其它地方安装(例如持续集成/持续开发中的 `rpmbuild`)。以下是系统更新的任务:
```
- name: update the system
yum:
name: "*"
state: latest
```
在第一行,我们给这个任务命名,这样可以清楚 Ansible 的工作内容。第二行表示使用 `yum` 模块在 CentOS 虚拟机中执行更新操作。第三行 `name: "*"` 表示更新所有程序。最后一行 `state: latest` 表示更新到最新的 RPM。
系统更新结束之后,需要重新启动并重新连接:
```
- name: restart system to reboot to newest kernel
shell: "sleep 5 && reboot"
async: 1
poll: 0
- name: wait for 10 seconds
pause:
seconds: 10
- name: wait for the system to reboot
wait_for_connection:
connect_timeout: 20
sleep: 5
delay: 5
timeout: 60
- name: install epel-release
yum:
name: epel-release
state: latest
```
`shell` 模块中的命令让系统在 5 秒休眠之后重新启动,我们使用 `sleep` 来保持连接不断开,使用 `async` 设定最大等待时长以避免发生超时,`poll` 设置为 0 表示直接执行不需要等待执行结果。暂停 10 秒钟以等待虚拟机恢复,使用 `wait_for_connection` 在虚拟机恢复连接后尽快连接。随后由 `install epel-release` 任务检查 RPM 的安装情况。你可以对这个剧本执行多次来验证它的幂等性,唯一会显示造成影响的是重启操作,因为我们使用了 `shell` 模块。如果不想造成实际的影响,可以在使用 `shell` 模块的时候 `changed_when: False`
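`changed_when: False` 的写法大致如下(示意写法,基于前面的重启任务,仅供参考):

```
- name: restart system to reboot to newest kernel
  shell: "sleep 5 && reboot"
  async: 1
  poll: 0
  changed_when: false
```

这样这个任务在剧本输出中会报告为 `ok` 而不是 `changed`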
现在我们已经知道如何对系统进行更新、重启虚拟机、重新连接、安装 RPM 包。下面我们通过 [Ansible Lightbulb][4] 来安装 NGINX:
```
- name: Ensure nginx packages are present
yum:
name: nginx, python-pip, python-devel, devel
state: present
notify: restart-nginx-service
- name: Ensure uwsgi package is present
pip:
name: uwsgi
state: present
notify: restart-nginx-service
- name: Ensure latest default.conf is present
template:
src: templates/nginx.conf.j2
dest: /etc/nginx/nginx.conf
backup: yes
notify: restart-nginx-service
- name: Ensure latest index.html is present
template:
src: templates/index.html.j2
dest: /usr/share/nginx/html/index.html
- name: Ensure nginx service is started and enabled
service:
name: nginx
state: started
enabled: yes
- name: Ensure proper response from localhost can be received
uri:
url: "http://localhost:80/"
return_content: yes
register: response
until: 'nginx_test_message in response.content'
retries: 10
delay: 1
```
以及用来重启 nginx 服务的操作文件:
```
# 安装 nginx 的操作文件
- name: restart-nginx-service
service:
name: nginx
state: restarted
```
在这个角色里,我们使用 RPM 安装了 `nginx`、`python-pip`、`python-devel`、`devel`,用 PIP 安装了 `uwsgi`,接下来使用 `template` 模块复制 `nginx.conf``index.html` 以显示页面,并确保服务在系统启动时启动。然后就可以使用 `uri` 模块检查到页面的连接了。
这个是一个系统更新、系统重启、安装 RPM 包的剧本示例,后续可以继续安装 nginx当然这里可以替换成任何你想要的角色和应用程序。
```
- hosts: all
roles:
- centos-update
- nginx-simple
```
观看演示视频了解这个过程。
这只是关于如何更新系统、重启以及后续工作的示例。简单起见,我只添加了不带[变量][5]的包,当你在操作大量主机的时候,你就需要修改其中的一些设置了:
- [async & poll](https://docs.ansible.com/ansible/latest/playbooks_async.html)
- [serial](https://docs.ansible.com/ansible/latest/playbooks_delegation.html#rolling-update-batch-size)
- [forks](https://docs.ansible.com/ansible/latest/intro_configuration.html#forks)
这是由于在生产环境中如果你想逐一更新每一台主机的系统,你需要花相当一段时间去等待主机重启才能够继续下去。
有关 Ansible 进行自动化工作的更多用法,请查阅[其它文章][6]。
via: https://opensource.com/article/18/3/ansible-patch-systems
作者:[Jonathan Lozada De La Matta][a]
译者:[HankChow](https://github.com/HankChow)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


使用 syslog-ng 可靠地记录物联网事件
======
> 用增强的日志守护进程 syslog-ng 来监控你的物联网设备。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openwires_fromRHT_520_0612LL.png?itok=PqZi55Ab)
现在,物联网设备和嵌入式系统越来越多。对于许多连接到因特网或者一个网络的设备来说,记录事件很有必要,因为你需要知道这些设备都做了些什么事情,这样你才能够解决可能出现的问题。
可以考虑去使用的一个监视工具是开源的 [syslog-ng][1] 应用程序,它是一个强化的、致力于可移植的、中心化的日志收集守护程序。它可以从许多不同种类的源、进程来收集日志,并且可以对这些日志进行处理和过滤,也可以存储或者路由它们以便于做进一步的分析。syslog-ng 的大多数代码是用高效率的、高可移植的 C 代码写成的。它能够适用于各种场景,无论你是将它运行在一个处理能力很弱的设备上做一些简单的事情,还是运行在数据中心从成千上万的机器中收集日志的强大应用,它都能够胜任。
你可能注意到在这个段落中,我使用了大量的溢美之词。为了让你更清晰地了解它,我们来复习一下,但这将花费更多的时间,也了解得更深入一些。
### 日志
首先解释一下日志。<ruby>日志<rt>logging</rt></ruby>是记录一台计算机上事件的东西。在一个典型的 Linux 机器上,你可以在 `/var/log` 目录中找到这些信息。例如,如果你通过 SSH 登到机器中,你将可以在其中一个日志文件中找到类似于如下内容的信息:
```
Jan 14 11:38:48 linux-0jbu sshd[7716]: Accepted publickey for root from 127.0.0.1 port 48806 ssh2
```
日志的内容可能是关于你的 CPU 过热、通过 HTTP 下载了一个文档,或者你的应用程序认为重要的任何东西。
### syslog-ng
正如我在上面所写的那样syslog-ng 应用程序是一个强化的、致力于可移植性、和中心化的日志收集守护程序。守护程序的意思是syslog-ng 是一个持续运行在后台的应用程序,在这里,它用于收集日志信息。
虽然现在大多数应用程序的 Linux 测试是限制在 x86_64 的机器上但是syslog-ng 也可以运行在大多数 BSD 和商业 UNIX 变种版本上的。从嵌入式/物联网的角度来看,这种能够运行在不同的 CPU 架构(包括 32 位和 64 位的 ARM、PowerPC、MIPS 等等)的能力甚至更为重要。(有时候,我通过阅读关于 syslog-ng 如何使用它们来学习新架构)
为什么中心化的日志收集如此重要?其中一个很重要的原因是易于使用,因为它放在一个地方,不用到成百上千的机器上挨个去检查它们的日志。另一个原因是可用性 —— 即使一个设备不论是什么原因导致了它不可用,你可以检查这个设备的日志信息。第三个原因是安全性;当你的设备被黑,检查设备日志可以发现攻击的踪迹。
### syslog-ng 的四种用法
syslog-ng 有四种主要的用法:收集、处理、过滤、和保存日志信息。
**收集信息:** syslog-ng 能够从各种各样的 [特定平台源][2] 上收集信息,比如 `/dev/log``journal`,或者 `sun-streams`。作为一个中心化的日志收集器,传统的(`rfc3164`)和最新的(`rfc5424`)系统日志协议、以及它们基于 UDP、TCP 和加密连接的各种变种,它都是支持的。你也可以从管道、套接字、文件、甚至应用程序输出来收集日志信息(或者各种文本数据)。
**处理日志信息:** 它的处理能力几乎是无限的。你可以用它内置的解析器来分类、规范以及结构化日志信息。如果它没有为你提供在你的应用场景中所需要的解析器,你甚至可以用 Python 来自己写一个解析器。你也可以使用地理数据来丰富信息,或者基于信息内容来附加一些字段。日志信息可以按处理它的应用程序所要求的格式进行重新格式化。你也可以重写日志信息 —— 当然了,不是篡改日志内容 —— 比如在某些情况下,需要满足匿名要求的信息。
**过滤日志:** 过滤日志的用法主要有两种:丢弃不需要保存的日志信息(像调试级别的信息),以及路由日志信息(确保正确的日志到达正确的目的地)。后一种用法的一个例子是转发所有的认证相关的信息到一个安全信息与事件管理系统SIEM
**保存信息:** 传统的做法是,将文件保存在本地或者发送到中心化日志服务器;不论是哪种方式,它们都被发送到一个[普通文件][3]。经过这些年的改进syslog-ng 已经开始支持 SQL 数据库,并且在过去的几年里,包括 HDFS、Kafka、MongoDB、和 Elasticsearch 在内的大数据存储,都被加入到 syslog-ng 的支持中。
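把这四种用法串起来,就是一条典型的 syslog-ng 日志路径。下面是一个极简的配置草图(纯属示意:其中的名字、过滤级别和文件路径都是假设的,并非某个真实部署):

```
# 收集:从本地系统日志源读取
source s_local { system(); internal(); };

# 过滤:丢弃调试级别的信息,只保留 info 及以上级别
filter f_info { level(info..emerg); };

# 保存:写入一个普通文件
destination d_file { file("/var/log/filtered.log"); };

# 把源、过滤器和目的地串成一条日志路径
log { source(s_local); filter(f_info); destination(d_file); };
```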
### 消息格式
当在你的 `/var/log` 目录中查看消息时,你将看到(如上面的 SSH 信息)大量的消息都是如下格式的内容:
```
日期 + 主机名 + 应用名 + 一句几乎完整的英文信息
```
在这里的每个应用程序事件都是用不同的语法描述的,基于这些数据去创建一个报告是个痛苦的任务。
### 物联网日志
我们从一个棘手的问题开始:哪个版本的 syslog-ng 最流行?在你回答之前,想想如下这些事实:这个项目启动于 20 年以前Red Hat 企业版 Linux EPEL 已经有了 3.5 版,而当前版本是 3.14。当我在我的演讲中问到这个问题时观众通常回答是他们用的 Linux 发行版中自带的那个。你们绝对想不到的是,正确答案竟然是 1.6 版最流行,这个版本已经有 15 年的历史了。为什么这个版本是最为流行的?因为它是包含在亚马逊 Kindle 电子书阅读器中的版本,而这种阅读器运行在全球范围内超过 1 亿台的设备上。另外一个在消费类设备上运行 syslog-ng 的例子是 BMW i3 电动汽车。
Kindle 使用 syslog-ng 去收集关于用户在这台设备上都做了些什么事情所有可能的信息。在 BMW 电动汽车上syslog-ng 所做的事情更复杂,基于内容过滤日志信息,并且在大多数情况下,只记录最重要的日志。
使用 syslog-ng 的其它类别设备还有网络和存储。一些比较知名的例子有Turris Omnia 开源 Linux 路由器和群晖 NAS 设备。在大多数案例中syslog-ng 是在设备上作为一个日志客户端来运行,但是在有些案例中,它运行为一个有富 Web 界面的中心日志服务器。
你还可以在一些行业服务中找到 syslog-ng 的身影。它运行在来自美国国家仪器有限公司NI的实时 Linux 设备上,执行测量和自动化任务。它也被用于从定制开发的应用程序中收集日志。从命令行就可以做配置,但是一个漂亮的 GUI 可用于浏览日志。
最后还有大量的项目比如汽车和飞机syslog-ng 在它们上面既可以运行为客户端也可以运行为服务端。在这种使用案例中syslog-ng 一般用来收集所有的日志和测量数据,然后发送它们到处理这些日志的中心化服务器集群上,然后保存它们到支持大数据的目的地,以备进一步分析。
### 对物联网的整体益处
via: https://opensource.com/article/18/3/logging-iot-events-syslog-ng
作者:[Peter Czanik][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


Jupyter Notebooks 入门
=====
> 通过 Jupyter 使用实时代码、方程式和可视化及文本创建交互式的共享笔记本。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
自从有了纸莎草纸以来,出版人们一直在努力以吸引读者的方式来格式化数据。尤其是在数学、科学、和编程领域,设计良好的图表、插图和方程式可以成为帮助人们理解技术信息的关键
[Jupyter Notebook][1] 通过重新构想我们如何制作教学文本来解决这个问题。Jupyter (我在 2017 年 10 月在 [All Things Open][2] 上首次了解到)是一款开源应用程序,它使用户能够创建包含实时代码、方程式、可视化和文本的交互式共享笔记本
Jupyter 从 [IPython 项目][3]发展而来,后者是一个具有交互式 shell 和基于浏览器的笔记本的项目,支持代码、文本和数学表达式。Jupyter 支持超过 40 种编程语言,包括 Python、R 和 Julia其代码可以导出为 HTML、LaTeX、PDF、图像和视频或者作为 [IPython][4] 笔记本与其他用户共享。
> 一个有趣的事实是“Jupyter” 是 “Julia、Python 和 R” 的缩写。
根据 Jupyter 项目网站介绍,它的一些用途包括“数据清理和转换,数值模拟,统计建模,数据可视化,机器学习等等”。科学机构正在使用 Jupyter Notebooks 来解释研究结果。代码可以来自实际数据可以调整和重新调整以可视化成不同的结果和情景。通过这种方式Jupyter Notebooks 变成了生动的文本和报告。
### 安装并开始 Jupyter
Jupyter 软件是开源的,其授权于[修改过的 BSD 许可证][5],它可以[安装在 Linux、MacOS 或 Windows 上][6]。有很多种方法可以安装 Jupyter我在 Linux 和 MacOS 上试过 PIP 和 [Anaconda][7] 安装方式。PIP 安装要求你的计算机上已经安装了 PythonJupyter 推荐 Python 3。
由于 Python 3 已经安装在我的电脑上,我通过在终端(在 Linux 或 Mac 上)运行以下命令来安装 Jupyter
```
$ python3 -m pip install --upgrade pip
$ python3 -m pip install jupyter
```
在终端提示符输入以下命令立即启动应用程序:
```
$ jupyter notebook
```
很快,我的浏览器打开并显示了位于 `http://localhost:8888` 的 Jupyter Notebook 服务器。(支持的浏览器有 Google Chrome、Firefox 和 Safari
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_1.png?itok=UyM1GuVG)
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_2.png?itok=alDI432q)
一个带有一些默认值的新笔记本,它可以被改变(包括笔记本的名字),打开。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_3.png?itok=9zjG-5JC)
笔记本有两种不同的模式:“命令模式”和“编辑模式”。命令模式允许你添加或删除单元格。你可以通过按下 `Escape` 键进入命令模式,按 `Enter` 键或单击单元格进入编辑模式。
单元格周围的绿色高亮显示你处于编辑模式,蓝色高亮显示你处于命令模式。以下笔记本处于命令模式并准备好执行单元中的 Python 代码。注意,我已将笔记本的名称更改为 “First Notebook”。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_4.png?itok=-QPxcuFX)
### 使用 Jupyter
Jupyter Notebooks 的强大之处在于除了能够输入代码之外,你还可以用 Markdown 添加叙述性和解释性文本。我想添加一个标题,所以我在代码上面添加了一个单元格,并以 Markdown 输入了一个标题。当我按下 `Ctrl+Enter` 时,我的标题转换为 HTML。LCTT 译注:或者可以按下 Run 按钮。)
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_5.png?itok=-sr9A8-W)
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_6.png?itok=o_g38ECp)
我也可以利用 IPython 的 [line magic 和 cell magic][8] 命令。你可以通过在代码单元内附加 `%``%%` 符号来列出魔术命令。例如,`%lsmagic` 将输出所有可用于 Jupyter notebooks 的魔法命令。
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/jupyter_7.png?itok=uit0PtND)
这些魔术命令的例子包括 `%pwd`(它输出当前工作目录,例如 `/Users/YourName`)和 `%ls`(它列出当前工作目录中的所有文件和子目录)。另一个魔术命令可以把由 matplotlib 生成的图表直接显示在笔记本中。`%%html` 将该单元格中的任何内容呈现为 HTML这对嵌入视频和链接很有用还有 JavaScript 和 Bash 的单元魔术命令。
如果你需要更多关于使用 Jupyter Notebooks 和它的特性的信息,它的帮助部分是非常完整的。
人们用许多有趣的方式使用 Jupyter Notebooks;你可以在这个[展示栏目][9]里找到一些很好的例子。你如何使用 Jupyter 笔记本?请在下面的评论中分享你的想法。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/getting-started-jupyter-notebooks
作者:[Don Watkins][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


Bootiso 让你安全地创建 USB 启动设备
======
![](https://www.ostechnix.com/wp-content/uploads/2018/04/USB-drive-720x340.png)
你好,新兵!你们有些人经常使用 `dd` 命令做各种各样的事,比如创建 USB 启动盘或者克隆硬盘分区。不过请牢记,`dd` 是一个危险且有毁灭性的命令。如果你是个 Linux 的新手,最好避免使用 `dd` 命令。如果你不知道你在做什么,你可能会在几分钟里把硬盘擦掉。从原理上说,`dd` 只是从 `if` 读取然后写到 `of` 上。它才不管往哪里写呢。它根本不关心那里是否有分区表、引导区、家目录或是其他重要的东西。你叫它做什么它就做什么。可以使用像 [Etcher][1] 这样的用户友好的应用来代替它。这样你就可以在创建 USB 引导设备之前知道你将要格式化的是哪块盘。
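`dd` 这种“只管从 `if` 读、往 `of` 写”的行为,可以用一个无害的小例子体会一下(只涉及普通文件,不会碰任何磁盘设备;文件路径是随意假设的):

```
# 从 /dev/zero 读取 1MB 数据,原样写入 of 指定的目标dd 并不关心目标是什么
dd if=/dev/zero of=/tmp/dd-demo.img bs=1M count=1 2>/dev/null
wc -c < /tmp/dd-demo.img    # 输出1048576
```

如果把 `of` 换成 `/dev/sdX` 这样的磁盘设备,上面这条命令就会直接覆盖该设备开头的数据,这正是 `dd` 危险的地方。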
今天,我发现了另一个可以安全创建 USB 引导设备的工具 Bootiso。它实际上是一个 BASH 脚本,但真的很智能!它有很多额外的功能来帮我们安全创建 USB 引导盘。如果你想确保你的目标是 USB 设备(而不是内部驱动器),或者如果你想检测 USB 设备,你可以使用 Bootiso。下面是使用此脚本的显著优点
* 如果只有一个 USB 驱动器Bootiso 会自动选择它。
* 如果有一个以上的 USB 驱动器存在,它可以让你从列表中选择其中一个。
* 万一你错误地选择一个内部硬盘驱动器,它将退出而不做任何事情。
* 它检查选定的 ISO 是否具有正确的 MIME 类型。如果 MIME 类型不正确,它将退出。
* 它判定所选的项目不是分区,如果判定失败则退出。
* 它将在擦除和对 USB 驱动器分区之前提示用户确认。
* 列出可用的 USB 驱动器。
* 安装 syslinux 引导系统 (可选)。
* 自由且开源。
### 使用 Bootiso 安全地创建 USB 驱动器
安装 Bootiso 非常简单。用这个命令下载最新版本:
```
$ curl -L https://rawgit.com/jsamr/bootiso/latest/bootiso -O
```
把下载的文件加到 `$PATH` 目录中,比如 `/usr/local/bin/`
```
$ sudo cp bootiso /usr/local/bin/
```
最后,添加运行权限:
```
$ sudo chmod +x /usr/local/bin/bootiso
```
搞定!现在就可以创建 USB 引导设备了。首先,让我们用命令看看现在有哪些 USB 驱动器:
```
$ bootiso -l
```
输出:
```
Listing USB drives available in your system:
NAME HOTPLUG SIZE STATE TYPE
sdb 1 7.5G running disk
```
如你所见,我只有一个 USB 驱动器。让我们继续通过命令用 ISO 文件创建 USB 启动盘:
```
$ bootiso bionic-desktop-amd64.iso
```
这个命令会提示你输入 `sudo` 密码。输入密码并回车来安装缺失的组件(如果有的话),然后创建 USB 启动盘。
输出:
```
[...]
Listing USB drives available in your system:
ISO succesfully unmounted.
USB device succesfully unmounted.
USB device succesfully ejected.
You can safely remove it !
```
如果你的 ISO 文件 MIME 类型不对,你会得到下列错误信息:
```
Provided file `bionic-desktop-amd64.iso' doesn't seem to be an iso file (wrong mime type: `application/octet-stream').
Exiting bootiso...
```
当然,你也能像下面那样使用 `--no-mime-check` 选项来跳过 MIME 类型检查。
```
$ bootiso --no-mime-check bionic-desktop-amd64.iso
```
就像我前面提到的,如果系统里只有 1 个 USB 设备 Bootiso 将自动选中它。所以我们不需要告诉它 USB 设备路径。如果你连接了多个设备,你可以像下面这样使用 `-d` 来指明 USB 设备。
```
$ bootiso -d /dev/sdb bionic-desktop-amd64.iso
```
用你自己的设备路径来换掉 `/dev/sdb`
在多个设备情况下,如果你没有使用 `-d` 来指明要使用的设备Bootiso 会提示你选择可用的 USB 设备。
Bootiso 在擦除和改写 USB 盘分区前会要求用户确认。使用 `-y``--assume-yes` 选项可以跳过这一步。
```
$ bootiso -y bionic-desktop-amd64.iso
```
您也可以把自动选择 USB 设备与 `-y` 选项连用,如下所示。
```
$ bootiso -y -a bionic-desktop-amd64.iso
```
或者,
```
$ bootiso --assume-yes --autoselect bionic-desktop-amd64.iso
```
请记住,当你只连接一个 USB 驱动器时,它才会起作用。
Bootiso 会默认创建一个 FAT 32 分区,挂载后用 `rsync` 程序把 ISO 的内容拷贝到 USB 盘里。 如果你愿意也可以使用 `dd` 代替 `rsync`
```
$ bootiso --dd -d /dev/sdb bionic-desktop-amd64.iso
```
如果你想增加 USB 引导的成功概率,请使用 `-b``--bootloader` 选项。
```
$ bootiso -b bionic-desktop-amd64.iso
```
上面这条命令会安装 `syslinux` 引导程序(安全模式)。注意,此时 `dd` 选项不可用。
在创建引导设备后Bootiso 会自动弹出 USB 设备。如果不想自动弹出,请使用 `-J``--no-eject` 选项。
```
$ bootiso -J bionic-desktop-amd64.iso
```
现在USB 设备依然连接中。你可以使用 `umount` 命令随时卸载它。
需要完整帮助信息,请输入:
```
$ bootiso -h
```
好,今天就到这里。希望这个脚本对你有帮助。好货不断,不要走开哦!
via: https://www.ostechnix.com/bootiso-lets-you-safely-create-bootable-usb-drive/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[kennethXia](https://github.com/kennethXia)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出


使用 AppImageLauncher 轻松运行和集成 AppImage 文件
======
你有没有下载过 AppImage 文件,却不知道如何使用它?或许你可能知道如何使用它,但是你每次要运行它时,必须要进入到下载了该 .AppImage 文件的文件夹中来运行它,或者手动为其创建启动程序。
使用 AppImageLauncher这些就都是过去的问题。该程序可让你轻松运行 AppImage 文件,而无需使其可执行。但它最有趣的特点是可以轻松地将 AppImage 与你的系统进行整合AppImageLauncher 可以自动将 AppImage 程序快捷方式添加到桌面环境的程序启动器/菜单(包括程序图标和合适的说明)中。
这里有个例子,我想在 Ubuntu 上使用 [Kdenlive][2],但我不想从仓库中安装它,因为它有大量的 KDE 依赖,我不想把它们弄到我的 Gnome 系统中。因为没有它的 Flatpak 或 Snap 镜像,我只能去下载了 Kdenlive 的 AppImage。
在没有把下载的 [Kdenline][2] AppImage 变成可执行的情况下,我第一次双击它时(安装好了 AppImageLauncherAppImageLauncher 提供了两个选项:
“Run once” 或者 “Integrate and run”。
![](https://3.bp.blogspot.com/-k1O4Bl3wuwU/Ws82wwzvEwI/AAAAAAAAALo/mTCmnEzFpmsmOz0BxH6ASOXIqc5jp63MwCLcBGAs/s1600/appimagelauncher_1.png)
点击 “Integrate and run”这个 AppImage 就被复制到 `~/.bin/` (家目录中的隐藏文件夹)并添加到菜单中,然后启动该程序。
要删除它也很简单,只要您使用的桌面环境支持桌面动作就行。例如,在 Gnome Shell 中只需右键单击“活动概览”中的应用程序图标然后选择“Remove from system”
![](https://1.bp.blogspot.com/--YONMK9mJyM/Ws825HZCpcI/AAAAAAAAALs/TJy9IVOjA0klMUqHQMOduserMTjmfT_tgCLcBGAs/s1600/appimagelauncher-remove-app.png)
更新:该应用最初只为 Ubuntu 和 Mint 开发,但最近增加了 Debian、Netrunner 和 openSUSE 支持。本文首次发布后添加的另一个功能是支持 AppImage 的更新;你在启动器中可以找到 “Update AppImage”。
### 下载 AppImageLauncher
AppImageLauncher 支持 Ubuntu、 Debian、Netrunner 和 openSUSE。如果你使用 Ubuntu 18.04,请确保你下载的 deb 包的名字中有“bionic”而其它的 deb 是用于旧一些的 Ubuntu 版本的。
- [下载 AppImageLauncher][1]
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/04/easily-run-and-integrate-appimage-files.html
作者:[Logix][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://github.com/TheAssassin/AppImageLauncher/releases
[2]:https://kdenlive.org/download/


有用的资源,献给那些想更多了解 Linux 的人
=====
Linux 是最流行和多功能的操作系统之一,它可以用在智能手机,电脑甚至汽车上。自 20 世纪 90 年代以来Linux 存在至今,并且仍然是最普遍的操作系统之一。
Linux 实际上用于运行大多数网络服务,因为与其他操作系统相比,它被认为是相当稳定的。这是[人们选择 Linux 多于 Windows 的原因][1]之一。此外Linux 为用户提供了隐私,根本不收集用户信息,而 Windows 10 及其 Cortana 语音控制系统总是需要更新你的个人信息。
Linux 有很多优点。然而,人们并没有听到太多关于它的消息,因为它已被 Windows 和 Mac 挤出(桌面)场。许多人开始使用 Linux 时会感到困惑,因为它与流行的操作系统有点不同。
为了帮助你,我们为那些想要了解更多关于 Linux 的人收集了 5 个有用的资源。
### 1、[Linux 纯新手][2]
如果你想尽可能多地学习 Linux你应该考虑 Eduonix 为[初学者提供的 Linux 完整教程][2]。这个课程将向你介绍 Linux 的所有功能,并为你提供所有必要的资料,以帮助你了解更多关于 Linux 工作原理的特性。
如果你是以下情况,你应该选择本课程:
* 你想了解 Linux 如何与硬件配合使用;
* 你想学习如何操作 Linux 命令行。
### 2[PC World: Linux 初学者指南][3]
为想要在一个地方学习所有有关 Linux 的人提供[免费资源][3]。PC World 专注于计算机操作系统的各个方面,并为订阅用户提供最准确和最新的信息。在这里,你还可以了解更多关于 [Linux 的好处][4] 和关于其操作系统的最新消息。
该资源为你提供以下信息:
* 如何安装 Linux
* 如何使用命令行;
* 如何安装软件;
* 如何操作 Linux 桌面环境。
### 3、[Linux.comLinux 培训][5]
很多使用计算机的人都需要学习如何操作 Linux以防 Windows 操作系统突然崩溃。还有什么比使用官方资源来启动你的 [Linux 培训][5]更好呢?
该资源可用让你在线注册 Linux 训练,你可以从官方来源获取最新信息。“一年前,我们的 IT 部门在官方网站上为我们提供了 Linux 培训” [Assignmenthelper.com.au][6] 的开发人员 Martin Gibson 说道。“我们选择了这门课,因为我们需要学习如何将我们的所有文件备份到另一个系统,为我们的客户提供最大的安全性,而且这个资源真的教会了我们所有的东西。”
所以,如果你符合以下情况,你肯定应该使用这个资源:
* 你想获得有关操作系统的第一手信息;
* 想要了解如何在你的计算机上运行 Linux 的特性;
* 想要与其他 Linux 用户联系并与他们分享你的经验。
### 4、 [Linux 基金会:视频培训][7]
如果你在阅读大量资源时容易感到无聊那么该网站绝对适合你。Linux 基金会提供了由 IT 专家、软件开发技术人员和技术顾问举办的视频培训、讲座和在线研讨会。
所有培训视频分为以下类别:
* 开发人员: 使用 Linux 内核来处理 Linux 设备驱动程序、Linux 虚拟化等;
* 系统管理员:在 Linux 上开发虚拟主机,构建防火墙,分析 Linux 性能等;
* 用户:开始使用 Linux介绍嵌入式 Linux 等。
### 5、 [LinuxInsider][8]
你知道吗?微软对 Linux 的效率感到惊讶,它[允许用户在微软云计算设备上运行 Linux][9]。如果你想了解更多关于 Linux 操作系统的知识Linux Insider 会向用户提供关于 Linux 操作系统的最新消息,以及最新更新和 Linux 特性的信息。
在此资源上,你将有机会:
* 参与 Linux 社区;
* 了解如何在各种设备上运行 Linux
* 看看评论;
* 参与博客讨论并阅读科技博客。
### 总结一下
### 关于作者
Lucy Benton 是一位数字营销专家商业顾问帮助人们将他们的梦想变为有利润的业务。现在她正在为营销和商业资源撰写。Lucy 还有自己的博客 [_Prowritingpartner.com_][10],在那里你可以查看她最近发表。
Lucy Benton 是一位数字营销专家商业顾问帮助人们将他们的梦想变为有利润的业务。现在她正在为营销和商业资源撰写。Lucy 还有自己的博客 [_Prowritingpartner.com_][10],在那里你可以查看她最近发表的文章
--------------------------------------------------------------------------------
via: https://linuxaria.com/article/useful-resources-for-those-who-want-to-know-more-about-linux
作者:[Lucy Benton][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[MjSeven](https://github.com/MjSeven)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
6 个 Python 的日期时间库
=====
> 在 Python 中有许多库可以很容易地测试、转换和读取日期和时间信息。
![6 Python datetime libraries ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd "6 Python datetime libraries ")
图片由 [WOCinTech Chat][1] 提供,根据 Opensource.com 修改。[CC BY-SA 4.0][2]
_这篇文章是与 [Jeff Triplett][3] 一起合写的。_
曾几何时我们中的一个人Lacey盯着 [Python 文档][4]中描述日期和时间格式化字符串的表格看了一个多小时。当我试图编写将从 API 得到的日期时间字符串转换为 [Python datetime][5] 对象的代码时,我很难理解其中的特定部分,因此我决定请求帮助。
有人问道:“为什么你不使用 `dateutil` 呢?”
读者朋友,如果你从这个月的 Python 专栏中只学到一件事——有比 datetime 的 `strptime` 更容易的将日期时间字符串转换为 datetime 对象的方法——那么我们觉得就已经成功了。
但是,除了将字符串转换为更有用的 Python 对象之外,还有许多库都有一些有用的方法和工具,可以让你更轻松地进行时间测试、将时间转换为不同的时区、以人类可读的格式传递时间信息,等等。如果这是你在 Python 中第一次接触日期和时间,请暂停并阅读 _[如何使用 Python 的日期和时间][6]_ 。要理解为什么在编程中处理日期和时间是困难的,请阅读[程序员相信的关于时间的谬误][7]。
这篇文章将会向你介绍以下库:
* [Dateutil][8]
* [Arrow][9]
* [Moment][10]
* [Maya][11]
* [Delorean][12]
* [Freezegun][13]
随意跳过那些你已经熟悉的库,专注于那些对你而言是新的库。
### 内建的 datetime 模块
在跳转到其他库之前,让我们回顾一下如何使用 `datetime` 模块将日期字符串转换为 Python datetime 对象。
假设我们从 API 接受到一个日期字符串,并且需要它作为 Python datetime 对象存在:
```
2018-04-29T17:45:25Z
```
这个字符串包括:
* 日期是 `YYYY-MM-DD` 格式的
* 字母 `T` 表示其后是时间部分
* 时间是 `HH:II:SS` 格式的
* 时区指示符 `Z` 表示此时间采用 UTC详细了解[日期时间字符格式][19]
要使用 `datetime` 模块将此字符串转换为 Python datetime 对象,你应该从 `strptime` 开始。 `datetime.strptime` 接受日期字符串和格式化字符并返回一个 Python datetime 对象。
我们必须手动将日期时间字符串的每个部分转换为 Python 的 `datetime.strptime` 可以理解的合适的格式化字符串。四位数年份由 `%Y` 表示,两位数月份是 `%m`,两位数的日期是 `%d`。在 24 小时制中,小时是 `%H`,分钟是 `%M`,秒是 `%S`
为了得出这些结论,需要仔细查阅 [Python 文档][20]中的表格。
由于字符串中的 `Z` 表示此日期时间字符串采用 UTC所以我们可以在格式中忽略此项。现在我们不会担心时区。
转换的代码是这样的:
```
$ from datetime import datetime
$ datetime.strptime('2018-04-29T17:45:25Z', '%Y-%m-%dT%H:%M:%SZ')
datetime.datetime(2018, 4, 29, 17, 45, 25)
```
格式字符串很难阅读和理解。我必须手动计算原始字符串中字母 `T` 和 `Z` 的位置,以及标点符号和格式化字符(如 `%S` 和 `%m`)的位置。有些不太了解 datetime 的人在阅读我的代码时可能会觉得它很难理解,尽管其含义已有文档记载,但它仍然很难阅读。
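顺带一提,如果你使用的是 Python 3.7 或更新版本(这是一个假设前提),标准库的 `%z` 格式符其实已经可以直接识别末尾的 `Z`,从而直接得到一个带 UTC 时区的 datetime 对象:

```python
from datetime import datetime, timedelta

# Python 3.7 起,%z 可以识别 "Z" 和 "+00:00" 这样的时区后缀
dt = datetime.strptime('2018-04-29T17:45:25Z', '%Y-%m-%dT%H:%M:%S%z')
print(dt)                              # 2018-04-29 17:45:25+00:00
print(dt.utcoffset() == timedelta(0))  # True
```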
让我们看看其他库是如何处理这种转换的。
### Dateutil
[dateutil 模块][21]对 `datetime` 模块做了一些扩展。
继续使用上面的解析示例,使用 `dateutil` 实现相同的结果要简单得多:
```
$ from dateutil.parser import parse
$ parse('2018-04-29T17:45:25Z')
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=tzutc())
```
如果字符串包含时区信息,那么 `dateutil` 解析器会自动返回该时区。由于该字符串采用 UTC你可以看到返回的 datetime 对象带有 `tzutc()` 时区信息。如果你想在解析时完全忽略时区信息、返回<ruby>原生<rt>naive</rt></ruby>的 datetime 对象,可以传递 `ignoretz=True`,如下所示:
```
$ from dateutil.parser import parse
$ parse('2018-04-29T17:45:25Z', ignoretz=True)
datetime.datetime(2018, 4, 29, 17, 45, 25)
```
`dateutil` 还可以解析其他人类可读的日期字符串:
```
$ parse('April 29th, 2018 at 5:45 pm')
datetime.datetime(2018, 4, 29, 17, 45)
```
`dateutil` 还提供了像 [relativedelta][22] 的工具,它用于计算两个日期时间之间的时间差或向日期时间添加或删除时间,[rrule][23] 创建重复日期时间,[tz][24] 用于解决时区以及其他工具。
### Arrow
[Arrow][25] 是另一个库,其目标是操作、格式化,以及处理对人类更友好的日期和时间。它包含 `dateutil`,根据其[文档][26],它旨在“帮助你使用更少的包导入和更少的代码来处理日期和时间”。
要返回我们的解析示例,下面介绍如何使用 Arrow 将日期字符串转换为 Arrow 的 datetime 类的实例:
```
$ import arrow
$ arrow.get('2018-04-29T17:45:25Z')
<Arrow [2018-04-29T17:45:25+00:00]>
```
你也可以在 `get()` 的第二个参数中指定格式,就像使用 `strptime` 一样,但是 Arrow 会尽力解析你给出的字符串。`get()` 返回 Arrow 的 `datetime` 类的一个实例;要从中得到 Python 的 datetime 对象,像下面这样链式访问它的 `datetime` 属性即可:
```
$ arrow.get('2018-04-29T17:45:25Z').datetime
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=tzutc())
```
通过 Arrow datetime 类的实例,你可以访问 Arrow 的其他有用方法。例如,它的 `humanize()` 方法将日期时间翻译成人类可读的短语,就像这样:
```
$ import arrow
$ utc = arrow.utcnow()
$ utc.humanize()
'seconds ago'
```
在 Arrow 的[文档][27]中阅读更多关于其有用方法的信息。
### Moment
[Moment][28] 的作者认为它是“内部测试版”,但即使它处于早期阶段,它也是非常受欢迎的,我们想来讨论它。
用 Moment 的方法将字符串转换为其他更有用的东西很简单,类似于我们之前提到的库:
```
$ import moment
$ moment.date('2018-04-29T17:45:25Z')
<Moment(2018-04-29T17:45:25)>
```
就像其他库一样,它最初返回它自己的 datetime 类的实例。要得到 Python 的 datetime 对象,再访问额外的 `date` 属性即可:
```
$ moment.date('2018-04-29T17:45:25Z').date
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=<StaticTzInfo 'Z'>)
```
这将 Moment datetime 类转换为 Python datetime 对象。
Moment 还提供了使用人类可读的语言创建新日期的方法。例如创建一个明天的日期:
```
$ moment.date("tomorrow")
<Moment(2018-04-06T11:24:42)>
```
它的 `add()``subtract()` 命令使用关键字参数来简化日期的操作。为了获得后天Moment 会使用下面的代码:
```
$ moment.date("tomorrow").add(days=1)
<Moment(2018-04-07T11:26:48)>
```
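作为对照,标准库中对应的做法是用 `timedelta` 做加减。它没有 Moment 那样的自然语言接口,但同样直接(下面是一个简单的示意):

```python
from datetime import datetime, timedelta

# 用 timedelta 实现“明天”和“后天”的日期运算
today = datetime(2018, 4, 5)
tomorrow = today + timedelta(days=1)
day_after_tomorrow = tomorrow + timedelta(days=1)
print(day_after_tomorrow)  # 2018-04-07 00:00:00
```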
### Maya
[Maya][29] 包含了 Python 中其他一些流行的日期时间处理库,包括 Humanize、pytz 和 pendulum 等等。这个项目旨在让人们更容易地处理日期。
Maya 的 README 包含几个有用的实例。以下是如何使用 Maya 来重新处理以前的解析示例:
```
$ import maya
$ maya.parse('2018-04-29T17:45:25Z').datetime()
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=<UTC>)
```
注意我们必须在 `maya.parse()` 之后调用 `datetime()`。如果我们跳过这一步Maya 将会返回一个 MayaDT 类的实例:`<MayaDT epoch=1525023925.0>`。
由于 Maya 重复实现了 datetime 库中很多有用的方法,因此可以直接用 MayaDT 类的实例来做一些事情,比如用 `slang_time()` 方法把时间偏移量表达成通俗的语言,以及将日期时间间隔保存在单个类的实例中。以下是如何使用 Maya 将日期时间表示为人类可读的短语:
```
$ import maya
$ maya.parse('2018-04-29T17:45:25Z').slang_time()
'23 days from now'
```
显然,`slang_time()` 的输出将根据你与该 datetime 对象的时间距离远近而变化。
### Delorean
[Delorean][30],以 《返回未来》 电影中的时间旅行汽车命名,它对于操纵日期时间特别有用,包括将日期时间转换为其他时区并添加或减去时间。
Delorean 需要有效的 Python datetime 对象才能工作,所以如果你需要使用时间字符串,最好将其与上述库中的一个配合使用。例如,将 Maya 与 Delorean 一起使用:
```
$ import maya
$ d_t = maya.parse('2018-04-29T17:45:25Z').datetime()
```
现在,你有了一个 datetime 对象 `d_t`,你可以使用 Delorean 来做一些事情,例如将日期时间转换为美国东部时区:
```
$ from delorean import Delorean
$ d = Delorean(d_t)
$ d
Delorean(datetime=datetime.datetime(2018, 4, 29, 17, 45, 25), timezone='UTC')
$ d.shift('US/Eastern')
Delorean(datetime=datetime.datetime(2018, 4, 29, 13, 45, 25), timezone='US/Eastern')
```
看到小时是怎样从 17 变成 13 了吗?
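如果你不想引入第三方库,从 Python 3.9 起,标准库的 `zoneinfo` 模块也能完成同样的时区转换(这里假定系统上装有可用的时区数据库):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+ 标准库

utc_dt = datetime(2018, 4, 29, 17, 45, 25, tzinfo=timezone.utc)
eastern = utc_dt.astimezone(ZoneInfo("America/New_York"))
print(eastern.hour)  # 134 月底美国东部处于夏令时,即 UTC-4
```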
你也可以使用自然语言方法来操作 datetime 对象。下面获取 2018 年 4 月 29 日(我们一直在使用的日期)之后的下一个星期五:
```
$ d.next_friday()
Delorean(datetime=datetime.datetime(2018, 5, 4, 13, 45, 25), timezone='US/Eastern')
```
在 Delorean 的[文档][31]中可以了解它的更多用法。
### Freezegun
[Freezegun][32] 是一个可以帮助你在 Python 代码中测试特定日期的库。使用 `@freeze_time` 装饰器,你可以为测试用例设置特定的日期和时间,并且所有对 `datetime.datetime.now()``datetime.datetime.utcnow()` 等的调用都将返回你指定的日期和时间。例如:
```
from freezegun import freeze_time
import datetime

@freeze_time("2017-04-14")
def test():
    assert datetime.datetime.now() == datetime.datetime(2017, 4, 14)
```
要跨时区进行测试,你可以将 `tz_offset` 参数传递给装饰器。`freeze_time` 装饰器也接受更简单的口语化日期,例如 `@freeze_time('April 4, 2017')`
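freezegun 背后的思路可以用标准库粗略地模拟出来:在测试期间,把 `datetime` 模块里的 `datetime` 类临时替换为一个返回固定时间的替身。下面只是一个极简的示意草图,真实的 freezegun 还处理了 `utcnow()`、时区偏移等许多细节:

```python
import datetime
from unittest import mock

frozen = datetime.datetime(2017, 4, 14)

# 在 with 块内datetime.datetime 被替换为 mock
# 所有对 now() 的调用都会返回我们冻结的时间
with mock.patch('datetime.datetime') as fake_dt:
    fake_dt.now.return_value = frozen
    assert datetime.datetime.now() == frozen

# 离开 with 块后datetime.datetime 自动恢复原样
assert isinstance(datetime.datetime.now(), datetime.datetime)
```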
---
上面提到的每个库都提供了一组不同的特性和功能也许很难决定哪一个最适合你的需要。[Maya 的作者][33] Kenneth Reitz 说道:“所有这些项目相辅相成,它们都是我们的朋友。”
这些库共享一些功能,但不是全部。有些擅长时间操作,有些擅长解析,但它们都有一个共同的目标:让你处理日期和时间的工作更轻松。下次当你对 Python 内置的 datetime 模块感到沮丧时,我们希望你可以选择其中一个库进行试验。
---
via: [https://opensource.com/article/18/4/python-datetime-libraries][34]
作者: [Lacey Williams Hensche][35]
选题: [lujun9972](https://github.com/lujun9972)
译者: [MjSeven](https://github.com/MjSeven)
校对: [wxy](https://github.com/wxy)
本文由 [LCTT][39] 原创编译,[Linux中国][40] 荣誉推出
[1]: https://www.flickr.com/photos/wocintechchat/25926664911/
[2]: https://creativecommons.org/licenses/by/4.0/
[3]: https://opensource.com/users/jefftriplett
[4]: https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior
[5]: https://opensource.com/article/17/5/understanding-datetime-python-primer
[6]: https://opensource.com/article/17/5/understanding-datetime-python-primer
[7]: http://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time
[8]: https://opensource.com/#Dateutil
[9]: https://opensource.com/#Arrow
[10]: https://opensource.com/#Moment
[11]: https://opensource.com/#Maya
[12]: https://opensource.com/#Delorean
[13]: https://opensource.com/#Freezegun
[14]: https://opensource.com/resources/python?intcmp=7016000000127cYAAQ
[15]: https://opensource.com/resources/python/ides?intcmp=7016000000127cYAAQ
[16]: https://opensource.com/resources/python/gui-frameworks?intcmp=7016000000127cYAAQ
[17]: https://opensource.com/tags/python?intcmp=7016000000127cYAAQ
[18]: https://developers.redhat.com/?intcmp=7016000000127cYAAQ
[19]: https://www.w3.org/TR/NOTE-datetime
[20]: https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior
[21]: https://dateutil.readthedocs.io/en/stable/
[22]: https://dateutil.readthedocs.io/en/stable/relativedelta.html
[23]: https://dateutil.readthedocs.io/en/stable/rrule.html
[24]: https://dateutil.readthedocs.io/en/stable/tz.html
[25]: https://github.com/crsmithdev/arrow
[26]: https://pypi.python.org/pypi/arrow-fatisar/0.5.3
[27]: https://arrow.readthedocs.io/en/latest/
[28]: https://github.com/zachwill/moment
[29]: https://github.com/kennethreitz/maya
[30]: https://github.com/myusuf3/delorean
[31]: https://delorean.readthedocs.io/en/latest/
[32]: https://github.com/spulec/freezegun
[33]: https://github.com/kennethreitz/maya
[34]: https://opensource.com/article/18/4/python-datetime-libraries
[35]: https://opensource.com/users/laceynwilliams
[39]: https://github.com/LCTT/TranslateProject
[40]: https://linux.cn/
一个可以更好地调试的 Perl 模块
======
> 这个简单优雅的模块可以让你包含调试或仅用于开发环境的代码,而在产品环境中隐藏它们。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs)
有时,在 Perl 代码中包含仅用于调试或开发调整的代码块会很有用。这很好,但是这样的代码块可能会对性能产生很大的影响,尤其是在运行时才决定是否执行它的时候。
[Curtis "Ovid" Poe][1] 最近编写了一个可以帮助解决这个问题的模块:[Keyword::DEVELOPMENT][2]。该模块利用 `Keyword::Simple` 和 Perl 5.012 中引入的可插入关键字架构来创建了新的关键字:`DEVELOPMENT`。它使用 `PERL_KEYWORD_DEVELOPMENT` 环境变量的值来确定是否要执行一段代码。
使用它再容易不过了:
```
use Keyword::DEVELOPMENT;
sub doing_my_big_loop {
my $self = shift;
DEVELOPMENT {
# insert expensive debugging code here!
}
}
```
只要没有设置 `PERL_KEYWORD_DEVELOPMENT` 环境变量,`DEVELOPMENT` 块内的代码在编译时就会被优化掉,根本不会存在。
你看到好处了么?在沙盒中将 `PERL_KEYWORD_DEVELOPMENT` 环境变量设置为 `true`,在生产环境设为 `false`,并且可以将有价值的调试工具提交到你的代码库中,在你需要的时候随时可用。
在缺乏高级配置管理的系统中,你也可以使用此模块来处理生产和开发或测试环境之间的设置差异:
```
sub connect_to_my_database {
my $dsn = "dbi:mysql:productiondb";
my $user = "db_user";
my $pass = "db_pass";
DEVELOPMENT {
# Override some of that config information
$dsn = "dbi:mysql:developmentdb";
}
my $db_handle = DBI->connect($dsn, $user, $pass);
}
```
对此代码片段的后续增强可以让你从其他地方(比如 YAML 或 INI 文件)读取配置信息,但我希望你能从中看到这个工具的用处。
我查看了 `Keyword::DEVELOPMENT` 的源码,花了大约半小时研究,心想:“天哪,我为什么没有想到这个?”只要装好了 `Keyword::Simple`Curtis 给我们的这个模块实现起来就非常简单。这是我长期以来在自己的编码实践中所需要的一个优雅解决方案。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/perl-module-debugging-code
作者:[Ruth Holloway][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://metacpan.org/author/OVID
[2]:https://metacpan.org/pod/release/OVID/Keyword-DEVELOPMENT-0.04/lib/Keyword/DEVELOPMENT.pm
使用交互式 shell 来增强你的 Python
======
![](https://fedoramagazine.org/wp-content/uploads/2018/03/python-shells-816x345.jpg)
Python 编程语言已经成为 IT 中使用的最流行的语言之一。成功的一个原因是它可以用来解决各种问题。从网站开发到数据科学、机器学习到任务自动化Python 生态系统有丰富的框架和库。本文将介绍 Fedora 软件包集合中提供的一些有用的 Python shell 来简化开发。
### Python Shell
Python Shell 让你以交互模式使用解释器,在测试代码或尝试新库时非常有用。在 Fedora 中,你可以在终端会话中输入 `python3` 来调用默认的 shell。不过Fedora 还提供了一些更高级、增强的 shell。
### IPython
IPython 为 Python shell 提供了许多有用的增强功能,例如 tab 补全、对象内省、系统 shell 访问和命令历史检索。许多功能也被 [Jupyter Notebook][1] 使用,因为它底层使用了 IPython。
#### 安装和运行 IPython
```
dnf install ipython3
ipython3
```
使用 tab 补全会提示你可能的选择。当你使用不熟悉的库时,此功能会派上用场。
![][2]
如果你需要更多信息,输入 `?` 命令来查看文档;想要更详细的信息,可以使用 `??` 命令。
![][3]
另一个很酷的功能是使用 `!` 字符执行系统 shell 命令的能力。然后可以在 IPython shell 中引用该命令的结果。
![][4]
IPython 完整的功能列表可在[官方文档][5]中找到。
### bpython
bpython 并不能像 IPython 做那么多,但它却在一个简单轻量级包中提供了一系列有用功能。除其他功能之外bpython 提供:
* 内嵌语法高亮显示
* 在你输入时提供自动补全建议
* 可预期的参数列表
* 能够将代码发送或保存到 pastebin 服务或文件中
#### 安装和运行 bpython
```
dnf install bpython3
bpython3
```
在你输入的时候,`bpython` 为你提供了选择来自动补全你的代码。
![][6]
![][7]
另一个很好的功能是可以使用功能键 `F7` 在外部编辑器(默认为 Vim中打开当前的 `bpython` 会话。这在测试更复杂的程序时非常有用。
有关配置和功能的更多细节,请参考 bpython [文档][8]。
via: https://fedoramagazine.org/enhance-python-interactive-shell/
作者:[Clément Verna][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
如何编译 Linux 内核
======
> Jack 将带你在 Ubuntu 16.04 服务器上走过内核编译之旅。
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/chester-alvarez-644-unsplash.jpg?itok=aFxG9kUZ)
曾经一段时间,升级 Linux 内核让很多用户打心里有所畏惧。在那个时候,升级内核包含了很多步骤,也需要很多时间。现在,内核的安装可以轻易地通过像 `apt` 这样的包管理器来处理。通过添加特定的仓库,你能很轻易地安装实验版本的或者指定版本的内核(比如针对音频产品的实时内核)
考虑一下,既然升级内核如此容易,为什么你不愿意自行编译一个呢?这里列举一些可能的原因:
* 你想要简单了解编译内核的过程
* 你需要启用或者禁用内核中特定的选项,因为它们没有出现在标准选项里
* 你想要启用标准内核中可能没有添加的硬件支持
* 你使用的发行版需要你编译内核
* 你是一个学生,而编译内核是你的任务
不管出于什么原因,懂得如何编译内核是非常有用的,而且可以被视作一种必经的历练。当我第一次编译新的 Linux 内核(那是很久以前了)并尝试从它启动时,我从中(系统马上就崩溃了,然后不断地尝试和失败)感受到一种特别的兴奋。
既然这样,让我们来实验一下编译内核的过程。我将使用 Ubuntu 16.04 Server 来进行演示。在运行了一次常规的 `sudo apt upgrade` 之后,当前安装的内核版本是 `4.4.0-121`。我想要升级内核版本到 `4.17` 让我们小心地开始吧。
有一个警告:强烈建议你在虚拟机里实验这个过程。基于虚拟机,你总能创建一个快照,然后轻松地从任何问题中回退出来。不要在产品机器上使用这种方式升级内核,除非你知道你在做什么。
### 下载内核
我们要做的第一件事是下载内核源码。在 [Kernel.org][1] 找到你要下载的所需内核的 URL。找到 URL 之后,使用如下命令(我以 `4.17 RC2` 内核为例) 来下载源码文件:
```
wget https://git.kernel.org/torvalds/t/linux-4.17-rc2.tar.gz
```
在文件下载期间,我们可以先做一些准备工作。
### 安装需要的环境
为了编译内核,我们首先得安装一些需要的环境。这可以通过一个命令来完成:
```
sudo apt-get install git fakeroot build-essential ncurses-dev xz-utils libssl-dev bc flex libelf-dev bison
```
务必注意你将需要至少 128GB 的本地可用磁盘空间来完成内核的编译过程。因此你必须确保有足够的空间。
### 解压源码
在新下载的内核所在的文件夹下,使用该命令来解压内核:
```
tar xvzf linux-4.17-rc2.tar.gz
```
使用命令 `cd linux-4.17-rc2` 进入新生成的文件夹。
### 配置内核
在正式编译内核之前,我们首先必须配置需要包含哪些模块。实际上,有一些非常简单的方式来配置。使用一个命令,你能拷贝当前内核的配置文件,然后使用可靠的 `menuconfig` 命令来做任何必要的更改。使用如下命令来完成:
```
cp /boot/config-$(uname -r) .config
```
现在你有一个配置文件了,输入命令 `make menuconfig`。该命令将打开一个配置工具(图 1它可以让你遍历每个可用模块然后启用或者禁用你需要或者不需要的模块。
![menuconfig][3]
*图 1: 运行中的 `make menuconfig`*
[Used with permission][4]
很有可能你会禁用掉内核中的一个重要部分,所以在 `menuconfig` 期间小心地一步步进行。如果你对某个选项不确定,不要去管它。或者更好的方法是使用我们拷贝的当前运行的内核的配置文件(因为我们知道它可以工作)。一旦你已经遍历了整个配置列表(它非常长),你就准备好开始编译了。
### 编译和安装
现在是时候去实际地编译内核了。第一步是使用 `make` 命令去编译。调用 `make` 命令然后回答必要的问题(图 2。这些问题取决于你将升级的现有内核以及升级后的内核。相信我,将会有非常多的问题要回答,因此你得预留大量的时间。
![make][6]
*图 2: 回答 `make` 命令的问题*
[Used with permission][4]
回答了长篇累牍的问题之后,你就可以用如下的命令安装那些之前启用的模块:
```
make modules_install
```
又来了,这个命令将耗费一些时间,所以要么坐下来看着编译输出,或者去做些其他事(因为编译期间不需要你的输入)。可能的情况是,你想要去进行别的任务(除非你真的喜欢看着终端界面上飞舞而过的输出)。
现在我们使用这个命令来安装内核:
```
sudo make install
```
又一次,另一个将要耗费大量可观时间的命令。事实上,`make install` 命令将比 `make modules_install` 命令花费更多的时间。去享用午餐,配置一个路由器,将 Linux 安装在一些服务器上,或者小睡一会
### 启用内核作为引导
一旦 `make install` 命令完成,就该启用这个内核来作为引导内核了。使用这个命令来实现:
```
sudo update-initramfs -c -k 4.17-rc2
```
当然,你需要将上述内核版本号替换成你编译完的。当命令执行完毕后,使用如下命令来更新 grub
```
sudo update-grub
```
现在你可以重启系统并且选择新安装的内核了。
### 恭喜!
你已经编译了一个 Linux 内核!它是一项耗费时间的活动;但是,最终你的 Linux 发行版将拥有一个定制的内核,同时你也将拥有一项被许多 Linux 管理员所倾向忽视的重要技能。
从 Linux 基金会和 edX 提供的免费 [“Introduction to Linux”][7] 课程来学习更多的 Linux 知识。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/4/how-compile-linux-kernel-
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[icecoobe](https://github.com/icecoobe)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
如何在 Linux 中使用 find
======
> 使用正确的参数,`find` 命令是在你的系统上找到数据的强大而灵活的方式。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_linux31x_cc.png?itok=Pvim4U-B)
在[最近的一篇文章][1]中Lewis Cowles 介绍了 `find` 命令。
`find` 是日常工具箱中功能更强大、更灵活的命令行工具之一,因此值得花费更多的时间。
最简单的用法是,`find` 后面跟上一个路径,然后寻找其中的东西。例如:
```
find /
```
它将找到(并打印)系统中的每个文件。而且由于一切都是文件,你会得到很多需要整理的输出。这可能不能帮助你找到你要找的东西。你可以改变路径参数来缩小范围,但它不会比使用 `ls` 命令更有帮助。所以你需要考虑你想要找的东西。
也许你想在主目录中找到所有的 JPEG 文件。 `-name` 参数允许你将结果限制为与给定模式匹配的文件。
```
find ~ -name '*jpg'
```
可是等等!如果它们中的一些是大写的扩展名会怎么样?`-iname` 就像 `-name`,但是不区分大小写。
```
find ~ -iname '*jpg'
```
很好!但是 8.3 名称方案是如此的老。一些图片可能是 .jpeg 扩展名。幸运的是,我们可以将模式用“或”(表示为 `-o`)来组合。
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \)
```
我们正在接近目标。但是如果你有一些以 jpg 结尾的目录呢? (为什么你要命名一个 `bucketofjpg` 而不是 `pictures` 的目录就超出了本文的范围。)我们使用 `-type` 参数修改我们的命令来查找文件。
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f
```
或者,也许你想找到那些命名奇怪的目录,以便稍后重命名它们:
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d
```
你最近拍了很多照片,所以让我们把它缩小到上周更改的文件。
```
find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7
```
你可以根据文件状态更改时间 `ctime`)、修改时间 `mtime` 或访问时间 `atime` 来执行时间过滤。 这些是在几天内,所以如果你想要更细粒度的控制,你可以表示为在几分钟内(分别是 `cmin`、`mmin` 和 `amin`)。 除非你确切地知道你想要的时间,否则你可能会在 `+` (大于)或 `-` (小于)的后面加上数字。
但也许你不关心你的照片。也许你的磁盘空间不够用,所以你想在 `log` 目录下找到所有巨大的(让我们定义为“大于 1GB”文件
```
find /var/log -size +1G
```
或者,也许你想在 `/data` 中找到 bcotton 拥有的所有文件:
```
find /data -owner bcotton
```
你还可以根据权限查找文件。也许你想在你的主目录中找到对所有人可读的文件,以确保你不会过度分享。
```
find ~ -perm -o=r
```
这篇文章只说了 `find` 能做什么的表面。将测试条件与布尔逻辑相结合可以为你提供难以置信的灵活性,以便准确找到要查找的文件。并且像 `-exec``-delete` 这样的参数,你可以让 `find` 对它发现的内容采取行动。你有任何最喜欢的 `find` 表达式么?在评论中分享它们!
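作为补充,这类组合条件也可以用 Python 的 `pathlib` 粗略地模拟出来,便于嵌入脚本中做后续处理。下面是一个仅为示意的草图,近似于 `find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mtime -7` 的效果:

```python
import tempfile
import time
from pathlib import Path

def find_recent_jpgs(root, days=7):
    """递归查找 root 下最近 days 天内修改过的 jpg/jpeg 文件(忽略大小写)。"""
    cutoff = time.time() - days * 86400
    return [p for p in Path(root).rglob('*')
            if p.is_file()
            and p.suffix.lower() in ('.jpg', '.jpeg')
            and p.stat().st_mtime > cutoff]

# 简单演示:在临时目录里放两个文件,只有 jpg 会被找到
demo = tempfile.mkdtemp()
Path(demo, 'photo.JPG').write_text('x')
Path(demo, 'note.txt').write_text('x')
print([p.name for p in find_recent_jpgs(demo)])  # ['photo.JPG']
```

当然,对于一次性的查找,shell 里的 `find` 仍然更简洁。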
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/how-use-find-linux
作者:[Ben Cotton][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/bcotton
[1]:https://linux.cn/article-9585-1.html
在 5 分钟内重置丢失的 root 密码
======
> 如何快速简单地在 Fedora 、 CentOS 及类似的 Linux 发行版上重置 root 密码。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum)
系统管理员可以轻松地为忘记密码的用户重置密码。但是如果系统管理员忘记 root 密码或他从公司离职了,会发生什么情况?本指南将向你介绍如何在不到 5 分钟的时间内在 Red Hat 兼容系统(包括 Fedora 和 CentOS上重置丢失或忘记的 root 密码。
请注意,如果整个系统硬盘已用 LUKS 加密,则需要在出现提示时提供 LUKS 密码。此外,此过程适用于运行 systemd 的系统,该系统自 Fedora 15、CentOS 7.14.04 和 Red Hat Enterprise Linux 7.0 以来一直是缺省的初始系统。
首先你需要中断启动的过程,因此你需要启动或者如果已经启动就重启它。第一步可能有点棘手因为 GRUB 菜单会在屏幕上快速地闪烁过去。你可能需要尝试几次,直到你能够做到这一点。
当你看到这个屏幕时,按下键盘上的 `e` 键:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub0.png?itok=cz9nk5BT)
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub1.png?itok=3ZY5uiGq)
使用箭头键移动到 `Linux16` 这行:
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub2_0.png?itok=8epRyqOl)
使用你的 `del` 键或你的 `backspace` 键,删除 `rhgb quiet` 并替换为以下内容:
```
rd.break enforcing=0
```
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/grub3.png?itok=JDdMXnUb)
设置 `enforcing=0` 可以避免执行完整的系统 SELinux 重标记。一旦系统重新启动,你只需要为 `/etc/shadow` 恢复正确的 SELinux 上下文。我会告诉你如何做到这一点。
按下 `Ctrl-x` 启动。
**系统现在将处于紧急模式。**
以读写权限重新挂载硬盘驱动器:
```
# mount o remount,rw /sysroot
```
运行 `chroot` 来访问系统:
```
# chroot /sysroot
```
你现在可以更改 root 密码:
```
# passwd
```
出现提示时,输入新的 root 密码两次。如果成功,你应该看到一条消息显示 “all authentication tokens updated successfully”。
输入 `exit` 两次以重新启动系统。
以 root 身份登录并将 SELinux 标签恢复到 `/etc/shadow`
```
# restorecon -v /etc/shadow
```
将 SELinux 恢复到 enforcing 模式:

```
# setenforce 1
```
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/reset-lost-root-password
作者:[Curt Warfield][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
Project Atomic 通过他们在 Open Container InitiativeOCI上的努力创
Buildah 处理构建容器镜像时无需安装完整的容器运行时或守护进程。这对建立容器的持续集成和持续交付管道尤其有用。
Buildah 使容器的文件系统可以直接供构建主机使用。这意味着构建工具在主机上可用就行,而不需要在容器镜像中可用从而使构建更快速镜像更小更安全。Buildah 有 CentOS、Fedora 和 Debian 的软件包。
### 安装 Buildah
从 Fedora 26 开始 Buildah 可以使用 `dnf` 进行安装。
```
$ sudo dnf install buildah -y
```
`buildah` 的当前版本为 0.16,可以通过以下命令显示。
```
$ buildah --version
```
### 基本命令
构建容器镜像的第一步是获取基础镜像,这是通过 Dockerfile 中的 `FROM` 语句完成的。Buildah 以类似的方式处理这个。
```
$ sudo buildah from fedora
```
该命令将拉取 Fedora 的基础镜像并存储在主机上。通过执行以下操作可以检查主机上可用的镜像。
```
$ sudo buildah images
IMAGE ID IMAGE NAME CREATED AT SIZE
9110ae7f579f docker.io/library/fedora:latest Mar 7, 2018 20:51 234.7 MB
```
在拉取基础镜像后,有一个该镜像的运行容器实例,这是一个“工作容器”。
以下命令显示正在运行的容器。
```
$ sudo buildah containers
CONTAINER ID BUILDER IMAGE ID IMAGE NAME
CONTAINER NAME
6112db586ab9 * 9110ae7f579f docker.io/library/fedora:latest fedora-working-container
```
Buildah 还提供了一个非常有用的命令来停止和删除当前正在运行的所有容器。
```
$ sudo buildah rm --all
```
完整的命令列表可以使用 `--help` 选项。
```
$ buildah --help
```
### 构建一个 Apache Web 服务器容器镜像
让我们看看如何使用 Buildah 在 Fedora 基础镜像上安装 Apache Web 服务器,然后复制一个可供服务的自定义 `index.html`
首先,让我们创建自定义的 `index.html`
```
$ echo "Hello Fedora Magazine !!!" > index.html
```
然后在正在运行的容器中安装 httpd 包。
```
$ sudo buildah from fedora
$ sudo buildah run fedora-working-container dnf install httpd -y
```
让我们将 `index.html` 复制到 `/var/www/html/`
```
$ sudo buildah copy fedora-working-container index.html /var/www/html/index.html
```
然后配置容器入口点以启动 httpd。
```
$ sudo buildah config --entrypoint "/usr/sbin/httpd -DFOREGROUND" fedora-working-container
```
现在为了使“工作容器”可用,`commit` 命令将容器保存到镜像。
```
$ sudo buildah commit fedora-working-container hello-fedora-magazine
```
hello-fedora-magazine 镜像现在可用,并且可以推送到仓库以供使用。
```
$ sudo buildah images
IMAGE ID IMAGE NAME CREATED AT SIZE
9110ae7f579f docker.io/library/fedora:latest Mar 7, 2018 22:51 234.7 MB
49bd5ec5be71 docker.io/library/hello-fedora-magazine:latest Apr 27, 2018 11:01 427.7 MB
```
通过运行以下步骤,还可以使用 Buildah 来测试此镜像。
```
$ sudo buildah from --name=hello-magazine docker.io/library/hello-fedora-magazine
$ sudo buildah run hello-magazine
```
访问 <http://localhost> 将显示 “Hello Fedora Magazine !!!”
via: https://fedoramagazine.org/daemon-less-container-management-buildah/
作者:[Ashutosh Sudhakar Bhakare][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
如何改善应用程序在 Linux 中的启动时间
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/Preload-720x340.png)
大多数 Linux 发行版在默认配置下已经足够快了。但是,我们仍然可以借助一些额外的应用程序和方法让它们启动得更快一点。其中一个可用的应用程序就是 Preload。它会监视用户使用频率比较高的应用程序并将它们添加到内存中这样就比一般的方式加载得更快一点因为正如你所知道的内存的读取速度远远快于硬盘。Preload 以守护进程的方式在后台运行,并记录用户使用较为频繁的程序的文件使用统计数据。然后,它将这些二进制文件及它们的依赖项加载进内存,以改善应用程序的加载时间。简而言之,一旦安装了 Preload你使用较为频繁的应用程序将可能加载得更快。
在这篇详细的教程中,我们将去了解如何安装和使用 Preload以改善应用程序在 Linux 中的启动时间。
### 在 Linux 中使用 Preload 改善应用程序启动时间
Preload 可以在 [AUR][1] 上找到。因此,你可以在任何基于 Arch 的系统(比如 Antergos、Manjaro Linux上使用 AUR 助手程序来安装它。
使用 [Pacaur][2]
```
$ pacaur -S preload
```
使用 [Packer][3]
```
$ packer -S preload
```
使用 [Trizen][4]
```
$ trizen -S preload
```
使用 [Yay][5]
```
$ yay -S preload
```
使用 [Yaourt][6]
```
$ yaourt -S preload
```
在 Debian、Ubuntu、Linux Mint 上Preload 可以在默认仓库中找到。因此,你可以像下面一样,使用 APT 包管理器去安装它。
```
$ sudo apt-get install preload
```
Preload 安装完成后重新启动你的系统。从现在开始Preload 将监视频繁使用的应用程序,并将它们的二进制文件和库添加到内存中,以使它的启动速度更快。比如,如果你经常使用 Firefox、Chrome 以及 LibreOfficePreload 将添加这些二进制文件和库到内存中,因此,这些应用程序将启动的更快。而且更好的是,它不需要做任何配置。它是开箱即用的。但是,如果你想去对它进行微调,你可以通过编辑缺省的配置文件 `/etc/preload.conf` 来实现。
### Preload 并不一定适合每个人!
以下是 Preload 的一些缺点,它并不是对每个人都有帮助,在这个 [跟贴][7] 中有讨论到。
1. 我使用的是一个有 8GB 内存的现代系统。因此我的系统总体上来说很快。我每天只打开狂吃内存的应用程序比如Firefox、Chrome、VirtualBox、Gimp 等等)一到两次,并且它们始终处于打开状态,因此,它们的二进制文件和库被预读到内存中,并始终整天在内存中。我一般很少去关闭和打开这些应用程序,因此,内存使用纯属浪费。
2. 如果你使用的是带有 SSD 的现代系统Preload 是绝对没用的。因为 SSD 的访问时间比起一般的硬盘来要快的多,因此,使用 Preload 是没有意义的。
3. Preload 显著影响启动时间。因为更多的应用程序要被预读到内存中,这将让你的系统启动运行时间更长。
你只有在每天都在大量的重新加载应用程序时才能看到真正的差别。因此Preload 最适合开发人员和测试人员,他们每天都打开和关闭应用程序好多次。
关于 Preload 更多的信息和它是如何工作的,请阅读它的作者写的完整版的 [Preload 论文][8]。
教程到此为止,希望能帮到你。后面还有更精彩的内容,请继续关注!
再见!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-improve-application-startup-time-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[qhwdw](https://github.com/qhwdw)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://aur.archlinux.org/packages/preload/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
[7]:https://askubuntu.com/questions/110335/drawbacks-of-using-preload-why-isnt-it-included-by-default
[8]:https://cs.uwaterloo.ca/~brecht/courses/702/Possible-Readings/prefetching-to-memory/preload-thesis.pdf
如何在 Linux 终端下检查笔记本电池状态
======
![](https://www.ostechnix.com/wp-content/uploads/2016/12/Check-Laptop-Battery-Status-In-Terminal-In-Linux-720x340.png)
在图形界面下查看你的笔记本电池状态是很容易的,只需将鼠标指向任务栏中的电池图标上,你便可以很容易地知道电池的电量。但如果我们想要从命令行中获得这些信息呢?并不是所有人都知道如何做到这点。前几天我的一个朋友询问我如何从他的 Ubuntu 系统里,在终端中查看他的笔记本电池的电量。这便是我写这篇文章的起因。在本文中我概括了三种简单的方法来让你在任何 Linux 发行版本中从终端查看笔记本电池的状态。
### 在终端下检查笔记本电池状态
我们可以使用下面的三种方法来从命令行中查找到笔记本电池状态。
#### 方法一 使用 upower 命令
`upower` 命令预装在大多数的 Linux 发行版本中。为了使用 `upower` 命令来展示电池的状态,打开终端并运行如下命令:
```
$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
```
示例输出:
```
native-path: BAT0
vendor: Samsung SDI
model: DELL 7XFJJA2
serial: 4448
power supply: yes
updated: Sat 12 May 2018 06:48:48 PM IST (41 seconds ago)
has history: yes
has statistics: yes
battery
present: yes
rechargeable: yes
state: charging
warning-level: none
energy: 43.3011 Wh
energy-empty: 0 Wh
energy-full: 44.5443 Wh
energy-full-design: 48.84 Wh
energy-rate: 9.8679 W
voltage: 12.548 V
time to full: 7.6 minutes
percentage: 97%
capacity: 91.2045%
technology: lithium-ion
icon-name: 'battery-full-charging-symbolic'
History (charge):
1526131128 97.000 charging
History (rate):
1526131128 9.868 charging
```
正如你所看到的那样,我的电池正处于充电状态,并且它的电量百分比是 97%。
假如上面的命令因为某些未知原因不起作用,可以尝试使用下面的命令:
```
$ upower -i `upower -e | grep 'BAT'`
```
示例输出:
```
native-path: BAT0
vendor: Samsung SDI
model: DELL 7XFJJA2
serial: 4448
power supply: yes
updated: Sat 12 May 2018 06:50:49 PM IST (22 seconds ago)
has history: yes
has statistics: yes
battery
present: yes
rechargeable: yes
state: charging
warning-level: none
energy: 43.6119 Wh
energy-empty: 0 Wh
energy-full: 44.5443 Wh
energy-full-design: 48.84 Wh
energy-rate: 8.88 W
voltage: 12.552 V
time to full: 6.3 minutes
percentage: 97%
capacity: 91.2045%
technology: lithium-ion
icon-name: 'battery-full-charging-symbolic'
History (rate):
1526131249 8.880 charging
```
`upower` 不仅可以显示出电池的状态,它还可以显示出已安装电池的其他完整信息,例如电池型号、供应商名称、电池序列号、电池状态、电池电压等信息。
当然,如果你只想显示电池的状态,你可以结合使用 `upower` 命令和 [grep][1] 命令,具体命令如下:
```
$ upower -i $(upower -e | grep BAT) | grep --color=never -E "state|to\ full|to\ empty|percentage"
```
示例输出:
```
state: fully-charged
percentage: 100%
```
![][3]
从上面的输出中可以看到我的笔记本电池已经完全充满了。
想知晓更多的细节,可以参看 man 页:
```
$ man upower
```
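如果想在脚本中进一步使用电量数值,可以用 awk 从 `upower` 的输出中把百分比提取成纯数字。下面是一个小演示(这里用 here-doc 模拟了 `upower` 输出中的相关两行;真实系统上可以把它替换为 `upower -i $(upower -e | grep BAT)` 的输出):

```shell
# 模拟 upower 输出中与电量相关的两行(真实系统上由 upower 命令产生)
pct=$(awk '/percentage/ {gsub(/%/, "", $2); print $2}' <<'EOF'
    state:               charging
    percentage:          97%
EOF
)
echo "当前电量:${pct}%"
```

提取出的 `pct` 是纯数字,方便在后续脚本里做比较或计算。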
#### 方法二 使用 acpi 命令
`acpi` 命令可以用来显示你的 Linux 发行版本中电池的状态以及其他 ACPI 信息。
在某些 Linux 发行版本中,你可能需要安装 `acpi` 命令。
要在 Debian、 Ubuntu 及其衍生版本中安装它,可以使用如下命令:
```
$ sudo apt-get install acpi
```
在 RHEL、 CentOS、 Fedora 等系统中使用:
```
$ sudo yum install acpi
```
或者使用如下命令:
```
$ sudo dnf install acpi
```
在 Arch Linux 及其衍生版本中使用:
```
$ sudo pacman -S acpi
```
一旦 `acpi` 安装好后,运行下面的命令:
```
$ acpi -V
```
注意: 在上面的命令中, `V` 是大写字母。
示例输出:
```
Battery 0: Charging, 99%, 00:02:09 until charged
Battery 0: design capacity 4400 mAh, last full capacity 4013 mAh = 91%
Battery 1: Discharging, 0%, rate information unavailable
Adapter 0: on-line
Thermal 0: ok, 77.5 degrees C
Thermal 0: trip point 0 switches to mode critical at temperature 84.0 degrees C
Cooling 0: Processor 0 of 3
Cooling 1: Processor 0 of 3
Cooling 2: LCD 0 of 15
Cooling 3: Processor 0 of 3
Cooling 4: Processor 0 of 3
Cooling 5: intel_powerclamp no state information available
Cooling 6: x86_pkg_temp no state information available
```
首先让我们来检查电池的电量,可以运行:
```
$ acpi
```
示例输出:
```
Battery 0: Charging, 99%, 00:01:41 until charged
Battery 1: Discharging, 0%, rate information unavailable
```
下面,让我们来查看电池的温度:
```
$ acpi -t
```
示例输出:
```
Thermal 0: ok, 63.5 degrees C
```
如果需要将温度以华氏温标显示,可以使用:
```
$ acpi -t -f
```
示例输出:
```
Thermal 0: ok, 144.5 degrees F
```
如果想查看交流电适配器是否已连接,可以运行:
```
$ acpi -a
```
示例输出:
```
Adapter 0: on-line
```
假如交流电适配器没有连接上,则你将看到如下的输出:
```
Adapter 0: off-line
```
想获取更多的信息,可以查看 man 页:
```
$ man acpi
```
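作为一个简单的应用示例,下面的脚本根据电量百分比给出提醒(这里直接给变量赋了一个演示值;真实系统上可以改为从 `acpi -b` 的输出中解析出百分比数字):

```shell
# 演示用的电量值;真实系统上可从 acpi -b 的输出中解析得到
pct=15
threshold=20

if [ "$pct" -le "$threshold" ]; then
  msg="电量偏低(${pct}%),请及时充电"
else
  msg="电量充足(${pct}%)"
fi
echo "$msg"
```

把它放进 cron 或桌面通知脚本里,就是一个极简的低电量提醒。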
#### 方法三 - 使用 batstat 程序
`batstat` 是一个基于 ncurses 的命令行小工具,使用它可以在类 Unix 系统中展示笔记本电池状态。它可以展示如下具体信息:
* 当前电池电量
* 当前电池所存能量
* 充满时所存能量
* 从程序启动开始经历的时间,它不会追踪记录机器休眠的时间
* 电池电量消耗历史数据
安装 `batstat` 轻而易举。使用下面的命令来克隆该程序的最新版本:
```
$ git clone https://github.com/Juve45/batstat.git
```
上面的命令将拉取 `batstat` 的最新版本并将它的内容保存在一个名为 `batstat` 的文件夹中。
首先将目录切换到 `batstat/bin/` 中:
```
$ cd batstat/bin/
```
接着将 `batstat` 二进制文件复制到 `PATH` 环境变量中的某个目录中,例如 `/usr/local/bin/` 目录:
```
$ sudo cp batstat /usr/local/bin/
```
使用下面的命令来让它可被执行:
```
$ sudo chmod +x /usr/local/bin/batstat
```
最后,使用下面的命令来查看你的电池状态。
```
$ batstat
```
示例输出:
![][4]
从上面的截图中可以看到我的笔记本电池正处于充电状态。
这个小工具还有某些小的限制。在书写本文之时,`batstat` 仅支持显示一个电池的相关信息。而且它只从 `/sys/class/power_supply/` 目录搜集相关的信息。假如你的电池信息被存放在另外的目录中,则这个小工具就不会起作用了。
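顺带一提,`batstat` 所读取的 `/sys/class/power_supply/` 目录,自己用 `cat` 也能直接读。下面的演示先构造了一个模拟的电池目录以便在任何机器上运行;真实系统上把 `B` 指向 `/sys/class/power_supply/BAT0` 之类的节点即可(具体节点名因机器而异):

```shell
# 构造一个模拟的电池 sysfs 目录,仅作演示
B=$(mktemp -d)
echo "Charging" > "$B/status"
echo "97"       > "$B/capacity"

# 真实系统上:B=/sys/class/power_supply/BAT0
status=$(cat "$B/status")
capacity=$(cat "$B/capacity")
echo "${status}, ${capacity}%"
```

`status` 和 `capacity` 是该接口中最常用的两个文件,分别表示充放电状态和电量百分比。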
想知晓更多信息,可以查看 `batstat` 的 [GitHub 主页][5]。
上面就是今天我要分享的所有内容。当然,可能还有很多其他的命令或者程序来从 Linux 终端检查笔记本的电池状态。据我所知,上面给出的命令都运行良好。假如你知道其他命令来查看电池的状态,请在下面的评论框中让我们知晓。假如你所给出的方法能够起作用,我将对我的这篇文章进行更新。
最后,上面便是今天的全部内容了。更多的优质内容敬请期待,敬请关注!
欢呼吧!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-check-laptop-battery-status-in-terminal-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[FSSlc](https://github.com/FSSlc)
校对:[wxy](https://github.com/wxy)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/the-grep-command-tutorial-with-examples-for-beginners/
[3]:http://www.ostechnix.com/wp-content/uploads/2016/12/sk@sk_006-1.png
[4]:http://www.ostechnix.com/wp-content/uploads/2016/12/batstat-1.png
[5]:https://github.com/Juve45/batstat
Translating by FelixYFZ 7 leadership rules for the DevOps age
======
![](https://enterprisersproject.com/sites/default/files/styles/620x350/public/images/CIO_DigitalAcumen_2.png?itok=TGeMQYs4)
Translating by FelixYFZ Being open about data privacy
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_opendata.png?itok=M8L2HGVx)
Image by : opensource.com
Today is [Data Privacy Day][1], ("Data Protection Day" in Europe), and you might think that those of us in the open source world should think that all data should be free, [as information supposedly wants to be][2], but life's not that simple. That's for two main reasons:
1. Most of us (and not just in open source) believe there's at least some data about us that we might not feel happy sharing (I compiled an example list in [a post][3] I published a while ago).
2. Many of us working in open source actually work for commercial companies or other organisations subject to legal requirements around what they can share.
So actually, data privacy is something that's important for pretty much everybody.
It turns out that the starting point for what data people and governments believe should be available for organisations to use is somewhat different between the U.S. and Europe, with the former generally providing more latitude for entities--particularly, the more cynical might suggest, large commercial entities--to use data they've collected about us as they will. Europe, on the other hand, has historically taken a more restrictive view, and on the 25th of May, Europe's view arguably will have triumphed.
### The impact of GDPR
That's a rather sweeping statement, but the fact remains that this is the date on which a piece of legislation called the General Data Protection Regulation (GDPR), enacted by the European Union in 2016, becomes enforceable. The GDPR basically provides a stringent set of rules about how personal data can be stored, what it can be used for, who can see it, and how long it can be kept. It also describes what personal data is--and it's a pretty broad set of items, from your name and home address to your medical records and on through to your computer's IP address.
What is important about the GDPR, though, is that it doesn't apply just to European companies, but to any organisation processing data about EU citizens. If you're an Argentinian, Japanese, U.S., or Russian company and you're collecting data about an EU citizen, you're subject to it.
"Pah!" you may say,¹ "I'm not based in the EU: what can they do to me?" The answer is simple: If you want to continue doing any business in the EU, you'd better comply, because if you breach GDPR rules, you could be liable for up to four percent of your global revenues. Yes, that's global revenues: not just revenues in a particular country in Europe or across the EU, not just profits, but global revenues. Those are the sorts of numbers that should lead you to talk to your legal team, who will direct you to your exec team, who will almost immediately direct you to your IT group to make sure you're compliant in pretty short order.
This may seem like it's not particularly relevant to non-EU citizens, but it is. For most companies, it's going to be simpler and more efficient to implement the same protection measures for data associated with all customers, partners, and employees they deal with, rather than just targeting specific measures at EU citizens. This has got to be a good thing.²
However, just because GDPR will soon be applied to organisations across the globe doesn't mean that everything's fine and dandy³: it's not. We give away information about ourselves all the time--and permission for companies to use it.
There's a telling (though disputed) saying: "If you're not paying, you're the product." What this suggests is that if you're not paying for a service, then somebody else is paying to use your data. Do you pay to use Facebook? Twitter? Gmail? How do you think they make their money? Well, partly through advertising, and some might argue that's a service they provide to you, but actually that's them using your data to get money from the advertisers. You're not really a customer of advertising--it's only once you buy something from the advertiser that you become their customer, but until you do, the relationship is between the owner of the advertising platform and the advertiser.
Some of these services allow you to pay to reduce or remove advertising (Spotify is a good example), but on the other hand, advertising may be enabled even for services that you think you do pay for (Amazon is apparently working to allow adverts via Alexa, for instance). Unless we want to start paying to use all of these "free" services, we need to be aware of what we're giving up, and making some choices about what we expose and what we don't.
### Who's the customer?
There's another issue around data that should be exercising us, and it's a direct consequence of the amounts of data that are being generated. There are many organisations out there--including "public" ones like universities, hospitals, or government departments⁴--who generate enormous quantities of data all the time, and who just don't have the capacity to store it. It would be a different matter if this data didn't have long-term value, but it does, as the tools for handling Big Data are developing, and organisations are realising they can be mining this now and in the future.
The problem they face, though, as the amount of data increases and their capacity to store it fails to keep up, is what to do with it. Luckily--and I use this word with a very heavy dose of irony⁵--big corporations are stepping in to help them. "Give us your data," they say, "and we'll host it for free. We'll even let you use the data you collected when you want to!" Sounds like a great deal, yes? A fantastic example of big corporations⁶ taking a philanthropic stance and helping out public organisations that have collected all of that lovely data about us.
Sadly, philanthropy isn't the only reason. These hosting deals come with a price: in exchange for agreeing to host the data, these corporations get to sell access to it to third parties. And do you think the public organisations, or those whose data is collected, will get a say in who these third parties are or how they will use it? I'll leave this as an exercise for the reader.⁷
### Open and positive
It's not all bad news, however. There's a growing "open data" movement among governments to encourage departments to make much of their data available to the public and other bodies for free. In some cases, this is being specifically legislated. Many voluntary organisations--particularly those receiving public funding--are starting to do the same. There are glimmerings of interest even from commercial organisations. What's more, there are techniques becoming available, such as those around differential privacy and multi-party computation, that are beginning to allow us to mine data across data sets without revealing too much about individuals--a computing problem that has historically been much less tractable than you might otherwise expect.
What does this all mean to us? Well, I've written before on Opensource.com about the [commonwealth of open source][4], and I'm increasingly convinced that we need to look beyond just software to other areas: hardware, organisations, and, relevant to this discussion, data. Let's imagine that you're a company (A) that provides a service to another company, a customer (B).⁸ There are four different types of data in play:
1. Data that's fully open: visible to A, B, and the rest of the world
2. Data that's known, shared, and confidential: visible to A and B, but nobody else
3. Data that's company-confidential: visible to A, but not B
4. Data that's customer-confidential: visible to B, but not A
First of all, maybe we should be a bit more open about data and default to putting it into bucket 1. That data--on self-driving cars, voice recognition, mineral deposits, demographic statistics--could be enormously useful if it were available to everyone.⁹ Also, wouldn't it be great if we could find ways to make the data in buckets 2, 3, and 4--or at least some of it--available in bucket 1, whilst still keeping the details confidential? That's the hope for some of these new techniques being researched. They're a way off, though, so don't get too excited, and in the meantime, start thinking about making more of your data open by default.
### Some concrete steps
So, what can we do around data privacy and being open? Here are a few concrete steps that occurred to me: please use the comments to contribute more.
* Check to see whether your organisation is taking GDPR seriously. If it isn't, push for it.
* Default to encrypting sensitive data (or hashing where appropriate), and deleting when it's no longer required--there's really no excuse for data to be in the clear these days except for when it's actually being processed.
* Consider what information you disclose when you sign up to services, particularly social media.
* Discuss this with your non-technical friends.
* Educate your children, your friends' children, and their friends. Better yet, go and talk to their teachers about it and present something in their schools.
* Encourage the organisations you work for, volunteer for, or interact with to make data open by default. Rather than thinking, "why should I make this public?" start with "why shouldn't I make this public?"
* Try accessing some of the open data sources out there. Mine it, create apps that use it, perform statistical analyses, draw pretty graphs,¹⁰ make interesting music, but consider doing something with it. Tell the organisations that sourced it, thank them, and encourage them to do more.
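On the hashing point in the list above, here is a minimal sketch of what "hashing where appropriate" can look like with standard command-line tools; the identifier and the salt handling are illustrative only, not a production design:

```shell
# Derive a salted digest of an identifier instead of storing it in the clear.
# The email address here is a made-up illustrative value.
salt=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
digest=$(printf '%s%s' "$salt" "alice@example.com" | sha256sum | cut -d ' ' -f 1)

# Store the salt alongside the digest; the raw identifier never touches disk.
echo "salt=$salt digest=$digest"
```

For passwords specifically you would reach for a purpose-built scheme (bcrypt, scrypt, Argon2) rather than a bare SHA-256, but the store-salt-plus-digest shape is the same.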
1. Though you probably won't, I admit.
2. Assuming that you believe that your personal data should be protected.
3. If you're wondering what "dandy" means, you're not alone at this point.
4. Exactly how public these institutions seem to you will probably depend on where you live: [YMMV][5].
5. And given that I'm British, that's a really very, very heavy dose.
6. And they're likely to be big corporations: nobody else can afford all of that storage and the infrastructure to keep it available.
7. No. The answer's "no."
8. Although the example works for people, too. Oh, look: A could be Alice, B could be Bob…
9. Not that we should be exposing personal data or data that actually needs to be confidential, of course--not that type of data.
10. A friend of mine decided that it always seemed to rain when she picked her children up from school, so to avoid confirmation bias, she accessed rainfall information across the school year and created graphs that she shared on social media.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/being-open-about-data-privacy
作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Data_Privacy_Day
[2]:https://en.wikipedia.org/wiki/Information_wants_to_be_free
[3]:https://aliceevebob.wordpress.com/2017/06/06/helping-our-governments-differently/
[4]:https://opensource.com/article/17/11/commonwealth-open-source
[5]:http://www.outpost9.com/reference/jargon/jargon_40.html#TAG2036
translating----geekpi
College student reflects on getting started in open source
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA)
I just completed the first semester of my second year in college, and I'm reflecting on what I learned in my classes. One class, in particular, stood out to me: "[Foundations of an Open Source World][1]," taught by Dr. Bryan Behrenshausen at Duke University. I enrolled in the class at the last minute because it seemed interesting and, if I'm being honest, because it fit my schedule.
On the first day, Dr. Behrenshausen asked if we students knew or had used any open source programs. Until that day I had hardly heard [the term “open source”][2] and certainly wasn't cognizant of any products that fell into that category. As the semester went on, however, it dawned on me that the passion I have towards my career aspirations would not exist without open source.
### Audacity and GIMP
My interest in technology started at age 12. Charged with the task of cutting music for my dance team, I searched the web for hours until I found Audacity, an open source audio editor. Audacity opened doors for me; no longer was I confined to repetitive eight-counts of the same beat. I started receiving requests left and right from others who wanted unique renditions of their favorite songs.
Weeks later, I stumbled upon a GIF on the internet of a cat with a Pop-Tart torso and a rainbow trail flying through space. I searched "how to make moving images" and discovered [GIMP][3], an open source graphics editor, and used it to create a GIF of "The Simpsons" for my brother's birthday present.
My budding interest grew into a full-time obsession: creating artwork on my clunky, laggy laptop. Since I didn't have much luck with charcoal, paint, or watercolors, I used [graphic design][4] as an outlet for creative expression. I spent hours in the computer lab learning the basics of HTML and CSS on [W3Schools][5] so that I could fill an online portfolio with my childish GIFs. A few months later, I published my first website on [WordPress][6].
### Why open source
Open source allows us to not only achieve our goals but to discover interests that drive those goals.
Fast-forward nearly a decade. Many things have changed, although some have stayed consistent: I still make graphics (mostly flyers), edit music for a dance group, and design websites (sleeker, more effective ones, I hope). The products I used have gone through countless version upgrades. But the most dramatic change is my approach to open source resources.
Considering the significance of open source products in my life has made me cherish the open movement and its mission. Open source projects remind me that there are initiatives in tech that promote social good and self-learning without being exclusive to those who are socioeconomically advantaged. My middle-school self, like countless others, couldn't afford to purchase the Adobe Creative Suite, GarageBand, or Squarespace. Open source platforms allow us to not only achieve our goals but to discover interests that drive those goals by broadening our access networks.
My advice? Enroll in a class on a whim. It just might change the way you view the world.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/college-getting-started
作者:[Christine Hwang][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/christinehwang
[1]:https://ssri.duke.edu/news/new-course-explores-open-source-principles
[2]:https://opensource.com/node/42001
[3]:https://www.gimp.org/
[4]:https://opensource.com/node/30251
[5]:https://www.w3schools.com/
[6]:https://opensource.com/node/31441
translating---geekpi
Easily Run And Integrate AppImage Files With AppImageLauncher
======
Did you ever download an AppImage file and you didn't know how to use it? Or maybe you know how to use it but you have to navigate to the folder where you downloaded the .AppImage file every time you want to run it, or manually create a launcher for it.
With AppImageLauncher, these are problems of the past. The application lets you **easily run AppImage files, without having to make them executable**. But its most interesting feature is easily integrating AppImages with your system: **AppImageLauncher can automatically add an AppImage application shortcut to your desktop environment's application launcher / menu (including the app icon and proper description).**
Without manually making the downloaded Kdenlive AppImage executable, the first time I double-click it (with AppImageLauncher installed), AppImageLauncher presents two options:
`Run once` or `Integrate and run`.
Clicking on Integrate and run, the AppImage is copied to the `~/.bin/` folder (hidden folder in the home directory) and is added to the menu, then the app is launched.
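For comparison, the manual routine that AppImageLauncher automates looks roughly like this (shown here on an empty placeholder file rather than a real AppImage download):

```shell
# Stand-in for a downloaded AppImage file (just an empty placeholder here)
app=$(mktemp)

# The manual steps AppImageLauncher saves you from:
chmod +x "$app"                          # 1. mark the file executable
test -x "$app" && echo "ready to run"    # 2. then you would launch it: "$app"
```

On top of that, you would still have to hand-write a `.desktop` launcher entry to get the app into your menu, which is exactly the part AppImageLauncher's "Integrate and run" option does for you.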
**Removing it is just as simple**, as long as the desktop environment you're using has support for desktop actions. For example, in Gnome Shell, simply **right-click the application icon in the Activities Overview and select** `Remove from system`:
The AppImageLauncher GitHub page says that the application only supports Debian-based systems for now (this includes Ubuntu and Linux Mint) because it integrates deeply with the system. The application is currently in heavy development, and there are already issues opened by its developer to build RPM packages, so Fedora / openSUSE support might be added in the not too distant future.
### Download AppImageLauncher
The AppImageLauncher download page provides binaries for Debian, Ubuntu or Linux Mint (64bit), as well as a 64bit AppImage. The source is also available.
[Download AppImageLauncher][1]
--------------------------------------------------------------------------------
via: https://www.linuxuprising.com/2018/04/easily-run-and-integrate-appimage-files.html
作者:[Logix][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://plus.google.com/118280394805678839070
[1]:https://github.com/TheAssassin/AppImageLauncher/releases
[2]:https://kdenlive.org/download/
A year as Android Engineer
============================================================
![](https://cdn-images-1.medium.com/max/2000/1*tqshw1o4JZZlA1HW3Cki1Q.png)
>Awesome drawing by [Miquel Beltran][0]
Two years ago I started my career in tech. I started as QA Tester and then transitioned into a developer role a year after. Not without a lot of effort and a lot of personal time invested.
You can find that part of the story in this post about [how I switch careers from Biology to tech][1] and how I [learned Android for a year][2]. Today I want to talk about how I started my first role as Android developer, how I switched companies and how my first year as Android Engineer has been overall.
### My first role
My first role as Android developer started out just a year ago. The company I was working at provided me with the opportunity to transition from QA to Android developer by dedicating half of the time to each role.
This transition was thanks to the time I invested learning Android on evenings and weekends. I went through the [Android Basics Nanodegree][3], the [Android Developer Nanodegree][4] and as well I got the [Google Developers Certification][5]. [That part of the story is explained in more detail here.][6]
After two months I switched to full time when they hired another QA. Here's where all the challenges began!
Transitioning someone into a developer role is a lot harder than just providing them with a laptop and a git account. And here I explain some of the roadblocks I got during that time:
#### Lack of expectations
The first problem I faced was not knowing the expectations that the company had for me. My thought was that they expected me to deliver from the very first day, probably not like my experienced colleagues, but deliver by doing small tasks. This feeling caused me a lot of pressure. By not having clear goals, I was constantly thinking I wasn't good enough and that I was an impostor.
#### Lack of mentorship
There was no concept of mentorship in the company and the environment didn't allow us to work together. We barely did pair-programming, because there was always a deadline and the company wanted us to keep delivering. Luckily my colleagues were always willing to help! They sat with me whenever I got stuck or asked for help.
#### Lack of feedback
I never got any feedback during that time. What was I doing well or badly? What could I improve? I didn't know, since I didn't have anyone to report to.
#### Lack of learning culture
I think that in order to keep up to date we need to continue learning by reading blog posts, watching talks, attending conferences, trying new things, etc. The company didn't offer learning hours during working time, which is unfortunately quite common, as other devs told me. Without having learning time, I felt I wasn't entitled to spend even 10 minutes to read a blog post I found to be interesting and relevant for my job.
The problem was not only the lack of an explicit learning time allowance, but also that when I requested it, it got denied.
An example of that occurred when I finished my tasks for the sprint and we were already at the end of it, so I asked if I could spend the rest of the day learning Kotlin. This request got denied.
Another case was when I requested to attend an Android conference, and then I was asked to take days from my paid time off.
#### Impostor syndrome
The lack of expectations, the lack of feedback, and the lack of learning culture in the company made the first 9 months of my developer career even more challenging. I have the feeling that it contributed to increasing my internal impostor syndrome.
One example of it was opening and reviewing pull requests. Sometimes I'd ask a colleague to check my code privately, rather than opening a pull request, to avoid showing my code to everyone.
Other times, when I was reviewing, I would spend minutes staring the "approve" button, worried of approving something that another colleague would have considered wrong.
And when I didn't agree on something, I never spoke up loudly enough, worried about a backlash due to my lack of knowledge.
> Sometimes I'd ask a colleague to check my code privately […] to avoid showing my code to everyone.
* * *
### New company, new challenges
Later on I got a new opportunity in my hands. I was invited to the hiring process for a Junior Android Engineer position at [Babbel][7] thanks to a friend who worked with me in the past.
I met the team while volunteering in a local meet-up that was hosted at their offices. That helped me a lot when deciding to apply. I loved the company's motto: learning for all. Also everyone was very friendly and looked happy working there! But I didn't apply straight away, because why would I apply if I thought that I wasn't good enough?
Luckily my friend and my partner pushed me to do it. They gave me the strength I needed to send my CV. Shortly after I got into the interview process. It was fairly simple: I had to do a coding challenge in the form of a small app, and then later a technical interview with the team and team fit interview with the hiring manager.
#### Hiring process
I spent the weekend with the coding challenge and sent it right after on Monday. Soon after I got invited for an on-site interview.
The technical interview was about the coding challenge itself: we talked about good and bad practices on Android, why I implemented things a certain way, how it could be improved, and so on. It was followed by a short team-fit interview with the hiring manager, where we talked about challenges I had faced, how I solved those problems, and so on.
They offered me the job and I accepted the offer!
On my first year working as Android developer, I spent 9 months in a company and the next 3 with my current employer.
#### Learning environment
One big change for me is having 1:1 meetings with my Engineering Manager every two weeks. That way, it's clear for me what are our expectations.
We are getting constant feedback and ideas on what to improve and how to help and be helped. Besides the internal trainings they offer, I also have a weekly learning time allowance to learn anything I want. So far, I've been using it to improve my Kotlin and RxJava knowledge.
We also do pair-programming almost daily. There's always paper and pens nearby my desk to sketch ideas. And I keep a second chair by my side so my colleagues can sit with me :-)
However, I still struggle with the impostor syndrome.
#### Still the Impostor syndrome
I still struggle with it. For example, when doing pair-programming, if we reach a topic I don't quite understand, even when my colleagues have the patience to explain it to me many times, there are times I just can't get it.
After the second or third time I start feeling a big pressure on my chest. How come I don't get it? Why is it so difficult for me to understand? This situation creates me anxiety.
I realized I need to accept that I might not understand a given topic but that getting the idea is the first step! Sometimes we just require more time and practice so it finally "compiles in our brains" :-)
For example, I used to struggle with Java interfaces vs. abstract classes, I just couldn't understand them completely, no matter how many examples I saw. But then I started using them, and even if I could not explain how they worked, I knew how and when to use them.
#### Confidence
The learning environment in my current company helps me in building confidence. Even if I've been asking a lot of questions, there's always room for more!
Having less experience doesn't mean your opinions will be less valued. For example if a proposed solution seems too complex, I will challenge them to write it in a clearer way. Also, I provide a different set of experience and points of view, which has been helpful so far for polishing the app user experience.
### To improve
The engineer role isn't just coding, but rather a broad range of skills. I am still at the beginning of the journey, and on my way of mastering it, I want to focus on the following ideas:
* Communication: as English isn't my first language, sometimes I struggle to convey an idea, which is essential for my job. I can work on that by writing, reading, and talking more.
* Give constructive feedback: I want to give meaningful feedback to my colleagues so they can grow with me as well.
* Be proud of my achievements: I need to keep a list of all kinds of achievements, small or big, and my overall progress, so I can look back and feel good when I struggle.
* Not obsessing over what I don't know: hard to do when there are so many new things coming up, so staying focused on the essentials, and on what my current project requires, is important.
* Share more knowledge with my colleagues! Being a junior doesn't mean I don't have anything to share! I need to keep sharing articles and talks I find interesting. I know my colleagues appreciate that.
* Be patient and constantly learn: keep learning as I am doing, but be more patient with myself.
* Self-care: take breaks whenever needed and don't be too hard on myself. Relaxing is productive, too.
--------------------------------------------------------------------------------
via: https://proandroiddev.com/a-year-as-android-engineer-55e2a428dfc8
作者:[Lara Martín][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://proandroiddev.com/@laramartin
[0]:https://medium.com/@Miqubel
[1]:https://medium.com/@laramartin/how-i-took-my-first-step-in-it-6e9233c4684d
[2]:https://medium.com/udacity/a-year-of-android-ffba9f3e40b6
[3]:https://de.udacity.com/course/android-basics-nanodegree-by-google--nd803
[4]:https://de.udacity.com/course/android-developer-nanodegree-by-google--nd801
[5]:https://developers.google.com/training/certification/
[6]:https://medium.com/udacity/a-year-of-android-ffba9f3e40b6
[7]:http://babbel.com/


@ -0,0 +1,164 @@
How To Downgrade A Package In Arch Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2016/05/Arch-Linux-720x340.jpg)
As you might know, Arch Linux is a rolling-release, DIY (do-it-yourself) distribution. So you have to be a bit careful while updating it, especially when installing or updating packages from third-party repositories like the AUR. You might end up with a broken system if you don't know what you are doing. It is your responsibility to keep your Arch Linux installation stable. However, we all make mistakes, and it is difficult to be careful all the time. Sometimes you want to update to the most bleeding-edge packages, and you might get stuck with broken ones. Don't panic! In such cases, you can simply roll back to the old stable packages. This short tutorial describes how to downgrade a package in Arch Linux and its derivatives, such as Antergos and Manjaro Linux.
### Downgrade a package in Arch Linux
In Arch Linux, there is a utility called **“downgrade”** that helps you downgrade an installed package to any available older version. This utility checks your local cache and the remote servers (Arch Linux repositories) for older versions of the requested package. You can pick any one of the old stable packages from that list and install it.
This package is not available in the official repositories. You need to add the unofficial **archlinuxfr** repository.
To do so, edit **/etc/pacman.conf** file:
```
$ sudo nano /etc/pacman.conf
```
Add the following lines:
```
[archlinuxfr]
SigLevel = Never
Server = http://repo.archlinux.fr/$arch
```
Save and close the file.
Update the repositories with command:
```
$ sudo pacman -Sy
```
Then install the “downgrade” utility using the following command from your terminal:
```
$ sudo pacman -S downgrade
```
**Sample output:**
```
resolving dependencies...
looking for conflicting packages...
Packages (1) downgrade-5.2.3-1
Total Download Size: 0.01 MiB
Total Installed Size: 0.10 MiB
:: Proceed with installation? [Y/n]
```
The typical usage of the “downgrade” command is:
```
$ sudo downgrade [PACKAGE, ...] [-- [PACMAN OPTIONS]]
```
Let us say you want to downgrade **opera web browser** to any available old version.
To do so, run:
```
$ sudo downgrade opera
```
This command will list all available versions of the opera package (both new and old) from your local cache and the remote mirror.
**Sample output:**
```
Available packages:
1) opera-37.0.2178.43-1-x86_64.pkg.tar.xz (local)
2) opera-37.0.2178.43-1-x86_64.pkg.tar.xz (remote)
3) opera-37.0.2178.32-1-x86_64.pkg.tar.xz (remote)
4) opera-36.0.2130.65-2-x86_64.pkg.tar.xz (remote)
5) opera-36.0.2130.65-1-x86_64.pkg.tar.xz (remote)
6) opera-36.0.2130.46-2-x86_64.pkg.tar.xz (remote)
7) opera-36.0.2130.46-1-x86_64.pkg.tar.xz (remote)
8) opera-36.0.2130.32-2-x86_64.pkg.tar.xz (remote)
9) opera-36.0.2130.32-1-x86_64.pkg.tar.xz (remote)
10) opera-35.0.2066.92-1-x86_64.pkg.tar.xz (remote)
11) opera-35.0.2066.82-1-x86_64.pkg.tar.xz (remote)
12) opera-35.0.2066.68-1-x86_64.pkg.tar.xz (remote)
13) opera-35.0.2066.37-2-x86_64.pkg.tar.xz (remote)
14) opera-34.0.2036.50-1-x86_64.pkg.tar.xz (remote)
15) opera-34.0.2036.47-1-x86_64.pkg.tar.xz (remote)
16) opera-34.0.2036.25-1-x86_64.pkg.tar.xz (remote)
17) opera-33.0.1990.115-2-x86_64.pkg.tar.xz (remote)
18) opera-33.0.1990.115-1-x86_64.pkg.tar.xz (remote)
19) opera-33.0.1990.58-1-x86_64.pkg.tar.xz (remote)
20) opera-32.0.1948.69-1-x86_64.pkg.tar.xz (remote)
21) opera-32.0.1948.25-1-x86_64.pkg.tar.xz (remote)
22) opera-31.0.1889.174-1-x86_64.pkg.tar.xz (remote)
23) opera-31.0.1889.99-1-x86_64.pkg.tar.xz (remote)
24) opera-30.0.1835.125-1-x86_64.pkg.tar.xz (remote)
25) opera-30.0.1835.88-1-x86_64.pkg.tar.xz (remote)
26) opera-30.0.1835.59-1-x86_64.pkg.tar.xz (remote)
27) opera-30.0.1835.52-1-x86_64.pkg.tar.xz (remote)
28) opera-29.0.1795.60-1-x86_64.pkg.tar.xz (remote)
29) opera-29.0.1795.47-1-x86_64.pkg.tar.xz (remote)
30) opera-28.0.1750.51-1-x86_64.pkg.tar.xz (remote)
31) opera-28.0.1750.48-1-x86_64.pkg.tar.xz (remote)
32) opera-28.0.1750.40-1-x86_64.pkg.tar.xz (remote)
33) opera-27.0.1689.76-1-x86_64.pkg.tar.xz (remote)
34) opera-27.0.1689.69-1-x86_64.pkg.tar.xz (remote)
35) opera-27.0.1689.66-1-x86_64.pkg.tar.xz (remote)
36) opera-27.0.1689.54-2-x86_64.pkg.tar.xz (remote)
37) opera-27.0.1689.54-1-x86_64.pkg.tar.xz (remote)
38) opera-26.0.1656.60-1-x86_64.pkg.tar.xz (remote)
39) opera-26.0.1656.32-1-x86_64.pkg.tar.xz (remote)
40) opera-12.16.1860-2-x86_64.pkg.tar.xz (remote)
41) opera-12.16.1860-1-x86_64.pkg.tar.xz (remote)
select a package by number:
```
Just type the package number of your choice, and hit enter to install it.
That's it. The currently installed package will be downgraded to the old version.
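Out of curiosity, the version ordering you see in lists like the one above can be approximated outside pacman. Here is a naive Python sketch that compares two of the opera version strings; real pacman uses its own `vercmp` algorithm, which also handles epochs and alphanumeric segments, so treat this as an illustration only:

```python
# Naive sketch of how pacman-style "pkgver-pkgrel" strings are ordered.
# Real pacman uses vercmp, which also handles epochs and alphanumeric
# segments, so this is only an approximation.
def vertuple(version):
    """Split a string like '37.0.2178.43-1' into a comparable integer tuple."""
    pkgver, pkgrel = version.rsplit("-", 1)
    return tuple(int(part) for part in pkgver.split(".")) + (int(pkgrel),)

versions = ["36.0.2130.65-2", "37.0.2178.43-1"]
print(max(versions, key=vertuple))  # -> 37.0.2178.43-1
```

On Arch itself, the `vercmp` tool shipped with pacman is the authoritative comparator; it prints a positive number when its first argument is the newer version.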
**Also Read:[How To Downgrade All Packages To A Specific Date In Arch Linux][1]**
##### So, how can you avoid broken packages and make Arch Linux more stable?
Check the [**Arch Linux news**][2] and [**forums**][3] before updating to find out if any problems have been reported. I have been using Arch Linux as my main OS for the past few weeks. Here are some simple tips I have found over time to avoid installing unstable packages in Arch Linux.
1. Avoid partial upgrades. That means never run “pacman -Sy <package-name>”. This command will partially upgrade your system while installing a package. Instead, first use “pacman -Syu” to update the system, and then use “pacman -S <package-name>” to install the package.
2. Avoid using the “pacman -Syu --force” command. The --force flag ignores package and file conflicts, and you might end up with broken packages or a broken system.
3. Do not skip the dependency check. That means do not use “pacman -Rdd <package-name>”. This command skips the dependency check while removing a package, so a critical dependency needed by another important package could be removed as well. Eventually, it will break your Arch Linux.
4. It is always a good practice to make regular backups of important data and configuration files to avoid any data loss.
5. Be careful while installing packages from third-party and unofficial repositories like the AUR. And do not install packages that are under heavy development.
For more details, check the [**Arch Linux maintenance guide**][4].
I am not an Arch Linux expert, and I am still learning to make it more stable. Please feel free to let me know if you have any tips to make Arch Linux stable and safe in the comment section below. I am all ears.
Hope this helps. That's all for now. I will be here again with another interesting article soon. Until then, stay tuned with OSTechNix.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/downgrade-package-arch-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/downgrade-packages-specific-date-arch-linux/
[2]:https://www.archlinux.org/news/
[3]:https://bbs.archlinux.org/
[4]:https://wiki.archlinux.org/index.php/System_maintenance


@ -1,52 +0,0 @@
3 ways robotics affects the CIO role
======
![配图](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai.png?itok=toMIgELj)
As 2017 comes to a close, many CIOs are solidifying their goals for 2018. Perhaps yours involve robotic process automation (RPA). For years, RPA has been a distant concept for many companies. But as organizations are forced to become ever more nimble and efficient, the potential benefits of RPA bear examining.
According to a recent [survey][1] by Redwood Software and Sapio Research, IT decision makers believe that 59 percent of business processes can be automated in the next five years, creating new speed and efficiency while relieving their human counterparts of repetitive manual workloads. However, 20 percent of companies with more than 1,000 employees currently have no RPA strategy in place.
For CIOs, RPA has implications for your role in the business and for your team. Here are three ways that the role of the CIO and other IT decision makers can change as RPA gains prominence in 2018:
### Added opportunity to be strategic change agents
As the pressure grows to do more with less, internal processes matter greatly. In every enterprise, employees across departments are performing critical - yet mundane - tasks every single day. These tasks may be boring and repetitive, but they must be performed quickly, and often with no room for error.
**[ For advice on maximizing your automation strategy's ROI, see our related article,[How to improve ROI on automation: 4 tips][2]. ]**
From back-office operations in finance to procurement, supply chain, accounting, customer service, and human resources, nearly every position within an organization is plagued with at least some monotonous tasks. For CIOs, this opens up an opportunity to unite the business with IT and spearhead strategic change with RPA.
Having evolved far beyond screen-scraping technology of the past, robots are now customizable, plug-and-play solutions that can be built to an organization's specific needs. With such a process-centric approach, companies can automate not only tasks previously executed by humans, but also application and system-specific tasks, such as ERP and other enterprise applications.
Enabling a greater level of automation for end-to-end processes is where the value lies. CIOs will be on the front line of this opportunity.
### Renewed focus on people and training
Technology shifts can be unnerving to employees, especially when these changes involve automating substantial portions of their daily duties. The CIO should articulate how RPA will change roles and responsibilities for the better, and fuel data-driven, strategic decisions that will ultimately impact the bottom line.
When implementing RPA, it's important to convey that humans will always be critical to the success of the organization, and that success requires the right balance of technology and human skills.
CIOs should also analyze workflow and implement better processes that go beyond mimicking end-user specific tasks. Through end-to-end process automation, CIOs can enable employees to shine.
Because it will be important to upskill and retrain employees throughout the automation process, CIOs must be prepared to collaborate with the C-suite to determine training programs that help employees navigate the change with confidence.
### Demand for long-term thinking
To succeed with robotic process automation, brands must take a long-term approach. This will require a scalable solution, which in turn will benefit the entire business model, including customers. When Amazon introduced faster delivery options for Prime customers, for example, it didn't just retool the entire order fulfillment process in its warehouses; it automated its online customer experience to make it simpler, faster, and easier than ever for consumers to place orders.
In the coming year, CIOs can approach technology in the same way, architecting holistic solutions to change the way an organization operates. Reducing headcount will net only so much in bottom-line results, but process automation allows CIOs to think bigger through optimization and empowerment. This approach gives CIOs the opportunity to build credibility for the long haul, for themselves and for RPA. This in turn will enhance the CIO's role as a navigator and contributor to the organization's overall success.
For CIOs, taking a long-term, strategic approach to RPA success takes time and hard work. Nevertheless, CIOs who commit the time to create a strategy that balances manpower and technology will deliver value now and in the future.
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2017/11/3-ways-robotics-affects-cio-role
作者:[Dennis Walsh][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/dennis-walsh
[1]:https://www.redwood.com/press-releases/it-decision-makers-speak-59-of-business-processes-could-be-automated-by-2022/
[2]:https://enterprisersproject.com/article/2017/11/how-improve-roi-automation-4-tips?sc_cid=70160000000h0aXAAQ


@ -1,119 +0,0 @@
Testing IPv6 Networking in KVM: Part 2
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/banner_4.png?itok=yZBHylwd)
When last we met, in [Testing IPv6 Networking in KVM: Part 1][1], we learned about IPv6 private addressing. Today, we're going to use KVM to create networks for testing IPv6 to our heart's content.
Should you desire a refresh in using KVM, see [Creating Virtual Machines in KVM: Part 1][2] and [Creating Virtual Machines in KVM: Part 2 - Networking][3].
### Creating Networks in KVM
You need at least two virtual machines in KVM. Of course, you may create as many as you like. My little setup has Fedora, Ubuntu, and openSUSE. To create a new IPv6 network, open Edit > Connection Details > Virtual Networks in the main Virtual Machine Manager window. Click on the button with the green cross on the bottom left to create a new network (Figure 1).
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kvm-fig-1_0.png?itok=ruqjPXxd)
Figure 1: Create a network.
Give your new network a name, then click the Forward button. You may opt to not create an IPv4 network if you wish. When you create a new IPv4 network the Virtual Machine Manager will not let you create a duplicate network, or one with an invalid address. On my host Ubuntu system a valid address is highlighted in green, and an invalid address is highlighted in a tasteful rosy hue. On my openSUSE machine there are no colored highlights. Enable DHCP or not, and create a static route or not, then move on to the next window.
Check "Enable IPv6 network address space definition" and enter your private address range. You may use any IPv6 address class you wish, being careful, of course, to not allow your experiments to leak out of your network. We shall use the nice IPv6 unique local addresses (ULA), and use the online address generator at [Simple DNS Plus][4] to create our network address. Copy the "Combined/CID" address into the Network field (Figure 2).
![network address][6]
Figure 2: Copy the "Combined/CID" address into the Network field.
[Used with permission][7]
Virtual Machine Manager thinks my address is not valid, as evidenced by the rose highlight. Can it be right? Let us use ipv6calc to check:
```
$ ipv6calc -qi fd7d:844d:3e17:f3ae::/64
Address type: unicast, unique-local-unicast, iid, iid-local
Registry for address: reserved(RFC4193#3.1)
Address type has SLA: f3ae
Interface identifier: 0000:0000:0000:0000
Interface identifier is probably manual set
```
ipv6calc thinks it's fine. Just for fun, change one of the numbers to something invalid, like the letter g, and try it again. (Asking "What if...?" and trial and error is the awesomest way to learn.)
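If ipv6calc isn't installed on your machine, Python's standard `ipaddress` module can perform a similar sanity check. This is just a sketch using the ULA from this article; `is_private` is true for anything inside fc00::/7, the block that contains unique local addresses:

```python
import ipaddress

# The ULA network generated for this article.
net = ipaddress.ip_network("fd7d:844d:3e17:f3ae::/64")

print(net.is_private)     # True: ULAs fall inside fc00::/7
print(net.num_addresses)  # 2**64 interface addresses in a /64

# A malformed address (the letter g) raises ValueError,
# much like ipv6calc would complain:
try:
    ipaddress.ip_network("fd7d:844d:3e17:g3ae::/64")
except ValueError as err:
    print("invalid:", err)
```

This gives you a quick programmatic equivalent of the "What if...?" experiments above.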
Let us carry on and enable DHCPv6 (Figure 3). You can accept the default values, or set your own.
![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/kvm-fig-3.png?itok=F-oAAtN9)
We shall skip creating a default route definition and move on to the next screen, where we shall enable "Isolated Virtual Network" and "Enable IPv6 internal routing/networking".
### VM Network Selection
Now you can configure your virtual machines to use your new network. Open your VMs, and then click the "i" button at the top left to open its "Show virtual hardware details" screen. In the "Add Hardware" column click on the NIC button to open the network selector, and select your nice new IPv6 network. Click Apply, and then reboot. (Or use your favorite method for restarting networking, or renewing your DHCP lease.)
### Testing
What does ifconfig tell us?
```
$ ifconfig
ens3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.30.207  netmask 255.255.255.0
        broadcast 192.168.30.255
        inet6 fd7d:844d:3e17:f3ae::6314
        prefixlen 128  scopeid 0x0<global>
        inet6 fe80::4821:5ecb:e4b4:d5fc
        prefixlen 64  scopeid 0x20<link>
```
And there is our nice new ULA, fd7d:844d:3e17:f3ae::6314, and the auto-generated link-local address that is always present. Let's have some ping fun, pinging another VM on the network:
```
vm1 ~$ ping6 -c2 fd7d:844d:3e17:f3ae::2c9f
PING fd7d:844d:3e17:f3ae::2c9f(fd7d:844d:3e17:f3ae::2c9f) 56 data bytes
64 bytes from fd7d:844d:3e17:f3ae::2c9f: icmp_seq=1 ttl=64 time=0.635 ms
64 bytes from fd7d:844d:3e17:f3ae::2c9f: icmp_seq=2 ttl=64 time=0.365 ms
vm2 ~$ ping6 -c2 fd7d:844d:3e17:f3ae:a:b:c:6314
PING fd7d:844d:3e17:f3ae:a:b:c:6314(fd7d:844d:3e17:f3ae:a:b:c:6314) 56 data bytes
64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=1 ttl=64 time=0.744 ms
64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=2 ttl=64 time=0.364 ms
```
When you're struggling to understand subnetting, this gives you a fast, easy way to try different addresses and see whether they work. You can assign multiple IP addresses to a single interface and then ping them to see what happens. In a ULA, the interface, or host, portion of the IP address is the last four quads, so you can do anything to those and still be in the same subnet, which in this example is f3ae. This example changes only the interface ID on one of my VMs, to show how you really can do whatever you want with those four quads:
```
vm1 ~$ sudo /sbin/ip -6 addr add fd7d:844d:3e17:f3ae:a:b:c:6314 dev ens3
vm2 ~$ ping6 -c2 fd7d:844d:3e17:f3ae:a:b:c:6314
PING fd7d:844d:3e17:f3ae:a:b:c:6314(fd7d:844d:3e17:f3ae:a:b:c:6314) 56 data bytes
64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=1 ttl=64 time=0.744 ms
64 bytes from fd7d:844d:3e17:f3ae:a:b:c:6314: icmp_seq=2 ttl=64 time=0.364 ms
```
Now try it with a different subnet, which in this example is f4ae instead of f3ae:
```
$ ping6 -c2 fd7d:844d:3e17:f4ae:a:b:c:6314
PING fd7d:844d:3e17:f4ae:a:b:c:6314(fd7d:844d:3e17:f4ae:a:b:c:6314) 56 data bytes
From fd7d:844d:3e17:f3ae::1 icmp_seq=1 Destination unreachable: No route
From fd7d:844d:3e17:f3ae::1 icmp_seq=2 Destination unreachable: No route
```
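That "No route" reply is exactly what you'd expect: f4ae names a different /64 than f3ae, and nothing routes between them. You can verify the same subnet boundaries programmatically with Python's standard `ipaddress` module (a sketch reusing the addresses from this article):

```python
import ipaddress

# The /64 this article's VMs live in.
subnet = ipaddress.ip_network("fd7d:844d:3e17:f3ae::/64")

same = ipaddress.ip_address("fd7d:844d:3e17:f3ae:a:b:c:6314")
other = ipaddress.ip_address("fd7d:844d:3e17:f4ae:a:b:c:6314")

print(same in subnet)   # True: only the last four quads (the interface ID) differ
print(other in subnet)  # False: f4ae changes the network prefix itself
```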
This is also a great time to practice routing, which we will do in a future installment along with setting up auto-addressing without DHCP.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-2
作者:[CARLA SCHRODER][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/learn/intro-to-linux/2017/11/testing-ipv6-networking-kvm-part-1
[2]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-1
[3]:https://www.linux.com/learn/intro-to-linux/2017/5/creating-virtual-machines-kvm-part-2-networking
[4]:http://simpledns.com/private-ipv6.aspx
[5]:/files/images/kvm-fig-2png
[6]:https://www.linux.com/sites/lcom/files/styles/floated_images/public/kvm-fig-2.png?itok=gncdPGj- (network address)
[7]:https://www.linux.com/licenses/category/used-permission

View File

@ -1,3 +1,5 @@
translating by cizezsy
How To Kill The Largest Process In An Unresponsive Linux System
======
![](https://www.ostechnix.com/wp-content/uploads/2017/11/Kill-The-Largest-Process-720x340.png)


@ -1,95 +0,0 @@
How to create mobile-friendly documentation
======
![配图](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd)
I'm not sold on the whole idea of [mobile first][1], but I do know that more people than ever are using mobile devices like smartphones and tablets to get information on the go. That includes online software and hardware documentation, much of which is lengthy and poorly suited for small screens. Often, it doesn't scale properly, and it can be difficult to navigate.
When people access documentation using a mobile device, they usually want a quick hit of information to learn how to perform a task or solve a problem. They don't want to wade through seemingly endless pages to find the specific piece of information they need. Fortunately, it's not hard to address this problem. Here are a few tips to help you structure your documentation to meet the needs of mobile readers.
### Think short
That means short sentences, short paragraphs, and short procedures. You're not writing a novel or a piece of long-form journalism. Make your documentation concise. Use as few words as possible to get ideas and information across.
Use a radio news report as a guide: Focus on the key elements and explain them in simple, direct language. Don't make your reader wade through screen after screen of turgid text.
Also, get straight to the point. Focus on the information readers need when they need it. Documentation published online shouldn't resemble the thick manuals of yore. Don't lump everything together on a single page. Break your information into smaller chunks. Here's how to do that:
### Think topics
In the technical writing world, topics are individual, stand-alone chunks of information. Each topic comprises a single page on your site. Readers should be able to get the information they need--and only that information--from a specific topic. To make that happen, choose which topics to include in your documentation and decide how to organize them:
### Think DITA
[Darwin Information Typing Architecture][2], or DITA, is an XML model for writing and publishing. It's been [widely adopted][3] in the technical writing world, especially for longer documentation sets.
I'm not suggesting that you convert your documentation to XML (unless you really want to). Instead, consider applying DITA's concept of separate types of topics to your documentation:
* General: overview information
* Task: step-by-step procedures
* Concept: background or conceptual information
* Reference: specialized information like API references or data dictionaries
* Glossary: to define terms
* Troubleshooting: information on problems users may encounter and how to fix them
You'll wind up with a lot of individual pages. To connect those pages:
### Think linking
Many content management systems, wikis, and publishing frameworks include some form of navigation--usually a table of contents or [breadcrumbs][4]. It's the kind of navigation that fades into the background on a mobile device.
For stronger navigation, add explicit links between topics. Place those links at the end of each topic with the heading **See Also** or **Related Topics**. Each section should contain two to five links that point to overview, concept, and reference topics related to the current topic.
If you need to point to information outside of your documentation set, make sure the link opens in a new browser tab. That sends the reader to another site while also keeping them on your site.
That takes care of the text. What about graphics?
### Think unadorned
With a few exceptions, images don't add much to documentation. Take a critical look at each image in your documentation. Then ask:
* Does it serve a purpose?
* Does it enhance the documentation?
* Will readers miss this image if I remove it?
If the answer to those questions is no, remove the image.
On the other hand, if you absolutely can't do without an image, make it [responsive][5]. That way, the image will automatically resize to fit in a smaller screen.
If you're still not sure about an image, Opensource.com community moderator Ben Cotton offers an [excellent explanation][6] of when to use screen captures in documentation.
### A final thought
With a little effort, you can structure your documentation to work well for mobile device users. Another plus: These changes improve documentation for desktop computer and laptop users, too.
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/12/think-mobile
作者:[Scott Nesbitt][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/chrisshort
[1]:https://www.uxmatters.com/mt/archives/2012/03/mobile-first-what-does-it-mean.php
[2]:https://en.wikipedia.org/wiki/Darwin_Information_Typing_Architecture
[3]:http://dita.xml.org/book/list-of-organizations-using-dita
[4]:https://en.wikipedia.org/wiki/Breadcrumb_(navigation)
[5]:https://en.wikipedia.org/wiki/Responsive_web_design
[6]:https://opensource.com/business/15/9/when-does-your-documentation-need-screenshots


@ -1,57 +0,0 @@
lontow translating
5 ways open source can strengthen your job search
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI)
Are you searching for a job in the bustling tech industry? Whether you're a seasoned member of the tech community looking for a new challenge or a recent graduate looking for your first job, contributing to open source projects can be a great way to boost your attractiveness as a candidate. Below are five ways your work on open source projects may strengthen your job hunt.
### 1. Get project experience
Perhaps the clearest way working on open source projects can assist in your job search is by giving you project experience. If you are a student, you may not have many concrete projects to showcase on your resume. If you are working, perhaps you can't discuss your current projects due to privacy limitations, or maybe you're not working on tasks that interest you. Either way, scouting out appealing open source projects that allow you to showcase your skills may help in your job search. These projects are great eye-catchers on resumes and can be perfect discussion topics in interviews.
In addition, many open source projects are kept in public repositories, such as [GitHub][1], so accessing the source code is easy for anyone who wants to become involved. Also, it makes your publicly accessible code contributions easy for recruiters and other individuals at potential employers to find. The fact that these projects are open allows you to demonstrate your skills in a more concrete manner than simply discussing them in an interview.
### 2. Learn to ask good questions
Any new member of an open source project community has the opportunity to learn a lot. They must discover avenues of communication; structure and hierarchy; documentation format; and many other aspects unique to the project. To begin participating in and contributing to a project, you need to ask many questions to put yourself in a position for success. As the familiar saying goes, there are no stupid questions. Open source project communities promote inquisitiveness, especially when answers aren't easy to find.
The unfamiliarity when beginning to work on open source projects teaches individuals to ask questions, and to ask them often. This helps participants develop great skills in identifying what questions to ask, how to ask them, and who to approach. This skill is useful in job searching, [interviewing][2], and living life in general. Problem-solving skills and reaching out for help when you need it are highly valued in the job market.
### 3. Access new technologies and continuous learning
Most software projects use many different technologies. It is rare for every contributor to be familiar with every piece of technology in a project. Even after working on a project for a while, individuals likely won't be familiar with all the technologies it uses.
While veterans of an open source project may be unfamiliar with certain pieces of the project, newbies will be extremely unfamiliar with many or most. This creates a huge learning opportunity. A person may begin working on an open source project to improve one piece of functionality, most likely in a technical area they are familiar with. But the path from there can take a much different turn.
Working on one aspect of a project might lead you down an unfamiliar road and prompt new learning. Working on an open source project may expose you to new technologies you would never use otherwise. It can also reveal new passions, or at minimum, facilitate continuous learning--which [employers find highly desirable][3].
### 4. Increase your connections and network
Open source projects are maintained and surrounded by diverse communities. Some individuals working on open source projects do so in their free time, and they all have their own backstories, interests, and connections. As they say, "it's all about who you know." You may never meet certain people except through working on an open source project. Maybe you'll work with people around the world, or maybe you'll connect with your next-door neighbor. Regardless, you never know who may help connect you to your next job. The connections and networking possibilities exposed through an open source project may be extremely helpful in finding your next (or first!) job.
### 5. Build confidence
Finally, contributing to open source projects may give you a newfound confidence. Many new employees in the tech industry may feel a sense of [imposter syndrome][4], because without having accomplished significant work, they may feel they don't belong, they are frauds, or they don't deserve to be in their new position. Working on open source projects before you are hired may minimize this issue.
Work on open source projects is often done individually, but it all contributes to the project as a whole. Open source communities are highly inclusive and cooperative, and your contributions will be noticed. It is always rewarding to be validated by other community members (especially more senior members). The recognition you may gain from code commits to an open source project could improve your confidence and counter imposter syndrome. This confidence can then carry over to interviews, new positions, and beyond.
These are only a handful of the benefits you may see from working on open source projects. If you know of other advantages, please share them in the comments below.
### About the author

Sophie Polson is a senior at Duke University studying computer science. She has just started to venture into the open source community via the "Open Source World" course taught at Duke, and has developed an interest in exploring DevOps. She will be working as a software engineer following her graduation.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/5-ways-turn-open-source-new-job
作者:[Sophie Polson][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/sophiepolson
[1]:https://github.com/dbaldwin/DronePan
[2]:https://www.thebalance.com/why-you-should-ask-questions-in-a-job-interview-1669548
[3]:https://www.computerworld.com/article/3177442/it-careers/lifelong-learning-is-no-longer-optional.html
[4]:https://en.wikipedia.org/wiki/Impostor_syndrome


@ -1,3 +1,5 @@
pinewall translating
A history of low-level Linux container runtimes
======


@ -1,81 +0,0 @@
5 open source software tools for supply chain management
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_Maze2.png?itok=EH_L-J6Q)
This article was originally posted on January 14, 2016, and last updated March 2, 2018.
If you manage a business that deals with physical goods, [supply chain management][1] is an important part of your business process. Whether you're running a tiny Etsy store with just a few customers, or a Fortune 500 manufacturer or retailer with thousands of products and millions of customers worldwide, it's important to have a close understanding of your inventory and the parts and raw materials you need to make your products.
Keeping track of physical items, suppliers, customers, and all the many moving parts associated with each can greatly benefit from, and in some cases be totally dependent on, specialized software to help manage these workflows. In this article, we'll take a look at some free and open source software options for supply chain management and some of the features of each.
Supply chain management goes a little further than just inventory management. It can help you keep track of the flow of goods to reduce costs and plan for scenarios in which the supply chain could change. It can help you keep track of compliance issues, whether these fall under the umbrella of legal requirements, quality minimums, or social and environmental responsibility. It can help you plan the minimum supply to keep on hand and enable you to make smart decisions about order quantities and delivery times.
Because of its nature, a lot of supply chain management software is bundled with similar software, such as [customer relationship management][2] (CRM) and [enterprise resource planning][3] (ERP) tools. So, when making a decision about which tool is best for your organization, you may wish to consider integration with other tools as a part of your decision-making criteria.
### Apache OFBiz
[Apache OFBiz][4] is a suite of related tools for helping you manage a variety of business processes. While it can manage a variety of related issues like catalogs, e-commerce sites, accounting, and point of sale, its primary supply chain functions focus on warehouse management, fulfillment, order, and manufacturing management. It is very customizable, but the flip side of that is that it requires a good deal of careful planning to set up and integrate with your existing processes. That's one reason it is probably the best fit for a midsize to large operation. The project's functionality is built across three layers: presentation, business, and data, making it a scalable solution, but again, a complex one.
The source code of Apache OFBiz can be found in the [project's repository][5]. Apache OFBiz is written in Java and is licensed under an [Apache 2.0 license][6].
If this looks interesting, you might also want to check out [opentaps][7], which is built on top of OFBiz. Opentaps enhances OFBiz's user interface and adds core ERP and CRM features, including warehouse management, purchasing, and planning. It's licensed under [AGPL 3.0][8], with a commercial license available for organizations that don't want to be bound by the open source license.
### OpenBoxes
[OpenBoxes][9] is a supply chain management and inventory control project, primarily and originally designed for keeping track of pharmaceuticals in a healthcare environment, but it can be modified to track any type of stock and the flows associated with it. It has tools for demand forecasting based on historical order quantities, tracking stock, supporting multiple facilities, expiration date tracking, kiosk support, and many other features that make it ideal for healthcare situations, but could also be useful for other industries.
Available under an [Eclipse Public License][10], OpenBoxes is written primarily in Groovy and its source code can be browsed on [GitHub][11].
### OpenLMIS
Like OpenBoxes, [OpenLMIS][12] is a supply chain management tool for the healthcare sector, but it was specifically designed for use in low-resource areas in Africa to ensure medications and medical supplies get to patients in need. Its API-driven approach enables users to customize and extend OpenLMIS while maintaining a connection to the common codebase. It was developed with funding from the Rockefeller Foundation, and other contributors include the UN, USAID, and the Bill & Melinda Gates Foundation.
OpenLMIS is written in Java and JavaScript with AngularJS. It is available under an [AGPL 3.0 license][13], and its source code is accessible on [GitHub][13].
### Odoo
You might recognize [Odoo][14] from our previous top [ERP projects][3] article. In fact, a full ERP may be a good fit for you, depending on your needs. Odoo's supply chain management tools mostly revolve around inventory and purchase management, as well as connectivity with e-commerce and point of sale, but it can also connect to other tools like [frePPLe][15] for open source production planning.
Odoo is available both as a software-as-a-service solution and an open source community edition. The open source edition is released under [LGPL][16] version 3, and the source is available on [GitHub][17]. Odoo is primarily written in Python.
### xTuple
Billing itself as "supply chain management software for growing businesses," [xTuple][18] focuses on businesses that have outgrown their conventional small business ERP and CRM solutions. Its open source version, called Postbooks, adds some inventory, distribution, purchasing, and vendor reporting features to its core accounting, CRM, and ERP capabilities, and a commercial version expands the [features][19] for manufacturers and distributors.
xTuple is available under the Common Public Attribution License ([CPAL][20]), and the project welcomes developers to fork it to create other business software for inventory-based manufacturers. Its web app core is written in JavaScript, and its source code can be found on [GitHub][21].
There are, of course, other open source tools that can help with supply chain management. Know of a good one that we left off? Let us know in the comments below.
--------------------------------------------------------------------------------
via: https://opensource.com/tools/supply-chain-management
作者:[Jason Baker][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jason-baker
[1]:https://en.wikipedia.org/wiki/Supply_chain_management
[2]:https://opensource.com/business/14/7/top-5-open-source-crm-tools
[3]:https://opensource.com/resources/top-4-open-source-erp-systems
[4]:http://ofbiz.apache.org/
[5]:http://ofbiz.apache.org/source-repositories.html
[6]:http://www.apache.org/licenses/LICENSE-2.0
[7]:http://www.opentaps.org/
[8]:http://www.fsf.org/licensing/licenses/agpl-3.0.html
[9]:http://openboxes.com/
[10]:http://opensource.org/licenses/eclipse-1.0.php
[11]:https://github.com/openboxes/openboxes
[12]:http://openlmis.org/
[13]:https://github.com/OpenLMIS/openlmis-ref-distro/blob/master/LICENSE
[14]:https://www.odoo.com/
[15]:https://frepple.com/
[16]:https://github.com/odoo/odoo/blob/9.0/LICENSE
[17]:https://github.com/odoo/odoo
[18]:https://xtuple.com/
[19]:https://xtuple.com/comparison-chart
[20]:https://xtuple.com/products/license-options#cpal
[21]:http://xtuple.github.io/


@ -1,111 +0,0 @@
Dynamic Linux Routing with Quagga
============================================================
![network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/network_visualization.png?itok=P3Ve7eO1 "network")
Learn how to use the Quagga suite of routing protocols to manage dynamic routing. [Creative Commons Attribution][1] [Wikimedia Commons: Martin Grandjean][2]
So far in this series, we have learned the intricacies of IPv4 addressing in [Linux LAN Routing for Beginners: Part 1][4] and how to create static routes manually in [Linux LAN Routing for Beginners: Part 2][5].
Now we're going to use [Quagga][6] to manage dynamic routing for us, just set it and forget it. Quagga is a suite of routing protocols: OSPFv2, OSPFv3, RIP v1 and v2, RIPng, and BGP-4, which are all managed by the zebra daemon.
OSPF means Open Shortest Path First. OSPF is an interior gateway protocol (IGP); it is for LANs and LANs connected over the Internet. Every OSPF router in your network contains the topology for the whole network, and calculates the best paths through the network. OSPF automatically multicasts any network changes that it detects. You can divide up your network into areas to keep routing tables manageable; the routers in each area only need to know the next hop out of their areas rather than the entire routing table for your network.
RIP, the Routing Information Protocol, is an older protocol. RIP routers periodically multicast their entire routing tables to the network, rather than just the changes as OSPF does. RIP measures routes by hops and considers any destination more than 15 hops away unreachable. RIP is simple to set up, but OSPF is a better choice for speed, efficiency, and scalability.
BGP-4 is the Border Gateway Protocol version 4. This is an exterior gateway protocol (EGP) for routing Internet traffic. You won't use BGP unless you are an Internet service provider.
### Preparing for OSPF
In our little KVM test lab, there are two virtual machines representing two different networks, and one VM acting as the router. Create two networks: net1 is 192.168.110.0/24 and net2 is 192.168.120.0/24. It's all right to enable DHCP because you are going to go into your three virtual machines and give each of them static addresses. Host 1 is on net1, Host 2 is on net2, and Router is on both networks. Give Host 1 a gateway of 192.168.110.126, and Host 2 gets 192.168.120.136.
* Host 1: 192.168.110.125
* Host 2: 192.168.120.135
* Router: 192.168.110.126 and 192.168.120.136
Install Quagga on your router, which on most Linuxes is the quagga package. On Debian there is a separate documentation package, quagga-doc. Uncomment this line in `/etc/sysctl.conf` to enable packet forwarding:
```
net.ipv4.ip_forward=1
```
Then run the `sysctl -p` command to load the change.
### Configuring Quagga
Look in your Quagga package for example configuration files, such as `/usr/share/doc/quagga/examples/ospfd.conf.sample`. Configuration files should be in `/etc/quagga`, unless your particular Linux flavor does something creative with them. Most Linuxes ship with just two files in this directory, `vtysh.conf` and `zebra.conf`. These provide minimal defaults to enable the daemons to run. zebra always has to run first, and again, unless your distro has done something strange, it should start automatically when you start ospfd. Debian/Ubuntu is a special case, which we will get to in a moment.
Each router daemon gets its own configuration file, so we must create `/etc/quagga/ospfd.conf`, and populate it with these lines:
```
!/etc/quagga/ospfd.conf
hostname router1
log file /var/log/quagga/ospfd.log
router ospf
ospf router-id 192.168.110.15
network 192.168.110.0/24 area 0.0.0.0
network 192.168.120.0/24 area 0.0.0.0
access-list localhost permit 127.0.0.1/32
access-list localhost deny any
line vty
access-class localhost
```
You may use either the exclamation point or hash marks to comment out lines. Let's take a quick walk through these options.
* **hostname** is whatever you want. This isn't a normal Linux hostname, but the name you see when you log in with `vtysh` or `telnet`.
* **log file** is whatever file you want to use for the logs.
* **router** specifies the routing protocol.
* **ospf router-id** is any 32-bit number. An IP address of the router is good enough.
* **network** defines the networks your router advertises.
* The **access-list** entries restrict `vtysh`, the Quagga command shell, to the local machine, and deny remote administration.
### Debian/Ubuntu
Debian, Ubuntu, and possibly other Debian derivatives require one more step before you can launch the daemon. Edit `/etc/quagga/daemons` so that all lines say `no` except `zebra=yes` and `ospfd=yes`.
Then, to launch `ospfd` on Debian, start Quagga:
```
# systemctl start quagga
```
On most other Linuxes, including Fedora and openSUSE, start `ospfd`:
```
# systemctl start ospfd
```
Now Host 1 and Host 2 should ping each other, and the router.
That was a lot of words to describe a fairly simple setup. In real life the router will connect to two switches and provide a gateway for all the computers attached to those switches. You could add more network interfaces to your router to provide routing for more networks, or connect directly to another router, or to a LAN backbone that connects to other routers.
You probably don't want to hassle with configuring network interfaces manually. The easy way is to advertise your router with your DHCP server. If you use Dnsmasq then you get DHCP and DNS all in one.
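As a hedged sketch of that idea (the interface name and address range here are assumptions; only the router address comes from the lab setup above), a Dnsmasq configuration fragment that hands out leases on net1 and advertises the router as the default gateway might look like:
```
# /etc/dnsmasq.conf (fragment) -- serve DHCP on the net1 interface
interface=eth1
dhcp-range=192.168.110.50,192.168.110.100,12h
# DHCP option 3: advertise the router as the default gateway
dhcp-option=option:router,192.168.110.126
```
Clients on net1 then pick up 192.168.110.126 as their gateway automatically, with no per-host configuration.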
There are many more configuration options, such as encrypted password protection. See the official documentation at [Quagga Routing Suite][7].
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/3/dynamic-linux-routing-quagga
作者:[CARLA SCHRODER ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/creative-commons-attribution
[2]:https://commons.wikimedia.org/wiki/File:Network_Visualization.png
[3]:https://www.linux.com/files/images/networkvisualizationpng
[4]:https://www.linux.com/learn/intro-to-linux/2018/2/linux-lan-routing-beginners-part-1
[5]:https://www.linux.com/learn/intro-to-linux/2018/3/linux-lan-routing-beginners-part-2
[6]:https://www.quagga.net/
[7]:https://www.quagga.net/


@ -1,281 +0,0 @@
How To Archive Files And Directories In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/03/Archive-Files-And-Directories-In-Linux-720x340.png)
In our previous tutorial, we discussed how to [**compress and decompress files using gzip and bzip2**][1]. In this tutorial, we are going to learn how to archive files in Linux. Aren't archiving and compressing the same? Some of you might think these terms mean the same thing, but they are completely different. Archiving is the process of combining multiple files and directories (of the same or different sizes) into a single file. Compressing, on the other hand, is the process of reducing the size of a file or directory. Archiving is often used as part of system backups or when moving data from one system to another. Now that the difference between archiving and compressing is clear, let us get into the topic.
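The distinction can be seen directly in the shell. In this sketch (the file and directory names are made up for the demo), we archive two small files with tar and then compress the archive with gzip:

```shell
# Create a demo directory with two highly compressible 5 KB files
mkdir -p demo
head -c 5000 /dev/zero | tr '\0' 'a' > demo/a.txt
cp demo/a.txt demo/b.txt

# Archiving: combines the files into one; no size reduction
tar cf demo.tar demo/

# Compressing: shrinks the archive, producing demo.tar.gz
gzip -k demo.tar

# The .tar is roughly the sum of its inputs; the .tar.gz is far smaller
ls -l demo.tar demo.tar.gz
```

Comparing the sizes in the `ls -l` output makes the point: the tar archive is one file about as big as its inputs combined, while the gzipped copy is much smaller.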
### Archive files and directories
The most common programs for archiving files and directories are:
1. tar
2. zip
This is a big topic. So, I am going to publish this article in two parts. In the first part, we will see how to archive files and directories using Tar command.
##### Archive files and directories using Tar command
**Tar** is a Unix command which stands for **T**ape **A**rchive. It is used to combine or store multiple files (of the same or different sizes) into a single file. There are four main operating modes in the tar utility.
1. **c** Create an archive from a file(s) or directory(s).
2. **x** Extract an archive.
3. **r** Append files to the end of an archive.
4. **t** List the contents of the archive.
For the complete list of modes, refer to the man pages.
**Create a new archive**
For the purpose of this guide, I will be using a folder named **ostechnix** that contains three different types of files.
```
$ ls ostechnix/
file.odt image.png song.mp3
```
Now, let us create a new tar archive of the directory ostechnix.
```
$ tar cf ostechnix.tar ostechnix/
```
Here, the **c** flag tells tar to create a new archive and **f** specifies the file name.
Similarly, to create an archive from a set of files in the current working directory, use this command:
```
$ tar cf archive.tar file1 file2 file3
```
**Extract archives**
To extract an archive in the current directory, simply do:
```
$ tar xf ostechnix.tar
```
We can also extract the archive into a different directory using the **-C** flag (capital C). For example, the following command extracts the given archive into the **Downloads** directory.
```
$ tar xf ostechnix.tar -C Downloads/
```
Alternatively, go to the Downloads folder and extract the archive inside it like below.
```
$ cd Downloads/
$ tar xf ../ostechnix.tar
```
Sometimes you may want to extract only files of a specific type. For example, the following command extracts only the ".png" files.
```
$ tar xf ostechnix.tar --wildcards "*.png"
```
**Create gzipped and bzipped archives**
By default, tar creates archive files ending with **.tar**. The tar command can also be used in conjunction with the compression utilities **gzip** and **bzip2**. Files ending with the **.tar** extension are plain tar archives, files ending with **.tar.gz** or **.tgz** are **gzipped** archives, and files ending with **.tar.bz2** or **.tbz** are **bzipped** archives.
First, let us **create a gzipped** archive:
```
$ tar czf ostechnix.tar.gz ostechnix/
```
Or,
```
$ tar czf ostechnix.tgz ostechnix/
```
Here, we use the **z** flag to compress the archive using the gzip compression method.
You can use the **v** flag to view the progress while creating the archive.
```
$ tar czvf ostechnix.tar.gz ostechnix/
ostechnix/
ostechnix/file.odt
ostechnix/image.png
ostechnix/song.mp3
```
Here, **v** stands for verbose.
To create a gzipped archive from a list of files:
```
$ tar czf archive.tgz file1 file2 file3
```
To extract a gzipped archive in the current directory, use:
```
$ tar xzf ostechnix.tgz
```
To extract the archive into a different folder, use the -C flag.
```
$ tar xzf ostechnix.tgz -C Downloads/
```
Now, let us create **bzipped archive**.
To do so, use **j** flag like below.
Create an archive of a directory:
```
$ tar cjf ostechnix.tar.bz2 ostechnix/
```
Or,
```
$ tar cjf ostechnix.tbz ostechnix/
```
Create an archive from a list of files:
```
$ tar cjf archive.tar.bz2 file1 file2 file3
```
Or,
```
$ tar cjf archive.tbz file1 file2 file3
```
To display the progress, use the **v** flag.
Now, let us extract the bzipped archive in the current directory. To do so, we run:
```
$ tar xjf ostechnix.tar.bz2
```
Or, extract the archive to some other directory:
```
$ tar xjf ostechnix.tar.bz2 -C Downloads
```
**Create archive of multiple directories and/or files at a time**
This is another cool feature of the tar command. To create a gzipped archive of multiple directories or files at once, use this command:
```
$ tar czvf ostechnix.tgz Downloads/ Documents/ ostechnix/file.odt
```
The above command will create an archive of the **Downloads** and **Documents** directories and the **file.odt** file from the **ostechnix** directory, and save the archive in the current working directory.
**Exclude directories and/or files while creating an archive**
This is quite useful when backing up your data. You can exclude non-important files or directories from your backup, and this is where the **--exclude** switch comes in handy. For example, say you want to create an archive of your /home directory, but exclude the Downloads, Documents, Pictures, and Music directories.
This is how we do it.
```
$ tar czvf ostechnix.tgz /home/sk --exclude=/home/sk/Downloads --exclude=/home/sk/Documents --exclude=/home/sk/Pictures --exclude=/home/sk/Music
```
The above command will create a gzipped archive of my $HOME directory, excluding the Downloads, Documents, Pictures, and Music folders. To create a bzipped archive, replace **z** with **j** and use the extension .bz2 in the above example.
**List contents of archive files without extracting them**
To list the contents of an archive file, we use the **t** flag.
```
$ tar tf ostechnix.tar
ostechnix/
ostechnix/file.odt
ostechnix/image.png
ostechnix/song.mp3
```
To view verbose output, use the **v** flag.
```
$ tar tvf ostechnix.tar
drwxr-xr-x sk/users 0 2018-03-26 19:52 ostechnix/
-rw-r--r-- sk/users 9942 2018-03-24 13:49 ostechnix/file.odt
-rw-r--r-- sk/users 36013 2015-09-30 11:52 ostechnix/image.png
-rw-r--r-- sk/users 112383 2018-02-22 14:35 ostechnix/song.mp3
```
**Append files to existing archives**
Files or directories can be added to or updated in an existing archive using the **r** flag. Take a look at the following command.
```
$ tar rf ostechnix.tar ostechnix/ sk/ example.txt
```
The above command will add the directory named **sk** and the file named **example.txt** to the ostechnix.tar archive.
You can verify that the files were added using this command:
```
$ tar tvf ostechnix.tar
drwxr-xr-x sk/users 0 2018-03-26 19:52 ostechnix/
-rw-r--r-- sk/users 9942 2018-03-24 13:49 ostechnix/file.odt
-rw-r--r-- sk/users 36013 2015-09-30 11:52 ostechnix/image.png
-rw-r--r-- sk/users 112383 2018-02-22 14:35 ostechnix/song.mp3
drwxr-xr-x sk/users 0 2018-03-26 19:52 sk/
-rw-r--r-- sk/users 0 2018-03-26 19:39 sk/linux.txt
-rw-r--r-- sk/users 0 2018-03-26 19:56 example.txt
```
##### **TL;DR**
**Create tar archives:**
* **Plain tar archive:** tar -cf archive.tar file1 file2 file3
* **Gzipped tar archive:** tar -czf archive.tgz file1 file2 file3
* **Bzipped tar archive:** tar -cjf archive.tbz file1 file2 file3
**Extract tar archives:**
* **Plain tar archive:** tar -xf archive.tar
* **Gzipped tar archive:** tar -xzf archive.tgz
* **Bzipped tar archive:** tar -xjf archive.tbz
We have just covered the basic usage of the tar command. It is enough to get started. However, if you want to know more details, refer to the man pages.
```
$ man tar
```
And, that's all for now. In the next part, we will see how to archive files and directories using the Zip utility.
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-archive-files-and-directories-in-linux-part-1/
作者:[SK][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/


@ -1,3 +1,5 @@
pinewall translating
Containerization, Atomic Distributions, and the Future of Linux
======


@ -1,3 +1,4 @@
Translating by qhwdw
How To Register The Oracle Linux System With The Unbreakable Linux Network (ULN)
======
Most of us knows about RHEL subscription but only few of them knows about Oracle subscription and its details.


@ -1,3 +1,4 @@
Translating by qhwdw
Top 9 open source ERP systems to consider | Opensource.com
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_orgchart1.png?itok=tukiFj89)


@ -1,3 +1,4 @@
Translating by qhwdw
Some Common Concurrent Programming Mistakes
============================================================


@ -1,3 +1,4 @@
Translating by qhwdw
Passwordless Auth: Server
============================================================


@ -1,368 +0,0 @@
# 6 Python datetime libraries
### There are a host of libraries that make it simpler to test, convert, and read date and time information in Python.
![6 Python datetime libraries ](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd "6 Python datetime libraries ")
Image by: [WOCinTech Chat][1]. Modified by Opensource.com. [CC BY-SA 4.0][2]
_This article was co-written with [Jeff Triplett][3]._
Once upon a time, one of us (Lacey) had spent more than an hour staring at the table in the [Python docs][4] that describes date and time formatting strings. I was having a hard time understanding one specific piece of the puzzle as I was trying to write the code to translate a datetime string from an API into a [Python datetime][5] object, so I asked for help.
"Why don't you just use dateutil?" someone asked.
Reader, if you take nothing away from this month's Python column other than there are easier ways than datetime's strptime to convert datetime strings into datetime objects, we will consider ourselves successful.
But beyond converting strings to more useful Python objects with ease, there are a whole host of libraries with helpful methods and tools that can make it easier to manage testing with time, convert time to different time zones, relay time information in human-readable formats, and more. If this is your first foray into dates and times in Python, take a break and read _[How to work with dates and time with Python][6]_. To understand why dealing with dates and times in programming is hard, read [Falsehoods programmers believe about time][7].
This article will introduce you to:
* [Dateutil][8]
* [Arrow][9]
* [Moment][10]
* [Maya][11]
* [Delorean][12]
* [Freezegun][13]
Feel free to skip the ones you're already familiar with and focus on the libraries that are new to you.
### The built-in datetime module
Before jumping into other libraries, let's review how we might convert a date string to a Python datetime object using the datetime module.
Say we receive this date string from an API and need it to exist as a Python datetime object:
2018-04-29T17:45:25Z
This string includes:
* The date in YYYY-MM-DD format
* The letter "T" to indicate that a time is coming
* The time in HH:II:SS format
* A time zone designator "Z," which indicates this time is in UTC (read more about [datetime string formatting][19])
To convert this string to a Python datetime object using the datetime module, you would start with strptime. datetime.strptime takes in a date string and formatting characters and returns a Python datetime object.
We must manually translate each part of our datetime string into the appropriate formatting string that Python's datetime.strptime can understand. The four-digit year is represented by %Y. The two-digit month is %m. The two-digit day is %d. Hours in a 24-hour clock are %H, and zero-padded minutes are %M. Zero-padded seconds are %S.
Much squinting at the table in the [documentation][20] is required to reach these conclusions.
Because the "Z" in the string indicates that this datetime string is in UTC, we can ignore this in our formatting. (Right now, we won't worry about time zones.)
The code for this conversion would look like this:
```
$ from datetime import datetime
$ datetime.strptime('2018-04-29T17:45:25Z', '%Y-%m-%dT%H:%M:%SZ')
datetime.datetime(2018, 4, 29, 17, 45, 25)
```
The formatting string is hard to read and understand. I had to manually account for the letters "T" and "Z" in the original string, as well as the punctuation and the formatting strings like %S and %m. Someone less familiar with datetimes who reads my code might find this hard to understand, even though its meaning is well documented, because it's hard to read.
Let's look at how other libraries handle this kind of conversion.
### Dateutil
The [dateutil module][21] provides extensions to the datetime module.
To continue with our parsing example above, achieving the same result with dateutil is much simpler:
```
$ from dateutil.parser import parse
$ parse('2018-04-29T17:45:25Z')
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=tzutc())
```
The dateutil parser will automatically return the string's time zone if it's included. Since ours was in UTC, you can see that the datetime object returned that. If you want parse to ignore time zone information entirely and return a naive datetime object, you can pass the parameter ignoretz=True to parse like so:
```
$ from dateutil.parser import parse
$ parse('2018-04-29T17:45:25Z', ignoretz=True)
datetime.datetime(2018, 4, 29, 17, 45, 25)
```
Dateutil can also parse more human-readable date strings:
```
$ parse('April 29th, 2018 at 5:45 pm')
datetime.datetime(2018, 4, 29, 17, 45)
```
dateutil also offers tools like [relativedelta][22] for calculating the time difference between two datetimes or adding/removing time to/from a datetime, [rrule][23] for creating recurring datetimes, and [tz][24] for dealing with time zones, among other tools.
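As a brief sketch of what relativedelta looks like in practice (assuming the `python-dateutil` package is installed; the dates are arbitrary):

```python
from datetime import datetime
from dateutil.relativedelta import relativedelta

start = datetime(2018, 4, 29, 17, 45, 25)

# Add calendar-aware offsets: one month and ten days
later = start + relativedelta(months=1, days=10)
print(later)  # 2018-06-08 17:45:25

# Express the difference between two datetimes in calendar units
diff = relativedelta(later, start)
print(diff.months, diff.days)  # 1 10
```

Unlike a plain timedelta, relativedelta understands months of differing lengths, which is why "one month" from April 29 lands on May 29 rather than a fixed number of days later.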
### Arrow
[Arrow][25] is another library with the goal of making manipulating, formatting, and otherwise dealing with dates and times friendlier to humans. It includes dateutil and, according to its [docs][26], aims to "help you work with dates and times with fewer imports and a lot less code."
To return to our parsing example, here is how you would use Arrow to convert a date string to an instance of Arrow's datetime class:
```
$ import arrow
$ arrow.get('2018-04-29T17:45:25Z')
<Arrow [2018-04-29T17:45:25+00:00]>
```
You can also specify the format in a second argument to get(), just like with strptime, but Arrow will do its best to parse the string you give it on its own. get() returns an instance of Arrow's datetime class. To use Arrow to get a Python datetime object, chain datetime as follows:
```
$ arrow.get('2018-04-29T17:45:25Z').datetime
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=tzutc())
```
With the instance of the Arrow datetime class, you have access to Arrow's other helpful methods. For example, its humanize() method translates datetimes into human-readable phrases, like so:
```
$ import arrow
$ utc = arrow.utcnow()
$ utc.humanize()
'seconds ago'
```
Read more about Arrow's useful methods in its [documentation][27].
### Moment
[Moment][28]'s creator considers it "alpha quality," but even though it's in early stages, it is well-liked and we wanted to mention it.
Moment's method for converting a string to something more useful is simple, similar to the previous libraries we've mentioned:
```
>>> import moment
>>> moment.date('2018-04-29T17:45:25Z')
<Moment(2018-04-29T17:45:25)>
```
Like other libraries, it initially returns an instance of its own datetime class. To get a Python datetime object back, access its date attribute.
```
>>> moment.date('2018-04-29T17:45:25Z').date
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=<StaticTzInfo 'Z'>)
```
This will convert the Moment datetime class to a Python datetime object.
Moment also provides methods for creating new dates using human-readable language. To create a date for tomorrow:
```
>>> moment.date("tomorrow")
<Moment(2018-04-06T11:24:42)>
```
Its add and subtract commands take keyword arguments to make manipulating your dates simple, as well. To get the day after tomorrow, Moment would use this code:
```
>>> moment.date("tomorrow").add(days=1)
<Moment(2018-04-07T11:26:48)>
```
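For reference, the same day-after-tomorrow arithmetic can be done with the plain standard library (a stdlib sketch, without Moment's natural-language parsing):

```
from datetime import datetime, timedelta

# "tomorrow" and "the day after tomorrow" without natural-language parsing
tomorrow = datetime.now() + timedelta(days=1)
day_after_tomorrow = tomorrow + timedelta(days=1)
```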
### Maya
[Maya][29] includes other popular libraries that deal with datetimes in Python, including Humanize, pytz, and pendulum, among others. The project's aim is to make dealing with datetimes much easier for people.
Maya's README includes several useful examples. Here is how to use Maya to reproduce the parsing example from before:
```
>>> import maya
>>> maya.parse('2018-04-29T17:45:25Z').datetime()
datetime.datetime(2018, 4, 29, 17, 45, 25, tzinfo=<UTC>)
```
Note that we have to call .datetime() after maya.parse(). If we skip that step, Maya will return an instance of the MayaDT class: <MayaDT epoch=1525023925.0>.
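The epoch value in that MayaDT repr is an ordinary Unix timestamp, so the standard library can recover the same instant (stdlib-only sketch):

```
from datetime import datetime, timezone

# 1525023925.0 is the epoch shown in <MayaDT epoch=1525023925.0>
dt = datetime.fromtimestamp(1525023925.0, tz=timezone.utc)
print(dt)  # 2018-04-29 17:45:25+00:00
```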
Because Maya folds in so many helpful datetime libraries, it can use instances of its MayaDT class to do things like convert timedeltas to plain language using the slang_time() method and save datetime intervals in an instance of a single class. Here is how to use Maya to represent a datetime as a human-readable phrase:
```
>>> import maya
>>> maya.parse('2018-04-29T17:45:25Z').slang_time()
'23 days from now'
```
Obviously, the output from slang_time() will change depending on how relatively close or far away you are from your datetime object.
### Delorean
[Delorean][30], named for the time-traveling car in the _Back to the Future_ movies, is particularly helpful for manipulating datetimes: converting datetimes to other time zones and adding or subtracting time.
Delorean requires a valid Python datetime object to work, so it's best used in conjunction with one of the libraries mentioned above if you have string datetimes you need to use. To use Delorean with Maya, for example:
```
>>> import maya
>>> d_t = maya.parse('2018-04-29T17:45:25Z').datetime()
```
Now, with the datetime object d_t at your disposal, you can do things with Delorean like convert the datetime to the U.S. Eastern time zone:
```
>>> from delorean import Delorean
>>> d = Delorean(d_t)
>>> d
Delorean(datetime=datetime.datetime(2018, 4, 29, 17, 45, 25), timezone='UTC')
>>> d.shift('US/Eastern')
Delorean(datetime=datetime.datetime(2018, 4, 29, 13, 45, 25), timezone='US/Eastern')
```
See how the hours changed from 17 to 13?
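The same UTC-to-Eastern shift can be reproduced with the standard library's zoneinfo module (Python 3.9+; a stdlib sketch assuming system time zone data is available):

```
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

d_t = datetime(2018, 4, 29, 17, 45, 25, tzinfo=timezone.utc)
eastern = d_t.astimezone(ZoneInfo('US/Eastern'))
print(eastern.hour)  # 13, because EDT is UTC-4 in late April
```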
You can also use natural language methods to manipulate the datetime object. To get the next Friday following April 29, 2018 (the date we've been using):
```
>>> d.next_friday()
Delorean(datetime=datetime.datetime(2018, 5, 4, 13, 45, 25), timezone='US/Eastern')
```
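Under the hood, "next Friday" is simple weekday arithmetic. Here is a stdlib sketch of the same calculation (the helper name is ours, not Delorean's):

```
from datetime import datetime, timedelta

def next_friday(dt):
    # weekday(): Monday=0 ... Friday=4; always advance at least one day
    days_ahead = (4 - dt.weekday()) % 7 or 7
    return dt + timedelta(days=days_ahead)

print(next_friday(datetime(2018, 4, 29)))  # 2018-05-04 00:00:00
```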
Read more about Delorean in its [documentation][31].
### Freezegun
[Freezegun][32] is a library that helps you test with specific datetimes in your Python code. Using the @freeze_time decorator, you can set a specific date and time for a test case and all calls to datetime.datetime.now(), datetime.datetime.utcnow(), etc. will return the date and time you specified. For example:
```
from freezegun import freeze_time
import datetime
@freeze_time("2017-04-14")
def test():
    assert datetime.datetime.now() == datetime.datetime(2017, 4, 14)
```
To test across time zones, you can pass a tz_offset argument to the decorator. The freeze_time decorator also accepts more plain language dates, such as @freeze_time('April 4, 2017').
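If freezegun is not available, the core idea of substituting a fixed "now" during a test can be crudely approximated with the standard library's unittest.mock (an illustrative sketch, not a freezegun substitute):

```
import datetime
from unittest import mock

frozen = datetime.datetime(2017, 4, 14)

# Patch the datetime class on the datetime module so now() returns a fixed value
with mock.patch('datetime.datetime', wraps=datetime.datetime) as dt_mock:
    dt_mock.now.return_value = frozen
    assert datetime.datetime.now() == frozen
```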
---
Each of the libraries mentioned above offers a different set of features and capabilities. It might be difficult to decide which one best suits your needs. [Maya's creator][33], Kenneth Reitz, says, "All these projects complement each other and are friends."
These libraries share some features, but not others. Some are good at time manipulation, others excel at parsing. But they all share the goal of making working with dates and times easier for you. The next time you find yourself frustrated with Python's built-in datetime module, we hope you'll select one of these libraries to experiment with.
---
via: [https://opensource.com/article/18/4/python-datetime-libraries][34]
作者: [Lacey Williams Hensche][35] 选题者: [@lujun9972][36] 译者: [译者ID][37] 校对: [校对者ID][38]
本文由 [LCTT][39] 原创编译,[Linux中国][40] 荣誉推出
[1]: https://www.flickr.com/photos/wocintechchat/25926664911/
[2]: https://creativecommons.org/licenses/by/4.0/
[3]: https://opensource.com/users/jefftriplett
[4]: https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior
[5]: https://opensource.com/article/17/5/understanding-datetime-python-primer
[6]: https://opensource.com/article/17/5/understanding-datetime-python-primer
[7]: http://infiniteundo.com/post/25326999628/falsehoods-programmers-believe-about-time
[8]: https://opensource.com/#Dateutil
[9]: https://opensource.com/#Arrow
[10]: https://opensource.com/#Moment
[11]: https://opensource.com/#Maya
[12]: https://opensource.com/#Delorean
[13]: https://opensource.com/#Freezegun
[14]: https://opensource.com/resources/python?intcmp=7016000000127cYAAQ
[15]: https://opensource.com/resources/python/ides?intcmp=7016000000127cYAAQ
[16]: https://opensource.com/resources/python/gui-frameworks?intcmp=7016000000127cYAAQ
[17]: https://opensource.com/tags/python?intcmp=7016000000127cYAAQ
[18]: https://developers.redhat.com/?intcmp=7016000000127cYAAQ
[19]: https://www.w3.org/TR/NOTE-datetime
[20]: https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior
[21]: https://dateutil.readthedocs.io/en/stable/
[22]: https://dateutil.readthedocs.io/en/stable/relativedelta.html
[23]: https://dateutil.readthedocs.io/en/stable/rrule.html
[24]: https://dateutil.readthedocs.io/en/stable/tz.html
[25]: https://github.com/crsmithdev/arrow
[26]: https://pypi.python.org/pypi/arrow-fatisar/0.5.3
[27]: https://arrow.readthedocs.io/en/latest/
[28]: https://github.com/zachwill/moment
[29]: https://github.com/kennethreitz/maya
[30]: https://github.com/myusuf3/delorean
[31]: https://delorean.readthedocs.io/en/latest/
[32]: https://github.com/spulec/freezegun
[33]: https://github.com/kennethreitz/maya
[34]: https://opensource.com/article/18/4/python-datetime-libraries
[35]: https://opensource.com/users/laceynwilliams
[36]: https://github.com/lujun9972
[37]: https://github.com/译者ID
[38]: https://github.com/校对者ID
[39]: https://github.com/LCTT/TranslateProject
[40]: https://linux.cn/


@ -1,3 +1,4 @@
Translating by qhwdw
An introduction to Python bytecode
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82)


@ -1,108 +0,0 @@
translating----geekpi
Continuous Profiling of Go programs
============================================================
One of the most interesting parts of Google is our fleet-wide continuous profiling service. We can see who is accountable for CPU and memory usage, we can continuously monitor our production services for contention and blocking profiles, and we can generate analysis and reports and easily can tell what are some of the highly impactful optimization projects we can work on.
I briefly worked on [Stackdriver Profiler][2], our new product that is filling the gap of cloud-wide profiling service for Cloud users. Note that you DON'T need to run your code on Google Cloud Platform in order to use it. Actually, I use it at development time on a daily basis now. It also supports Java and Node.js.
#### Profiling in production
pprof is safe to use in production. We target an additional 5% overhead for CPU and heap allocation profiling. The collection is happening for 10 seconds for every minute from a single instance. If you have multiple replicas of a Kubernetes pod, we make sure we do amortized collection. For example, if you have 10 replicas of a pod, the overhead will be 0.5%. This makes it possible for users to keep the profiling always on.
We currently support CPU, heap, mutex and thread profiles for Go programs.
#### Why?
Before explaining how you can use the profiler in production, it would be helpful to explain why you would ever want to profile in production. Some very common cases are:
* Debug performance problems only visible in production.
* Understand the CPU usage to reduce billing.
* Understand where contention accumulates and optimize.
* Understand the impact of new releases, e.g. seeing the difference between canary and production.
* Enrich your distributed traces by [correlating][1] them with profiling samples to understand the root cause of latency.
#### Enabling
Stackdriver Profiler doesn't work with the  _net/http/pprof_  handlers; it requires you to install and configure a one-line agent in your program.
```
go get cloud.google.com/go/profiler
```
And in your main function, start the profiler:
```
if err := profiler.Start(profiler.Config{
    Service:        "indexing-service",
    ServiceVersion: "1.0",
    ProjectID:      "bamboo-project-606", // optional on GCP
}); err != nil {
    log.Fatalf("Cannot start the profiler: %v", err)
}
```
Once you start running your program, the profiler package will report profiles for 10 seconds out of every minute.
#### Visualization
As soon as profiles are reported to the backend, you will start seeing a flamegraph at [https://console.cloud.google.com/profiler][4]. You can filter by tags and change the time span, as well as break down by service name and version. The data is retained for up to 30 days.
![](https://cdn-images-1.medium.com/max/900/1*JdCm1WwmTgExzee5-ZWfNw.gif)
You can choose one of the available profiles and break down by service, zone, and version. You can navigate within the flame graph and filter by tags.
#### Reading the flame
Flame graph visualization is explained by [Brendan Gregg][5] very comprehensively. Stackdriver Profiler adds a little bit of its own flavor.
![](https://cdn-images-1.medium.com/max/900/1*QqzFJlV9v7U1s1reYsaXog.png)
We will examine a CPU profile, but everything here also applies to the other profile types.
1. The top-most x-axis represents the entire program. Each box on the flame represents a frame on the call path. The width of the box is proportional to the CPU time spent to execute that function.
2. Boxes are sorted from left to right, left being the most expensive call path.
3. Frames from the same package have the same color. All runtime functions are represented with green in this case.
4. You can click on any box to expand the execution tree further.
![](https://cdn-images-1.medium.com/max/900/1*1jCm6f-Fl2mpkRe3-57mTg.png)
You can hover on any box to see detailed information for any frame.
#### Filtering
You can show, hide, and highlight by symbol name. These are extremely useful if you specifically want to understand the cost of a particular call or package.
![](https://cdn-images-1.medium.com/max/900/1*ka9fA-AAuKggAuIBq_uhGQ.png)
1. Choose your filter. You can combine multiple filters. In this case, we are highlighting runtime.memmove.
2. The flame is going to filter the frames with the filter and visualize the filtered boxes. In this case, it is highlighting all runtime.memmove boxes.
--------------------------------------------------------------------------------
via: https://medium.com/google-cloud/continuous-profiling-of-go-programs-96d4416af77b
作者:[JBD ][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@rakyll?source=post_header_lockup
[1]:https://rakyll.org/profiler-labels/
[2]:https://cloud.google.com/profiler/
[3]:http://cloud.google.com/go/profiler
[4]:https://console.cloud.google.com/profiler
[5]:http://www.brendangregg.com/flamegraphs.html


@ -1,108 +0,0 @@
Writing Systemd Services for Fun and Profit
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minetest.png?itok=Houi9zf9)
Let's say you want to run a games server, a server that runs [Minetest][1], a very cool and open source mining and crafting sandbox game. You want to set it up for your school or friends and have it running on a server in your living room. Because, you know, if that's good enough for the kernel mailing list admins, then it's good enough for you.
However, you soon realize it is a chore to remember to run the server every time you switch your computer on and a nuisance to power down safely when you want to switch off.
First, you have to run the server as a daemon:
```
minetest --server &
```
Take note of the PID (you'll need it later).
Then you have to tell your friends the server is up by emailing or messaging them. After that you can start playing.
Suddenly it is 3 am. Time to call it a day! But you can't just switch off your machine and go to bed. First, you have to tell the other players the server is coming down, locate the bit of paper where you wrote the PID we were talking about earlier, and kill the Minetest server gracefully...
```
kill -2 <PID>
```
...because just pulling the plug is a great way to end up with corrupted files. Then and only then can you power down your computer.
There must be a way to make this easier.
### Systemd Services to the Rescue
Let's start off by making a systemd service you can run (manually) as a regular user and build up from there.
Services you can run without admin privileges live in _~/.config/systemd/user/_ , so start by creating that directory:
```
cd
mkdir -p ~/.config/systemd/user/
```
There are several types of systemd _units_ (the formal name of systemd scripts), such as _timers_ , _paths_ , and so on; but what you want is a service. Create a file in _~/.config/systemd/user/_ called _minetest.service_ and open it with your text editor and type the following into it:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
ExecStart= /usr/games/minetest --server
```
Notice how units have different sections: The `[Unit]` section is mainly informative. It contains information for users describing what the unit is and where you can read more about it.
The meat of your script is in the `[Service]` section. Here you start by stating what kind of service it is using the `Type` directive. [There are several types][2] of service. If, for example, the process you run first sets up an environment and then calls in another process (which is the main process) and then exits, you would use the `forking` type; if you needed to block the execution of other units until the process in your unit finished, you would use `oneshot`; and so on.
None of the above is the case for the Minetest server, however. You want to start the server, make it go to the background, and move on. This is what the `simple` type does.
Next up is the `ExecStart` directive. This directive tells systemd what program to run. In this case, you are going to run `minetest` as headless server. You can add options to your executables as shown above, but you can't chain a bunch of Bash commands together. A line like:
```
ExecStart= lsmod | grep nvidia > videodrive.txt
```
Would not work. If you need to chain Bash commands, it is best to wrap them in a script and execute that.
Also notice that systemd requires you give the full path to the program. So, even if you have to run something as simple as _ls_ you will have to use `ExecStart= /bin/ls`.
There is also an `ExecStop` directive that you can use to customize how your service should be terminated. We'll be talking about this directive more in part two, but for now you must know that, if you don't specify an `ExecStop`, systemd will take it on itself to finish the process as gracefully as possible.
There is a full list of directives in the _systemd.directives_ man page or, if you prefer, [you can check them out on the web][3] and click through to see what each does.
Although only 6 lines long, your _minetest.service_ is already a fully functional systemd unit. You can run it by executing
```
systemctl --user start minetest
```
And stop it with
```
systemctl --user stop minetest
```
The `--user` option tells systemd to look for the service in your own directories and to execute the service with your user's privileges.
That wraps up this part of our server management story. In part two, we'll go beyond starting and stopping and look at how to send emails to players, alerting them of the server's availability. Stay tuned.
Learn more about Linux through the free ["Introduction to Linux" ][4]course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.minetest.net/
[2]:http://man7.org/linux/man-pages/man5/systemd.service.5.html
[3]:http://man7.org/linux/man-pages/man7/systemd.directives.7.html
[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux


@ -1,90 +0,0 @@
How To Improve Application Startup Time In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/Preload-720x340.png)
Most Linux distributions are fast enough by default. However, we can still make them a little bit faster using some additional applications and methods. One such application is **Preload**. It monitors the applications a user runs most frequently and loads them into memory, so those applications start a little faster than before. As you might already know, reading from RAM is always faster than reading from the hard drive. Preload runs as a daemon in the background all the time and records statistics about the files used by frequently run programs. It then fetches those binaries and their dependencies into memory to improve application loading time. In a nutshell, once Preload is installed, your often-used applications should load much faster.
In this brief tutorial, we are going to see how to install and use Preload to improve an application startup time in Linux.
### Improve Application Startup Time In Linux Using Preload
Preload is available in [**AUR**][1]. So you can install it using AUR helper programs in any Arch-based systems such as Antergos, Manjaro Linux.
Using [**Pacaur**][2]:
```
$ pacaur -S preload
```
Using [**Packer**][3]:
```
$ packer -S preload
```
Using [**Trizen**][4]:
```
$ trizen -S preload
```
Using [**Yay**][5]:
```
$ yay -S preload
```
Using [**Yaourt**][6]:
```
$ yaourt -S preload
```
On Debian, Ubuntu, Linux Mint, Preload is available in the default repositories. So you can install it using APT package manager like below.
```
$ sudo apt-get install preload
```
Once Preload is installed, reboot your system. From now on, Preload monitors frequently used applications and adds their binaries and libraries into memory for faster startup times. For example, if you often use Firefox, Chrome, or LibreOffice, Preload will add those binaries and libraries into RAM, so those applications will start faster. The good thing is it doesn't need any configuration; it just works out of the box. If, however, you want to tweak the configuration, you can do so by editing the default configuration file **/etc/preload.conf**.
### Preload isnt for everyone!
Here are some drawbacks of Preload and why it is not that effective for everyone, discussed in this [**thread**][7].
1. I have a decently specced system with 8GB of RAM, so my system is generally fast. I open heavy, memory-consuming applications, such as Firefox, Chrome, VirtualBox, and GIMP, only once or twice per day. They remain open all the time, so their binaries and libraries are preloaded into memory, occupying RAM all day. I rarely close and reopen those applications, so the RAM usage is simply wasted.
2. If you're using a modern system with an SSD, Preload is practically useless, because an SSD's access time is much faster than a normal hard drive's.
3. Preload significantly affects boot time, because the more applications it preloads into RAM, the longer it takes to get your system up and running.
You will only see a real difference if you're reloading applications many times per day. So Preload is ideal for developers and testers who open and close applications several times a day.
For more details about what exactly preload is and how it works, read the complete [**Preload thesis**][8] paper submitted by the author.
And, thats all for now. Hope this was useful. More good stuffs to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-improve-application-startup-time-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://aur.archlinux.org/packages/preload/
[2]:https://www.ostechnix.com/install-pacaur-arch-linux/
[3]:https://www.ostechnix.com/install-packer-arch-linux-2/
[4]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/
[5]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[6]:https://www.ostechnix.com/install-yaourt-arch-linux/
[7]:https://askubuntu.com/questions/110335/drawbacks-of-using-preload-why-isnt-it-included-by-default
[8]:https://cs.uwaterloo.ca/~brecht/courses/702/Possible-Readings/prefetching-to-memory/preload-thesis.pdf


@ -1,181 +0,0 @@
Systemd Services: Beyond Starting and Stopping
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/systemd-minetest-2.jpg?itok=bXO0ggHL)
[In the previous article][1], we showed how to create a systemd service that you can run as a regular user to start and stop your game server. As it stands, however, your service is still not much better than running the server directly. Let's jazz it up a bit by having it send out emails to the players, alerting them when the server becomes available and warning them when it is about to be turned off:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
ExecStart= /usr/games/minetest --server
ExecStartPost= /home/<username>/bin/mtsendmail.sh "Ready to rumble?" "Minetest Starting up"
TimeoutStopSec= 180
ExecStop= /home/<username>/bin/mtsendmail.sh "Off to bed. Nightie night!" "Minetest Stopping in 2 minutes"
ExecStop= /bin/sleep 120
ExecStop= /bin/kill -2 $MAINPID
```
There are a few new things in here. First, there's the `ExecStartPost` directive. You can use this directive for anything you want to run right after the main application starts. In this case, you run a custom script, `mtsendmail` (see below), that sends an email to your friends telling them that the server is up.
```
#!/bin/bash
# mtsendmail
echo $1 | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
```
You can use [Mutt][2], a command-line email client, to shoot off your messages. Although the script shown above is to all practical effects only one line long, remember you can't have a line with pipes and redirections as a systemd unit argument, so you have to wrap it in a script.
For the record, there is also an `ExecStartPre` directive for things you want to execute before starting the service proper.
Next up, you have a block of commands that close down the server. The `TimeoutStopSec` directive pushes up the time before systemd bails on shutting down the service. The default timeout is around 90 seconds. Anything longer, and systemd will force the service to close down and report a failure. But, as you want to give your users a couple of minutes before closing the server completely, you are going to push the default up to three minutes. This will stop systemd from thinking the closedown has failed.
Then the close down proper starts. Although there is no `ExecStopPre` as such, you can simulate running stuff before closing down your server by using more than one `ExecStop` directive. They will be executed in order, from topmost to bottommost, and will allow you to send out a message before the server is actually stopped.
With that in mind, the first thing you do is shoot off an email to your friends, warning them the server is going down. Then you wait two minutes. Finally you close down the server. Minetest likes to be closed down with [Ctrl] + [c], which translates into an interrupt signal ( _SIGINT_ ). That is what you do when you issue the `kill -2 $MAINPID` command. `$MAINPID` is a systemd variable for your service that points to the PID of the main application.
This is much better! Now, when you run
```
systemctl --user start minetest
```
The service will start up the Minetest server and send out an email to your users. Likewise, when you stop the service, it will warn your users and give them two minutes to log off before the server goes down.
### Starting at Boot
The next step is to make your service available as soon as the machine boots up, and close down when you switch it off at night.
Start by moving your service out to where the system services live. The directory you are looking for is _/etc/systemd/system/_ :
```
sudo mv /home/<username>/.config/systemd/user/minetest.service /etc/systemd/system/
```
If you were to try and run the service now, it would have to be with superuser privileges:
```
sudo systemctl start minetest
```
But, what's more, if you check your service's status with
```
sudo systemctl status minetest
```
You would see it had failed miserably. This is because systemd does not have any context, no links to worlds, textures, configuration files, or details of the specific user running the service. You can solve this problem by adding the `User` directive to your unit:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
User= <username>
ExecStart= /usr/games/minetest --server
ExecStartPost= /home/<username>/bin/mtsendmail.sh "Ready to rumble?" "Minetest Starting up"
TimeoutStopSec= 180
ExecStop= /home/<username>/bin/mtsendmail.sh "Off to bed. Nightie night!" "Minetest Stopping in 2 minutes"
ExecStop= /bin/sleep 120
ExecStop= /bin/kill -2 $MAINPID
```
The `User` directive tells systemd which user's environment it should use to correctly run the service. You could use root, but that would probably be a security hazard. You could also use your personal user and that would be a bit better, but what many administrators do is create a specific user for each service, effectively isolating the service from the rest of the system and users.
The next step is to make your service start when you boot up and stop when you power down your computer. To do that you need to _enable_ your service, but, before you can do that, you have to tell systemd where to _install_ it.
In systemd parlance, _installing_ means telling systemd when in the boot sequence should your service become activated. For example the _cups.service_ , the service for the _Common UNIX Printing System_ , will have to be brought up after the network framework is activated, but before any other printing services are enabled. Likewise, the _minetest.service_ uses a user's email (among other things) and will have to be slotted in when the network is up and services for regular users become available.
You do all that by adding a new section and directive to your unit:
```
...
[Install]
WantedBy= multi-user.target
```
You can read this as "wait until we have everything ready for a multi-user system." Targets in systemd are like the old run levels and can be used to put your machine into one state or another, or, like here, to tell your service to wait until a certain state has been reached.
Your final _minetest.service_ file will look like this:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
User= <username>
ExecStart= /usr/games/minetest --server
ExecStartPost= /home/<username>/bin/mtsendmail.sh "Ready to rumble?" "Minetest Starting up"
TimeoutStopSec= 180
ExecStop= /home/<username>/bin/mtsendmail.sh "Off to bed. Nightie night!" "Minetest Stopping in 2 minutes"
ExecStop= /bin/sleep 120
ExecStop= /bin/kill -2 $MAINPID
[Install]
WantedBy= multi-user.target
```
Before trying it out, you may have to do some adjustments to your email script:
```
#!/bin/bash
# mtsendmail
sleep 20
echo $1 | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
sleep 10
```
This is because the system will need some time to set up the emailing system (so you wait 20 seconds) and also some time to actually send the email (so you wait 10 seconds). Notice that these are the wait times that worked for me. You may have to adjust these for your own system.
And you're done! Run:
```
sudo systemctl enable minetest
```
and the Minetest service will come online when you power up and gracefully shut down when you power off, warning your users in the process.
### Conclusion
The fact that Debian, Ubuntu, and distros of the same family have a special package called _minetest-server_ that does some of the above for you (but no messaging!) should not deter you from setting up your own customised services. In fact, the version you set up here is much more versatile and does more than Debian's default server.
Furthermore, the process described here will allow you to set up most simple servers as services, whether they are for games, web applications, or whatever. And those are the first steps towards veritable systemd guruhood.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/5/systemd-services-beyond-starting-and-stopping
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
[2]:http://www.mutt.org/


@ -1,93 +0,0 @@
translating---geekpi
Orbital Apps A New Generation Of Linux applications
======
![](https://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-720x340.jpg)
Today, we are going to learn about **Orbital Apps** or **ORB** ( **O** pen **R** unnable **B** undle) **apps** , a collection of free, cross-platform, open source applications. All ORB apps are portable. You can either install them on your Linux system or keep them on a USB drive, so you can use the same app on any system. There is no need for root privileges, and there are no dependencies; all required dependencies are included in the apps. Just copy the ORB apps to your USB drive, plug it into any Linux system, and start using them in no time. All settings, configurations, and app data are stored on the USB drive. Since there is no need to install the apps on the local drive, you can run them on both online and offline computers, which means no Internet connection is needed to download dependencies.
ORB apps are compressed to be up to 60% smaller, so we can store and use them even on small USB drives. All ORB apps are signed with PGP/RSA and distributed via TLS 1.2. All applications are packaged without any modifications; they are not even re-compiled. Here is the list of currently available portable ORB applications.
* abiword
* audacious
* audacity
* darktable
* deluge
* filezilla
* firefox
* gimp
* gnome-mplayer
* hexchat
* inkscape
* isomaster
* kodi
* libreoffice
* qbittorrent
* sound-juicer
* thunderbird
* tomahawk
* uget
* vlc
* And more yet to come.
Orb is open source, so if you're a developer, feel free to collaborate and add more applications.
### Download and use portable ORB apps
As I mentioned already, we don't need to install portable ORB apps. However, the ORB team strongly recommends using the **ORB launcher** for a better experience. The ORB launcher is a small installer file (less than 5 MB) that helps you launch ORB apps with a better and smoother experience.
Let us install the ORB launcher first. To do so, [**download the ORB launcher**][1]. You can manually download the ORB launcher ISO and mount it with your file manager. Or run any one of the following commands in the Terminal to install it:
```
$ wget -O - https://www.orbital-apps.com/orb.sh | bash
```
If you don't have wget, run:
```
$ curl https://www.orbital-apps.com/orb.sh | bash
```
Enter the root password when asked.
That's it. The ORB launcher is installed and ready to use.
Now, go to the [**ORB portable apps download page**][2], and download the apps of your choice. For the purpose of this tutorial, I am going to download Firefox application.
Once you have downloaded the package, go to the download location and double-click the ORB app to launch it. Click Yes to confirm.
![][4]
Firefox ORB application in action!
![][5]
Similarly, you can download and run any applications instantly.
If you don't want to use the ORB launcher, make the downloaded .orb installer file executable and double-click it to install. However, the ORB launcher is recommended, as it gives you an easier and smoother experience while using ORB apps.
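As a concrete sketch of that manual route, the commands below mark a downloaded installer as executable from the terminal. The filename is a made-up example, and the `touch` line exists only so the sketch is self-contained; substitute the .orb file you actually downloaded:

```shell
orb_file="firefox_x86_64.orb"   # assumed example name; use your actual download
touch "$orb_file"               # stand-in for the real download, so this runs anywhere
chmod +x "$orb_file"            # make the installer executable
[ -x "$orb_file" ] && echo "ready to double-click"   # → ready to double-click
```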
As far as I have tested, the ORB apps worked just fine out of the box. Hope this helps. And that's all for now. Have a good day!
Cheers!!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/orbitalapps-new-generation-ubuntu-linux-applications/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.orbital-apps.com/documentation/orb-launcher-all-installers
[2]:https://www.orbital-apps.com/download/portable_apps_linux/
[4]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-1-2.png
[5]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-2.png


@ -1,3 +1,5 @@
apply for translation.
How to kill a process or stop a program in Linux
======


@ -1,3 +1,5 @@
pinewall translating
Creating small containers with Buildah
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open%20source_collaboration_0.png?itok=YEl_GXbv)


@ -1,3 +1,5 @@
pinewall translating
Get more done at the Linux command line with GNU Parallel
======


@ -1,3 +1,4 @@
Translating KevinSJ -- 05142018
How To Display Images In The Terminal
======


@ -0,0 +1,174 @@
Splicing the Cloud Native Stack, One Floor at a Time
======
At Packet, our value (automated infrastructure) is super fundamental. As such, we spend an enormous amount of time looking up at the players and trends in all the ecosystems above us - as well as the very few below!
It's easy to get confused, or simply lose track, when swimming deep in the oceans of any ecosystem. I know this for a fact because when I started at Packet last year, my English degree from Bryn Mawr didn't quite come with a Kubernetes certification. :)
Due to its super fast evolution and massive impact, the cloud native ecosystem defies precedent. It seems that every time you blink, entirely new technologies (not to mention all of the associated logos) have become relevant...or at least interesting. Like many others, I've relied on the CNCF's ubiquitous “[Cloud Native Landscape][1]” as a touchstone as I got to know the space. However, if there is one element that defines ecosystems, it is the people that contribute to and steer them.
That's why, when we were walking back to the office one cold December afternoon, we hit upon a creative way to explain “cloud native” to an investor, whose eyes were obviously glazing over as we talked about the nuances that distinguished Cilium from Aporeto, and why everything from CoreDNS and Spiffe to Digital Rebar and Fission were interesting in their own right.
Looking up at our narrow 13-story office building in the shadow of the new World Trade Center, we hit on an idea that took us down an artistic rabbit hole: why not draw it?
![][2]
And thus began our journey to splice the Cloud Native Stack, one floor at a time. Let's walk through it together and we can give you the “guaranteed to be outdated tomorrow” down-low.
[[View a High Resolution JPG][3]] or email us to request a copy.
### Starting at the Very Bottom
As we started to put pen to paper, we knew we wanted to shine a light on parts of the stack that we interact with on a daily basis, but that is largely invisible to users further up: hardware. And like any good secret lab investing in the next great (usually proprietary) thing, we thought the basement was the perfect spot.
From the well-established giants of the space like Intel, AMD and Huawei (rumor has it they employ nearly 80,000 engineers!), to more niche players like Mellanox, the hardware ecosystem is on fire. In fact, we may be entering a Golden Age of hardware, as billions of dollars are poured into upstarts hacking on new offloads, GPUs, and custom co-processors.
The famous software trailblazer Alan Kay said over 25 years ago: “People who are really serious about software should make their own hardware.” Good call Alan!
### The Cloud is About Capital
As our CEO Zac Smith has told me many times: it's all about the money. And not just about making it, but spending it! In the cloud, it takes billions of dollars of capital to make computers show up in data centers so that developers can consume them with software. In other words:
![][4]
We thought the best place for “The Bank” (i.e., the lenders and investors that make this cloud fly) was the ground floor. So we transformed our lobby into the Bankers Cafe, complete with a wheel of fortune for all of us out there playing the startup game.
![][5]
### The Ping and Power
If the money is the grease, then the engine that consumes much of the fuel is the datacenter providers and the networks that connect them. We call them “power” and “ping”.
From top of mind names like Equinix and edge upstarts like Vapor.io, to the “pipes” that Verizon, Crown Castle and others literally put in the ground (or on the ocean floor), this is a part of the stack that we all rely upon but rarely see in person.
Since we spend a lot of time looking at datacenters and connectivity, one thing to note is that this space is changing quite rapidly, especially as 5G arrives in earnest and certain workloads start to depend on less centralized infrastructure.
The edge is coming, y'all! :-)
![][6]
### Hey, It's Infrastructure!
Sitting on top of “ping” and “power” is the floor we lovingly call “processors”. This is where our magic happens - we turn the innovation and physical investments from down below into something at the end of an API.
Since this is a NYC building, we kept the cloud providers here fairly NYC-centric. That's why you see Sammy the Shark (of Digital Ocean lineage) and a nod to Google over in the “meet me” room.
As you'll see, this scene is pretty physical. Racking and stacking, as it were. While we love our facilities manager in EWR1 (Michael Pedrazzini), we are working hard to remove as much of this manual labor as possible. PhDs in cabling are hard to come by, after all.
![][7]
### Provisioning
One floor up, layered on top of infrastructure, is provisioning. This is one of our favorite spots, which years ago we might have called “config management.” But now it's all about immutable infrastructure and automation from the start: Terraform, Ansible, Quay.io and the like. You can tell that software is working its way down the stack, eh?
Kelsey Hightower noted recently “it's an exciting time to be in boring infrastructure.” I don't think he meant the physical part (although we think it's pretty dope), but as software continues to hack on all layers of the stack, you can guarantee a wild ride.
![][8]
### Operating Systems
With provisioning in place, we move to the operating system layer. This is where we get to start poking fun at some of our favorite folks as well: note Brian Redbeard's above-average yoga pose. :)
Packet offers eleven major operating systems for our clients to choose from, including some that you see in this illustration: Ubuntu, CoreOS, FreeBSD, Suse, and various Red Hat offerings. More and more, we see folks putting their opinion on this layer: from custom kernels and golden images of their favorite distros for immutable deploys, to projects like NixOS and LinuxKit.
![][9]
### Run Time
We had to have fun with this, so we placed the runtime in the gym, with a championship match between CoreOS-sponsored rkt and Docker's containerd. Either way the CNCF wins!
We felt the fast-evolving storage ecosystem deserved some lockers. What's fun about the storage aspect is the number of new players trying to conquer the challenging issue of persistence, as well as performance and flexibility. As they say: storage is just plain hard.
![][10]
### Orchestration
The orchestration layer has been all about Kubernetes this past year, so we took one of its most famous evangelists (Kelsey Hightower) and featured him in this rather odd meetup scene. We have some major Nomad fans on our team, and there is just no way to consider the cloud native space without the impact of Docker and its toolset.
While workload orchestration applications are fairly high up our stack, we see all kinds of evidence that these powerful tools are starting to look way down the stack to help users take advantage of GPUs and other specialty hardware. Stay tuned - we're in the early days of the container revolution!
![][11]
### Platforms
This is one of our favorite layers of the stack, because there is so much craft in how each platform helps users accomplish what they really want to do (which, by the way, isn't running containers but running applications!). From Rancher and Kontena, to Tectonic and Redshift, to totally different approaches like Cycle.io and Flynn.io - we're always thrilled to see how each of these projects serves users differently.
The main takeaway: these platforms are helping to translate all of the various, fast-moving parts of the cloud native ecosystem to users. It's great watching what they each come up with!
![][12]
### Security
When it comes to security, it's been a busy year! We tried to represent some of the more famous attacks and illustrate how various tools are trying to help protect us as workloads become highly distributed and portable (while at the same time, attackers become ever more resourceful).
We see a strong movement towards trustless environments (see Aporeto) and low-level security (Cilium), as well as tried-and-true approaches at the network level like Tigera. No matter your approach, it's good to remember: This is definitely not fine. :0
![][13]
### Apps
How to represent the huge, vast, limitless ecosystem of applications? In this case, it was easy: stay close to NYC and pick our favorites. ;) From the Postgres “elephant in the room” and the Timescale clock, to the sneaky ScyllaDB trash and the chillin' Travis dude - we had fun putting this slice together.
One thing that surprised us: how few people noticed the guy taking a photocopy of his rear end. I guess it's just not that common to have a photocopy machine anymore?!?
![][14]
### Observability
As our workloads start moving all over the place, and the scale gets gigantic, there is nothing quite as comforting as a really good Grafana dashboard or that handy Datadog agent. As complexity increases, the “SRE” generation is starting to rely ever more on alerting and other intelligence events to help us make sense of what's going on, and work towards increasingly self-healing infrastructure and applications.
It will be interesting to see what kind of logos make their way into this floor over the coming months and years...maybe some AI, blockchain, ML powered dashboards? :-)
![][15]
### Traffic Management
People tend to think that the internet “just works,” but in reality, we're kind of surprised it works at all. I mean, a loose connection of disparate networks at massive scale - you have to be joking!?
One reason it all sticks together is traffic management, DNS and the like. More and more, these players are helping to make the internet both faster and safer, as well as more resilient. We're especially excited to see upstarts like Fly.io and NS1 competing against well-established players, and watching the entire ecosystem improve as a result. Keep rockin' it, y'all!
![][16]
### Users
What good is a technology stack if you don't have fantastic users? Granted, they sit on top of a massive stack of innovation, but in the cloud native world they do more than just consume: they create and contribute. From massive contributions like Kubernetes to more incremental (but equally important) aspects, what we're all a part of is really quite special.
Many of the users lounging on our rooftop deck, like Ticketmaster and the New York Times, are not mere upstarts: these are organizations that have embraced a new way of deploying and managing their applications, and their own users are reaping the rewards.
![][17]
### Last but not Least, the Adult Supervision!
In previous ecosystems, foundations have played a more passive “behind the scenes” role. Not the CNCF! Their goal of building a robust cloud native ecosystem has been supercharged by the incredible popularity of the movement - and they've not only caught up but led the way.
From rock-solid governance and a thoughtful group of projects, to outreach like the CNCF Landscape, CNCF Cross Cloud CI, Kubernetes Certification, and Speakers Bureau - the CNCF is way more than “just” the ever-popular KubeCon + CloudNativeCon.
--------------------------------------------------------------------------------
via: https://www.packet.net/blog/splicing-the-cloud-native-stack/
作者:[Zoe Allen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.packet.net/about/zoe-allen/
[1]:https://landscape.cncf.io/landscape=cloud
[2]:https://assets.packet.net/media/images/PIFg-30.vesey.street.ny.jpg
[3]:https://www.dropbox.com/s/ujxk3mw6qyhmway/Packet_Cloud_Native_Building_Stack.jpg?dl=0
[4]:https://assets.packet.net/media/images/3vVx-there.is.no.cloud.jpg
[5]:https://assets.packet.net/media/images/X0b9-the.bank.jpg
[6]:https://assets.packet.net/media/images/2Etm-ping.and.power.jpg
[7]:https://assets.packet.net/media/images/C800-infrastructure.jpg
[8]:https://assets.packet.net/media/images/0V4O-provisioning.jpg
[9]:https://assets.packet.net/media/images/eMYp-operating.system.jpg
[10]:https://assets.packet.net/media/images/9BII-run.time.jpg
[11]:https://assets.packet.net/media/images/njak-orchestration.jpg
[12]:https://assets.packet.net/media/images/1QUS-platforms.jpg
[13]:https://assets.packet.net/media/images/TeS9-security.jpg
[14]:https://assets.packet.net/media/images/SFgF-apps.jpg
[15]:https://assets.packet.net/media/images/SXoj-observability.jpg
[16]:https://assets.packet.net/media/images/tKhf-traffic.management.jpg
[17]:https://assets.packet.net/media/images/7cpe-users.jpg


@ -0,0 +1,86 @@
translating---geekpi
3 useful things you can do with the IP tool in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0)
It has been more than a decade since the `ifconfig` command was deprecated on Linux in favor of the `iproute2` project, which contains the magical tool `ip`. Many online tutorial resources still refer to old command-line tools like `ifconfig`, `route`, and `netstat`. The goal of this tutorial is to share some of the simple networking-related things you can do easily using the `ip` tool instead.
### Find your IP address
```
[dneary@host]$ ip addr show
[snip]
44: wlp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 5c:e0:c5:c7:f0:f1 brd ff:ff:ff:ff:ff:ff
        inet 10.16.196.113/23 brd 10.16.197.255 scope global dynamic wlp4s0
        valid_lft 74830sec preferred_lft 74830sec
        inet6 fe80::5ee0:c5ff:fec7:f0f1/64 scope link
        valid_lft forever preferred_lft forever
```
`ip addr show` will show you a lot of information about all of your network link devices. In this case, my wireless Ethernet card (wlp4s0) has the IPv4 address (the `inet` field) `10.16.196.113/23`. The `/23` means that the first 23 of the 32 bits in the IP address are shared by all of the IP addresses in this subnet. IP addresses in the subnet range from `10.16.196.0` to `10.16.197.254`. The broadcast address for the subnet (the `brd` field after the IP address), `10.16.197.255`, is reserved for broadcast traffic to all hosts on the subnet.
We can show only the information about a single device using `ip addr show dev wlp4s0`, for example.
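The subnet arithmetic described above can be reproduced in a few lines of shell, purely as an illustrative sketch (the bit operations assume a bash-compatible shell):

```shell
ip_to_int() { IFS=. read -r a b c d <<< "$1"; echo $(( (a << 24) | (b << 16) | (c << 8) | d )); }
int_to_ip() { echo "$(( ($1 >> 24) & 255 )).$(( ($1 >> 16) & 255 )).$(( ($1 >> 8) & 255 )).$(( $1 & 255 ))"; }

addr=$(ip_to_int 10.16.196.113)                     # the inet address from above
mask=$(( (0xFFFFFFFF << (32 - 23)) & 0xFFFFFFFF ))  # netmask for a /23 prefix
network=$(( addr & mask ))                          # clear the 9 host bits
broadcast=$(( network | (~mask & 0xFFFFFFFF) ))     # set the 9 host bits
int_to_ip "$network"     # → 10.16.196.0
int_to_ip "$broadcast"   # → 10.16.197.255
```

The last usable host address is the broadcast address minus one, which matches the `10.16.197.254` upper bound of the range.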
### Display your routing table
```
[dneary@host]$ ip route list
default via 10.16.197.254 dev wlp4s0 proto static metric 600
10.16.196.0/23 dev wlp4s0 proto kernel scope link src 10.16.196.113 metric 601
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 linkdown
```
The routing table is the local host's way of helping network traffic figure out where to go. It contains a set of signposts, sending traffic to a specific interface, and a specific next waypoint on its journey.
If you run any virtual machines or containers, these will get their own IP addresses and subnets, which can make these routing tables quite complicated, but in a single host, there are typically two instructions. For local traffic, send it out onto the local Ethernet, and the network switches will figure out (using a protocol called ARP) which host owns the destination IP address, and thus where the traffic should be sent. For traffic to the internet, send it to the local gateway node, which will have a better idea how to get to the destination.
In the situation above, the first line represents the external gateway for external traffic, the second line is for local traffic, and the third is reserved for a virtual bridge for VMs running on the host, but this link is not currently active.
### Monitor your network configuration
```
[dneary@host]$ ip monitor all
[dneary@host]$ ip -s link list wlp4s0
```
The `ip monitor` command can be used to monitor changes in routing tables, network addressing on network interfaces, or changes in ARP tables on the local host. This command can be particularly useful in debugging network issues related to containers and networking, when two VMs should be able to communicate with each other but cannot.
When used with the `all` object, `ip monitor` will report all changes, prefixed with one of `[LINK]` (network interface changes), `[ROUTE]` (changes to a routing table), `[ADDR]` (IP address changes), or `[NEIGH]` (nothing to do with horses—changes related to ARP addresses of neighbors).
You can also monitor changes on specific objects (for example, a specific routing table or an IP address).
Another useful option that works with many commands is `ip -s`, which gives some statistics. Adding a second `-s` option adds even more statistics. `ip -s link list wlp4s0` above will give lots of information about packets received and transmitted, with the number of packets dropped, errors detected, and so on.
### Handy tip: Shorten your commands
In general, for the `ip` tool, you need to include only enough letters to uniquely identify what you want to do. Instead of `ip monitor`, you can use `ip mon`. Instead of `ip addr list`, you can use `ip a l`, and you can use `ip r` in place of `ip route`. `ip link list` can be shortened to `ip l ls`. To read about the many options you can use to change the behavior of a command, visit the [ip manpage][1].
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/useful-things-you-can-do-with-IP-tool-Linux
作者:[Dave Neary][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/dneary
[1]:https://www.systutorials.com/docs/linux/man/8-ip-route/


@ -0,0 +1,74 @@
translating---geekpi
Protect your Fedora system against this DHCP flaw
======
![](https://fedoramagazine.org/wp-content/uploads/2018/05/dhcp-cve-816x345.jpg)
A critical security vulnerability was discovered and disclosed earlier today in dhcp-client. This DHCP flaw carries a high risk to your system and data, especially if you use untrusted networks such as a WiFi access point you don't own. Read more here for how to protect your Fedora system.
The Dynamic Host Configuration Protocol (DHCP) allows your system to get configuration from a network it joins. Your system will make a request for DHCP data, and typically a server such as a router answers. The server provides the necessary data for your system to configure itself. This is how, for instance, your system configures itself properly for networking when it joins a wireless network.
However, an attacker on the local network may be able to exploit this vulnerability. Using a flaw in a dhcp-client script that runs under NetworkManager, the attacker may be able to run arbitrary commands with root privileges on your system. This DHCP flaw puts your system and your data at high risk. The flaw has been assigned CVE-2018-1111 and has a [Bugzilla tracking bug][1].
### Guarding against this DHCP flaw
New dhcp packages contain fixes for Fedora 26, 27, and 28, as well as Rawhide. The maintainers have submitted these updates to the updates-testing repositories. They should show up in stable repos within a day or so of this post for most users. The desired packages are:
* Fedora 26: dhcp-4.3.5-11.fc26
* Fedora 27: dhcp-4.3.6-10.fc27
* Fedora 28: dhcp-4.3.6-20.fc28
* Rawhide: dhcp-4.3.6-21.fc29
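If you want to check whether the `dhcp-client` build already on your system predates the fixed packages above, one rough approach is to compare version strings with `sort -V`. The installed version below is a made-up example; on a real system you would take it from `rpm -q dhcp-client`:

```shell
installed="dhcp-4.3.6-9.fc27"   # assumed example; substitute your real version
fixed="dhcp-4.3.6-10.fc27"      # the fixed Fedora 27 build from the list above
lowest=$(printf '%s\n' "$fixed" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$fixed" ]; then
    echo "patched"
else
    echo "update needed"   # → update needed (4.3.6-9 predates 4.3.6-10)
fi
```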
#### Updating a stable Fedora system
To update immediately on a stable Fedora release, use this command [with sudo][2]. Type your password at the prompt, if necessary:
```
sudo dnf --refresh --enablerepo=updates-testing update dhcp-client
```
Later, use the standard stable repos to update. To update your Fedora system from the stable repos, use this command:
```
sudo dnf --refresh update dhcp-client
```
#### Updating a Rawhide system
If your system is on Rawhide, use these commands to download and update the packages immediately:
```
mkdir dhcp && cd dhcp
koji download-build --arch={x86_64,noarch} dhcp-4.3.6-21.fc29
sudo dnf update ./dhcp-*.rpm
```
After the nightly Rawhide compose, simply run `sudo dnf update` to get the update.
### Fedora Atomic Host
The fixes for Fedora Atomic Host are in ostree version 28.20180515.1. To get the update, run this command:
```
atomic host upgrade -r
```
This command reboots your system to apply the upgrade.
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/protect-fedora-system-dhcp-flaw/
作者:[Paul W. Frields][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/pfrields/
[1]:https://bugzilla.redhat.com/show_bug.cgi?id=1567974
[2]:https://fedoramagazine.org/howto-use-sudo/


@ -0,0 +1,81 @@
Termux turns Android into a Linux development environment
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tux_penguin_linux_android.jpg?itok=ctgANLI7)
So you finally figured out how to exit Vim and you can write the most highly optimized version of "Hello World" this side of the Mississippi. Now it's time to up your game! Check out [Termux][1] for Android.
### What is Termux?
Termux is an Android terminal emulator and Linux environment. What that means in practice is that you can install Termux on most Android devices and do almost anything you would do in a full Linux development environment on that device. That all sounds cool, but you're probably asking yourself, "why would I want to code on my phone on a touch screen? That just sounds awful." Start thinking more along the lines of tablets paired with keyboards or Chromebooks that can now run Android applications. These are very cheap devices that can now be used to introduce people to Linux hacking and development. I know many of us in the Linux community started out by installing Linux on an old PC.
Tablets and Chromebooks are this generation's old, junky computers. And there are plenty to go around. Why not use them to introduce the next generation to Linux? And since Termux can be installed with a click in the [Google Play Store][2] , I would argue Termux is the easiest way to introduce anyone to Linux. But don't leave all the fun for the noobs. Termux can accommodate many of your needs for a Linux development environment.
Termux is Linux, but it is based on Android and runs in a container. That means you can install it with no root access required—but it also means it may take some getting used to. In this article, I'll outline some tips and tricks I found to get Termux working as a full-time development environment.
### Where's all my stuff?
The base of the Termux filesystem that you can see starts around `/data/data/com.termux/files/`. Under that directory, you'll find your home directory and the `usr` directory, where all the Linux packages are installed. This is kind of weird, but no big deal, right? You would be wrong, because almost every script on the planet is hard-coded for `/bin/bash`. Other libraries, executables, and configuration files are in places inconsistent with other Linux distributions.
Termux provides lots of [packages][3] that have been modified to run correctly. Try looking there first instead of doing a custom build. However, you will still probably need to custom-build many things. You can try modifying your package's source code, and even though changing paths is easy, it gets old quick. Thankfully Termux also comes bundled with [termux-exec][4]. Termux-exec will redirect script paths on the fly to get them to work correctly.
You may still run into some hard-coded paths that termux-exec doesn't handle. Since you don't have root access in Termux, you can't just create a symlink to fix path issues. However, you can create a [chroot jail][5]. Using the [PRoot][6] package, you can create a chroot that you have full control over and allows you to modify anything you want. You can also make chroots of different Linux distributions. If you are a Fedora fan, you can use Termux and run it in a chroot jail. Check out the [PRoot page][6] for more distros and installation details, or you can use [this script][7] to make a Termux chroot jail. I've only tried the Termux chroot and the Ubuntu chroot. The Ubuntu chroot had some issues that needed to be worked around, so your mileage may vary depending on the version of Linux you choose.
### One user to rule them all
In Termux, everything is installed and run under one user. This isn't so much a problem as something you need to get used to. This also means the typical services and user groups you might be familiar with are nowhere to be found. And nothing auto-starts on boot, so it's up to you to manage the start and stop of services you might use, like databases, SSH, etc. Also remember, your one user can't modify the base system, so you will need to use a chroot if you need to do that. Since you don't have nice, preset start scripts, you will probably have to come up with some of your own.
For everyday development, I needed Postgres, Nginx, and Redis. I'd never started these services manually before; normally they start and stop for me automatically, and I had to do a little digging to find out how to start my favorite services. Here is a sample of the three services I just mentioned. Hopefully, these examples will point you in the right direction to use your favorite service. You can also look at a package's documentation to find information on how to start and stop it.
#### Postgres
Start: `pg_ctl -D $PREFIX/var/lib/postgresql start`
Stop: `pg_ctl -D $PREFIX/var/lib/postgresql stop`
#### Nginx
Start: `nginx`
Stop: `nginx -s stop`
#### Redis
Start: `redis-server $PREFIX/etc/redis.conf`
Stop: `kill "$("$PREFIX/bin/applets/cat" "$PREFIX/var/run/redis_6379.pid")"`
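Because nothing here starts or stops automatically, I find it convenient to collect commands like these behind a small dispatcher function. This is only a sketch you might drop into `~/.bashrc`; the service commands are the examples from above, so adjust the names and paths for your own setup:

```shell
# Hypothetical helper: svc <start|stop> <service>
svc() {
    local action=$1 name=$2
    case "$name:$action" in
        postgres:start) pg_ctl -D "$PREFIX/var/lib/postgresql" start ;;
        postgres:stop)  pg_ctl -D "$PREFIX/var/lib/postgresql" stop ;;
        nginx:start)    nginx ;;
        nginx:stop)     nginx -s stop ;;
        redis:start)    redis-server "$PREFIX/etc/redis.conf" ;;
        redis:stop)     kill "$(cat "$PREFIX/var/run/redis_6379.pid")" ;;
        *) echo "svc: unknown service or action: $name $action" >&2; return 1 ;;
    esac
}
```

With this in place, `svc start redis` and `svc stop redis` replace the longer commands, and a typo in a service name fails with an error instead of doing nothing.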
### Broken dependencies
Android is built differently than other versions of Linux, and its kernel and libraries don't always match those in typical Linux software. You can see the [common porting problems][8] when trying to build software in Termux. You can work around most of them, but it may be too much effort to fix every dependency in your software.
For example, the biggest problem I ran into as a Python developer is that the Android kernel does not support semaphores. The multi-processing library in Python depends on this functionality, and fixing this on my own was too difficult. Instead, I hacked around it by using a different deployment mechanism. I had been using [uWSGI][9] to run my Python web services, so I switched to [Gunicorn][10]. This allowed me to route around using the standard Python multi-processing library. You may have to get a little creative to find alternative software dependencies when switching to Termux, but your list will probably be very small.
### Everyday Termux
When using Termux on a daily basis, you'll want to learn its [touch screen][11] or [hardware keyboard][12] shortcuts. You'll also need a text editor or IDE for coding. All the likely console-based editors are available through a quick package install: Vim, Emacs, and Nano. Termux is only console-based, so you won't be able to install any editors based on a graphical interface. I wanted to make sure Termux had a great IDE to go along with it, so I built the web-based Neutron64 editor to interface seamlessly with Termux. Just go to [Neutron64.com][13] and install [Neutron Beam][14] on Termux to start coding.
Check out [Termux][1] and turn your old Android devices into development powerhouses. Happy coding!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/termux
作者:[Paul Bailey][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/pizzapanther
[1]:https://termux.com/
[2]:https://play.google.com/store/apps/details?id=com.termux&hl=en_US
[3]:https://termux.com/package-management.html
[4]:https://wiki.termux.com/wiki/Termux-exec
[5]:https://en.wikipedia.org/wiki/Chroot
[6]:https://wiki.termux.com/wiki/PRoot
[7]:https://github.com/Neo-Oli/chrooted-termux
[8]:https://github.com/termux/termux-packages/blob/master/README.md#common-porting-problems
[9]:https://uwsgi-docs.readthedocs.io/en/latest/
[10]:http://gunicorn.org/
[11]:https://termux.com/touch-keyboard.html
[12]:https://termux.com/hardware-keyboard.html
[13]:https://www.neutron64.com/
[14]:https://www.neutron64.com/help/neutron-beam


A guide to Git branching
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/arrows_translation_lead.jpg?itok=S4vAh9CP)
In my two previous articles in this series, we [started using Git][1] and learned how to [clone, modify, add, and delete][2] Git files. In this third installment, we'll explore Git branching and why and how it is used.
![tree branches][3]
Picture this tree as a Git repository. It has a lot of branches, long and short, stemming from the trunk and stemming from other branches. Let's say the tree's trunk represents a master branch of our repo. I will use `master` in this article as an alias for "master branch"—i.e., the central or first branch of a repo. To simplify things, let's assume that the `master` is a tree trunk and the other branches start from it.
### Why we need branches in a Git repo
* If you are creating a new feature for your project, there's a reasonable chance that adding it could break your working code. This would be very bad for active users of your project. It's better to start with a prototype, which you would want to design roughly in a different branch and see how it works, before you decide whether to add the feature to the repo's `master` for others to use.
* Another, probably more important, reason is [Git was made][4] for collaboration. If everyone starts programming on top of your repo's `master` branch, it will cause a lot of confusion. Everyone has different knowledge and experience (in the programming language and/or the project); some people may write faulty/buggy code or simply the kind of code/feature you may not want in your project. Using branches allows you to verify contributions and select which to add to the project. (This assumes you are the only owner of the repo and want full control of what code is added to it. In real-life projects, there are multiple owners with the rights to merge code in a repo.)
### Adding a branch
Let's go back to the [previous article in this series][2] and see what branching in our Demo directory looks like. If you haven't yet done so, follow the instructions in that article to clone the repo from GitHub and navigate to Demo. Run the following commands:
```
pwd
git branch
ls -la
```
The `pwd` command (which stands for print working directory) reports which directory you're in (so you can check that you're in Demo), `git branch` lists all the branches on your computer in the Demo repository, and `ls -la` lists all the files in the current directory. Now your terminal will look like this:
![Terminal output][5]
There's only one file, `README.md`, on the branch master. (Kindly ignore the other directories and files listed.)
Next, run the following commands:
```
git status
git checkout -b myBranch
git status
```
The first command, `git status`, reports you are currently on `branch master`, and (as you can see in the terminal screenshot below) it is up to date with `origin/master`, which means all the files you have on your local copy of the branch master are also present on GitHub. There is no difference between the two copies, and all commits are identical on both as well.
In the next command, `git checkout -b myBranch`, the `-b` flag tells Git to create a new branch and name it `myBranch`, and `checkout` switches us to the newly created branch. Enter the third line, `git status`, to verify you are on the new branch you just created.
As you can see below, `git status` reports you are on branch `myBranch` and there is nothing to commit. This is because there is neither a new file nor any modification in existing files.
![Terminal output][6]
If you want to see a visual representation of branches, run the command `gitk`. If the computer complains `bash: gitk: command not found…`, then install `gitk`. (See documentation for your operating system for the install instructions.)
The image below reports what we've done in Demo: Your last commit was `Delete file.txt` and there were three commits before that. The current commit is noted with a yellow dot, previous commits with blue dots, and the three boxes between the yellow dot and `Delete file.txt` tell you where each branch is (i.e., what is the last commit on each branch). Since you just created `myBranch`, it is on the same commit as `master` and the remote counterpart of `master`, namely `remotes/origin/master`. (A big thanks to [Peter Savage][7] from Red Hat who made me aware of `gitk`.)
![Gitk output][8]
Now let's create a new file on our branch `myBranch` and observe the terminal output. Run the following commands:
```
echo "Creating a newFile on myBranch" > newFile
cat newFile
git status
```
The first command, `echo`, creates a file named `newFile`, and `cat newFile` shows what is written in it. `git status` tells you the current status of our branch `myBranch`. In the terminal screenshot below, Git reports there is a file called `newFile` on `myBranch` and `newFile` is currently `untracked`. That means Git has not been told to track any changes that happen to `newFile`.
![Terminal output][9]
The next step is to add, commit, and push `newFile` to `myBranch` (go back to the last article in this series for more details).
```
git add newFile
git commit -m "Adding newFile to myBranch"
git push origin myBranch
```
In these commands, the branch in the `push` command is `myBranch` instead of `master`. Git is taking `newFile`, pushing it to your Demo repository in GitHub, and telling you it's created a new branch on GitHub that is identical to your local copy of `myBranch`. The terminal screenshot below details the run of commands and its output.
![Terminal output][10]
If you go to GitHub, you can see there are two branches to pick from in the branch drop-down.
![GitHub][11]
Switch to `myBranch` by clicking on it, and you can see the file you added on that branch.
![GitHub][12]
Now there are two different branches; one, `master`, has a single file, `README.md`, and the other, `myBranch`, has two files.
Now that you know how to create a branch, let's create another branch. Enter the following commands:
```
git checkout master
git checkout -b myBranch2
touch newFile2
git add newFile2
git commit -m "Adding newFile2 to myBranch2"
git push origin myBranch2
```
I won't show this terminal output as I want you to try it yourself, but you are more than welcome to check out the [repository on GitHub][13].
### Deleting a branch
Since we've added two branches, let's delete one of them (`myBranch`) using a two-step process.
**1\. Delete the local copy of your branch:** Since you can't delete a branch you're on, switch to the `master` branch (or another one you plan to keep) by running the commands shown in the terminal image below:
`git branch` lists the available branches; `git checkout master` switches to the `master` branch, and `git branch -D myBranch` removes that branch. Run `git branch` again to verify there are now only two branches (instead of three).
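If you'd like to see the effect without touching your Demo repo, the same sequence can be reproduced in a throwaway repository (a sketch; the temporary paths and identity settings below are just for illustration):

```shell
tmp=$(mktemp -d) && cd "$tmp"             # scratch area, safe to delete later
git init -q
git -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "initial commit"
base=$(git symbolic-ref --short HEAD)     # master or main, depending on your Git
git checkout -q -b myBranch               # create the branch we will delete
git checkout -q "$base"                   # you cannot delete the branch you are on
git branch -D myBranch                    # -D force-deletes the local branch
git branch                                # myBranch is gone
```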
**2\. Delete the branch from GitHub:** Delete the remote copy of `myBranch` by running the following command:
```
git push origin :myBranch
```
![Terminal output][14]
The colon (`:`) before the branch name in the `push` command tells GitHub to delete the branch. Another option is:
```
git push -d origin myBranch
```
as `-d` (or `--delete`) also tells GitHub to remove your branch.
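Both forms can be tried safely against a local bare repository standing in for GitHub (a sketch; every path below is a throwaway):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q --bare origin.git                 # plays the role of GitHub
git clone -q origin.git work && cd work
git -c user.email=demo@example.com -c user.name=Demo \
    commit -q --allow-empty -m "initial commit"
git push -q origin HEAD                       # publish the default branch
git checkout -q -b myBranch
git push -q origin myBranch                   # the remote branch now exists
git push origin :myBranch                     # colon form deletes it remotely
git ls-remote --heads origin                  # myBranch is no longer listed
```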
Now that we've learned about using Git branches, in the next article in this series we'll look at the fetch and rebase operations. These are essential things to know when you are working on a project with multiple contributors.
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/git-branching
作者:[Kedar Vijay Kulkarni][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/kkulkarn
[1]:https://opensource.com/article/18/1/step-step-guide-git
[2]:https://opensource.com/article/18/2/how-clone-modify-add-delete-git-files
[3]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/tree-branches.jpg?itok=bQGpa5Uc (tree branches)
[4]:https://en.wikipedia.org/wiki/Git
[5]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gitbranching_terminal1.png?itok=ZcAzRdlR (Terminal output)
[6]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gitbranching_terminal2.png?itok=nIcfy2Vh (Terminal output)
[7]:https://opensource.com/users/psav
[8]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gitbranching_commit3.png?itok=GoP51yE4 (Gitk output)
[9]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gitbranching_terminal4.png?itok=HThID5aU (Terminal output)
[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gitbranching_terminal5.png?itok=rHVdrJ0m (Terminal output)
[11]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gitbranching_github6.png?itok=EyaKfCg2 (GitHub)
[12]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gitbranching_github7.png?itok=0ZSu0W2P (GitHub)
[13]:https://github.com/kedark3/Demo/tree/myBranch2
[14]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/gitbranching_terminal9.png?itok=B0vaRkyI (Terminal output)


Manipulating Directories in Linux
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/branches-238379_1920_0.jpg?itok=2PlNpsVu)
If you are new to this series (and to Linux), [take a look at our first installment][1]. In that article, we worked our way through the tree-like structure of the Linux filesystem, or more precisely, the File Hierarchy Standard. I recommend reading through it to make sure you understand what you can and cannot safely touch. Because this time around, I'll show how to get all touchy-feely with your directories.
### Making Directories
Let's get creative before getting destructive, though. To begin, open a terminal window and use `mkdir` to create a new directory like this:
```
mkdir <directoryname>
```
If you just put the directory name, the directory will appear hanging off the directory you are currently in. If you just opened a terminal, that will be your home directory. In a case like this, we say the directory will be created _relative_ to your current position:
```
$ pwd #This tells you where you are now -- see our first tutorial
/home/<username>
$ mkdir newdirectory #Creates /home/<username>/newdirectory
```
(Note that you do not have to type the text following the `#`. Text following the pound symbol `#` is considered a comment and is used to explain what is going on. It is ignored by the shell).
You can create a directory within an existing directory hanging off your current location by specifying it in the command line:
```
mkdir Documents/Letters
```
Will create the _Letters_ directory within the _Documents_ directory.
You can also create a directory above where you are by using `..` in the path. Say you move into the _Documents/Letters/_ directory you just created and you want to create a _Documents/Memos/_ directory. You can do:
```
cd Documents/Letters # Move into your recently created Letters/ directory
mkdir ../Memos
```
Again, all of the above is done relative to your current position. This is called using a _relative path_.
You can also use an _absolute path_ to directories: This means telling `mkdir` where to put your directory in relation to the root (`/`) directory:
```
mkdir /home/<username>/Documents/Letters
```
Change `<username>` to your user name in the command above and it will be equivalent to executing `mkdir Documents/Letters` from your home directory, except that it will work from wherever you are located in the directory tree.
As a side note, regardless of whether you use a relative or an absolute path, if the command is successful, `mkdir` will create the directory silently, without any apparent feedback whatsoever. Only if there is some sort of trouble will `mkdir` print some feedback after you hit _[Enter]_.
As with most other command-line tools, `mkdir` comes with several interesting options. The `-p` option is particularly useful, as it lets you create directories within directories within directories, even if none exist. To create, for example, a directory for letters to your Mom within _Documents/_ , you could do:
```
mkdir -p Documents/Letters/Family/Mom
```
And `mkdir` will create the whole branch of directories above _Mom/_ and also the directory _Mom/_ for you, regardless of whether any of the parent directories existed before you issued the command.
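You can verify this from a scratch directory (the `mktemp` scratch area below is just so the example doesn't clutter your home directory):

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p Documents/Letters/Family/Mom   # creates all four levels at once
find Documents -type d                  # lists every directory -p created
```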
You can also create several folders all at once by putting them one after another, separated by spaces:
```
mkdir Letters Memos Reports
```
will create the directories _Letters/_ , _Memos/_ and _Reports_ under the current directory.
### In space nobody can hear you scream
... Which brings us to the tricky question of spaces in directory names. Can you use spaces in directory names? Yes, you can. Is it advised you use spaces? No, absolutely not. Spaces make everything more complicated and, potentially, dangerous.
Say you want to create a directory called _letters mom/_. If you didn't know any better, you could type:
```
mkdir letters mom
```
But this is WRONG! WRONG! WRONG! As we saw above, this will create two directories, _letters/_ and _mom/_ , but not _letters mom/_.
Agreed that this is a minor annoyance: all you have to do is delete the two directories and start over. No big deal.
But, wait! Deleting directories is where things get dangerous. Imagine you did create _letters mom/_ using a graphical tool, like, say [Dolphin][2] or [Nautilus][3]. If you suddenly decide to delete _letters mom/_ from a terminal, and you have another directory just called _letters/_ under the same directory, and said directory is full of important documents, and you tried this:
```
rmdir letters mom
```
You would risk removing _letters/_. I say "risk" because fortunately `rmdir`, the instruction used to remove directories, has a built-in safeguard and will warn you if you try to delete a non-empty directory.
However, this:
```
rm -Rf letters mom
```
(and this is a pretty standard way of getting rid of directories and their contents) will completely obliterate _letters/_ and will never even tell you what just happened.
The `rm` command is used to delete files and directories. When you use it with the options `-R` (delete _recursively_ ) and `-f` ( _force_ deletion), it will burrow down into a directory and its subdirectories, deleting all the files they contain, then deleting the subdirectories themselves, then it will delete all the files in the top directory and then the directory itself.
`rm -Rf` is an instruction you must handle with extreme care.
My advice is, instead of spaces, use underscores (`_`), but if you still insist on spaces, there are two ways of getting them to work. You can use single or double quotes (`'` or `"`) like so:
```
mkdir 'letters mom'
mkdir "letters dad"
```
Or, you can _escape_ the spaces. Some characters have a special meaning for the shell. Spaces, as you have seen, are used to separate options and arguments on the command line. "Separating options and arguments" falls under the category of "special meaning". When you want the shell to ignore the special meaning of a character, you need to _escape_ it and to escape a character, you put a backslash (`\`) in front of it:
```
mkdir letters\ mom
mkdir letters\ dad
```
There are other special characters that would need escaping, like the apostrophe or single quote (`'`), double quotes (`"`), and the ampersand (`&`):
```
mkdir mom\ \&\ dad\'s\ letters
```
I know what you're thinking: If the backslash has a special meaning (to wit, telling the shell it has to escape the next character), that makes it a special character, too. Then, how would you escape the escape character which is `\`?
Turns out, the exact way you escape any other special character:
```
mkdir special\\characters
```
will produce a directory called _special\characters_.
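All three approaches can be checked side by side in a scratch directory (a sketch using the names from the examples above):

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir 'letters mom'          # quoting keeps the space in one name
mkdir letters\ dad           # escaping the space does the same
mkdir special\\characters    # escaped backslash: a literal \ in the name
ls -1                        # one directory per line, spaces intact
```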
Confusing? Of course. That's why you should avoid using special characters, including spaces, in directory names.
For the record, here is a list of special characters you can refer to just in case.
### Things to Remember
* Use `mkdir <directory name>` to create a new directory.
* Use `rmdir <directory name>` to delete a directory (only works if it is empty).
* Use `rm -Rf <directory name>` to annihilate a directory -- use with extreme caution.
  * Use a relative path to create directories relative to your current directory: `mkdir newdir`.
* Use an absolute path to create directories relative to the root directory (`/`): `mkdir /home/<username>/newdir`
* Use `..` to create a directory in the directory above the current directory: `mkdir ../newdir`
* You can create several directories all in one go by separating them with spaces on the command line: `mkdir onedir twodir threedir`
* You can mix and mash relative and absolute paths when creating several directories simultaneously: `mkdir onedir twodir /home/<username>/threedir`
* Using spaces and special characters in directory names guarantees plenty of headaches and heartburn. Don't do it.
For more information, you can look up the manuals of `mkdir`, `rmdir` and `rm`:
```
man mkdir
man rmdir
man rm
```
To exit the man pages, press _[q]_.
### Next Time
In the next installment, you'll learn about creating, modifying, and erasing files, as well as everything you need to know about permissions and privileges. See you then!
Learn more about Linux through the free ["Introduction to Linux" ][4]course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.linux.com/blog/learn/intro-to-linux/2018/4/linux-filesystem-explained
[2]:https://userbase.kde.org/Dolphin
[3]:https://projects-old.gnome.org/nautilus/screenshots.html
[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux


translating---geekpi
How to find your IP address in Linux
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/satellite_radio_location.jpg?itok=KJUKSB6x)
Internet Protocol (IP) needs no introduction—we all use it daily. Even if you don't use it directly, when you type website-name.com in your web browser, it looks up the IP address of that domain and then loads the website.
Let's divide IP addresses into two categories: private and public. Private IP addresses are the ones your WiFi box (and company intranet) provide. They are in the range of 10.x.x.x, 172.16.x.x-172.31.x.x, and 192.168.x.x, where x=0 to 255. Public IP addresses, as the name suggests, are "public" and you can reach them from anywhere in the world. Every website has a unique IP address that can be reached by anyone and from anywhere; that is considered a public IP address.
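Those private ranges are easy to check mechanically. Here is a small sketch of a shell helper (the `is_private` name is hypothetical) that classifies an IPv4 address using the ranges above:

```shell
# Returns success (0) if the address falls in a private (RFC 1918) range
is_private() {
  case "$1" in
    10.*|192.168.*)                        return 0 ;;  # 10.x.x.x, 192.168.x.x
    172.1[6-9].*|172.2[0-9].*|172.3[01].*) return 0 ;;  # 172.16.x.x-172.31.x.x
    *)                                     return 1 ;;
  esac
}
is_private 192.168.0.5 && echo "192.168.0.5 is private"
is_private 8.8.8.8     || echo "8.8.8.8 is public"
```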
Furthermore, there are two types of IP addresses: IPv4 and IPv6.
IPv4 addresses have the format x.x.x.x, where x=0 to 255. There are 2^32 (approximately 4 billion) possible IPv4 addresses.
IPv6 addresses have a more complex format using hexadecimal numbers. The total number of bits is 128, which means there are 2^128 (about 340 undecillion!) possible IPv6 addresses. IPv6 was introduced to tackle the foreseeable exhaustion of IPv4 addresses.
As a network engineer, I recommend not sharing your machine's public IP address with anyone. Your WiFi router has a public IP, which is the WAN (wide-area network) IP address, and it will be the same for any device connected to that WiFi. All the devices connected to the same WiFi have private IP addresses locally identified by the ranges provided above. For example, my laptop is connected with the IP address 192.168.0.5, and my phone is connected with 192.168.0.8. These are private IP addresses, but both would have the same public IP address.
The following commands will get you the IP address list to find public IP addresses for your machine:
  1. `curl ifconfig.me`
2. `curl -4/-6 icanhazip.com`
3. `curl ipinfo.io/ip`
4. `curl api.ipify.org`
5. `curl checkip.dyndns.org`
6. `dig +short myip.opendns.com @resolver1.opendns.com`
7. `host myip.opendns.com resolver1.opendns.com`
8. `curl ident.me`
9. `curl bot.whatismyipaddress.com`
10. `curl ipecho.net/plain`
The following commands will get you the private IP address of your interfaces:
1. `ifconfig -a`
2. `ip addr (ip a)`
  3. `hostname -I | awk '{print $1}'`
4. `ip route get 1.2.3.4 | awk '{print $7}'`
  5. (Fedora) Wi-Fi Settings → click the settings icon next to the Wi-Fi name you are connected to → both the IPv4 and IPv6 addresses are shown
6. `nmcli -p device show`
_Note: Some utilities need to be installed on your system based on the Linux distro you are using. Also, some of the noted commands use a third-party website to get the IP address._
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/how-find-ip-address-linux
作者:[Archit Modi][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/architmodi


How To Install Ncurses Library In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/install-ncurses-720x340.png)
**GNU Ncurses** is a programming library that allows users to write text-based user interfaces (TUIs). Many text-based games are created using this library. One popular example is [**PacVim**][1], a CLI game to learn VIM commands. In this brief guide, I will explain how to install the Ncurses library in Unix-like operating systems.
### Install Ncurses Library In Linux
Ncurses is available in the default repositories of most Linux distributions. For instance, you can install it on Arch-based systems using the following command:
```
$ sudo pacman -S ncurses
```
On RHEL, CentOS:
```
$ sudo yum install ncurses-devel
```
On Fedora 22 and newer versions:
```
$ sudo dnf install ncurses-devel
```
On Debian, Ubuntu, Linux Mint:
```
$ sudo apt-get install libncurses5-dev libncursesw5-dev
```
The GNU Ncurses version in the default repositories might be a bit old. If you want the most recent stable version, you can compile and install it from source as shown below.
Download the latest ncurses version from [**here**][2]. As of writing this guide, the latest version was 6.1.
```
$ wget https://ftp.gnu.org/pub/gnu/ncurses/ncurses-6.1.tar.gz
```
Extract the tar file:
```
$ tar xzf ncurses-6.1.tar.gz
```
This will create a folder named `ncurses-6.1` in the current directory. `cd` into it and configure the build with an installation prefix:
```
$ cd ncurses-6.1
$ ./configure --prefix=/opt/ncurses
```
Finally, compile and install using the following commands:
```
$ make
$ sudo make install
```
Verify the installation using command:
```
$ ls -la /opt/ncurses
```
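Note that, with the `/opt/ncurses` prefix used above, the library isn't on the compiler's or linker's default search paths. A sketch of the environment tweaks for a Bourne-style shell (adjust the variables to your toolchain):

```shell
export PATH=/opt/ncurses/bin:$PATH                                   # tic, ncurses6-config, etc.
export CPATH=/opt/ncurses/include${CPATH:+:$CPATH}                   # headers for the compiler
export LIBRARY_PATH=/opt/ncurses/lib${LIBRARY_PATH:+:$LIBRARY_PATH}  # libraries at link time
export LD_LIBRARY_PATH=/opt/ncurses/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}  # at run time
```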
That's it. Ncurses has now been installed on your Linux system. Go ahead and create your nice-looking TUIs using Ncurses.
More good stuff to come. Stay tuned!
Cheers!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-install-ncurses-library-in-linux/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/pacvim-a-cli-game-to-learn-vim-commands/
[2]:https://ftp.gnu.org/pub/gnu/ncurses/


How to Manage Fonts in Linux
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_main.jpg?itok=qcJks7-c)
Not only do I write technical documentation, I write novels. And because I'm comfortable with tools like GIMP, I also create my own book covers (and do graphic design for a few clients). That artistic endeavor depends upon a lot of pieces falling into place, including fonts.
Although font rendering has come a long way over the past few years, it continues to be an issue in Linux. If you compare the look of the same fonts on Linux vs. macOS, the difference is stark. This is especially true when you're staring at a screen all day. But even though the rendering of fonts has yet to find perfection in Linux, one thing that the open source platform does well is allow users to easily manage their fonts. From selecting, adding, scaling, and adjusting, you can work with fonts fairly easily in Linux.
Here, I'll share some of the tips I've depended on over the years to help extend my “font-ability” in Linux. These tips will especially help those who undertake artistic endeavors on the open source platform. Because there are so many desktop interfaces available for Linux (each of which deals with fonts in a different way), and because a desktop environment is central to the management of fonts, I'll be focusing primarily on GNOME and KDE.
With that said, let's get to work.
### Adding new fonts
For the longest time, I have been a collector of fonts. Some might say I have a bit of an obsession. And since my early days of using Linux, I've always used the same process for adding fonts to my desktops. There are two ways to do this:
* Make the fonts available on a per-user basis.
* Make the fonts available system-wide.
Because my desktops never have other users (besides myself), I only ever work with fonts on a per-user basis. However, I will show you how to do both. First, let's see how to add fonts on a per-user basis. The first thing you must do is find fonts. Both TrueType Fonts (TTF) and OpenType Fonts (OTF) can be added. I add fonts manually. To do this, I create a new hidden directory in my home directory called `~/.fonts`. This can be done with the command:
```
mkdir ~/.fonts
```
With that folder created, I then move all of my TTF and OTF files into the directory. That's it. Every font you add to that directory will now be available for use to your installed apps. But remember, those fonts will only be available to that one user.
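In script form, that per-user installation is just a copy plus a cache refresh (a sketch; `~/Downloads/fonts` is a hypothetical location for the fonts you collected, and the `fc-cache` step assumes fontconfig, which ships with virtually every desktop distribution):

```shell
mkdir -p ~/.fonts
# copy any TTF/OTF files you downloaded; ignore the error if there are none yet
cp ~/Downloads/fonts/*.ttf ~/Downloads/fonts/*.otf ~/.fonts/ 2>/dev/null || true
# rebuild the per-user font cache so apps pick up the new fonts immediately
command -v fc-cache >/dev/null && fc-cache -f ~/.fonts || true
```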
If you want to make that collection of fonts available to all users, here's what you do:
1. Open up a terminal window.
2. Change into the directory housing all of your fonts.
  3. Copy all of those fonts with the commands `sudo cp *.ttf *.TTF /usr/share/fonts/truetype/` and `sudo cp *.otf *.OTF /usr/share/fonts/opentype/`
The next time a user logs in, they'll have access to all those glorious fonts.
### GUI Font Managers
There are a few ways to manage your fonts in Linux via GUI, and how it's done will depend on your desktop environment. Let's examine KDE first. With the KDE that ships with Kubuntu 18.04, you'll find a Font Management tool pre-installed. Open that tool and you can easily add, remove, enable, and disable fonts (as well as get information about all of the installed fonts). This tool also makes it easy for you to add and remove fonts for personal and system-wide use. Let's say you want to add a particular font for personal usage. To do this, download your font and then open up the Font Management tool. In this tool (Figure 1), click on Personal Fonts and then click the + Add button.
![adding fonts][2]
Figure 1: Adding personal fonts in KDE.
[Used with permission][3]
Navigate to the location of your fonts, select them, and click Open. Your fonts will then be added to the Personal section and are immediately available for you to use (Figure 2).
![KDE Font Manager][5]
Figure 2: Fonts added with the KDE Font Manager.
[Used with permission][3]
To do the same thing in GNOME requires the installation of an application. Open up either GNOME Software or Ubuntu Software (depending upon the distribution you're using) and search for Font Manager. Select Font Manager and then click the Install button. Once the software is installed, launch it from the desktop menu. With the tool open, let's install fonts on a per-user basis. Here's how:
1. Select User from the left pane (Figure 3).
2. Click the + button at the top of the window.
3. Navigate to and select the downloaded fonts.
4. Click Open.
![Adding fonts ][7]
Figure 3: Adding fonts in GNOME.
[Used with permission][3]
### Tweaking fonts
There are three concepts you must first understand:
* **Font Hinting:** The use of mathematical instructions to adjust the display of a font outline so that it lines up with a rasterized grid.
* **Anti-aliasing:** The technique used to add greater realism to a digital image by smoothing jagged edges on curved lines and diagonals.
  * **Scaling factor:** A scalable unit that allows you to multiply the point size of a font. So if your font is 12pt and you have a scaling factor of 1, the font size will be 12pt. If your scaling factor is 2, the font size will be 24pt.
Let's say you've installed your fonts, but they don't look quite as good as you'd like. How do you tweak the appearance of fonts? In both the KDE and GNOME desktops, you can make a few adjustments. One thing to consider with the tweaking of fonts is that taste is very much subjective. You might find yourself having to continually tweak until you get the fonts looking exactly how you like (dictated by your needs and particular taste). Let's first look at KDE.
Open up the System Settings tool and click on Fonts. In this section, you can not only change various fonts, you can also enable and configure both anti-aliasing and the font scaling factor (Figure 4).
![Configuring fonts][9]
Figure 4: Configuring fonts in KDE.
[Used with permission][3]
To configure anti-aliasing, select Enabled from the drop-down and then click Configure. In the resulting window (Figure 5), you can configure an exclude range, sub-pixel rendering type, and hinting style.
Once you've made your changes, click Apply. Restart any running applications and the new settings will take effect.
To do this in GNOME, you must have either Font Manager or GNOME Tweaks installed. For this, GNOME Tweaks is the better tool. If you open the GNOME Dash and cannot find Tweaks installed, open GNOME Software (or Ubuntu Software) and install GNOME Tweaks. Once installed, open it and click on the Fonts section. Here you can configure hinting, anti-aliasing, and scaling factor (Figure 6).
![Tweaking fonts][11]
Figure 6: Tweaking fonts in GNOME.
[Used with permission][3]
### Make your fonts beautiful
And that's the gist of making your fonts look as beautiful as possible in Linux. You may not see a macOS-like rendering of fonts, but you can certainly improve the look. Finally, the fonts you choose will have a large impact on how things look. Make sure you're installing clean, well-designed fonts; otherwise, you're fighting a losing battle.
Learn more about Linux through the free ["Introduction to Linux" ][12] course from The Linux Foundation and edX.
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/5/how-manage-fonts-linux
作者:[Jack Wallen][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/jlwallen
[2]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_1.jpg?itok=7yTTe6o3 (adding fonts)
[3]:https://www.linux.com/licenses/category/used-permission
[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_2.jpg?itok=_g0dyVYq (KDE Font Manager)
[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_3.jpg?itok=8o884QKs (Adding fonts )
[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_4.jpg?itok=QJpPzFED (Configuring fonts)
[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_6.jpg?itok=4cQeIW9C (Tweaking fonts)
[12]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux


@ -0,0 +1,275 @@
What's a hero without a villain? How to add one to your Python game
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/game-dogs-chess-play-lead.png?itok=NAuhav4Z)
In the previous articles in this series (see [part 1][1], [part 2][2], [part 3][3], and [part 4][4]), you learned how to use Pygame and Python to spawn a playable character in an as-yet empty video game world. But, what's a hero without a villain?
It would make for a pretty boring game if you had no enemies, so in this article, you'll add an enemy to your game and construct a framework for building levels.
It might seem strange to jump ahead to enemies when there's still more to be done to make the player sprite fully functional, but you've learned a lot already, and creating villains is very similar to creating a player sprite. So relax, use the knowledge you already have, and see what it takes to stir up some trouble.
For this exercise, you can download some pre-built assets from [Open Game Art][5]. Here are some of the assets I use:
+ Inca tileset
+ Some invaders
+ Sprites, characters, objects, and effects
### Creating the enemy sprite
Yes, whether you realize it or not, you basically already know how to implement enemies. The process is very similar to creating a player sprite:
1. Make a class so enemies can spawn.
2. Create an `update` function so enemies can detect collisions.
3. Create a `move` function so your enemy can roam around.
Start with the class. Conceptually, it's mostly the same as your Player class. You set an image or series of images, and you set the sprite's starting position.
Before continuing, make sure you have a graphic for your enemy, even if it's just a temporary one. Place the graphic in your game project's `images` directory (the same directory where you placed your player image).
A game looks a lot better if everything alive is animated. Animating an enemy sprite is done the same way as animating a player sprite. For now, though, keep it simple, and use a non-animated sprite.
At the top of the `objects` section of your code, create a class called Enemy with this code:
```
class Enemy(pygame.sprite.Sprite):
    '''
    Spawn an enemy
    '''
    def __init__(self,x,y,img):
        pygame.sprite.Sprite.__init__(self)
        self.image = pygame.image.load(os.path.join('images',img))
        self.image.convert_alpha()
        self.image.set_colorkey(ALPHA)
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
```
If you want to animate your enemy, do it the [same way][4] you animated your player.
### Spawning an enemy
You can make the class useful for spawning more than just one enemy by allowing yourself to tell the class which image to use for the sprite and where in the world the sprite should appear. This means you can use this same enemy class to generate any number of enemy sprites anywhere in the game world. All you have to do is make a call to the class, and tell it which image to use and the X and Y coordinates of your desired spawn point.
Again, this is similar in principle to spawning a player sprite. In the `setup` section of your script, add this code:
```
enemy = Enemy(20,200,'yeti.png')     # spawn enemy
enemy_list = pygame.sprite.Group()   # create enemy group
enemy_list.add(enemy)                # add enemy to group
```
In that sample code, `20` is the X position and `200` is the Y position. You might need to adjust these numbers, depending on how big your enemy sprite is, but try to get it to spawn in a place you can reach with your player sprite. The `yeti.png` image is used for the enemy.
Next, draw all enemies in the enemy group to the screen. Right now, you have only one enemy, but you can add more later if you want. As long as you add an enemy to the enemies group, it will be drawn to the screen during the main loop. The middle line is the new line you need to add:
```
    player_list.draw(world)
    enemy_list.draw(world)  # refresh enemies
    pygame.display.flip()
```
Launch your game. Your enemy appears in the game world at whatever X and Y coordinate you chose.
### Level one
Your game is in its infancy, but you will probably want to add another level. It's important to plan ahead when you program so your game can grow as you learn more about programming. Even though you don't even have one complete level yet, you should code as if you plan on having many levels.
Think about what a "level" is. How do you know you are at a certain level in a game?
You can think of a level as a collection of items. In a platformer, such as the one you are building here, a level consists of a specific arrangement of platforms, placement of enemies and loot, and so on. You can build a class that builds a level around your player. Eventually, when you create more than one level, you can use this class to generate the next level when your player reaches a specific goal.
Move the code you wrote to create an enemy and its group into a new function that will be called along with each new level. It requires some modification so that each time you create a new level, you can create several enemies:
```
class Level():
    def bad(lvl,eloc):
        enemy_list = pygame.sprite.Group() # create enemy group
        if lvl == 1:
            enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy
            enemy_list.add(enemy)          # add enemy to group
        if lvl == 2:
            print("Level " + str(lvl) )
        return enemy_list
```
The `return` statement ensures that when you use the `Level.bad` function, you're left with an `enemy_list` containing each enemy you defined.
Since you are creating enemies as part of each level now, your `setup` section needs to change, too. Instead of creating an enemy, you must define where the enemy will spawn and what level it belongs to.
```
eloc = [200,20]
enemy_list = Level.bad( 1, eloc )
```
Run the game again to confirm your level is generating correctly. You should see your player, as usual, and the enemy you added in this chapter.
### Hitting the enemy
An enemy isn't much of an enemy if it has no effect on the player. It's common for enemies to cause damage when a player collides with them.
Since you probably want to track the player's health, the collision check happens in the Player class rather than in the Enemy class. You can track the enemy's health, too, if you want. The logic and code are pretty much the same, but, for now, just track the player's health.
To track player health, you must first establish a variable for the player's health. The first line in this code sample is for context, so add the second line to your Player class:
```
        self.frame  = 0
        self.health = 10
```
In the `update` function of your Player class, add this code block:
```
        hit_list = pygame.sprite.spritecollide(self, enemy_list, False)
        for enemy in hit_list:
            self.health -= 1
            print(self.health)
```
This code establishes a collision detector using the Pygame function `sprite.spritecollide`. It returns a list, stored here in `hit_list`, of every sprite in `enemy_list` whose hitbox touches the hitbox of its parent sprite (the player sprite, where this detector has been created). The `for` loop runs over that list and deducts a point from the player's health for each colliding enemy.
Since this code appears in the `update` function of your player class and `update` is called in your main loop, Pygame checks for this collision once every clock tick.
### Moving the enemy
An enemy that stands still is useful if you want, for instance, spikes or traps that can harm your player, but the game is more of a challenge if the enemies move around a little.
Unlike a player sprite, the enemy sprite is not controlled by the user. Its movements must be automated.
Eventually, your game world will scroll, so how do you get an enemy to move back and forth within the game world when the game world itself is moving?
You tell your enemy sprite to take, for example, 10 paces to the right, then 10 paces to the left. An enemy sprite can't count, so you have to create a variable to keep track of how many paces your enemy has moved and program your enemy to move either right or left depending on the value of your counting variable.
First, create the counter variable in your Enemy class. Add the last line in this code sample:
```
        self.rect = self.image.get_rect()
        self.rect.x = x
        self.rect.y = y
        self.counter = 0 # counter variable
```
Next, create a `move` function in your Enemy class. Use an if-else block to create what is, in effect, an infinite loop:
* Move right if the counter is on any number from 0 to 100.
* Move left if the counter is on any number from 100 to 200.
* Reset the counter back to 0 if the counter is greater than 200.
An infinite loop has no end; it loops forever because nothing in the loop is ever untrue. The counter, in this case, is always either between 0 and 100 or between 100 and 200, so the enemy sprite walks right, then left, then right again, forever.
The actual numbers you use for how far the enemy will move in either direction depend on your screen size and, possibly, eventually, on the size of the platform your enemy is walking on. Start small and work your way up as you get used to the results. Try this first:
```
    def move(self):
        '''
        enemy movement
        '''
        distance = 80
        speed = 8
        if self.counter >= 0 and self.counter <= distance:
            self.rect.x += speed
        elif self.counter >= distance and self.counter <= distance*2:
            self.rect.x -= speed
        else:
            self.counter = 0
        self.counter += 1
```
You can adjust the distance and speed as needed.
Will this code work if you launch your game now?
Of course not, and you probably know why. You must call the `move` function in your main loop. The first line in this sample code is for context, so add the last two lines:
```
    enemy_list.draw(world) #refresh enemy
    for e in enemy_list:
        e.move()
```
Launch your game and see what happens when you hit your enemy. You might have to adjust where the sprites spawn so that your player and your enemy sprite can collide. When they do collide, look in the console of [IDLE][6] or [Ninja-IDE][7] to see the health points being deducted.
![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/yeti.png?itok=4_GsDGor)
You may notice that health is deducted for every moment your player and enemy are touching. That's a problem, but it's a problem you'll solve later, after you've had more practice with Python.
For now, try adding some more enemies. Remember to add each enemy to the `enemy_list`. As an exercise, see if you can think of how you can change how far different enemy sprites move.
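One common fix for the continuous-damage problem, when you do get to it, is a brief damage cooldown: after a hit, ignore further collisions for a fixed number of ticks. Here is a minimal standalone sketch of that idea in plain Python (no Pygame required), so the logic can be tried on its own; `COOLDOWN_TICKS` and `damage_cooldown` are illustrative names of my own, not part of the Pygame API:

```
# A standalone sketch of a damage cooldown (no Pygame needed).
# COOLDOWN_TICKS and damage_cooldown are illustrative names.

COOLDOWN_TICKS = 40  # at 40 ticks per second, ignore hits for ~1 second

class Player:
    def __init__(self):
        self.health = 10
        self.damage_cooldown = 0  # ticks left before damage applies again

    def take_hit(self):
        # deduct health only if the cooldown has expired
        if self.damage_cooldown == 0:
            self.health -= 1
            self.damage_cooldown = COOLDOWN_TICKS

    def tick(self):
        # call once per clock tick, e.g. at the end of update()
        if self.damage_cooldown > 0:
            self.damage_cooldown -= 1

player = Player()
for _ in range(10):   # enemy overlaps the player for 10 consecutive ticks
    player.take_hit()
    player.tick()
print(player.health)  # 9 -- only one point deducted
```

If you fold something like this into your real Player class, decrement the cooldown once per tick from `update`, right after the collision check.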
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/pygame-enemy
作者:[Seth Kenlon][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]: https://opensource.com/users/seth
[1]:https://opensource.com/article/17/10/python-101
[2]:https://opensource.com/article/17/12/game-framework-python
[3]:https://opensource.com/article/17/12/game-python-add-a-player
[4]:https://opensource.com/article/17/12/game-python-moving-player
[5]:https://opengameart.org
[6]:https://docs.python.org/3/library/idle.html
[7]:http://ninja-ide.org/


@ -0,0 +1,82 @@
translating---geekpi
Starting user software in X
======
There are currently many ways of starting software when a user session starts.
This is an attempt to collect a list of pointers to piece the big picture together. It's partial and some parts might be imprecise or incorrect, but it's a start, and I'm happy to keep it updated if I receive corrections.
### x11-common
`man xsession`
* Started by the display manager: for example, `/usr/share/lightdm/lightdm.conf.d/01_debian.conf` or `/etc/gdm3/Xsession`
* Debian specific
* Runs scripts in `/etc/X11/Xsession.d/`
* `/etc/X11/Xsession.d/40x11-common_xsessionrc` sources `~/.xsessionrc` which can do little more than set env vars, because it is run at the beginning of X session startup
* At the end, it starts the session manager (`gnome-session`, `xfce4-session`, and so on)
### systemd --user
* <https://wiki.archlinux.org/index.php/Systemd/User>
* Started by `pam_systemd`, so it might not have a DISPLAY variable set in the environment yet
* Manages units in:
* `/usr/lib/systemd/user/` where units provided by installed packages belong.
* `~/.local/share/systemd/user/` where units of packages that have been installed in the home directory belong.
* `/etc/systemd/user/` where system-wide user units are placed by the system administrator.
* `~/.config/systemd/user/` where the users put their own units.
* A trick to start a systemd user unit when the X session has been set up and the DISPLAY variable is available is to call `systemctl start` from a `.desktop` autostart file.
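As an illustrative sketch of that trick (the unit name `mysync.service` is a hypothetical placeholder), such a file placed in `~/.config/autostart/` might look like this:

```
[Desktop Entry]
Type=Application
Name=Start mysync user unit
# Runs once the session is up, when DISPLAY is available
Exec=systemctl --user start mysync.service
```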
### dbus activation
* <https://dbus.freedesktop.org/doc/system-activation.txt>
* A user process making a dbus request can trigger starting a server program
* For system debugging, is there a way of monitoring which services are getting dbus-activated?
### X session manager
* <https://en.wikipedia.org/wiki/X_session_manager>
* Run by `x11-common`'s `Xsession.d`
* Runs freedesktop autostart .desktop files
* Runs Desktop Environment specific software
### xdg autostart
* <https://specifications.freedesktop.org/autostart-spec/autostart-spec-latest.html>
* Run by the session manager
* If both `/etc/xdg/autostart/foo.desktop` and `~/.config/autostart/foo.desktop` exist, then only `~/.config/autostart/foo.desktop` will be used, because `~/.config/autostart/` takes precedence over `/etc/xdg/autostart/`
* Is there an ordering or is it all in parallel?
### Other startup notes
#### ~/.Xauthority
To connect to an X server, a client needs to send a token from `~/.Xauthority`, which proves that it can read the user's private data.
`~/.Xauthority` contains a token generated by the display manager and communicated to X at startup.
To view its contents, use `xauth -i -f ~/.Xauthority list`
--------------------------------------------------------------------------------
via: http://www.enricozini.org/blog/2018/debian/starting-user-software/
作者:[Enrico Zini][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://www.enricozini.org/


@ -0,0 +1,95 @@
对数据隐私持开放的态度
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_opendata.png?itok=M8L2HGVx)
Image by : opensource.com
今天是[数据隐私日][1](在欧洲叫“数据保护日”),你可能会认为现在我们处于一个开源的世界中,所有的数据都应该是自由的,[就像人们想的那样][2],但是现实并没那么简单。主要有两个原因:
1. 我们中的大多数(不仅仅是在开源界)认为至少有些关于自己的数据是不愿意分享出去的(我在[之前发表的一篇文章][3]中列举了一些例子);
2. 我们很多人虽然在开源领域工作,但事实上是为一些商业公司或者其他组织工作,分享数据也要在合法的要求范围内进行。
所以实际上,数据隐私对于每个人来说是很重要的。
事实证明,在数据能被组织利用到什么程度这个问题上,美国和欧洲的民众与政府的态度是有些不同的。前者通常给予实体(更愤世嫉俗地说,是那些利用收集到的关于我们的数据的大型商业体)更多的自由度。而在欧洲,一直以来持有的是约束限制更多的观念,而且在 5 月 25 日,欧洲的观点可以说取得了胜利。
### 通用数据保护条例的影响
这是一个相当笼统的说法,但事实基本如此:欧盟在 2016 年通过的一项关于通用数据保护的立法,将在那一天开始生效、可以实施。通用数据保护条例在私人数据怎样才能被保存、如何才能被使用、谁能使用、能被持有多长时间这些方面设置了严格的规则。它还描述了什么数据属于私人数据,而且涉及的范围非常广泛:从你的姓名、家庭住址,到你的医疗记录,以及接入你电脑的 IP 地址。
通用数据保护条例的重要之处是,它并不仅仅适用于欧洲的公司:如果你是阿根廷、日本、美国或者俄罗斯的公司,而且你正在收集涉及到欧盟居民的数据,你就要受到这个条例的约束管辖。
“哼!”你可能会这样说,“我的业务不在欧洲:他们能对我有啥约束?”答案很简单:如果你想继续在欧盟做任何生意,你最好遵守,因为一旦你违反了通用数据保护条例的规则,你将会受到最高相当于你全球总收入百分之四的罚款。是的,你没听错:是全球总收入,不是仅仅在欧盟某一国家的收入,也不只是净利润,而是全球总收入。这足以让你的法律团队警觉起来,他们会知会你的管理层,后者则会立即督促你的 IT 团队,确保你的行为在相当短的时间内符合要求。
看上去这似乎只和欧盟之内的居民相关,但其实不然:对大多数公司来说,对所有的顾客、合作伙伴以及员工实行同样的数据保护措施,要比只针对欧盟居民的数据区别对待既简单又有效率得多。2
然而,通用数据保护条例不久将在全球实施,并不意味着一切都会变得很美好:事实并非如此,我们一直在交出关于我们自己的信息,而且允许公司去使用它。
有一句话是这么说的(尽管有争议):“如果你没有在付费,那么你就是产品。”这句话的意思是,如果你没有为某一项服务付费,那么其他人就在付费使用你的数据。
你为使用 Facebook、推特、谷歌邮箱付过费吗你觉得他们是如何赚钱的大部分是通过广告。有些人会争论说那只是他们用来向你提供服务的一种方式但事实上是他们在利用你的数据从广告商那里获取收益。你并不是广告的真正顾客只有当你看了广告后买了他们的商品你才变成了顾客但在这之前商业关系只存在于广告平台和广告商之间。
有些服务允许你通过付费来去除广告,流媒体音乐平台 Spotify 就是这样的。但从另一方面来讲,即使是付费的服务也可能插入广告:例如,亚马逊正计划允许通过 Alexa 投放广告。除非我们愿意为这些所有的“免费”服务付费,否则我们需要清楚我们正在放弃什么,并且在愿意公开的和不愿意公开的信息之间做出选择。
### 谁是顾客?
关于数据的另一个问题一直在困扰着我们,它是所产生的数据量的直接结果。有许多组织一直在产生巨量的数据,包括公共组织,比如大学、医院或者政府部门 4而他们没有能力去储存这些数据。如果这些数据没有长久的价值也就没什么要紧的但事实正好相反随着处理大数据的工具不断被开发出来这些组织也认识到他们现在以及在不久的将来将能够去挖掘这些数据。
然而他们面临的问题是,随着数据的增长和存储能力的不足,他们该如何处理这些数据。幸运的是(我是带着讽刺意味使用这个词的 5大公司正在介入帮助他们。“把你们的数据给我们”他们说“我们将免费保存。我们甚至让你能够随时使用你所收集到的数据”这听起来很棒是吗这是大公司站在慈善的立场上、帮助公共组织管理他们收集到的关于我们的数据的一个极具代表性的例子。
不幸的是慈善不是唯一的理由。他们是附有条件的作为同意保存数据的交换条件这些公司得到了将数据访问权限出售非第三方的权利。你认为公共组织或者是被收集数据的人在数据被出售使用权使给第三方在他们如何使用上面能有发言权吗我将把这个问题当做一个练习留给读者去思考。7
### 开放和积极
然而并不只有坏消息。政府中有一项正在逐渐发展起来的“开放数据”运动,鼓励政府部门将他们的数据免费开放给公众或者其他组织。这项行动目前正在逐步立法。许多志愿组织,尤其是那些接受公共基金的组织,也正在开始推动同样的活动。甚至一些商业组织也表现出些许兴趣。而且,在技术上已经可行了,例如围绕差分隐私和多方安全计算的技术,正在使我们能够在不暴露太多个人信息的前提下挖掘数据集,而这个计算问题在历史上比你想象的要难处理得多。
这些对我们来说意味着什么呢?我之前在 Opensource.com 网站上写过关于[开源的共享福利][4]的文章,而且我越来越相信,我们需要把视野从软件拓展到其他领域:硬件、组织,以及和这次讨论有关的数据。让我们假设一下:你是 A 公司,要向另一家公司(客户 B提供一项服务。这里会涉及四种不同类型的数据 8
1. 数据完全开放:对 A 和 B 都是可得到的,世界上任何人都可以得到;
2. 数据是已知的、共享的、机密的A 和 B 可得到,但其他人不能得到;
3. 数据是公司级别上保密的A 公司可以得到,但 B顾客不能
4. 数据是顾客级别保密的B顾客可以得到但 A 公司不能。
首先,也许我们应该对数据更开放些,将数据默认放到选项 1 中。如果那些数据可以对所有人开放,它们在无人驾驶、语音识别、矿藏定位以及人口统计等方面将会有相当大的作用。9
如果我们能够找到方法,将处于选项 2、3 和 4 中的数据(或者至少其中的一部分)以选项 1 的方式利用起来,同时仍将细节保密,那不是很好吗?这正是研究这些新技术的希望所在。
然而,还有很长的路要走,所以不要太兴奋。同时,可以开始考虑将你的一些数据默认开放。
### 一些具体的措施
我们该如何处理数据的隐私和开放?下面是我想到的一些具体的措施:欢迎大家评论并做出更多的贡献。
* 检查你的组织是否正在认真严格的执行通用数据保护条例。如果没有,去推动实施它。
* 默认加密敏感数据(或者在适当的时候用散列算法处理),当不再需要的时候及时删掉。除非数据正在被处理使用,否则没有任何理由让数据以明文形式存在。
* 当你注册一个服务的时候考虑一下你公开了什么信息,特别是社交媒体类的。
* 和你的非技术朋友讨论这个话题。
* 教育你的孩子、你朋友的孩子以及他们的朋友。最好是去他们的学校,和他们的老师交谈,并在学校中做演示。
* 鼓励你工作或志愿服务的组织,或者与你有互动的组织,推动数据的默认开放。不要去想“为什么我要使数据开放”,而是从“我为什么不让数据开放”开始思考。
* 尝试去访问一些开放数据,挖掘并使用它。开发应用来使用它,进行数据分析,画漂亮的图表 10制作有趣的音乐或者考虑用它来做些别的事情。告诉那些组织你在使用它们的数据感谢它们并且鼓励它们做得更多。
1. 我承认你可能尽管不会
2. 假设你坚信你的个人数据应该被保护。
3. 如果你在思考“极好的”的寓意,在这点上你并不孤独。
4. 事实上这些机构能够有多开放取决于你所居住的地方。
5. 作为一个英国人,那是非常非常重的讽刺。
6. 他们可能是巨大的公司:没有其他人能够负担得起这么大的存储和基础架构来使数据保持可用。
7. 不,答案是“不”。
8. 尽管这个例子也同样适用于个人。看A 可能是 AliceB 可能是 Bob……
9. 当然,并不是说我们应该暴露个人数据,那类数据应该被保密。
10. 我的一个朋友觉得每当她接孩子放学的时候总是在下雨,所以为了避免确认偏误,她在整个学年都记录了天气信息,并制作了图表分享到社交媒体上。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/being-open-about-data-privacy
作者:[Mike Bursell][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者FelixYFZ](https://github.com/FelixYFZ)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mikecamel
[1]:https://en.wikipedia.org/wiki/Data_Privacy_Day
[2]:https://en.wikipedia.org/wiki/Information_wants_to_be_free
[3]:https://aliceevebob.wordpress.com/2017/06/06/helping-our-governments-differently/
[4]:https://opensource.com/article/17/11/commonwealth-open-source
[5]:http://www.outpost9.com/reference/jargon/jargon_40.html#TAG2036


@ -0,0 +1,44 @@
一位大学生对开源入门的反思
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_women_computing_2.png?itok=JPlR5aCA)
我刚刚完成大学二年级第一学期的课程,正在思考我在课堂上学到的东西。有一节课特别引起了我的注意:“[开源世界的基础][1]”,它由杜克大学的 Bryan Behrenshausen 博士讲授。我在最后一刻参加了这门课程,因为它看起来很有趣,说实话,也因为它符合我的日程安排。
第一天Behrenshausen 博士问我们学生是否知道或使用过任何开源程序。直到那一天,我几乎没有听说过[“开源”这个术语][2],当然也不知道任何属于该类别的产品。然而,随着学期的继续,我明白了:如果没有开源,我对事业抱负的激情就不会存在。
### Audacity 和 GIMP
我对技术的兴趣始于 12 岁。我负责为我的舞蹈团队剪辑音乐,我在网上搜索了几个小时,直到找到开源音频编辑器 Audacity。Audacity 为我敞开了大门:我不再局限于重复的八拍,开始接受其他想要独特演绎他们最喜爱歌曲的人的请求。
几周后,我偶然在互联网上看到了一只躯干是 Pop-Tart、身后拖着彩虹在太空飞行的猫。我搜索了“如何制作动图”发现了开源的图形编辑器 [GIMP][3],并用它为我兄弟做了一张《辛普森一家》的 GIF 作为生日礼物。
我萌芽的兴趣成长为彻底的痴迷:在我笨重、落后的笔记本上制作艺术品。由于我没有上好的炭笔、油彩或水彩,所以我用[图形设计][4]作为创意表达的方式。我花了几个小时在计算机实验室里通过 [W3Schools][5] 学习 HTML 和 CSS 的基础知识,以便用我稚嫩的 GIF 填充在线作品集。几个月后,我在 [WordPress][6] 发布了我的第一个网站。
### 为什么开源
开源让我们不仅可以实现我们的目标,还可以发现驱动这些目标的兴趣。
快进近十年。虽然有些仍然保持一致,但许多事情已经发生了变化:我仍然在制作图形(主要是传单),为舞蹈团编辑音乐,以及设计网站(我希望更时尚、更有效)。我使用的产品经历了无数版本升级。但最戏剧性的变化是我对开源资源的态度。
开源产品在我的生活中的意义,使我更加珍视开源运动和它的使命。开源项目提醒我,科技领域有些倡议可以促进社会公益和自我学习,而不会把那些不具有社会经济优势的人排斥在外。我中学时的自己,像无数其他人一样,买不起 Adobe Creative Suite、GarageBand 或 Squarespace。开源平台使我们不仅能够实现我们的目标而且还能通过扩大我们的接触面来发现驱动这些目标的兴趣。
我的建议?一时心血来潮报名上课。它可能会改变你对世界的看法。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/3/college-getting-started
作者:[Christine Hwang][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/christinehwang
[1]:https://ssri.duke.edu/news/new-course-explores-open-source-principles
[2]:https://opensource.com/node/42001
[3]:https://www.gimp.org/
[4]:https://opensource.com/node/30251
[5]:https://www.w3schools.com/
[6]:https://opensource.com/node/31441


@ -1,103 +0,0 @@
# [探秘“栈”之旅I][1]
早些时候,我们讲解了[“剖析内存中的程序之秘”][2],欣赏了在一台电脑中是如何运行我们的程序的。今天,我们去探索栈的调用,它在大多数编程语言和虚拟机中都默默地存在。在此过程中,我们将接触到一些平时很难见到的东西,像闭包closure、递归、以及缓冲区溢出等等。但是我们首先要做的事情是描绘出栈是如何运作的。
栈非常重要,因为它追踪着一个程序中运行的函数,而函数又是一个软件的重要组成部分。事实上,程序的内部操作都是非常简单的,大部分是由函数向栈中推入数据或者从栈中弹出数据的相互调用组成的;而需要跨函数调用保持的数据,则在堆上分配内存。不论是低级low-level的 C 软件,还是像 JavaScript 和 C# 这样的基于虚拟机的语言,它们都是这样的。而对这些行为的深刻理解,对排错、性能调优以及大概了解究竟发生了什么是非常重要的。
当一个函数被调用时会创建一个栈帧stack frame去支持函数的运行。这个栈帧包含函数的局部变量和调用者传递给它的参数也包含了允许被调用的函数callee安全返回给其调用者的内部事务信息。栈帧的精确内容和结构因处理器架构和函数调用规则而不同。在本文中我们以 Intel x86 架构和使用 C 风格函数调用cdecl的栈为例。下图是一个处于栈顶部的单个栈帧
![](https://manybutfinite.com/img/stack/stackIntro.png)
在图上的场景中,有三个 CPU 寄存器指向栈。栈指针 `esp`(译者注:扩展栈指针寄存器)指向栈的顶部。栈的顶部总是被最后一个推入到栈且还没有弹出的东西所占据,就像现实世界中堆在一起的一叠盘子或者一沓 100 美元大钞一样。
保存在 `esp` 中的地址始终在变化着,因为栈中的东西不停被推入和弹出,而它总是指向栈中的最后一个推入的东西。许多 CPU 指令的一个副作用就是自动更新 `esp`,离开寄存器而使用栈是行不通的。
在 Intel 的架构中,绝大多数情况下,栈的增长是向着低位内存地址的方向。因此,这个“顶部”在包含数据(在这种情况下,包含的数据是 `local_buffer`)的栈中是处于低位的内存地址。注意,从 `esp` 指向 `local_buffer` 的箭头不是随意连接的。这个箭头代表着事务:它专门指向由 `local_buffer` 所拥有的第一个字节,因为,那是一个保存在 `esp` 中的精确地址。
第二个跟踪栈的寄存器是 `ebp`译者注扩展基址指针寄存器它包含一个基指针base pointer或者称为帧指针frame pointer。它指向当前运行的函数的栈帧内的一个固定位置并且为参数和局部变量的访问提供一个稳定的参考点基址。仅当开始或者结束调用一个函数时`ebp` 的内容才会发生变化。因此,我们可以很容易地访问栈中从 `ebp` 开始偏移后的每个东西。如下图所示。
不像 `esp``ebp` 大多数情况下是在程序代码中维护的,只花费很少的 CPU 开销。有时候,完全抛弃 `ebp` 会有一些性能优势,可以通过 [编译标志][3] 来做到这一点。Linux 内核就是一个这样做的例子。
最后,`eax`(译者注:扩展的 32 位通用数据寄存器)寄存器按照调用规则,惯例上被用来向调用者传递大多数 C 数据类型的返回值。
现在,我们来看一下在我们的栈帧中的数据。下图清晰地按字节展示了栈帧的内容,就像你在一个调试器中所看到的内容一样,内存是从左到右、从顶部至底部增长的,如下图所示:
![](https://manybutfinite.com/img/stack/frameContents.png)
局部变量 `local_buffer` 是一个字节数组它包含一个空终止null-terminated的 ASCII 字符串,这是 C 程序中的一个基本元素。这个字符串可以是从任意地方读取的,例如,来自键盘输入或者来自一个文件,它只有 7 个字节的长度。因为 `local_buffer` 只能保存 8 个字节,所以在它的左侧保留了 1 个未使用的字节。这个字节的内容是未知的,因为栈的推入和弹出是极其活跃的,除了你写入的之外,你从不知道内存中保存了什么。C 编译器并不为栈帧初始化内存,所以它的内容是未知的并且是随机的,除非是你自己写入。这使得一些人对此很困惑。
再往上走,`local1` 是一个 4 字节的整数,并且你可以看到每个字节的内容。它看起来像一个很大的数字,所有的零都排在 8 的后面,这可能会让你误入歧途。
Intel 处理器是按从小到大(小端)的机制来处理的,这表示在内存中的数字也是从小的一端开始的。因此,在一个多字节数字中,最小的有效字节在内存中处于低端地址。因为一般情况下数字是从左边开始显示的,这背离了我们一般意义上对数字的认识。我们讨论的这种从小到大的机制使我想起《格列佛游记》:就像小人国的人吃鸡蛋是从小头开始的一样Intel 处理器处理它们的数字也是从字节的小端开始的。
因此,`local1` 事实上只保存了数字 8就像一只章鱼的腿的数目。然而`param1` 在第二个字节的位置有一个值 2因此它的数学上的值是 2 * 256 = 512我们与 256 相乘是因为,每个位置的取值范围都是从 0 到 255。同理`param2` 承载的数值是 1 * 256 * 256 = 65536。
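下面是一个示意性的小例子,借助 Python 的 struct 模块(而不是文中的 C 调试环境)来验证这种从小到大(小端)的解释方式;其中的字节序列是按文中 `param1``param2` 的取值构造的假设数据:

```
import struct

# 小端('<')格式下,低位字节在前。
# param1第二个字节是 2其余为 0因此值为 2 * 256 = 512
param1 = struct.unpack('<i', b'\x00\x02\x00\x00')[0]

# param2第三个字节是 1其余为 0因此值为 1 * 256 * 256 = 65536
param2 = struct.unpack('<i', b'\x00\x00\x01\x00')[0]

print(param1, param2)  # 512 65536
```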
这个栈帧的内部数据由两个重要的部分组成:前一个栈帧的基址(前一个 `ebp` 的值),以及函数返回后要继续运行的指令的地址(返回地址)。它们一起确保了函数能够正常返回,从而使程序可以继续正常运行。
现在,我们来看一下栈帧是如何产生的,并建立一个它们如何共同工作的内部蓝图。在刚开始的时候,栈的增长是非常令人困惑的,因为它与你所期望的正好相反。例如,要在栈上分配 8 个字节,就要从 `esp` 中减去 8用减法来分配内存本身看起来就是一种奇怪的方式。
我们来看一个简单的 C 程序:
```
/* Simple Add Program - add.c */
int add(int a, int b)
{
int result = a + b;
return result;
}
int main(int argc)
{
int answer;
answer = add(40, 2);
}
```
假设我们在 Linux 中不使用命令行参数去运行它。当你运行一个 C 程序时,真正首先运行的代码是 C 运行时库的代码,由它来调用我们的 `main` 函数。下图展示了程序运行时每一步都发生了什么。每个图都链接了相应的 GDB 输出,展示了内存和寄存器的状态。你也可以看到所使用的 [GDB 命令][4],以及整个 [GDB 输出][5]。如下:
![](https://manybutfinite.com/img/stack/mainProlog.png)
第 2 步和第 3 步,以及下面的第 4 步,都是函数的开端,几乎所有的函数都是这样的:先把 `ebp` 的当前值推入到栈的顶部保存,然后,将 `esp` 的内容拷贝到 `ebp`,建立一个新帧。`main` 的开端和任何一个其它函数都是一样,但是,不同之处在于,当程序启动时 `ebp` 被清零。
如果你去检查栈中整型变量argc下面的内容你将找到更多的数据包括指向程序名和命令行参数传统的 C 参数数组)的指针、指向 Unix 环境变量以及它们真实内容的指针。但是,在这里这些并不是重点,因此,继续向前调用 `add()`
![](https://manybutfinite.com/img/stack/callAdd.png)
`main``esp` 减去 12 之后得到它所需的栈空间,它为 a 和 b 设置值。在内存中值展示为十六进制,并且是从小到大的格式。与你从调试器中看到的一样。一旦设置了参数值,`main` 将调用 `add` ,并且它开始运行:
![](https://manybutfinite.com/img/stack/addProlog.png)
现在,有一点小激动!我们进入了另一个函数开端,在这里你可以清楚地看到栈帧是如何通过 `ebp` 构成一个链表进入到栈的。这就是调试器和高级语言中的异常对象对它们的栈进行跟踪的方法。当一个新帧产生时,你也可以再次看到这种从 `ebp``esp` 的典型交接。我们再次从 `esp` 中做减法得到更多的栈空间。
`ebp` 寄存器的值拷贝到内存时,这里也有一个稍微有些怪异的地方。在这里发生的奇怪事情是,寄存器并没有真的按字节顺序拷贝:因为对于内存,没有像寄存器那样的“增长的地址”。因此,通过调试器的规则以最自然的格式给人展示了寄存器的值:从最重要的到最不重要的数字。因此,这个在从小到大的机制中拷贝的结果,与内存中常用的从左到右的标记法正好相反。我想用图去展示你将会看到的东西,因此有了下面的图。
在比较难懂的部分,我们增加了注释:
![](https://manybutfinite.com/img/stack/doAdd.png)
这里使用了一个临时寄存器来帮你完成加法,因此没有什么意外或者惊喜。与函数开端相对应的,栈还有一个正好相反的收尾动作,我们留到下次再讲。
对于任何读到这篇文章的人都应该有一个小礼物,因此,我做了一个大的图表展示了 [组合到一起的所有步骤][6]。
一旦把它们全部布置好了,看上起似乎很乏味。这些小方框给我们提供了很多帮助。事实上,在计算机科学中,这些小方框是主要的展示工具。我希望这些图片和寄存器的移动能够提供一种更直观的构想图,将栈的增长和内存的内容整合到一起。从软件的底层运作来看,我们的软件与一个简单的图灵机器差不多。
这就是我们栈探秘的第一部分,再讲一些内容之后,我们将看到构建在这个基础上的高级编程的概念。下周见!
--------------------------------------------------------------------------------
via:https://manybutfinite.com/post/journey-to-the-stack/
作者:[Gustavo Duarte][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:http://duartes.org/gustavo/blog/about/
[1]:https://manybutfinite.com/post/journey-to-the-stack/
[2]:https://manybutfinite.com/post/anatomy-of-a-program-in-memory
[3]:http://stackoverflow.com/questions/14666665/trying-to-understand-gcc-option-fomit-frame-pointer
[4]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-commands.txt
[5]:https://github.com/gduarte/blog/blob/master/code/x86-stack/add-gdb-output.txt
[6]:https://manybutfinite.com/img/stack/callSequence.png


@ -1,72 +1,71 @@
用 Ansible 实现网络自动化
================
> 了解 Ansible 的功能,这是一个无代理的、可扩展的配置管理系统。
### 网络自动化
随着 IT 行业的技术变化从服务器虚拟化到公有和私有云以及自服务能力、容器化应用、平台即服务Paas交付一直以来落后的一个领域是网络。
随着 IT 行业的技术变化,从服务器虚拟化到公有和私有云以及自服务能力、容器化应用、平台即服务PaaS交付有一直以来落后的一个领域就是网络。
在过去的五年多网络行业似乎有很多新的趋势出现它们中的很多被归入到软件定义网络SDN
>注意
> SDN 是新出现的一种构建、管理、操作和部署网络的方法。SDN 最初的定义是需要将控制层和数据层(包转发)物理分离,并且,解耦合的控制层必须管理好各自的设备。
> SDN 是新出现的一种构建、管理、操作和部署网络的方法。SDN 最初的定义是出于将控制层和数据层(包转发)物理分离的需要,并且,解耦合的控制层必须管理好各自的设备。
> 如今,_在 SDN_ 旗下已经有许多技术,包括基于<ruby>控制器的网络<rt>controller-based networks</rt></ruby>、网络设备上的 API、网络自动化、白盒交换机、策略网络化、网络功能虚拟化NFV等等。
> 如今,在 SDN 旗下已经有许多技术,包括<ruby>基于控制器的网络<rt>controller-based networks</rt></ruby>、网络设备上的 API、网络自动化、白盒交换机、策略网络化、网络功能虚拟化NFV等等。
> 由于这篇报告的目的,我们参考 SDN 的解决方案作为我们的解决方案,其中包括一个网络控制器作为解决方案的一部分,并且提升了该网络的可管理性,但并不需要从数据层解耦控制层。
这些趋势的之一是,网络设备的 API 作为管理和操作这些设备的一种方法而出现,真正地提供了机器对机器的通讯。当需要自动化和构建网络应用时 API 简化了开发过程,在数据如何建模时提供了更多结构。例如,当启用 API 的设备在 JSON/XML 中返回数据时,它是结构化的,并且比返回原生文本信息、需要手工去解析的仅支持命令行的设备更易于使用。
这些趋势的之一是,网络设备的 API 作为管理和操作这些设备的一种方法而出现,真正地提供了机器对机器的通讯。当需要自动化和构建网络应用时 API 简化了开发过程,在数据如何建模时提供了更多结构。例如,当启用 API 的设备以 JSON/XML 返回数据时,它是结构化的,并且比返回原生文本信息、需要手工去解析的仅支持命令行的设备更易于使用。
在 API 之前用于配置和管理网络设备的两个主要机制是命令行接口CLI和简单网络管理协议SNMP。让我们来了解一下它们CLI 是一个设备的人机界面,而 SNMP 并不是为设备提供的实时编程接口。
幸运的是,因为很多供应商争相为设备增加 API有时候 _是因为_ 它被放到需求建议书RFP就带来了一个非常好的副作用 —— 支持网络自动化。当真正的 API 发布时访问设备内数据的过程以及管理配置就会被极大简化因此我们将在本报告中对此进行评估。虽然使用许多传统方法也可以实现自动化比如CLI/SNMP。
幸运的是,因为很多供应商争相为设备增加 API有时候 _是因为_ 它被放到需求建议书RFP就带来了一个非常好的副作用 —— 支持网络自动化。当真正的 API 发布时访问设备内数据的过程以及管理配置就会被极大简化因此我们将在本报告中对此进行评估。虽然使用许多传统方法也可以实现自动化比如CLI/SNMP。
> 注意
> 随着未来几个月或几年的网络设备更新,供应商的 API 无疑应该被测试,并且要做为采购网络设备(虚拟和物理)的关键决策标准。如果供应商提供一些库或集成到自动化工具中,或者如果被用于一个开放的标准/协议用户应该知道数据是如何通过设备建模的API 使用的传输类型是什么。
> 随着未来几个月或几年的网络设备更新,供应商的 API 无疑应该被做为采购网络设备(虚拟和物理)的关键决策标准而测试和使用。如果供应商提供一些库或集成到自动化工具中,或者如果被用于一个开放的标准协议用户应该知道数据是如何通过设备建模的API 使用的传输类型是什么。
总而言之,网络自动化,像大多数的自动化类型一样,是为了更快地工作。工作的更快是好事,降低部署和配置改变的时间并不总是许多 IT 组织需要去解决的问题。
总而言之,网络自动化,像大多数类型的自动化一样,是为了更快地工作。工作的更快是好事,降低部署和配置改变的时间并不总是许多 IT 组织需要去解决的问题。
包括速度,我们现在看看这些各种类型的 IT 组织逐渐采用网络自动化的几种原因。你应该注意到,同样的原则也适用于其它类型的自动化。
包括速度在内,我们现在看看这些各种类型的 IT 组织逐渐采用网络自动化的几种原因。你应该注意到,同样的原则也适用于其它类型的自动化。
### 简化架构
今天,每个网络都是一个独特的“雪花”型,并且,网络工程师们为能够解决传输和网络应用问题而感到自豪,这些问题最终使网络不仅难以维护和管理,而且也很难去实现自动化。
今天,每个网络都是一片独特的“雪花”,并且,网络工程师们为能够通过一次性的改变来解决传输和网络应用问题而感到自豪,而这最终导致网络不仅难以维护和管理,而且也很难去实现自动化。
需要从一开始就包含到新的架构和设计中去部署,而不是去考虑网络自动化和管理作为一个二级或三级项目。哪个特性可以跨不同的供应商工作哪个扩展可以跨不同的平台工作当使用特别的网络设备平台时API 类型或者自动化工程是什么?当这些问题在设计程之前得到答案,最终的架构将变成简单的、可重复的、并且易于维护 _和_ 自动化的,在整个网络中将很少启用供应商专用的扩展。
网络自动化和管理需要从一开始就包含到新的架构和设计中去部署而不是作为一个二级或三级项目。哪个特性可以跨不同的供应商工作哪个扩展可以跨不同的平台工作当使用特别的网络设备平台时API 类型或者自动化工程是什么?当这些问题在设计程之前得到答案,最终的架构将变成简单的、可重复的、并且易于维护 _和_ 自动化的,在整个网络中将很少启用供应商专用的扩展。
### 确定的结果
在一个企业组织中改变审查会议change review meeting去评估即将到来的网络上的变化、它们对外部系统的影响、以及回滚计划。在这个世界上人们为这些即 _将到来的变化_ 去接触 CLI输入错误的命令造成的影响是灾难性的。想像一下一个有三位、四位、五位、或者 50 位工程师的团队。每位工程师应对 _即将到来的变化_ 都有他们自己的独特的方法。并且,在管理这些变化的期间,使用一个 CLI 或者 GUI 的能力并不会消除和减少出现错误的机率。
使用经过验证和测试过的网络自动化可以帮助实现更多的可预测行为,并且使执行团队有更好的机会实现确实性结果,在保证任务没有人为错误的情况下首次正确完成的道路上更进一步。
在一个企业组织中,<ruby>改变审查会议<rt>change review meeting</rt></ruby>会评估面临的网络变化、它们对外部系统的影响、以及回滚计划。在人们通过 CLI 来执行这些 _面临的变化_ 的世界上,输入错误的命令造成的影响是灾难性的。想像一下,一个有 3 位、4 位、5位或者 50 位工程师的团队。每位工程师应对 _面临的变化_ 都有他们自己的独特的方法。并且,在管理这些变化的期间,一个人使用 CLI 或者 GUI 的能力并不会消除和减少出现错误的机率。
使用经过验证和测试过的网络自动化可以帮助实现更多的可预测行为,并且使执行团队更有可能实现确实性结果,在保证任务没有人为错误的情况下首次正确完成的道路上更进一步。
### 业务灵活性
不用说,网络自动化不仅为部署变化提供速度和灵活性,而且使得根据业务需要去从网络设备中检索数据的速度变得更快。自从服务器虚拟化实现以后,服务器和虚拟化使得管理员有能力在瞬间去部署一个新的应用程序。而且,更多的快速部署应用程序的问题出现在,配置一个 VLAN虚拟局域网、路由器、FW ACL防火墙的访问控制列表、或者负载均衡策略需要多长时间
不用说,网络自动化不仅为部署变化提供速度和灵活性,而且使得根据业务需要去从网络设备中检索数据的速度变得更快。自从服务器虚拟化到来以后,服务器和虚拟化使得管理员有能力在瞬间去部署一个新的应用程序。而且,随着应用程序可以更快地部署,随之浮现的问题是为什么还需要花费如此长的时间配置一个 VLAN虚拟局域网、路由器、FW ACL防火墙的访问控制列表或者负载均衡策略呢
在一个组织内通过去熟悉大多数的通用工作流和 _为什么_ 网络改变是真实的需求?新的部署过程自动化工具,如 Ansible 将使这些变得非常简单。
这一章将介绍一些关于为什么应该去考虑网络自动化的高级知识点。在下一节,我们将带你去了解 Ansible 是什么,并且继续深入了解各种不同规模的 IT 组织的网络自动化的不同类型。
通过了解在一个组织内最常见的工作流和 _为什么_ 真正需要改变网络,部署如 Ansible 这样的现代的自动化工具将使这些变得非常简单。
这一章介绍了一些关于为什么应该去考虑网络自动化的高级知识点。在下一节,我们将带你去了解 Ansible 是什么,并且继续深入了解各种不同规模的 IT 组织的网络自动化的不同类型。
### 什么是 Ansible
Ansible 是存在于开源世界里的一种最新的 IT 自动化和配置管理平台。它经常被拿来与其它工具如 Puppet、Chef和 SaltStack 去比较。Ansible 作为一个由 Michael DeHaan 创建的开源项目出现于 2012 年Michael DeHaan 也创建了 Cobbler 和 cocreated Func它们在开源社区都非常流行。在 Ansible 开源项目创建之后不足 18 个月时间, Ansilbe 公司成立,并收到了 $6 million 的一系列资金。它成为并一直保持着第一的贡献者和 Ansible 开源项目的支持者。在 2015 年 10 月Red Hat 获得了 Ansible 公司。
Ansible 是存在于开源世界里的一种最新的 IT 自动化和配置管理平台。它经常被拿来与其它工具如 Puppet、Chef 和 SaltStack 去比较。Ansible 作为一个由 Michael DeHaan 创建的开源项目出现于 2012 年Michael DeHaan 也创建了 Cobbler并联合创建了 Func它们在开源社区都非常流行。在 Ansible 开源项目创建之后不足 18 个月时间Ansible 公司成立,并收到了六百万美金 A 轮投资。该公司成为 Ansible 开源项目排名第一的贡献者和支持者,并一直保持着。在 2015 年 10 月Red Hat 收购了 Ansible 公司。
但是Ansible 到底是什么?
_Ansible 是一个无需代理和可扩展的超级简单的自动化平台。_
让我们更深入地了解它的细节,并且看一看 Ansible 的属性,它帮助 Ansible 在行业内获得大量的吸引力traction
让我们更深入地了解它的细节,并且看一看那些使 Ansible 在行业内获得广泛认可的属性。
### 简单
Ansible 的其中一个吸引人的属性是,使用它你 _不_ 需要特定的编程技能。所有的指令,或者任务都是自动化的,在一个标准的、任何人都可以理解的人类可读的数据格式的一个文档中。在 30 分钟之内完成安装和自动化任务的情况并不罕见!
Ansible 的其中一个吸引人的属性是,使用它你 _不_ 需要特定的编程技能。所有的指令,或者说任务,都以一种标准的、任何人都可以理解的人类可读的数据格式写成文档。在 30 分钟之内完成安装和自动化任务的情况并不罕见!
例如,下列来自一个 Ansible <ruby>剧本<rt>playbook</rt></ruby>的任务用于确保一个 VLAN 存在于 Cisco Nexus 交换机中:
```
- nxos_vlan: vlan_id=100 name=web_vlan
```
你无需熟悉或写任何代码就可以明确地看出它将要做什么!
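为了有个更完整的印象,下面是一个假设性的最小剧本骨架,用来展示上面那行任务在一个剧本文件中通常所处的上下文(其中的剧本名称和主机组名 `nxos-switches` 都只是示例,并非来自原文):

```yaml
# 一个假设性的最小剧本文件;示例中的 hosts 值需要在你的 inventory 中定义
- name: ensure web vlan exists        # 示例剧本名
  hosts: nxos-switches                # 示例主机组名
  gather_facts: no
  tasks:
    - nxos_vlan: vlan_id=100 name=web_vlan
```

运行 `ansible-playbook` 并指定这个文件,即可执行其中的任务。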
> 注意
> 这个报告的下半部分涉及 Ansible 术语(<ruby>剧本<rt>playbook</rt></ruby><ruby>行动<rt>play</rt></ruby><ruby>任务<rt>task</rt></ruby><ruby>模块<rt>module</rt></ruby>等等)的细节。在我们使用 Ansible 进行网络自动化的过程中,还会结合一些简短的示例来解释这些关键概念。
### 无代理

机器人学影响 CIO 角色的 3 种方式
======
![配图](https://enterprisersproject.com/sites/default/files/styles/620x350/public/cio_ai.png?itok=toMIgELj)
随着 2017 即将结束,许多 CIO 们的 2018 年目标也将确定。或许你们将参与到机器人流程自动化RPA中。多年以来RPA 对许多公司来说只是一个可望不可及的概念。但是随着组织被迫变得越来越敏捷高效RPA 所具有的潜在优势开始受到重视。
根据 Redwood Software 和 Sapio Research 的最新 [研究报告][1]IT 决策者们相信,未来 5 年有 59% 的业务可以被自动化处理,从而带来新的速度和效率,并消减相应的重复性人工工作量。但是,目前尚未在相应岗位实施 RPA 的公司中,有 20% 的公司员工超过 1000 人。
对于 CIO 们来说RPA 会潜在影响你在业务中的角色和你的团队。随着 RPA 的地位在 2018 年中越来越重要CIO 们和其它 IT 决策者的角色也可能会以如下三种方式发生改变:
### 成为战略性变革代理人的机会增加
随着用更少的时间做更多事情的压力不断增长,内部流程变得越来越重要。在每个企业中,员工们每天都在做一些既重要又枯燥的任务。这些任务可能枯燥且重复,但又要求他们必须快速完成且不能出错。
**[有关你的自动化策略的 ROI 建议,查看我们相关的文章,[如何在自动化方面提升 ROI4 个小提示][2]。]**
从财务部门的后台操作,到采购、供应链、账务、客户服务、以及人力资源,在一个组织中几乎所有的岗位都有一些枯燥的任务。这就为 CIO 们提供了一个机会,将 IT 与业务联合起来,成为使用 RPA 进行战略变革的先锋。
除了过去的屏幕抓取技术,机器人现在已经实现可定制化,即插即用的解决方案可以根据企业的需要进行设计。使用这种以流程为中心的方法,企业不仅可以将以前由人来完成的任务进行自动化,也可以将应用程序和系统特定任务自动化,比如 ERP 和企业其它应用程序。
为端到端的流程实施更高级别的自动化将是 CIO 们的价值所在。CIO 们将会站在这种机遇的最前沿。
### 重新关注人和培训
技术的变动将让员工更加恐慌尤其是当这些自动化的变动涉及到他们每日职责的相当大的部分。CIO 们应该清楚地告诉他们RPA 将如何让他们的角色和责任变得更好,以及用数据说明的、最终将影响到他们底线的战略决策。
当实施 RPA 时,清晰明确地表达出“人对组织的成功来说是很重要的”信息是很关键的,并且,这种成功需要使技术和人的技能实现适当的平衡。
CIO 们也应该分析工作流并实施更好的流程这种流程能够超越以前由终端用户完成的特定任务。通过端到端的流程自动化CIO 们可以让员工表现更加出色。
因为在整个自动化的过程中提升和培训员工技术是很重要的CIO 们必须与企业高管们一起去决定,帮助员工自信地应对变化的培训项目。
### 需要长远的规划
为确保机器人流程自动化的成功,组织必须采取一种长期的方法。这要求一种弹性的解决方案,它将对整个业务模型有好处,也包括消费者。比如,当年亚马逊为它的 Prime 会员推广快速配送服务时,他们不仅在仓库中重新调整了订单交付流程,也将在线客户体验自动化,让其变得更简化、更快速,让消费者比以前更容易下订单。
在即将到来的一年里CIO 们可以用同样的方法来应用技术,构建整体的解决方案来改变组织的运作方式。单纯裁员对最终效益的改善终归有限,而流程自动化将使 CIO 们能够通过优化和授权来考虑更大的问题。这种方法让 CIO 们为他们自己和 RPA 构建长期的信誉。这反过来将强化 CIO 们作为探路者的角色,并为整个组织的成功作出贡献。
对于 CIO 们来说,采用一个长期的、策略性的方法让 RPA 成功,需要时间和艰苦的努力。尽管如此,那些致力于创建一种人力和技术平衡战略的 CIO 们,将会在现在和将来实现更大的价值。
--------------------------------------------------------------------------------
via: https://enterprisersproject.com/article/2017/11/3-ways-robotics-affects-cio-role
作者:[Dennis Walsh][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://enterprisersproject.com/user/dennis-walsh
[1]:https://www.redwood.com/press-releases/it-decision-makers-speak-59-of-business-processes-could-be-automated-by-2022/
[2]:https://enterprisersproject.com/article/2017/11/how-improve-roi-automation-4-tips?sc_cid=70160000000h0aXAAQ

从专有到开源的十个简单步骤
======
"开源软件的确不是很安全,因为每个人都能使用它,而且他们能够随意的进行编译并且用他们自己写的不好的东西进行替换。"举手示意谁之前听说过这个1
当我和客户讨论的时候(是的,他们有时候会让我和客户交谈),经常会听到这种说法2。在前一篇文章《[许多人的评论并不一定能防止错误代码][1]》中我谈论过开源软件尤其是安全软件这块并没有如外界所说的那样一定比专有软件更安全但是和专有软件比起来我每次还是更青睐开源软件。而我听到的这种“开源软件不是很安全”的说法表明有时候仅仅解释开源需要工作投入是不够的我们还需要积极地参与辩护。
我并不期望能够达到牛顿或者维特根斯坦的逻辑水平,但是我会尽我所能,而且我会在结尾做个总结,如果你感兴趣的话可以去快速的浏览一下。
### 关键因素
首先,我们必须明白,没有任何一款软件是绝对安全的,无论是专有软件还是开源软件。第二,我们应该承认,确实存在一些很不错的专有软件。第三,也存在一些不好的开源软件。第四,有很多优秀的、很有天赋的、专业的架构师、设计师和软件工程师在设计开发专有软件。
但也有些摩擦:第五点,从事专有软件的人员是有限的,而且你不可能总是能够雇佣到最好的员工。即使在政府部门或者公共组织--他们拥有丰富的人才资源池,但在安全应用这块,他们的人才也是有限的。
第六点,开发、测试、改进开源软件的人是无限的,而且还包含最好的人才。第七点(也是我最喜欢的一点),这群人中也包含许多编写专有软件的人才。第八点,许多政府或者公共组织开发的软件也都逐渐开源了。
第九,如果你担心你正在运行的软件不被支持或者来源不明,好消息是:有一批组织会检查软件代码的来源,提供支持和补丁更新。他们会按照专有软件的模式来运营开源软件:他们的技术标准就是对其签署认证,以便你可以验证你正在运行的开源软件不是来源不明或者恶意的软件。
第十点(也是这篇文章的重点),当你运行、测试开源软件,在问题上进行反馈,发现问题并且报告的时候,你正在为公共福利贡献知识、专业技能以及经验,这就是开源,它正因为你所做的这些而变得更好。如果你是以个人身份参与,或者通过提供支持的商业组织参与,你都已经成为了这个共同体的一部分。开源让软件变得越来越好,你可以看到它们的变化。没有什么是隐藏封闭的,它是完全开放的。事情会变坏吗?是的,但是我们能够及时发现问题并且修复。
这种共享福利并不适用于专有软件:被隐藏起来的东西是无法照亮这个丰富世界的。
我知道,作为一个英国人,在使用“联邦”这个词的时候要小心谨慎;它和帝国联系在一起,但我所表达的不是这个意思。它也不是克伦威尔使用这个词时所表述的意思,无论如何,他是一个有争议的历史人物。我所表达的意思是,这个词与“共同”和“福利”相联系,这里的福利不是指钱,而是全人类都能拥有的福利。
我真的很相信这一点。如果你想从这篇文章中得到一些虔诚的信息的话,那应该是第十条:共享福利是我们的遗产、我们的经验、我们的知识、我们的责任。共享福利是全人类都能拥有的。我们共同拥有它,而且它是一笔无法估量的财富。
### 便利贴
1. (几乎)没有一款软件是完美无缺的。
2. 有很好的专有软件。
3. 有不好的专有软件。
4. 有聪明、有才能、专注的人开发专有软件。
5. 从事开发完善专有软件的人是有限的,即使在政府或者公共组织也是如此。
6. 相对来说从事开源软件的人是无限的。
7. …而且包括很多从事专有软件的人才。
8. 政府和公共组织的人经常开源它们的软件。
9. 有商业组织会为你的开源软件提供支持。
10. 贡献,哪怕只是使用,也是在为开源软件做贡献。
1 好的,现在你可以把手放下了。
2 这个词应该大写吗?它算不算一个专门的领域?我不确定。
3 我拥有英国文学和神学的学位,我的老读者们对此大概不会感到意外。4
4 我希望,这不是因为我谈论了太多神学,5 而是因为文章里经常充满冗长的、不相关的人文学科(美式英语:“文科”)的引用。
5 Emacs。每次都是。
6 甚至 Emacs 也不行。是的,我知道有一些技术可以证明某些软件的正确性。(我怀疑 Emacs 通不过其中的大多数……)
7 我先举手:我就职于其中之一,[Red Hat][3]。去看看吧,那是一个很有趣的工作地方,而且[我们通常都在招聘][4]。
8 当然,前提是他们完全遵守了所使用的开源许可证的规则。
9 就是那位曾经的“英格兰、苏格兰和爱尔兰护国公”,那位克伦威尔。
10 哦,还有,显然要选择 Emacs而不是 Vi 的各种变体。
本文最初发表于 [Alice, Eve, and Bob - a security blog][5],并经许可重新发布。
--------------------------------------------------------------------------------
via: https://opensource.com/article/17/11/commonwealth-open-source
作者:[Mike Bursell][a]
译者:[FelixYFZ](https://github.com/FelixYFZ)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/mikecamel
[1]:https://opensource.com/article/17/10/many-eyes
[2]:https://en.wikipedia.org/wiki/Apologetics
[3]:https://www.redhat.com/
[4]:https://www.redhat.com/en/jobs
[5]:https://aliceevebob.com/2017/10/24/the-commonwealth-of-open-source/

5 个理由,开源助你求职成功
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/resume_career_document_general.png?itok=JEaFL2XI)
你正在在繁华的技术行业中寻找工作吗?无论你是寻找新挑战的技术团体老手,还是正在寻找第一份工作的毕业生,参加开源项目都是可以让你在众多应聘者中脱颖而出的好方法。以下是从事开源项目工作可以增强你求职竞争力的五个理由。
### 1. 获得项目经验
或许从事开源项目工作能带给你的最明显的好处是提供了项目经验。如果你是一个学生,你可能没有很多实质上的项目在你的简历中展示。如果你还在工作,由于保密限制,或者你对正在完成的任务不感兴趣,你不能或者不能很详细的讨论你当前的项目。无论那种情况,找出并参加那些有吸引力的,而且又正好可以展现你的技能的开源项目,无疑对求职有帮助。这些项目不仅在众多简历中引人注目,而且可以是面试环节中完美的谈论主题。
另外,很多开源项目托管在公共仓库(比如 [Github][1])上,所以对任何想参与其中的人,获取这些项目的源代码都异常简单。同时,你对项目的公开代码贡献,也能很方便地被招聘单位或者潜在雇主找到。开源项目让你能以一种更实际的方式展现你的技能,而不是仅仅在面试中纸上谈兵。
### 2. 学会提问
开源项目团体的新成员总会有机会去学习大量的新技能。他们肯定会发现特定项目的多种交流方式,结构层次,文档格式,和其他的方方面面。在刚刚参与到项目中时,你需要问大量的问题,才能找准自己的定位。正如俗语说得好,没有愚蠢的问题。开源社区提倡好奇心,特别是在问题答案不容易找到的时候。
在从事开源项目工作初期,对项目的不熟悉感会驱使个人去提问,去经常提问。这可以帮助参与者学会提问。学会去分辨问什么,怎么问,问谁。学会提问在找工作,[面试][2],甚至生活中都非常有用。解决问题和寻求帮助的能力在人才市场中都非常重要。
### 3. 获取新的技能与持续学习
大量的软件项目同时使用很多不同的技术。很少有贡献者可以熟悉项目中的所有技术。即使已经在项目中工作了一段时间后,很多人很可能也不能对项目中所用的所有技术都熟悉。
虽然一个开源项目中的老手可能会对项目的一些特定的方面不熟悉,但是新手不熟悉的显然更多。这种情况产生了大量的学习机会。在一个人刚开始从事开源工作时,可能只是去提高项目中的一些小功能,甚至很可能是在他熟悉的领域。但是以后的旅程就大不相同了。
从事项目的某一方面的工作可能会把你带进一个不熟悉的领域,可能会驱使你开始新的学习。而从事开源项目的工作,可能会把你带向一个你以前可能从没用过的技术。这会激起新的激情,或者,至少促进你继续学习([这正是雇主渴望具备的能力][3])。
### 4.增加人脉
开源项目由各种各样的社区维护和支持。一些人在他们的业余时间进行开源工作,他们都有各自的经历、兴趣和人脉。正如他们所说,“你认识什么人决定你成为什么人”。如果不参与开源项目,可能你永远不会遇到某些特定的人。或许你会和世界各地的人一起工作,或许你会和你的邻居建立联系。但是,你不会知道谁能帮你找到下一份工作。参加开源项目扩展人脉,将对你寻找下一份(或者第一份)工作极有帮助。
### 5. 建立自信
最后,参与开源项目可能给你新的自信。很多科技企业的新员工会有些[冒充者综合症][4]。由于没有完成重要工作,他们会感到没有归属感,好像自己是冒名顶替的那个人,认为自己配不上他们的新职位。在被雇佣前参加开源项目可以最小化这种问题。
开源项目的工作往往是独立完成的,但是对于项目来说,所有的贡献是一个整体。开源社区具有强大的包容性和合作性,只要你有所贡献,一定会被看到。别的社区成员(特别是更资深的成员)对你的肯定,无疑也是一种回报。在你进行代码提交时获得的认可,可以提高你的自信,打败冒充者综合症。这份自信也会被带到面试、新职位等场合。
这只是从事开源工作的一些好处。如果你知道更多的好处,请在下方评论区留言分享。
### 关于作者
Sophie PolsonSophie 是一名在杜克大学研究计算机科学的学生。她通过杜克大学 2017 年秋季课程“开源世界Open Source World”开始了开源社区的冒险对探索 [DevOps][5] 十分有兴趣,并将在 2018 年春季毕业后成为一名软件工程师。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/5-ways-turn-open-source-new-job
作者:[Sophie Polson][a]
译者:[Lontow](https://github.com/lontow)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/sophiepolson
[1]:https://github.com/dbaldwin/DronePan
[2]:https://www.thebalance.com/why-you-should-ask-questions-in-a-job-interview-1669548
[3]:https://www.computerworld.com/article/3177442/it-careers/lifelong-learning-is-no-longer-optional.html
[4]:https://en.wikipedia.org/wiki/Impostor_syndrome
[5]:https://en.wikipedia.org/wiki/DevOps
两款 Linux 桌面端可用的科学计算器
======
Translating by zyk2290
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_OpenData_CityNumbers.png?itok=lC03ce76)
Image by : opensource.com
每个 Linux 桌面环境都至少带有一个功能简单的桌面计算器,但大多数计算器只能进行一些简单的计算。
幸运的是,还是有例外的:它们能做的远不止开平方根和一些三角函数,而且用起来还很简单。这里将介绍两款强大的计算器,外加一大堆额外的功能。
### SpeedCrunch
[SpeedCrunch][1] 是一款高精度科学计算器,它有一个 Qt5 图形界面前端,并且强烈依赖键盘。
![SpeedCrunch graphical interface][3]
SpeedCrunch 在工作时
它支持使用单位,并且加载了各种函数。
例如,
`2 * 10^6 newton / (meter^2)`
你可以得到
`= 2000000 pascal`
SpeedCrunch 默认会将结果转化为国际标准单位,但还是可以用 `in` 命令来转换单位。
例如:
`3*10^8 meter / second in kilo meter / hour`
结果是:
`= 1080000000 kilo meter / hour`
`F5` 键可以将所有结果转为科学计数法(`1.08e9 kilo meter / hour``F2` 键则只将那些很大或很小的数转为科学计数法。更多选项可以在配置Configuration页面找到。
可用的函数列表看上去非常惊艳。它可以运行在 Linux、Windows 和 macOS 上。它的许可证是 GPLv2你可以在 [Bitbucket][4] 上得到它的源码。
### Qalculate!
[Qalculate!][5](有感叹号)有一段长而复杂的历史。
这个项目给了我们一个强大的库,这个库可以被其它程序使用(在 Plasma 桌面中krunner 可以用它来计算),以及一个用 GTK3 搭建的图形界面前端。它允许你转换单位、处理物理常量、绘制图像、使用复数、矩阵以及向量、选择任意精度,等等。
![Qalculate! Interface][7]
正在 Qalculate! 中寻找物理常量
在使用单位上Qalculate! 会比 SpeedCrunch 更加直观,而且可以识别一些常用前缀。你有听说过 exapascal 吗?反正我没有(太阳的中心大概是 `~26 PPa`),但 Qalculate! 可以准确识别出 `1 EPa`。同时Qalculate! 可以更加灵活地处理语法错误,所以你不需要担心写括号:如果没有歧义,Qalculate! 会直接给出正确答案。
一段时间之后,这个项目看上去被遗弃了。但在 2016 年它又焕发了活力,在一年里更新了 10 个版本。它的许可证是 GPLv2源码在 [GitHub][8] 上提供),有 Linux、Windows 和 macOS 的版本。
### 其它计算器
#### 转换一切
好吧,它不是一个“计算器”,但这个程序非常好用。
大部分单位转换器只是一个基本单位的长列表以及一大堆基本组合,但 [ConvertAll][9] 与它们不一样。有试过把光年转换为英尺每秒吗?不管它们说不说得通,只要你想转换任何种类的单位ConvertAll 就是你要的工具。
只需要在相应的输入框内输入转换前和转换后的单位:如果单位相容,你会直接得到答案。
主程序是在 PyQt5 上搭建的,但也有[JavaScript 的在线版本][10]。
#### 带单位包的 (wx)Maxima
有时候(好吧,是很多时候),一款桌面计算器是不够用的,你需要更强大的原始计算能力。
[Maxima][11] 是一款计算机代数系统LCTT 译注:进行符号运算的软件。这种系统的要件是数学表示式的符号运算),你可以用它计算导数、积分、方程、特征值和特征向量、泰勒级数、拉普拉斯变换与傅立叶变换,以及进行任意精度的数字计算、绘制二维或三维图像……这些列出来都够我们写几页纸的了。
[wxMaxima][12] 是一个设计精湛的 Maxima 图形前端,它简化了许多 Maxima 的选项,但并不影响其它功能。在 Maxima 的基础上wxMaxima 还允许你创建“笔记本”notebook你可以在上面写一些笔记、保存你的图像等。(wx)Maxima 最惊艳的功能之一是它可以处理量纲单位dimension units
只需要输入 `load("unit")`
按 Shift+Enter等几秒钟的时间然后你就可以开始了
默认情况下,单位包使用基本的 MKS 单位,但如果你愿意,例如,想以 `N` 为单位而不是 `kg*m/s2`,你只需要输入 `setunits(N)`
Maxima 的帮助(也可以在 wxMaxima 的帮助菜单中找到)会给你更多信息。
你使用这些程序吗?你知道还有其它好的科学、工程用途的桌面计算器或者其它相关的计算器吗?在评论区里告诉我们吧!
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/1/scientific-calculators-linux
作者:[Ricardo Berlasso][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/rgb-es
[1]:http://speedcrunch.org/index.html
[2]:/file/382511
[3]:https://opensource.com/sites/default/files/u128651/speedcrunch.png "SpeedCrunch graphical interface"
[4]:https://bitbucket.org/heldercorreia/speedcrunch
[5]:https://qalculate.github.io/
[6]:/file/382506
[7]:https://opensource.com/sites/default/files/u128651/qalculate-600.png "Qalculate! Interface"
[8]:https://github.com/Qalculate
[9]:http://convertall.bellz.org/
[10]:http://convertall.bellz.org/js/
[11]:http://maxima.sourceforge.net/
[12]:https://andrejv.github.io/wxmaxima/

供应链管理方面的 5 个开源软件工具
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_Maze2.png?itok=EH_L-J6Q)
本文最初发表于 2016 年 1 月 14 日,最后的更新日期为 2018 年 3 月 2 日。
如果你正在管理着处理实体货物的业务,[供应链管理][1] 是你的业务流程中非常重要的一部分。不论你是经营着一个只有几个客户的小商店,还是在世界各地拥有数百万计客户和成千上万产品的世界财富 500 强的制造商或零售商,很清楚地知道你的库存和制造产品所需要的零部件,对你来说都是非常重要的事情。
对货品、供应商、客户以及与它们相关的所有变动环节保持持续跟踪,都能从专门的软件中受益,在某些情况下甚至完全依赖专门的软件来帮助管理这些工作流。在本文中,我们将去了解一些自由开源的供应链管理软件,以及它们的其中一些功能。
供应链管理比单纯的库存管理更为强大。它能帮你去跟踪货物流以降低成本,以及为可能发生的各种糟糕的变化来制定应对计划。它能够帮你对出口合规性进行跟踪,不论是合法性、最低品质要求、还是社会和环境的合规性。它能够帮你计划最低供应量,让你能够在订单数量和交付时间之间做出明智的决策。
由于它的本质决定了许多供应链管理软件是与类似的软件捆绑在一起的,比如,[客户关系管理][2]CRM和 [企业资源计划管理][3] ERP。因此当你选择哪个工具更适合你的组织时你可能会考虑与其它工具集成作为你的决策依据之一。
### Apache OFBiz
[Apache OFBiz][4] 是一套帮你管理多种业务流程的相关工具。虽然它能管理多种相关问题,比如,目录、电子商务网站、帐户、和销售点,它在供应链管理方面的主要功能关注于仓库管理、履行、订单、和生产管理。它的可定制性很强,但是,它需要大量的规划去设置和集成到你现有的流程中。这就是它适用于中大型业务的原因之一。项目的功能构建于三个层面:展示层、业务层、和数据层,它是一个弹性很好的解决方案,但是,再强调一遍,它也很复杂。
Apache OFBiz 的源代码在 [项目仓库][5] 中可以找到。Apache OFBiz 是用 Java 写的,并且它是按 [Apache 2.0 license][6] 授权的。
如果你对它感兴趣,你也可以去查看 [opentaps][7],它是在 OFBiz 之上构建的。Opentaps 强化了 OFBiz 的用户界面,并且添加了 ERP 和 CRM 的核心功能,包括仓库管理、采购和计划。它是按 [AGPL 3.0][8] 授权使用的,对于不接受开源授权的组织,它也提供了商业授权。
### OpenBoxes
[OpenBoxes][9] 是一个供应链管理和存货管理项目,最初的主要设计目标是为了医疗行业中的药品跟踪管理,但是,它可以通过修改去跟踪任何类型的货品和相关的业务流。它有一个需求预测工具,可以基于历史订单数量、存储跟踪、支持多种场所、过期日期跟踪、销售点支持等进行预测,并且它还有许多其它功能,这使它成为医疗行业的理想选择,但是,它也可以用于其它行业。
它在 [Eclipse Public License][10] 下可用OpenBoxes 主要是由 Groovy 写的,它的源代码可以在 [GitHub][11] 上看到。
### OpenLMIS
与 OpenBoxes 类似,[OpenLMIS][12] 也是一个医疗行业的供应链管理工具,但是,它专门设计用于在非洲的资源缺乏地区使用,以确保有限的药物和医疗用品能够用到需要的病人上。它是 API 驱动的,这样用户可以去定制和扩展 OpenLMIS同时还能维护一个与通用基准代码的连接。它是由洛克菲勒基金会开发的其它的贡献者包括联合国、美国国际开发署、以及比尔和梅琳达·盖茨基金会。
OpenLMIS 是用 Java 和 JavaScript 的 AngularJS 写的。它在 [AGPL 3.0 license][13] 下使用,它的源代码在 [GitHub][13] 上可以找到。
### Odoo
你可能在我们以前的 [ERP 项目][3] 榜的文章上见到过 [Odoo][14]。事实上,根据你的需要,一个全功能的 ERP 对你来说是最适合的。Odoo 的供应链管理工具主要围绕存货和采购管理,同时还与电子商务网站和销售点连接,但是,它也可以与其它的工具连接,比如,与 [frePPLe][15] 连接,它是一个开源的生产计划工具。
Odoo 既有软件即服务的解决方案,也有开源的社区版本。开源的版本是以 [LGPL][16] 版本 3 下发行的,源代码在 [GitHub][17] 上可以找到。Odoo 主要是用 Python 来写的。
### xTuple
[xTuple][18] 标称自己是“为成长中的企业提供供应链管理软件”,它专注于已经超越了传统的小型企业 ERP 和 CRM 解决方案的企业。它的开源版本称为 Postbooks添加了一些存货、分销、采购、以及供应商报告的功能它提供的核心功能是帐务、CRM、以及 ERP 功能,而它的商业版本扩展了制造和分销的 [功能][19]。
xTuple 在 [CPAL][20] 下使用,这个项目欢迎开发者去 fork 它,为基于存货的制造商去创建其它的业务软件。它的 Web 应用核心是用 JavaScript 写的,它的源代码在 [GitHub][21] 上可以找到。
就这些,当然了,还有其它的可以帮你处理供应链管理的开源软件。如果你知道还有更好的软件,请在下面的评论区告诉我们。
--------------------------------------------------------------------------------
via: https://opensource.com/tools/supply-chain-management
作者:[Jason Baker][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/jason-baker
[1]:https://en.wikipedia.org/wiki/Supply_chain_management
[2]:https://opensource.com/business/14/7/top-5-open-source-crm-tools
[3]:https://opensource.com/resources/top-4-open-source-erp-systems
[4]:http://ofbiz.apache.org/
[5]:http://ofbiz.apache.org/source-repositories.html
[6]:http://www.apache.org/licenses/LICENSE-2.0
[7]:http://www.opentaps.org/
[8]:http://www.fsf.org/licensing/licenses/agpl-3.0.html
[9]:http://openboxes.com/
[10]:http://opensource.org/licenses/eclipse-1.0.php
[11]:https://github.com/openboxes/openboxes
[12]:http://openlmis.org/
[13]:https://github.com/OpenLMIS/openlmis-ref-distro/blob/master/LICENSE
[14]:https://www.odoo.com/
[15]:https://frepple.com/
[16]:https://github.com/odoo/odoo/blob/9.0/LICENSE
[17]:https://github.com/odoo/odoo
[18]:https://xtuple.com/
[19]:https://xtuple.com/comparison-chart
[20]:https://xtuple.com/products/license-options#cpal
[21]:http://xtuple.github.io/

使用 Quagga 实现 Linux 动态路由
============================================================
![network](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/network_visualization.png?itok=P3Ve7eO1 "network")
学习如何使用 Quagga 套件的路由协议去管理动态路由。[Creative Commons Attribution][1][Wikimedia Commons: Martin Grandjean][2]
迄今为止,本系列文章中,我们已经在 [Linux 局域网路由新手指南:第 1 部分][4] 中学习了复杂的 IPv4 地址,在  [Linux 局域网路由新手指南:第 2 部分][5] 中学习了如何去手工创建静态路由。
今天,我们继续使用 [Quagga][6] 去管理动态路由这是一个安装完后就不用理它的软件。Quagga 是一个支持 OSPFv2、OSPFv3、RIP v1 和 v2、RIPng、以及 BGP-4 的路由协议套件,并全部由 zebra 守护程序管理。
OSPF 意为“开放式最短路径优先”。OSPF 是一个内部网关协议IGP它可以用在局域网和跨因特网的局域网互联中。在你的网络中的每个 OSPF 路由器都包含整个网络的拓扑,并计算通过网络的最短路径。OSPF 会通过多播的方式自动对外传播它检测到的网络变化。你可以将你的网络分割为区域,以保持路由表的可管理性;每个区域的路由器只需要知道离开它的区域的下一跳接口地址,而不用记录你的网络的整个路由表。
RIP路由信息协议是一个很老的协议RIP 路由器向网络中周期性多播它的整个路由表,而不是像 OSPF 那样只多播网络的变化。RIP 通过跳数来测量路由,任何超过 15 跳的路由它均视为不可到达。RIP 设置很简单,但是 OSPF 在速度、效率、以及弹性方面更佳。
BGP-4 是边界网关协议版本 4。这是用于因特网流量路由的外部网关协议EGP。你不会用到 BGP 协议的,除非你是因特网服务提供商。
### 准备使用 OSPF
在我们的小型 KVM 测试实验室中用两台虚拟机表示两个不同的网络然后将另一台虚拟机配置为路由器。创建两个网络net1 是 192.168.110.0/24 而 net2 是 192.168.120.0/24。启用 DHCP 是明智的否则你要分别进入这三个虚拟机去为它们设置静态地址。Host 1 在 net1 中Host 2 在 net2 中,而路由器同时与这两个网络连接。设置 Host 1 的网关地址为 192.168.110.126Host 2 的网关地址为 192.168.120.136。
* Host 1 192.168.110.125
* Host 2192.168.120.135
* Router192.168.110.126 和 192.168.120.136
在路由器上安装 Quagga。在大多数 Linux 中它是一个 quagga 包。在 Debian 上还有一个单独的文档包 quagga-doc。取消 `/etc/sysctl.conf` 配置文件中如下这一行的注释去启用包转发功能:
```
net.ipv4.ip_forward=1
```
然后,运行 `sysctl -p` 命令让变化生效。
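作为参考,下面是一个小示例,演示如何在不重启的情况下查看和临时修改包转发状态(读取不需要 root 权限,临时写入需要;持久化仍要通过 `/etc/sysctl.conf`

```shell
# 读取当前的包转发状态0 表示关闭1 表示开启)
cat /proc/sys/net/ipv4/ip_forward

# 临时开启包转发(需要 root 权限,重启后失效),此处注释掉以免误操作:
# sysctl -w net.ipv4.ip_forward=1
```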
### 配置 Quagga
查看你的 Quagga 包中的示例配置文件,比如,`/usr/share/doc/quagga/examples/ospfd.conf.sample`。除非你的 Linux 版本按你的喜好做了创新,否则,一般情况下配置文件应该在 `/etc/quagga` 目录中。大多数 Linux 版本在这个目录下有两个文件,`vtysh.conf`  和 `zebra.conf`。它们提供了守护程序运行所需要的最小配置。除非你的发行版做了一些特殊的配置否则zebra 总是首先运行,当你启动 ospfd 的时候它将自动启动。Debian/Ubuntu 是一个特例,稍后我们将会说到它。
每个路由器守护程序将读取它自己的配置文件,因此,我们必须创建 `/etc/quagga/ospfd.conf`,并输入如下内容:
```
!/etc/quagga/ospfd.conf
hostname router1
log file /var/log/quagga/ospfd.log
router ospf
ospf router-id 192.168.110.15
network 192.168.110.0/24 area 0.0.0.0
network 192.168.120.0/24 area 0.0.0.0
access-list localhost permit 127.0.0.1/32
access-list localhost deny any
line vty
access-class localhost
```
你可以使用感叹号(!)或者井号(#)去注释掉这些行。我们来快速浏览一下这些选项。
* **hostname** 是你希望的任何内容。这里不是一般意义上的 Linux 主机名,但是,当你使用 `vtysh` 或者 `telnet` 登入时,你将看到它们。
* **log file** 是你希望用于保存日志的任意文件。
* **router** 指定路由协议。
* **ospf router-id** 是任意的 32 位数字。使用路由器的一个 IP 地址就是很好的选择。
* **network** 定义你的路由器要通告的网络。
* **access-list** 限制 `vtysh` 登入,它是 Quagga 命令行 shell它允许本地机器登入并拒绝任何远程管理。
### Debian/Ubuntu
在你启动守护程序之前Debian/Ubuntu 相对其它发行版可能需要额外的一步。编辑 `/etc/quagga/daemons`,除了 `zebra=yes``ospfd=yes` 外,使其它所有行的值均为 `no`
然后,在 Debian 上启动 Quagga从而运行 `ospfd`
```
# systemctl start quagga
```
在大多数的其它 Linux 上,包括 Fedora 和 openSUSE用如下命令启动 `ospfd`
```
# systemctl start ospfd
```
现在Host 1 和 Host 2 将可以互相 ping 通对方和路由器。
这里用了许多篇幅去描述非常简单的设置。在现实中,路由器将连接两个交换机,然后为连接到这个交换机上的所有电脑提供一个网关。你也可以在你的路由器上添加更多的网络接口,这样你的路由器就可以为更多的网络提供路由服务,或者也可以直接连接到其它路由器上,或者连接到连接其它路由器的骨干网络上。
你或许不愿意如此麻烦地手工配置网络接口。最简单的方法是使用你的 DHCP 服务器去宣告你的路由器。如果你使用了 Dnsmasq那么你就有了一个 DHCP 和 DNS 的一体化解决方案。
还有更多的配置选项,比如,加密的密码保护。更多内容请查看 [Quagga 路由套件][7] 的官方文档。
--------------------------------------------------------------------------------
via: https://www.linux.com/learn/intro-to-linux/2018/3/dynamic-linux-routing-quagga
作者:[CARLA SCHRODER ][a]
译者:[qhwdw](https://github.com/qhwdw)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/cschroder
[1]:https://www.linux.com/licenses/category/creative-commons-attribution
[2]:https://commons.wikimedia.org/wiki/File:Network_Visualization.png
[3]:https://www.linux.com/files/images/networkvisualizationpng
[4]:https://www.linux.com/learn/intro-to-linux/2018/2/linux-lan-routing-beginners-part-1
[5]:https://www.linux.com/learn/intro-to-linux/2018/3/linux-lan-routing-beginners-part-2
[6]:https://www.quagga.net/
[7]:https://www.quagga.net/

在 Linux 中如何归档文件和目录
=====
![](https://www.ostechnix.com/wp-content/uploads/2018/03/Archive-Files-And-Directories-In-Linux-720x340.png)
在我们之前的教程中,我们讨论了如何[使用 gzip 和 bzip2 压缩和解压缩文件][1]。在本教程中,我们将学习如何在 Linux 归档文件。归档和压缩有什么不同吗?你们中的一些人可能经常认为这些术语有相同的含义。但是,这两者完全不同。归档是将多个文件和目录(相同或不同大小)组合成一个文件的过程。另一方面,压缩是减小文件或目录大小的过程。归档通常用作系统备份的一部分,或者将数据从一个系统移至另一个系统时。希望你了解归档和压缩之间的区别。现在,让我们进入主题。
### 归档文件和目录
归档文件和目录最常见的程序是:
1. tar
2. zip
这是一个很大的话题,所以,我将分两部分发表这篇文章。在第一部分中,我们将看到如何使用 tar 命令来归档文件和目录。
##### 使用 tar 命令归档文件和目录
**Tar** 是一个 Unix 命令,代表<ruby>磁带归档<rt>Tape Archive</rt></ruby>。它用于将多个文件相同或不同大小组合或存储到一个文件中。tar 实用程序中有 4 种主要的操作模式。
1. **c** 从文件或目录中建立归档
2. **x** 提取归档
3. **r** 将文件追加到归档
4. **t** 列出归档的内容
有关完整的模式列表,参阅 man 手册页。
**创建一个新的归档**
为了本指南,我将使用名为 **ostechnix** 的文件夹,其中包含三种不同类型的文件。
```
$ ls ostechnix/
file.odt image.png song.mp3
```
现在,让我们为 ostechnix 目录创建一个新的 tar 归档。
```
$ tar cf ostechnix.tar ostechnix/
```
这里,**c** 标志指的是创建新的归档,**f** 用于指定归档文件的名字。
同样,对当前工作目录中的一组文件创建归档文件,使用以下命令:
```
$ tar cf archive.tar file1 file2 file3
```
**提取归档**
要在当前目录中提取归档文件,只需执行以下操作:
```
$ tar xf ostechnix.tar
```
我们还可以使用 **C** 标志(大写字母 C将归档提取到不同的目录中。例如以下命令在 **Downloads** 目录中提取给定的归档文件。
```
$ tar xf ostechnix.tar -C Downloads/
```
或者,转到 Downloads 文件夹并像下面一样提取其中的归档。
```
$ cd Downloads/
$ tar xf ../ostechnix.tar
```
有时,你可能想要提取特定类型的文件。例如,以下命令提取 “.png” 类型的文件。
```
$ tar xf ostechnix.tar --wildcards "*.png"
```
**创建 gzip 和 bzip 格式的压缩归档**
默认情况下tar 创建的归档文件以 **.tar** 结尾。另外tar 命令可以与压缩实用程序 **gzip****bzip** 结合使用。以 **.tar** 为扩展名的文件是普通的 tar 归档文件,以 **.tar.gz****.tgz** 结尾的文件是使用 **gzip** 归档并压缩的文件,以 **.tar.bz2****.tbz** 结尾的文件则是使用 **bzip** 归档并压缩的文件。
首先,让我们来**创建一个 gzip 归档**
```
$ tar czf ostechnix.tar.gz ostechnix/
```
或者
```
$ tar czf ostechnix.tgz ostechnix/
```
这里,我们使用 **z** 标志来使用 gzip 压缩方法压缩归档文件。
你可以使用 **v** 标志在创建归档时查看进度。
```
$ tar czvf ostechnix.tar.gz ostechnix/
ostechnix/
ostechnix/file.odt
ostechnix/image.png
ostechnix/song.mp3
```
这里,**v** 指显示进度。
从一个文件列表创建 gzip 归档文件:
```
$ tar czf archive.tgz file1 file2 file3
```
要提取当前目录中的 gzip 归档文件,使用:
```
$ tar xzf ostechnix.tgz
```
要提取其他文件夹中的归档,使用 -C 标志:
```
$ tar xzf ostechnix.tgz -C Downloads/
```
现在,让我们创建 **bzip 归档**。为此,请使用下面的 **j** 标志。
创建一个目录的归档:
```
$ tar cjf ostechnix.tar.bz2 ostechnix/
```
```
$ tar cjf ostechnix.tbz ostechnix/
```
从一个列表文件中创建归档:
```
$ tar cjf archive.tar.bz2 file1 file2 file3
```
```
$ tar cjf archive.tbz file1 file2 file3
```
为了显示进度,使用 **v** 标志。
现在,在当前目录下,让我们提取一个 bzip 归档。这样做:
```
$ tar xjf ostechnix.tar.bz2
```
或者,提取归档文件到其他目录:
```
$ tar xjf ostechnix.tar.bz2 -C Downloads
```
**一次创建多个目录和/或文件的归档**
这是 tar 命令的另一个最酷的功能。要一次创建多个目录或文件的 gzip 归档文件,使用以下命令:
```
$ tar czvf ostechnix.tgz Downloads/ Documents/ ostechnix/file.odt
```
上述命令创建 **Downloads**, **Documents** 目录和 **ostechnix** 目录下的 **file.odt** 文件的归档,并将归档保存在当前工作目录中。
**在创建归档时跳过目录和/或文件**
这在备份数据时非常有用。你可以在备份中排除不重要的文件或目录,这时 **\-\-exclude** 选项就能帮上忙。例如,你想要创建 /home 目录的归档,但不希望包括 Downloads、Documents、Pictures、Music 这些目录。
这是我们的做法:
```
$ tar czvf ostechnix.tgz /home/sk --exclude=/home/sk/Downloads --exclude=/home/sk/Documents --exclude=/home/sk/Pictures --exclude=/home/sk/Music
```
上述命令将为我的 $HOME 目录创建一个 gzip 归档,其中不包括 Downloads、Documents、Pictures 和 Music 目录。要创建 bzip 归档,将 **z** 替换为 **j**,并在上例中使用扩展名 .bz2 即可。
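如果你想先在一个临时目录中安全地验证 `--exclude` 的效果,而不动到真实的家目录,可以参考下面这个小演示(其中的目录和文件名都只是示例):

```shell
# 在临时目录中搭一个假的 home打包时排除 Downloads 目录
demo=$(mktemp -d)
mkdir -p "$demo/home/Documents" "$demo/home/Downloads"
echo "keep" > "$demo/home/notes.txt"
echo "skip" > "$demo/home/Downloads/big.iso"

tar czf "$demo/backup.tgz" --exclude="home/Downloads" -C "$demo" home

# 列出归档内容notes.txt 和 Documents 在,而 Downloads 及其内容不在
tar tf "$demo/backup.tgz"
```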
**列出归档文件但不提取它们**
要列出归档文件的内容,我们使用 **t** 标志。
```
$ tar tf ostechnix.tar
ostechnix/
ostechnix/file.odt
ostechnix/image.png
ostechnix/song.mp3
```
要查看详细输出,使用 **v** 标志。
```
$ tar tvf ostechnix.tar
drwxr-xr-x sk/users 0 2018-03-26 19:52 ostechnix/
-rw-r--r-- sk/users 9942 2018-03-24 13:49 ostechnix/file.odt
-rw-r--r-- sk/users 36013 2015-09-30 11:52 ostechnix/image.png
-rw-r--r-- sk/users 112383 2018-02-22 14:35 ostechnix/song.mp3
```
**追加文件到归档**
文件或目录可以使用 **r** 标志添加/更新到现有的归档。看看下面的命令:
```
$ tar rf ostechnix.tar ostechnix/ sk/ example.txt
```
上面的命令会将名为 **sk** 的目录和名为 **example.txt** 的文件添加到 ostechnix.tar 归档文件中。
你可以使用以下命令验证文件是否已添加:
```
$ tar tvf ostechnix.tar
drwxr-xr-x sk/users 0 2018-03-26 19:52 ostechnix/
-rw-r--r-- sk/users 9942 2018-03-24 13:49 ostechnix/file.odt
-rw-r--r-- sk/users 36013 2015-09-30 11:52 ostechnix/image.png
-rw-r--r-- sk/users 112383 2018-02-22 14:35 ostechnix/song.mp3
drwxr-xr-x sk/users 0 2018-03-26 19:52 sk/
-rw-r--r-- sk/users 0 2018-03-26 19:39 sk/linux.txt
-rw-r--r-- sk/users 0 2018-03-26 19:56 example.txt
```
##### **TL;DR**
**创建 tar 归档:**
* **普通 tar 归档:** tar -cf archive.tar file1 file2 file3
* **Gzip tar 归档:** tar -czf archive.tgz file1 file2 file3
* **Bzip tar 归档:** tar -cjf archive.tbz file1 file2 file3
**提取 tar 归档:**
* **普通 tar 归档:** tar -xf archive.tar
* **Gzip tar 归档:** tar -xzf archive.tgz
* **Bzip tar 归档:** tar -xjf archive.tbz
我们只介绍了 tar 命令的基本用法,这些对于开始使用 tar 命令足够了。但是,如果你想了解更多详细信息,参阅 man 手册页。
```
$ man tar
```
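最后,下面这个小脚本把前面介绍的几个基本操作(创建、列出、提取)串在一起,全部在一个临时目录中进行,可以放心运行(其中的目录和文件名只是示例):

```shell
# 创建示例文件,打包成 gzip 归档,列出内容,再提取并查看
workdir=$(mktemp -d)
mkdir "$workdir/ostechnix"
echo "hello" > "$workdir/ostechnix/file.txt"

tar czf "$workdir/ostechnix.tgz" -C "$workdir" ostechnix   # 创建归档
tar tf "$workdir/ostechnix.tgz"                            # 列出内容

mkdir "$workdir/extract"
tar xzf "$workdir/ostechnix.tgz" -C "$workdir/extract"     # 提取归档
cat "$workdir/extract/ostechnix/file.txt"                  # 输出 hello
```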
好吧,这就是全部了。在下一部分中,我们将看到如何使用 Zip 实用程序来归档文件和目录。
干杯!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/how-to-archive-files-and-directories-in-linux-part-1/
作者:[SK][a]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/

一个更好的调试 Perl 模块
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/annoyingbugs.png?itok=ywFZ99Gs)
有时候,有一块仅在调试或开发调整时才运行的 Perl 代码会很有用。这很好,但是这样的代码块可能会对性能产生很大的影响,尤其是在运行时决定是否执行它的时候。
[Curtis“Ovid”Poe][1] 编写了一个可以帮助解决这个问题的模块:[Keyword::DEVELOPMENT][2]。该模块利用 Keyword::Simple 和 Perl 5.012 中引入的可插入关键字架构来创建新的关键字DEVELOPMENT。它使用 PERL_KEYWORD_DEVELOPMENT 环境变量的值来确定是否要执行一段代码。
使用它非常简单:
```
use Keyword::DEVELOPMENT;
       
sub doing_my_big_loop {
    my $self = shift;
    DEVELOPMENT {
        # insert expensive debugging code here!
    }
}
```
在编译时,如果 PERL_KEYWORD_DEVELOPMENT 为假DEVELOPMENT 块内的代码会被直接优化掉,根本就不存在。
你看到好处了么?在沙盒中将 PERL_KEYWORD_DEVELOPMENT 环境变量设置为 true在生产环境设为 false并且可以将有价值的调试工具提交到你的代码库中在你需要的时候随时可用。
在缺乏高级配置管理的系统中,你也可以使用此模块来处理生产和开发或测试环境之间的设置差异:
```
sub connect_to_my_database {
       
    my $dsn = "dbi:mysql:productiondb";
    my $user = "db_user";
    my $pass = "db_pass";
   
    DEVELOPMENT {
        # Override some of that config information
        $dsn = "dbi:mysql:developmentdb";
    }
   
    my $db_handle = DBI->connect($dsn, $user, $pass);
}
```
稍后对此代码片段的增强,可以使你从其他地方,比如 YAML 或 INI 文件中读取配置信息,但我希望你能从中看到这个工具的用处。
我查看了 Keyword::DEVELOPMENT 的源码,花了大约半小时研究,然后想:“天哪,我为什么没有想到这个?”安装 Keyword::Simple 后Curtis 给我们的这个模块就非常简单了。这是我长期以来在自己的编码实践中需要的一个优雅解决方案。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/4/perl-module-debugging-code
作者:[Ruth Holloway][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/druthb
[1]:https://metacpan.org/author/OVID
[2]:https://metacpan.org/pod/release/OVID/Keyword-DEVELOPMENT-0.04/lib/Keyword/DEVELOPMENT.pm

如何重置 Fedora 上的 root 密码
======
![](https://fedoramagazine.org/wp-content/uploads/2018/04/resetrootpassword-816x345.jpg)
系统管理员可以轻松地为忘记密码的用户重置密码。但是如果系统管理员忘记 root 密码会发生什么?本指南将告诉你如何重置遗失或忘记的 root 密码。请注意,要重置 root 密码,你需要能够接触到本机以重新启动并访问 GRUB 设置。此外,如果系统已加密,你还需要知道 LUKS 密码。
### 编辑 GRUB 设置
首先你需要中断启动过程。所以你需要打开系统,如果已经打开就重新启动。第一步很棘手,因为 grub 菜单往往会在屏幕上快速闪过。
当你看到 GRUB 菜单时,请按键盘上的 **E** 键:
![][1]
按下 e 后显示以下屏幕:
![][2]
使用箭头键移动到 **linux16** 这行。
![][3]
使用你的**删除**键或**退格**键,删除 **rhgb quiet** 并替换为以下内容。
```
rd.break enforcing=0
```
![][4]
编辑好后,按下 **Ctrl-x** 启动系统。如果系统已加密,则系统会提示你输入 LUKS 密码。
**注意:** 设置 enforcing=0避免执行完整的系统 SELinux 重新标记。系统重启后,为 /etc/shadow 恢复正确的 SELinux 上下文。(这个会进一步解释)
### 挂载文件系统
系统现在将处于紧急模式。以读写权限重新挂载硬盘:
```
# mount -o remount,rw /sysroot
```
### 更改密码
运行 chroot 访问系统。
```
# chroot /sysroot
```
你现在可以更改 root 密码。
```
# passwd
```
出现提示时输入新的 root 密码两次。如果成功,你应该看到一条 **all authentication tokens updated successfully** 消息。
输入 **exit** 两次重新启动系统。
以 root 身份登录并将 SELinux 标签恢复到 /etc/shadow 中。
```
# restorecon -v /etc/shadow
```
将 SELinux 变回 enforce 模式。
```
# setenforce 1
```
--------------------------------------------------------------------------------
via: https://fedoramagazine.org/reset-root-password-fedora/
作者:[Curt Warfield][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://fedoramagazine.org/author/rcurtiswarfield/
[1]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub.png
[2]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub2.png
[3]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub3.png
[4]:https://fedoramagazine.org/wp-content/uploads/2018/04/grub4.png

Go 程序的持续分析
============================================================
Google 最有趣的部分之一就是我们的持续分析服务。我们可以看到谁在使用 CPU 和内存,可以持续地监控我们的生产服务的争用和阻塞概况,并且可以生成分析和报告,轻松地找出可以产生重要影响的优化点。
我简单研究了 [Stackdriver Profiler][2],这是我们的新产品,它为云端用户填补了持续分析服务的空白。请注意,你无需在 Google 云平台上运行你的代码即可使用它。实际上,我现在每天都在开发时使用它。它也支持 Java 和 Node.js。
#### 在生产中分析
pprof 可安全地用于生产。针对 CPU 和堆分配的分析,我们额外增加的开销大约是 5%。每个实例每分钟收集 10 秒。如果你有一个 Kubernetes Pod 的多个副本,我们会确保进行分摊收集。例如,如果你拥有一个 pod 的 10 个副本,那么开销将变为 0.5%。这使用户可以始终进行分析。
我们目前支持 Go 程序的 CPU、堆、互斥和线程分析。
#### 为什么?
在解释如何在生产中使用分析器之前,先解释为什么你想要在生产中进行分析将有所帮助。一些非常常见的情况是:
* 调试仅在生产中可见的性能问题。
* 了解 CPU 使用率以减少费用。
* 了解争用的累积和优化的地方。
* 了解新版本的影响,例如看到 canary 版本和生产版本之间的区别。
* 通过[关联][1]分析样本来了解延迟的根本原因,丰富你的分布式追踪体验。
#### 启用
Stackdriver Profiler 不能与 _net/http/pprof_ 处理程序一起使用,并要求你在程序中安装和配置一个一行的代理。
```
go get cloud.google.com/go/profiler
```
在你的主函数中,启动分析器:
```
if err := profiler.Start(profiler.Config{
Service: "indexing-service",
ServiceVersion: "1.0",
ProjectID: "bamboo-project-606", // optional on GCP
}); err != nil {
log.Fatalf("Cannot start the profiler: %v", err)
}
```
当你运行你的程序后profiler 包将每分钟收集 10 秒钟的分析数据,并报告给后端。
#### 可视化
当分析数据上报到后端后,你将在 [https://console.cloud.google.com/profiler][4] 上看到火焰图。你可以按标签过滤并更改时间范围,也可以按服务名称和版本进行细分。数据将会保留长达 30 天。
![](https://cdn-images-1.medium.com/max/900/1*JdCm1WwmTgExzee5-ZWfNw.gif)
你可以选择其中一个分析,按服务,区域和版本分解。你可以在火焰中移动并通过标签进行过滤。
#### 阅读火焰图
火焰图的可视化原理,[Brendan Gregg][5] 已经解释得非常全面了。Stackdriver Profiler 则增加了一点自己的特色。
![](https://cdn-images-1.medium.com/max/900/1*QqzFJlV9v7U1s1reYsaXog.png)
我们将查看一个 CPU 分析,但这也适用于其他分析。
1. 最上面的 x 轴表示整个程序。火焰上的每个框表示调用路径上的一帧。框的宽度与执行该函数花费的 CPU 时间成正比。
2. 框从左到右排序,左边是花费最多的调用路径。
3. 来自同一包的帧具有相同的颜色。这里所有运行时函数均以绿色表示。
4. 你可以单击任何框进一步展开执行树。
![](https://cdn-images-1.medium.com/max/900/1*1jCm6f-Fl2mpkRe3-57mTg.png)
你可以将鼠标悬停在任何框上查看任何帧的详细信息。
#### 过滤
你可以显示,隐藏和高亮符号名称。如果你特别想了解某个特定调用或包的消耗,这些信息非常有用。
![](https://cdn-images-1.medium.com/max/900/1*ka9fA-AAuKggAuIBq_uhGQ.png)
1. 选择你的过滤器。你可以组合多个过滤器。在这里,我们将高亮显示 runtime.memmove。
2. 火焰将使用过滤器过滤帧并可视化过滤后的框。在这种情况下,它高亮显示所有 runtime.memmove 框。
--------------------------------------------------------------------------------
via: https://medium.com/google-cloud/continuous-profiling-of-go-programs-96d4416af77b
作者:[JBD ][a]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://medium.com/@rakyll?source=post_header_lockup
[1]:https://rakyll.org/profiler-labels/
[2]:https://cloud.google.com/profiler/
[3]:http://cloud.google.com/go/profiler
[4]:https://console.cloud.google.com/profiler
[5]:http://www.brendangregg.com/flamegraphs.html

编写有趣且有价值的 Systemd 服务
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minetest.png?itok=Houi9zf9)
让我们假设你希望搭建一个游戏服务器,运行 [Minetest][1] 这款非常酷的、开源的、以采集和合成为主题的沙盒游戏。你希望将游戏运行在位于客厅的服务器中,以便搭建完成后可供你的学校或朋友使用。既然内核邮件列表都是这样管理的,那对你来说也足够了。
但你很快发现每次开机之后需要启动服务器,每次关机之前需要安全地关闭服务器,十分繁琐和麻烦。
最初,你可能用守护进程的方式运行服务器:
```
minetest --server &
```
记住进程 PID 以便后续使用。
接着,你还需要通过邮件或短信的方式将服务器已经启动的信息告知你的朋友。然后你就可以开始游戏了。
转眼之间,已经凌晨三点,今天的战斗即将告一段落。但在你关闭主机、睡个好觉之前,还需要做一些操作。首先,你需要通知其它玩家服务器即将关闭,找到记录我们之前提到的 PID 的纸条,然后友好地关闭 Minetest 服务器。
```
kill -2 <PID>
```
因为直接关闭主机电源很可能导致文件损坏。下一步也是最后一步,关闭主机电源。
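前面这几步手工操作可以用几行 shell 做个示意(这里用 `sleep 60` 代替 minetest 服务器进程,文件路径仅为示例):

```shell
# 用 sleep 60 模拟 minetest --server 进程,演示 PID 的记录与使用
sleep 60 &
echo $! > /tmp/minetest.pid          # 把 PID 记到文件里,而不是纸条上
kill -2 "$(cat /tmp/minetest.pid)"   # 需要关闭时,向它发送 SIGINT信号 2
```

真实场景中把 `sleep 60` 换成 `minetest --server` 即可。注意:非交互 shell 中用 `&` 启动的后台任务可能会忽略 SIGINT必要时可改用默认的 `kill`(发送 SIGTERM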
一定有方法能让事情变得更简单。
### 让 Systemd 服务拯救你
让我们从构建一个普通用户可以(手动)运行的 systemd 服务开始,然后再逐步增加内容。
不需要管理员权限即可运行的服务位于 _~/.config/systemd/user/_,故首先需要创建这个目录:
```
cd
mkdir -p ~/.config/systemd/user/
```
有很多类型的 systemd _units_ (曾经叫做 systemd 脚本),包括 _timers__paths_ 等,但我们这里关注的是 service 类型。在 _~/.config/systemd/user/_ 目录中创建 _minetest.service_ 文件,使用文本编辑器打开并输入如下内容:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
ExecStart= /usr/games/minetest --server
```
可以看到 units 中包含不同的段,其中 `[Unit]` 段主要为用户提供信息,给出 unit 的描述及如何获得更多相关文档。
脚本核心位于 `[Service]` 段,首先使用 `Type` 指令确定服务类型。服务[有多种类型][2],下面给出两个示例。如果你运行的进程设置环境变量、调用另外一个进程(主进程)、退出运行,那么你应该使用的服务类型为 `forking`。如果你希望在你的 unit 对应进程结束运行前阻断其他 units 运行,那么你应该使用的服务类型为 `oneshot`
但 Minetest 服务器的情形与上面两种都不同,你希望启动服务器并使其在后台持续运行;这种情况下应该使用 `simple` 类型。
下面来看 `ExecStart` 指令,它给出 systemd 需要运行的程序。在本例中,你希望在后台运行 `minetest` 服务器。如上所示,你可以在可执行程序后面添加参数,但不能将一系列 Bash 命令通过管道连接起来。下面给出的例子无法工作:
```
ExecStart: lsmod | grep nvidia > videodrive.txt
```
如果你需要将 Bash 命令通过管道连接起来,可以将其封装到一个脚本中,然后运行该脚本。
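一种可行的封装方式示意如下(脚本路径与文件名均为假设):

```shell
# 把带管道和重定向的命令写进一个独立脚本
cat > /tmp/videodrivers.sh <<'EOF'
#!/bin/bash
lsmod | grep nvidia > /tmp/videodrive.txt
EOF
chmod +x /tmp/videodrivers.sh
# 之后在 unit 文件中写ExecStart= /tmp/videodrivers.sh
```
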
还需要注意一点systemd 要求你给出程序的完整路径。故如果你想使用 `simple` 类型运行类似 _ls_ 的命令,你需要使用 `ExecStart= /bin/ls`
另外还有 `ExecStop` 指令用于定制服务终止的方式。我们会在第二部分讨论这个指令,但你要了解,如果你没有指定 `ExecStop`systemd 会帮你尽可能友好地终止进程。
_systemd.directives_ 的帮助页中包含完整指令列表,另外你可以在[网站][3]上找到同样的列表,点击即可查看每个指令的具体信息。
虽然只有 6 行,但你的 _minetest.service_ 已经是一个有完整功能的 systemd unit。执行如下命令启动服务
```
systemctl --user start minetest
```
执行如下命令终止服务
```
systemctl --user stop minetest
```
选项 `--user` 告知 systemd 在你的本地目录中检索服务并用你的用户权限执行服务。
我们的服务器管理故事到此完成了第一部分。在第二部分,我们将在启动和终止服务的基础上,学习如何给用户发邮件、告知用户服务器的可用性。敬请期待。
可以通过 Linux 基金会和 edX 的免费课程 [Linux 入门][4]学习更多 Linux 知识。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.minetest.net/
[2]:http://man7.org/linux/man-pages/man5/systemd.service.5.html
[3]:http://man7.org/linux/man-pages/man7/systemd.directives.7.html
[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
Systemd 服务:比启动停止服务更进一步
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/systemd-minetest-2.jpg?itok=bXO0ggHL)
在上一篇[文章][1]中,我们展示了如何创建一个 systemd 服务并使用普通用户启动和终止游戏服务器。但到目前为止,使用这个服务并不比直接运行服务器高明多少。让我们更进一步,让其可以向玩家发邮件,包括在服务器可用时通知玩家,在服务器关闭前警告玩家:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
ExecStart= /usr/games/minetest --server
ExecStartPost= /home/<username>/bin/mtsendmail.sh "Ready to rumble?" "Minetest Starting up"
TimeoutStopSec= 180
ExecStop= /home/<username>/bin/mtsendmail.sh "Off to bed. Nightie night!" "
  Minetest Stopping in 2 minutes"
ExecStop= /bin/sleep 120
ExecStop= /bin/kill -2 $MAINPID
```
这里涉及几个新的指令。首先是 `ExecStartPost` 指令,该指令可以在主进程启动后马上执行任何你指定的操作。在本例中,你执行了一个自定义脚本 `mtsendmail` (内容如下),该脚本以邮件形式通知你的朋友服务器已经启动。
```
#!/bin/bash
# mtsendmail
echo $1 | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
```
我们使用 [Mutt][2] 这个命令行邮件客户端发送消息。虽然上述脚本实际上只有一行,但由于 systemd unit 的参数中不能包含管道和重定向操作,我们需要将其封装到脚本中。
顺便提一下,还有一个 `ExecStartPre` 指令,用于在服务主进程执行之前进行指定操作。
接下来我们看到,关闭服务器涉及了好几条指令。`TimeoutStopSec` 指令用于设置 systemd 友好关闭服务的最大等候时间,默认值大约是 90 秒。超过这个最大等候时间systemd 会强制关闭服务并报错。考虑到你希望在彻底关闭服务器前给用户预留几分钟的时间,你需要将超时时间提高至 3 分钟,这样 systemd 就不会误认为服务关闭时出现问题。
接下来是关闭服务的具体指令部分。虽然没有 `ExecStopPre` 这样的指令,但你可以通过多次使用 `ExecStop` 指令实现关闭服务器前执行操作的目标。多个 `ExecStop` 指令按从上到下的顺序依次运行,这样你就可以在服务器真正关闭前向用户发送消息。
通过这个特性,你首先应该向你的朋友发邮件,警告其服务器即将关闭,然后等待两分钟,最后关闭服务器。可以使用 [Ctrl] + [c] 关闭 Minetest 服务器,该操作会被转换为一个中断信号 ( _SIGINT_ );当你执行 `kill -2 $MAINPID` 时就会发送该中断信号,其中 `$MAINPID` 是 systemd 变量,用于记录你服务中主进程的 PID 信息。
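`kill -2` 发送的 SIGINT 可以被进程捕获,从而执行清理并体面退出。下面是一个与 Minetest 无关的最小演示:

```shell
# 演示进程捕获 SIGINT信号 2后执行清理并退出
bash -c 'trap "echo got SIGINT; exit 0" INT; kill -2 $$; sleep 1'
```

Minetest 服务器正是以类似的方式响应 `kill -2 $MAINPID`,从而安全地关闭。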
看上去好多了!如果你此时启动服务,
```
systemctl --user start minetest
```
服务会启动 Minetest 服务器并向你的用户发送邮件。关闭服务的情形基本类似,只不过会额外留给用户 2 分钟时间退出登录。
### 开机自启动
下一步我们让你的服务在主机启动后立即可用,在主机关闭时自动关闭。
我们需要将你的服务文件移动到系统服务目录,即 _/etc/systemd/system/_
```
sudo mv /home/<username>/.config/systemd/user/minetest.service /etc/systemd/system/
```
如果你希望此时启动该服务,你需要拥有超级用户权限:
```
sudo systemctl start minetest
```
另外,可以使用如下命令检查服务状态:
```
sudo systemctl status minetest
```
你会发现服务处于失败状态,这是因为 systemd 无法从上下文或配置文件中得知应该以哪个用户的身份运行该服务。在 unit 文件中增加 `User` 指令可以解决这个问题:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
User= <username>
ExecStart= /usr/games/minetest --server
ExecStartPost= /home/<username>/bin/mtsendmail.sh "Ready to rumble?"
  "Minetest Starting up"
TimeoutStopSec= 180
ExecStop= /home/<username>/bin/mtsendmail.sh "Off to bed. Nightie night!"
  "Minetest Stopping in 2 minutes"
ExecStop= /bin/sleep 120
ExecStop= /bin/kill -2 $MAINPID
```
systemd 从 `User` 指令中得知应使用哪个用户的环境变量来正确运行该服务。你可以使用 root 用户,但这可能产生安全风险;使用你的个人用户会好一些,但不少管理员的做法是为服务单独创建一个用户,这样可以有效地将服务与其它用户和系统组件相互隔离。
下一步我们让你的服务在系统启动时自动启动,系统关闭时自动关闭。要达到这个目的,你需要 _enable_ 你的服务;但在这之前,你还需要告知 systemd 从哪里 _install_
对于 systemd 而言_install_ 意味着告知 systemd 应该在系统启动流程的哪个阶段激活你的服务。以通用 Unix 打印系统的 _cups.service_ 为例它在网络框架启动之后、其它打印服务启动之前启动。又如_minetest.service_ 需要使用用户邮件(及其它组件),必须等待网络和普通用户相关的服务就绪后才能启动。
你只需要在 unit 中添加一个新段和新指令:
```
...
[Install]
WantedBy= multi-user.target
```
你可以将其理解为“等待多用户系统的全部内容就绪”。systemd 中的 targets 类似于旧系统中的 run levels可以用于将主机转移到一个或另一个状态也可以像本例中这样让你的服务等待指定状态出现后运行。
你的最终 _minetest.service_ 文件如下:
```
# minetest.service
[Unit]
Description= Minetest server
Documentation= https://wiki.minetest.net/Main_Page
[Service]
Type= simple
User= <username>
ExecStart= /usr/games/minetest --server
ExecStartPost= /home/<username>/bin/mtsendmail.sh "Ready to rumble?"
  "Minetest Starting up"
TimeoutStopSec= 180
ExecStop= /home/<username>/bin/mtsendmail.sh "Off to bed. Nightie night!"
  "Minetest Stopping in 2 minutes"
ExecStop= /bin/sleep 120
ExecStop= /bin/kill -2 $MAINPID
[Install]
WantedBy= multi-user.target
```
在尝试新的服务之前,你还需要对邮件脚本做一些调整:
```
#!/bin/bash
# mtsendmail
sleep 20
echo $1 | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
sleep 10
```
这是因为系统需要一定的时间启动邮件系统(这里等待 20 秒),也需要一定时间完成邮件发送(这里等待 10 秒)。注意脚本中的等待时间数值适用于我的系统,你可能需要针对你的系统调整数值。
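如果不想依赖写死的等待秒数,也可以让脚本在发送失败时自动重试。下面是一个思路示意(`mutt` 调用用 `true` 占位,实际使用时替换回原来的发送命令):

```shell
#!/bin/bash
# mtsendmail 的重试版示意:发送失败时每隔 10 秒重试,最多 5 次
send_mail() { true; }  # 占位;实际应为: echo "$1" | mutt -F /home/<username>/.muttrc -s "$2" my_minetest@mailing_list.com
sent=no
for i in 1 2 3 4 5; do
    if send_mail "$1" "$2"; then sent=yes; break; fi
    sleep 10
done
echo "sent=$sent"
```

这样即使邮件系统启动得比预期慢,脚本也能自行等待,而不必针对每台机器调整固定的等待时间。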
大功告成啦。执行如下操作:
```
sudo systemctl enable minetest
```
你的 Minetest 服务将在系统启动时自动启动,在系统关闭时友好关闭并通知你的用户。
### 总结
事实上Debian、Ubuntu 及其衍生发行版提供了 _minetest-server_ 这个专门的软件包,可以实现上述的部分功能(但不包括邮件通知功能)。即便如此,你还是可以建立自己的自定义服务;事实上,你目前建立的服务比 Debian 默认提供的服务更加通用,可以实现更多功能。
更进一步的说,我们这里描述的流程可以让你将大多数简单服务器转换为服务,类型可以是游戏、网站应用或其它应用。同时,这也是你名副其实地踏入 systemd 大师殿堂的第一步。
--------------------------------------------------------------------------------
via: https://www.linux.com/blog/learn/2018/5/systemd-services-beyond-starting-and-stopping
作者:[Paul Brown][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.linux.com/users/bro66
[1]:https://www.linux.com/blog/learn/intro-to-linux/2018/5/writing-systemd-services-fun-and-profit
[2]:http://www.mutt.org/
Orbital Apps - 新一代 Linux 程序
======
![](https://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-720x340.jpg)
今天,我们要了解 **Orbital Apps****ORB****O**pen **R**unnable **B**undle开放可运行程序包这是一个自由、开源、跨平台的程序集合。所有 ORB 程序都是可移动的:你可以将它们安装在 Linux 系统上,也可以安装到 USB 驱动器上,以便在任何系统上使用相同的程序。它们不需要 root 权限,也没有外部依赖,所有必需的依赖都已包含在程序中。只需将 ORB 程序复制到 USB 驱动器并插入任何 Linux 系统,就可以立即开始使用。所有设置、配置以及程序数据都存储在 USB 驱动器上。由于不需要在本地驱动器上安装程序,我们可以在联机或脱机的计算机上运行它们,这也意味着下载依赖时不需要连接互联网。
ORB 程序最多可压缩 60%,因此即使在小容量的 USB 驱动器上也能存储和使用。所有 ORB 程序均使用 PGP/RSA 签名,并通过 TLS 1.2 分发。所有程序打包时不做任何修改,甚至不会重新编译。以下是当前可用的便携式 ORB 程序列表:
* abiword
* audacious
* audacity
* darktable
* deluge
* filezilla
* firefox
* gimp
* gnome-mplayer
* hexchat
* inkscape
* isomaster
* kodi
* libreoffice
* qbittorrent
* sound-juicer
* thunderbird
* tomahawk
* uget
* vlc
* 未来还有更多。
Orb 是开源的,如果你是开发人员,欢迎参与协作并添加更多程序。
### 下载并使用可移动 ORB 程序
正如前面提到的,可移动 ORB 程序无需安装。不过ORB 团队强烈建议你使用 **ORB 启动器**来获得更好的体验。ORB 启动器是一个小型安装程序(小于 5MB它可以帮助你启动 ORB 程序,带来更好、更流畅的使用体验。
让我们先安装 ORB 启动器。为此,[**下载 ORB 启动器**][1]。你可以手动下载 ORB 启动器的 ISO 并将其挂载到文件管理器上。或者在终端中运行以下任一命令来安装它:
```
$ wget -O - https://www.orbital-apps.com/orb.sh | bash
```
如果你没有 wget请运行
```
$ curl https://www.orbital-apps.com/orb.sh | bash
```
询问时输入 root 用户和密码。
就是这样。ORB 启动器已安装完毕,可以使用了。
现在,进入 [**ORB 可移动程序下载页面**][2],并下载你选择的程序。在本教程中,我会下载 Firefox。
下载完后,进入下载位置并双击 ORB 程序来启动它。点击 Yes 确认。
![][4]
Firefox ORB 程序能用了!
![][5]
同样,你可以立即下载并运行任何程序。
如果你不想使用 ORB 启动器,请将下载的 .orb 安装程序设置为可执行文件,然后双击它进行安装。不过,建议使用 ORB 启动器,它可让你在使用 ORB 程序时更轻松、更顺畅。
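在终端里,“设置为可执行文件”这一步等价于 `chmod +x`(这里用 `touch` 创建占位文件演示,实际对象应为你下载的 `.orb` 文件):

```shell
cd /tmp
touch firefox.orb        # 占位文件;实际为下载的 ORB 安装程序
chmod +x firefox.orb     # 赋予可执行权限
ls -l firefox.orb        # 确认带有 x 权限,之后双击或 ./firefox.orb 即可运行
```
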
就我测试的 ORB 程序而言,它们打开即可使用。希望这篇文章有帮助。今天就是这些。祝你有美好的一天!
干杯!!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/orbitalapps-new-generation-ubuntu-linux-applications/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://www.orbital-apps.com/documentation/orb-launcher-all-installers
[2]:https://www.orbital-apps.com/download/portable_apps_linux/
[4]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-1-2.png
[5]:http://www.ostechnix.com/wp-content/uploads/2016/05/orbital-apps-2.png
在 CentOS 6 系统上安装最新版 Python3 软件包的 3 种方法
======
CentOS 克隆自 RHEL无需付费即可使用。CentOS 是一个企业级标准的、前沿的操作系统,被超过 90% 的网络主机托管商采用,因为它提供了技术领先的服务器控制面板 cPanel/WHM。
该控制面板使得用户无需进入命令行即可通过其管理一切。
众所周知RHEL 提供长期支持,出于稳定性考虑,不提供最新版本的软件包。
如果你想安装的最新版本软件包不在默认源中,你需要手动编译源码安装。
但手动编译安装的方式有不小的风险,即如果出现新版本,无法升级手动安装的软件包;你不得不重新手动安装。
那么在这种情况下,安装最新版软件包的推荐方法和方案是什么呢?是的,可以通过为系统添加所需的第三方源来达到目的。
可供企业级 Linux 使用的第三方源有很多,但只有几个是 CentOS 社区推荐使用的,它们在很大程度上不修改基础软件包。
这几个推荐的源维护得很好,为 CentOS 提供大量补充软件包。
在本教程中,我们将向你展示,如何在 CentOS 6 操作系统上安装最新版本的 Python 3 软件包。
### 方法 1使用 Software Collections 源 (SCL)
SCL 源目前由 CentOS SIG 维护,除了重新编译构建 Red Hat 的 Software Collections 外,还额外提供一些它们自己的软件包。
该源中包含不少程序的更高版本,可以在不改变原有旧版本程序包的情况下安装,使用时需要通过 scl 命令调用。
运行如下命令可以在 CentOS 上安装 SCL 源:
```
# yum install centos-release-scl
```
检查可用的 Python 3 版本:
```
# yum info rh-python35
Loaded plugins: fastestmirror, security
Description : This is the main package for rh-python35 Software Collection.
```
运行如下命令从 scl 源安装可用的最新版 python 3
```
# yum install rh-python35
```
运行如下特殊的 scl 命令,在当前 shell 中启用安装的软件包:
```
# scl enable rh-python35 bash
```
运行如下命令检查安装的 python3 版本:
```
# python --version
Python 3.5.1
```
运行如下命令获取系统已安装的 SCL 软件包列表:
```
# scl -l
rh-python35
```
### 方法 2使用 EPEL 源 (Extra Packages for Enterprise Linux)
EPEL 是 Extra Packages for Enterprise Linux 的缩写,该源由 Fedora SIG (Special Interest Group) 维护。
该 SIG 为企业级 Linux 创建、维护并管理一系列高品质补充软件包,受益的企业级 Linux 发行版包括但不限于红帽企业级 Linux (RHEL), CentOS, Scientific Linux (SL) 和 Oracle Linux (OL)等。
EPEL 通常基于 Fedora 对应代码提供软件包,不会与企业级 Linux 发行版中的基础软件包冲突或替换其中的软件包。
**推荐阅读:** [在 RHEL, CentOS, Oracle Linux 或 Scientific Linux 上安装启用 EPEL 源][1]
EPEL 软件包位于 CentOS 的 Extra 源中,已经默认启用,故我们只需运行如下命令即可:
```
# yum install epel-release
```
检查可用的 python 3 版本:
```
# yum --disablerepo="*" --enablerepo="epel" info python34
Loaded plugins: fastestmirror, security
Description : Python 3 is a new version of the language that is incompatible with ...
```
运行如下命令从 EPEL 源安装可用的最新版 python 3 软件包:
```
# yum --disablerepo="*" --enablerepo="epel" install python34
```
默认情况下并不会安装 pip 和 setuptools我们需要运行如下命令手动安装
```
# curl -O https://bootstrap.pypa.io/get-pip.py
% Total % Received % Xferd Average Speed Time Time Time Current
Successfully installed pip-10.0.1 setuptools-39.1.0 wheel-0.31.0
```
运行如下命令检查已安装的 python3 版本:
```
# python3 --version
Python 3.4.5
```
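get-pip.py 运行完成后,还可以顺带确认 pip 本身能正常工作(输出的版本号因环境而异,以下仅为示意):

```shell
# 确认 pip 可以通过 python3 正常调用
python3 -m pip --version
```
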
### 方法 3使用 IUS 社区源
IUS 社区是 CentOS 社区批准的第三方 RPM 源,为企业级 Linux (RHEL 和 CentOS) 5, 6 和 7 版本提供最新上游版本的 PHP, Python, MySQL 等软件包。
IUS 社区源依赖于 EPEL 源,故我们需要先安装 EPEL 源,然后再安装 IUS 社区源。按照下面的步骤在 RPM 系统上安装并启用 EPEL 源和 IUS 社区源,然后安装软件包。
**推荐阅读:** [在 RHEL 或 CentOS 上安装启用 IUS 社区源][2]
EPEL 软件包位于 CentOS 的 Extra 源中,已经默认启用,故我们只需运行如下命令即可:
```
# yum install epel-release
```
下载 IUS 社区源安装脚本:
```
# curl 'https://setup.ius.io/' -o setup-ius.sh
% Total % Received % Xferd Average Speed Time Time Time Current
```
安装启用 IUS 社区源:
```
# sh setup-ius.sh
```
检查可用的 python 3 版本:
```
# yum --enablerepo=ius info python36u
Loaded plugins: fastestmirror, security
Description : Python is an accessible, high-level, dynamically typed, interpreted ...
```
运行如下命令从 IUS 源安装最新可用版本的 python 3 软件包:
```
# yum --enablerepo=ius install python36u
```
运行如下命令检查已安装的 python3 版本:
```
# python3.6 --version
Python 3.6.5
```
--------------------------------------------------------------------------------
via: https://www.2daygeek.com/3-methods-to-install-latest-python3-package-on-cen
作者:[PRAKASH SUBRAMANIAN][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
LikeCoin一种给创作者的开放内容许可的加密货币
======
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0)
传统观点认为,在 Creative Commons 及其他开放许可下免费共享内容的作家、摄影师、艺术家和其他创作者得不到报酬,这意味着大多数独立创作者无法通过在互联网上发布作品来赚钱。[LikeCoin][1] 由此登场:这个新的开源项目旨在让“艺术家经常为了传播而不得不妥协或牺牲”的惯例成为过去。
LikeCoin 协议旨在让创意内容可以获利,使创作者能够专注于创作出色的内容,而不是忙于推销。
协议同样基于去中心化技术,它可以跟踪何时使用内容,并使用 LikeCoin一种 [Ethereum ERC-20][2] 加密货币令牌奖励其创作者。它通过“创造性证明”算法进行操作,该算法一部分根据作品收到多少个“喜欢”,一部分根据有多少作品衍生自它而分配 LikeCoin。由于开放授权内容有更多机会被重复使用并获得 LikeCoin 令牌,因此系统鼓励内容创作者在 Creative Commons 许可下发布。
### 如何运行
当通过 LikeCoin 协议上传创意片段时,内容创作者将包括作品的元数据,包括作者信息及其 InterPlanetary 关联数据([IPLD][3])。这些数据构成了衍生作品的家族图表;我们称作品与其衍生品之间的关系为“内容足迹”。这种结构使得内容的继承树可以很容易地追溯到原始作品。
LikeCoin 令牌将作品衍生历史记录的信息分发给创作者。由于所有创意作品都包含作者钱包的元数据,因此相应的 LikeCoin 份额可以通过算法计算并分发。
LikeCoin 可以通过两种方式奖励创作者:一是由想表达赞赏的个人直接付费给内容创作者;二是通过 Creators Pool 收集观众的“赞”,并根据内容的 LikeRank 分配 LikeCoin。LikeRank 基于 LikeCoin 协议中的内容追踪来衡量作品的重要性(也就是我们这里所定义的创造性)。一般来说,一部作品的衍生作品越多,其创意内容的创新就越多,内容的 LikeRank 也就越高。LikeRank 是内容创新性的量化指标。
### 如何参与?
LikeCoin 仍然非常新,我们期望在 2018 年晚些时候推出我们的第一个去中心化程序来奖励 Creative Commons 的内容,并与更大的社区无缝连接。
LikeCoin 的大部分代码都可以在 [LikeCoin GitHub][4] 仓库中获取,采用 [GPL 3.0 许可证][5]。由于项目仍处于积极开发阶段,一些实验性代码尚未公开,但我们会尽快开放。
我们欢迎功能请求、pull request、fork 和 star。请参与我们在 GitHub 上的开发,并加入我们在 [Telegram][6] 的讨论组。我们也会在 [Medium][7]、[Facebook][8]、[Twitter][9] 以及我们的网站 [like.co][1] 上发布进展的最新消息。
--------------------------------------------------------------------------------
via: https://opensource.com/article/18/5/likecoin
作者:[Kin Ko][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://opensource.com/users/ckxpress
[1]:https://like.co/
[2]:https://en.wikipedia.org/wiki/ERC20
[3]:https://ipld.io/
[4]:https://github.com/likecoin
[5]:https://www.gnu.org/licenses/gpl-3.0.en.html
[6]:https://t.me/likecoin
[7]:http://medium.com/likecoin
[8]:http://fb.com/likecoin.foundation
[9]:https://twitter.com/likecoin_fdn
You-Get - 支持 80+ 网站的命令行多媒体下载器
======
![](https://www.ostechnix.com/wp-content/uploads/2018/05/you-get-1-720x340.jpg)
你们大多数人可能用过或听说过 **Youtube-dl**,这个命令行程序可以从包括 Youtube 在内的 100+ 网站下载视频。我偶然发现了一个类似的工具,名字叫做 **You-Get**。这是一个用 Python 编写的命令行下载器,可以让你从 Youtube、Facebook、Twitter 等很多热门网站下载图片、音频和视频。目前该下载器支持 80+ 站点,点击[**这里**][1]查看所有支持的网站。
You-Get 不仅仅是一个下载器,它还可以将在线视频导流至你的视频播放器。更进一步,它还允许你在 Google 上搜索视频只要给出搜索项You-Get 就会使用 Google 搜索并下载相关度最高的视频。另外值得一提的是,它允许你暂停和恢复下载过程。它是一个完全自由、开源且跨平台的应用,适用于 Linux、MacOS 及 Windows。
### 安装 You-Get
确保你已经安装如下依赖项:
+ Python 3
+ FFmpeg (强烈推荐) 或 Libav
+ (可选) RTMPDump
有多种方式安装 You-Get其中官方推荐采用 pip 包管理器安装。如果你还没有安装 pip可以参考如下链接
[如何使用 pip 管理 Python 软件包][2]
需要注意的是,你需要安装 Python 3 版本的 pip。
接下来,运行如下命令安装 You-Get
```
$ pip3 install you-get
```
可以使用命令升级 You-Get 至最新版本:
```
$ pip3 install --upgrade you-get
```
### 开始使用 You-Get
使用方式与 Youtube-dl 工具基本一致。
**下载视频**
下载视频,只需运行:
```
$ you-get https://www.youtube.com/watch?v=HXaglTFJLMc
```
输出示例:
```
site: YouTube
title: The Last of The Mohicans by Alexandro Querevalú
stream:
- itag: 22
container: mp4
quality: hd720
size: 56.9 MiB (59654303 bytes)
# download-with: you-get --itag=22 [URL]
Downloading The Last of The Mohicans by Alexandro Querevalú.mp4 ...
100% ( 56.9/ 56.9MB) ├███████████████████████████████████████████████████████┤[1/1] 752 kB/s
```
下载视频前你可能希望查看视频的细节信息。You-Get 提供了 **“--info”****“-i”** 参数,使用该参数可以获得给定视频所有可用的分辨率和格式。
```
$ you-get -i https://www.youtube.com/watch?v=HXaglTFJLMc
```
或者
```
$ you-get --info https://www.youtube.com/watch?v=HXaglTFJLMc
```
输出示例如下:
```
site: YouTube
title: The Last of The Mohicans by Alexandro Querevalú
streams: # Available quality and codecs
[ DASH ] ____________________________________
- itag: 137
container: mp4
quality: 1920x1080
size: 101.9 MiB (106816582 bytes)
# download-with: you-get --itag=137 [URL]
- itag: 248
container: webm
quality: 1920x1080
size: 90.3 MiB (94640185 bytes)
# download-with: you-get --itag=248 [URL]
- itag: 136
container: mp4
quality: 1280x720
size: 56.9 MiB (59672392 bytes)
# download-with: you-get --itag=136 [URL]
- itag: 247
container: webm
quality: 1280x720
size: 52.6 MiB (55170859 bytes)
# download-with: you-get --itag=247 [URL]
- itag: 135
container: mp4
quality: 854x480
size: 32.2 MiB (33757856 bytes)
# download-with: you-get --itag=135 [URL]
- itag: 244
container: webm
quality: 854x480
size: 28.0 MiB (29369484 bytes)
# download-with: you-get --itag=244 [URL]
[ DEFAULT ] _________________________________
- itag: 22
container: mp4
quality: hd720
size: 56.9 MiB (59654303 bytes)
# download-with: you-get --itag=22 [URL]
```
默认情况下You-Get 会下载标记为 **DEFAULT** 的格式。如果你对格式或分辨率不满意,可以选择你喜欢的格式,使用格式对应的 itag 值即可。
```
$ you-get --itag=244 https://www.youtube.com/watch?v=HXaglTFJLMc
```
**下载音频**
执行下面的命令,可以从 soundcloud 网站下载音频:
```
$ you-get 'https://soundcloud.com/uiceheidd/all-girls-are-same-999-prod-nick-mira'
Site: SoundCloud.com
Title: ALL GIRLS ARE THE SAME (PROD. NICK MIRA)
Type: MP3 (audio/mpeg)
Size: 2.58 MiB (2710046 Bytes)
Downloading ALL GIRLS ARE THE SAME (PROD. NICK MIRA).mp3 ...
100% ( 2.6/ 2.6MB) ├███████████████████████████████████████████████████████┤[1/1] 983 kB/s
```
查看音频文件细节,使用 **-i** 参数:
```
$ you-get -i 'https://soundcloud.com/uiceheidd/all-girls-are-same-999-prod-nick-mira'
```
**下载图片**
运行如下命令下载图片:
```
$ you-get https://pixabay.com/en/mountain-crumpled-cyanus-montanus-3393209/
```
You-Get 也可以下载网页中的全部图片:
```
$ you-get https://www.ostechnix.com/pacvim-a-cli-game-to-learn-vim-commands/
```
**搜索视频**
你只需向 You-Get 传递一个任意的搜索项,而无需给出有效的 URLYou-Get 会使用 Google 搜索并下载与你给出搜索项最相关的视频。(译者注Google 的机器人检测机制可能导致 503 报错导致该功能无法使用)。
```
$ you-get 'Micheal Jackson'
Google Videos search:
Best matched result:
site: YouTube
title: Michael Jackson - Beat It (Official Video)
stream:
- itag: 43
container: webm
quality: medium
size: 29.4 MiB (30792050 bytes)
# download-with: you-get --itag=43 [URL]
Downloading Michael Jackson - Beat It (Official Video).webm ...
100% ( 29.4/ 29.4MB) ├███████████████████████████████████████████████████████┤[1/1] 2 MB/s
```
**观看视频**
You-Get 可以将在线视频导流至你的视频播放器或浏览器,跳过广告和评论部分。(译者注:使用 -p 参数需要对应的 vlc/chrominum 命令可以调用,一般适用于具有图形化界面的操作系统)。
以 VLC 视频播放器为例,使用如下命令在其中观看视频:
```
$ you-get -p vlc https://www.youtube.com/watch?v=HXaglTFJLMc
```
或者
```
$ you-get --player vlc https://www.youtube.com/watch?v=HXaglTFJLMc
```
类似地,将视频导流至以 chromium 为例的浏览器中,使用如下命令:
```
$ you-get -p chromium https://www.youtube.com/watch?v=HXaglTFJLMc
```
![][3]
在上述屏幕截图中,可以看到并没有广告和评论部分,只是一个包含视频的简单页面。
**设置下载视频的路径及文件名**
默认情况下,会以视频标题作为文件名,下载至当前工作目录。当然,你可以按照喜好进行更改:使用 **--output-dir/-o** 参数可以指定路径,使用 **--output-filename/-O** 参数可以指定下载文件的文件名。
```
$ you-get -o ~/Videos -O output.mp4 https://www.youtube.com/watch?v=HXaglTFJLMc
```
**暂停和恢复下载**
**CTRL+C** 可以取消下载。一个以 **.download** 为扩展名的临时文件会保存至输出路径。下次你使用相同的参数下载时,下载过程将延续上一次的过程。
当文件下载完成后,以 .download 为扩展名的临时文件会自动消失。如果这时你使用同样参数下载You-Get 会跳过下载;如果你想强制重新下载,可以使用 **--force/-f** 参数。
查看命令的帮助部分可以获取更多细节,命令如下:
```
$ you-get --help
```
这次的分享到此结束,后续还会介绍更多的优秀工具,敬请期待!
感谢各位阅读!
--------------------------------------------------------------------------------
via: https://www.ostechnix.com/you-get-a-cli-downloader-to-download-media-from-80-websites/
作者:[SK][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[pinewall](https://github.com/pinewall)
校对:[校对者ID](https://github.com/校对者ID)
本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
[a]:https://www.ostechnix.com/author/sk/
[1]:https://you-get.org/#supported-sites
[2]:https://www.ostechnix.com/manage-python-packages-using-pip/
[3]:http://www.ostechnix.com/wp-content/uploads/2018/05/you-get.jpg