@@ -253,7 +254,7 @@ $(this).html(''); });titletitletitletitletitle ``` -接下来,定义表模型。 这是提供所有表选项的地方,包括界面的滚动,而不是分页,根据 dom 字符串提供的装饰,将数据导出为 CSV 和其他格式的能力,以及建立与服务器的 Ajax 连接。 请注意,使用 Groovy GString 调用 Grails **createLink()** 的方法创建 URL,在 **EmployeeController** 中指向 **browserLister** 操作。同样有趣的是表格列的定义。此信息将发送到后端,后端查询数据库并返回相应的记录。 +接下来,定义表模型。这是提供所有表选项的地方,包括界面的滚动,而不是分页,根据 DOM 字符串提供的装饰,将数据导出为 CSV 和其他格式的能力,以及建立与服务器的 AJAX 连接。 请注意,使用 Groovy GString 调用 Grails `createLink()` 的方法创建 URL,在 `EmployeeController` 中指向 `browserLister` 操作。同样有趣的是表格列的定义。此信息将发送到后端,后端查询数据库并返回相应的记录。 ``` var table = $('#employee_dt').DataTable( { @@ -302,7 +303,7 @@ that.search(this.value).draw(); ![](https://opensource.com/sites/default/files/uploads/screen_4.png) -这是另一个屏幕截图,显示了过滤和多列排序(寻找 position 包括字符 “dev” 的员工,先按 office 排序,然后按姓氏排序): +这是另一个屏幕截图,显示了过滤和多列排序(寻找 “position” 包括字符 “dev” 的员工,先按 “office” 排序,然后按姓氏排序): ![](https://opensource.com/sites/default/files/uploads/screen_5.png) @@ -314,37 +315,37 @@ that.search(this.value).draw(); ![](https://opensource.com/sites/default/files/uploads/screen7.png) -好的,视图部分看起来非常简单; 因此,控制器必须做所有繁重的工作,对吧? 让我们来看看… +好的,视图部分看起来非常简单;因此,控制器必须做所有繁重的工作,对吧? 让我们来看看…… #### 控制器 browserLister 操作 -回想一下,我们看到过这个字符串 +回想一下,我们看到过这个字符串: ``` "${createLink(controller: 'employee', action: 'browserLister')}" ``` -对于从 DataTables 模型中调用 Ajax 的 URL,是在 Grails 服务器上动态创建 HTML 链接,其 Grails 标记背后通过调用 [createLink()][17] 的方法实现的。这会最终产生一个指向 **EmployeeController** 的链接,位于: +对于从 DataTables 模型中调用 AJAX 的 URL,是在 Grails 服务器上动态创建 HTML 链接,其 Grails 标记背后通过调用 [createLink()][17] 的方法实现的。这会最终产生一个指向 `EmployeeController` 的链接,位于: ``` embrow/grails-app/controllers/com/nuevaconsulting/embrow/EmployeeController.groovy ``` -特别是控制器方法 **browserLister()**。我在代码中留了一些 print 语句,以便在运行时能够在终端看到中间结果。 +特别是控制器方法 `browserLister()`。我在代码中留了一些 `print` 语句,以便在运行时能够在终端看到中间结果。 ```     def browserLister() {         // Applies filters and sorting to return a list of desired employees ``` -首先,打印出传递给 **browserLister()** 的参数。我通常使用此代码开始构建控制器方法,以便我完全清楚我的控制器正在接收什么。 +首先,打印出传递给 `browserLister()` 的参数。我通常使用此代码开始构建控制器方法,以便我完全清楚我的控制器正在接收什么。 ```       println "employee browserLister params $params"         println() ``` -接下来,处理这些参数以使它们更加有用。首先,jQuery DataTables 参数,一个名为 **jqdtParams**的 Groovy 映射: +接下来,处理这些参数以使它们更加有用。首先,jQuery DataTables 参数,一个名为 `jqdtParams` 的 Groovy 映射: ``` def jqdtParams = [:] @@ -363,7 +364,7 @@ println "employee dataTableParams $jqdtParams" println() ``` -接下来,列数据,一个名为 **columnMap**的 Groovy 映射: +接下来,列数据,一个名为 `columnMap` 的 Groovy 映射: ``` def columnMap = jqdtParams.columns.collectEntries { k, v -> @@ -386,7 +387,7 @@ println "employee columnMap $columnMap" println() ``` -接下来,从 **columnMap** 中检索的所有列表,以及在视图中应如何排序这些列表,Groovy 列表分别称为 **allColumnList**和 **orderList**: +接下来,从 `columnMap` 中检索的所有列表,以及在视图中应如何排序这些列表,Groovy 列表分别称为 `allColumnList` 和 `orderList` : ``` def allColumnList = columnMap.keySet() as List @@ -395,7 +396,7 @@ def orderList = jqdtParams.order.collect { k, v -> [allColumnList[v.column as In println "employee orderList $orderList" ``` -我们将使用 Grails 的 Hibernate 标准实现来实际选择要显示的元素以及它们的排序和分页。标准要求过滤器关闭; 在大多数示例中,这是作为标准实例本身的创建的一部分给出的,但是在这里我们预先定义过滤器闭包。请注意,在这种情况下,“date hired” 过滤器的相对复杂的解释被视为一年并应用于建立日期范围,并使用 **createAlias** 以允许我们进入相关类别 Position 和 Office: +我们将使用 Grails 的 Hibernate 标准实现来实际选择要显示的元素以及它们的排序和分页。标准要求过滤器关闭;在大多数示例中,这是作为标准实例本身的创建的一部分给出的,但是在这里我们预先定义过滤器闭包。请注意,在这种情况下,“date hired” 过滤器的相对复杂的解释被视为一年并应用于建立日期范围,并使用 `createAlias` 以允许我们进入相关类别 `Position` 和 `Office`: ``` def filterer = { @@ -424,14 +425,14 @@ def filterer = { } ``` -是时候应用上述内容了。第一步是获取分页代码所需的所有 Employee 实例的总数: +是时候应用上述内容了。第一步是获取分页代码所需的所有 
`Employee` 实例的总数: ```         def recordsTotal = Employee.count()         println "employee recordsTotal $recordsTotal" ``` -接下来,将过滤器应用于 Employee 实例以获取过滤结果的计数,该结果将始终小于或等于总数(同样,这是针对分页代码): +接下来,将过滤器应用于 `Employee` 实例以获取过滤结果的计数,该结果将始终小于或等于总数(同样,这是针对分页代码): ```         def c = Employee.createCriteria() @@ -467,7 +468,7 @@ def filterer = { 要完全清楚,JTable 中的分页代码管理三个计数:数据集中的记录总数,应用过滤器后得到的数字,以及要在页面上显示的数字(显示是滚动还是分页)。 排序应用于所有过滤的记录,并且分页应用于那些过滤的记录的块以用于显示目的。 -接下来,处理命令返回的结果,在每行中创建指向 Employee,Position 和 Office 实例的链接,以便用户可以单击这些链接以获取相关实例的所有详细信息: +接下来,处理命令返回的结果,在每行中创建指向 `Employee`、`Position` 和 `Office` 实例的链接,以便用户可以单击这些链接以获取相关实例的所有详细信息: ```         def dollarFormatter = new DecimalFormat('$##,###.##') @@ -490,14 +491,15 @@ def filterer = { } ``` -大功告成 +大功告成。 + 如果你熟悉 Grails,这可能看起来比你原先想象的要多,但这里没有火箭式的一步到位方法,只是很多分散的操作步骤。但是,如果你没有太多接触 Grails(或 Groovy),那么需要了解很多新东西 - 闭包,代理和构建器等等。 在那种情况下,从哪里开始? 最好的地方是了解 Groovy 本身,尤其是 [Groovy closures][18] 和 [Groovy delegates and builders][19]。然后再去阅读上面关于 Grails 和 Hibernate 条件查询的建议阅读文章。 ### 结语 -jQuery DataTables 为 Grails 制作了很棒的表格数据浏览器。对视图进行编码并不是太棘手,但DataTables 文档中提供的 PHP 示例提供的功能仅到此位置。特别是,它们不是用 Grails 程序员编写的,也不包含探索使用引用其他类(实质上是查找表)的元素的更精细的细节。 +jQuery DataTables 为 Grails 制作了很棒的表格数据浏览器。对视图进行编码并不是太棘手,但 DataTables 文档中提供的 PHP 示例提供的功能仅到此位置。特别是,它们不是用 Grails 程序员编写的,也不包含探索使用引用其他类(实质上是查找表)的元素的更精细的细节。 我使用这种方法制作了几个数据浏览器,允许用户选择要查看和累积记录计数的列,或者只是浏览数据。即使在相对适度的 VPS 上的百万行表中,性能也很好。 @@ -512,7 +514,7 @@ via: https://opensource.com/article/18/9/using-grails-jquery-and-datatables 作者:[Chris Hermansen][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[jrg](https://github.com/jrglinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -528,11 +530,11 @@ via: https://opensource.com/article/18/9/using-grails-jquery-and-datatables [9]: http://sdkman.io/ [10]: http://guides.grails.org/creating-your-first-grails-app/guide/index.html [11]: https://opensource.com/file/410061 -[12]: https://opensource.com/sites/default/files/uploads/screen_1.png "Embrow home screen" +[12]: https://opensource.com/sites/default/files/uploads/screen_1.png [13]: https://opensource.com/file/410066 -[14]: https://opensource.com/sites/default/files/uploads/screen_2.png "Office list screenshot" +[14]: https://opensource.com/sites/default/files/uploads/screen_2.png [15]: https://opensource.com/file/410071 -[16]: https://opensource.com/sites/default/files/uploads/screen3.png "Employee controller screenshot" +[16]: https://opensource.com/sites/default/files/uploads/screen3.png [17]: https://gsp.grails.org/latest/ref/Tags/createLink.html [18]: http://groovy-lang.org/closures.html [19]: http://groovy-lang.org/dsls.html diff --git a/translated/tech/20180928 What containers can teach us about DevOps.md b/published/201811/20180928 What containers can teach us about DevOps.md similarity index 85% rename from translated/tech/20180928 What containers can teach us about DevOps.md rename to published/201811/20180928 What containers can teach us about DevOps.md index 0aec91e2fb..3a0a360603 100644 --- a/translated/tech/20180928 What containers can teach us about DevOps.md +++ b/published/201811/20180928 What containers can teach us about DevOps.md @@ -1,7 +1,7 @@ 容器技术对 DevOps 的一些启发 ====== -容器技术的使用支撑了目前 DevOps 三大主要实践:工作流,及时反馈,持续学习。 +> 容器技术的使用支撑了目前 DevOps 三大主要实践:工作流、及时反馈、持续学习。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) @@ -11,11 +11,11 @@ #### 
容器中的工作流 -每个容器都可以看成一个独立的运行环境,对于容器内部,不需要考虑外部的宿主环境、集群环境、以及其它基础设施。在容器内部,每个功能看起来都是以传统的方式运行。从外部来看,容器内运行的应用一般作为整个应用系统架构的一部分:比如 web API,web app 用户界面,数据库,任务执行,缓存系统,垃圾回收等。运维团队一般会限制容器的资源使用,并在此基础上建立完善的容器性能监控服务,从而降低其对基础设施或者下游其他用户的影响。 +每个容器都可以看成一个独立的运行环境,对于容器内部,不需要考虑外部的宿主环境、集群环境,以及其它基础设施。在容器内部,每个功能看起来都是以传统的方式运行。从外部来看,容器内运行的应用一般作为整个应用系统架构的一部分:比如 web API、web app 用户界面、数据库、任务执行、缓存系统、垃圾回收等。运维团队一般会限制容器的资源使用,并在此基础上建立完善的容器性能监控服务,从而降低其对基础设施或者下游其他用户的影响。 #### 现实中的工作流 -那些跟“容器”一样业务功能独立的团队,也可以借鉴这种容器思维。因为无论是在现实生活中的工作流(代码发布、构建基础设施,甚至制造 [Spacely’s Sprockets][2] 等),还是技术中的工作流(开发、测试、运维、发布)都使用了这样的线性工作流,一旦某个独立的环节或者工作团队出现了问题,那么整个下游都会受到影响,虽然使用这种线性的工作流有效降低了工作耦合性。 +那些跟“容器”一样业务功能独立的团队,也可以借鉴这种容器思维。因为无论是在现实生活中的工作流(代码发布、构建基础设施,甚至制造 [《杰森一家》中的斯贝斯利太空飞轮][2] 等),还是技术中的工作流(开发、测试、运维、发布)都使用了这样的线性工作流,一旦某个独立的环节或者工作团队出现了问题,那么整个下游都会受到影响,虽然使用这种线性的工作流有效降低了工作耦合性。 #### DevOps 中的工作流 @@ -23,7 +23,7 @@ DevOps 中的第一条原则,就是掌控整个执行链路的情况,努力 > 践行这样的工作流后,可以避免将一个已知缺陷带到工作流的下游,避免局部优化导致可能的全局性能下降,要不断探索如何优化工作流,持续加深对于系统的理解。 -—— Gene Kim,[支撑 DevOps 的三个实践][3],IT 革命,2017.4.25 +> —— Gene Kim,《[支撑 DevOps 的三个实践][3]》,IT 革命,2017.4.25 ### 反馈 @@ -33,7 +33,7 @@ DevOps 中的第一条原则,就是掌控整个执行链路的情况,努力 #### 现实中的反馈 -在现实中,从始至终同样也需要反馈。一个高效的处理流程中,及时的反馈能够快速地定位事情发生的时间。反馈的关键词是“快速”和“相关”。当一个团队被淹没在大量不相关的事件时,那些真正需要快速反馈的重要信息很容易被忽视掉,并向下游传递形成更严重的问题。想象下[如果露西和埃塞尔][6]能够很快地意识到:传送带太快了,那么制作出的巧克力可能就没什么问题了(尽管这样就不那么搞笑了)。 +在现实中,从始至终同样也需要反馈。一个高效的处理流程中,及时的反馈能够快速地定位事情发生的时间。反馈的关键词是“快速”和“相关”。当一个团队被淹没在大量不相关的事件时,那些真正需要快速反馈的重要信息很容易被忽视掉,并向下游传递形成更严重的问题。想象下[如果露西和埃塞尔][6]能够很快地意识到:传送带太快了,那么制作出的巧克力可能就没什么问题了(尽管这样就不那么搞笑了)。(LCTT 译注:露西和埃塞尔是上世纪 50 年代的著名黑白情景喜剧《我爱露西》中的主角) #### DevOps 中的反馈 @@ -41,7 +41,7 @@ DevOps 中的第二条原则,就是快速收集所有相关的有用信息, > 快速的反馈对于提高技术的质量、可用性、安全性至关重要。 -—— Gene Kim 及其他,DevOps 手册:如何在技术组织中创造世界级的敏捷性,可靠性和安全性,IT 革命,2016 +> —— Gene Kim 等人,《DevOps 手册:如何在技术组织中创造世界级的敏捷性,可靠性和安全性》,IT 革命,2016 ### 持续学习 @@ -71,7 +71,7 @@ DevOps 中的第二条原则,就是快速收集所有相关的有用信息, > 实验和冒险让我们能够不懈地改进我们的工作,但也要求我们尝试之前未用过的工作方式。 -—— Gene Kim 及其他,[凤凰计划:让你了解 IT、DevOps 以及如何取得商业成功][7],IT 革命,2013 +> —— Gene Kim 等人,《[凤凰计划:让你了解 IT、DevOps 以及如何取得商业成功][7]》,IT 革命,2013 ### 容器技术带给 DevOps 的启迪 @@ -84,7 +84,7 @@ via: https://opensource.com/article/18/9/containers-can-teach-us-devops 作者:[Chris Hermansen][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[littleji](https://github.com/littleji) -校对:[pityonline](https://github.com/pityonline) +校对:[pityonline](https://github.com/pityonline), [wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201811/20181001 Turn your book into a website and an ePub using Pandoc.md b/published/201811/20181001 Turn your book into a website and an ePub using Pandoc.md new file mode 100644 index 0000000000..734ac021cb --- /dev/null +++ b/published/201811/20181001 Turn your book into a website and an ePub using Pandoc.md @@ -0,0 +1,259 @@ +使用 Pandoc 将你的书转换成网页和电子书 +====== + +> 通过 Markdown 和 Pandoc,可以做到编写一次,发布两次。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) + +Pandoc 是一个命令行工具,用于将文件从一种标记语言转换为另一种标记语言。在我 [对 Pandoc 的简介][1] 一文中,我演示了如何把 Markdown 编写的文本转换为网页、幻灯片和 PDF。 + +在这篇后续文章中,我将深入探讨 [Pandoc][2],展示如何从同一个 Markdown 源文件生成网页和 ePub 格式的电子书。我将使用我即将发布的电子书《[面向对象思想的 GRASP 原则][3]》为例进行讲解,这本电子书正是通过以下过程创建的。 + +首先,我将解释这本书使用的文件结构,然后介绍如何使用 Pandoc 生成网页并将其部署在 GitHub 上;最后,我演示了如何生成对应的 ePub 格式电子书。 + +你可以在我的 GitHub 仓库 [Programming Fight Club][4] 中找到相应代码。 + +### 设置图书结构 + +我用 Markdown 语法完成了所有的写作,你也可以使用 HTML 标记,但是当 Pandoc 将 Markdown 转换为 ePub 文档时,引入的 HTML 标记越多,出现问题的风险就越高。我的书按照每章一个文件的形式进行组织,用 Markdown 的 
`H1` 标记(`#`)声明每章的标题。你也可以在每个文件中放置多个章节,但将它们放在单独的文件中可以更轻松地查找内容并在以后进行更新。 + +元信息遵循类似的模式,每种输出格式都有自己的元信息文件。元信息文件定义有关文档的信息,例如要添加到 HTML 中的文本或 ePub 的许可证。我将所有 Markdown 文档存储在名为 `parts` 的文件夹中(这对于用来生成网页和 ePub 的 Makefile 非常重要)。下面以一个例子进行说明,让我们看一下目录,前言和关于本书(分为 `toc.md`、`preface.md` 和 `about.md` 三个文件)这三部分,为清楚起见,我们将省略其余的章节。 + +关于本书这部分内容的开头部分类似: + +``` +# About this book {-} + +## Who should read this book {-} + +Before creating a complex software system one needs to create a solid foundation. +General Responsibility Assignment Software Principles (GRASP) are guidelines to assign +responsibilities to software classes in object-oriented programming. +``` + +每一章完成后,下一步就是添加元信息来设置网页和 ePub 的格式。 + +### 生成网页 + +#### 创建 HTML 元信息文件 + +我创建的网页的元信息文件(`web-metadata.yaml`)是一个简单的 YAML 文件,其中包含 ` ` 标签中的作者、标题、和版权等信息,以及 HTML 文件中开头和结尾的内容。 + +我建议(至少)包括 `web-metadata.yaml` 文件中的以下字段: + +``` +--- +title: GRASP principles for the Object-oriented mind +author: Kiko Fernandez-Reyes +rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International +header-includes: +- | + ```{=html} + + + ``` +include-before: +- | + ```{=html} +

If you like this book, please consider + spreading the word or + + buying me a coffee + +

+ ``` +include-after: +- | + ```{=html} +
+
+
+ +
+
+ ``` +--- +``` + +下面几个变量需要注意一下: + +- `header-includes` 变量包含将要嵌入 `` 标签的 HTML 文本。 +- 调用变量后的下一行必须是 `- |`。再往下一行必须以与 `|` 对齐的三个反引号开始,否则 Pandoc 将无法识别。`{= html}` 告诉 Pandoc 其中的内容是原始文本,不应该作为 Markdown 处理。(为此,需要检查 Pandoc 中的 `raw_attribute` 扩展是否已启用。要进行此检查,键入 `pandoc --list-extensions | grep raw` 并确保返回的列表包含名为 `+ raw_html` 的项目,加号表示已启用。) +- 变量 `include-before` 在网页开头添加一些 HTML 文本,此处我请求读者帮忙宣传我的书或给我打赏。 +- `include-after` 变量在网页末尾添加原始 HTML 文本,同时显示我的图书许可证。 + +这些只是其中一部分可用的变量,查看 HTML 中的模板变量(我的文章 [Pandoc简介][1] 中介绍了如何查看 LaTeX 的模版变量,查看 HTML 模版变量的过程是相同的)对其余变量进行了解。 + +#### 将网页分成多章 + +网页可以作为一个整体生成,这会产生一个包含所有内容的长页面;也可以分成多章,我认为这样会更容易阅读。我将解释如何将网页划分为多章,以便读者不会被长网页吓到。 + +为了使网页易于在 GitHub Pages 上部署,需要创建一个名为 `docs` 的根文件夹(这是 GitHub Pages 默认用于渲染网页的根文件夹)。然后我们需要为 `docs` 下的每一章创建文件夹,将 HTML 内容放在各自的文件夹中,将文件内容放在名为 `index.html` 的文件中。 + +例如,`about.md` 文件将转换成名为 `index.html` 的文件,该文件位于名为 `about`(`about/index.html`)的文件夹中。这样,当用户键入 `http:///about/` 时,文件夹中的 `index.html` 文件将显示在其浏览器中。 + +下面的 `Makefile` 将执行上述所有操作: + +``` +# Your book files +DEPENDENCIES= toc preface about + +# Placement of your HTML files +DOCS=docs + +all: web + +web: setup $(DEPENDENCIES) +        @cp $(DOCS)/toc/index.html $(DOCS) + + +# Creation and copy of stylesheet and images into +# the assets folder. This is important to deploy the +# website to Github Pages. +setup: +        @mkdir -p $(DOCS) +        @cp -r assets $(DOCS) + + +# Creation of folder and index.html file on a +# per-chapter basis + +$(DEPENDENCIES): +        @mkdir -p $(DOCS)/$@ +        @pandoc -s --toc web-metadata.yaml parts/$@.md \ +        -c /assets/pandoc.css -o $(DOCS)/$@/index.html + +clean: +        @rm -rf $(DOCS) + +.PHONY: all clean web setup +``` + +选项 `- c /assets/pandoc.css` 声明要使用的 CSS 样式表,它将从 `/assets/pandoc.cs` 中获取。也就是说,在 `` 标签内,Pandoc 会添加这样一行: + +``` + +``` + +使用下面的命令生成网页: + +``` +make +``` + +根文件夹现在应该包含如下所示的文件结构: + +``` +.---parts +|    |--- toc.md +|    |--- preface.md +|    |--- about.md +| +|---docs +    |--- assets/ +    |--- index.html +    |--- toc +    |     |--- index.html +    | +    |--- preface +    |     |--- index.html +    | +    |--- about +          |--- index.html +    +``` + +#### 部署网页 + +通过以下步骤将网页部署到 GitHub 上: + +1. 创建一个新的 GitHub 仓库 +2. 将内容推送到新创建的仓库 +3. 找到仓库设置中的 GitHub Pages 部分,选择 `Source` 选项让 GitHub 使用主分支的内容 + +你可以在 [GitHub Pages][5] 的网站上获得更多详细信息。 + +[我的书的网页][6] 便是通过上述过程生成的,可以在网页上查看结果。 + +### 生成电子书 + +#### 创建 ePub 格式的元信息文件 + +ePub 格式的元信息文件 `epub-meta.yaml` 和 HTML 元信息文件是类似的。主要区别在于 ePub 提供了其他模板变量,例如 `publisher` 和 `cover-image` 。ePub 格式图书的样式表可能与网页所用的不同,在这里我使用一个名为 `epub.css` 的样式表。 + +``` +--- +title: 'GRASP principles for the Object-oriented Mind' +publisher: 'Programming Language Fight Club' +author: Kiko Fernandez-Reyes +rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International +cover-image: assets/cover.png +stylesheet: assets/epub.css +... 
+``` + +将以下内容添加到之前的 `Makefile` 中: + +``` +epub: +        @pandoc -s --toc epub-meta.yaml \ +        $(addprefix parts/, $(DEPENDENCIES:=.md)) -o $(DOCS)/assets/book.epub +``` + +用于产生 ePub 格式图书的命令从 HTML 版本获取所有依赖项(每章的名称),向它们添加 Markdown 扩展,并在它们前面加上每一章的文件夹路径,以便让 Pandoc 知道如何进行处理。例如,如果 `$(DEPENDENCIES` 变量只包含 “前言” 和 “关于本书” 两章,那么 `Makefile` 将会这样调用: + +``` +@pandoc -s --toc epub-meta.yaml \ +parts/preface.md parts/about.md -o $(DOCS)/assets/book.epub +``` + +Pandoc 将提取这两章的内容,然后进行组合,最后生成 ePub 格式的电子书,并放在 `Assets` 文件夹中。 + +这是使用此过程创建 ePub 格式电子书的一个 [示例][7]。 + +### 过程总结 + +从 Markdown 文件创建网页和 ePub 格式电子书的过程并不困难,但有很多细节需要注意。遵循以下大纲可能使你更容易使用 Pandoc。 + +- HTML 图书: + - 使用 Markdown 语法创建每章内容 + - 添加元信息 + - 创建一个 `Makefile` 将各个部分组合在一起 + - 设置 GitHub Pages + - 部署 +- ePub 电子书: + - 使用之前创建的每一章内容 + - 添加新的元信息文件 + - 创建一个 `Makefile` 以将各个部分组合在一起 + - 设置 GitHub Pages + - 部署 + + +------ + +via: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc + +作者:[Kiko Fernandez-Reyes][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[jlztan](https://github.com/jlztan) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/kikofernandez +[1]: https://linux.cn/article-10228-1.html +[2]: https://pandoc.org/ +[3]: https://www.programmingfightclub.com/ +[4]: https://github.com/kikofernandez/programmingfightclub +[5]: https://pages.github.com/ +[6]: https://www.programmingfightclub.com/grasp-principles/ +[7]: https://github.com/kikofernandez/programmingfightclub/raw/master/docs/web_assets/demo.epub diff --git a/published/20181002 4 open source invoicing tools for small businesses.md b/published/201811/20181002 4 open source invoicing tools for small businesses.md similarity index 100% rename from published/20181002 4 open source invoicing tools for small businesses.md rename to published/201811/20181002 4 open source invoicing tools for small businesses.md diff --git a/translated/tech/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md b/published/201811/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md similarity index 67% rename from translated/tech/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md rename to published/201811/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md index a110960a9f..58996654e5 100644 --- a/translated/tech/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md +++ b/published/201811/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md @@ -1,42 +1,42 @@ -Greg Kroah-Hartman 解释内核社区如何保护 Linux -============================================================ +Greg Kroah-Hartman 解释内核社区是如何使 Linux 安全的 +============ ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kernel-security_0.jpg?itok=hOaTQwWV) -内核维护者 Greg Kroah-Hartman 谈论内核社区如何保护 Linux 不遭受损害。[Creative Commons Zero][2] -由于 Linux 使用量持续扩大,内核社区去提高全世界最广泛使用的技术 — Linux 内核的安全性的重要程序越来越高。安全不仅对企业客户很重要,它对消费者也很重要,因为 80% 的移动设备都使用了 Linux。在本文中,Linux 内核维护者 Greg Kroah-Hartman 带我们了解内核社区如何应对威胁。 +> 内核维护者 Greg Kroah-Hartman 谈论内核社区如何保护 Linux 不遭受损害。 + +由于 Linux 使用量持续扩大,内核社区去提高这个世界上使用最广泛的技术 —— Linux 内核的安全性的重要性越来越高。安全不仅对企业客户很重要,它对消费者也很重要,因为 80% 的移动设备都使用了 Linux。在本文中,Linux 内核维护者 Greg Kroah-Hartman 带我们了解内核社区如何应对威胁。 ### bug 不可避免 - ![Greg Kroah-Hartman](https://www.linux.com/sites/lcom/files/styles/floated_images/public/greg-k-h.png?itok=p4fREYuj "Greg Kroah-Hartman") 
-Greg Kroah-Hartman [Linux 基金会][1] +*Greg Kroah-Hartman [Linux 基金会][1]* -正如 Linus Torvalds 曾经说过,大多数安全问题都是 bug 造成的,而 bug 又是软件开发过程的一部分。是个软件就有 bug。 +正如 Linus Torvalds 曾经说过的,大多数安全问题都是 bug 造成的,而 bug 又是软件开发过程的一部分。是软件就有 bug。 -Kroah-Hartman 说:“就算是 bug ,我们也不知道它是安全的 bug 还是不安全的 bug。我修复的一个著名 bug,在三年后才被 Red Hat 认定为安全漏洞“。 +Kroah-Hartman 说:“就算是 bug,我们也不知道它是安全的 bug 还是不安全的 bug。我修复的一个著名 bug,在三年后才被 Red Hat 认定为安全漏洞“。 在消除 bug 方面,内核社区没有太多的办法,只能做更多的测试来寻找 bug。内核社区现在已经有了自己的安全团队,它们是由熟悉内核核心的内核开发者组成。 -Kroah Hartman 说:”当我们收到一个报告时,我们就让参与这个领域的核心开发者去修复它。在一些情况下,他们可能是同一个人,让他们进入安全团队可以更快地解决问题“。但他也强调,内核所有部分的开发者都必须清楚地了解这些问题,因为内核是一个可信环境,它必须被保护起来。 +Kroah-Hartman 说:”当我们收到一个报告时,我们就让参与这个领域的核心开发者去修复它。在一些情况下,他们可能是同一个人,让他们进入安全团队可以更快地解决问题“。但他也强调,内核所有部分的开发者都必须清楚地了解这些问题,因为内核是一个可信环境,它必须被保护起来。 -Kroah Hartman 说:”一旦我们修复了它,我们就将它放到我们的栈分析规则中,以便于以后不再重新出现这个 bug。“ +Kroah-Hartman 说:”一旦我们修复了它,我们就将它放到我们的栈分析规则中,以便于以后不再重新出现这个 bug。“ -除修复 bug 之外,内核社区也不断加固内核。Kroah Hartman 说:“我们意识到,我们需要一些主动的缓减措施。因此我们需要加固内核。” +除修复 bug 之外,内核社区也不断加固内核。Kroah-Hartman 说:“我们意识到,我们需要一些主动的缓减措施,因此我们需要加固内核。” -Kees Cook 和其他一些人付出了巨大的努力,带来了一直在内核之外的加固特性,并将它们合并或适配到内核中。在每个内核发行后,Cook 都对所有新的加固特性做一个总结。但是只加固内核是不够的,供应商必须要启用这些新特性来让它们充分发挥作用。但他们并没有这么做。 +Kees Cook 和其他一些人付出了巨大的努力,带来了一直在内核之外的加固特性,并将它们合并或适配到内核中。在每个内核发行后,Cook 都对所有新的加固特性做一个总结。但是只加固内核是不够的,供应商们必须要启用这些新特性来让它们充分发挥作用,但他们并没有这么做。 -Kroah-Hartman [每周发布一个稳定版内核][5],而为了长周期的支持,公司只从中挑选一个,以便于设备制造商能够利用它。但是,Kroah-Hartman 注意到,除了 Google Pixel 之外,大多数 Android 手机并不包含这些额外的安全加固特性,这就意味着,所有的这些手机都是有漏洞的。他说:“人们应该去启用这些加固特性”。 +Kroah-Hartman [每周发布一个稳定版内核][5],而为了长期的支持,公司们只从中挑选一个,以便于设备制造商能够利用它。但是,Kroah-Hartman 注意到,除了 Google Pixel 之外,大多数 Android 手机并不包含这些额外的安全加固特性,这就意味着,所有的这些手机都是有漏洞的。他说:“人们应该去启用这些加固特性”。 -Kroah-Hartman 说:“我购买了基于 Linux 内核 4.4 的所有旗舰级手机,去查看它们中哪些确实升级了新特性。结果我发现只有一家公司升级了它们的内核”。“我在整个供应链中努力去解决这个问题,因为这是一个很棘手的问题。它涉及许多不同的组织 — SoC 制造商、运营商、等等。关键点是,需要他们把我们辛辛苦苦设计的内核去推送给大家。 +Kroah-Hartman 说:“我购买了基于 Linux 内核 4.4 的所有旗舰级手机,去查看它们中哪些确实升级了新特性。结果我发现只有一家公司升级了它们的内核。……我在整个供应链中努力去解决这个问题,因为这是一个很棘手的问题。它涉及许多不同的组织 —— SoC 制造商、运营商等等。关键点是,需要他们把我们辛辛苦苦设计的内核去推送给大家。” -好消息是,与消息电子产品不一样,像 Red Hat 和 SUSE 这样的大供应商,在企业环境中持续对内核进行更新。使用容器、pod、和虚拟化的现代系统做到这一点更容易了。无需停机就可以毫不费力地更新和重启。事实上,现在来保证系统安全相比过去容易多了。 +好消息是,与消费电子产品不一样,像 Red Hat 和 SUSE 这样的大供应商,在企业环境中持续对内核进行更新。使用容器、pod 和虚拟化的现代系统做到这一点更容易了。无需停机就可以毫不费力地更新和重启。事实上,现在来保证系统安全相比过去容易多了。 ### Meltdown 和 Spectre -没有任何一个关于安全的讨论能够避免提及 Meltdown 和 Spectre。内核社区一直致力于修改新发现的和已查明的安全漏洞。不管怎样,Intel 已经因为这些事情改变了它们的策略。 +没有任何一个关于安全的讨论能够避免提及 Meltdown 和 Spectre 缺陷。内核社区一直致力于修改新发现的和已查明的安全漏洞。不管怎样,Intel 已经因为这些事情改变了它们的策略。 Kroah-Hartman 说:“他们已经重新研究如何处理安全 bug,以及如何与社区合作,因为他们知道他们做错了。内核已经修复了几乎所有大的 Spectre 问题,但是还有一些小问题仍在处理中”。 @@ -57,7 +57,7 @@ via: https://www.linux.com/blog/2018/10/greg-kroah-hartman-explains-how-kernel-c 作者:[SWAPNIL BHARTIYA][a] 选题:[oska874][b] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181004 Functional programming in Python- Immutable data structures.md b/published/201811/20181004 Functional programming in Python- Immutable data structures.md similarity index 100% rename from published/20181004 Functional programming in Python- Immutable data structures.md rename to published/201811/20181004 Functional programming in Python- Immutable data structures.md diff --git a/published/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md b/published/201811/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md similarity index 100% 
rename from published/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md rename to published/201811/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md diff --git a/translated/tech/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md b/published/201811/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md similarity index 70% rename from translated/tech/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md rename to published/201811/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md index 6f8dd512b8..63f74a4816 100644 --- a/translated/tech/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md +++ b/published/201811/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md @@ -1,13 +1,13 @@ 服务器的 LinuxBoot:告别 UEFI、拥抱开源 -============================================================ +============ -[LinuxBoot][13] 是私有的 [UEFI][15] 固件的 [替代者][14]。它发布于去年,并且现在已经得到主流的硬件生产商的认可成为他们产品的默认固件。去年,LinuxBoot 已经被 Linux 基金会接受并[纳入][16]开源家族。 +[LinuxBoot][13] 是私有的 [UEFI][15] 固件的开源 [替代品][14]。它发布于去年,并且现在已经得到主流的硬件生产商的认可成为他们产品的默认固件。去年,LinuxBoot 已经被 Linux 基金会接受并[纳入][16]开源家族。 这个项目最初是由 Ron Minnich 在 2017 年 1 月提出,它是 LinuxBIOS 的创造人,并且在 Google 领导 [coreboot][17] 的工作。 -Google、Facebook、[Horizon 计算解决方案][18]、和 [Two Sigma][19] 共同合作,在运行 Linux 的服务器上开发 [LinuxBoot 项目][20](以前叫 [NERF][21])。 +Google、Facebook、[Horizon Computing Solutions][18]、和 [Two Sigma][19] 共同合作,在运行 Linux 的服务器上开发 [LinuxBoot 项目][20](以前叫 [NERF][21])。 -它开放允许服务器用户去很容易地定制他们自己的引导脚本、修复问题、构建他们自己的[运行时][22] 和用他们自己的密钥去 [刷入固件][23]。他们不需要等待供应商的更新。 +它的开放性允许服务器用户去很容易地定制他们自己的引导脚本、修复问题、构建他们自己的 [运行时环境][22] 和用他们自己的密钥去 [刷入固件][23],而不需要等待供应商的更新。 下面是第一次使用 NERF BIOS 去引导 [Ubuntu Xenial][24] 的视频: @@ -17,40 +17,36 @@ Google、Facebook、[Horizon 计算解决方案][18]、和 [Two Sigma][19] 共 ### LinuxBoot 超越 UEFI 的优势 -![LinuxBoot vs UEFI](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/linuxboot-uefi.png) +![LinuxBoot vs UEFI](https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/linuxboot-uefi.png?w=800&ssl=1) 下面是一些 LinuxBoot 超越 UEFI 的主要优势: -### 启动速度显著加快 +#### 启动速度显著加快 它能在 20 秒钟以内完成服务器启动,而 UEFI 需要几分钟的时间。 -### 显著的灵活性 +#### 显著的灵活性 -LinuxBoot 可以用在各种设备、文件系统和 Linux 支持的协议上。 +LinuxBoot 可以用在 Linux 支持的各种设备、文件系统和协议上。 -### 更加安全 +#### 更加安全 相比 UEFI 而言,LinuxBoot 在设备驱动程序和文件系统方面进行更加严格的检查。 -我们可能主张 UEFI 是使用 [EDK II][25] 而部分开源的,而 LinuxBoot 是部分闭源的。但有人[提出][26],即便有像 EDK II 这样的代码,但也没有做适当的审查级别和像 [Linux 内核][27] 那样的正确性检查,并且在 UEFI 的开发中还大量使用闭源组件。 +我们可能争辩说 UEFI 是使用 [EDK II][25] 而部分开源的,而 LinuxBoot 是部分闭源的。但有人[提出][26],即便有像 EDK II 这样的代码,但也没有做适当的审查级别和像 [Linux 内核][27] 那样的正确性检查,并且在 UEFI 的开发中还大量使用闭源组件。 -其它方面,LinuxBoot 有非常少的二进制文件,它仅用了大约一百多 KB,相比而言,UEFI 的二进制文件有 32 MB。 +另一方面,LinuxBoot 有非常小的二进制文件,它仅用了大约几百 KB,相比而言,而 UEFI 的二进制文件有 32 MB。 严格来说,LinuxBoot 与 UEFI 不一样,更适合于[可信计算基础][28]。 -[建议阅读 Linux 上最好的自由开源的 Adobe 产品的替代者][29] - LinuxBoot 有一个基于 [kexec][30] 的引导加载器,它不支持启动 Windows/非 Linux 内核,但这影响并不大,因为主流的云都是基于 Linux 的服务器。 ### LinuxBoot 的采用者 -自 2011 年, [Facebook][32] 发起了[开源计算项目][31],它的一些服务器是基于[开源][33]设计的,目的是构建的数据中心更加高效。LinuxBoot 已经在下面列出的几个开源计算硬件上做了测试: +自 2011 年, [Facebook][32] 发起了[开源计算项目(OCP)][31],它的一些服务器是基于[开源][33]设计的,目的是构建的数据中心更加高效。LinuxBoot 已经在下面列出的几个开源计算硬件上做了测试: * Winterfell - * Leopard - * Tioga Pass 更多 [OCP][34] 硬件在[这里][35]有一个简短的描述。OCP 基金会通过[开源系统固件][36]运行一个专门的固件项目。 @@ -58,9 +54,7 @@ LinuxBoot 有一个基于 [kexec][30] 的引导加载器,它不支持启动 支持 LinuxBoot 的其它一些设备有: * [QEMU][9] 仿真的 [Q35][10] 系统 - * [Intel S2600wf][11] - * 
[Dell R630][12] 上个月底(2018 年 9 月 24 日),[Equus 计算解决方案][37] [宣布][38] 发行它的 [白盒开放式™][39] M2660 和 M2760 服务器,作为它们的定制的、成本优化的、开放硬件服务器和存储平台的一部分。它们都支持 LinuxBoot 灵活定制服务器的 BIOS,以提升安全性和设计一个非常快的纯净的引导体验。 @@ -73,10 +67,10 @@ LinuxBoot 在 [GitHub][40] 上有很丰富的文档。你喜欢它与 UEFI 不 via: https://itsfoss.com/linuxboot-uefi/ -作者:[ Avimanyu Bandyopadhyay][a] +作者:[Avimanyu Bandyopadhyay][a] 选题:[oska874][b] 译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181008 3 areas to drive DevOps change.md b/published/201811/20181008 3 areas to drive DevOps change.md similarity index 100% rename from published/20181008 3 areas to drive DevOps change.md rename to published/201811/20181008 3 areas to drive DevOps change.md diff --git a/published/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md b/published/201811/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md similarity index 100% rename from published/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md rename to published/201811/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md diff --git a/translated/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md b/published/201811/20181008 Play Windows games on Fedora with Steam Play and Proton.md similarity index 56% rename from translated/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md rename to published/201811/20181008 Play Windows games on Fedora with Steam Play and Proton.md index 26d315f64b..c0859f1dc1 100644 --- a/translated/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md +++ b/published/201811/20181008 Play Windows games on Fedora with Steam Play and Proton.md @@ -3,7 +3,7 @@ ![](https://fedoramagazine.org/wp-content/uploads/2018/09/steam-proton-816x345.jpg) -几周前,Steam 宣布要给 Steam Play 增加一个新组件,用于支持在 Linux 平台上使用 Proton 来玩 Windows 的游戏,这个组件是 WINE 的一个分支。这个功能仍然处于测试阶段,且并非对所有游戏都有效。这里有一些关于 Steam 和 Proton 的细节。 +之前,Steam [宣布][1]要给 Steam Play 增加一个新组件,用于支持在 Linux 平台上使用 Proton 来玩 Windows 的游戏,这个组件是 WINE 的一个分支。这个功能仍然处于测试阶段,且并非对所有游戏都有效。这里有一些关于 Steam 和 Proton 的细节。 据 Steam 网站称,测试版本中有以下这些新功能: @@ -13,29 +13,27 @@ * 改进了对游戏控制器的支持,游戏自动识别所有 Steam 支持的控制器,比起游戏的原始版本,能够获得更多开箱即用的控制器兼容性。 * 和 vanilla WINE 比起来,游戏的多线程性能得到了极大的提高。 - - ### 安装 如果你有兴趣,想尝试一下 Steam 和 Proton。请按照下面这些简单的步骤进行操作。(请注意,如果你已经安装了最新版本的 Steam,可以忽略启用 Steam 测试版这个第一步。在这种情况下,你不再需要通过 Steam 测试版来使用 Proton。) -打开 Steam 并登陆到你的帐户,这个截屏示例显示的是在使用 Proton 之前仅支持22个游戏。 +打开 Steam 并登陆到你的帐户,这个截屏示例显示的是在使用 Proton 之前仅支持 22 个游戏。 ![][3] -现在点击客户端顶部的 Steam 选项,这会显示一个下拉菜单。然后选择设置。 +现在点击客户端顶部的 “Steam” 选项,这会显示一个下拉菜单。然后选择“设置”。 ![][4] -现在弹出了设置窗口,选择账户选项,并在 Beta participation 旁边,点击更改。 +现在弹出了设置窗口,选择“账户”选项,并在 “参与 Beta 测试” 旁边,点击“更改”。 ![][5] -现在将 None 更改为 Steam Beta Update。 +现在将 “None” 更改为 “Steam Beta Update”。 ![][6] -点击确定,然后系统会提示你重新启动。 +点击“确定”,然后系统会提示你重新启动。 ![][7] @@ -43,11 +41,11 @@ ![][8] -在重新启动之后,返回到上面的设置窗口。这次你会看到一个新选项。确定有为提供支持的游戏使用 Stream Play 这个复选框,让所有的游戏都使用 Steam Play 进行运行,而不是 steam 中游戏特定的选项。兼容性工具应该是 Proton。 +在重新启动之后,返回到上面的设置窗口。这次你会看到一个新选项。确定勾选了“为提供支持的游戏使用 Stream Play” 、“让所有的游戏都使用 Steam Play 运行”,“使用这个工具替代 Steam 中游戏特定的选项”。这个兼容性工具应该就是 Proton。 ![][9] -Steam 客户端会要求你重新启动,照做,然后重新登陆你的 Steam 账户,你的 Linux 的游戏库就能得到扩展了。 +Steam 客户端会要求你重新启动,照做,然后重新登录你的 Steam 账户,你的 Linux 的游戏库就能得到扩展了。 ![][10] @@ -69,7 +67,7 @@ Steam 客户端会要求你重新启动,照做,然后重新登陆你的 Stea ![][16] -一些游戏可能会受到 Proton 测试性质的影响,在下面这个叫 Chantelise 游戏中,没有了声音并且帧率很低。请记住这个功能仍然在测试阶段,Fedora 
不会对结果负责。如果你想要了解更多,社区已经创建了一个 Google 文档,这个文档里有已经测试过的游戏的列表。 +一些游戏可能会受到 Proton 测试性质的影响,在这个叫 Chantelise 游戏中,没有了声音并且帧率很低。请记住这个功能仍然在测试阶段,Fedora 不会对结果负责。如果你想要了解更多,社区已经创建了一个 Google 文档,这个文档里有已经测试过的游戏的列表。 -------------------------------------------------------------------------------- @@ -79,25 +77,25 @@ via: https://fedoramagazine.org/play-windows-games-steam-play-proton/ 作者:[Francisco J. Vergara Torres][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[hopefully2333](https://github.com/hopefully2333) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://fedoramagazine.org/author/patxi/ [1]: https://steamcommunity.com/games/221410/announcements/detail/1696055855739350561 [2]: https://fedoramagazine.org/third-party-repositories-fedora/ -[3]: https://fedoramagazine.org/wp-content/uploads/2018/09/listOfGamesLinux-300x197.png -[4]: https://fedoramagazine.org/wp-content/uploads/2018/09/1-300x169.png -[5]: https://fedoramagazine.org/wp-content/uploads/2018/09/2-300x196.png -[6]: https://fedoramagazine.org/wp-content/uploads/2018/09/4-300x272.png -[7]: https://fedoramagazine.org/wp-content/uploads/2018/09/6-300x237.png -[8]: https://fedoramagazine.org/wp-content/uploads/2018/09/7-300x126.png -[9]: https://fedoramagazine.org/wp-content/uploads/2018/09/10-300x237.png -[10]: https://fedoramagazine.org/wp-content/uploads/2018/09/12-300x196.png -[11]: https://fedoramagazine.org/wp-content/uploads/2018/09/13-300x196.png -[12]: https://fedoramagazine.org/wp-content/uploads/2018/09/14-300x195.png -[13]: https://fedoramagazine.org/wp-content/uploads/2018/09/15-300x196.png -[14]: https://fedoramagazine.org/wp-content/uploads/2018/09/16-300x195.png -[15]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-14-59-300x169.png -[16]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-19-34-300x169.png +[3]: https://fedoramagazine.org/wp-content/uploads/2018/09/listOfGamesLinux-768x505.png +[4]: https://fedoramagazine.org/wp-content/uploads/2018/09/1-768x432.png +[5]: https://fedoramagazine.org/wp-content/uploads/2018/09/2-768x503.png +[6]: https://fedoramagazine.org/wp-content/uploads/2018/09/4.png +[7]: https://fedoramagazine.org/wp-content/uploads/2018/09/6.png +[8]: https://fedoramagazine.org/wp-content/uploads/2018/09/7.png +[9]: https://fedoramagazine.org/wp-content/uploads/2018/09/10.png +[10]: https://fedoramagazine.org/wp-content/uploads/2018/09/12-768x503.png +[11]: https://fedoramagazine.org/wp-content/uploads/2018/09/13-768x501.png +[12]: https://fedoramagazine.org/wp-content/uploads/2018/09/14-768x498.png +[13]: https://fedoramagazine.org/wp-content/uploads/2018/09/15-768x501.png +[14]: https://fedoramagazine.org/wp-content/uploads/2018/09/16-768x500.png +[15]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-14-59-768x432.png +[16]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-19-34-768x432.png [17]: https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmFG8TzOHNnHoj8Q1f8DIFe8-8/edit#gid=1003113831 diff --git a/published/20181010 5 alerting and visualization tools for sysadmins.md b/published/201811/20181010 5 alerting and visualization tools for sysadmins.md similarity index 100% rename from published/20181010 5 alerting and visualization tools for sysadmins.md rename to published/201811/20181010 5 alerting and visualization tools for 
sysadmins.md diff --git a/published/20181010 An introduction to using tcpdump at the Linux command line.md b/published/201811/20181010 An introduction to using tcpdump at the Linux command line.md similarity index 100% rename from published/20181010 An introduction to using tcpdump at the Linux command line.md rename to published/201811/20181010 An introduction to using tcpdump at the Linux command line.md diff --git a/translated/talk/20181014 How Lisp Became God-s Own Programming Language.md b/published/201811/20181014 How Lisp Became God-s Own Programming Language.md similarity index 94% rename from translated/talk/20181014 How Lisp Became God-s Own Programming Language.md rename to published/201811/20181014 How Lisp Became God-s Own Programming Language.md index 4666cad45e..017a67799f 100644 --- a/translated/talk/20181014 How Lisp Became God-s Own Programming Language.md +++ b/published/201811/20181014 How Lisp Became God-s Own Programming Language.md @@ -30,6 +30,8 @@ Lisp 是怎么成为上帝的编程语言的 > 我知道,上帝偏爱那一门 > 名字是四个字母的语言。 + +(LCTT 译注:参见 “四个字母”,参见:[四字神名](https://zh.wikipedia.org/wiki/%E5%9B%9B%E5%AD%97%E7%A5%9E%E5%90%8D),致谢 [no1xsyzy](https://github.com/LCTT/TranslateProject/issues/11320)) 以下这句话我实在不好在人前说;不过,我还是觉得,这样一种 “Lisp 是奥术魔法”的文化模因实在是有史以来最奇异、最迷人的东西。Lisp 是象牙塔的产物,是人工智能研究的工具;因此,它对于编程界的俗人而言总是陌生的,甚至是带有神秘色彩的。然而,当今的程序员们[开始怂恿彼此,“在你死掉之前至少试一试 Lisp”][4],就像这是一种令人恍惚入迷的致幻剂似的。尽管 Lisp 是广泛使用的编程语言中第二古老的(只比 Fortran 年轻一岁)[^1] ,程序员们也仍旧在互相怂恿。想象一下,如果你的工作是为某种组织或者团队推广一门新的编程语言的话,忽悠大家让他们相信你的新语言拥有神力难道不是绝佳的策略吗?—— 但你如何能够做到这一点呢?或者,换句话说,一门编程语言究竟是如何变成人们口中“隐晦知识的载体”的呢? @@ -83,7 +85,7 @@ SICP 究竟有多奇怪这一点值得好好说;因为我认为,时至今日 *SICP 封面上的画作。* -说真的,这上面画的究竟是怎么一回事?为什么桌子会长着动物的腿?为什么这个女人指着桌子?墨水瓶又是干什么用的?我们是不是该说,这位巫师已经破译了宇宙的隐藏奥秘,而所有这些奥秘就蕴含在 eval/apply 循环和 Lambda 微积分之中?看似就是如此。单单是这张图片,就一定对人们如今谈论 Lisp 的方式产生了难以计量的影响。 +说真的,这上面画的究竟是怎么一回事?为什么桌子会长着动物的腿?为什么这个女人指着桌子?墨水瓶又是干什么用的?我们是不是该说,这位巫师已经破译了宇宙的隐藏奥秘,而所有这些奥秘就蕴含在 eval/apply 循环和 Lambda 演算之中?看似就是如此。单单是这张图片,就一定对人们如今谈论 Lisp 的方式产生了难以计量的影响。 然而,这本书的内容通常并不比封面正常多少。SICP 跟你读过的所有计算机科学教科书都不同。在引言中,作者们表示,这本书不只教你怎么用 Lisp 编程 —— 它是关于“现象的三个焦点:人的心智、复数的计算机程序,和计算机”的作品 [^19]。在之后,他们对此进行了解释,描述了他们对如下观点的坚信:编程不该被当作是一种计算机科学的训练,而应该是“程序性认识论procedural epistemology”的一种新表达方式 [^20]。程序是将那些偶然被送入计算机的思想组织起来的全新方法。这本书的第一章简明地介绍了 Lisp,但是之后的绝大部分都在讲述更加抽象的概念。其中包括了对不同编程范式的讨论,对于面向对象系统中“时间”和“一致性”的讨论;在书中的某一处,还有关于通信的基本限制可能会如何带来同步问题的讨论 —— 而这些基本限制在通信中就像是光速不变在相对论中一样关键 [^21]。都是些高深难懂的东西。 @@ -97,7 +99,7 @@ SICP 究竟有多奇怪这一点值得好好说;因为我认为,时至今日 理所当然地,确定人们对 Lisp 重新燃起热情的具体时间并不可能;但这多半是保罗·格雷厄姆发表他那几篇声称 Lisp 是首选入门语言的短文之后的事了。保罗·格雷厄姆是 Y-Combinator 的联合创始人和《Hacker News》的创始者,他这几篇短文有很大的影响力。例如,在短文《[胜于平庸][20]Beating the Averages》中,他声称 Lisp 宏使 Lisp 比其它语言更强。他说,因为他在自己创办的公司 Viaweb 中使用 Lisp,他得以比竞争对手更快地推出新功能。至少,[一部分程序员][21]被说服了。然而,庞大的主流程序员群体并未换用 Lisp。 -实际上出现的情况是,Lisp 并未流行,但越来越多 Lisp 式的特性被加入到广受欢迎的语言中。Python 有了列表理解。C# 有了 Linq。Ruby……嗯,[Ruby 是 Lisp 的一种][22]。就如格雷厄姆之前在 2001 年提到的那样,“在一系列常用语言中所体现出的‘默认语言’正越发朝着 Lisp 的方向演化” [^23]。尽管其它语言变得越来越像 Lisp,Lisp 本身仍然保留了其作为“很少人了解但是大家都该学的神秘语言”的特殊声望。在 1980 年,Lisp 的诞生二十周年纪念日上,麦卡锡写道,Lisp 之所以能够存活这么久,是因为它具备“编程语言领域中的某种近似局部最优” [^24]。这句话并未充分地表明 Lisp 的真正影响力。Lisp 能够存活超过半个世纪之久,并非因为程序员们一年年地勉强承认它就是最好的编程工具;事实上,即使绝大多数程序员根本不用它,它还是存活了下来。多亏了它的起源和它的人工智能研究用途,说不定还要多亏 SICP 的遗产,Lisp 一直都那么让人着迷。在我们能够想象上帝用其它新的编程语言创造世界之前,Lisp 都不会走下神坛。 +实际上出现的情况是,Lisp 并未流行,但越来越多 Lisp 式的特性被加入到广受欢迎的语言中。Python 有了列表推导式。C# 有了 Linq。Ruby……嗯,[Ruby 是 Lisp 的一种][22]。就如格雷厄姆之前在 2001 年提到的那样,“在一系列常用语言中所体现出的‘默认语言’正越发朝着 Lisp 的方向演化” [^23]。尽管其它语言变得越来越像 Lisp,Lisp 本身仍然保留了其作为“很少人了解但是大家都该学的神秘语言”的特殊声望。在 1980 年,Lisp 的诞生二十周年纪念日上,麦卡锡写道,Lisp 之所以能够存活这么久,是因为它具备“编程语言领域中的某种近似局部最优” [^24]。这句话并未充分地表明 Lisp 的真正影响力。Lisp 
能够存活超过半个世纪之久,并非因为程序员们一年年地勉强承认它就是最好的编程工具;事实上,即使绝大多数程序员根本不用它,它还是存活了下来。多亏了它的起源和它的人工智能研究用途,说不定还要多亏 SICP 的遗产,Lisp 一直都那么让人着迷。在我们能够想象上帝用其它新的编程语言创造世界之前,Lisp 都不会走下神坛。 -------------------------------------------------------------------------------- diff --git a/published/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md b/published/201811/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md similarity index 100% rename from published/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md rename to published/201811/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md diff --git a/published/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md b/published/201811/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md similarity index 100% rename from published/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md rename to published/201811/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md diff --git a/published/20181017 Browsing the web with Min, a minimalist open source web browser.md b/published/201811/20181017 Browsing the web with Min, a minimalist open source web browser.md similarity index 100% rename from published/20181017 Browsing the web with Min, a minimalist open source web browser.md rename to published/201811/20181017 Browsing the web with Min, a minimalist open source web browser.md diff --git a/published/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md b/published/201811/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md similarity index 100% rename from published/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md rename to published/201811/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md diff --git a/published/20181017 Design faster web pages, part 2- Image replacement.md b/published/201811/20181017 Design faster web pages, part 2- Image replacement.md similarity index 100% rename from published/20181017 Design faster web pages, part 2- Image replacement.md rename to published/201811/20181017 Design faster web pages, part 2- Image replacement.md diff --git a/published/20181017 How To Determine Which System Manager Is Running On Linux System.md b/published/201811/20181017 How To Determine Which System Manager Is Running On Linux System.md similarity index 100% rename from published/20181017 How To Determine Which System Manager Is Running On Linux System.md rename to published/201811/20181017 How To Determine Which System Manager Is Running On Linux System.md diff --git a/published/20181019 Edit your videos with Pitivi on Fedora.md b/published/201811/20181019 Edit your videos with Pitivi on Fedora.md similarity index 100% rename from published/20181019 Edit your videos with Pitivi on Fedora.md rename to published/201811/20181019 Edit your videos with Pitivi on Fedora.md diff --git a/published/20181019 How to use Pandoc to produce a research paper.md b/published/201811/20181019 How to use Pandoc to produce a research paper.md similarity index 100% rename from published/20181019 How to use Pandoc to produce a research paper.md rename to published/201811/20181019 How to use Pandoc to produce a research paper.md diff --git a/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md 
b/published/201811/20181019 What is an SRE and how does it relate to DevOps.md similarity index 83% rename from translated/talk/20181019 What is an SRE and how does it relate to DevOps.md rename to published/201811/20181019 What is an SRE and how does it relate to DevOps.md index 80700d6fb9..03bd773fa7 100644 --- a/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md +++ b/published/201811/20181019 What is an SRE and how does it relate to DevOps.md @@ -1,15 +1,15 @@ 什么是 SRE?它和 DevOps 是怎么关联的? ===== -大型企业里 SRE 角色比较常见,不过小公司也需要 SRE。 +> 大型企业里 SRE 角色比较常见,不过小公司也需要 SRE。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP) -虽然站点可靠性工程师(SRE)角色在近几年变得流行起来,但是很多人 —— 甚至是软件行业里的 —— 还不知道 SRE 是什么或者 SRE 都干些什么。为了搞清楚这些问题,这篇文章解释了 SRE 的含义,还有 SRE 怎样关联 DevOps,以及在工程师团队规模不大的组织里 SRE 该如何工作。 +虽然站点可靠性工程师site reliability engineer(SRE)角色在近几年变得流行起来,但是很多人 —— 甚至是软件行业里的 —— 还不知道 SRE 是什么或者 SRE 都干些什么。为了搞清楚这些问题,这篇文章解释了 SRE 的含义,还有 SRE 怎样关联 DevOps,以及在工程师团队规模不大的组织里 SRE 该如何工作。 ### 什么是站点可靠性工程? -谷歌的几个工程师写的《 [SRE:谷歌运维解密][1]》被认为是站点可靠性工程的权威书籍。谷歌的工程副总裁 Ben Treynor Sloss 在二十一世纪初[创造了这个术语][2]。他是这样定义的:“当你让软件工程师设计运维功能时,SRE 就产生了。” +谷歌的几个工程师写的《[SRE:谷歌运维解密][1]》被认为是站点可靠性工程的权威书籍。谷歌的工程副总裁 Ben Treynor Sloss 在二十一世纪初[创造了这个术语][2]。他是这样定义的:“当你让软件工程师设计运维功能时,SRE 就产生了。” 虽然系统管理员从很久之前就在写代码,但是过去的很多时候系统管理团队是手动管理机器的。当时他们管理的机器可能有几十台或者上百台,不过当这个数字涨到了几千甚至几十万的时候,就不能简单的靠人去解决问题了。规模如此大的情况下,很明显应该用代码去管理机器(以及机器上运行的软件)。 @@ -19,13 +19,13 @@ ### SRE 和 DevOps -站点可靠性工程的核心,就是对 DevOps 范例的实践。[DevOps 的定义][3]有很多种方式。开发团队(“devs”)和运维(“ops”)团队相互分离的传统模式下,写代码的团队在服务交付给用户使用之后就不再对服务状态负责了。开发团队“把代码扔到墙那边”让运维团队去部署和支持。 +站点可靠性工程的核心,就是对 DevOps 范例的实践。[DevOps 的定义][3]有很多种方式。开发团队(“dev”)和运维(“ops”)团队相互分离的传统模式下,写代码的团队在将服务交付给用户使用之后就不再对服务状态负责了。开发团队“把代码扔到墙那边”让运维团队去部署和支持。 这种情况会导致大量失衡。开发和运维的目标总是不一致 —— 开发希望用户体验到“最新最棒”的代码,但是运维想要的是变更尽量少的稳定系统。运维是这样假定的,任何变更都可能引发不稳定,而不做任何变更的系统可以一直保持稳定。(减少软件的变更次数并不是避免故障的唯一因素,认识到这一点很重要。例如,虽然你的 web 应用保持不变,但是当用户数量涨到十倍时,服务可能就会以各种方式出问题。) DevOps 理念认为通过合并这两个岗位就能够消灭争论。如果开发团队时刻都想把新代码部署上线,那么他们也必须对新代码引起的故障负责。就像亚马逊的 [Werner Vogels 说的][4]那样,“谁开发,谁运维”(生产环境)。但是开发人员已经有一大堆问题了。他们不断的被推动着去开发老板要的产品功能。再让他们去了解基础设施,包括如何部署、配置还有监控服务,这对他们的要求有点太多了。所以就需要 SRE 了。 -开发一个 web 应用的时候经常是很多人一起参与。有用户界面设计师,图形设计师,前端工程师,后端工程师,还有许多其他工种(视技术选型的具体情况而定)。如何管理写好的代码也是需求之一(例如部署,配置,监控)—— 这是 SRE 的专业领域。但是,就像前端工程师受益于后端领域的知识一样(例如从数据库获取数据的方法),SRE 理解部署系统的工作原理,知道如何满足特定的代码或者项目的具体需求。 +开发一个 web 应用的时候经常是很多人一起参与。有用户界面设计师、图形设计师、前端工程师、后端工程师,还有许多其他工种(视技术选型的具体情况而定)。如何管理写好的代码也是需求之一(例如部署、配置、监控)—— 这是 SRE 的专业领域。但是,就像前端工程师受益于后端领域的知识一样(例如从数据库获取数据的方法),SRE 理解部署系统的工作原理,知道如何满足特定的代码或者项目的具体需求。 所以 SRE 不仅仅是“写代码的运维工程师”。相反,SRE 是开发团队的成员,他们有着不同的技能,特别是在发布部署、配置管理、监控、指标等方面。但是,就像前端工程师必须知道如何从数据库中获取数据一样,SRE 也不是只负责这些领域。为了提供更容易升级、管理和监控的产品,整个团队共同努力。 @@ -37,7 +37,7 @@ DevOps 理念认为通过合并这两个岗位就能够消灭争论。如果开 让开发人员做 SRE 最显著的优点是,团队规模变大的时候也能很好的扩展。而且,开发人员将会全面地了解应用的特性。但是,许多初创公司的基础设施包含了各种各样的 SaaS 产品,这种多样性在基础设施上体现的最明显,因为连基础设施本身也是多种多样。然后你们在某个基础设施上引入指标系统、站点监控、日志分析、容器等等。这些技术解决了一部分问题,也增加了复杂度。开发人员除了要了解应用程序的核心技术(比如开发语言),还要了解上述所有技术和服务。最终,掌握所有的这些技术让人无法承受。 -另一种方案是聘请专家专职做 SRE。他们专注于发布部署、配置管理、监控和指标,可以节省开发人员的时间。这种方案的缺点是,SRE 的时间必须分配给多个不同的应用(就是说 SRE 需要贯穿整个工程部门)。 这可能意味着 SRE 没时间对任何应用深入学习,然而他们可以站在一个能看到服务全貌的高度,知道各个部分是怎么组合在一起的。 这个“ 三万英尺高的视角”可以帮助 SRE 从系统整体上考虑,哪些薄弱环节需要优先修复。 +另一种方案是聘请专家专职做 SRE。他们专注于发布部署、配置管理、监控和指标,可以节省开发人员的时间。这种方案的缺点是,SRE 的时间必须分配给多个不同的应用(就是说 SRE 需要贯穿整个工程部门)。 这可能意味着 SRE 没时间对任何应用深入学习,然而他们可以站在一个能看到服务全貌的高度,知道各个部分是怎么组合在一起的。 这个“三万英尺高的视角”可以帮助 SRE 从系统整体上考虑,哪些薄弱环节需要优先修复。 有一个关键信息我还没提到:其他的工程师。他们可能很渴望了解发布部署的原理,也很想尽全力学会使用指标系统。而且,雇一个 SRE 
可不是一件简单的事儿。因为你要找的是一个既懂系统管理又懂软件工程的人。(我之所以明确地说软件工程而不是说“能写代码”,是因为除了写代码之外软件工程还包括很多东西,比如编写良好的测试或文档。) @@ -54,7 +54,7 @@ via: https://opensource.com/article/18/10/sre-startup 作者:[Craig Sebenik][a] 选题:[lujun9972][b] 译者:[BeliteX](https://github.com/belitex) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181022 5 tips for choosing the right open source database.md b/published/201811/20181022 5 tips for choosing the right open source database.md similarity index 100% rename from published/20181022 5 tips for choosing the right open source database.md rename to published/201811/20181022 5 tips for choosing the right open source database.md diff --git a/published/20181022 How to set up WordPress on a Raspberry Pi.md b/published/201811/20181022 How to set up WordPress on a Raspberry Pi.md similarity index 100% rename from published/20181022 How to set up WordPress on a Raspberry Pi.md rename to published/201811/20181022 How to set up WordPress on a Raspberry Pi.md diff --git a/published/20181023 Getting started with functional programming in Python using the toolz library.md b/published/201811/20181023 Getting started with functional programming in Python using the toolz library.md similarity index 100% rename from published/20181023 Getting started with functional programming in Python using the toolz library.md rename to published/201811/20181023 Getting started with functional programming in Python using the toolz library.md diff --git a/published/20181024 4 cool new projects to try in COPR for October 2018.md b/published/201811/20181024 4 cool new projects to try in COPR for October 2018.md similarity index 100% rename from published/20181024 4 cool new projects to try in COPR for October 2018.md rename to published/201811/20181024 4 cool new projects to try in COPR for October 2018.md diff --git a/published/20181024 Get organized at the Linux command line with Calcurse.md b/published/201811/20181024 Get organized at the Linux command line with Calcurse.md similarity index 100% rename from published/20181024 Get organized at the Linux command line with Calcurse.md rename to published/201811/20181024 Get organized at the Linux command line with Calcurse.md diff --git a/translated/tech/20181025 Monitoring database health and behavior- Which metrics matter.md b/published/201811/20181025 Monitoring database health and behavior- Which metrics matter.md similarity index 60% rename from translated/tech/20181025 Monitoring database health and behavior- Which metrics matter.md rename to published/201811/20181025 Monitoring database health and behavior- Which metrics matter.md index 5fbaf71b69..b8cfabc248 100644 --- a/translated/tech/20181025 Monitoring database health and behavior- Which metrics matter.md +++ b/published/201811/20181025 Monitoring database health and behavior- Which metrics matter.md @@ -1,9 +1,11 @@ -监测数据库的健康和行为: 有哪些重要指标? +监测数据库的健康和行为:有哪些重要指标? 
====== -对数据库的监测可能过于困难或者没有监测到关键点。本文将讲述如何正确的监测数据库。 + +> 对数据库的监测可能过于困难或者没有找到关键点。本文将讲述如何正确的监测数据库。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D) -我们没有足够的讨论数据库。在这个充满监测仪器的时代,我们监测我们的应用程序、基础设施、甚至我们的用户,但有时忘记我们的数据库也值得被监测。这很大程度是因为数据库表现的很好,以至于我们单纯地信任它能把任务完成的很好。信任固然重要,但能够证明它的表现确实如我们所期待的那样就更好了。 +我们没有对数据库讨论过多少。在这个充满监测仪器的时代,我们监测我们的应用程序、基础设施、甚至我们的用户,但有时忘记我们的数据库也值得被监测。这很大程度是因为数据库表现的很好,以至于我们单纯地信任它能把任务完成的很好。信任固然重要,但能够证明它的表现确实如我们所期待的那样就更好了。 ![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image1_-_bffs.png?itok=BZQM_Fos) @@ -13,11 +15,11 @@ ![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image5_fire.png?itok=wsip2Fa4) -更具体地说,数据库是系统健康和行为的重要标志。数据库中的异常行为能够指出应用程序中出现问题的区域。另外,当应用程序中有异常行为时,你可以利用数据库的指标来迅速完成排除故障的过程。 +更具体地说,数据库是系统健康和行为的重要标志。数据库中的异常行为能够指出应用程序中出现问题的区域。另外,当应用程序中有异常行为时,你可以利用数据库的指标来迅速完成排除故障的过程。 ### 问题 -最轻微的调查揭示了监测数据库的一个问题:数据库有很多指标。说“很多”只是轻描淡写,如果你是Scrooge McDuck,你可以浏览所有可用的指标。如果这是Wrestlemania,那么指标就是折叠椅。监测所有指标似乎并不实用,那么你如何决定要监测哪些指标? +最轻微的调查揭示了监测数据库的一个问题:数据库有很多指标。说“很多”只是轻描淡写,如果你是史高治Scrooge McDuck(LCTT 译注:史高治,唐老鸭的舅舅,以一毛不拔著称),你不会放过任何一个可用的指标。如果这是摔角狂热Wrestlemania 比赛,那么指标就是折叠椅。监测所有指标似乎并不实用,那么你如何决定要监测哪些指标? ![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image2_db_metrics.png?itok=Jd9NY1bt) @@ -29,11 +31,11 @@ 开始检测数据库的最好方法是跟踪它所接到请求的数量。我们对数据库有较高期望;期望它能稳定的存储数据,并处理我们抛给它的所有查询,这些查询可能是一天一次大规模查询,或者是来自用户一天到晚的数百万次查询。吞吐量可以告诉我们数据库是否如我们期望的那样工作。 -你也可以将请求安照类型(读,写,服务器端,客户端等)分组,以开始分析流量。 +你也可以将请求按照类型(读、写、服务器端、客户端等)分组,以开始分析流量。 ### 执行时间:数据库完成工作需要多长时间? -这个指标看起来很明显,但往往被忽视了。 你不仅想知道数据库收到了多少请求,还想知道数据库在每个请求上花费了多长时间。 然而,参考上下文来讨论执行时间非常重要:像InfluxDB这样的时间序列数据库中的慢与像MySQL这样的关系型数据库中的慢不一样。InfluxDB中的慢可能意味着毫秒,而MySQL的“SLOW_QUERY”变量的默认值是10秒。 +这个指标看起来很明显,但往往被忽视了。你不仅想知道数据库收到了多少请求,还想知道数据库在每个请求上花费了多长时间。 然而,参考上下文来讨论执行时间非常重要:像 InfluxDB 这样的时间序列数据库中的慢与像 MySQL 这样的关系型数据库中的慢不一样。InfluxDB 中的慢可能意味着毫秒,而 MySQL 的 `SLOW_QUERY` 变量的默认值是 10 秒。 ![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image4_slow_is_relative.png?itok=9RkuzUi8) @@ -43,7 +45,7 @@ 一旦你知道数据库正在处理多少请求以及每个请求需要多长时间,你就需要添加一层复杂性以开始从这些指标中获得实际值。 -如果数据库接收到十个请求,并且每个请求需要十秒钟来完成,那么数据库是否忙碌了100秒、10秒,或者介于两者之间?并发任务的数量改变了数据库资源的使用方式。当你考虑连接和线程的数量等问题时,你将开始对数据库指标有更全面的了解。 +如果数据库接收到十个请求,并且每个请求需要十秒钟来完成,那么数据库是忙碌了 100 秒、10 秒,还是介于两者之间?并发任务的数量改变了数据库资源的使用方式。当你考虑连接和线程的数量等问题时,你将开始对数据库指标有更全面的了解。 并发性还能影响延迟,这不仅包括任务完成所需的时间(执行时间),还包括任务在处理之前需要等待的时间。 @@ -53,15 +55,15 @@ ![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image6_telephone.png?itok=YzdpwUQP) -该指标对于确定数据库的整体健康和性能特别有用。如果只能在80%的时间内响应请求,则可以重新分配资源、进行优化工作,或者进行更改以更接近高可用性。 +该指标对于确定数据库的整体健康和性能特别有用。如果只能在 80% 的时间内响应请求,则可以重新分配资源、进行优化工作,或者进行更改以更接近高可用性。 ### 好消息 -监测和分析似乎非常困难,特别是因为我们大多数人不是数据库专家,我们可能没有时间去理解这些指标。但好消息是,大部分的工作已经为我们做好了。许多数据库都有一个内部性能数据库(Postgres:pg_stats、CouchDB:Runtime_.、InfluxDB:_internal等),数据库工程师设计该数据库来监测与该特定数据库有关的指标。你可以看到像慢速查询的数量一样广泛的内容,或者像数据库中每个事件的平均微秒一样详细的内容。 +监测和分析似乎非常困难,特别是因为我们大多数人不是数据库专家,我们可能没有时间去理解这些指标。但好消息是,大部分的工作已经为我们做好了。许多数据库都有一个内部性能数据库(Postgres:`pg_stats`、CouchDB:`Runtime_Statistics`、InfluxDB:`_internal` 等),数据库工程师设计该数据库来监测与该特定数据库有关的指标。你可以看到像慢速查询的数量一样广泛的内容,或者像数据库中每个事件的平均微秒一样详细的内容。 ### 结论 -数据库创建了足够的指标以使我们需要长时间研究,虽然内部性能数据库充满了有用的信息,但并不总是使你清楚应该关注哪些指标。 从吞吐量,执行时间,并发性和利用率开始,它们为你提供了足够的信息,使你可以开始了解你的数据库中的情况。 +数据库创建了足够的指标以使我们需要长时间研究,虽然内部性能数据库充满了有用的信息,但并不总是使你清楚应该关注哪些指标。从吞吐量、执行时间、并发性和利用率开始,它们为你提供了足够的信息,使你可以开始了解你的数据库中的情况。 ![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image3_3_hearts.png?itok=iHF-OSwx) @@ -74,7 +76,7 @@ via: 
https://opensource.com/article/18/10/database-metrics-matter 作者:[Katy Farmer][a] 选题:[lujun9972][b] 译者:[ChiZelin](https://github.com/ChiZelin) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181025 Understanding Linux Links- Part 2.md b/published/201811/20181025 Understanding Linux Links- Part 2.md similarity index 100% rename from published/20181025 Understanding Linux Links- Part 2.md rename to published/201811/20181025 Understanding Linux Links- Part 2.md diff --git a/translated/talk/20181025 What breaks our systems- A taxonomy of black swans.md b/published/201811/20181025 What breaks our systems- A taxonomy of black swans.md similarity index 83% rename from translated/talk/20181025 What breaks our systems- A taxonomy of black swans.md rename to published/201811/20181025 What breaks our systems- A taxonomy of black swans.md index 22d2bdd3df..e3aa38e75a 100644 --- a/translated/talk/20181025 What breaks our systems- A taxonomy of black swans.md +++ b/published/201811/20181025 What breaks our systems- A taxonomy of black swans.md @@ -1,27 +1,27 @@ 让系统崩溃的黑天鹅分类 ====== -在严重的故障发生之前,找到引起问题的异常事件,并修复它。 +> 在严重的故障发生之前,找到引起问题的异常事件,并修复它。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/black-swan-pair_0.png?itok=MkshwqVg) -黑天鹅用来比喻造成严重影响的小概率事件(比如 2008 年的金融危机)。在生产环境的系统中,黑天鹅是指这样的事情:它引发了你不知道的问题,造成了重大影响,不能快速修复或回滚,也不能用值班说明书上的其他标准响应来解决。它是事发几年后你还在给新人说起的事件。 +黑天鹅Black swan用来比喻造成严重影响的小概率事件(比如 2008 年的金融危机)。在生产环境的系统中,黑天鹅是指这样的事情:它引发了你不知道的问题,造成了重大影响,不能快速修复或回滚,也不能用值班说明书上的其他标准响应来解决。它是事发几年后你还在给新人说起的事件。 从定义上看,黑天鹅是不可预测的,不过有时候我们能找到其中的一些模式,针对有关联的某一类问题准备防御措施。 -例如,大部分故障的直接原因是变更(代码、环境或配置)。虽然这种方式触发的 bug 是独特的,不可预测的,但是常见的金丝雀发布对避免这类问题有一定的作用,而且自动回滚已经成了一种标准止损策略。 +例如,大部分故障的直接原因是变更(代码、环境或配置)。虽然这种方式触发的 bug 是独特的、不可预测的,但是常见的金丝雀发布对避免这类问题有一定的作用,而且自动回滚已经成了一种标准止损策略。 随着我们的专业性不断成熟,一些其他的问题也正逐渐变得容易理解,被归类到某种风险并有普适的预防策略。 ### 公布出来的黑天鹅事件 -所有科技公司都有生产环境的故障,只不过并不是所有公司都会分享他们的事故分析。那些公开讨论事故的公司帮了我们的忙。下列事故都描述了某一类问题,但它们绝对不是只属于一个类别。我们的系统中都有黑天鹅在潜伏着,只是有些人还不知道而已。 +所有科技公司都有生产环境的故障,只不过并不是所有公司都会分享他们的事故分析。那些公开讨论事故的公司帮了我们的忙。下列事故都描述了某一类问题,但它们绝对不是只一个孤例。我们的系统中都有黑天鹅在潜伏着,只是有些人还不知道而已。 #### 达到上限 达到任何类型的限制都会引发严重事故。这类问题的一个典型例子是 2017 年 2 月 [Instapaper 的一次服务中断][1]。我把这份事故报告给任何一个运维工作者看,他们读完都会脊背发凉。Instapaper 生产环境的数据库所在的文件系统有 2 TB 的大小限制,但是数据库服务团队并不知情。在没有任何报错的情况下,数据库不再接受任何写入了。完全恢复需要好几天,而且还得迁移数据库。 -资源限制有各式各样的触发场景。Sentry 遇到了 [Postgres 的最大事务 ID 限制][2]。Platform.sh 遇到了[管道缓冲区大小限制][3]。SparkPost [触发了 AWS 的 DDos 保护][4]。Foursquare 在他们的一个 [MongoDB 耗尽内存][5]时遭遇了性能骤降。 +资源限制有各式各样的触发场景。Sentry 遇到了 [Postgres 的最大事务 ID 限制][2]。Platform.sh 遇到了[管道缓冲区大小限制][3]。SparkPost [触发了 AWS 的 DDoS 保护][4]。Foursquare 在他们的一个 [MongoDB 耗尽内存][5]时遭遇了性能骤降。 提前了解系统限制的一个办法是定期做测试。好的压力测试(在生产环境的副本上做)应该包含写入事务,并且应该把每一种数据存储都写到超过当前生产环境的容量。压力测试时很容易忽略的是次要存储(比如 Zookeeper)。如果你是在测试时遇到了资源限制,那么你还有时间去解决问题。鉴于这种资源限制问题的解决方案可能涉及重大的变更(比如数据存储拆分),所以时间是非常宝贵的。 @@ -32,7 +32,7 @@ #### 扩散的慢请求 > “这个世界的关联性远比我们想象中更大。所以我们看到了更多 Nassim Taleb 所说的‘黑天鹅事件’ —— 即罕见事件以更高的频率离谱地发生了,因为世界是相互关联的” -> — [Richard Thaler][6] +> —— [Richard Thaler][6] HostedGraphite 的负载均衡器并没有托管在 AWS 上,却[被 AWS 的服务中断给搞垮了][7],他们关于这次事故原因的分析报告很好地诠释了分布式计算系统之间存在多么大的关联。在这个事件里,负载均衡器的连接池被来自 AWS 上的客户访问占满了,因为这些连接很耗时。同样的现象还会发生在应用的线程、锁、数据库连接上 —— 任何能被慢操作占满的资源。 @@ -40,7 +40,7 @@ HostedGraphite 的负载均衡器并没有托管在 AWS 上,却[被 AWS 的服 重试的间隔应该用指数退避来限制一下,并加入一些时间抖动。Square 有一次服务中断是 [Redis 存储的过载][9],原因是有一段代码对失败的事务重试了 500 次,没有任何重试退避的方案,也说明了过度重试的潜在风险。另外,针对这种情况,[断路器][10]设计模式也是有用的。 -应该设计出监控仪表盘来清晰地展示所有资源的[使用率,饱和度和报错][11],这样才能快速发现问题。 
+应该设计出监控仪表盘来清晰地展示所有资源的[使用率、饱和度和报错][11],这样才能快速发现问题。 #### 突发的高负载 @@ -48,7 +48,7 @@ HostedGraphite 的负载均衡器并没有托管在 AWS 上,却[被 AWS 的服 在预定时刻同时发生的事件并不是突发大流量的唯一原因。Slack 经历过一次短时间内的[多次服务中断][12],原因是非常多的客户端断开连接后立即重连,造成了突发的大负载。 CircleCI 也经历过一次[严重的服务中断][13],当时 Gitlab 从故障中恢复了,所以数据库里积累了大量的构建任务队列,服务变得饱和而且缓慢。 -几乎所有的服务都会受突发的高负载所影响。所以对这类可能出现的事情做应急预案——并测试一下预案能否正常工作——是必须的。客户端退避和[减载][14]通常是这些方案的核心。 +几乎所有的服务都会受突发的高负载所影响。所以对这类可能出现的事情做应急预案 —— 并测试一下预案能否正常工作 —— 是必须的。客户端退避和[减载][14]通常是这些方案的核心。 如果你的系统必须不间断地接收数据,并且数据不能被丢掉,关键是用可伸缩的方式把数据缓冲到队列中,后续再处理。 @@ -57,7 +57,7 @@ HostedGraphite 的负载均衡器并没有托管在 AWS 上,却[被 AWS 的服 > “复杂的系统本身就是有风险的系统” > —— [Richard Cook, MD][15] -过去几年里软件的运维操作趋势是更加自动化。任何可能降低系统容量的自动化操作(比如擦除磁盘,退役设备,关闭服务)都应该谨慎操作。这类自动化操作的故障(由于系统有 bug 或者有不正确的调用)能很快地搞垮你的系统,而且可能很难恢复。 +过去几年里软件的运维操作趋势是更加自动化。任何可能降低系统容量的自动化操作(比如擦除磁盘、退役设备、关闭服务)都应该谨慎操作。这类自动化操作的故障(由于系统有 bug 或者有不正确的调用)能很快地搞垮你的系统,而且可能很难恢复。 谷歌的 Christina Schulman 和 Etienne Perot 在[用安全规约协助保护你的数据中心][16]的演讲中给了一些例子。其中一次事故是将谷歌整个内部的内容分发网络(CDN)提交给了擦除磁盘的自动化系统。 @@ -69,11 +69,11 @@ Schulman 和 Perot 建议使用一个中心服务来管理规约,限制破坏 ### 防止黑天鹅事件 -可能在等着击垮系统的黑天鹅可不止上面这些。有很多其他的严重问题是能通过一些技术来避免的,像金丝雀发布,压力测试,混沌工程,灾难测试和模糊测试——当然还有冗余性和弹性的设计。但是即使用了这些技术,有时候你的系统还是会有故障。 +可能在等着击垮系统的黑天鹅可不止上面这些。有很多其他的严重问题是能通过一些技术来避免的,像金丝雀发布、压力测试、混沌工程、灾难测试和模糊测试 —— 当然还有冗余性和弹性的设计。但是即使用了这些技术,有时候你的系统还是会有故障。 -为了确保你的组织能有效地响应,在服务中断期间,请保证关键技术人员和领导层有办法沟通协调。例如,有一种你可能需要处理的烦人的事情,那就是网络完全中断。拥有故障时仍然可用的通信通道非常重要,这个通信通道要完全独立于你们自己的基础设施和基础设施的依赖。举个例子,假如你使用 AWS,那么把故障时可用的通信服务部署在 AWS 上就不明智了。在和你的主系统无关的地方,运行电话网桥或 IRC 服务器是比较好的方案。确保每个人都知道这个通信平台,并练习使用它。 +为了确保你的组织能有效地响应,在服务中断期间,请保证关键技术人员和领导层有办法沟通协调。例如,有一种你可能需要处理的烦人的事情,那就是网络完全中断。拥有故障时仍然可用的通信通道非常重要,这个通信通道要完全独立于你们自己的基础设施及对其的依赖。举个例子,假如你使用 AWS,那么把故障时可用的通信服务部署在 AWS 上就不明智了。在和你的主系统无关的地方,运行电话网桥或 IRC 服务器是比较好的方案。确保每个人都知道这个通信平台,并练习使用它。 -另一个原则是,确保监控和运维工具对生产环境系统的依赖尽可能的少。将控制平面和数据平面分开,你才能在系统不健康的时候做变更。不要让数据处理和配置变更或监控使用同一个消息队列,比如——应该使用不同的消息队列实例。在 [SparkPost: DNS 挂掉的那一天][4] 这个演讲中,Jeremy Blosser 讲了一个这类例子,很关键的工具依赖了生产环境的 DNS 配置,但是生产环境的 DNS 出了问题。 +另一个原则是,确保监控和运维工具对生产环境系统的依赖尽可能的少。将控制平面和数据平面分开,你才能在系统不健康的时候做变更。不要让数据处理和配置变更或监控使用同一个消息队列,比如,应该使用不同的消息队列实例。在 [SparkPost: DNS 挂掉的那一天][4] 这个演讲中,Jeremy Blosser 讲了一个这类例子,很关键的工具依赖了生产环境的 DNS 配置,但是生产环境的 DNS 出了问题。 ### 对抗黑天鹅的心理学 @@ -83,7 +83,7 @@ Schulman 和 Perot 建议使用一个中心服务来管理规约,限制破坏 ### 了解更多 -关于黑天鹅(或者以前的黑天鹅)事件以及应对策略,还有很多其他的事情可以说。如果你想了解更多,我强烈推荐你去看这两本书,它们是关于生产环境中的弹性和稳定性的:Susan Fowler 写的[生产微服务][19],还有 Michael T. Nygard 的 [Release It!][20]。 +关于黑天鹅(或者以前的黑天鹅)事件以及应对策略,还有很多其他的事情可以说。如果你想了解更多,我强烈推荐你去看这两本书,它们是关于生产环境中的弹性和稳定性的:Susan Fowler 写的《[生产微服务][19]》,还有 Michael T. 
Nygard 的 《[Release It!][20]》。 -------------------------------------------------------------------------------- @@ -92,7 +92,7 @@ via: https://opensource.com/article/18/10/taxonomy-black-swans 作者:[Laura Nolan][a] 选题:[lujun9972][b] 译者:[BeliteX](https://github.com/belitex) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181026 An Overview of Android Pie.md b/published/201811/20181026 An Overview of Android Pie.md similarity index 81% rename from translated/tech/20181026 An Overview of Android Pie.md rename to published/201811/20181026 An Overview of Android Pie.md index 9eb3ea5206..7aae6a1f0f 100644 --- a/translated/tech/20181026 An Overview of Android Pie.md +++ b/published/201811/20181026 An Overview of Android Pie.md @@ -1,40 +1,38 @@ Android 9.0 概览 ====== +> 第九代 Android 带来了更令人满意的用户体验。 + ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/android-pie.jpg?itok=Sx4rbOWY) -我们来谈论一下 Android。尽管 Android 只是一款内核经过修改的 Linux,但经过多年的发展,Android 开发者们(或许包括正在阅读这篇文章的你)已经为这个平台的演变做出了很多值得称道的贡献。当然,可能很多人都已经知道,但我们还是要说,Android 并不完全开源,当你使用 Google 服务的时候,就已经接触到闭源的部分了。Google Play 商店就是其中之一,它不是一个开放的服务,不过这与 Android 是否开源没有太直接的联系,而是为了让你享用到美味、营养、高效、省电的馅饼(注:Android 9.0 代号为 Pie)。 +我们来谈论一下 Android。尽管 Android 只是一款内核经过修改的 Linux,但经过多年的发展,Android 开发者们(或许包括正在阅读这篇文章的你)已经为这个平台的演变做出了很多值得称道的贡献。当然,可能很多人都已经知道,但我们还是要说,Android 并不完全开源,当你使用 Google 服务的时候,就已经接触到闭源的部分了。Google Play 商店就是其中之一,它不是一个开放的服务。不过无论 Android 开源与否,这就是一个美味、营养、高效、省电的馅饼(LCTT 译注:Android 9.0 代号为 Pie)。 -我在我的 Essential PH-1 手机上运行了 Android 9.0(我真的很喜欢这款手机,也很了解这家公司的境况并不好)。在我自己体验了一段时间之后,我认为它是会被大众接受的。那么 Android 9.0 到底好在哪里呢?下面我们就来深入探讨一下。我们的出发点是用户的角度,而不是开发人员的角度,因此我也不会深入探讨太底层的方面。 +我在我的 Essential PH-1 手机上运行了 Android 9.0(我真的很喜欢这款手机,也知道这家公司的境况并不好)。在我自己体验了一段时间之后,我认为它是会被大众接受的。那么 Android 9.0 到底好在哪里呢?下面我们就来深入探讨一下。我们的出发点是用户的角度,而不是开发人员的角度,因此我也不会深入探讨太底层的方面。 ### 手势操作 Android 系统在新的手势操作方面投入了很多,但实际体验却不算太好。这个功能确实引起了我的兴趣。在这个功能发布之初,大家都对它了解甚少,纷纷猜测它会不会让用户使用多点触控的手势来浏览 Android 界面?又或者会不会是一个完全颠覆人们认知的东西? -实际上,手势操作比大多数人设想的要更加微妙和简单,因为很多功能都浓缩到了 Home 键上。打开手势操作功能之后,Recent 键的功能就合并到 Home 键上了。因此,如果需要查看最近打开的应用程序,就不能简单地通过 Recent 键来查看,而应该从 Home 键向上轻扫一下。(图1) +实际上,手势操作比大多数人设想的要更加微妙而简单,因为很多功能都浓缩到了 Home 键上。打开手势操作功能之后,Recent 键的功能就合并到 Home 键上了。因此,如果需要查看最近打开的应用程序,就不能简单地通过 Recent 键来查看,而应该从 Home 键向上轻扫一下。(图 1) ![Android Pie][2] -图 1:Android 9.0 中的”最近的应用程序“界面。 +*图 1:Android 9.0 中的”最近的应用程序“界面。* 另一个不同的地方是 App Drawer。类似于查看最近打开的应用,需要在 Home 键向上滑动才能打开 App Drawer。 -而后退按钮则没有去掉。在应用程序需要用到后退功能时,它就会出现在屏幕的左下方。有时候即使应用程序自己带有后退按钮,Android 的后退按钮也会出现。 +而后退按钮则没有去掉。在应用程序需要用到后退功能时,它就会出现在主屏幕的左下方。有时候即使应用程序自己带有后退按钮,Android 的后退按钮也会出现。 当然,如果你不喜欢使用手势操作,也可以禁用这个功能。只需要按照下列步骤操作: - 1. 打开”设置“ - - 2. 向下滑动并进入 系统 > 手势 - + 2. 向下滑动并进入“系统 > 手势” 3. 从 Home 键向上滑动 - - 4. 将 On/Off 滑块(图2)滑动至 Off 位置 + 4. 将 On/Off 滑块(图 2)滑动至 Off 位置 ![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_2.png?itok=cs2tqZut) -图 2:关闭手势操作。 +*图 2:关闭手势操作。* ### 电池寿命 @@ -42,25 +40,19 @@ Android 系统在新的手势操作方面投入了很多,但实际体验却不 对于这个功能的唯一一个警告是,如果人工智能出现问题并导致电池电量过早耗尽,就只能通过恢复出厂设置来解决这个问题了。尽管有这样的缺陷,在电池续航时间方面,Android 9.0 也比 Android 8.0 有所改善。 -### 分屏功能 +### 分屏功能的变化 分屏对于 Android 来说不是一个新功能,但在 Android 9.0 上,它的使用方式和以往相比略有不同,而且只对于手势操作有影响,不使用手势操作的用户不受影响。要在 Android 9.0 上使用分屏功能,需要按照下列步骤操作: + 1. 从 Home 键向上滑动,打开“最近的应用程序”。 + 2. 找到需要放置在屏幕顶部的应用程序。 + 3. 长按应用程序顶部的图标以显示新的弹出菜单。(图 3) + 4. 点击分屏,应用程序会在屏幕的上半部分打开。 + 5. 找到要打开的第二个应用程序,然后点击它添加到屏幕的下半部分。 + ![Adding an app][5] -图 3:在 Android 9.0 上将应用添加到分屏模式中。 - -[Used with permission][3] - - 1. 
从 Home 键向上滑动,打开“最近的应用程序”。 - - 2. 找到需要放置在屏幕顶部的应用程序。 - - 3. 长按应用程序顶部的图标以显示新的弹出菜单。(图 3) - - 4. 点击分屏,应用程序会在屏幕的上半部分打开。 - - 5. 找到要打开的第二个应用程序,然后点击它添加到屏幕的下半部分。 +*图 3:在 Android 9.0 上将应用添加到分屏模式中。* 使用分屏功能关闭应用程序的方法和原来保持一致。 @@ -72,7 +64,7 @@ Android 系统在新的手势操作方面投入了很多,但实际体验却不 ![Actions][7] -图 4:Android 应用操作。 +*图 4:Android 应用操作。* ### 声音控制 @@ -82,17 +74,17 @@ Android 9.0 这次优化针对的是设备上快速控制声音的按钮。如 ![Sound control][9] -图 5:Android 9.0 上的声音控制。 +*图 5:Android 9.0 上的声音控制。* ### 屏幕截图 -由于我要撰写关于 Android 的文章,所以我会常常需要进行屏幕截图。而 Android 9.0 有意向我最喜欢的更新,就是分享屏幕截图。Android 9.0 可以在截取屏幕截图后,直接共享、编辑,或者删除不喜欢的截图,而不需要像以前一样打开 Google 相册、找到要共享的屏幕截图、打开图像然后共享图像。 +由于我要撰写关于 Android 的文章,所以我会常常需要进行屏幕截图。而 Android 9.0 有一项我最喜欢的更新,就是分享屏幕截图。Android 9.0 可以在截取屏幕截图后,直接共享、编辑,或者删除不喜欢的截图,而不需要像以前一样打开 Google 相册、找到要共享的屏幕截图、打开图像然后共享图像。 + +如果你想分享屏幕截图,只需要在截图后等待弹出菜单,点击分享(图 6),从标准的 Android 分享菜单中分享即可。 ![Sharing ][11] -图 6:共享屏幕截图变得更加容易。 - -如果你想分享屏幕截图,只需要在截图后等待弹出菜单,点击分享(图 6),从标准的 Android 分享菜单中分享即可。 +*图 6:共享屏幕截图变得更加容易。* ### 更令人满意的 Android 体验 @@ -105,7 +97,7 @@ via: https://www.linux.com/learn/2018/10/overview-android-pie 作者:[Jack Wallen][a] 选题:[lujun9972][b] 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md b/published/201811/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md similarity index 100% rename from published/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md rename to published/201811/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md diff --git a/published/20181027 Design faster web pages, part 3- Font and CSS tweaks.md b/published/201811/20181027 Design faster web pages, part 3- Font and CSS tweaks.md similarity index 100% rename from published/20181027 Design faster web pages, part 3- Font and CSS tweaks.md rename to published/201811/20181027 Design faster web pages, part 3- Font and CSS tweaks.md diff --git a/published/20181029 4 open source Android email clients.md b/published/201811/20181029 4 open source Android email clients.md similarity index 100% rename from published/20181029 4 open source Android email clients.md rename to published/201811/20181029 4 open source Android email clients.md diff --git a/published/20181029 Machine learning with Python- Essential hacks and tricks.md b/published/201811/20181029 Machine learning with Python- Essential hacks and tricks.md similarity index 100% rename from published/20181029 Machine learning with Python- Essential hacks and tricks.md rename to published/201811/20181029 Machine learning with Python- Essential hacks and tricks.md diff --git a/published/201811/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md b/published/201811/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md new file mode 100644 index 0000000000..f675342f6f --- /dev/null +++ b/published/201811/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md @@ -0,0 +1,367 @@ +我们如何得知安装的包来自哪个仓库? +========== + +有时候你可能想知道安装的软件包来自于哪个仓库。这将帮助你在遇到包冲突问题时进行故障排除。 + +因为[第三方仓库][1]拥有最新版本的软件包,所以有时候当你试图安装一些包的时候会出现兼容性的问题。 + +在 Linux 上一切都是可能的,因为你可以安装一个即使在你的发行版系统上不能使用的包。 + +你也可以安装一个最新版本的包,即使你的发行版系统仓库还没有这个版本,怎么做到的呢? 
+ +这就是为什么出现了第三方仓库。它们允许用户从库中安装所有可用的包。 + +几乎所有的发行版系统都允许第三方软件库。一些发行版还会官方推荐一些不会取代基础仓库的第三方仓库,例如 CentOS 官方推荐安装 [EPEL 库][2]。 + +下面是常用的仓库列表和它们的详细信息。 + + * CentOS: [EPEL][2]、[ELRepo][3] 等是 [Centos 社区认证仓库](4)。 + * Fedora: [RPMfusion 仓库][5] 是经常被很多 [Fedora][6] 用户使用的仓库。 + * ArchLinux: ArchLinux 社区仓库包含了来自于 Arch 用户仓库的可信用户审核通过的软件包。 + * openSUSE: [Packman 仓库][7] 为 openSUSE 提供了各种附加的软件包,特别是但不限于那些在 openSUSE Build Service 应用黑名单上的与多媒体相关的应用和库。它是 openSUSE 软件包的最大外部软件库。 + * Ubuntu:个人软件包归档(PPA)是一种软件仓库。开发者们可以创建这种仓库来分发他们的软件。你可以在 PPA 导航页面找到相关信息。同时,你也可以启用 Cananical 合作伙伴软件仓库。 + +### 仓库是什么? + +软件仓库是存储特定的应用程序的软件包的集中场所。 + +所有的 Linux 发行版都在维护他们自己的仓库,并允许用户在他们的机器上获取和安装包。 + +每个厂商都提供了各自的包管理工具来管理它们的仓库,例如搜索、安装、更新、升级、删除等等。 + +除了 RHEL 和 SUSE 以外大部分 Linux 发行版都是自由软件。要访问付费的仓库,你需要购买其订阅服务。 + +### 为什么我们需要启用第三方仓库? + +在 Linux 里,并不建议从源代码安装包,因为这样做可能会在升级软件和系统的时候产生很多问题,这也是为什么我们建议从库中安装包而不是从源代码安装。 + +### 在 RHEL/CentOS 系统上我们如何得知安装的软件包来自哪个仓库? + +这可以通过很多方法实现。我们会给你所有可能的选择,你可以选择一个对你来说最合适的。 + +#### 方法-1:使用 yum 命令 + +RHEL 和 CentOS 系统使用 RPM 包,因此我们能够使用 [Yum 包管理器][8] 来获得信息。 + +YUM 即 “Yellodog Updater, Modified” 是适用于基于 RPM 的系统例如 RHEL 和 CentOS 的一个开源命令行前端包管理工具。 + +`yum` 是从发行版仓库和其他第三方库中获取、安装、删除、查询和管理 RPM 包的一个主要工具。 + +``` +# yum info apachetop +Loaded plugins: fastestmirror +Loading mirror speeds from cached hostfile + * epel: epel.mirror.constant.com +Installed Packages +Name : apachetop +Arch : x86_64 +Version : 0.15.6 +Release : 1.el7 +Size : 65 k +Repo : installed +From repo : epel +Summary : A top-like display of Apache logs +URL : https://github.com/tessus/apachetop +License : BSD +Description : ApacheTop watches a logfile generated by Apache (in standard common or + : combined logformat, although it doesn't (yet) make use of any of the extra + : fields in combined) and generates human-parsable output in realtime. +``` + +`apachetop` 包来自 EPEL 仓库。 + +#### 方法-2:使用 yumdb 命令 + +`yumdb info` 提供了类似于 `yum info` 的信息但是它又提供了包校验和数据、类型、用户信息(谁安装的软件包)。从 yum 3.2.26 开始,yum 已经开始在 rpmdatabase 之外存储额外的信息(user 表示软件是用户安装的,dep 表示它是作为依赖项引入的)。 + +``` +# yumdb info lighttpd +Loaded plugins: fastestmirror +lighttpd-1.4.50-1.el7.x86_64 + checksum_data = a24d18102ed40148cfcc965310a516050ed437d728eeeefb23709486783a4d37 + checksum_type = sha256 + command_line = --enablerepo=epel install lighttpd apachetop aria2 atop axel + from_repo = epel + from_repo_revision = 1540756729 + from_repo_timestamp = 1540757483 + installed_by = 0 + origin_url = https://epel.mirror.constant.com/7/x86_64/Packages/l/lighttpd-1.4.50-1.el7.x86_64.rpm + reason = user + releasever = 7 + var_contentdir = centos + var_infra = stock + var_uuid = ce328b07-9c0a-4765-b2ad-59d96a257dc8 +``` + +`lighttpd` 包来自 EPEL 仓库。 + +#### 方法-3:使用 rpm 命令 + +[RPM 命令][9] 即 “Red Hat Package Manager” 是一个适用于基于 Red Hat 的系统(例如 RHEL、CentOS、Fedora、openSUSE & Mageia)的强大的命令行包管理工具。 + +这个工具允许你在你的 Linux 系统/服务器上安装、更新、移除、查询和验证软件。RPM 文件具有 .rpm 后缀名。RPM 包是用必需的库和依赖关系构建的,不会与系统上安装的其他包冲突。 + +``` +# rpm -qi apachetop +Name : apachetop +Version : 0.15.6 +Release : 1.el7 +Architecture: x86_64 +Install Date: Mon 29 Oct 2018 06:47:49 AM EDT +Group : Applications/Internet +Size : 67020 +License : BSD +Signature : RSA/SHA256, Mon 22 Jun 2015 09:30:26 AM EDT, Key ID 6a2faea2352c64e5 +Source RPM : apachetop-0.15.6-1.el7.src.rpm +Build Date : Sat 20 Jun 2015 09:02:37 PM EDT +Build Host : buildvm-22.phx2.fedoraproject.org +Relocations : (not relocatable) +Packager : Fedora Project +Vendor : Fedora Project +URL : https://github.com/tessus/apachetop +Summary : A top-like display of Apache logs +Description : +ApacheTop watches a logfile generated by Apache (in standard 
common or +combined logformat, although it doesn't (yet) make use of any of the extra +fields in combined) and generates human-parsable output in realtime. +``` + +`apachetop` 包来自 EPEL 仓库。 + +#### 方法-4:使用 repoquery 命令 + +`repoquery` 是一个从 YUM 库查询信息的程序,类似于 rpm 查询。 + +``` +# repoquery -i httpd + +Name : httpd +Version : 2.4.6 +Release : 80.el7.centos.1 +Architecture: x86_64 +Size : 9817285 +Packager : CentOS BuildSystem +Group : System Environment/Daemons +URL : http://httpd.apache.org/ +Repository : updates +Summary : Apache HTTP Server +Source : httpd-2.4.6-80.el7.centos.1.src.rpm +Description : +The Apache HTTP Server is a powerful, efficient, and extensible +web server. +``` + +`httpd` 包来自 CentOS updates 仓库。 + +### 在 Fedora 系统上我们如何得知安装的包来自哪个仓库? + +DNF 是 “Dandified yum” 的缩写。DNF 是使用 hawkey/libsolv 库作为后端的下一代 yum 包管理器(yum 的分支)。从 Fedora 18 开始 Aleš Kozumplík 开始开发 DNF,并最终在 Fedora 22 上得以应用/启用。 + +[dnf 命令][10] 用于在 Fedora 22 以及之后的系统上安装、更新、搜索和删除包。它会自动解决依赖并使安装包的过程变得顺畅,不会出现任何问题。 + +``` +$ dnf info tilix +Last metadata expiration check: 27 days, 10:00:23 ago on Wed 04 Oct 2017 06:43:27 AM IST. +Installed Packages +Name : tilix +Version : 1.6.4 +Release : 1.fc26 +Arch : x86_64 +Size : 3.6 M +Source : tilix-1.6.4-1.fc26.src.rpm +Repo : @System +From repo : updates +Summary : Tiling terminal emulator +URL : https://github.com/gnunn1/tilix +License : MPLv2.0 and GPLv3+ and CC-BY-SA +Description : Tilix is a tiling terminal emulator with the following features: + : + : - Layout terminals in any fashion by splitting them horizontally or vertically + : - Terminals can be re-arranged using drag and drop both within and between + : windows + : - Terminals can be detached into a new window via drag and drop + : - Input can be synchronized between terminals so commands typed in one + : terminal are replicated to the others + : - The grouping of terminals can be saved and loaded from disk + : - Terminals support custom titles + : - Color schemes are stored in files and custom color schemes can be created by + : simply creating a new file + : - Transparent background + : - Supports notifications when processes are completed out of view + : + : The application was written using GTK 3 and an effort was made to conform to + : GNOME Human Interface Guidelines (HIG). +``` + +`tilix` 包来自 Fedora updates 仓库。 + +### 在 openSUSE 系统上我们如何得知安装的包来自哪个仓库? + +Zypper 是一个使用 libzypp 的命令行包管理器。[Zypper 命令][11] 提供了存储库访问、依赖处理、包安装等功能。 + +``` +$ zypper info nano + +Loading repository data... +Reading installed packages... + + +Information for package nano: +----------------------------- +Repository : Main Repository (OSS) +Name : nano +Version : 2.4.2-5.3 +Arch : x86_64 +Vendor : openSUSE +Installed Size : 1017.8 KiB +Installed : No +Status : not installed +Source package : nano-2.4.2-5.3.src +Summary : Pico editor clone with enhancements +Description : + GNU nano is a small and friendly text editor. It aims to emulate + the Pico text editor while also offering a few enhancements. +``` + +`nano` 包来自于 openSUSE Main 仓库(OSS)。 + +### 在 ArchLinux 系统上我们如何得知安装的包来自哪个仓库? 
+ +[Pacman 命令][12] 即包管理器工具(package manager utility ),是一个简单的用来安装、构建、删除和管理 Arch Linux 软件包的命令行工具。Pacman 使用 libalpm 作为后端来执行所有的操作。 + +``` +# pacman -Ss chromium +extra/chromium 48.0.2564.116-1 + The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser +extra/qt5-webengine 5.5.1-9 (qt qt5) + Provides support for web applications using the Chromium browser project +community/chromium-bsu 0.9.15.1-2 + A fast paced top scrolling shooter +community/chromium-chromevox latest-1 + Causes the Chromium web browser to automatically install and update the ChromeVox screen reader extention. Note: This + package does not contain the extension code. +community/fcitx-mozc 2.17.2313.102-1 + Fcitx Module of A Japanese Input Method for Chromium OS, Windows, Mac and Linux (the Open Source Edition of Google Japanese + Input) +``` + +`chromium` 包来自 ArchLinux extra 仓库。 + +或者,我们可以使用以下选项获得关于包的详细信息。 + +``` +# pacman -Si chromium +Repository : extra +Name : chromium +Version : 48.0.2564.116-1 +Description : The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser +Architecture : x86_64 +URL : http://www.chromium.org/ +Licenses : BSD +Groups : None +Provides : None +Depends On : gtk2 nss alsa-lib xdg-utils bzip2 libevent libxss icu libexif libgcrypt ttf-font systemd dbus + flac snappy speech-dispatcher pciutils libpulse harfbuzz libsecret libvpx perl perl-file-basedir + desktop-file-utils hicolor-icon-theme +Optional Deps : kdebase-kdialog: needed for file dialogs in KDE + gnome-keyring: for storing passwords in GNOME keyring + kwallet: for storing passwords in KWallet +Conflicts With : None +Replaces : None +Download Size : 44.42 MiB +Installed Size : 172.44 MiB +Packager : Evangelos Foutras +Build Date : Fri 19 Feb 2016 04:17:12 AM IST +Validated By : MD5 Sum SHA-256 Sum Signature +``` + +`chromium` 包来自 ArchLinux extra 仓库。 + +### 在基于 Debian 的系统上我们如何得知安装的包来自哪个仓库? 
+ +在基于 Debian 的系统例如 Ubuntu、LinuxMint 上可以使用两种方法实现。 + +#### 方法-1:使用 apt-cache 命令 + +[apt-cache 命令][13] 可以显示存储在 APT 内部数据库的很多信息。这些信息是一种缓存,因为它们是从列在 `source.list` 文件里的不同的源中获得的。这个过程发生在 apt 更新操作期间。 + +``` +$ apt-cache policy python3 +python3: + Installed: 3.6.3-0ubuntu2 + Candidate: 3.6.3-0ubuntu3 + Version table: + 3.6.3-0ubuntu3 500 + 500 http://in.archive.ubuntu.com/ubuntu artful-updates/main amd64 Packages + *** 3.6.3-0ubuntu2 500 + 500 http://in.archive.ubuntu.com/ubuntu artful/main amd64 Packages + 100 /var/lib/dpkg/status +``` + +`python3` 包来自 Ubuntu updates 仓库。 + +#### 方法-2:使用 apt 命令 + +[APT 命令][14] 即 “Advanced Packaging Tool”,是 `apt-get` 命令的替代品,就像 DNF 是如何取代 YUM 一样。它是具有丰富功能的命令行工具并将所有的功能例如 `apt-cache`、`apt-search`、`dpkg`、`apt-cdrom`、`apt-config`、`apt-ket` 等包含在一个命令(APT)中,并且还有几个独特的功能。例如我们可以通过 APT 轻松安装 .dpkg 包,但我们不能使用 `apt-get` 命令安装,更多类似的功能都被包含进了 APT 命令。`apt-get` 因缺失了很多未被解决的特性而被 `apt` 取代。 + +``` +$ apt -a show notepadqq +Package: notepadqq +Version: 1.3.2-1~artful1 +Priority: optional +Section: editors +Maintainer: Daniele Di Sarli +Installed-Size: 1,352 kB +Depends: notepadqq-common (= 1.3.2-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2) +Download-Size: 356 kB +APT-Sources: http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful/main amd64 Packages +Description: Notepad++-like editor for Linux + Text editor with support for multiple programming + languages, multiple encodings and plugin support. + +Package: notepadqq +Version: 1.2.0-1~artful1 +Status: install ok installed +Priority: optional +Section: editors +Maintainer: Daniele Di Sarli +Installed-Size: 1,352 kB +Depends: notepadqq-common (= 1.2.0-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2) +Homepage: http://notepadqq.altervista.org +Download-Size: unknown +APT-Manual-Installed: yes +APT-Sources: /var/lib/dpkg/status +Description: Notepad++-like editor for Linux + Text editor with support for multiple programming + languages, multiple encodings and plugin support. 
+``` + +`notepadqq` 包来自 Launchpad PPA。 + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-do-we-find-out-the-installed-packages-came-from-which-repository/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[zianglei](https://github.com/zianglei) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/repository/ +[2]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/ +[3]: https://www.2daygeek.com/install-enable-elrepo-on-rhel-centos-scientific-linux/ +[4]: https://www.2daygeek.com/additional-yum-repositories-for-centos-rhel-fedora-systems/ +[5]: https://www.2daygeek.com/install-enable-rpm-fusion-repository-on-centos-fedora-rhel/ +[6]: https://fedoraproject.org/wiki/Third_party_repositories +[7]: https://www.2daygeek.com/install-enable-packman-repository-on-opensuse-leap/ +[8]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[9]: https://www.2daygeek.com/rpm-command-examples/ +[10]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[13]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[14]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ diff --git a/published/20181030 How To Analyze And Explore The Contents Of Docker Images.md b/published/201811/20181030 How To Analyze And Explore The Contents Of Docker Images.md similarity index 100% rename from published/20181030 How To Analyze And Explore The Contents Of Docker Images.md rename to published/201811/20181030 How To Analyze And Explore The Contents Of Docker Images.md diff --git a/published/20181031 8 creepy commands that haunt the terminal - Opensource.com.md b/published/201811/20181031 8 creepy commands that haunt the terminal - Opensource.com.md similarity index 100% rename from published/20181031 8 creepy commands that haunt the terminal - Opensource.com.md rename to published/201811/20181031 8 creepy commands that haunt the terminal - Opensource.com.md diff --git a/published/20181101 KRS- A new tool for gathering Kubernetes resource statistics.md b/published/201811/20181101 KRS- A new tool for gathering Kubernetes resource statistics.md similarity index 100% rename from published/20181101 KRS- A new tool for gathering Kubernetes resource statistics.md rename to published/201811/20181101 KRS- A new tool for gathering Kubernetes resource statistics.md diff --git a/translated/tech/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md b/published/201811/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md similarity index 83% rename from translated/tech/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md rename to published/201811/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md index b51fbd8221..ef04dc33dd 100644 --- a/translated/tech/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md +++ b/published/201811/20181102 How To Create A Bootable Linux USB 
Drive From Windows OS 7,8 and 10.md @@ -1,10 +1,11 @@ -如何从 Windows OS 7、8 和 10 创建可启动的 Linux USB 盘? +如何从 Windows 7、8 和 10 创建可启动的 Linux USB 盘? ====== + 如果你想了解 Linux,首先要做的是在你的系统上安装 Linux 系统。 它可以通过两种方式实现,使用 Virtualbox、VMWare 等虚拟化应用,或者在你的系统上安装 Linux。 -如果你倾向从 Windows 系统迁移到 Linux 系统或计划在备用机上安装 Linux 系统,那么你须为此创建可启动的 USB 盘。 +如果你倾向于从 Windows 系统迁移到 Linux 系统或计划在备用机上安装 Linux 系统,那么你须为此创建可启动的 USB 盘。 我们已经写过许多[在 Linux 上创建可启动 USB 盘][1] 的文章,如 [BootISO][2]、[Etcher][3] 和 [dd 命令][4],但我们从来没有机会写一篇文章关于在 Windows 中创建 Linux 可启动 USB 盘的文章。不管怎样,我们今天有机会做这件事了。 @@ -22,29 +23,32 @@ 有许多程序可供使用,但我的首选是 [Universal USB Installer][6],它使用起来非常简单。只需访问 Universal USB Installer 页面并下载该程序即可。 -### 步骤3:如何使用 Universal USB Installer 创建可启动的 Ubuntu ISO +### 步骤3:创建可启动的 Ubuntu ISO 这个程序在使用上不复杂。首先连接 USB 盘,然后点击下载的 Universal USB Installer。启动后,你可以看到类似于我们的界面。 + ![][8] - * **`步骤 1:`** 选择Ubuntu 系统。 - * **`步骤 2:`** 选择 Ubuntu ISO 下载位置。 - * **`步骤 3:`** 默认它选择的是 USB 盘,但是要验证一下,接着勾选格式化选项。 - - + * 步骤 1:选择 Ubuntu 系统。 + * 步骤 2:选择 Ubuntu ISO 下载位置。 + * 步骤 3:它默认选择的是 USB 盘,但是要验证一下,接着勾选格式化选项。 ![][9] -当你点击 `Create` 按钮时,它会弹出一个带有警告的窗口。不用担心,只需点击 `Yes` 继续进行此操作即可。 +当你点击 “Create” 按钮时,它会弹出一个带有警告的窗口。不用担心,只需点击 “Yes” 继续进行此操作即可。 + ![][10] USB 盘分区正在进行中。 + ![][11] -要等待一会儿才能完成。如你您想将它移至后台,你可以点击 `Background` 按钮。 +要等待一会儿才能完成。如你您想将它移至后台,你可以点击 “Background” 按钮。 + ![][12] 好了,完成了。 + ![][13] 现在你可以进行[安装 Ubuntu 系统][14]了。但是,它也提供了一个 live 模式,如果你想在安装之前尝试,那么可以使用它。 @@ -56,7 +60,7 @@ via: https://www.2daygeek.com/create-a-bootable-live-usb-drive-from-windows-usin 作者:[Prakash Subramanian][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181105 CPod- An Open Source, Cross-platform Podcast App.md b/published/201811/20181105 CPod- An Open Source, Cross-platform Podcast App.md similarity index 67% rename from translated/tech/20181105 CPod- An Open Source, Cross-platform Podcast App.md rename to published/201811/20181105 CPod- An Open Source, Cross-platform Podcast App.md index cecd641b89..ea0dbe77e7 100644 --- a/translated/tech/20181105 CPod- An Open Source, Cross-platform Podcast App.md +++ b/published/201811/20181105 CPod- An Open Source, Cross-platform Podcast App.md @@ -1,35 +1,38 @@ CPod:一个开源、跨平台播客应用 ====== + 播客是一个很好的娱乐和获取信息的方式。事实上,我会听十几个不同的播客,包括技术、神秘事件、历史和喜剧。当然,[Linux 播客][1]也在此列表中。 今天,我们将看一个简单的跨平台应用来收听你的播客。 ![][2] -推荐的播客和播客搜索 + +*推荐的播客和播客搜索* ### 应用程序 -[CPod][3] 是 [Zack Guard(z -----)][4] 的作品。**它是一个 [Election][5] 程序**,这使它能够在最大的操作系统(Linux、Windows、Mac OS)上运行。 +[CPod][3] 是 [Zack Guard(z-------------)][4] 的作品。**它是一个 [Election][5] 程序**,这使它能够在大多数操作系统(Linux、Windows、Mac OS)上运行。 -一个小事:CPod 最初被命名为 Cumulonimbus。 +> 一个小事:CPod 最初被命名为 Cumulonimbus。 应用的大部分被两个面板占用,来显示内容和选项。屏幕左侧的小条让你可以使用应用的不同功能。CPod 的不同栏目包括主页、队列、订阅、浏览和设置。 ![cpod settings][6] -设置 + +*设置* ### CPod 的功能 以下是 CPod 提供的功能列表: * 简洁,干净的设计 - * 可在顶级计算机平台上使用 + * 可在主流计算机平台上使用 * 有 Snap 包 * 搜索 iTunes 的播客目录 - * 下载以及无需下载播放节目 + * 可下载也可无需下载就播放节目 * 查看播客信息和节目 * 搜索播客的个别节目 - * 黑暗模式 + * 深色模式 * 改变播放速度 * 键盘快捷键 * 将你的播客订阅与 gpodder.net 同步 @@ -39,13 +42,13 @@ CPod:一个开源、跨平台播客应用 * 多语言支持 - ![search option in cpod application][7] -搜索 ZFS 节目 + +*搜索 ZFS 节目* ### 在 Linux 上体验 CPod -我最后在两个系统上安装了 CPod:ArchLabs 和 Windows。[Arch 用户仓库]​​[8] 中有两个版本的 CPod。但是,它们都已过时,一个是版本 1.14.0,另一个是 1.22.6。最新版本的 CPod 是 1.27.0。由于 ArchLabs 和 Windows 之间的版本差异,我不得已而有不同的体验。在本文中,我将重点关注 1.27.0,因为它是最新且功能最多的。 +我最后在两个系统上安装了 CPod:ArchLabs 和 Windows。[Arch 用户仓库​][8] 中有两个版本的 CPod。但是,它们都已过时,一个是版本 1.14.0,另一个是 1.22.6。最新版本的 CPod 是 
1.27.0。由于 ArchLabs 和 Windows 之间的版本差异,我的体验有所不同。在本文中,我将重点关注 1.27.0,因为它是最新且功能最多的。 我马上能够找到我最喜欢的播客。我可以粘贴 RSS 源的 URL 来添加 iTunes 列表中没有的那些播客。 @@ -55,7 +58,7 @@ CPod:一个开源、跨平台播客应用 ### 安装 CPod -在 [GitHub][11]上,你可以下载适用于 Linux 的 AppImage 或 Deb 文件,适用于 Windows 的 .exe 文件或适用于 Mac OS 的 .dmg 文件。 +在 [GitHub][11] 上,你可以下载适用于 Linux 的 AppImage 或 Deb 文件,适用于 Windows 的 .exe 文件或适用于 Mac OS 的 .dmg 文件。 你可以使用 [Snap][12] 安装 CPod。你需要做的就是使用以下命令: @@ -63,14 +66,15 @@ CPod:一个开源、跨平台播客应用 sudo snap install cpod ``` -就像我之前说的那样,CPod 的 [Arch 用户仓库]​​[8]的版本已经过时了。我已经给其中一个打包者发了消息。如果你使用 Arch(或基于 Arch 的发行版),我建议你这样做。 +就像我之前说的那样,CPod 的 [Arch 用户仓库][8]的版本已经过时了。我已经给其中一个打包者发了消息。如果你使用 Arch(或基于 Arch 的发行版),我建议你这样做。 ![cpod for Linux pidcasts][13] -播放其中一个我最喜欢的播客 + +*播放其中一个我最喜欢的播客* ### 最后的想法 -总的来说,我喜欢 CPod。它外观漂亮,使用简单。事实上,我更喜欢原来的名字 (Cumulonimbus),但是它有点拗口。 +总的来说,我喜欢 CPod。它外观漂亮,使用简单。事实上,我更喜欢原来的名字(Cumulonimbus),但是它有点拗口。 我刚刚在程序中遇到两个问题。首先,我希望每个播客都有评分。其次,在打开黑暗模式后,根据长度、日期、下载状态和播放进度对剧集进行排序的菜单不起作用。 @@ -85,23 +89,23 @@ via: https://itsfoss.com/cpod-podcast-app/ 作者:[John Paul][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://itsfoss.com/author/john/ [b]: https://github.com/lujun9972 [1]: https://itsfoss.com/linux-podcasts/ -[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod1.1.jpg +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod1.1.jpg?w=800&ssl=1 [3]: https://github.com/z-------------/CPod [4]: https://github.com/z------------- [5]: https://electronjs.org/ -[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod2.1.png -[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod4.1.jpg +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod2.1.png?w=800&ssl=1 +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod4.1.jpg?w=800&ssl=1 [8]: https://aur.archlinux.org/packages/?O=0&K=cpod [9]: https://latenightlinux.com/ [10]: https://itsfoss.com/what-is-zfs/ [11]: https://github.com/z-------------/CPod/releases [12]: https://snapcraft.io/cumulonimbus -[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod3.1.jpg +[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod3.1.jpg?w=800&ssl=1 [14]: http://reddit.com/r/linuxusersgroup diff --git a/translated/tech/20181105 Commandline quick tips- How to locate a file.md b/published/201811/20181105 Commandline quick tips- How to locate a file.md similarity index 67% rename from translated/tech/20181105 Commandline quick tips- How to locate a file.md rename to published/201811/20181105 Commandline quick tips- How to locate a file.md index 1a56fcfc20..6b8d9a1109 100644 --- a/translated/tech/20181105 Commandline quick tips- How to locate a file.md +++ b/published/201811/20181105 Commandline quick tips- How to locate a file.md @@ -1,24 +1,24 @@ -命令行快捷提示:如何定位一个文件 +命令行快速技巧:如何定位一个文件 ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg) -我们都会有文件存储在电脑里 —— 目录,相片,源代码等等。它们是如此之多。也无疑超出了我的记忆范围。要是毫无目标,找到正确的那一个可能会很费时间。在这篇文章里我们来看一下如何在命令行里找到需要的文件,特别是快速找到你想要的那一个。 +我们都会有文件存储在电脑里 —— 目录、相片、源代码等等。它们是如此之多。也无疑超出了我的记忆范围。要是毫无目标,找到正确的那一个可能会很费时间。在这篇文章里我们来看一下如何在命令行里找到需要的文件,特别是快速找到你想要的那一个。 -好消息是 Linux 命令行专门设计了很多非常有用的命令行工具在你的电脑上查找文件。下面我们看一下它们其中三个:ls、tree 和 tree。 +好消息是 Linux 命令行专门设计了很多非常有用的命令行工具在你的电脑上查找文件。下面我们看一下它们其中三个:`ls`、`tree` 和 `find`。 ### ls 
-如果你知道文件在哪里,你只需要列出它们或者查看有关它们的信息,ls 就是为此而生的。 +如果你知道文件在哪里,你只需要列出它们或者查看有关它们的信息,`ls` 就是为此而生的。 -只需运行 ls 就可以列出当下目录中所有可见的文件和目录: +只需运行 `ls` 就可以列出当下目录中所有可见的文件和目录: ``` $ ls Documents Music Pictures Videos notes.txt ``` -添加 **-l** 选项可以查看文件的相关信息。同时再加上 **-h** 选项,就可以用一种人们易读的格式查看文件的大小: +添加 `-l` 选项可以查看文件的相关信息。同时再加上 `-h` 选项,就可以用一种人们易读的格式查看文件的大小: ``` $ ls -lh @@ -30,7 +30,7 @@ drwxr-xr-x 2 adam adam 4.0K Nov 2 13:07 Videos -rw-r--r-- 1 adam adam 43K Nov 2 13:12 notes.txt ``` -**ls** 也可以搜索一个指定位置: +`ls` 也可以搜索一个指定位置: ``` $ ls Pictures/ @@ -44,7 +44,7 @@ $ ls *.txt notes.txt ``` -少了点什么?想要查看一个隐藏文件?没问题,使用 **-a** 选项: +少了点什么?想要查看一个隐藏文件?没问题,使用 `-a` 选项: ``` $ ls -a @@ -52,7 +52,7 @@ $ ls -a .. .bash_profile .vimrc Music Videos ``` -**ls** 还有很多其他有用的选项,你可以把它们组合在一起获得你想要的效果。可以使用以下命令了解更多: +`ls` 还有很多其他有用的选项,你可以把它们组合在一起获得你想要的效果。可以使用以下命令了解更多: ``` $ man ls @@ -60,13 +60,13 @@ $ man ls ### tree -如果你想查看你的文件的树状结构,tree 是一个不错的选择。可能你的系统上没有默认安装它,你可以使用包管理 DNF 手动安装: +如果你想查看你的文件的树状结构,`tree` 是一个不错的选择。可能你的系统上没有默认安装它,你可以使用包管理 DNF 手动安装: ``` $ sudo dnf install tree ``` -如果不带任何选项或者参数地运行 tree,将会以当前目录开始,显示出包含其下所有目录和文件的一个树状图。提醒一下,这个输出可能会非常大,因为它包含了这个目录下的所有目录和文件: +如果不带任何选项或者参数地运行 `tree`,将会以当前目录开始,显示出包含其下所有目录和文件的一个树状图。提醒一下,这个输出可能会非常大,因为它包含了这个目录下的所有目录和文件: ``` $ tree @@ -89,7 +89,7 @@ $ tree `-- notes.txt ``` -如果列出的太多了,使用 -L 选项,并在其后加上你想查看的层级数,可以限制列出文件的层级: +如果列出的太多了,使用 `-L` 选项,并在其后加上你想查看的层级数,可以限制列出文件的层级: ``` $ tree -L 2 @@ -118,13 +118,13 @@ Documents/work/ `-- status-reports.txt ``` -如果使用 tree 列出的是一个很大的树状图,你可以把它跟 less 组合使用: +如果使用 `tree` 列出的是一个很大的树状图,你可以把它跟 `less` 组合使用: ``` $ tree | less ``` -再一次,tree 有很多其他的选项可以使用,你可以把他们组合在一起发挥更强大的作用。man 手册页有所有这些选项: +再一次,`tree` 有很多其他的选项可以使用,你可以把他们组合在一起发挥更强大的作用。man 手册页有所有这些选项: ``` $ man tree @@ -134,13 +134,13 @@ $ man tree 那么如果不知道文件在哪里呢?就让我们来找到它们吧! -要是你的系统中没有 find,你可以使用 DNF 安装它: +要是你的系统中没有 `find`,你可以使用 DNF 安装它: ``` $ sudo dnf install findutils ``` -运行 find 时如果没有添加任何选项或者参数,它将会递归列出当前目录下的所有文件和目录。 +运行 `find` 时如果没有添加任何选项或者参数,它将会递归列出当前目录下的所有文件和目录。 ``` $ find @@ -167,7 +167,7 @@ $ find ./Music ``` -但是 find 真正强大的是你可以使用文件名进行搜索: +但是 `find` 真正强大的是你可以使用文件名进行搜索: ``` $ find -name do-things.sh @@ -184,6 +184,7 @@ $ find -name "*.txt" ./Documents/work/project-abc/project-notes.txt ./notes.txt ``` + 你也可以根据大小寻找文件。如果你的空间不足的时候,这种方法也许特别有用。现在来列出所有大于 1 MB 的文件: ``` @@ -207,7 +208,7 @@ $ find Documents -name "*project*" -type f Documents/work/project-abc/project-notes.txt ``` -最后再一次,find 还有很多供你使用的选项,要是你想使用它们,man 手册页绝对可以帮到你: +最后再一次,`find` 还有很多供你使用的选项,要是你想使用它们,man 手册页绝对可以帮到你: ``` $ man find @@ -220,7 +221,7 @@ via: https://fedoramagazine.org/commandline-quick-tips-locate-file/ 作者:[Adam Šamalík][a] 选题:[lujun9972][b] 译者:[dianbanjiu](https://github.com/dianbanjiu) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181105 Introducing pydbgen- A random dataframe-database table generator.md b/published/201811/20181105 Introducing pydbgen- A random dataframe-database table generator.md similarity index 100% rename from published/20181105 Introducing pydbgen- A random dataframe-database table generator.md rename to published/201811/20181105 Introducing pydbgen- A random dataframe-database table generator.md diff --git a/translated/tech/20181105 Revisiting the Unix philosophy in 2018.md b/published/201811/20181105 Revisiting the Unix philosophy in 2018.md similarity index 50% rename from translated/tech/20181105 Revisiting the Unix philosophy in 2018.md rename to published/201811/20181105 Revisiting the Unix 
philosophy in 2018.md index 09e6c7fa53..7c9931e601 100644 --- a/translated/tech/20181105 Revisiting the Unix philosophy in 2018.md +++ b/published/201811/20181105 Revisiting the Unix philosophy in 2018.md @@ -1,67 +1,65 @@ 2018 重温 Unix 哲学 ====== -在现代微服务环境中,构建小型,集中应用程序的旧策略又再一次流行了起来。 +> 在现代微服务环境中,构建小型、单一的应用程序的旧策略又再一次流行了起来。 + ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) -1984年,Rob Pike 和 Brian W 在 AT&T 贝尔实验室技术期刊上发表了名为 “[Unix 环境编程][1]” 的文章,其中他们使用 BSD 的 **cat -v** 例子来认证 Unix 哲学。简而言之,Unix 哲学是:构建小型,单一的应用程序——不管用什么语言——只做一件小而美的事情,用 **stdin** / **stdout** 进行通信,并通过管道进行连接。 +1984 年,Rob Pike 和 Brian W. Kernighan 在 AT&T 贝尔实验室技术期刊上发表了名为 “[Unix 环境编程][1]” 的文章,其中他们使用 BSD 的 `cat -v` 例子来认证 Unix 哲学。简而言之,Unix 哲学是:构建小型、单一的应用程序 —— 不管用什么语言 —— 只做一件小而美的事情,用 `stdin` / `stdout` 进行通信,并通过管道进行连接。 听起来是不是有点耳熟? 是的,我也这么认为。这就是 James Lewis 和 Martin Fowler 给出的 [微服务的定义][2] 。 -> 简单来说,微服务架构的风格是将应用程序开发为一套单一,小型服务的方法,每个服务都运行在它的进程中,并用轻量级机制进行通信,通常是 HTTP 资源 API 。 +> 简单来说,微服务架构的风格是将单个 应用程序开发为一套小型服务的方法,每个服务都运行在它的进程中,并用轻量级机制进行通信,通常是 HTTP 资源 API 。 -虽然一个 *nix 程序或者是一个微服务本身可能非常局限甚至不是很有趣,但是当这些独立工作的单元组合在一起的时候就显示出了它们真正的好处和强大。 +虽然一个 *nix 程序或者是一个微服务本身可能非常局限甚至不是很有用,但是当这些独立工作的单元组合在一起的时候就显示出了它们真正的好处和强大。 ### *nix程序 vs 微服务 -下面的表格对比了 *nix 环境中的程序(例如 **cat** 或 **lsof**)与微服务环境中的程序。 +下面的表格对比了 *nix 环境中的程序(例如 `cat` 或 `lsof`)与微服务环境中的程序。 -| | *nix 程序 | 微服务 | -| ----------------------------------- | -------------------------- | ---------------------------------- | -| 执行单元 | 程序使用 `stdin/stdout` | 使用 HTTP 或 gRPC API | -| 数据流 | 管道 | ? | -| 可配置和参数化 | 命令行参数 | | -| 环境变量和配置文件 | JSON/YAML 文档 | | -| 发现 | 包管理器, man, make | DNS, 环境变量, OpenAPI | +| | *nix 程序 | 微服务 | +| ------------- | ------------------------- | ----------------------- | +| 执行单元 | 程序使用 `stdin`/`stdout` | 使用 HTTP 或 gRPC API | +| 数据流 | 管道 | ? 
| +| 可配置和参数化 | 命令行参数、环境变量和配置文件 | JSON/YAML 文档 | +| 发现 | 包管理器、man、make | DNS、环境变量、OpenAPI | 让我们详细的看看每一行。 #### 执行单元 -*nix 系统(像 Linux)中的执行单元是一个可执行的文件(二进制或者是脚本),理想情况下,它们从 `stdin` 读取输入并将输出写入 `stdout`。而微服务通过暴露一个或多个通信接口来提供服务,比如 HTTP 和 gRPC APIs。在这两种情况下,你都会发现无状态示例(本质上是纯函数行为)和有状态示例,除了输入之外,还有一些内部(持久)状态决定发生了什么。 +*nix 系统(如 Linux)中的执行单元是一个可执行的文件(二进制或者是脚本),理想情况下,它们从 `stdin` 读取输入并将输出写入 `stdout`。而微服务通过暴露一个或多个通信接口来提供服务,比如 HTTP 和 gRPC API。在这两种情况下,你都会发现无状态示例(本质上是纯函数行为)和有状态示例,除了输入之外,还有一些内部(持久)状态决定发生了什么。 #### 数据流 -传统的,*nix 程序能够通过管道进行通信。换名话说,我们要感谢 [Doug McIlroy][3],你不需要创建临时文件来传递,而可以在每个进程之间处理无穷无尽的数据流。据我所知,除了 [2017 年做的基于 `Apache Kafka` 小实验][4],没有什么能比得上管道化的微服务了。 +传统的,*nix 程序能够通过管道进行通信。换句话说,我们要感谢 [Doug McIlroy][3],你不需要创建临时文件来传递,而可以在每个进程之间处理无穷无尽的数据流。据我所知,除了我在 [2017 年做的基于 Apache Kafka 小实验][4],没有什么能比得上管道化的微服务了。 #### 可配置和参数化 -你是如何配置程序或者服务的,无论是永久性的服务还是即时的服务?是的,在 *nix 系统上,你通常有三种方法:命令行参数,环境变量,或全面化的配置文件。在微服务架构中,典型的做法是用 YAML ( 或者甚至是worse,JSON ) 文档,定制好一个服务的布局和配置以及依赖的组件和通信,存储,和运行时配置。例如 [ Kubernetes 资源定义][5],[Nomad 工作规范][6],或 [Docker 组件][7] 文档。这些可能参数化也可能不参数化;也就是说,除非你知道一些模板语言,像 Kubernetes 中的 [Helm][8],否则你会发现你使用了很多 **sed -i** 这样的命令。 +你是如何配置程序或者服务的,无论是永久性的服务还是即时的服务?是的,在 *nix 系统上,你通常有三种方法:命令行参数、环境变量,或全面的配置文件。在微服务架构中,典型的做法是用 YAML(或者甚至是 JSON)文档,定制好一个服务的布局和配置以及依赖的组件和通信、存储和运行时配置。例如 [Kubernetes 资源定义][5]、[Nomad 工作规范][6] 或 [Docker 编排][7] 文档。这些可能参数化也可能不参数化;也就是说,除非你知道一些模板语言,像 Kubernetes 中的 [Helm][8],否则你会发现你使用了很多 `sed -i` 这样的命令。 #### 发现 -你怎么知道有哪些程序和服务可用,以及如何使用它们?在 *nix 系统中通常都有一个包管理器和一个很好用的 man 页面;使用他们,应该能够回答你所有的问题。在微服务的设置中,在寻找一个服务的时候会相对更自动化一些。除了像 [Airbnb 的 SmartStack][9] 或 [Netflix 的 Eureka][10] 等可以定制以外,通常还有基于环境变量或基于 DNS 的[方法][11],允许您动态的发现服务。同样重要的是,事实上 [OpenAPI][12] 为 HTTP API 提供了一套标准文档和设计模式,[gRPC][13] 为一些耦合性强的高性能项目也做了同样的事情。最后非常重要的一点是,考虑到开发人员的经验(DX),应该从写一份好的 [Makefiles][14] 开始,并以编写符合 [**风格**][15] 的文档结束。 +你怎么知道有哪些程序和服务可用,以及如何使用它们?在 *nix 系统中通常都有一个包管理器和一个很好用的 man 页面;使用它们,应该能够回答你所有的问题。在微服务的设置中,在寻找一个服务的时候会相对更自动化一些。除了像 [Airbnb 的 SmartStack][9] 或 [Netflix 的 Eureka][10] 等可以定制以外,通常还有基于环境变量或基于 DNS 的[方法][11],允许您动态的发现服务。同样重要的是,事实上 [OpenAPI][12] 为 HTTP API 提供了一套标准文档和设计模式,[gRPC][13] 为一些耦合性强的高性能项目也做了同样的事情。最后非常重要的一点是,考虑到开发者经验(DX),应该从写一份好的 [Makefile][14] 开始,并以编写符合 [风格][15] 的文档结束。 ### 优点和缺点 -*nix 系统和微服务都提供了许多挑战和机遇 +*nix 系统和微服务都提供了许多挑战和机遇。 #### 模块性 -设计一个简洁,有清晰的目的并且能够很好的和其它模块配合是很困难的。甚至是在不同版本中实现并引入相应的异常处理流程都很困难的。在微服务中,这意味着重试逻辑和超时机制,或者将这些功能外包到服务网格( service mesh )是不是一个更好的选择呢?这确实比较难,可如果你做好了,那它的可重用性是巨大的。 +要设计一个简洁、有清晰的目的,并且能够很好地和其它模块配合的某个东西是很困难的。甚至是在不同版本中实现并引入相应的异常处理流程都很困难的。在微服务中,这意味着重试逻辑和超时机制,或者将这些功能外包到服务网格service mesh是不是一个更好的选择呢?这确实比较难,可如果你做好了,那它的可重用性是巨大的。 -#### Observability +#### 可观测性 -#### 预测 - -在一个巨型(2018年)或是一个试图做任何事情的大型程序(1984)年,当事情开始变坏的时候,应当能够直接的找到问题的根源。但是在一个 +在一个独石monolith(2018 年)或是一个试图做任何事情的大型程序(1984 年),当情况恶化的时候,应当能够直接的找到问题的根源。但是在一个 ``` yes | tr \\n x | head -c 450m | grep n ``` -或者在一个微服务设置中请求一个路径,例如,涉及20个服务,你怎么弄清楚是哪个服务的问题?幸运的是,我们有很多标准,特别是 [OpenCensus][16] 和 [OpenTracing][17]。如果您希望转向微服务,可预测性仍然可能是最大的问题。 +或者在一个微服务设置中请求一个路径,例如,涉及 20 个服务,你怎么弄清楚是哪个服务的问题?幸运的是,我们有很多标准,特别是 [OpenCensus][16] 和 [OpenTracing][17]。如果您希望转向微服务,可预测性仍然可能是最大的问题。 #### 全局状态 @@ -77,8 +75,8 @@ via: https://opensource.com/article/18/11/revisiting-unix-philosophy-2018 作者:[Michael Hausenblas][a] 选题:[lujun9972][b] -译者:[Jamkr](https://github.com/Jamkr) -校对:[校对者ID](https://github.com/校对者ID) +译者:[Jamskr](https://github.com/Jamskr) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181105 Some Good Alternatives To ‘du- Command.md b/published/201811/20181105 Some Good Alternatives To ‘du- Command.md 
similarity index 100% rename from published/20181105 Some Good Alternatives To ‘du- Command.md rename to published/201811/20181105 Some Good Alternatives To ‘du- Command.md diff --git a/published/20181107 Gitbase- Exploring git repos with SQL.md b/published/201811/20181107 Gitbase- Exploring git repos with SQL.md similarity index 100% rename from published/20181107 Gitbase- Exploring git repos with SQL.md rename to published/201811/20181107 Gitbase- Exploring git repos with SQL.md diff --git a/published/201811/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md b/published/201811/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md new file mode 100644 index 0000000000..4d7112d397 --- /dev/null +++ b/published/201811/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md @@ -0,0 +1,186 @@ +在 Linux 中如何查找一个命令或进程的执行时间 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/time-command-720x340.png) + +在类 Unix 系统中,你可能知道一个命令或进程开始执行的时间,以及[一个进程运行了多久][1]。 但是,你如何知道这个命令或进程何时结束或者它完成运行所花费的总时长呢? 在类 Unix 系统中,这是非常容易的! 有一个专门为此设计的程序名叫 **GNU time**。 使用 `time` 程序,我们可以轻松地测量 Linux 操作系统中命令或程序的总执行时间。 `time` 命令在大多数 Linux 发行版中都有预装,所以你不必去安装它。 + +### 在 Linux 中查找一个命令或进程的执行时间 + +要测量一个命令或程序的执行时间,运行: + +``` +$ /usr/bin/time -p ls +``` + +或者, + +``` +$ time ls +``` + +输出样例: + +``` +dir1 dir2 file1 file2 mcelog + +real 0m0.007s +user 0m0.001s +sys 0m0.004s +``` + +``` +$ time ls -a +. .bash_logout dir1 file2 mcelog .sudo_as_admin_successful +.. .bashrc dir2 .gnupg .profile .wget-hsts +.bash_history .cache file1 .local .stack + +real 0m0.008s +user 0m0.001s +sys 0m0.005s +``` + +以上命令显示出了 `ls` 命令的总执行时间。 你可以将 `ls` 替换为任何命令或进程,以查找总的执行时间。 + +输出详解: + + 1. `real` —— 指的是命令或程序所花费的总时间 + 2. `user` —— 指的是在用户模式下程序所花费的时间 + 3. `sys` —— 指的是在内核模式下程序所花费的时间 + + + +我们也可以将命令限制为仅运行一段时间。参考如下教程了解更多细节: + +- [在 Linux 中如何让一个命令运行特定的时长](https://www.ostechnix.com/run-command-specific-time-linux/) + +### time 与 /usr/bin/time + +你可能注意到了, 我们在上面的例子中使用了两个命令 `time` 和 `/usr/bin/time` 。 所以,你可能会想知道他们的不同。 + +首先, 让我们使用 `type` 命令看看 `time` 命令到底是什么。对于那些我们不了解的 Linux 命令,`type` 命令用于查找相关命令的信息。 更多详细信息,[请参阅本指南][2]。 + +``` +$ type -a time +time is a shell keyword +time is /usr/bin/time +``` + +正如你在上面的输出中看到的一样,`time` 是两个东西: + + * 一个是 BASH shell 中内建的关键字 + * 一个是可执行文件,如 `/usr/bin/time` + +由于 shell 关键字的优先级高于可执行文件,当你没有给出完整路径只运行 `time` 命令时,你运行的是 shell 内建的命令。 但是,当你运行 `/usr/bin/time` 时,你运行的是真正的 **GNU time** 命令。 因此,为了执行真正的命令你可能需要给出完整路径。 + +在大多数 shell 中如 BASH、ZSH、CSH、KSH、TCSH 等,内建的关键字 `time` 是可用的。 `time` 关键字的选项少于该可执行文件,你可以使用的唯一选项是 `-p`。 + +你现在知道了如何使用 `time` 命令查找给定命令或进程的总执行时间。 想进一步了解 GNU time 工具吗? 继续阅读吧! + +### 关于 GNU time 程序的简要介绍 + +GNU time 程序运行带有给定参数的命令或程序,并在命令完成后将系统资源使用情况汇总到标准输出。 与 `time` 关键字不同,GNU time 程序不仅显示命令或进程的执行时间,还显示内存、I/O 和 IPC 调用等其他资源。 + +`time` 命令的语法是: + +``` +/usr/bin/time [options] command [arguments...] +``` + +上述语法中的 `options` 是指一组可以与 `time` 命令一起使用去执行特定功能的选项。 下面给出了可用的选项: + + * `-f, –format` —— 使用此选项可以根据需求指定输出格式。 + * `-p, –portability` —— 使用简要的输出格式。 + * `-o file, –output=FILE` —— 将输出写到指定文件中而不是到标准输出。 + * `-a, –append` —— 将输出追加到文件中而不是覆盖它。 + * `-v, –verbose` —— 此选项显示 `time` 命令输出的详细信息。 + * `–quiet` – 此选项可以防止 `time` 命令报告程序的状态. 
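+
+在逐一演示之前,先把上面几个选项组合起来给出一个示意用法(其中 `sleep 2` 和 `time.log` 只是假设的示例命令和文件名,输出数值因系统而异,仅供参考):
+
+```
+$ # 将自定义格式的计时结果追加写入 time.log(示例)
+$ /usr/bin/time -f "%E real, %U user, %S sys" -o time.log -a sleep 2
+$ cat time.log
+0:02.00 real, 0.00 user, 0.00 sys
+```
+
+由于使用了 `-a` 选项,重复运行时新的结果会追加到 `time.log` 的末尾,而不会覆盖之前的记录。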
+ +当不带任何选项使用 GNU time 命令时,你将看到以下输出。 + +``` +$ /usr/bin/time wc /etc/hosts +9 28 273 /etc/hosts +0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2024maxresident)k +0inputs+0outputs (0major+73minor)pagefaults 0swaps +``` + +如果你用 shell 关键字 `time` 运行相同的命令, 输出会有一点儿不同: + +``` +$ time wc /etc/hosts +9 28 273 /etc/hosts + +real 0m0.006s +user 0m0.001s +sys 0m0.004s +``` + +有时,你可能希望将系统资源使用情况输出到文件中而不是终端上。 为此, 你可以使用 `-o` 选项,如下所示。 + +``` +$ /usr/bin/time -o file.txt ls +dir1 dir2 file1 file2 file.txt mcelog +``` + +正如你看到的,`time` 命令不会显示到终端上。因为我们将输出写到了`file.txt` 的文件中。 让我们看一下这个文件的内容: + +``` +$ cat file.txt +0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2512maxresident)k +0inputs+0outputs (0major+106minor)pagefaults 0swaps +``` + +当你使用 `-o` 选项时, 如果你没有一个名为 `file.txt` 的文件,它会创建一个并把输出写进去。如果文件存在,它会覆盖文件原来的内容。 + +你可以使用 `-a` 选项将输出追加到文件后面,而不是覆盖它的内容。 + +``` +$ /usr/bin/time -a file.txt ls +``` + +`-f` 选项允许用户根据自己的喜好控制输出格式。 比如说,以下命令的输出仅显示用户,系统和总时间。 + +``` +$ /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" ls +dir1 dir2 file1 file2 mcelog +0:00.00 real, 0.00 user, 0.00 sys +``` + +请注意 shell 中内建的 `time` 命令并不具有 GNU time 程序的所有功能。 + +有关 GNU time 程序的详细说明可以使用 `man` 命令来查看。 + +``` +$ man time +``` + +想要了解有关 Bash 内建 `time` 关键字的更多信息,请运行: + +``` +$ help time +``` + +就到这里吧。 希望对你有所帮助。 + +会有更多好东西分享哦。 请关注我们! + +加油哦! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-find-the-execution-time-of-a-command-or-process-in-linux/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[caixiangyue](https://github.com/caixiangyue) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/find-long-process-running-linux/ +[2]: https://www.ostechnix.com/the-type-command-tutorial-with-examples-for-beginners/ diff --git a/published/201811/20181108 Choosing a printer for Linux.md b/published/201811/20181108 Choosing a printer for Linux.md new file mode 100644 index 0000000000..0d13ffd990 --- /dev/null +++ b/published/201811/20181108 Choosing a printer for Linux.md @@ -0,0 +1,79 @@ +为 Linux 选择打印机 +====== + +> Linux 为打印机提供了广泛的支持。学习如何利用它。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) + +我们在传闻已久的无纸化社会方面取得了重大进展,但我们仍需要不时打印文件。如果你是 Linux 用户,并有一台没有 Linux 安装盘的打印机,或者你正准备在市场上购买新设备,那么你很幸运。因为大多数 Linux 发行版(以及 MacOS)都使用通用 Unix 打印系统([CUPS][1]),它包含了当今大多数打印机的驱动程序。这意味着 Linux 为打印机提供了比 Windows 更广泛的支持。 + +### 选择打印机 + +如果你需要购买新打印机,了解它是否支持 Linux 的最佳方法是查看包装盒或制造商网站上的文档。你也可以搜索 [Open Printing][2] 数据库。它是检查各种打印机与 Linux 兼容性的绝佳资源。 + +以下是与 Linux 兼容的佳能打印机的一些 Open Printing 结果。 + +![](https://opensource.com/sites/default/files/uploads/linux-printer_2-openprinting.png) + +下面的截图是 Open Printing 的 Hewlett-Packard LaserJet 4050 的结果 —— 根据数据库,它应该可以“完美”工作。这里列出了建议驱动以及通用说明,让我了解它适用于 CUPS、行式打印守护程序(LPD)、LPRng 等。 + +![](https://opensource.com/sites/default/files/uploads/linux-printer_3-hplaserjet.png) + +在任何情况下,最好在购买打印机之前检查制造商的网站并询问其他 Linux 用户。 + +### 检查你的连接 + +有几种方法可以将打印机连接到计算机。如果你的打印机是通过 USB 连接的,那么可以在 Bash 提示符下输入 `lsusb` 来轻松检查连接。 + +``` +$ lsusb +``` + +该命令返回 “Bus 002 Device 004: ID 03f0:ad2a Hewlett-Packard” —— 这没有太多价值,但可以得知打印机已连接。我可以通过输入以下命令获得有关打印机的更多信息: + +``` +$ dmesg | grep -i usb +``` + +结果更加详细。 + +![](https://opensource.com/sites/default/files/uploads/linux-printer_1-dmesg.png) + +如果你尝试将打印机连接到并口(假设你的计算机有并口 —— 
如今很少见),你可以使用此命令检查连接: + +``` +$ dmesg | grep -i parport +``` + +返回的信息可以帮助我为我的打印机选择正确的驱动程序。我发现,如果我坚持使用流行的名牌打印机,大部分时间我都能获得良好的效果。 + +### 设置你的打印机软件 + +Fedora Linux 和 Ubuntu Linux 都包含简单的打印机设置工具。[Fedora][3] 为打印问题的答案维护了一个出色的 wiki。可以在 GUI 中的设置轻松启动这些工具,也可以在命令行上调用 `system-config-printer`。 + +![](https://opensource.com/sites/default/files/uploads/linux-printer_4-printersetup.png) + +HP 支持 Linux 打印的 [HP Linux 成像和打印][4] (HPLIP) 软件可能已安装在你的 Linux 系统上。如果没有,你可以为你的发行版[下载][5]最新版本。打印机制造商 [Epson][6] 和 [Brother][7] 也有带有 Linux 打印机驱动程序和信息的网页。 + +你最喜欢的 Linux 打印机是什么?请在评论中分享你的意见。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/choosing-printer-linux + +作者:[Don Watkins][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://www.cups.org/ +[2]: http://www.openprinting.org/printers +[3]: https://fedoraproject.org/wiki/Printing +[4]: https://developers.hp.com/hp-linux-imaging-and-printing +[5]: https://developers.hp.com/hp-linux-imaging-and-printing/gethplip +[6]: https://epson.com/Support/wa00821 +[7]: https://support.brother.com/g/s/id/linux/en/index.html?c=us_ot&lang=en&comple=on&redirect=on diff --git a/published/20181108 The Difference Between more, less And most Commands.md b/published/201811/20181108 The Difference Between more, less And most Commands.md similarity index 100% rename from published/20181108 The Difference Between more, less And most Commands.md rename to published/201811/20181108 The Difference Between more, less And most Commands.md diff --git a/published/20181109 7 reasons I love open source.md b/published/201811/20181109 7 reasons I love open source.md similarity index 100% rename from published/20181109 7 reasons I love open source.md rename to published/201811/20181109 7 reasons I love open source.md diff --git a/published/201811/20181113 4 tips for learning Golang.md b/published/201811/20181113 4 tips for learning Golang.md new file mode 100644 index 0000000000..ed80a40ded --- /dev/null +++ b/published/201811/20181113 4 tips for learning Golang.md @@ -0,0 +1,80 @@ +学习 Golang 的 4 个技巧 +====== + +> 到达 Golang 大陆:一位资深开发者之旅。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_laptop_code_programming_mountain_view.jpg?itok=yx5buqkr) + +2014 年夏天…… + +> IBM:“我们需要你弄清楚这个 Docker。” + +> 我:“没问题。” + +> IBM:“那就开始吧。” + +> 我:“好的。”(内心声音):”Docker 是用 Go 编写的。是吗?“(Google 一下)“哦,一门编程语言。我在我的岗位上已经学习了很多了。这不会太难。” + +我的大学新生编程课是使用 VAX 汇编程序教授的。在数据结构课上,我们使用 Pascal —— 在图书馆计算机中心的旧电脑上使用软盘加载。在一门更高一级的课程中,我的教授教授喜欢用 ADA 去展示所有的例子。在我们的 Sun 工作站上,我通过各种 UNIX 的实用源代码学到了一点 C。在 IBM,OS/2 源代码中我们使用了 C 和一些 x86 汇编程序;在一个与 Apple 合作的项目中我们大量使用 C++ 的面向对象功能。不久后我学到了 shell 脚本,开始是 csh,但是在 90 年代中期发现 Linux 后就转到了 Bash。在 90 年代后期,我在将 IBM 的定制的 JVM 代码中的即时(JIT)编译器移植到 Linux 时,我不得不开始学习 m4(与其说是编程语言,不如说是一种宏处理器)。 + +一晃 20 年……我从未因为学习一门新的编程语言而焦灼。但是 [Go][1] 让我感觉有些不同。我打算公开贡献,上传到 GitHub,让任何有兴趣的人都可以看到!作为一个 40 多岁的资深开发者的 Go 新手,我不想成为一个笑话。我们都知道程序员的骄傲,不想丢人,不论你的经验水平如何。 + +我早期的调研显示,Go 似乎比某些语言更 “地道”。它不仅仅是让代码可以编译;也需要让代码可以 “Go Go Go”。 + +现在,我的个人的 Go 之旅四年间有了几百个拉取请求(PR),我不是致力于成为一个专家,但是现在我觉得贡献和编写代码比我在 2014 年的时候更舒服了。所以,你该怎么教一个老人新的技能或者一门编程语言呢?以下是我自己在前往 Golang 大陆之旅的四个步骤。 + +### 1、不要跳过基础 + +虽然你可以通过复制代码来进行你早期的学习(谁还有时间阅读手册!?),Go 有一个非常易读的 [语言规范][2],它写的很易于理解,即便你在语言或者编译理论方面没有取得硕士学位。鉴于 Go 的 **参数:类型** 顺序的特有习惯,以及一些有趣的语言功能,例如通道和 go 
协程,搞定这些新概念是非常重要的是事情。阅读这个附属的文档 [高效 Go 编程][3],这是 Golang 创造者提供的另一个重要资源,它将为你提供有效和正确使用语言的准备。 + +### 2、从最好的中学习 + +有许多宝贵的资源可供挖掘,可以将你的 Go 知识提升到下一个等级。最近在 [GopherCon][4] 上的所有讲演都可以在网上找到,如这个 [GopherCon US 2018][5] 的详尽列表。这些讲演的专业知识和技术水平各不相同,但是你可以通过它们轻松地找到一些你所不了解的事情。[Francesc Campoy][6] 创建了一个名叫 [JustForFunc][7] 的 Go 编程视频系列,其不断增多的剧集可以用来拓宽你的 Go 知识和理解。直接搜索 “Golang" 可以为那些想要了解更多信息的人们展示许多其它视频和在线资源。 + +想要看代码?在 GitHub 上许多受欢迎的云原生项目都是用 Go 写的:[Docker/Moby][8]、[Kubernetes][9]、[Istio][10]、[containerd][11]、[CoreDNS][12],以及许多其它的。语言纯粹主义者可能会认为一些项目比另外一些更地道,但这些都是很好的起点,可以看到在高度活跃的项目的大型代码库中使用 Go 的程度。 + +### 3、使用优秀的语言工具 + +你会很快了解到 [gofmt][13] 的宝贵之处。Go 最漂亮的一个地方就在于没有关于每个项目代码格式的争论 —— **gofmt** 内置在语言的运行环境中,并且根据一系列可靠的、易于理解的语言规则对 Go 代码进行格式化。我不知道有哪个基于 Golang 的项目会在持续集成中不坚持使用 **gofmt** 检查拉取请求。 + +除了直接构建于运行环境和 SDK 中的一系列有价值的工具之外,我强烈建议使用一个对 Golang 的特性有良好支持的编辑器或者 IDE。由于我经常在命令行中进行工作,我依赖于 Vim 加上强大的 [vim-go][14] 插件。我也喜欢微软提供的 [VS Code][15],特别是它的 [Go 语言][16] 插件。 + +想要一个调试器?[Delve][17] 项目在不断的改进和成熟,它是在 Go 二进制文件上进行 [gdb][18] 式调试的强有力的竞争者。 + +### 4、写一些代码 + +你要是不开始尝试使用 Go 写代码,你永远不知道它有什么好的地方。找一个有 “需要帮助” 问题标签的项目,然后开始贡献代码。如果你已经使用了一个用 Go 编写的开源项目,找出它是否有一些可以用初学者方式解决的 Bug,然后开始你的第一个拉取请求。与生活中的大多数事情一样,实践出真知,所以开始吧。 + +事实证明,你可以教会一个资深的老开发者一门新的技能甚至编程语言。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/learning-golang + +作者:[Phill Estes][a] +选题:[lujun9972][b] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/estesp +[b]: https://github.com/lujun9972 +[1]: https://golang.org/ +[2]: https://golang.org/ref/spec +[3]: https://golang.org/doc/effective_go.html +[4]: https://www.gophercon.com/ +[5]: https://tqdev.com/2018-gophercon-2018-videos-online +[6]: https://twitter.com/francesc +[7]: https://www.youtube.com/channel/UC_BzFbxG2za3bp5NRRRXJSw +[8]: https://github.com/moby/moby +[9]: https://github.com/kubernetes/kubernetes +[10]: https://github.com/istio/istio +[11]: https://github.com/containerd/containerd +[12]: https://github.com/coredns/coredns +[13]: https://blog.golang.org/go-fmt-your-code +[14]: https://github.com/fatih/vim-go +[15]: https://code.visualstudio.com/ +[16]: https://code.visualstudio.com/docs/languages/go +[17]: https://github.com/derekparker/delve +[18]: https://www.gnu.org/software/gdb/ diff --git a/translated/tech/20181113 The alias And unalias Commands Explained With Examples.md b/published/201811/20181113 The alias And unalias Commands Explained With Examples.md similarity index 58% rename from translated/tech/20181113 The alias And unalias Commands Explained With Examples.md rename to published/201811/20181113 The alias And unalias Commands Explained With Examples.md index da859b2d29..1448918a1e 100644 --- a/translated/tech/20181113 The alias And unalias Commands Explained With Examples.md +++ b/published/201811/20181113 The alias And unalias Commands Explained With Examples.md @@ -1,22 +1,23 @@ 举例说明 alias 和 unalias 命令 ====== + ![](https://www.ostechnix.com/wp-content/uploads/2018/11/alias-command-720x340.png) -如果不是一个深度的命令行用户的话,你可能已经忘记了这些复杂且冗长的 Linux 命令了。当然,有很多方法可以让你 [**回想起遗忘的命令**][1]。你可以简单的 [**保存常用的命令**][2] 然后按需使用。也可以在终端里 [**标记重要的命令**][3],然后在任何时候你想要的时间使用它们。而且,Linux 有一个内建命令 **history** 可以帮助你记忆这些命令。另外一个最简便的方式就是为这些命令创建一个别名。你可以为任何经常重复调用的常用命令创建别名,而不仅仅是长命令。通过这种方法,你不必再过多地记忆这些命令。这篇文章中,我们将会在 Linux 环境下举例说明 **alias** 和 **unalias** 命令。 +如果不是一个命令行重度用户的话,过了一段时间之后,你就可能已经忘记了这些复杂且冗长的 Linux 命令了。当然,有很多方法可以让你 
[回想起遗忘的命令][1]。你可以简单的 [保存常用的命令][2] 然后按需使用。也可以在终端里 [标记重要的命令][3],然后在任何时候你想要的时间使用它们。而且,Linux 有一个内建命令 `history` 可以帮助你记忆这些命令。另外一个记住这些如此长的命令的简便方式就是为这些命令创建一个别名。你可以为任何经常重复调用的常用命令创建别名,而不仅仅是长命令。通过这种方法,你不必再过多地记忆这些命令。这篇文章中,我们将会在 Linux 环境下举例说明 `alias` 和 `unalias` 命令。 ### alias 命令 -**alias** 使用一个用户自定义的字符串来代替一个或者一串命令(包括多个选项,参数)。这个字符串可以是一个简单的名字或者缩写,不管这个命令原来多么复杂。alias 命令已经预装在 shell(包括 BASH,Csh,Ksh 和 Zsh 等) 当中。 +`alias` 使用一个用户自定义的字符串来代替一个或者一串命令(包括多个选项、参数)。这个字符串可以是一个简单的名字或者缩写,不管这个命令原来多么复杂。`alias` 命令已经预装在 shell(包括 BASH、Csh、Ksh 和 Zsh 等) 当中。 - -alias 的通用语法是: +`alias` 的通用语法是: ``` alias [alias-name[=string]...] ``` + 接下来看几个例子。 -**列出别名** +#### 列出别名 可能在你的系统中已经设置了一些别名。有些应用在你安装它们的时候可能已经自动创建了别名。要查看已经存在的别名,运行: @@ -40,7 +41,7 @@ alias pbpaste='xclip -selection clipboard -o' alias update='newsbeuter -r && sudo pacman -Syu' ``` -**创建一个新的别名** +#### 创建一个新的别名 像我之前说的,你不必去记忆这些又臭又长的命令。你甚至不必一遍一遍的运行长命令。只需要为这些命令创建一个简单易懂的别名,然后在任何你想使用的时候运行这些别名就可以了。这种方式会让你爱上命令行。 @@ -54,21 +55,22 @@ $ du -h --max-depth=1 | sort -hr $ alias du='du -h --max-depth=1 | sort -hr' ``` -这里的 **du** 就是这条命令的别名。这个别名可以被设置为任何名字,主要便于记忆和区别。 +这里的 `du` 就是这条命令的别名。这个别名可以被设置为任何名字,主要便于记忆和区别。 在创建一个别名的时候,使用单引号或者双引号都是可以的。这两种方法最后的结果没有任何区别。 -现在你可以运行这个别名(例如我们这个例子中的 **du** )。它和上面的原命令将会产生相同的结果。 +现在你可以运行这个别名(例如我们这个例子中的 `du` )。它和上面的原命令将会产生相同的结果。 这个别名仅限于当前 shell 会话中。一旦你退出了当前 shell 会话,别名也就失效了。为了让这些别名长久有效,你需要把它们添加到你 shell 的配置文件当中。 -BASH,编辑 **~/.bashrc** 文件: +BASH,编辑 `~/.bashrc` 文件: ``` $ nano ~/.bashrc ``` 一行添加一个别名: + ![](https://www.ostechnix.com/wp-content/uploads/2018/11/alias.png) 保存并退出这个文件。然后运行以下命令更新修改: @@ -79,21 +81,20 @@ $ source ~/.bashrc 现在,这些别名在所有会话中都可以永久使用了。 -ZSH,你需要添加这些别名到 **~/.zshrc**文件中。 -Fish,跟上面的类似,添加这些别名到 **~/.config/fish/config.fish** 文件中。 +ZSH,你需要添加这些别名到 `~/.zshrc`文件中。Fish,跟上面的类似,添加这些别名到 `~/.config/fish/config.fish` 文件中。 -**查看某个特定的命令别名** +#### 查看某个特定的命令别名 -像我上面提到的,你可以使用 ‘alias’ 命令列出你系统中所有的别名。如果你想查看跟给定的别名有关的命令,例如 ‘du’,只需要运行: +像我上面提到的,你可以使用 `alias` 命令列出你系统中所有的别名。如果你想查看跟给定的别名有关的命令,例如 `du`,只需要运行: ``` $ alias du alias du='du -h --max-depth=1 | sort -hr' ``` -像你看到的那样,上面的命令可以显示与单词 ‘du’ 有关的命令。 +像你看到的那样,上面的命令可以显示与单词 `du` 有关的命令。 -关于 别名 命令更多的细节,参阅 man 手册页: +关于 `alias` 命令更多的细节,参阅 man 手册页: ``` $ man alias @@ -101,23 +102,23 @@ $ man alias ### unalias 命令 -跟它的名字说的一样,**unalias** 命令可以很轻松地从你的系统当中移除别名。unalias 命令的通用语法是: +跟它的名字说的一样,`unalias` 命令可以很轻松地从你的系统当中移除别名。`unalias` 命令的通用语法是: ``` unalias ``` -要移除命令的别名,像我们之前创建的 ‘du’,只需要运行: +要移除命令的别名,像我们之前创建的 `du`,只需要运行: ``` $ unalias du ``` -unalias 命令不仅会从当前会话中移除别名,也会从你的 shell 配置文件中永久地移除别名。 +`unalias` 命令不仅会从当前会话中移除别名,也会从你的 shell 配置文件中永久地移除别名。 还有一种移除别名的方法,是创建具有相同名称的新别名。 -要从当前会话中移除所有的别名,使用 **-a** 选项: +要从当前会话中移除所有的别名,使用 `-a` 选项: ``` $ unalias -a @@ -144,7 +145,7 @@ via: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-exa 作者:[SK][a] 选题:[lujun9972][b] 译者:[dianbanjiu](https://github.com/dianbanjiu) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201811/20181113 What you need to know about the GPL Cooperation Commitment.md b/published/201811/20181113 What you need to know about the GPL Cooperation Commitment.md new file mode 100644 index 0000000000..2218dfcd2c --- /dev/null +++ b/published/201811/20181113 What you need to know about the GPL Cooperation Commitment.md @@ -0,0 +1,55 @@ +GPL 合作承诺的发展历程 +====== + +> GPL 合作承诺GPL Cooperation Commitment消除了开发者对许可证失效的顾虑,从而达到促进技术创新的目的。 + 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Law_balance_open_source.png?itok=5c4JhuEY) + +假如能免于顾虑,技术创新和发展将会让世界发生天翻地覆的改变。[GPL 合作承诺][1]GPL Cooperation Commitment就这样应运而生,只为通过公平、一致、可预测的许可证来让科技创新无后顾之忧。 + +去年,我曾经写过一篇文章,讨论了许可证对开源软件下游用户的影响。在进行研究的时候,我就发现许可证的约束力并不强,而且很多情况下是不可预测的。因此,我在文章中提出了一个能使开源许可证具有一致性和可预测性的潜在解决方案。但我只考虑到了诸如通过法律系统立法的“传统”方法。 + +2017 年 11 月,RedHat、IBM、Google 和 Facebook 提出了这种我从未考虑过的非传统的解决方案:GPL 合作承诺。GPL 合作承诺规定了 GPL 公平一致执行的方式。我认为,GPL 合作承诺之所以有这么深刻的意义,有以下两个原因:一是许可证的公平性和一致性对于开源社区的发展来说至关重要,二是法律对不可预测性并不容忍。 + +### 了解 GPL + +要了解 GPL 合作承诺,首先要了解什么是 GPL。GPL 是 [GNU 通用许可证][2]GNU General Public License的缩写,它是一个公共版权的开源许可证,这就意味着开源软件的分发者必须向下游用户公开源代码。GPL 还禁止对下游的使用作出限制,要求个人用户不得拒绝他人对开源软件的使用自由、研究自由、共享自由和改进自由。GPL 规定,只要下游用户满足了许可证的要求和条件,就可以使用该许可证。如果被许可人出现了不符合许可证的情况,则视为违规。 + +按照第二版 GPL(GPLv2)的描述,许可证会在任何违规的情况下自动终止,这就导致了部分开发者对 GPL 有所抗拒。而在第三版 GPL(GPLv3)中则引入了“[治愈条款][3]cure provision”,这一条款规定,被许可人可以在 30 天内对违反 GPL 的行为进行改正,如果在这个缓冲期内改正完成,许可证就不会被终止。 + +这一规定消除了许可证被无故终止的顾虑,从而让软件的开发者和用户专注于开发和创新。 + +### GPL 合作承诺做了什么 + +GPL 合作承诺将 GPLv3 的治愈条款应用于使用 GPLv2 的软件上,让使用 GPLv2 许可证的开发者避免许可证无故终止的窘境,并与 GPLv3 许可证保持一致。 + +很多软件开发者都希望正确合规地做好一件事情,但有时候却不了解具体的实施细节。因此,GPL 合作承诺的重要性就在于能够对软件开发者们做出一些引导,让他们避免因一些简单的错误导致许可证违规终止。 + +Linux 基金会技术顾问委员会在 2017 年宣布,Linux 内核项目将会[采用 GPLv3 的治愈条款][4]。在 GPL 合作承诺的推动下,很多大型科技公司和个人开发者都做出了相同的承诺,会将该条款扩展应用于他们采用 GPLv2(或 LGPLv2.1)许可证的所有软件,而不仅仅是对 Linux 内核的贡献。 + +GPL 合作承诺的广泛采用将会对开源社区产生非常积极的影响。如果更多的公司和个人开始采用 GPL 合作承诺,就能让大量正在使用 GPLv2 或 LGPLv2.1 许可证的软件以更公平和更可预测的形式履行许可证中的条款。 + +截至 2018 年 11 月,包括 IBM、Google、亚马逊、微软、腾讯、英特尔、RedHat 在内的 40 余家行业巨头公司都已经[签署了 GPL 合作承诺][5],以期为开源社区创立公平的标准以及提供可预测的执行力。GPL 合作承诺是开源社区齐心协力引领开源未来发展方向的一个成功例子。 + +GPL 合作承诺能够让下游用户了解到开发者对他们的尊重,同时也表示了开发者使用了 GPLv2 许可证的代码是安全的。如果你想查阅更多信息,包括如何将自己的名字添加到 GPL 合作承诺中,可以访问 [GPL 合作承诺的网站][6]。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/gpl-cooperation-commitment + +作者:[Brooke Driver][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bdriver +[b]: https://github.com/lujun9972 +[1]: https://gplcc.github.io/gplcc/ +[2]: https://www.gnu.org/licenses/licenses.en.html +[3]: https://opensource.com/article/18/6/gplv3-anniversary +[4]: https://www.kernel.org/doc/html/v4.16/process/kernel-enforcement-statement.html +[5]: https://gplcc.github.io/gplcc/Company/Company-List.html +[6]: http://gplcc.github.io/gplcc + diff --git a/translated/tech/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md b/published/201811/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md similarity index 84% rename from translated/tech/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md rename to published/201811/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md index 60b53cca9a..99a92d917b 100644 --- a/translated/tech/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md +++ b/published/201811/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md @@ -1,18 +1,19 @@ ProtectedText:一个免费的在线加密笔记 ====== + ![](https://www.ostechnix.com/wp-content/uploads/2018/11/protected-text-720x340.png) -记录笔记是我们每个人必备的重要技能,它可以帮助我们把自己听到、读到、学到的内容长期地保留下来,也有很多的应用和工具都能让我们更好地记录笔记。下面我要介绍一个叫做 **ProtectedText** 的应用,一个可以将你的笔记在线上保存起来的免费的加密笔记。它是一个免费的 web 
服务,在上面记录文本以后,它将会对文本进行加密,只需要一台支持连接到互联网并且拥有 web 浏览器的设备,就可以访问到记录的内容。 +记录笔记是我们每个人必备的重要技能,它可以帮助我们把自己听到、读到、学到的内容长期地保留下来,也有很多的应用和工具都能让我们更好地记录笔记。下面我要介绍一个叫做 **ProtectedText** 的应用,这是一个可以将你的笔记在线上保存起来的免费的加密笔记。它是一个免费的 web 服务,在上面记录文本以后,它将会对文本进行加密,只需要一台支持连接到互联网并且拥有 web 浏览器的设备,就可以访问到记录的内容。 ProtectedText 不会向你询问任何个人信息,也不会保存任何密码,没有广告,没有 Cookies,更没有用户跟踪和注册流程。除了拥有密码能够解密文本的人,任何人都无法查看到笔记的内容。而且,使用前不需要在网站上注册账号,写完笔记之后,直接关闭浏览器,你的笔记也就保存好了。 ### 在加密笔记本上记录笔记 -访问 这个链接,就可以打开 ProtectedText 页面了。这个时候你将进入网站主页,接下来需要在页面上的输入框输入一个你想用的名称,或者在地址栏后面直接加上想用的名称。这个名称是一个自定义的名称(例如 ),是你查看自己保存的笔记的专有入口。 +访问 这个链接,就可以打开 ProtectedText 页面了(LCTT 译注:如果访问不了,你知道的)。这个时候你将进入网站主页,接下来需要在页面上的输入框输入一个你想用的名称,或者在地址栏后面直接加上想用的名称。这个名称是一个自定义的名称(例如 ),是你查看自己保存的笔记的专有入口。 ![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-1.png) -如果你选用的名称还没有被占用,你就会看到下图中的提示信息。点击“Create”键就可以创建你的个人笔记页了。 +如果你选用的名称还没有被占用,你就会看到下图中的提示信息。点击 “Create” 键就可以创建你的个人笔记页了。 ![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-2.png) @@ -20,11 +21,11 @@ ProtectedText 不会向你询问任何个人信息,也不会保存任何密码 ProtectedText 使用 AES 算法对你的笔记内容进行加密和解密,而计算散列则使用了 SHA512 算法。 -笔记记录完毕以后,点击顶部的“Save”键保存。 +笔记记录完毕以后,点击顶部的 “Save” 键保存。 ![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-3.png) -按下保存键之后,ProtectedText 会提示你输入密码以加密你的笔记内容。按照它的要求输入两次密码,然后点击“Save”键。 +按下保存键之后,ProtectedText 会提示你输入密码以加密你的笔记内容。按照它的要求输入两次密码,然后点击 “Save” 键。 ![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-4.png) @@ -45,15 +46,12 @@ ProtectedText 还有配套的 [Android 应用][6] 可以让你在移动设备上 * 存储的内容没有到期时间,只要你愿意,笔记内容可以一直保存在服务器上 * 可以让你的数据限制为私有或公开开放 - - **缺点** * 尽管客户端代码是公开的,但服务端代码并没有公开,因此你无法自行搭建一个类似的服务。如果你不信任这个网站,请不要使用。 * 由于网站不存储你的任何个人信息,包括你的密码,因此如果你丢失了密码,数据将永远无法恢复。网站方还声称他们并不清楚谁拥有了哪些数据,所以一定要牢记密码。 - 如果你想通过一种简单的方式将笔记保存到线上,并且需要在不需要安装任何工具的情况下访问,那么 ProtectedText 会是一个好的选择。如果你还知道其它类似的应用程序,欢迎在评论区留言! 
@@ -65,7 +63,7 @@ via: https://www.ostechnix.com/protectedtext-a-free-encrypted-notepad-to-save-yo 作者:[SK][a] 选题:[lujun9972][b] 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201811/20181115 How to install a device driver on Linux.md b/published/201811/20181115 How to install a device driver on Linux.md new file mode 100644 index 0000000000..bd1c3fd353 --- /dev/null +++ b/published/201811/20181115 How to install a device driver on Linux.md @@ -0,0 +1,144 @@ +如何在 Linux 上安装设备驱动程序 +====== + +> 学习 Linux 设备驱动如何工作,并知道如何使用它们。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc) + +对于一个熟悉 Windows 或者 MacOS 的人,想要切换到 Linux,它们都会面临一个艰巨的问题就是怎么安装和配置设备驱动。这是可以理解的,因为 Windows 和 MacOS 都有一套机制把这个过程做得非常的友好。比如说,当你插入一个新的硬件设备, Windows 能够自动检测并会弹出一个窗口询问你是否要继续驱动程序的安装。你也可以从网络上下载驱动程序,仅仅需要双击解压或者是通过设备管理器导入驱动程序即可。 + +而这在 Linux 操作系统上并非这么简单。第一个原因是, Linux 是一个开源的操作系统,所以有 [数百种 Linux 发行版的变体][1]。也就是说不可能做一个指南来适应所有的 Linux 发行版。因为每种 Linux 安装驱动程序的过程都有差异。 + +第二,大多数默认的 Linux 驱动程序也都是开源的,并被集成到了系统中,这使得安装一些并未包含的驱动程序变得非常复杂,即使已经可以检测大多数的硬件设备。第三,不同发行版的许可也有差异。例如,[Fedora 禁止事项][2] 禁止包含专有的、受法律保护,或者是违反美国法律的驱动程序。而 Ubuntu 则让用户[避免使用受法律保护或闭源的硬件设备][3]。 + +为了更好的学习 Linux 驱动程序是如何工作的,我建议阅读 《Linux 设备驱动程序》一书中的 [设备驱动程序简介][4]。 + +### 两种方式来寻找驱动程序 + +#### 1、 用户界面 + +如果是一个刚从 Windows 或 MacOS 转过来的 Linux 新手,那你会很高兴知道 Linux 也提供了一个通过向导式的程序来查看驱动程序是否可用的方法。 Ubuntu 提供了一个 [附加驱动程序][5] 选项。其它的 Linux 发行版也提供了帮助程序,像 [GNOME 的包管理器][6],你可以使用它来检查驱动程序是否可用。 + +#### 2、 命令行 + +如果你通过漂亮的用户界面没有找到驱动程序,那又该怎么办呢?或许你只能通过没有任何图形界面的 shell?甚至你可以使用控制台来展现你的技能。你有两个选择: + +1. **通过一个仓库** + + 这和 MacOS 中的 [homebrew][7] 命令行很像。通过使用 `yum`、 `dnf`、`apt-get` 等等。你基本可以通过添加仓库,并更新包缓存。 +2. 
**下载、编译,然后自己构建** + + 这通常包括直接从网络,或通过 `wget` 命令下载源码包,然后运行配置和编译、安装。这超出了本文的范围,但是你可以在网络上找到很多在线指南,如果你选择的是这条路的话。 + +### 检查是否已经安装了这个驱动程序 + +在进一步学习安装 Linux 驱动程序之前,让我们来学习几条命令,用来检测驱动程序是否已经在你的系统上可用。 + +[lspci][8] 命令显示了系统上所有 PCI 总线和设备驱动程序的详细信息。 + +``` +$ lscpci +``` + +或者使用 `grep`: + +``` +$ lscpci | grep SOME_DRIVER_KEYWORD +``` + +例如,你可以使用 `lspci | grep SAMSUNG` 命令,如果你想知道是否安装过三星的驱动。 + +[dmesg][9] 命令显示了所有内核识别的驱动程序。 + +``` +$ dmesg +``` + +或配合 `grep` 使用: + +``` +$ dmesg | grep SOME_DRIVER_KEYWORD +``` + +任何识别到的驱动程序都会显示在结果中。 + +如果通过 `dmesg` 或者 `lscpi` 命令没有识别到任何驱动程序,尝试下这两个命令,看看驱动程序至少是否加载到硬盘。 + +``` +$ /sbin/lsmod +``` + +和 + +``` +$ find /lib/modules +``` + +技巧:和 `lspci` 或 `dmesg` 一样,通过在上面的命令后面加上 `| grep` 来过滤结果。 + +如果一个驱动程序已经被识别到了,但是通过 `lscpi` 或 `dmesg` 并没有找到,这意味着驱动程序已经存在于硬盘上,但是并没有加载到内核中,这种情况,你可以通过 `modprobe` 命令来加载这个模块。 + +``` +$ sudo modprobe MODULE_NAME +``` + +使用 `sudo` 来运行这个命令,因为这个模块要使用 root 权限来安装。 + +### 添加仓库并安装 + +可以通过 `yum`、`dnf` 和 `apt-get` 几种不同的方式来添加一个仓库;一个个介绍完它们并不在本文的范围。简单一点来说,这个示例将会使用 `apt-get` ,但是这个命令和其它的几个都是很类似的。 + +#### 1、删除存在的仓库,如果它存在 + +``` +$ sudo apt-get purge NAME_OF_DRIVER* +``` + +其中 `NAME_OF_DRIVER` 是你的驱动程序的可能的名称。你还可以将模式匹配加到正则表达式中来进一步过滤。 + +#### 2、将仓库加入到仓库表中,这应该在驱动程序指南中有指定 + +``` +$ sudo add-apt-repository REPOLIST_OF_DRIVER +``` + +其中 `REPOLIST_OF_DRIVER` 应该从驱动文档中有指定(例如:`epel-list`)。 + +#### 3、更新仓库列表 + +``` +$ sudo apt-get update +``` + +#### 4、安装驱动程序 + +``` +$ sudo apt-get install NAME_OF_DRIVER +``` + +#### 5、检查安装状态 + +像上面说的一样,通过 `lscpi` 命令来检查驱动程序是否已经安装成功。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/how-install-device-driver-linux + +作者:[Bryant Son][a] +选题:[lujun9972][b] +译者:[Jamskr](https://github.com/Jamskr) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/brson +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/List_of_Linux_distributions +[2]: https://fedoraproject.org/wiki/Forbidden_items?rd=ForbiddenItems +[3]: https://www.ubuntu.com/licensing +[4]: https://www.xml.com/ldd/chapter/book/ch01.html +[5]: https://askubuntu.com/questions/47506/how-do-i-install-additional-drivers +[6]: https://help.gnome.org/users/gnome-packagekit/stable/add-remove.html.en +[7]: https://brew.sh/ +[8]: https://en.wikipedia.org/wiki/Lspci +[9]: https://en.wikipedia.org/wiki/Dmesg diff --git a/published/201811/20181116 Akash Angle- How do you Fedora.md b/published/201811/20181116 Akash Angle- How do you Fedora.md new file mode 100644 index 0000000000..ccd764f8aa --- /dev/null +++ b/published/201811/20181116 Akash Angle- How do you Fedora.md @@ -0,0 +1,62 @@ +Akash Angle:你如何使用 Fedora? +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/11/akash-angle-816x345.jpg) + +我们最近采访了Akash Angle 来了解他如何使用 Fedora。这是 Fedora Magazine 上 Fedora [系列的一部分][1]。该系列介绍 Fedora 用户以及他们如何使用 Fedora 完成工作。请通过[反馈表单][2]与我们联系表达你对成为受访者的兴趣。 + +### Akash Angle 是谁? + +Akash 是一位不久前抛弃 Windows 的 Linux 用户。作为一名过去 9 年的狂热 Fedora 用户,他已经尝试了几乎所有的 Fedora 定制版和桌面环境来完成他的日常任务。是一位校友给他介绍了 Fedora。 + +### 使用什么硬件? + +Akash 在工作时使用联想 B490。它配备了英特尔酷睿 i3-3310 处理器和 240GB 金士顿 SSD。Akash 说:“这台笔记本电脑非常适合一些日常任务,如上网、写博客,以及一些照片编辑和视频编辑。虽然不是专业的笔记本电脑,而且规格并不是那么高端,但它完美地完成了工作。“ + +他使用一个入门级的罗技无线鼠标,并希望能有一个机械键盘。他的 PC 是一台定制桌面电脑,拥有最新的第 7 代 Intel i5 7400 处理器和 8GB Corsair Vengeance 内存。 + +![][3] + +### 使用什么软件? 
+ +Akash 是 GNOME 3 桌面环境的粉丝。他喜欢该操作系统为完成基本任务而加入的华丽功能。 + +出于实际原因,他更喜欢全新安来升级到最新 Fedora 版本。他认为 Fedora 29 可以说是最好的工作站。Akash 说这种说法得到了各种科技传播网站和开源新闻网站评论的支持。 + +为了播放视频,他的首选是打包为 [Flatpak][4] 的 VLC 视频播放器 ,它提供了最新的稳定版本。当 Akash 想截图时,他的终极工具是 [Shutter,Magazine 曾介绍过][5]。对于图形处理,GIMP 是他不能离开的工具。 + +Google Chrome 稳定版和开发版是他最常用的网络浏览器。他还使用 Chromium 和 Firefox 的默认版本,有时甚至会使用 Opera。 + +由于他是一名资深用户,所以 Akash 其余时候都使用终端。GNOME Terminal 是他使用的一个终端。 + +#### 最喜欢的壁纸 + +他最喜欢的壁纸之一是下面最初来自 Fedora 16 的壁纸: + +![][6] + +这是他目前在 Fedora 29 工作站上使用的壁纸之一: + +![][7] + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/akash-angle-how-do-you-fedora/ + +作者:[Adam Šamalík][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/asamalik/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/tag/how-do-you-fedora/ +[2]: https://fedoramagazine.org/submit-an-idea-or-tip/ +[3]: https://fedoramagazine.org/wp-content/uploads/2018/11/akash-angle-desktop-300x259.png +[4]: https://fedoramagazine.org/getting-started-flatpak/ +[5]: https://fedoramagazine.org/screenshot-everything-shutter-fedora/ +[6]: https://fedoramagazine.org/wp-content/uploads/2018/11/Fedora-16-300x188.png +[7]: https://fedoramagazine.org/wp-content/uploads/2018/11/wallpaper2you_72588-300x169.jpg diff --git a/published/201811/20181117 How to enter single user mode in SUSE 12 Linux.md b/published/201811/20181117 How to enter single user mode in SUSE 12 Linux.md new file mode 100644 index 0000000000..333beaad19 --- /dev/null +++ b/published/201811/20181117 How to enter single user mode in SUSE 12 Linux.md @@ -0,0 +1,55 @@ +如何在 SUSE 12 Linux 中进入单用户模式? 
+====== + +> 一篇了解如何在 SUSE 12 Linux 服务器中进入单用户模式的简短文章。 + +![How to enter single user mode in SUSE 12 Linux][1] + +在这篇简短的文章中,我们将向你介绍在 SUSE 12 Linux 中进入单用户模式的步骤。在排除系统主要问题时,单用户模式始终是首选。单用户模式禁用网络并且没有其他用户登录,你可以排除许多多用户系统的情况,可以帮助你快速排除故障。单用户模式最常见的一种用处是[重置忘记的 root 密码][2]。 + +### 1、暂停启动过程 + +首先,你需要拥有机器的控制台才能进入单用户模式。如果它是虚拟机那就是虚拟机控制台,如果它是物理机那么你需要连接它的 iLO/串口控制台。重启系统并在 GRUB 启动菜单中按任意键停止内核的自动启动。 + +![Kernel selection menu at boot in SUSE 12][3] + +### 2、编辑内核的启动选项 + +进入上面的页面后,在所选内核(通常是你首选的最新内核)上按 `e` 更新其启动选项。你会看到下面的页面。 + +![grub2 edits in SUSE 12][4] + +现在,向下滚动到内核引导行,并在行尾添加 `init=/bin/bash`,如下所示。 + +![Edit to boot in single user shell][5] + +### 3、引导编辑后的内核 + +现在按 `Ctrl-x` 或 `F10` 来启动这个编辑过的内核。内核将以单用户模式启动,你将看到 `#` 号提示符,即有服务器的 root 访问权限。此时,根文件系统以只读模式挂载。因此,你对系统所做的任何更改都不会被保存。 + +运行以下命令以将根文件系统重新挂载为可重写入的。 + +``` +kerneltalks:/ # mount -o remount,rw / +``` + +这就完成了!继续在单用户模式中做你必要的事情吧。完成后不要忘了重启服务器引导到普通多用户模式。 + +-------------------------------------------------------------------------------- + +via: https://kerneltalks.com/howto/how-to-enter-single-user-mode-in-suse-12-linux/ + +作者:[kerneltalks][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://kerneltalks.com +[b]: https://github.com/lujun9972 +[1]: https://a4.kerneltalks.com/wp-content/uploads/2018/11/How-to-enter-single-user-mode-in-SUSE-12-Linux.png +[2]: https://kerneltalks.com/linux/recover-forgotten-root-password-rhel/ +[3]: https://a1.kerneltalks.com/wp-content/uploads/2018/11/Grub-menu-in-SUSE-12.png +[4]: https://a3.kerneltalks.com/wp-content/uploads/2018/11/grub2-editor.png +[5]: https://a4.kerneltalks.com/wp-content/uploads/2018/11/Edit-to-boot-in-single-user-shell.png diff --git a/published/201811/20181119 How To Customize Bash Prompt In Linux.md b/published/201811/20181119 How To Customize Bash Prompt In Linux.md new file mode 100644 index 0000000000..190fdb914b --- /dev/null +++ b/published/201811/20181119 How To Customize Bash Prompt In Linux.md @@ -0,0 +1,313 @@ +在 Linux 上自定义 bash 命令提示符 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2017/10/BASH-720x340.jpg) + +众所周知,**bash**(the **B**ourne-**A**gain **Sh**ell)是目前绝大多数 Linux 发行版使用的默认 shell。本文将会介绍如何通过添加颜色和样式来自定义 bash 命令提示符的显示。尽管很多插件或工具都可以很轻易地满足这一需求,但我们也可以不使用插件和工具,自己手动自定义一些基本的显示方式,例如添加或者修改某些元素、更改前景色、更改背景色等等。 + +### 在 Linux 中自定义 bash 命令提示符 + +在 bash 中,我们可以通过更改 `$PS1` 环境变量的值来自定义 bash 命令提示符。 + +一般情况下,bash 命令提示符会是以下这样的形式: + +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal.png) + +在上图这种默认显示形式当中,“sk” 是我的用户名,而 “ubuntuserver” 是我的主机名。 + +只要插入一些以反斜杠开头的特殊转义字符串,就可以按照你的喜好修改命令提示符了。下面我来举几个例子。 + +在开始之前,我强烈建议你预先备份 `~/.bashrc` 文件。 + +``` +$ cp ~/.bashrc ~/.bashrc.bak +``` + +#### 更改 bash 命令提示符中的 username@hostname 部分 + +如上所示,bash 命令提示符一般都带有 “username@hostname” 部分,这个部分是可以修改的。 + +只需要编辑 `~/.bashrc` 文件: + +``` +$ vi ~/.bashrc +``` + +在文件的最后添加一行: + +``` +PS1="ostechnix> " +``` + +将上面的 “ostechnix” 替换为任意一个你想使用的单词,然后按 `ESC` 并输入 `:wq` 保存、退出文件。 + +执行以下命令使刚才的修改生效: + +``` +$ source ~/.bashrc +``` + +你就可以看见 bash 命令提示符中出现刚才添加的 “ostechnix” 了。 + +![][3] + +再来看看另一个例子,比如将 “username@hostname” 替换为 “Hello@welcome>”。 + +同样是像刚才那样修改 `~/.bashrc` 文件。 + +``` +export PS1="Hello@welcome> " +``` + +然后执行 `source ~/.bashrc` 让修改结果立即生效。 + +以下是我在 Ubuntu 18.04 LTS 上修改后的效果。 + +![](https://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-1.png) + +#### 仅显示用户名 + +如果需要仅显示用户名,只需要在 `~/.bashrc` 文件中加入以下这一行。 + +``` +export PS1="\u " +``` + +这里的 `\u` 就是一个转义字符串。 + 
+下面提供了一些可以添加到 `$PS1` 环境变量中的用以改变 bash 命令提示符样式的转义字符串。每次修改之后,都需要执行 `source ~/.bashrc` 命令才能立即生效。 + +#### 显示用户名和主机名 + +``` +export PS1="\u\h " +``` + +命令提示符会这样显示: + +``` +skubuntuserver +``` + +#### 显示用户名和完全限定域名 + +``` +export PS1="\u\H " +``` + +#### 在用户名和主机名之间显示其它字符 + +如果你还需要在用户名和主机名之间显示其它字符(例如 `@`),可以使用以下格式: + +``` +export PS1="\u@\h " +``` + +命令提示符会这样显示: + +``` +sk@ubuntuserver +``` + +#### 显示用户名、主机名,并在末尾添加 $ 符号 + +``` +export PS1="\u@\h\\$ " +``` + +#### 综合以上两种显示方式 + +``` +export PS1="\u@\h> " +``` + +命令提示符最终会这样显示: + +``` +sk@ubuntuserver> +``` + +相似地,还可以添加其它特殊字符,例如冒号、分号、星号、下划线、空格等等。 + +#### 显示用户名、主机名、shell 名称 + +``` +export PS1="\u@\h>\s " +``` + +#### 显示用户名、主机名、shell 名称以及 shell 版本 + +``` +export PS1="\u@\h>\s\v " +``` + +bash 命令提示符显示样式: + +![][4] + +#### 显示用户名、主机名、当前目录 + +``` +export PS1="\u@\h\w " +``` + +如果当前目录是 `$HOME` ,会以一个波浪线(`~`)显示。 + +#### 在 bash 命令提示符中显示日期 + +除了用户名和主机名,如果还想在 bash 命令提示符中显示日期,可以在 `~/.bashrc` 文件中添加以下内容: + +``` +export PS1="\u@\h>\d " +``` + +![][5] + +#### 在 bash 命令提示符中显示日期及 12 小时制时间 + +``` +export PS1="\u@\h>\d\@ " +``` + +#### 显示日期及 hh:mm:ss 格式时间 + +``` +export PS1="\u@\h>\d\T " +``` + +#### 显示日期及 24 小时制时间 + +``` +export PS1="\u@\h>\d\A " +``` + +#### 显示日期及 24 小时制 hh:mm:ss 格式时间 + +``` +export PS1="\u@\h>\d\t " +``` + +以上是一些常见的可以改变 bash 命令提示符的转义字符串。除此以外的其它转义字符串,可以在 bash 的 man 手册 PROMPTING 章节中查阅。 + +你也可以随时执行以下命令查看当前的命令提示符样式。 + +``` +$ echo $PS1 +``` + +#### 在 bash 命令提示符中去掉 username@hostname 部分 + +如果我不想做任何调整,直接把 username@hostname 部分整个去掉可以吗?答案是肯定的。 + +如果你是一个技术方面的博主,你有可能会需要在网站或者博客中上传自己的 Linux 终端截图。或许你的用户名和主机名太拉风、太另类,不想让别人看到,在这种情况下,你就需要隐藏命令提示符中的 “username@hostname” 部分。 + +如果你不想暴露自己的用户名和主机名,只需要按照以下步骤操作。 + +编辑 `~/.bashrc` 文件: + +``` +$ vi ~/.bashrc +``` + +在文件末尾添加这一行: + +``` +PS1="\W> " +``` + +输入 `:wq` 保存并关闭文件。 + +执行以下命令让修改立即生效。 + +``` +$ source ~/.bashrc +``` + +现在看一下你的终端,“username@hostname” 部分已经消失了,只保留了一个 `~>` 标记。 + +![][6] + +如果你想要尽可能简单的操作,又不想弄乱你的 `~/.bashrc` 文件,最好的办法就是在系统中创建另一个用户(例如 “user@example”、“admin@demo”)。用带有这样的命令提示符的用户去截图或者录屏,就不需要顾虑自己的用户名或主机名被别人看见了。 + +**警告:**在某些情况下,这种做法并不推荐。例如像 zsh 这种 shell 会继承当前 shell 的设置,这个时候可能会出现一些意想不到的问题。这个技巧只用于隐藏命令提示符中的 “username@hostname” 部分,仅此而已,如果把这个技巧挪作他用,也可能会出现异常。 + +### 为 bash 命令提示符着色 + +目前我们也只是变更了 bash 命令提示符中的内容,下面介绍一下如何对命令提示符进行着色。 + +通过向 `~/.bashrc` 文件写入一些配置,可以修改 bash 命令提示符的前景色(也就是文本的颜色)和背景色。 + +例如,下面这一行配置可以令某些文本的颜色变成红色: + +``` +export PS1="\u@\[\e[31m\]\h\[\e[m\] " +``` + +添加配置后,执行 `source ~/.bashrc` 立即生效。 + +你的 bash 命令提示符就会变成这样: + +![][7] + +类似地,可以用这样的配置来改变背景色: + +``` +export PS1="\u@\[\e[31;46m\]\h\[\e[m\] " +``` + +![][8] + +### 添加 emoji + +大家都喜欢 emoji。还可以按照以下配置把 emoji 插入到命令提示符中。 + +``` +PS1="\W 🔥 >" +``` + +需要注意的是,emoji 的显示取决于使用的字体,因此某些终端可能会无法正常显示 emoji,取而代之的是一些乱码或者单色表情符号。 + +### 自定义 bash 命令提示符有点难,有更简单的方法吗? + +如果你是一个新手,编辑 `$PS1` 环境变量的过程可能会有些困难,因为命令提示符中的大量转义字符串可能会让你有点晕头转向。但不要担心,有一个在线的 bash `$PS1` 生成器可以帮助你轻松生成各种 `$PS1` 环境变量值。 + +就是这个[网站][9]: + +[![EzPrompt](https://www.ostechnix.com/wp-content/uploads/2017/10/EzPrompt.png)][9] + +只需要直接选择你想要的 bash 命令提示符样式,添加颜色、设计排序,然后就完成了。你可以预览输出,并将配置代码复制粘贴到 `~/.bashrc` 文件中。就这么简单。顺便一提,本文中大部分的示例都是通过这个网站制作的。 + +### 我把我的 ~/.bashrc 文件弄乱了,该如何恢复? + +正如我在上面提到的,强烈建议在更改 `~/.bashrc` 文件前做好备份(在更改其它重要的配置文件之前也一定要记得备份)。这样一旦出现任何问题,你都可以很方便地恢复到更改之前的配置状态。当然,如果你忘记了备份,还可以按照下面这篇文章中介绍的方法恢复为默认配置。 + +- [如何将 `~/.bashrc` 文件恢复到默认配置][10] + +这篇文章是基于 ubuntu 的,但也适用于其它的 Linux 发行版。不过事先声明,这篇文章的方法会将 `~/.bashrc` 文件恢复到系统最初时的状态,你对这个文件做过的任何修改都将丢失。 + +感谢阅读! 
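(附)如果想把前面介绍的几类设置合在一起用,可以参考下面这个组合示例,直接追加到 `~/.bashrc` 末尾即可。其中的颜色和内容都只是示意,可以按自己的喜好随意调整:

```
# 组合示例:绿色的 用户名@主机名,蓝色的当前目录,再加上 24 小时制时间
export PS1="\[\e[32m\]\u@\h\[\e[m\]:\[\e[34m\]\w\[\e[m\] [\A] \\$ "
```

修改完成后执行 `source ~/.bashrc` 即可看到效果;如果不满意,按照上面介绍的方法恢复备份就可以了。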
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/cdn-cgi/l/email-protection +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: http://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal-2.png +[4]: http://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-2.png +[5]: http://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-3.png +[6]: http://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal-1.png +[7]: http://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/bash-prompt-4/ +[8]: http://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/bash-prompt-5/ +[9]: http://ezprompt.net/ +[10]: https://www.ostechnix.com/restore-bashrc-file-default-settings-ubuntu/ + diff --git a/published/201811/20181120 How To Change GDM Login Screen Background In Ubuntu.md b/published/201811/20181120 How To Change GDM Login Screen Background In Ubuntu.md new file mode 100644 index 0000000000..9fbf743381 --- /dev/null +++ b/published/201811/20181120 How To Change GDM Login Screen Background In Ubuntu.md @@ -0,0 +1,86 @@ +如何更换 Ubuntu 系统的 GDM 登录界面背景 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-3.png) + +Ubuntu 18.04 LTS 桌面系统在登录、锁屏和解锁状态下,我们会看到一个纯紫色的背景。它是 GDM(GNOME 显示管理器GNOME Display Manager)从 ubuntu 17.04 版本开始使用的默认背景。有一些人可能会不喜欢这个纯色的背景,想换一个酷一点、更吸睛的!如果是这样,你找对地方了。这篇短文将会告诉你如何更换 Ubuntu 18.04 LTS 的 GDM 登录界面的背景。 + +### 更换 Ubuntu 的登录界面背景 + +这是 Ubuntu 18.04 LTS 桌面系统默认的登录界面。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-1.png) + +不管你喜欢与否,你总是会不经意在登录、解屏/锁屏的时面对它。别担心!你可以随便更换一个你喜欢的图片。 + +在 Ubuntu 上更换桌面壁纸和用户的资料图像不难。我们可以点击鼠标就搞定了。但更换解屏/锁屏的背景则需要修改文件 `ubuntu.css`,它位于 `/usr/share/gnome-shell/theme`。 + +修改这个文件之前,最好备份一下它。这样我们可以避免出现问题时可以恢复它。 + +``` +$ sudo cp /usr/share/gnome-shell/theme/ubuntu.css /usr/share/gnome-shell/theme/ubuntu.css.bak +``` + +修改文件 `ubuntu.css`: + +``` +$ sudo nano /usr/share/gnome-shell/theme/ubuntu.css +``` + +在文件中找到关键字 `lockDialogGroup`,如下行: + +``` +#lockDialogGroup { + background: #2c001e url(resource:///org/gnome/shell/theme/noise-texture.png); + background-repeat: repeat; +} +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/ubuntu_css.png) + +可以看到,GDM 默认登录的背景图片是 `noise-texture.png`。 + +现在修改为你自己的图片路径。也可以选择 .jpg 或 .png 格式的文件,两种格式的图片文件都是支持的。修改完成后的文件内容如下: + +``` +#lockDialogGroup { + background: #2c001e url(file:///home/sk/image.png); + background-repeat: no-repeat; + background-size: cover; + background-position: center; +} +``` + +请注意 `ubuntu.css` 文件里这个关键字的修改,我把修改点加粗了。 + +你可能注意到,我把原来的 `... url(resource:///org/gnome/shell/theme/noise-texture.png);` 修改为 `... url(file:///home/sk/image.png);`。也就是说,你可以把 `... url(resource ...` 修改为 `.. url(file ...`。 + +同时,你可以把参数 `background-repeat:` 的值 `repeat` 修改为 `no-repeat`,并增加另外两行。你可以直接复制上面几行的修改到你的 `ubuntu.css` 文件,对应的修改为你的图片路径。 + +修改完成后,保存和关闭此文件。然后系统重启生效。 + +下面是 GDM 登录界面的最新背景图片: + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-2.png) + +是不是很酷,你都看到了,更换 GDM 登录的默认背景很简单。你只需要修改 `ubuntu.css` 文件中图片的路径然后重启系统。是不是很简单也很有意思. 
+ +你可以修改 `/usr/share/gnome-shell/theme` 目录下的文件 `gdm3.css` ,具体修改内容和修改结果和上面一样。同时记得修改前备份要修改的文件。 + +就这些了。如果有好的东东再分享了,请大家关注! + +后会有期。 + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-change-gdm-login-screen-background-in-ubuntu/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[Guevaraya](https://github.com/guevaraya) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 diff --git a/published/201811/20181126 How to use multiple programming languages without losing your mind.md b/published/201811/20181126 How to use multiple programming languages without losing your mind.md new file mode 100644 index 0000000000..bbb310fa4e --- /dev/null +++ b/published/201811/20181126 How to use multiple programming languages without losing your mind.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: (heguangzhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: subject: (How to use multiple programming languages without losing your mind) +[#]: via: (https://opensource.com/article/18/11/multiple-programming-languages) +[#]: author: (Bart Copeland https://opensource.com/users/bartcopeland) +[#]: url: (https://linux.cn/article-10291-1.html) + +如何使用多种编程语言而又不失理智 +====== + +> 多语言编程环境是一把双刃剑,既带来好处,也带来可能威胁组织的复杂性。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_programming_languages.jpg?itok=KJcdnXM2) + +如今,随着各种不同的编程语言的出现,许多组织已经变成了数字多语种组织digital polyglots。开源打开了一个语言和技术堆栈的世界,开发人员可以使用这些语言和技术堆栈来完成他们的任务,包括开发、支持过时的和现代的软件应用。 + +与那些只说母语的人相比,通晓多种语言的人可以与数百万人交谈。在软件环境中,开发人员不会引入新的语言来达到特定的目的,也不会更好地交流。一些语言对于一项任务来说很棒,但是对于另一项任务来说却不行,因此使用多种编程语言可以让开发人员使用合适的工具来完成这项任务。这样,所有的开发都是多语种的;这只是野兽的本性。 + +多语种环境的创建通常是渐进的和情景化的。例如,当一家企业收购一家公司时,它就承担了该公司的技术堆栈 —— 包括其编程语言。或者,随着技术领导的改变,新的领导者可能会将不同的技术纳入其中。技术也有过时的时候,随着时间的推移,增加了组织必须维护的编程语言和技术的数量。 + +多语言环境对企业来说是一把双刃剑,既带来好处,也带来复杂性和挑战。最终,如果这种情况得不到控制,多语言将会扼杀你的企业。 + +### 棘手的技术绕口令 + +如果有多种不同的技术 —— 编程语言、过时的工具和新兴的技术堆栈 —— 就有复杂性。工程师团队花更多的时间努力改进编程语言,包括许可证、安全性和依赖性。与此同时,管理层缺乏对代码合规性的监督,无法衡量风险。 + +发生的情况是,企业具有不同程度的编程语言质量和工具支持的高度可变性。当你需要和十几个人一起工作时,很难成为一种语言的专家。一个能流利地说法语和意大利语的人和一个能用八种语言串成几个句子的人在技能水平上有很大差异。开发人员和编程语言也是如此。 + +随着更多编程语言的加入,困难只会增加,导致数字巴别塔的出现。 + +答案是不要拿走开发人员工作所需的工具。添加新的编程语言可以建立他们的技能基础,并为他们提供合适的设备来完成他们的工作。所以,你想对你的开发者说“是”,但是随着越来越多的编程语言被添加到企业中,它们会拖累你的软件开发生命周期(SDLC)。在规模上,所有这些语言和工具都可能扼杀企业。 + +企业应注意三个主要问题: + +1. **可见性:** 团队聚在一起执行项目,然后解散。应用程序已经发布,但从未更新 —— 为什么要修复那些没有被破坏的东西?因此,当发现一个关键漏洞时,企业可能无法了解哪些应用程序受到影响,这些应用程序包含哪些库,甚至无法了解它们是用什么语言构建的。这可能导致成本高昂的“勘探项目”,以确保漏洞得到适当解决。 + +2. **更新或编码:** 一些企业将更新和修复功能集中在一个团队中。其他人要求每个“比萨团队”管理自己的开发工具。无论是哪种情况,工程团队和管理层都要付出机会成本:这些团队没有编码新特性,而是不断更新和修复开源工具中的库,因为它们移动得如此之快。 + +3. **重新发明轮子:** 由于代码依赖性和库版本不断更新,当发现漏洞时,与应用程序原始版本相关联的工件可能不再可用。因此,许多开发周期都被浪费在试图重新创建一个可以修复漏洞的环境上。 + +将你组织中的每种编程语言乘以这三个问题,开始时被认为是分子一样小的东西突然看起来像珠穆朗玛峰。就像登山者一样,没有合适的设备和工具,你将无法生存。 + +### 找到你的罗塞塔石碑 + +一个全面的解决方案可以满足 SDLC 中企业及其个人利益相关者的需求。企业可以使用以下最佳实践创建解决方案: + + 1. 监控生产中运行的代码,并根据应用程序中使用的标记组件(例如,常见漏洞和暴露组件)的风险做出响应。 + 2. 定期接收更新以保持代码的最新和无错误。 + 3. 使用商业开源支持来获得编程语言版本和平台的帮助,这些版本和平台已经接近尾声,并且不受社区支持。 + 4. 标准化整个企业中的特定编程语言构建,以实现跨团队的一致环境,并最大限度地减少依赖性。 + 5. 根据相关性设置何时触发更新、警报或其他类型事件的阈值。 + 6. 为您的包管理创建一个单一的可信来源;这可能需要知识渊博的技术提供商的帮助。 + 7. 
根据您的特定标准,只使用您需要的软件包获得较小的构建版本。 + +使用这些最佳实践,开发人员可以最大限度地利用他们的时间为企业创造更多价值,而不是执行基本的工具或构建工程任务。这将在软件开发生命周期(SDLC)的所有环境中创建代码一致性。由于维护编程语言和软件包分发所需的资源更少,这也将提高效率和节约成本。这种新的操作方式将使技术人员和管理人员的生活更加轻松。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/multiple-programming-languages + +作者:[Bart Copeland][a] +选题:[lujun9972][b] +译者:[heguangzhi](https://github.com/heguangzhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bartcopeland +[b]: https://github.com/lujun9972 diff --git a/published/201812/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md b/published/201812/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md new file mode 100644 index 0000000000..0f51e0e7a9 --- /dev/null +++ b/published/201812/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md @@ -0,0 +1,401 @@ +学习 Linux/*BSD/Unix 的 30 个最佳在线文档 +====== + +手册页(man)是由系统管理员和 IT 技术开发人员写的,更多的是为了作为参考而不是教你如何使用。手册页对于已经熟悉使用 Linux、Unix 和 BSD 操作系统的人来说是非常有用的。如果你仅仅需要知道某个命令或者某个配置文件的格式那么你可以使用手册页,但是手册页对于 Linux 新手来说并没有太大的帮助。想要通过使用手册页来学习一些新东西不是一个好的选择。这里有将提供 30 个学习 Linux 和 Unix 操作系统的最佳在线网页文档。 + +![Dennis Ritchie and Ken Thompson working with UNIX PDP11][1] + +值得一提的是,相对于 Linux,BSD 的手册页更好。 + +### #1:Red Hat Enterprise Linux(RHEL) + +![Red hat Enterprise Linux 文档][2] + +RHEL 是由红帽公司开发的面向商业市场的 Linux 发行版。红帽的文档是最好的文档之一,涵盖从 RHEL 的基础到一些高级主题比如安全、SELinux、虚拟化、目录服务器、服务器集群、JBOSS 应用程序服务器、高可用性集群(HPC)等。红帽的文档已经被翻译成 22 种语言,发布成多页面 HTML、单页面 HTML、PDF、EPUB 等文件格式。好消息同样的文档你可以用于 Centos 和 Scientific Linux(社区企业发行版)。这些文档随操作系统一起下载提供,也就是说当你没有网络的时候,你也可以使用它们。RHEL 的文档**涵盖从安装到配置器群的所有内容**。唯一的缺点是你需要成为付费用户。当然这对于企业公司来说是一件完美的事。 + +1. RHEL 文档:[HTML/PDF格式][3](LCTT 译注:**此链接**需要付费用户才可以访问) +2. 是否支持论坛:只能通过红帽公司的用户网站提交支持案例。 + +#### 关于 CentOS Wiki 和论坛的说明 + +![Centos Linux Wiki][4] + +CentOS(社区企业操作系统Community ENTerprise Operating System)是由 RHEL 提供的自由源码包免费重建的。它为个人电脑或其它用途提供了可靠的、免费的企业级 Linux。你可以不用付出任何支持和认证费用就可以获得 RHEL 的稳定性。CentOS的 wiki 分为 Howto、技巧等等部分,链接如下: + +1. 文档:[wiki 格式][87] +2. 是否支持论坛:[是][88] + +### #2:Arch 的 Wiki 和论坛 + +![Arch Linux wiki 和教程][5] + +Arch linux 是一个独立开发的 Linux 操作系统,它有基于 wiki 网站形式的非常不错的文档。它是由 Arch 社区的一些用户共同协作开发出来的,并且允许任何用户添加或修改内容。这些文档教程被分为几类比如说优化、软件包管理、系统管理、X window 系统还有获取安装 Arch Linux 等。它的[官方论坛][7]在解决许多问题的时候也非常有用。它有总共 4 万多个注册用户、超过 1 百万个帖子。 该 wiki 包含一些 **其它 Linux 发行版也适用的通用信息**。 + +1. Arch 社区文档:[Wiki 格式][8] +2. 是否支持论坛:[是][7] + +### #3:Gentoo Linux Wiki 和论坛 + +![Gentoo Linux 手册和 Wiki][9] + +Gentoo Linux 基于 Portage 包管理系统。Gentoo Linux 用户根据它们选择的配置在本地编译源代码。多数 Gentoo Linux 用户都会定制自己独有的程序集。 Gentoo Linux 的文档会给你一些有关 Gentoo Linux 操作系统的说明和一些有关安装、软件包、网络和其它等主要出现的问题的解决方法。Gentoo 有对你来说 **非常有用的论坛**,论坛中有超过 13 万 4 千的用户,总共发了有 5442416 个文章。 + +1. Gentoo 社区文档:[手册][10] 和 [Wiki 格式][11] +2. 是否支持论坛:[是][12] + +### #4:Ubuntu Wiki 和文档 + +![Ubuntu Linux Wiki 和论坛][14] + +Ubuntu 是领先的台式机和笔记本电脑发行版之一。其官方文档由 Ubuntu 文档工程开发维护。你可以在从官方文档中查看大量的信息,比如如何开始使用 Ubuntu 的教程。最好的是,此处包含的这些信息也可用于基于 Debian 的其它系统。你可能会找到由 Ubuntu 的用户们创建的社区文档,这是一份有关 Ubuntu 的使用教程和技巧等。Ubuntu Linux 有着网络上最大的 Linux 社区的操作系统,它对新用户和有经验的用户均有助益。 + +1. Ubuntu 社区文档:[wiki 格式][15] +2. Ubuntu 官方文档:[wiki 格式][16] +3. 是否支持论坛:[是][17] + +### #5:IBM Developer Works + +![IBM: Linux 程序员和系统管理员用到的技术][18] + +IBM Developer Works 为 Linux 程序员和系统管理员提供技术资源,其中包含数以百计的文章、教程和技巧来协助 Linux 程序员的编程工作和应用开发还有系统管理员的日常工作。 + +1. IBM 开发者项目文档:[HTML 格式][19] +2. 
是否支持论坛:[是][20] + +### #6:FreeBSD 文档和手册 + +![Freebsd Documentation][21] + +FreeBSD 的手册是由 FreeBSD 文档项目FreeBSD Documentation Project所创建的,它介绍了 FreeBSD 操作系统的安装、管理和一些日常使用技巧等内容。FreeBSD 的手册页通常比 GNU Linux 的手册页要好一点。FreeBSD **附带有全部最新手册页的文档**。 FreeBSD 手册涵盖任何你想要的内容。手册包含一些通用的 Unix 资料,这些资料同样适用于其它的 Linux 发行版。FreeBSD 官方论坛会在你遇到棘手问题时给予帮助。 + +1. FreeBSD 文档:[HTML/PDF 格式][90] +2. 是否支持论坛:[是][91] + +### #7:Bash Hackers Wiki + +![Bash Hackers wiki][22] + +这是一个对于 bash 使用者来说非常好的资源。Bash 使用者的 wiki 是为了归纳所有类型的 GNU Bash 文档。这个项目的动力是为了提供可阅读的文档和资料来避免用户被迫一点一点阅读 Bash 的手册,有时候这是非常麻烦的。Bash Hackers Wiki 分为各个类,比如说脚本和通用资料、如何使用、代码风格、bash 命令格式和其它。 + +1. Bash 用户教程:[wiki 格式][23] + +### #8:Bash 常见问题 + +![Bash 常见问题:一些有关 GNU/BASH 常见问题的解决方法][24] + +这是一个为 bash 新手设计的一个 wiki。它收集了 IRC 网络的 #bash 频道里常见问题的解决方法,这些解决方法是由该频道的普通成员提供。当你遇到问题的时候不要忘了在 [BashPitfalls][25] 部分检索查找答案。这些常见问题的解决方法可能会倾向于 Bash,或者偏向于最基本的 Bourne Shell,这决定于是谁给出的答案。大多数情况会尽力提供可移植的(Bourne)和高效的(Bash,在适当情况下)的两类答案。 + +1. Bash 常见问题:[wiki 格式][26] + +### #9: Howtoforge - Linux 教程 + +![Howtoforge][27] + +博客作者 Falko 在 Howtoforge 上有一些非常不错的东西。这个网站提供了 Linux 关于各种各样主题的教程,比如说其著名的“最佳服务器系列”,网站将主题分为几类,比如说 web 服务器、linux 发行版、DNS 服务器、虚拟化、高可用性、电子邮件和反垃圾邮件、FTP 服务器、编程主题还有一些其它的内容。这个网站也支持德语。 + +1. Howtoforge: [html 格式][28] +2. 是否支持论坛:是 + +### #10:OpenBSD 常见问题和文档 + +![OpenBSD 文档][29] + +OpenBSD 是另一个基于 BSD 的类 Unix 计算机操作系统。OpenBSD 是由 NetBSD 项目分支而来。OpenBSD 因高质量的代码和文档、对软件许可协议的坚定立场和强烈关注安全问题而闻名。OpenBSD 的文档分为多个主题类别,比如说安装、包管理、防火墙设置、用户管理、网络、磁盘和磁盘阵列管理等。 + +1. OpenBSD:[html 格式][30] +2. 是否支持论坛:否,但是可以通过 [邮件列表][31] 来咨询 + +### #11: Calomel - 开源研究和参考文档 + +![开源研究和参考文档][32] + +这个极好的网站是专门作为开源软件和那些特别专注于 OpenBSD 的软件的文档来使用的。这是最简洁的引导网站之一,专注于高质量的内容。网站内容分为多个类,比如说 DNS、OpenBSD、安全、web 服务器、Samba 文件服务器、各种工具等。 + +1. Calomel 官网:[html 格式][33] +2. 是否支持论坛:否 + +### #12:Slackware 书籍项目 + +![Slackware Linux 手册和文档][34] + +Slackware Linux 是我的第一个 Linux 发行版。Slackware 是基于 Linux 内核的最早的发行版之一,也是当前正在维护的最古老的 Linux 发行版。 这个发行版面向专注于稳定性的高级用户。 Slackware 也是很少有的的“类 Unix” 的 Linux 发行版之一。官方的 Slackware 手册是为了让用户快速开始了解 Slackware 操作系统的使用方法而设计的。 这不是说它将包含发行版的每一个方面,而是为了说明它的实用性和给使用者一些有关系统的基础工作使用方法。手册分为多个主题,比如说安装、网络和系统配置、系统管理、包管理等。 + +1. Slackware Linux 手册:[html 格式][35]、pdf 和其它格式 +2. 是否支持论坛:是 + +### #13:Linux 文档项目(TLDP) + +![Linux 学习网站和文档][36] + +Linux 文档项目Linux Documentation Project旨在给 Linux 操作系统提供自由、高质量文档。网站是由志愿者创建和维护的。网站分为具体主题的帮助、由浅入深的指南等。在此我想推荐一个非常好的[文档][37],这个文档既是一个教程也是一个 shell 脚本编程的参考文档,对于新用户来说这个 HOWTO 的[列表][38]也是一个不错的开始。 + +1. Linux [文档工程][39] 支持多种查阅格式 +2. 是否支持论坛:否 + +### #14:Linux Home Networking + +![Linux Home Networking][40] + +Linux Home Networking 是学习 linux 的另一个比较好的资源,这个网站包含了 Linux 软件认证考试的内容比如 RHCE,还有一些计算机培训课程。网站包含了许多主题,比如说网络、Samba 文件服务器、无线网络、web 服务器等。 + +1. Linux [home networking][41] 可通过 html 格式和 PDF(少量费用)格式查阅 +2. 是否支持论坛:是 + +### #15:Linux Action Show + +![Linux 播客][42] + +Linux Action Show(LAS) 是一个关于 Linux 的播客。这个网站是由 Bryan Lunduke、Allan Jude 和 Chris Fisher 共同管理的。它包含了 FOSS 的最新消息。网站内容主要是评论一些应用程序和 Linux 发行版。有时候也会发布一些和开源项目著名人物的采访视频。 + +1. Linux [action show][43] 支持音频和视频格式 +2. 是否支持论坛:是 + +### #16:Commandlinefu + +![Commandlinefu 的最优 Unix / Linux 命令][45] + +Commandlinefu 列出了各种有用或有趣的 shell 命令。这里所有命令都可以评论、讨论和投票(支持或反对)。对于所有 Unix 命令行用户来说是一个极好的资源。不要忘了查看[评选出来的最佳命令][44]。 + + 1. [Commandlinefu][46] 支持 html 格式 + 2. 是否支持论坛:否 + +### #17:Debian 管理技巧和资源 + +![Debian Linux 管理: 系统管理员技巧和教程][48] + +这个网站包含一些只和 Debian GNU/Linux 相关的主题、技巧和教程,特别是包含了关于系统管理的有趣和有用的信息。你可以在上面贡献文章、建议和问题。提交了之后不要忘记查看[最佳文章列表][47]里有没有你的文章。 + +1. Debian [系统管理][49] 支持 html 格式 +2. 
是否支持论坛:否 + +### #18: Catonmat - Sed、Awk、Perl 教程 + +![Sed 流编辑器、 Awk 文本处理工具、 Perl 语言教程][50] + +这个网站是由博客作者 Peteris Krumins 维护的。主要关注命令行和 Unix 编程主题,比如说 sed 流编辑器、perl 语言、AWK 文本处理工具等。不要忘了查看 [sed 介绍][51]、sed 含义解释,还有命令行历史的[权威介绍][53]。 + +1. [catonmat][55] 支持 html 格式 +2. 是否支持论坛:否 + +### #19:Debian GNU/Linux 文档和 Wiki + +![Debian Linux 教程和 Wiki][56] + +Debian 是另外一个 Linux 操作系统,其主要使用的软件以 GNU 许可证发布。Debian 因严格坚持 Unix 和自由软件的理念而闻名,它也是很受欢迎并且有一定影响力的 Linux 发行版本之一。 Ubuntu 等发行版本都是基于 Debian 的。Debian 项目以一种易于访问的形式提供给用户合适的文档。这个网站分为 Wiki、安装指导、常见问题、支持论坛几个模块。 + +1. Debian GNU/Linux [文档][57] 支持 html 和其它格式访问 +2. Debian GNU/Linux [wiki][58] +3. 是否支持论坛:[是][59] + +### #20:Linux Sea + +Linux Sea 这本书提供了比较通俗易懂但充满技术(从最终用户角度来看)的 Linux 操作系统的介绍,使用 Gentoo Linux 作为例子。它既没有谈论 Linux 内核或 Linux 发行版的历史,也没有谈到 Linux 用户不那么感兴趣的细节。 + +1. Linux [sea][60] 支持 html 格式访问 +2. 是否支持论坛: 否 + +### #21:O'reilly Commons + +![免费 Linux / Unix / Php / Javascript / Ubuntu 学习笔记][61] + +O'reilly 出版社发布了不少 wiki 格式的文章。这个网站主要是为了给那些喜欢创作、参考、使用、修改、更新和修订来自 O'Reilly 或者其它来源的素材的社区提供资料。这个网站包含关于 Ubuntu、PHP、Spamassassin、Linux 等的免费书籍。 + +1. Oreilly [commons][62] 支持 Wiki 格式 +2. 是否支持论坛:否 + +### #22:Ubuntu 袖珍指南 + +![Ubuntu 新手书籍][63] + +这本书的作者是 Keir Thomas。这本指南(或者说是书籍)对于所有 ubuntu 用户来说都值得一读。这本书旨在向用户介绍 Ubuntu 操作系统和其所依赖的理念。你可以从官网下载这本书的 PDF 版本,也可以在亚马逊买印刷版。 + +1. Ubuntu [pocket guide][64] 支持 PDF 和印刷版本. +2. 是否支持论坛:否 + +### #23: Linux: Rute User's Tutorial and Exposition + +![GNU/LINUX system administration book][65] + +这本书涵盖了 GNU/LINUX 系统管理,主要是对主流的发布版本比如红帽和 Debian 的说明,可以作为新用户的教程和高级管理员的参考。这本书旨在给出 Unix 系统的每个面的简明彻底的解释和实践性的例子。想要全面了解 Linux 的人都不需要再看了 —— 这里没有涉及的内容。 + +1. Linux: [Rute User's Tutorial and Exposition][66] 支持印刷版和 html 格式 +2. 是否支持论坛:否 + +### #24:高级 Linux 编程 + +![高级 Linux 编程][67] + +这本书是写给那些已经熟悉了 C 语言编程的程序员的。这本书采取一种教程式的方式来讲述大多数在 GNU/Linux 系统应用编程中重要的概念和功能特性。如果你是一个已经对 GNU/Linux 系统编程有一定经验的开发者,或者是对其它类 Unix 系统编程有一定经验的开发者,或者对 GNU/Linux 软件开发有兴趣,或者想要从非 Unix 系统环境转换到 Unix 平台并且已经熟悉了优秀软件的开发原则,那你很适合读这本书。另外,你会发现这本书同样适合于 C 和 C++ 编程。 + +1. [高级 Linux 编程][68] 支持印刷版和 PDF 格式 +2. 是否支持论坛:否 + +### #25: LPI 101 Course Notes + +![Linux 国际专业协会认证书籍][69] + +LPIC 1、2、3 级是用于 Linux 系统管理员认证的。这个网站提供了 LPI 101 和 LPI 102 的测试训练。这些是根据 GNU 自由文档协议GNU Free Documentation Licence(FDL)发布的。这些课程材料基于 Linux 国际专业协会的 LPI 101 和 102 考试的目标。这个课程是为了提供给你一些必备的 Linux 系统的操作和管理的技能。 + +1. LPI [训练手册][70] 支持 PDF 格式 +2. 是否支持论坛:否 + +### #26: FLOSS 手册 + +![FLOSS Manuals is a collection of manuals about free and open source software][72] + +FLOSS 手册是一系列关于自由和开源软件以及用于创建它们的工具和使用这些工具的社区的手册。社区的成员包含作者、编辑、设计师、软件开发者、积极分子等。这些手册中说明了怎样安装使用一些自由和开源软件,如何操作(比如设计和维持在线安全)开源软件,这其中也包含如何使用或支持自由软件和格式的自由文化服务手册。你也会发现关于一些像 VLC、 [Linux 视频编辑][71]、 Linux、 OLPC / SUGAR、 GRAPHICS 等软件的手册。 + +1. 你可以浏览 [FOSS 手册][73] 支持 Wiki 格式 +2. 是否支持论坛:否 + +### #27:Linux 入门包 + +![Linux 入门包][74] + +刚接触 Linux 这个美好世界?想找一个简单的入门方式?你可以下载一个 130 页的指南来入门。这个指南会向你展示如何在你的个人电脑上安装 Linux,如何浏览桌面,掌握最主流行的 Linux 程序和修复可能出现的问题的方法。 + +1. [Linux 入门包][75]支持 PDF 格式 +2. 是否支持论坛:否 + +### #28:Linux.com - Linux 信息来源 + +Linux.com 是 Linux 基金会的一个产品。这个网站上提供一些新闻、指南、教程和一些关于 Linux 的其它信息。利用全球 Linux 用户的力量来通知、写作、连接 Linux 的事务。 + +1. 在线访问 [Linux.com][76] +2. 是否支持论坛:是 + +### #29: LWN + +LWN 是一个注重自由软件及用于 Linux 和其它类 Unix 操作系统的软件的网站。这个网站有周刊、基本上每天发布的单独文章和文章的讨论对话。该网站提供有关 Linux 和 FOSS 相关的开发、法律、商业和安全问题的全面报道。 + +1. 在线访问 [lwn.net][77] +2. 
是否支持论坛:否 + +### #30:Mac OS X 相关网站 + +与 Mac OS X 相关网站的快速链接: + +* [Mac OS X 提示][78] —— 这个网站专用于苹果的 Mac OS X Unix 操作系统。网站有很多有关 Bash 和 Mac OS X 的使用建议、技巧和教程 +* [Mac OS 开发库][79] —— 苹果拥有大量和 OS X 开发相关的优秀系列内容。不要忘了看一看 [bash shell 脚本入门][80] +* [Apple 知识库][81] - 这个有点像 RHN 的知识库。这个网站提供了所有苹果产品包括 OS X 相关的指南和故障报修建议。 + +### #30: NetBSD + +(LCTT 译注:没错,又一个 30) + +NetBSD 是另一个基于 BSD Unix 操作系统的自由开源操作系统。NetBSD 项目专注于系统的高质量设计、稳定性和性能。由于 NetBSD 的可移植性和伯克利式的许可证,NetBSD 常用于嵌入式系统。这个网站提供了一些 NetBSD 官方文档和各种第三方文档的链接。 + +1. 在线访问 [netbsd][82] 文档,支持 html、PDF 格式 +2. 是否支持论坛:否 + +### 你要做的事 + +这是我的个人列表,这可能并不完全是权威的,因此如果你有你自己喜欢的独特 Unix/Linux 网站,可以在下方参与评论分享。 + +// 图片来源: [Flickr photo][83] PanelSwitchman。一些连接是用户在我们的 Facebook 粉丝页面上建议添加的。 + +### 关于作者 + +作者是 nixCraft 的创建者和经验丰富的系统管理员以及 Linux 操作系统 / Unix shell 脚本的培训师。它曾与全球客户及各行各业合作,包括 IT、教育,国防和空间研究以及一些非营利部门。可以关注作者的 [Twitter][84]、[Facebook][85]、[Google+][86]。 + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/tips/linux-unix-bsd-documentations.html + +作者:[Vivek Gite][a] +译者:[ScarboroughCoral](https://github.com/ScarboroughCoral) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/media/new/tips/2011/12/unix-pdp11.jpg "Dennis Ritchie and Ken Thompson working with UNIX PDP11" +[2]:https://www.cyberciti.biz/media/new/tips/2011/12/redhat-enterprise-linux-docs.png "Red hat Enterprise Linux Docs" +[3]:https://access.redhat.com/documentation/en-us/ +[4]:https://www.cyberciti.biz/media/new/tips/2011/12/centos-linux-wiki.png "Centos Linux Wiki, Support, Documents" +[5]:https://www.cyberciti.biz/media/new/tips/2011/12/arch-linux-wiki.png "Arch Linux wiki and tutorials " +[6]:https://wiki.archlinux.org/index.php/Category:Networking_%28English%29 +[7]:https://bbs.archlinux.org/ +[8]:https://wiki.archlinux.org/ +[9]:https://www.cyberciti.biz/media/new/tips/2011/12/gentoo-linux-wiki1.png "Gentoo Linux Handbook and Wiki" +[10]:http://www.gentoo.org/doc/en/handbook/ +[11]:https://wiki.gentoo.org +[12]:https://forums.gentoo.org/ +[13]:http://gentoo-wiki.com +[14]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-linux-wiki.png "Ubuntu Linux Wiki and Forums" +[15]:https://help.ubuntu.com/community +[16]:https://help.ubuntu.com/ +[17]:https://ubuntuforums.org/ +[18]:https://www.cyberciti.biz/media/new/tips/2011/12/ibm-devel.png "IBM: Technical for Linux programmers and system administrators" +[19]:https://www.ibm.com/developerworks/learn/linux/index.html +[20]:https://www.ibm.com/developerworks/community/forums/html/public?lang=en +[21]:https://www.cyberciti.biz/media/new/tips/2011/12/freebsd-docs.png "Freebsd Documentation" +[22]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-hackers-wiki.png "Bash hackers wiki for bash users" +[23]:http://wiki.bash-hackers.org/doku.php +[24]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-faq.png "Bash FAQ: Answers to frequently asked questions about GNU/BASH" +[25]:http://mywiki.wooledge.org/BashPitfalls +[26]:https://mywiki.wooledge.org/BashFAQ +[27]:https://www.cyberciti.biz/media/new/tips/2011/12/howtoforge.png "Howtoforge tutorials" +[28]:https://howtoforge.com/ +[29]:https://www.cyberciti.biz/media/new/tips/2011/12/openbsd-faq.png "OpenBSD Documenation" +[30]:https://www.openbsd.org/faq/index.html +[31]:https://www.openbsd.org/mail.html +[32]:https://www.cyberciti.biz/media/new/tips/2011/12/calomel_org.png "Open Source Research and Reference 
Documentation" +[33]:https://calomel.org +[34]:https://www.cyberciti.biz/media/new/tips/2011/12/slackware-linux-book.png "Slackware Linux Book and Documentation " +[35]:http://www.slackbook.org/ +[36]:https://www.cyberciti.biz/media/new/tips/2011/12/tldp.png "Linux Learning Site and Documentation " +[37]:http://tldp.org/LDP/abs/html/index.html +[38]:http://tldp.org/HOWTO/HOWTO-INDEX/howtos.html +[39]:http://tldp.org/ +[40]:https://www.cyberciti.biz/media/new/tips/2011/12/linuxhomenetworking.png "Linux Home Networking " +[41]:http://www.linuxhomenetworking.com/ +[42]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-action-show.png "Linux Podcast " +[43]:http://www.jupiterbroadcasting.com/show/linuxactionshow/ +[44]:https://www.commandlinefu.com/commands/browse/sort-by-votes +[45]:https://www.cyberciti.biz/media/new/tips/2011/12/commandlinefu.png "The best Unix / Linux Commands " +[46]:https://commandlinefu.com/ +[47]:https://www.debian-administration.org/hof +[48]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-admin.png "Debian Linux Adminstration: Tips and Tutorial For Sys Admin" +[49]:https://www.debian-administration.org/ +[50]:https://www.cyberciti.biz/media/new/tips/2011/12/catonmat.png "Sed, Awk, Perl Tutorials" +[51]:http://www.catonmat.net/blog/worlds-best-introduction-to-sed/ +[52]:https://www.catonmat.net/blog/sed-one-liners-explained-part-one/ +[53]:https://www.catonmat.net/blog/the-definitive-guide-to-bash-command-line-history/ +[54]:https://www.catonmat.net/blog/awk-one-liners-explained-part-one/ +[55]:https://catonmat.net/ +[56]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-wiki.png "Debian Linux Tutorials and Wiki" +[57]:https://www.debian.org/doc/ +[58]:https://wiki.debian.org/ +[59]:https://www.debian.org/support +[60]:http://swift.siphos.be/linux_sea/ +[61]:https://www.cyberciti.biz/media/new/tips/2011/12/orelly.png "Oreilly Free Linux / Unix / Php / Javascript / Ubuntu Books" +[62]:http://commons.oreilly.com/wiki/index.php/O%27Reilly_Commons +[63]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-guide.png "Ubuntu Book For New Users" +[64]:http://ubuntupocketguide.com/ +[65]:https://www.cyberciti.biz/media/new/tips/2011/12/rute.png "GNU/LINUX system administration free book" +[66]:https://web.archive.org/web/20160204213406/http://rute.2038bug.com/rute.html.gz +[67]:https://www.cyberciti.biz/media/new/tips/2011/12/advanced-linux-programming.png "Download Advanced Linux Programming PDF version" +[68]:https://github.com/MentorEmbedded/advancedlinuxprogramming +[69]:https://www.cyberciti.biz/media/new/tips/2011/12/lpic.png "Download Linux Professional Institute Certification PDF Book" +[70]:http://academy.delmar.edu/Courses/ITSC1358/eBooks/LPI-101.LinuxTrainingCourseNotes.pdf +[71]://www.cyberciti.biz/faq/top5-linux-video-editing-system-software/ +[72]:https://www.cyberciti.biz/media/new/tips/2011/12/floss-manuals.png "Download manuals about free and open source software" +[73]:https://flossmanuals.net/ +[74]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-starter.png "New to Linux? 
Start Linux starter book [ PDF version ]" +[75]:http://www.tuxradar.com/linuxstarterpack +[76]:https://linux.com +[77]:https://lwn.net/ +[78]:http://hints.macworld.com/ +[79]:https://developer.apple.com/library/mac/navigation/ +[80]:https://developer.apple.com/library/mac/#documentation/OpenSource/Conceptual/ShellScripting/Introduction/Introduction.html +[81]:https://support.apple.com/kb/index?page=search&locale=en_US&q= +[82]:https://www.netbsd.org/docs/ +[83]:https://www.flickr.com/photos/9479603@N02/3311745151/in/set-72157614479572582/ +[84]:https://twitter.com/nixcraft +[85]:https://facebook.com/nixcraft +[86]:https://plus.google.com/+CybercitiBiz +[87]:https://wiki.centos.org/ +[88]:https://www.centos.org/forums/ +[90]: https://www.freebsd.org/docs.html +[91]: https://forums.freebsd.org/ diff --git a/published/201812/20171012 7 Best eBook Readers for Linux.md b/published/201812/20171012 7 Best eBook Readers for Linux.md new file mode 100644 index 0000000000..346eed6bb6 --- /dev/null +++ b/published/201812/20171012 7 Best eBook Readers for Linux.md @@ -0,0 +1,188 @@ +7 个最佳 Linux 电子书阅读器 +====== + +**摘要:** 本文中我们涉及一些 Linux 最佳电子书阅读器。这些应用提供更佳的阅读体验甚至可以管理你的电子书。 + +![最佳 Linux 电子书阅读器][1] + +最近,随着人们发现在手持设备、Kindle 或者 PC 上阅读更加舒适,对电子图书的需求有所增加。至于 Linux 用户,也有各种电子书应用满足你阅读和整理电子书的需求。 + +在本文中,我们选出了七个最佳 Linux 电子书阅读器。这些电子书阅读器最适合 pdf、epub 和其他电子书格式。 + +我提供的是 Ubuntu 安装说明,因为我现在使用它。如果你使用的是[非 Ubuntu 发行版][2],你能在你的发行版软件仓库中找到大多数这些电子书应用。 + +### 1. Calibre + +[Calibre][3] 是 Linux 最受欢迎的电子书应用。老实说,这不仅仅是一个简单的电子书阅读器。它是一个完整的电子书解决方案。你甚至能[通过 Calibre 创建专业的电子书][4]。 + +通过强大的电子书管理和易用的界面,它提供了创建和编辑电子书的功能。Calibre 支持多种格式和与其它电子书阅读器同步。它也可以让你轻松转换一种电子书格式到另一种。 + +Calibre 最大的缺点是,资源消耗太多,因此作为一个独立的电子阅读器来说是一个艰难的选择。 + +![Calibre][5] + +#### 特性 + + * 管理电子书:Calibre 通过管理元数据来排序和分组电子书。你能从各种来源下载一本电子书的元数据或创建和编辑现有的字段。 + * 支持所有主流电子书格式:Calibre 支持所有主流电子书格式并兼容多种电子阅读器。 + * 文件转换:在转换时,你能通过改变电子书风格,创建内容表和调整边距的选项来转换任何一种电子书格式到另一种。你也能转换个人文档为电子书。 + * 从 web 下载杂志期刊:Calibre 能从各种新闻源或者通过 RSS 订阅源传递故事。 + * 分享和备份你的电子图书馆:它提供了一个选项,可以托管你电子书集合到它的服务端,从而你能与好友共享或用任何设备从任何地方访问。备份和导入/导出特性可以确保你的收藏安全和方便携带。 + +#### 安装 + +你能在主流 Linux 发行版的软件库中找到它。对于 Ubuntu,在软件中心搜索它或者使用下面的命令: + +``` +sudo apt-get install calibre +``` + +### 2. FBReader + +![FBReader: Linux 电子书阅读器][6] + +[FBReader][7] 是一个开源的轻量级多平台电子书阅读器,它支持多种格式,比如 ePub、fb2、mobi、rtf、html 等。它包括了一些可以访问的流行网络电子图书馆,那里你能免费或付费下载电子书。 + +#### 特性 + + * 支持多种文件格式和设备比如 Android、iOS、Windows、Mac 和更多。 + * 同步书集、阅读位置和书签。 + * 在线管理你图书馆,可以从你的 Linux 桌面添加任何书到所有设备。 + * 支持 Web 浏览器访问你的书集。 + * 支持将书籍存储在 Google Drive ,可以通过作者,系列或其他属性整理书籍。 + +#### 安装 + +你能从官方库或者在终端中输入以下命令安装 FBReader 电子阅读器。 + +``` +sudo apt-get install fbreader +``` + +或者你能从[这里][8]抓取一个以 .deb 包,并在你的基于 Debian 发行版的系统上安装它。 + +### 3. Okular + +[Okular][9] 是另一个开源的基于 KDE 开发的跨平台文档查看器,它已经作为 KDE 应用发布的一部分了。 + +![Okular][10] + +#### 特性 + + * Okular 支持多种文档格式像 PDF、Postscript、DjVu、CHM、XPS、ePub 和其他。 + * 支持在 PDF 文档中评论、高亮和绘制不同的形状等。 + * 无需修改原始 PDF 文件,分别保存上述这些更改。 + * 电子书中的文本能被提取到一个文本文件,并且有个名为 Jovie 的内置文本阅读服务。 + +备注:查看这个应用的时候,我发现这个应用在 Ubuntu 和它的衍生系统中不支持 ePub 文件格式。其他发行版用户仍然可以发挥它全部的潜力。 + +#### 安装 + +Ubuntu 用户可以在终端中键入下面的命令来安装它: + +``` +sudo apt-get install okular +``` + +### 4. Lucidor + +Lucidor 是一个易用的、支持 epub 文件格式和在 OPDS 格式中编目的电子阅读器。它也具有在本地书架里组织电子书集、从互联网搜索和下载,和将 Web 订阅和网页转换成电子书的功能。 + +Lucidor 是 XULRunner 应用程序,它向您展示了具有类似火狐的选项卡式布局,和存储数据和配置时的行为。它是这个列表中最简单的电子阅读器,包括诸如文本说明和滚动选项之类的配置。 + +![lucidor][11] + +你可以通过选择单词并右击“查找单词”来查找该单词在 Wiktionary.org 的定义。它也包含 web 订阅或 web 页面作为电子书的选项。 + +你能从[这里][12]下载和安装 deb 或者 RPM 包。 + +### 5. 
Bookworm + +![Bookworm Linux 电子阅读器][13] + +Bookworm 是另一个支持多种文件格式诸如 epub、pdf、mobi、cbr 和 cbz 的自由开源的电子阅读器。我写了一篇关于 Bookworm 应用程序的特性和安装的专题文章,到这里阅读:[Bookworm:一个简单而强大的 Linux 电子阅读器][14] + +#### 安装 + +``` +sudo apt-add-repository ppa:bookworm-team/bookworm +sudo apt-get update +sudo apt-get install bookworm +``` + +### 6. Easy Ebook Viewer + +[Easy Ebook Viewer][15] 是又一个用于读取 ePub 文件的很棒的 GTK Python 应用。具有基本章节导航、从上次阅读位置继续、从其他电子书文件格式导入、章节跳转等功能,Easy Ebook Viewer 是一个简单而简约的 ePub 阅读器. + +![Easy-Ebook-Viewer][16] + +这个应用仍然处于初始阶段,只支持 ePub 文件。 + +#### 安装 + +你可以从 [GitHub][17] 下载源代码,并自己编译它及依赖项来安装 Easy Ebook Viewer。或者,以下终端命令将执行完全相同的工作。 + +``` +sudo apt install git gir1.2-webkit-3.0 libwebkitgtk-3.0-0 gir1.2-gtk-3.0 python3-gi +git clone https://github.com/michaldaniel/Ebook-Viewer.git +cd Ebook-Viewer/ +sudo make install +``` + +成功完成上述步骤后,你可以从 Dash 启动它。 + +### 7. Buka + +Buka 主要是一个具有简单而清爽的用户界面的电子书管理器。它目前支持 PDF 格式,旨在帮助用户更加关注内容。拥有 PDF 阅读器的所有基本特性,Buka 允许你通过箭头键导航,具有缩放选项,并且能并排查看两页。 + +你可以创建单独的 PDF 文件列表并轻松地在它们之间切换。Buka 也提供了一个内置翻译工具,但是你需要有效的互联网连接来使用这个特性。 + +![Buka][19] + +#### 安装 + +你能从[官方下载页面][20]下载一个 AppImage。如果你不知道如何做,请阅读[如何在 Linux 下使用 AppImage][21]。或者,你可以通过命令行安装它: + +``` +sudo snap install buka +``` + +### 结束语 + +就我个人而言,我发现 Calibre 最适合我的需要。当然,Bookworm 看起来很有前途,这几天我经常使用它。不过,电子书应用的选择完全取决于你的喜好。 + +你使用哪个电子书应用呢?在下面的评论中让我们知道。 + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-ebook-readers-linux/ + +作者:[Ambarish Kumar][a] +译者:[zjon](https://github.com/zjon) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/ambarish/ +[1]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/10/best-ebook-readers-linux.png +[2]:https://itsfoss.com/non-ubuntu-beginner-linux/ +[3]:https://www.calibre-ebook.com +[4]:https://itsfoss.com/create-ebook-calibre-linux/ +[5]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Calibre-800x603.jpeg +[6]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/10/fbreader-800x624.jpeg +[7]:https://fbreader.org +[8]:https://fbreader.org/content/fbreader-beta-linux-desktop +[9]:https://okular.kde.org/ +[10]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Okular-800x435.jpg +[11]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/lucidor-2.png +[12]:http://lucidor.org/lucidor/download.php +[13]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/08/bookworm-ebook-reader-linux-800x450.jpeg +[14]:https://itsfoss.com/bookworm-ebook-reader-linux/ +[15]:https://github.com/michaldaniel/Ebook-Viewer +[16]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Easy-Ebook-Viewer.jpg +[17]:https://github.com/michaldaniel/Ebook-Viewer.git +[18]:https://github.com/oguzhaninan/Buka +[19]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Buka2-800x555.png +[20]:https://github.com/oguzhaninan/Buka/releases +[21]:https://itsfoss.com/use-appimage-linux/ diff --git a/published/201812/20171108 Continuous infrastructure- The other CI.md b/published/201812/20171108 Continuous infrastructure- The other CI.md new file mode 100644 index 0000000000..67a35c7c3d --- /dev/null +++ b/published/201812/20171108 Continuous infrastructure- The other CI.md @@ -0,0 +1,108 @@ +持续基础设施:另一个 CI +====== + +> 想要提升你的 DevOps 效率吗?将基础设施当成你的 CI 流程中的重要的一环。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_darwincloud_520x292_0311LL.png?itok=74DLgd8Q) + +持续交付(CD)和持续集成(CI)是 DevOps 的两个众所周知的方面。但在 CI 
大肆流行的今天却忽略了另一个关键性的 "I":基础设施infrastructure。 + +曾经有一段时间 “基础设施”就意味着无头headless的黑盒子、庞大的服务器,和高耸的机架 —— 更不用说漫长的采购流程和对盈余负载的错误估计。后来到了虚拟机时代,把基础设施处理得很好,虚拟化 —— 以前的世界从未有过这样。我们不再需要管理实体的服务器。仅仅是简单的点击,我们就可以创建和销毁、开始和停止、升级和降级我们的服务器。 + +有一个关于银行的流行故事:它们实现了数字化,并且引入了在线表格,用户需要手动填写表格、打印,然后邮寄回银行(LCTT 译注:我真的遇到过有人问我这样的需求怎么办)。这就是我们今天基础设施遇到的情况:使用新技术来做和以前一样的事情。 + +在这篇文章中,我们会看到在基础设施管理方面的进步,将基础设施视为一个版本化的组件并试着探索不可变服务器immutable server的概念。在后面的文章中,我们将了解如何使用开源工具来实现持续的基础设施。 + +![continuous infrastructure pipeline][2] + +*实践中的持续集成流程* + +这是我们熟悉的 CI,尽早发布、经常发布的循环流程。这个流程缺少一个关键的组件:基础设施。 + +突击小测试: + +* 你怎样创建和升级你的基础设施? +* 你怎样控制和追溯基础设施的改变? +* 你的基础设施是如何与你的业务进行匹配的? +* 你是如何确保在正确的基础设施配置上进行测试的? + +要回答这些问题,就要了解持续基础设施continuous infrastructure。把 CI 构建流程分为代码持续集成continuous integration code(CIc)和基础设施持续集成continuous integration infrastructure(CIi)来并行开发和构建代码和基础设施,再将两者融合到一起进行测试。把基础设施构建视为 CI 流程中的重要的一环。 + +![pipeline with infrastructure][4] + +*包含持续基础设施的 CI 流程* + +关于 CIi 定义的几个方面: + +1. 代码 + + 通过代码来创建基础设施架构,而不是通过安装。基础设施如代码Infrastructure as code(IaC)是使用配置脚本创建基础设施的现代最流行的方法。这些脚本遵循典型的编码和单元测试周期(请参阅下面关于 Terraform 脚本的示例)。 +2. 版本 + + IaC 组件在源码仓库中进行版本管理。这让基础设施的拥有了版本控制的所有好处:一致性,可追溯性,分支和标记。 +3. 管理 + + 通过编码和版本化的基础设施管理,你可以使用你所熟悉的测试和发布流程来管理基础设施的开发。 + +CIi 提供了下面的这些优势: + +1. 一致性Consistency + + 版本化和标记化的基础设施意味着你可以清楚的知道你的系统使用了哪些组件和配置。这建立了一个非常好的 DevOps 实践,用来鉴别和管理基础设施的一致性。 +2. 可重现性Reproducibility + + 通过基础设施的标记和基线,重建基础设施变得非常容易。想想你是否经常听到这个:“但是它在我的机器上可以运行!”现在,你可以在本地的测试平台中快速重现类似生产环境,从而将环境像变量一样在你的调试过程中删除。 +3. 可追溯性Traceability + + 你是否还记得曾经有过多少次寻找到底是谁更改了文件夹权限的经历,或者是谁升级了 `ssh` 包?代码化的、版本化的,发布的基础设施消除了临时性变更,为基础设施的管理带来了可追踪性和可预测性。 +4. 自动化Automation + + 借助脚本化的基础架构,自动化是下一个合乎逻辑的步骤。自动化允许你按需创建基础设施,并在使用完成后销毁它,所以你可以将更多宝贵的时间和精力用在更重要的任务上。 +5. 不变性Immutability + + CIi 带来了不可变基础设施等创新。你可以创建一个新的基础设施组件而不是通过升级(请参阅下面有关不可变设施的说明)。 + +持续基础设施是从运行基础环境到运行基础组件的进化。像处理代码一样,通过证实的 DevOps 流程来完成。对传统的 CI 的重新定义包含了缺少的那个 “i”,从而形成了连贯的 CD 。 + +**(CIc + CIi) = CI -> CD** + +### 基础设施如代码 (IaC) + +CIi 流程的一个关键推动因素是基础设施如代码infrastructure as code(IaC)。IaC 是一种使用配置文件进行基础设施创建和升级的机制。这些配置文件像其他的代码一样进行开发,并且使用版本管理系统进行管理。这些文件遵循一般的代码开发流程:单元测试、提交、构建和发布。IaC 流程拥有版本控制带给基础设施开发的所有好处,如标记、版本一致性,和修改可追溯。 + +这有一个简单的 Terraform 脚本用来在 AWS 上创建一个双层基础设施的简单示例,包括虚拟私有云(VPC)、弹性负载(ELB),安全组和一个 NGINX 服务器。[Terraform][5] 是一个通过脚本创建和更改基础设施架构和开源工具。 + +![terraform script][7] + +*Terraform 脚本创建双层架构设施的简单示例* + +完整的脚本请参见 [GitHub][8]。 + +### 不可变基础设施 + +你有几个正在运行的虚拟机,需要更新安全补丁。一个常见的做法是推送一个远程脚本单独更新每个系统。 + +要是不更新旧系统,如何才能直接丢弃它们并部署安装了新安全补丁的新系统呢?这就是不可变基础设施immutable infrastructure。因为之前的基础设施是版本化的、标签化的,所以安装补丁就只是更新该脚本并将其推送到发布流程而已。 + +现在你知道为什么要说基础设施在 CI 流程中特别重要了吗? 
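(附)上文“基础设施如代码”一节中的 Terraform 脚本只以截图形式给出,无法直接复制。下面是一个极简的示意片段,用来体会这种声明式写法;其中的资源名、地域、网段和 AMI 都是随手填的占位值,完整脚本请以正文给出的 GitHub 链接为准:

```
# 示意:用代码声明一个最小的 VPC、子网和一台 Web 实例
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "demo" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "web" {
  vpc_id     = "${aws_vpc.demo.id}"
  cidr_block = "10.0.1.0/24"
}

resource "aws_instance" "nginx" {
  ami           = "ami-0123456789abcdef0"   # 占位 AMI,请替换为实际可用的镜像
  instance_type = "t2.micro"
  subnet_id     = "${aws_subnet.web.id}"
}
```

把这样一段配置纳入版本管理之后,前文提到的一致性、可重现性和可追溯性就都有了着落。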
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/11/continuous-infrastructure-other-ci + +作者:[Girish Managoli][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Jamskr](https://github.com/Jamskr) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/gammay +[1]:/file/376916 +[2]:https://opensource.com/sites/default/files/images/life-uploads/figure1.jpg (continuous infrastructure pipeline in use) +[3]:/file/376921 +[4]:https://opensource.com/sites/default/files/images/life-uploads/figure2.jpg (CI pipeline with infrastructure) +[5]:https://github.com/hashicorp/terraform +[6]:/file/376926 +[7]:https://opensource.com/sites/default/files/images/life-uploads/figure3_0.png (sample terraform script) +[8]:https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/two-tier diff --git a/published/201812/20171111 A CEOs Guide to Emacs.md b/published/201812/20171111 A CEOs Guide to Emacs.md new file mode 100644 index 0000000000..4a92e5710b --- /dev/null +++ b/published/201812/20171111 A CEOs Guide to Emacs.md @@ -0,0 +1,286 @@ +CEO 的 Emacs 秘籍 +=========== + +几年前,不,是几十年前,我就在用 Emacs。不论是码代码、编写文档,还是管理邮件和日程,我都用这个编辑器,或者是说操作系统,而且我还乐此不疲。许多年过去了,我也转向了其他更新、更好的工具。结果,就连最基本的文件浏览,我都已经忘了在不用鼠标的情况下该怎么操作。大约三个月前,我意识到我在应用程序和计算机之间切换上耗费了大量的时间,于是我决定再次使用 Emacs。这是个很正确的决定,原因有以下几个。其中包括用 `.emacs` 和 Dropbox 来搭建一个良好的、可移植的环境的一些技巧。 + +对于那些还没用过 Emacs 的人来说,Emacs 会让你爱恨交加。它有点像一个房子大小的鲁布·戈德堡机械Rube Goldberg machine,乍一看,它具备烤面包机的所有功能。这听起来不像是一种认可,但关键词是“乍一看”。一旦你了解了 Emacs,你就会意识到它其实是一台可以当发动机用的热核烤面包机……好吧,只是指文本处理的所有事情。当考虑到你计算机的使用周期在很大程度上都是与文本有关时,这是一个相当大胆的声明。大胆,但却是真的。 + +也许对我来说更重要的是,Emacs 是我曾经使用过的一个应用,并让我觉得我真正的拥有它,而不是把我塑造成一个匿名的“用户”,就好像位于 [Soma][30](LCTT 译注:旧金山的一个街区)或雷蒙德(LCTT 译注:微软总部所在地)附近某个高档办公室的产品营销部门把钱作为明确的目标一样。现代生产力和创作应用程序(如 Pages 或 IDE)就像碳纤维赛车,它们装备得很好,也很齐全。而 Emacs 就像一盒经典的 [Campagnolo][31] (LCTT 译注:世界上最好的三个公路自行车套件系统品牌之一)零件和一个漂亮的自行车牵引式钢框架,但缺少曲柄臂和刹车杆,你必须在网上某个小众文化中找到它们。前者更快而且很完整,后者是无尽的快乐或烦恼的源泉,当然这取决于你自己,而且这种快乐或烦恼会伴随到你死。我就是那种在找到一堆老古董或用 `Emacs Lisp` 配置编辑器时会感到高兴的人,具体情况因人而异。 + +![1933 steel bicycle](https://www.fugue.co/hubfs/Imported_Blog_Media/bicycle-1.jpg) + +*一辆我还在骑的 1933 年产的钢制自行车。你可以看看框架管差别: [https://www.youtube.com/watch?v=khJQgRLKMU0][6]* + +这可能给人一种 Emacs 已经过气或过时的印象。然而并不是,Emacs 是强大和永恒的,只要你耐心地去理解它的一些规则。Emacs 的规则很另类,也很奇怪,但其中的逻辑却引人注目,且魅力十足。对于我来说, Emacs 更像是未来而不是过去。就像牵引式钢框架在未来几十年里将会变得好用和舒适,而神奇的碳纤维自行车将会被扔进垃圾场,在撞击中粉碎一样,Emacs 也将会作为一种在最新的流行应用早已被遗忘的时候的好用的工具继续存在这里。 + +使用 Lisp 代码来构建个人工作环境,并将这个恰到好处的环境移植到任何计算机,如果这种想法打动了你,那么你将会爱上 Emacs。如果你喜欢很潮、很炫的,又不想投入太多时间和精力的情况下就能直接工作的话,那么 Emacs 可能不适合你。我已经不再写代码了(除了 Ludwig 和 Emacs Lisp),但是 Fugue 公司的很多工程师都使用 Emacs 来提高码代码的效率。我公司有 30% 的工程师用 Emacs,40% 用 IDE 和 30% 的用 vim。但这篇文章是写给 CEO 和其他[精英][32]Pointy-Haired Bosses(PHB[^1] )(以及对 Emacs 感兴趣的人)的,所以我将解释或者说辩解我为什么喜欢它以及我如何使用它。同时我也希望我能介绍清楚,从而让你有个良好的体验,而不是花上几个小时去 Google。 + +### 恒久优势 + +使用 Emacs 带来的长期优势是让生活更轻松。与最后的收获相比,最开始的付出完全值得。想想这些: + +#### 简单高效 + +Org 模式本身就值得花时间,但如果你像我一样,你通常要处理十几份左右的文件 —— 从博客帖子到会议事务清单,再到员工评估。在现代计算世界中,这通常意味着要使用多个应用程序,所有这些程序都有不同的用户界面、保存方式、排序和搜索方式。结果就是你需要不断转换思维环境,记住各种细节。我讨厌在程序间切换,这是在强人所难,因为这是个不完整界面模型[^2] ,并且我讨厌记住本该由计算机记住的东西。在单个环境下,Emacs 对 PHB 甚至比对于程序员更高效,因为程序员更多时候只需要专注于一个程序。转换思维环境的成本比表面上的要更高。操作系统和应用程序厂商已经构建了各种界面,以分散我们对这一现实的注意力。如果你是技术人员,通过快捷键(`M-:`)来访问功能强大的[语言解释器][33]会方便的多[^3] 。 + +许多应用程序可以全天全屏地用于编辑文本。但Emacs 是唯一的,因为它既是编辑器也是 Emacs Lisp 解释器。从本质上来说,你工作时只要用电脑上的一两个键就能完成。如果你略懂编程的话,就会发现这代表着你可以在 Emacs 中做 _任何事情_。一旦你在内存中存储了这些指令,你的电脑就可以在工作时几乎实时地为你提供高效的运转。你不会想用 Emacs Lisp 
来重建 Excel,因为只要用简单的一两行代码就能实现 Excel 中大多数的功能。比如说我要处理数字,我更有可能转到 scratch 缓冲区,编写一些代码,而不是打开电子表格。即便是要写一封比较大的邮件时,我通常也会先在 Emacs 中写完,然后再复制粘贴到邮件客户端中。当你可以流畅的书写时,为什么要去切换呢?你可以先从一两个简单的算术开始,随着时间的推移,你可以很容易的在 Emacs 中添加你所需要处理的计算。这在应用程序中可能是独一无二的,同时还提供了让为其他的人创造的丰富特性。还记得艾萨克·阿西莫夫书中那些神奇的终端吗? Emacs 是我所遇到的最接近它们的东西[^4] 。我决定不再用什么应用程序来做这个或那个。相反,我只是工作。拥有一个伟大的工具并致力于此,这才是真正的动力和效率。 + +#### 静中造物 + +拥有所发现的最好的文本编辑功能的最终结果是什么?有一群人在做各种各样有用的补充吗?发挥了 Lisp 键盘的全部威力了吗?我用 Emacs 来完成所有的创作性工作,音乐和图片除外。 + +我办公桌上有两个显示器。其中一块竖屏是将 Emacs 全天全屏显示,另一个显示浏览器,用来搜索和阅读,我通常也会打开一个终端。我将日历、邮件等放在 OS X 的另一个桌面上,当我使用 Emacs 时,这个桌面会隐藏起来,同时我也会关掉所有通知。这样就能让我专注于我手头上在做的事了。我发现,越是先进的 UI 应用程序,消除干扰越是不可能,因为这些应用程序致力于提供帮助和易用性。我不需要经常被提醒该如何操作,我已经做了成千上万次了,我真正需要的是一张干净整洁的白纸用来思考。也许因为年龄和自己的“恶习”,我不太喜欢处在嘈杂的环境中,但我认为这值得一试。看看在你电脑环境中有一些真正的宁静是怎样的。当然,现在很多应用程序都有隐藏界面的模式,谢天谢地,苹果和微软现在都有了真正意义上的全屏模式。但是,没有并没有应用程序可以强大到足以“处理”大多数事务。除非你整天写代码,或者像出书一样,处理很长的文档,否则你仍然会面临其他应用程序的干扰。而且,大多数现代应用程序似乎同时显得自视甚高,缺乏功能和可用性[^5] 。比起 office 桌面版,我更讨厌它的在线版。 + +![](https://www.fugue.co/hubfs/Imported_Blog_Media/desktop-1.jpg) + +*我的桌面布局, Emacs 在左边* + +但是沟通呢?创造和沟通之间的差别很大。当我将这两件事在不同时间段处理时,我的效率会更高。我们 Fugue 公司使用 Slack,痛并快乐着。我把 Slack 和我的日历、电子邮件放在一个即时通讯的桌面上,这样,当我正在做事时,我就能够忽略所有的聊天信息了。虽然只要一个 Slackstorm 或一封风投或董事会董事的电子邮件,就能让我立刻丢掉手头工作。但是,大多数事情通常可以等上一两个小时。 + +#### 普适恒久 + +第三个原因是,我发现 Emacs 比其它的环境更有优势的是,你可以很容易地用它来处理事务。我的意思是,你所需要的只是通过类似于 Dropbox 的网站同步一两个目录,而不是让大量的应用程序以它们自己的方式进行交互和同步。然后,你可以在任何你已经精心打造了适合你的目的的套件的环境中工作了。我在 OS X、Windows,或有时在 Linux 都是这样做的。它非常简单可靠。这种功能很有用,以至于我害怕处理 Pages、Google Docs、Office 或其他类型的文件和应用程序,这些文件和应用程序会迫使我回到文件系统或云中的某个地方去寻找。 + +限制在计算机上永久存储的因素是文件格式。假设人类已经解决了存储问题[^6] ,随着时间的推移,我们面临的问题是我们能否够继续访问我们创建的信息。文本文件是保存时间最久的格式。你可以用 Emacs 轻松地打开 1970 年的文本文件。然而对于 Office 应用程序却并非如此。同时文本文件要比 Office 应用程序数据文件小得多,也要好的多。作为一个数码背包迷,作为一个在脑子里一闪而过就会做很多小笔记的人,拥有一个简单、轻便、永久、随时可用的东西对我来说很重要。 + +如果你准备尝试 Emacs,请继续读下去!下面的部分不是完整的教程,但是在读完后,就可以动手操作了。 + +### 驾驭之道 —— 专业定制 + +所有这些强大、精神上的平静和安宁的代价是,Emacs 有一个陡峭的学习曲线,它的一切都与你以前所习惯的不同。一开始,这会让你觉得你是在浪费时间在一个过时和奇怪的应用程序上,就好像穿越到过去。这有点像你只开过车,却要你去学骑自行车[^7] 。 + +#### 类型抉择 + +我用的是来自 GNU 的 OS X 和 Windows 的通用版本的 Emacs。你可以在 [http://emacsformacos.com/][35] 获取 OS X 版本,在 [http://www.gnu.org/software/emacs/][37] 获取 Windows 版本。市面上还有很多其他版本,尤其是 Mac 版本,但我发现,要做一些功能强大的东西(涉及到 Lisp 和许多模式),学习曲线要比实际操作低得多。下载,然后我们就可以开始了[^8] ! 
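在进入下面的具体操作之前,不妨先体会一下前文所说的“一两行代码就能处理数字”是什么感觉:打开 `*scratch*` 缓冲区,输入类似下面的表达式,把光标放在最后一个括号之后,按 `C-x C-e` 求值,结果会出现在 minibuffer 里。这只是一个随手写的示例,数字本身没有任何含义:

```
;; 求一组数的平均值:把光标移到表达式末尾,按 C-x C-e 求值
(let ((nums '(12 7 30 5)))
  (/ (apply #'+ nums) (float (length nums))))
```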
+ +#### 驾驭之始 + +在本文中,我将使用 Emacs 的按键和组合键约定。`C` 表示 `Control` 键,`M` 表示 `meta`(通常是 `Alt` 或 `Option` 键),以及用于组合键的连字符。因此,`C-h t` 表示同时按下 `Control` 和 `h` 键,然后释放,再按下 `t`。这个组合快捷键会指向一个教程,这是你首先要做的一件事。 + +不要使用方向键或鼠标。它们可以工作,但是你应该给自己一周的时间来使用 Emacs 教程中的原生的导航命令。一旦你这些命令变为了肌肉记忆,你可能就会乐在其中,无论到哪里,你都会非常想念它们。这个 Emacs 教程在介绍它们方面做得很好,但是我将进行总结,所以你不需要阅读全部内容。最无聊的是,不用方向键,用 `C-b` 向前移动,用 `C-f` 向后移动,上一行用 `C-p`,下一行用 `C-n`。你可能会想:“我用方向键就很好,为什么还要这样做?” 有几个原因。首先,你不需要从主键盘区将你的手移开。第二,使用 `Alt`(或用 Emacs 的说法 `Meta`)键来向前或向后在单词间移动。显而易见这样更方便。第三,如果想重复某个命令,可以在命令前面加上一个数字。在编辑文档时,我经常使用这种方法,通过估计向后移动多少个单词或向上或向下移动多少行,然后按下 `C-9 C-p` 或 `M-5 M-b` 之类的快捷键。其它真正重要的导航命令基于开头用 `a` 和结尾用 `e`。在行中使用 `C-a|e`,在句中使用 `M-a|e`。为了让句中的命令正常工作,需要在句号后增加两个空格,这同时提供了一个有用的特性,并消除了脑中一个过时的[观点][38]。如果需要将文档导出到单个空间[发布环境][39],可以编写一个宏来执行此操作。 + +Emacs 所附带的教程很值得去看。对于真正缺乏耐心的人,我将介绍一些重要的命令,但那个教程非常有用。记住:用 `C-h t` 进入教程。 + +#### 驾驭之复制粘贴 + +你可以把 Emacs 设为 CUA 模式,这将会以熟悉的方式工作来操作复制粘贴,但是原生的 Emacs 方法更好,而且你一旦学会了它,就很容易。你可以使用 `Shift` 和导航命令来标记区域(如同选择)。所以 `C-F` 是选中光标前的一个字符,等等。亦可以用 `M-w` 来复制,用 `C-w` 剪切,然后用 `C-y` 粘贴。这些实际上叫做删除killing召回yanking,但它非常类似于剪切和粘贴。在删除中还有一些小技巧,但是现在,你只需要关注剪切、复制和粘贴。如果你开始尝试了,那么 `C-x u` 是撤销。 + +#### 驾驭之 Ido 模式 + +相信我,Ido 会让文件的工作变得很简单。通常,你在 Emacs 中处理文件不需要使用一个单独的访达或文件资源管理器的窗口。相反,你可以用编辑器的命令来创建、打开和保存文件。如果没有 Ido 的话,这将有点麻烦,所以我建议你在学习其他之前安装好它。 Ido 是 Emacs 的 22 版时开始出现的,但是需要对你的 `.emacs` 文件做一些调整,来确保它一直开启着。这是个配置环境的好理由。 + +Emacs 中的大多数功能都表现在模式上。要安装指定的模式,需要做两件事。嗯,一开始你需要做一些额外的事情,但这些只需要做一次,然后再做这两件事。那么,这件额外的事情是你需要一个单独的位置来放置所有 Emacs Lisp 文件,并且你需要告诉 Emacs 这个位置在哪。我建议你在 Dropbox 上创建一个单独的目录,那是你 Emacs 主目录。在这里,你需要创建一个 `.emacs` 文件和 `.emacs.d` 目录。在 `.emacs.d` 目录下,创建一个 `lisp` 的目录。就像这样: + +``` +home +| ++.emacs +| +-.emacs.d + | + -lisp +``` + +你可以将 `.el` 文件,比如说模式文件,放到 `home/.emacs.d/lisp` 目录下,然后在你的 `.emacs` 文件中添加以下代码来指明该路径: + +``` +(add-to-list 'load-path "~/.emacs.d/lisp/") +``` + +Ido 模式是 Emacs 自带的,所以你不需要在你的 `lisp` 目录中放这个 `.el` 文件,但你仍然需要添加上面代码,因为下面的介绍会使用到它. 
+ +#### 驾驭之符号链接 + +等等,这里写的 `.emacs` 和 `.emacs.d` 都是存放在你的主目录下,但我们把它们放到了 Dropbox 的某些愚蠢的文件夹!对,这就让你的环境在任何地方都很容易使用。把所有东西都保存在 Dropbox 上,并做符号链接到 `~` 下的 `.emacs` 、`.emacs.d` 和你的主要存放文档的目录。在 OS X 上,使用 `ln -s` 命令非常简单,但在 Windows 上却很麻烦。幸运的是,Emacs 提供了一种简单的方法来替代 Windows 上的符号链接,Windows 的 `HOME` 环境变量。转到 Windows 的环境变量(Windows 10,你可以按 Windows 键然后输入 “环境变量” 来搜索,这是 Windows 10 最好的地方了),在你的帐户下创建一个指向你在 Dropbox 中 Emacs 的文件夹的 `HOME` 环境变量。如果你想方便地浏览 Dropbox 之外的本地文件,你可能想在你的实际主目录下建立一个到 Dropbox 下 Emacs 主目录的符号链接。 + +至此,你已经完成了在任意机器上指向你的 Emacs 配置和文件所需的技巧。如果你买了一台新电脑,或者用别人的电脑一小时或一天,你就可以得到你的整个工作环境。第一次操作起来似乎有点难,但是一旦你知道你在做什么,就(最多)只需要 10 分钟。 + +但我们现在是在配置 Ido …… + +按下 `C-x` `C-f` 然后输入 `~/.emacs` 和两次回车来创建 `.emacs` 文件,将下面几行添加进去: + +``` +;; set up ido mode +(require `ido) +(setq ido-enable-flex-matching t) +(setq ido-everywhere t) +(ido-mode 1) +``` + +在 `.emacs` 窗口开着的时候,执行 `M-x evaluate-buffer` 命令。如果某处弄错了的话,将得到一个错误,或者你将得到 Ido。Ido 改变了在 minibuffer 中操作文件操方式。关于这个有一篇比较好的文档,但是我也会指出一些技巧。有效地使用 `~/`;你可以在 minibuffer 的任何地方输入 `~/`,它就会跳转到主目录。这就意味着,你应该让你的大部分东西就近的放在主目录下。我用 `~/org` 目录来保存所有非代码的东西,用 `~/code` 保存代码。一旦你进入到正确的目录,通常会拥有一组具有不同扩展名的文件,特别是当你使用 Org 模式并从中发布的话。你可以输入 `.` 和想要的扩展名,无论你的在文件名的什么位置,Ido 都会将选择限制在具有该扩展名的文件中。例如,我在 Org 模式下写这篇博客,所以该文件是: + +``` +~/org/blog/emacs.org +``` + +我偶尔也会用 Org 模式发布成 HTML 格式,所以我将在同一目录下得到 `emacs.html` 文件。当我想打开该 Org 文件时,我会输入: + +``` +C-x C-f ~/o[RET]/bl[RET].or[RET] +``` + +其中 `[RET]` 是我使用 `Ido` 模式的自动补全而按下的回车键。所以,这只需要按 12 个键,如果你习惯了的话, 这将比打开访达或文件资源管理器再用鼠标点要节省 _很_ 多时间。 Ido 模式很有用,而这只是操作 Emacs 的一种实用模式而已。下面让我们去探索一些其它对完成工作很有帮助的模式吧。 + +#### 驾驭之字体风格 + +我推荐在 Emacs 中使用漂亮的字体族。它们可以使用不同的括号、0 和其他字符进行自定义。你可以在字体文件本身中构建额外的行间距。我推荐 1.5 倍的行间距,并在代码和数据中使用不等宽字体。写作中我用 `Serif` 字体,它有一种紧凑但时髦的感觉。你可以在 [http://input.fontbureau.com/][40] 上找到它们,在那里你可以根据自己的喜好进行定制。你可以使用 Emacs 中的菜单手动设置字体,但这会将代码保存到你的 `.emacs` 文件中,如果你使用多个设备,你可能需要一些不同的设置。我将我的 `.emacs` 设置为根据使用的机器的名称来相应配置屏幕。代码如下: + +``` +;; set up fonts for different OSes. OSX toggles to full screen. 
+(setq myfont "InputSerif") +(cond +((string-equal system-name "Sampo.local") + (set-face-attribute 'default nil :font myfont :height 144) + (toggle-frame-fullscreen)) +((string-equal system-name "Morpheus.local") + (set-face-attribute 'default nil :font myfont :height 144)) +((string-equal system-name "ILMARINEN") + (set-face-attribute 'default nil :font myfont :height 106)) +((string-equal system-name "UKKO") + (set-face-attribute 'default nil :font myfont :height 104))) +``` + +你应该将 Emacs 中的 `system-name` 的值替换成你通过 `(system-name)` 得到的值。注意,在 Sampo (我的 MacBook)上,我还将 Emacs 设置为全屏。我也想在 Windows 实现这个功能,但是 Windows 和 Emacs 好像互相嫌弃对方,当我尝试配置时,它总是不稳定。相反,我只能在启动后手动全屏。 + +我还建议去掉 Emacs 中的上世纪 90 年代出现的难看工具栏,当时比较流行在应用程序中使用工具栏。我还去掉了一些其它的“电镀层”,这样我就有了一个简单、高效的界面。把这些加到你的 `.emacs` 的文件中来去掉工具栏和滚动条,但要保留菜单(在 OS X 上,它将被隐藏,除非你将鼠标到屏幕顶部): + +``` +(if (fboundp 'scroll-bar-mode) (scroll-bar-mode -1)) +(if (fboundp 'tool-bar-mode) (tool-bar-mode -1)) +(if (fboundp 'menu-bar-mode) (menu-bar-mode 1)) +``` + +#### 驾驭之 Org 模式 + +我基本上是在 Org 模式下处理工作的。它是我创作文档、记笔记、列任务清单以及 90% 其他工作的首选环境。Org 模式是笔记和待办事项列表的组合工具,最初是由一个在会议中使用笔记本电脑的人构想出来的。我反对在会议中使用笔记本电脑,自己也不使用,所以我的用法与他的有些不同。对我来说,Org 模式主要是一种处理结构中内容的方式。在 Org 模式中有标题和副标题等,它们的作用就像一个大纲。Org 模式允许你展开或隐藏大纲树,还可以重新排列该树。这正合我意,并且我发现用这种方式使用它是一种乐趣。 + +Org 模式也有很多让生活愉快的小功能。例如,脚注处理非常好,LaTeX/PDF 输出也很好。Org 模式能够根据所有文档中的待办事项生成议程,并能很好地将它们与日期/时间联系起来。我不把它用在任何形式的外部任务上,这些任务都是在一个共享的日历上处理的,但是在创建事物和跟踪我未来需要创建的东西时,它是无价的。安装它,你只要将 `org-mode.el` 放到你的 `lisp` 目录下。如果你想要它基于文档的结构进行缩进并在打开时全部展开的话,在你的 `.emacs` 文件中添加如下代码: + +``` +;; set up org mode +(setq org-startup-indented t) +(setq org-startup-folded "showall") +(setq org-directory "~/org") +``` + +最后一行是让 Org 模式知道在哪里查找要包含在议程和其他事情中的文件。我把 Org 模式保存在我的主目录中,也就是说,像前面介绍的一样,它是 Dropbox 目录的一个符号链接。 + +我有一个总是在缓冲区中打开的 `stuff.org` 文件。我把它当作记事本。Org 模式使得提取待办事项和有期限的事情变得很容易。当你能够内联 Lisp 代码并在需要计算它时,它特别有用。拥有包含内容的代码非常方便。同样,你可以使用 Emacs 访问实际的计算机,这是一种解放。 + +##### 用 Org 模式进行发布 + +我关心的是文档的外观及格式。我刚开始工作时是个设计师,而且我认为信息可以,也应该表现得清晰和美丽。Org 模式对将 LaTeX 生成 PDF 支持的很好,LaTeX 虽然也有学习曲线,但是很容易处理一些简单的事务。 + +如果你想使用字体和样式,而不是典型的 LaTeX 字体和样式,你需要做些事。首先,你要用到 XeLaTeX,这样就可以使用普通的系统字体,而不是 LaTeX 的特殊字体。接下来,你需要将以下代码添加到 `.emacs` 中: + +``` +(setq org-latex-pdf-process + '("xelatex -interaction nonstopmode %f" + "xelatex -interaction nonstopmode %f")) +``` + +我把这个放在 `.emacs` 中 Org 模式配置部分的末尾,使文档变得更整洁。这让你在从 Org 模式发布时可以使用更多格式化选项。例如,我经常使用: + +``` +#+LaTeX_HEADER: \usepackage{fontspec} +#+LATEX_HEADER: \setmonofont[Scale=0.9]{Input Mono} +#+LATEX_HEADER: \setromanfont{Maison Neue} +#+LATEX_HEADER: \linespread{1.5} +#+LATEX_HEADER: \usepackage[margin=1.25in]{geometry} + +#+TITLE: Document Title Here +``` + +这些都可以在 `.org` 文件中找到。我们的公司规定的正文字体是 `Maison Neue`,但你也可以在这写上任何适当的东西。我很是抵制 `Maison Neue`,因为这是一种糟糕的字体,任何人都不应该使用它。 + +这个文件是一个使用该配置输出为 PDF 的实例。这就是开箱即用的 LaTeX 一样。在我看来这还不错,但是字体很平淡,而且有点奇怪。此外,如果你使用标准格式,人们会觉得他们正在阅读的东西是、或者假装是一篇学术论文。别怪我没提醒你。 + +#### 驾驭之 Ace Jump 模式 + +这只是一个辅助模式,而不是一个主模式,但是你也需要它。其工作原理有点像之前提到的 Jef Raskin 的 Leap 功能[^9] 。 按下 `C-c C-SPC`,然后输入要跳转到单词的第一个字母。它会高亮显示所有以该字母开头的单词,并将其替换为字母表中的字母。你只需键入所需位置的字母,光标就会跳转到该位置。我常将它作为导航键或是用来检索。将 `.el` 文件下到你的 `lisp` 目录下,并在 `.emacs` 文件添加如下代码: + +``` +;; set up ace-jump-mode +(add-to-list 'load-path "which-folder-ace-jump-mode-file-in/") +(require 'ace-jump-mode) +(define-key global-map (kbd "C-c C-SPC" ) 'ace-jump-mode) +``` + +### 待续 + +本文已经够详细了,你能在其中得到你所想要的。我很想知道除编程之外(或用于编程)Emacs 的使用情况,及其是否高效。在我使用 Emacs 的过程中,可能存在一些自作聪明的老板式想法,如果你能指出来,我将不胜感激。之后,我可能会写一些更新来介绍其它特性或模式。我很确定我将会向你展示如何在 Emacs 和 Ludwig 模式下使用 Fugue,因为我会将它发展成比代码高亮更有用的东西。更多想法,请在 Twitter 上 [@fugueHQ][41] 。 + +### 脚注 + +[^1]: 如果你是位精英,但从没涉及过技术方面,那么 Emacs 
并不适合你。对于少数的人来说,Emacs 可能会为他们开辟一条通往计算机技术方面的道路,但这只是极少数。如果你知道怎么使用 Unix 或 Windows 的终端,或者曾编辑过 dotfile,或者说你曾写过一点代码的话,这对使用 Emacs 有很大的帮助。 + +[^2]: 参考链接: http://archive.wired.com/wired/archive/2.08/tufte.html + +[^3]: 我主要是在写作时使用这个模式来进行一些运算。比如说,当我在给一个新雇员写一封入职信时,我想要算这封入职信中有多少个选项。由于我在我的 `.emacs` 为 outstanding-shares 定义了一个变量,所以我只要按下 `M-:` 然后输入 `(* .001 outstanding-shares)` 就能再无需打开计算器或电子表格的情况下得到精度为 0.001 的结果。我使用了 _大量_ 的变量来避免程序间切换。 + +[^4]: 缺少的部分是 web。有个名为 eww 的 Emacs 网页浏览器能够让你在 Emacs 中浏览网页。我用的就是这个,因为它既能拦截广告(LCTT 译注:实质上是无法显示,/laugh),同时也在可读性方面为 web 开发者消除了大多数差劲的选项。这个其实有点类似于 Safari 的阅读模式。不幸的是,大部分网站都有很多令人讨厌的繁琐的东西以及难以转换为文本的导航, + +[^5]: 易用性和易学性这两者经常容易被搞混。易学性是指学习使用工具的难易程度。而易用性是指工具高效的程度。通常来说,这是要差别的,就想鼠标和菜单栏的差别一样。菜单栏很容易学会,但是却不怎么高效,以致于早期会存在一些键盘的快捷键。除了在 GUI 方面上,Raskin 在很多方面上的观点都很正确。如今,操作系统正在将一些合适的搜索映射到键盘的快捷键上。比如说在 OS X 和 Windows 上,我默认的导航方式就是搜索。Ubuntu 的搜索做的很差劲,如同它的 GUI 一样差劲。 + +[^6]: 在有网的情况下,[AWS S3][42] 是解决文件存储问题的有效方案。数万亿个对象存在 S3 中,但是从来没有遗失过。大部分提供云存储的服务都是在 S3 上或是模拟 S3 构建的。没人能够拥有 S3 一样的规模,所以我将重要的文件通过 Dropbox 存储在上面。 + +[^7]: 目前,你可能会想:“这个人和自行车有什么关系?”……我在各个层面上都喜欢自行车。自行车是迄今为止发明的最具机械效率的交通工具。自行车可以是真正美丽的事物。而且,只要注意点的话,自行车可以用一辈子。早在 2001 年,我曾向 Rivendell Bicycle Works 订购了一辆自行车,现在我每次看到那辆自行车依然很高兴,自行车和 Unix 是我接触过的最好的两个发明。对了,还有 Emacs。 + +[^8]: 这个网站有一个很棒的 Emacs 教程,但不是这个。当我浏览这个页面时,我确实得到了一些对获取高效的 Emacs 配置很重要的知识,但无论怎么说,这都不是个替代品。 + +[^9]: 20 世纪 80 年代,Jef Raskin 与 Steve Jobs 在 Macintosh 项目上闹翻后, Jef Raskin 又设计了 [Canon Cat 计算机][43]。这台 Cat 计算机是以文档为中心的界面(所有的计算机都应如此),并以一种全新的方式使用键盘,你现在可以用 Emacs 来模仿这种键盘。如果现在有一台现代的,功能强大的 Cat 计算机并配有一个高分辨的显示器和 Unix 系统的话,我立马会用 Mac 来换。[https://youtu.be/o_TlE_U_X3c?t=19s][28] + +-------------------------------------------------------------------------------- + +via: https://blog.fugue.co/2015-11-11-guide-to-emacs.html + +作者:[Josh Stella][a] +译者:[oneforalone](https://github.com/oneforalone) +校对:[wxy](https://github.com/wxy), [oneforalone](https://github.com/oneforalone) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.fugue.co/authors/josh.html +[1]:https://blog.fugue.co/2013-10-16-vpc-on-aws-part3.html +[2]:https://blog.fugue.co/2013-10-02-vpc-on-aws-part2.html +[3]:http://ww2.fugue.co/2017-05-25_OS_AR_GartnerCoolVendor2017_01-LP-Registration.html +[4]:https://blog.fugue.co/authors/josh.html +[5]:https://twitter.com/joshstella +[6]:https://www.youtube.com/watch?v=khJQgRLKMU0 +[7]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#phb +[8]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#tufte +[9]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#interpreter +[10]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#eww +[11]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#usability +[12]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#s3 +[13]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#bicycles +[14]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#nottutorial +[15]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#canoncat 
+[16]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#phbOrigin +[17]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#tufteOrigin +[18]:http://archive.wired.com/wired/archive/2.08/tufte.html +[19]:http://archive.wired.com/wired/archive/2.08/tufte.html +[20]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#interpreterOrigin +[21]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#ewwOrigin +[22]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#usabilityOrigin +[23]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#s3Origin +[24]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#bicyclesOrigin +[25]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#nottutorialOrigin +[26]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#canoncatOrigin +[27]:https://youtu.be/o_TlE_U_X3c?t=19s +[28]:https://youtu.be/o_TlE_U_X3c?t=19s +[29]:https://blog.fugue.co/authors/josh.html +[30]:http://www.huffingtonpost.com/zachary-ehren/soma-isnt-a-drug-san-fran_b_987841.html +[31]:http://www.campagnolo.com/US/en +[32]:http://www.businessinsider.com/best-pointy-haired-boss-moments-from-dilbert-2013-10 +[33]:http://www.webopedia.com/TERM/I/interpreter.html +[34]:http://emacsformacosx.com/ +[35]:http://emacsformacosx.com/ +[36]:http://www.gnu.org/software/emacs/ +[37]:http://www.gnu.org/software/emacs/ +[38]:http://www.huffingtonpost.com/2015/05/29/two-spaces-after-period-debate_n_7455660.html +[39]:http://practicaltypography.com/one-space-between-sentences.html +[40]:http://input.fontbureau.com/ +[41]:https://twitter.com/fugueHQ +[42]:https://baike.baidu.com/item/amazon%20s3/10809744?fr=aladdin +[43]:https://en.wikipedia.org/wiki/Canon_Cat diff --git a/published/201812/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md b/published/201812/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md new file mode 100644 index 0000000000..bca48001bf --- /dev/null +++ b/published/201812/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md @@ -0,0 +1,106 @@ +TLDR 页:Linux 手册页的简化替代品 +============== + +[![](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu-640x360.jpg "tldr page ubuntu")][22] + +在终端上使用各种命令执行重要任务是 Linux 桌面体验中不可或缺的一部分。Linux 这个开源操作系统拥有[丰富的命令][23],任何用户都无法全部记住所有这些命令。而使事情变得更复杂的是,每个命令都有自己的一组带来丰富的功能的选项。 + +为了解决这个问题,人们创建了[手册页][12]man page,(手册 —— man 是 manual 的缩写)。首先,它是用英文写成的,包含了大量关于不同命令的深入信息。有时候,当你在寻找命令的基本信息时,它就会显得有点庞杂。为了解决这个问题,人们创建了[TLDR 页][13]。 + +### 什么是 TLDR 页? 
+ +TLDR 页的 GitHub 仓库将其描述为简化的、社区驱动的手册页集合。在实际示例的帮助下,努力让使用手册页的体验变得更简单。如果还不知道,TLDR 取自互联网的常见俚语:太长没读Too Long Didn’t Read。 + +如果你想比较一下,让我们以 `tar` 命令为例。 通常,手册页的篇幅会超过 1000 行。`tar` 是一个归档实用程序,经常与 `bzip` 或 `gzip` 等压缩方法结合使用。看一下它的手册页: + +[![tar man page](https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg)][14] + +而另一方面,TLDR 页面让你只是浏览一下命令,看看它是如何工作的。 `tar` 的 TLDR 页面看起来像这样,并带有一些方便的例子 —— 你可以使用此实用程序完成的最常见任务: + +[![tar tldr page](https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg)][15] + +让我们再举一个例子,向你展示 TLDR 页面为 `apt` 提供的内容: + +[![tldr-page-of-apt](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg)][16] + +如上,它向你展示了 TLDR 如何工作并使你的生活更轻松,下面让我们告诉你如何在基于 Linux 的操作系统上安装它。 + +### 如何在 Linux 上安装和使用 TLDR 页? + +最成熟的 TLDR 客户端是基于 Node.js 的,你可以使用 NPM 包管理器轻松安装它。如果你的系统上没有 Node 和 NPM,请运行以下命令: + +``` +sudo apt-get install nodejs +sudo apt-get install npm +``` + +如果你使用的是 Debian、Ubuntu 或 Ubuntu 衍生发行版以外的操作系统,你可以根据自己的情况使用`yum`、`dnf` 或 `pacman`包管理器。 + +现在,通过在终端中运行以下命令,在 Linux 机器上安装 TLDR 客户端: + +``` +sudo npm install -g tldr +``` + +一旦安装了此终端实用程序,最好在尝试之前更新其缓存。 为此,请运行以下命令: + +``` +tldr --update +``` + +执行此操作后,就可以阅读任何 Linux 命令的 TLDR 页面了。 为此,只需键入: + +``` +tldr +``` + +[![tldr kill command](https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg)][17] + +你还可以运行其[帮助命令](https://github.com/tldr-pages/tldr-node-client),以查看可与 TLDR 一起使用的各种参数,以获取所需输出。 像往常一样,这个帮助页面也附有例子。 + +### TLDR 的 web、Android 和 iOS 版本 + +你会惊喜地发现 TLDR 页不仅限于你的 Linux 桌面。 相反,它也可以在你的 Web 浏览器中使用,可以从任何计算机访问。 + +要使用 TLDR Web 版本,请访问 [tldr.ostera.io][18] 并执行所需的搜索操作。 + +或者,你也可以下载 [iOS][19] 和 [Android][20] 应用程序,并随时随地学习新命令。 + +[![tldr app ios](https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg)][21] + +你觉得这个很酷的 Linux 终端技巧很有意思吗? 请尝试一下,让我们知道您的反馈。 + +-------------------------------------------------------------------------------- + +via: https://fossbytes.com/tldr-pages-linux-man-pages-alternative/ + +作者:[Adarsh Verma][a] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fossbytes.com/author/adarsh/ +[1]:https://fossbytes.com/watch-star-wars-command-prompt-via-telnet/ +[2]:https://fossbytes.com/use-stackoverflow-linux-terminal-mac/ +[3]:https://fossbytes.com/single-command-curl-wttr-terminal-weather-report/ +[4]:https://fossbytes.com/how-to-google-search-in-command-line-using-googler/ +[5]:https://fossbytes.com/check-bitcoin-cryptocurrency-prices-command-line-coinmon/ +[6]:https://fossbytes.com/review-torrench-download-torrents-using-terminal-linux/ +[7]:https://fossbytes.com/use-wikipedia-termnianl-wikit/ +[8]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Ffossbytes.com%2Ftldr-pages-linux-man-pages-alternative%2F +[9]:https://twitter.com/intent/tweet?text=TLDR+pages%3A+Simplified+Alternative+To+Linux+Man+Pages&url=https%3A%2F%2Ffossbytes.com%2Ftldr-pages-linux-man-pages-alternative%2F&via=%40fossbytes14 +[10]:http://plus.google.com/share?url=https://fossbytes.com/tldr-pages-linux-man-pages-alternative/ +[11]:http://pinterest.com/pin/create/button/?url=https://fossbytes.com/tldr-pages-linux-man-pages-alternative/&media=https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg +[12]:https://fossbytes.com/linux-lexicon-man-pages-navigation/ +[13]:https://github.com/tldr-pages/tldr +[14]:https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg +[15]:https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg 
+[16]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg +[17]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg +[18]:https://tldr.ostera.io/ +[19]:https://itunes.apple.com/us/app/tldt-pages/id1071725095?ls=1&mt=8 +[20]:https://play.google.com/store/apps/details?id=io.github.hidroh.tldroid +[21]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg +[22]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg +[23]:https://fossbytes.com/a-z-list-linux-command-line-reference/ diff --git a/published/201812/20171223 Celebrate Christmas In Linux Way With These Wallpapers.md b/published/201812/20171223 Celebrate Christmas In Linux Way With These Wallpapers.md new file mode 100644 index 0000000000..3aa2e6f3ea --- /dev/null +++ b/published/201812/20171223 Celebrate Christmas In Linux Way With These Wallpapers.md @@ -0,0 +1,224 @@ +[#]: collector: (lujun9972) +[#]: translator: (jlztan) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: subject: (Celebrate Christmas In Linux Way With These Wallpapers) +[#]: via: (https://itsfoss.com/christmas-linux-wallpaper/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) +[#]: url: (https://linux.cn/article-10381-1.html) + +以 Linux 的方式庆祝圣诞节 +====== + +当前正是假日季,很多人可能已经在庆祝圣诞节了。祝你圣诞快乐,新年快乐。 + +为了延续节日氛围,我将向你展示一些非常棒的圣诞主题的 [Linux 壁纸][1]。在呈现这些壁纸之前,先来看一棵 Linux 终端下的圣诞树。 + +### 让你的桌面飘雪(针对 GNOME 用户) + +- [Let it Snow on Your Linux Desktop](https://youtu.be/1QI1ludzZuA) + +如果您在 Ubuntu 18.04 或任何其他 Linux 发行版中使用 GNOME 桌面,您可以使用一个小的 [GNOME 扩展][55]并在桌面上飘雪。 + +您可以从软件中心或 GNOME 扩展网站获取此 gsnow 扩展。我建议您阅读一些关于[使用 GNOME 扩展][55]的内容。 + +安装此扩展程序后,您会在顶部面板上看到一个小雪花图标。 如果您单击一次,您会看到桌面屏幕上的小絮状物掉落。 + +![](https://itsfoss.com/wp-content/uploads/2018/12/snowfall-on-linux-desktop-1.webm) + +你可以再次点击该图标来禁止雪花落下。 + +### 在 Linux 终端下显示圣诞树 + +![Display Christmas Tree in Linux Terminal](https://i.giphy.com/xUNda6KphvbpYxL3tm.gif) + +如果你想要在终端里显示一个动画的圣诞树,你可以使用如下命令: + +``` +curl https://raw.githubusercontent.com/sergiolepore/ChristBASHTree/master/tree-EN.sh | bash +``` + +要是不想一直从互联网上获取这棵圣诞树,也可以从它的 [GitHub 仓库][2] 中获取对应的 shell 脚本,更改权限之后按照运行普通 shell 脚本的方式运行它。 + +### 使用 Perl 在 Linux 终端下显示圣诞树 + +[![Christmas Tree in Linux terminal by NixCraft][3]][4] + +这个技巧最初由 [NixCraft][5] 分享,你需要为此安装 Perl 模块。 + +说实话,我不喜欢使用 Perl 模块,因为卸载它们真的很痛苦。所以使用这个 Perl 模块时需谨记,你必须手动移除它。 + +``` +perl -MCPAN -e 'install Acme::POE::Tree' +``` + +你可以阅读 [原文][5] 来了解更多信息。 + +### 下载 Linux 圣诞主题壁纸 + +所有这些 Linux 圣诞主题壁纸都是由 Mark Riedesel 制作的,你可以在 [他的网站][6] 上找到很多其他艺术品。 + +自 2002 年以来,他几乎每年都在制作这样的壁纸。可以理解的是,最早的一些壁纸不具有现代的宽高比。我把它们按时间倒序排列。 + +注意一个小地方,这里显示的图片都是高度压缩的,因此你要通过图片下方提供的链接进行下载。 + +![Christmas Linux Wallpaper][56] + +*[下载此壁纸][57]* + +![Christmas Linux Wallpaper][7] + +*[下载此壁纸][8]* + +[![Christmas Linux Wallpapers][9]][10] + +*[下载此壁纸][11]* + +[![Christmas Linux Wallpapers][12]][13] + +*[下载此壁纸][14]* + +[![Christmas Linux Wallpapers][15]][16] + +*[下载此壁纸][17]* + +[![Christmas Linux Wallpapers][18]][19] + +*[下载此壁纸][20]* + +[![Christmas Linux Wallpapers][21]][22] + +*[下载此壁纸][23]* + +[![Christmas Linux Wallpapers][24]][25] + +*[下载此壁纸][26]* + +[![Christmas Linux Wallpapers][27]][28] + +*[下载此壁纸][29]* + +[![Christmas Linux Wallpapers][30]][31] + +*[下载此壁纸][32]* + +[![Christmas Linux Wallpapers][33]][34] + +*[下载此壁纸][35]* + +[![Christmas Linux Wallpapers][36]][37] + +*[下载此壁纸][38]* + +[![Christmas Linux Wallpapers][39]][40] + +*[下载此壁纸][41]* + +[![Christmas Linux Wallpapers][42]][43] + +*[下载此壁纸][44]* + +[![Christmas Linux Wallpapers][45]][46] + +*[下载此壁纸][47]* + +[![Christmas Linux 
Wallpapers][48]][49] + +*[下载此壁纸][50]* + +### 福利:Linux 圣诞颂歌 + +这是给你的一份福利,给像我们一样的 Linux 爱好者的关于 Linux 的圣诞颂歌。 + +在 [《计算机世界》的一篇文章][51] 中,[Sandra Henry-Stocker][52] 分享了这些圣诞颂歌。摘录片段如下: + +这一段用的 [Chestnuts Roasting on an Open Fire][53] 的曲调: + +> Running merrily on open source +> +> With users happy as can be +> +> We’re using Linux and getting lots done + +> And happy everything is free + +这一段用的 [The Twelve Days of Christmas][54] 的曲调: + +> On my first day with Linux, my admin gave to me a password and a login ID +> +> On my second day with Linux my admin gave to me two new commands and a password and a login ID + +在 [这里][51] 阅读完整的颂歌。 + +Linux 快乐! + +------ + +via: https://itsfoss.com/christmas-linux-wallpaper/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[jlztan](https://github.com/jlztan) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/beautiful-linux-wallpapers/ +[2]: https://github.com/sergiolepore/ChristBASHTree +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/12/perl-tree.gif?resize=600%2C622&ssl=1 +[4]: https://itsfoss.com/christmas-linux-wallpaper/perl-tree/ +[5]: https://www.cyberciti.biz/open-source/command-line-hacks/linux-unix-desktop-fun-christmas-tree-for-your-terminal/ +[6]: http://www.klowner.com/ +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmas-linux-wallpaper-featured.jpeg?resize=800%2C450&ssl=1 +[8]: http://klowner.com/wallery/christmas_tux_2017/download/ChristmasTux2017_3840x2160.png +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2016_3840x2160_result.jpg?resize=800%2C450&ssl=1 +[10]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2016_3840x2160_result/ +[11]: http://www.klowner.com/wallpaper/christmas_tux_2016/ +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2015_2560x1920_result.jpg?resize=800%2C600&ssl=1 +[13]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2015_2560x1920_result/ +[14]: http://www.klowner.com/wallpaper/christmas_tux_2015/ +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2014_2560x1440_result.jpg?resize=800%2C450&ssl=1 +[16]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2014_2560x1440_result/ +[17]: http://www.klowner.com/wallpaper/christmas_tux_2014/ +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmastux2013_result.jpg?resize=800%2C450&ssl=1 +[19]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2013_result/ +[20]: http://www.klowner.com/wallpaper/christmas_tux_2013/ +[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2012_2560x1440_result.jpg?resize=800%2C450&ssl=1 +[22]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2012_2560x1440_result/ +[23]: http://www.klowner.com/wallpaper/christmas_tux_2012/ +[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmastux2011_2560x1440_result.jpg?resize=800%2C450&ssl=1 +[25]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2011_2560x1440_result/ +[26]: http://www.klowner.com/wallpaper/christmas_tux_2011/ +[27]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmastux2010_5120x2880_result.jpg?resize=800%2C450&ssl=1 +[28]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2010_5120x2880_result/ +[29]: http://www.klowner.com/wallpaper/christmas_tux_2010/ +[30]: 
https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2009_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[31]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2009_1600x1200_result/ +[32]: http://www.klowner.com/wallpaper/christmas_tux_2009/ +[33]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2008_2560x1600_result.jpg?resize=800%2C500&ssl=1 +[34]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2008_2560x1600_result/ +[35]: http://www.klowner.com/wallpaper/christmas_tux_2008/ +[36]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2007_2560x1600_result.jpg?resize=800%2C500&ssl=1 +[37]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2007_2560x1600_result/ +[38]: http://www.klowner.com/wallpaper/christmas_tux_2007/ +[39]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2006_1024x768_result.jpg?resize=800%2C600&ssl=1 +[40]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2006_1024x768_result/ +[41]: http://www.klowner.com/wallpaper/christmas_tux_2006/ +[42]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2005_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[43]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2005_1600x1200_result/ +[44]: http://www.klowner.com/wallpaper/christmas_tux_2005/ +[45]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2004_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[46]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2004_1600x1200_result/ +[47]: http://www.klowner.com/wallpaper/christmas_tux_2004/ +[48]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2002_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[49]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2002_1600x1200_result/ +[50]: http://www.klowner.com/wallpaper/christmas_tux_2002/ +[51]: http://www.computerworld.com/article/3151076/linux/merry-linux-to-you.html +[52]: https://twitter.com/bugfarm +[53]: https://www.youtube.com/watch?v=dhzxQCTCI3E +[54]: https://www.youtube.com/watch?v=oyEyMjdD2uk +[55]: https://itsfoss.com/gnome-shell-extensions/ +[56]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2018.jpeg?w=800&ssl=1 +[57]: http://www.klowner.com/wallery/christmas_tux_2018/download/ChristmasTux2018_4K_3840x2160.png diff --git a/published/201812/20171223 My personal Email setup - Notmuch, mbsync, postfix and dovecot.md b/published/201812/20171223 My personal Email setup - Notmuch, mbsync, postfix and dovecot.md new file mode 100644 index 0000000000..12f45713d4 --- /dev/null +++ b/published/201812/20171223 My personal Email setup - Notmuch, mbsync, postfix and dovecot.md @@ -0,0 +1,240 @@ +我的个人电子邮件系统设置:notmuch、mbsync、Postfix 和 dovecot +====== + +我使用个人电子邮件系统已经相当长的时间了,但是一直没有记录过文档。最近我换了我的笔记本电脑(职业变更导致的变动),我在试图重新创建本地邮件系统时迷茫了。所以这篇文章是一个给自己看的文档,这样我就不用费劲就能再次搭建出来。 + +### 服务器端 + +我运行自己的邮件服务器,并使用 Postfix 作为 SMTP 服务器,用 Dovecot 实现 IMAP。我不打算详细介绍如何配置这些设置,因为我的设置主要是通过使用 Jonas 为 Redpill 基础架构创建的脚本完成的。什么是 Redpill?(用 Jonas 自己的话说): + +> \ Redpill 是一个概念:一种设置 Debian hosts 去跨组织协作的方式 +> +> \ 我发展了这个概念,并将其首次用于 Redpill 网中网:redpill.dk,其中涉及到了我自己的网络(jones.dk),我的主要客户的网络(homebase.dk),一个包括 Skolelinux Germany(free-owl.de)的在德国的网络,和 Vasudev 的网络(copyninja.info) + +除此之外, 我还有一个 dovecot sieve 过滤,根据邮件的来源,对邮件进行高级分类,将其放到各种文件夹中。所有的规则都存在于每个有邮件地址的账户下的 `~/dovecot.sieve` 文件中。 + +再次,我不会详细介绍如何设置这些东西,因为这不是我这个帖子的目标。 + +### 在我的笔记本电脑上 + +在我的笔记本电脑上,我已经按照 4 个部分设置 + + 1. 邮件同步:使用 `mbsync` 命令完成 + 2. 分类:使用 notmuch 完成 + 3. 
阅读:使用 notmuch-emacs 完成 + 4. 邮件发送:使用作为中继服务器和 SMTP 客户端运行的 Postfix 完成。 + +### 邮件同步 + +邮件同步是使用 `mbsync` 工具完成的, 我以前是 OfflineIMAP 的用户,最近切换到 `mbsync`,因为我觉得它比 OfflineIMAP 的配置更轻量、更简单。该命令是由 isync 包提供的。 + +配置文件是 `~/.mbsyncrc`。下面是我的例子与一些个人设置。 + +``` +IMAPAccount copyninja +Host imap.copyninja.info +User vasudev +PassCmd "gpg -q --for-your-eyes-only --no-tty --exit-on-status-write-error --batch --passphrase-file ~/path/to/passphrase.txt -d ~/path/to/mailpass.gpg" +SSLType IMAPS +SSLVersion TLSv1.2 +CertificateFile /etc/ssl/certs/ca-certificates.crt + + +IMAPAccount gmail-kamathvasudev +Host imap.gmail.com +User kamathvasudev@gmail.com +PassCmd "gpg -q --for-your-eyes-only --no-tty --exit-on-status-write-error --batch --passphrase-file ~/path/to/passphrase.txt -d ~/path/to/mailpass.gpg" +SSLType IMAPS +SSLVersion TLSv1.2 +CertificateFile /etc/ssl/certs/ca-certificates.crt + +IMAPStore copyninja-remote +Account copyninja + +IMAPStore gmail-kamathvasudev-remote +Account gmail-kamathvasudev + +MaildirStore copyninja-local +Path ~/Mail/vasudev-copyninja.info/ +Inbox ~/Mail/vasudev-copyninja.info/INBOX + +MaildirStore gmail-kamathvasudev-local +Path ~/Mail/Gmail-1/ +Inbox ~/Mail/Gmail-1/INBOX + +Channel copyninja +Master :copyninja-remote: +Slave :copyninja-local: +Patterns * +Create Both +SyncState * +Sync All + +Channel gmail-kamathvasudev +Master :gmail-kamathvasudev-remote: +Slave :gmail-kamathvasudev-local: +# Exclude everything under the internal [Gmail] folder, except the interesting folders +Patterns * ![Gmail]* +Create Both +SyncState * +Sync All +``` + +对上述配置中的一些有趣部分进行一下说明。一个是 PassCmd,它允许你提供 shell 命令来获取帐户的密码。这样可以避免在配置文件中填写密码。我使用 gpg 的对称加密,并在我的磁盘上存储密码。这当然是由 Unix ACL 保护安全的。 + +实际上,我想使用我的公钥来加密文件,但当脚本在后台或通过 systemd 运行时,解锁文件看起来很困难 (或者说几乎不可能)。如果你有更好的建议,我洗耳恭听:-)。 + +下一个指令部分是 Patterns。这使你可以有选择地同步来自邮件服务器的邮件。这对我来说真的很有帮助,可以排除所有的 “[Gmail]/ folders” 垃圾目录。 + +### 邮件分类 + +一旦邮件到达你的本地设备,我们需要一种方法来轻松地在邮件读取器中读取邮件。我最初的设置使用本地 dovecot 实例提供同步的 Maildir,并在 Gnus 中阅读。这种设置相比于设置所有的服务器软件是有点大题小作,但 Gnus 无法很好地应付 Maildir 格式,这是最好的方法。这个设置也有一个缺点,那就是在你快速搜索邮件时,要搜索大量邮件。而这就是 notmuch 的用武之地。 + +notmuch 允许我轻松索引上千兆字节的邮件档案而找到我需要的东西。我已经创建了一个小脚本,它结合了执行 `mbsync` 和 `notmuch`。我使用 dovecot sieve 来基于实际上创建在服务器端的 Maildirs 标记邮件。下面是我的完整 shell 脚本,它执行同步分类和删除垃圾邮件的任务。 + +``` +#!/bin/sh + +MBSYNC=$(pgrep mbsync) +NOTMUCH=$(pgrep notmuch) + +if [ -n "$MBSYNC" -o -n "$NOTMUCH" ]; then + echo "Already running one instance of mail-sync. Exiting..." 
+ exit 0 +fi + +echo "Deleting messages tagged as *deleted*" +notmuch search --format=text0 --output=files tag:deleted |xargs -0 --no-run-if-empty rm -v + +echo "Moving spam to Spam folder" +notmuch search --format=text0 --output=files tag:Spam and \ + to:vasudev@copyninja.info | \ + xargs -0 -I {} --no-run-if-empty mv -v {} ~/Mail/vasudev-copyninja.info/Spam/cur +notmuch search --format=text0 --output=files tag:Spam and + to:vasudev-debian@copyninja.info | \ + xargs -0 -I {} --no-run-if-empty mv -v {} ~/Mail/vasudev-copyninja.info/Spam/cur + + +MDIR="vasudev-copyninja.info vasudev-debian Gmail-1" +mbsync -Va +notmuch new + +for mdir in $MDIR; do + echo "Processing $mdir" + for fdir in $(ls -d /home/vasudev/Mail/$mdir/*); do + if [ $(basename $fdir) != "INBOX" ]; then + echo "Tagging for $(basename $fdir)" + notmuch tag +$(basename $fdir) -inbox -- folder:$mdir/$(basename $fdir) + fi + done +done +``` + +因此,在运行 `mbsync` 之前,我搜索所有标记为“deleted”的邮件,并将其从系统中删除。接下来,我在我的帐户上查找标记为“Spam”的邮件,并将其移动到“Spam”文件夹。你没看错,这些邮件逃脱了垃圾邮件过滤器进入到我的收件箱,并被我亲自标记为垃圾邮件。 + +运行 `mbsync` 后,我基于它们的文件夹标记邮件(搜索字符串 `folder:`)。这让我可以很容易地得到一个邮件列表的内容,而不需要记住列表地址。 + +### 阅读邮件 + +现在,我们已经实现同步和分类邮件,是时候来设置阅读部分。我使用 notmuch-emacs 界面来阅读邮件。我使用 emacs 的 Spacemacs 风格,所以我花了一些时间写了一个私有层,它将我所有的快捷键和分类集中在一个地方,而不会扰乱我的整个 `.spacemacs` 文件。你可以在 [notmuch-emacs-layer 仓库][1] 找到我的私有层的代码。 + +### 发送邮件 + +能阅读邮件这还不够,我们也需要能够回复邮件。而这是最近是我感到迷茫的一个略显棘手的部分,以至于不得不写这篇文章,这样我就不会再忘记了。(当然也不必在网络上参考一些过时的帖子。) + +我的系统发送邮件使用 Postfix 作为 SMTP 客户端,使用我自己的 SMTP 服务器作为它的中继主机。中继的问题是,它不能是具有动态 IP 的主机。有两种方法可以允许具有动态 IP 的主机使用中继服务器, 一种是将邮件来源的 IP 地址放入 `my_network` 或第二个使用 SASL 身份验证。 + +我的首选方法是使用 SASL 身份验证。为此,我首先要为每台机器创建一个单独的账户,它将把邮件中继到我的主服务器上。想法是不使用我的主帐户 SASL 进行身份验证。(最初我使用的是主账户,但 Jonas 给出了可行的按账户的想法) + +``` +adduser _relay +``` + +这里替换 `` 为你的笔记本电脑的名称或任何你正在使用的设备。现在我们需要调整 Postfix 作为中继服务器。因此,在 Postfix 配置中添加以下行: + +``` +# SASL authentication +smtp_sasl_auth_enable = yes +smtp_tls_security_level = encrypt +smtp_sasl_tls_security_options = noanonymous +relayhost = [smtp.copyninja.info]:submission +smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd +``` + +因此, 这里的 `relayhost` 是用于将邮件转发到互联网的 Postfix 实例的服务器名称。`submission` 的部分 Postfix 将邮件转发到端口 587(安全端口)。`smtp_sasl_tls_security_options` 设置为不允许匿名连接。这是必须的,以便中继服务器信任你的移动主机,并同意为你转发邮件。 + +`/etc/postfix/sasl_passwd` 是你需要存储用于服务器 SASL 身份验证的帐户密码的文件。将以下内容放入其中。 + +``` +[smtp.example.com]:submission user:password +``` + +用你已放入 `relayhost` 配置的 SMTP 服务器名称替换 `smtp.example.com`。用你创建的 `_relay` 用户及其密码替换 `user` 和 `passwd`。 + +若要保护 `sasl_passwd` 文件,并为 Postfix 创建它的哈希文件,使用以下命令。 + +``` +chown root:root /etc/postfix/sasl_passwd +chmod 0600 /etc/postfix/sasl_passwd +postmap /etc/postfix/sasl_passwd +``` + +最后一条命令将创建 `/etc/postfix/sasl_passwd.db` 文件,它是你的文件的 `/etc/postfix/sasl_passwd` 的哈希文件,具有相同的所有者和权限。现在重新加载 Postfix,并使用 `mail` 命令检查邮件是否从你的系统中发出。 + +### Bonus 的部分 + +好吧,因为我有一个脚本创建以上结合了邮件的同步和分类。我继续创建了一个 systemd 计时器,以定期同步后台的邮件。就我而言,每 10 分钟一次。下面是 `mailsync.timer` 文件。 + +``` +[Unit] +Description=Check Mail Every 10 minutes +RefuseManualStart=no +RefuseManualStop=no + +[Timer] +Persistent=false +OnBootSec=5min +OnUnitActiveSec=10min +Unit=mailsync.service + +[Install] +WantedBy=default.target +``` + +下面是 mailsync.service 服务,这是 mailsync.timer 执行我们的脚本所需要的。 + +``` +[Unit] +Description=Check Mail +RefuseManualStart=no +RefuseManualStop=yes + +[Service] +Type=oneshot +ExecStart=/usr/local/bin/mail-sync +StandardOutput=syslog +StandardError=syslog +``` + +将这些文件置于 `/etc/systemd/user` 目录下并运行以下代码去开启它们: + +``` +systemctl enable --user mailsync.timer +systemctl enable --user mailsync.service +systemctl 
start --user mailsync.timer +``` + +这就是我从系统同步和发送邮件的方式。我从 Jonas Smedegaard 那里了解到了 afew,他审阅了这篇帖子。因此, 下一步, 我将尝试使用 afew 改进我的 notmuch 配置,当然还会有一个后续的帖子:-)。 + +-------------------------------------------------------------------------------- + +via: https://copyninja.info/blog/email_setup.html + +作者:[copyninja][a] +译者:[lixinyuxx](https://github.com/lixinyuxx) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://copyninja.info +[1]:https://source.copyninja.info/notmuch-emacs-layer.git/ diff --git a/published/201812/20180101 27 open solutions to everything in education.md b/published/201812/20180101 27 open solutions to everything in education.md new file mode 100644 index 0000000000..48a4f3fa3c --- /dev/null +++ b/published/201812/20180101 27 open solutions to everything in education.md @@ -0,0 +1,91 @@ +27 个全方位的开放式教育解决方案 +====== + +> 阅读这些 2017 年 Opensource.com 发布的开放如何改进教育和学习的好文章。 + +![27 open solutions to everything in education](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OpenEducationResources_520x292_cm.png?itok=9y4FGgRo) + +开放式理念 (从开源软件到开放硬件,再到开放原则) 正在改变教育的范式。因此,为了庆祝今年发生的一切,我收集了 2017 年(译注:本文原发布于 2018 年初)在 Opensource.com 上发表的 27 篇关于这个主题的最好的文章。我把它们分成明确的主题,而不是按人气来分类。而且,如果这 27 个故事不能满足你对教育方面开源信息的胃口,那就看看我们的合作文章吧 “[教育如何借助 Linux 和树莓派][30]”。 + +### 开放对每个人都有好处 + +1. [书评:《OPEN》探讨了开放性的广泛文化含义][1]:Scott Nesbitt 评价 David Price 的书 《OPEN》 ,该书探讨了 “开放” 不仅仅是技术转变的观点,而是 “我们未来将如何工作、生活和学习”。 +2. [通过开源技能快速开始您的职业生涯][2]: VM (Vicky) Brasseur 指出了如何借助学习开源在工作群体中脱颖而出。这个建议不仅仅是针对程序员的;设计师、作家、营销人员和其他创意专业人士也对开源的成功至关重要。 +3. [研究生学位可以让你跳槽到开源职位][3]:引用的研究表明会 Linux 技能会带来更高的薪水, Joshua Pearce 说对开源的熟练和研究生学位是无与伦比的职业技能组合。 +4. [彻底改变了宾夕法尼亚的学校文化的三种实践][4]:Charlie Reisinger 向我们展示了开放式实践是如何在宾夕法尼亚州的一个学区创造一种更具包容性、敏捷性和开放性的文化的。Charlie 说,这不仅仅是为了省钱;该区还受益于 “开放式领导原则,促进师生创新,帮助更好地吸引社区,创造一个更有活力和包容性的学习社区”。 +5. [使用开源工具促使学生进步的 15 种方法][5]:我写了开源是如何让学生自由探索、补拙和学习的,不管他们是在学习基本的数字化素养,还是通过有趣的项目来扩展这些技能。 +6. [开发人员有机会编写好的代码][6]:开源往往是对社会有益的项目的支柱。正如 Benetech Labs 副总裁 Ahn Bui 在这次采访中指出的那样:“建立开放数据标准是打破数据孤岛不可或缺的一步。这些开放标准将为互操作性提供基础,进而转化为更多的组织共同建设,往往更具成本效益。最终目标是以同样的成本甚至更低的成本为更多的人服务。” + +### 用于再融合和再利用的开放式教育资源 + +1. [学术教员可以和维基百科一起教学吗?][7]:Wiki Ed 的项目总监 LiAnna Davis 讨论开放式教育资源open educational resources (OER) ,如 Wiki Ed,是如何提供高质量且经济实惠的开源学习资源作为课堂教学工具。 +2. [书本内外?开放教育资源的状态][8]:Cable Green 是 Creative Common 开放教育主管,分享了高等教育中教育面貌是如何变化的,以及 Creative Common 正在采取哪些措施来促进教育。 +3. [急需符合标准的课程的学校系统找到了希望][9]:Karen Vaites 是 Open Up Resources 社区布道师和首席营销官,谈论了非营利组织努力为 K-12 学校提供开放的、标准一致的课程。 +4. [夏威夷大学如何解决当今高等教育的问题][10]:夏威夷大学 Manoa 分校的教育技术专家 Billy Meinke 表示,在大学课程中过渡到 ORE 将 “使教师能够控制他们教授的内容,我们预计这将为他们节省学生的费用。” +5. [开放式课程如何削减高等教育成本][11]:塞勒学院的教育总监 Devon Ritter 报告了塞勒学院是如何建立以公开许可内容为基础的大学学分课程,目的是使更多的人能够负担得起和获得高等教育。 +6. [开放教育资源运动在提速][12]:Alexis Clifton 是纽约州立大学的 OER 服务的执行董事,描述了纽约 800 万美元的投资如何刺激开放教育的增长,并使大学更实惠。 +7. [开放项目合作,从小学到大学教室][13]:来自杜克大学的 Aria F. Chernik 探索 OSPRI (开源教育学的研究与创新), 这是杜克大学和红帽的合作,旨在建立一个 21 世纪的,开放设计的 preK-12 学习生态系统。 +8. [Perma.cc 将阻止学术链接腐烂][14]::弗吉尼亚理工大学的 Phillip Young 写的关于 Perma.cc 的文章,这种一种“链接腐烂”的解决方案,在学术论文中的超链接随着时间的推移而消失或变化的概览很高。 +9. [开放教育:学生如何通过创建开放教科书来节省资金][15]:OER 先驱 Robin DeRosa 谈到 “引入公开许可教科书的自由,以及教育和学习应结合包容性生态系统,以增强公益的总体理念”。 + +### 课堂上的开源工具 + +1. [开源棋盘游戏如何拯救地球][16]:Joshua Pearce 写的关于拯救地球的一个棋盘游戏,这是一款让学生在玩乐和为创客社区做出贡献的同时解决环境问题的棋盘游戏。 +2. [一个教孩子们如何阅读的新 Android 应用程序][17]:Michael Hall 谈到了他在儿子被诊断为自闭症后为他开发的儿童识字应用 Phoenicia,以及良好编码的价值,和为什么用户测试比你想象的更重要。 +3. [8 个用于教育的开源 Android 应用程序][18]:Joshua Allen Holm 推荐了 8 个来自 F-Droid 软件库的开源应用,使我们可以将智能手机用作学习工具。 +4. 
[MATLA B的 3 种开源替代方案][19]:Jason Baker 更新了他 2016 年的开源数学计算软件调查报告,提供了 MATLAB 的替代方案,这是数学、物理科学、工程和经济学中几乎无处不在的昂贵的专用解决方案。 +5. [SVG 与教孩子编码有什么关系?][20]:退休工程师 Jay Nick 谈论他如何使用艺术作为一种创新的方式,向学生介绍编码。他在学校做志愿者,使用 SVG 来教授一种结合数学和艺术原理的编码方法。 +6. [5 个破灭的神话:在高等教育中使用开源][21]: 拥有德克萨斯理工大学美术博士学位的 Kyle Conway 分享他在一个由专有解决方案统治的世界中使用开源工具的经验。 Kyle 说有一种偏见,反对在计算机科学以外的学科中使用开源:“很多人认为非技术专业的学生不能使用 Linux,他们对在高级学位课程中使用 Linux 的人做出了很多假设……嗯,这是有可能的,我就是证明。” +7. [大学开源工具列表][22]:Aaron Cocker 概述了他在攻读计算机科学本科学位时使用的开源工具 (包括演示、备份和编程软件)。 +8. [5 个可帮助您学习优秀 KDE 应用程序][23]:Zsolt Szakács 提供五个 KDE 应用程序,可以帮助任何想要学习新技能或培养现有技能的人。 + +### 在教室编码 + +1. [如何尽早让下一代编码][24]:Bryson Payne 说我们需要在高中前教孩子们学会编码: 到了九年级,80% 的女孩和 60% 的男孩已经从 STEM 职业中自选。但他建议,这不仅仅是就业和缩小 IT 技能差距的问题。“教一个年轻人编写代码可能是你能给他们的最改变生活的技能。而且这不仅仅是一个职业提升者。编码是关于解决问题,它是关于创造力,更重要的是,它是提升能力”。 +2. [孩子们无法在没有计算机的情况下编码][25]:Patrick Masson 推出了 FLOSS 儿童桌面计划, 该计划教授服务不足学校的学生使用开源软件 (如 Linux、LibreOffice 和 GIMP) 重新利用较旧的计算机。该计划不仅为破旧或退役的硬件注入新的生命,还为学生提供了重要的技能,而且还为学生提供了可能转化为未来职业生涯的重要技能。 +3. [如今 Scratch 是否能像 80 年代的 LOGO 语言一样教孩子们编码?][26]:Anderson Silva 提出使用 [Scratch][27] 以激发孩子们对编程的兴趣,就像在 20 世纪 80 年代开始使用 LOGO 语言时一样。 +4. [通过这个拖放框架学习Android开发][28]:Eric Eslinger 介绍了 App Inventor,这是一个编程框架,用于构建 Android 应用程序使用可视块语言(类似 Scratch 或者 [Snap][29])。 + +在这一年里,我们了解到,教育领域的各个方面都有了开放的解决方案,我预计这一主题将在 2018 年及以后继续下去。在未来的一年里,你是否希望 Opensource.com 涵盖开放式的教育主题?如果是, 请在评论中分享你的想法。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/best-open-education + +作者:[Don Watkins][a] +译者:[lixinyuxx](https://github.com/lixinyuxx) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/don-watkins +[1]:https://opensource.com/article/17/7/book-review-open +[2]:https://opensource.com/article/17/8/jump-start-your-career +[3]:https://opensource.com/article/17/1/grad-school-open-source-academic-lab +[4]:https://opensource.com/article/17/7/open-school-leadership +[5]:https://opensource.com/article/17/7/empower-students-open-source-tools +[6]:https://opensource.com/article/17/3/interview-anh-bui-benetech-labs +[7]:https://opensource.com/article/17/1/Wiki-Education-Foundation +[8]:https://opensource.com/article/17/2/future-textbooks-cable-green-creative-commons +[9]:https://opensource.com/article/17/1/open-up-resources +[10]:https://opensource.com/article/17/2/interview-education-billy-meinke +[11]:https://opensource.com/article/17/7/college-alternatives +[12]:https://opensource.com/article/17/10/open-educational-resources-alexis-clifton +[13]:https://opensource.com/article/17/3/education-should-be-open-design +[14]:https://opensource.com/article/17/9/stop-link-rot-permacc +[15]:https://opensource.com/article/17/11/creating-open-textbooks +[16]:https://opensource.com/article/17/7/save-planet-board-game +[17]:https://opensource.com/article/17/4/phoenicia-education-software +[18]:https://opensource.com/article/17/8/8-open-source-android-apps-education +[19]:https://opensource.com/alternatives/matlab +[20]:https://opensource.com/article/17/5/coding-scalable-vector-graphics-make-steam +[21]:https://opensource.com/article/17/5/how-linux-higher-education +[22]:https://opensource.com/article/17/6/open-source-tools-university-student +[23]:https://opensource.com/article/17/6/kde-education-software +[24]:https://opensource.com/article/17/8/teach-kid-code-change-life +[25]:https://opensource.com/article/17/9/floss-desktops-kids +[26]:https://opensource.com/article/17/3/logo-scratch-teach-programming-kids 
+[27]:https://scratch.mit.edu/ +[28]:https://opensource.com/article/17/8/app-inventor-android-app-development +[29]:http://snap.berkeley.edu/ +[30]:https://opensource.com/article/17/12/best-opensourcecom-linux-and-raspberry-pi-education diff --git a/published/201812/20180104 How Creative Commons benefits artists and big business.md b/published/201812/20180104 How Creative Commons benefits artists and big business.md new file mode 100644 index 0000000000..aefc804479 --- /dev/null +++ b/published/201812/20180104 How Creative Commons benefits artists and big business.md @@ -0,0 +1,66 @@ +你所不知道的知识共享(CC) +====== + +> 知识共享为艺术家提供访问权限和原始素材。大公司也从中受益。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/CreativeCommons_ideas_520x292_1112JS.png?itok=otei0vKb) + +我毕业于电影学院,毕业后在一所电影学校教书,之后进入一家主流电影工作室,我一直在从事电影相关的工作。创意产业的方方面面面临着同一个问题:创作者需要原材料。有趣的是,自由文化运动提出了解决方案,具体来说是在自由文化运动中出现的知识共享Creative Commons组织。 + +### 知识共享能够为我们提供展示片段和小样 + +和其他事情一样,创造力也需要反复练习。幸运的是,在我刚开始接触电脑时,就在一本关于渲染工场的专业杂志中接触到了开源这个存在。当时我并不理解所谓的“开源”是什么,但我知道只有开源工具能帮助我在领域内稳定发展。对我来说,知识共享也是如此。知识共享可以为艺术家们提供充满丰富艺术资源的工作室。 + +我在电影学院任教时,经常需要给学生们准备练习编辑、录音、拟音、分级、评分的示例录像。在 Jim Munroe 的独立作品 [Infest Wisely][1] 中和 [Vimeo][2] 上的知识共享内容里我总能找到我想要的。这些逼真的镜头覆盖内容十分广泛,从独立制作到昂贵的高品质的升降镜头(一般都会用无人机代替)都有。 + +![](https://opensource.com/sites/default/files/u128651/bunny.png) + +对实验主义艺术来说,确有无尽可能。知识共享提供了丰富的素材,这些材料可以用来整合、混剪等等,可以满足一位视觉先锋能够想到的任何用途。 + +在接触知识共享之前,如果我想要使用写实镜头,如果在大学,只能用之前的学生和老师拍摄的或者直接使用版权库里的镜头,或者使用有受限的版权保护的镜头。 + +### 坚守版权的底线很重要 + +知识共享同样能够创造经济效益。在某大型计算机公司的渲染工场工作时,我负责在某些硬件设施上测试渲染的运行情况,而这个测试时刻面临着被搁置的风险。做这些测试时,我用的都是[大雄兔][3]的资源,因为这个电影和它的组件都是可以免费使用和分享的。如果没有这个小短片,在接触写实资源之前我都没法完成我的实验,因为对于一个计算机公司来说,雇佣一只 3D 艺术家来按需布景是不太现实的。 + +令我震惊的是,与开源类似,知识共享已经用我们难以想象的方式支撑起了大公司。知识共享的使用可能会也可能不会影响公司的日常流程,但它填补了不足,让工作流程顺利进行。我没见到谁在他们的书中将流畅工作归功于知识共享的应用,但它确实无处不在。 + +![](https://opensource.com/sites/default/files/u128651/sintel.png) + +我也见过一些开放版权的电影,比如[辛特尔][4],在最近的电视节目中播放了它的短片,电视的分辨率已经超过了标准媒体。 + +### 知识共享可以提供大量原材料 + +艺术家需要原材料。画家需要颜料、画笔和画布。雕塑家需要陶土和工具。数字内容编辑师需要数字内容,无论它是剪贴画还是音效或者是电子游戏里的现成的精灵。 + +数字媒介赋予了人们超能力,让一个人就能完成需要一组人员才能完成的工作。事实上,我们大部分都好高骛远。我们想做高大上的项目,想让我们的成果不论是视觉上还是听觉上都无与伦比。我们想塑造的是宏大的世界,紧张的情节,能引起共鸣的作品,但我们所拥有的时间精力和技能与之都不匹配,达不到想要的效果。 + +是知识共享再一次拯救了我们,在 [Freesound.org][5]、 [Openclipart.org][6]、 [OpenGameArt.org][7] 等等网站上都有大量的开放版权艺术材料。通过知识共享,艺术家可以使用各种他们自己没办法创造的原材料,来完成他们原本完不成的工作。 + +最神奇的是,不用自己投资,你放在网上给大家使用的原材料就能变成精美的作品,而这是你从没想过的。我在知识共享上面分享了很多音乐素材,它们现在用于无数的专辑和电子游戏里。有些人用了我的材料会通知我,有些是我自己发现的,所以这些材料的应用可能比我知道的还有多得多。有时我会偶然看到我亲手画的标志出现在我从没听说过的软件里。我见到过我为 [Opensource.com][8] 写的文章在别处发表,有的是论文的参考文献,白皮书或者参考资料中。 + +### 知识共享所代表的自由文化也是一种文化 + +“自由文化”这个说法过于累赘,文化,从概念上来说,是一个有机的整体。在这种文化中社会逐渐成长发展,从一个人到另一个。它是人与人之间的互动和思想交流。自由文化是自由缺失的现代世界里的特殊产物。 + +如果你也想对这样的局限进行反抗,想把你的思想、作品、你自己的文化分享给全世界的人,那么就来和我们一起,使用知识共享吧! 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/creative-commons-real-world + +作者:[Seth Kenlon][a] +译者:[Valoniakim](https://github.com/Valoniakim) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/seth +[1]:http://infestwisely.com +[2]:https://vimeo.com/creativecommons +[3]:https://peach.blender.org/ +[4]:https://durian.blender.org/ +[5]:http://freesound.org +[6]:http://openclipart.org +[7]:http://opengameart.org +[8]:https://opensource.com/ diff --git a/published/201812/20180128 Getting Linux Jobs.md b/published/201812/20180128 Getting Linux Jobs.md new file mode 100644 index 0000000000..9bfaf0e1e5 --- /dev/null +++ b/published/201812/20180128 Getting Linux Jobs.md @@ -0,0 +1,95 @@ +Linux 求职建议 +====== + +通过对招聘网站数据的仔细研究,我们发现,即使是非常有经验的 Linux 程序员,也会在面试中陷入困境。 + +这就导致了很多优秀并且有经验的人无缘无故地找不到合适的工作,因为如今的就业市场需要我们有一些手段来提高自己的竞争力。 + +我有两个同事和一个表哥,他们都有 RedHat 认证,管理过比较大的服务器机房,也都收到过前雇主的认真推荐。 + +可是,在他们应聘的时候,所有的这些证书、本身的能力、工作经验好像都没有起到任何作用,他们所面对的招聘广告是某人从技术词汇中临时挑选的一些“技能片段”所组成的。 + +现如今,礼貌变得过时了,**不回应**变成了发布招聘广告的公司的新沟通方式。 + +这同样也意味着大多公司的招聘或者人事可能会**错过**非常优秀的应聘者。 + +我之所以敢说的如此肯定,是因为现在招聘广告大多数看上去都非常的滑稽。 + +[Reallylinux.com][3] 另一位特约撰稿人 Walter ,发表过一篇关于 [招聘广告疯掉了][4] 的文章。 + +他说的也许是对的,可是我认为 Linux 工作应聘者可以通过注意招聘广告的**三个关键点**避免落入陷阱。 + +**首先**,很少会有 Linux 系统管理员的招聘广告只针对 Linux 有要求。 + +一定要注意很少有 Linux 系统管理员的职位是实际在服务器上跑 Linux的,反而,很多在搜索 “Linux 管理员” 得到的职位实际上是指大量的 *NX 操作系统的。 + +举个例子,有一则关于 **Linux 管理员** 的招聘广告: + +> 该职位需要为建立系统集成提供支持,尤其是 BSD 应用的系统安装... + +或者有一些其他的要求: + +> 有 Windows 系统管理经验的。 + +最为讽刺的是,如果你在应聘面试的时候表现出专注于 Linux 的话,你可能不会被聘用。 + +另外,如果你直接把 Linux 写在你的特长或者专业上,他们可能都不会仔细看你的简历,因为他们根本区分不了 UNIX、BSD、Linux。 + +最终的结果就是,如果你太老实,只在简历上写了 Linux,你可能会被直接过掉,但是如果你把 Linux 改成 UNIX/Linux 的话,可能会走得更远。 + +我有两个同事最后修改了他们的简历,然后获得了更好的面试机会,虽然依旧没有被聘用,因为大多数招聘广告其实已经内定人员了,这些招聘信息被放出来仅仅是为了表现出他们有招聘的想法。 + +**第二点**,公司里唯一在乎系统管理员职位的只有技术主管,其他人包括人事或管理层根本不关心这个。 + +我记得有一次开会旁听的时候,听见一个执行副总裁把服务器管理人员说成“一毛钱一打的人”,这种想法是多么的奇怪啊。 + +讽刺的是,等到邮件系统出故障,电话交换机连接时不时会断开,或者核心商业文件从企业内网中消失的时候,这些总裁又是最先打电话给系统管理员的。 + +或许如果他们不整天在电话留言中说那么多空话,或者不往邮件里塞满妻子的照片和旅行途中的照片的话,服务器可能就不会崩溃。 + +请注意,招聘 Linux 运维或者服务器管理员的广告被放出来是因为公司**技术层**认为有迫切的需求。你也不需要和人事或者公司高层聊什么,搞清楚谁是招聘的技术经理然后打电话给他们。 + +你需要直接联系他们因为“有些技术问题”是人事回答不了的,即使你只有 60 秒的时间可以和他们交流,你也必须抓住这个机会和真正有需求并且懂技术的人沟通。 + +那如果人事的漂亮 MM 不让你直接联系技术怎么办呢? 
+ +开始记得问人事一些技术性问题,比如说他们的 Linux 集群是如何建立的,它们运行在独立的虚拟机上吗?这些技术性的问题会让人事变得不耐烦,最后让你有机会问出“我能不能直接联系你们团队的技术人员”。 + +如果对方的回答是“应该可以”或者“稍后回复你”,那么他们可能已经在两周前就已经计划好了找一个人来填补这个空缺,比如说人事部员工的未婚夫。**他们只是不希望看起来太像裙带主义,而是带有一点利己主义的不确定主义。** + +所以一定要记得花点时间弄清楚到底谁是发布招聘广告的直接**技术**负责人,然后和他们聊一聊,这可能会让你少一番胡扯并且让你更有可能应聘成功。 + +**第三点**,现在的招聘广告很少有完全真实的内容了。 + +我以前见过一个招聘具有高级别专家也不会有的专门知识的初级系统管理员的广告,他们的计划是列出公司的发展计划蓝图,然后找到应聘者。 + +在这种情况下,你应聘 Linux 管理员职位应该提供几个关键性信息,例如工作经验和相关证书。 + +诀窍在于,用这些关键词尽量装点你的简历,以匹配他们的招聘信息,这样他们几乎不可能发现你缺失了哪个关键词。 + +这并不一定会让你成功找到一份工作,但它可以让你获得一次面试机会,这也算是一个巨大的进步。 + +通过理解和应用以上三点,或许可以让那些寻求 Linux 管理员工作的人能够比那些只有一线地狱机会的人领先一步。 + +即使这些建议不能让你马上得到面试机会,你也可以利用这些经验和意识去参加贸易展或公司主办的技术会议等活动。 + +我强烈建议你们也经常参加这种活动,尤其是当它们比较近的话,可以给你一个扩展人脉的机会。 + +请记住,如今的职业人脉已经失去了原来的意义了,现在只是可以用来获取“哪些公司实际上在招聘、哪些公司只是为了给股东带来增长的表象而在职位方面撒谎”的小道消息。 + + +-------------------------------------------------------------------------------- + +via: http://reallylinux.com/docs/gettinglinuxjobs.shtml + +作者:[Andrea W.Codingly][a] +译者:[Ryze-Borgia](https://github.com/Ryze-Borgia) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://reallylinux.com +[1]:http://www.reallylinux.com +[2]:http://reallylinux.com/docs/linuxrecessionproof.shtml +[3]:http://reallylinux.com +[4]:http://reallylinux.com/docs/wantadsmad.shtml diff --git a/published/201812/20180130 Graphics and music tools for game development.md b/published/201812/20180130 Graphics and music tools for game development.md new file mode 100644 index 0000000000..7e77e30d67 --- /dev/null +++ b/published/201812/20180130 Graphics and music tools for game development.md @@ -0,0 +1,179 @@ +用于游戏开发的图形和音乐工具 +====== +> 要在三天内打造一个可玩的游戏,你需要一些快速而稳定的好工具。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Life_opengame.png?itok=JPxruL3k) + +在十月初,我们的俱乐部马歇尔大学的 [Geeks and Gadgets][1] 参加了首次 [Open Jam][2],这是一个庆祝最佳开源工具的游戏 Jam。游戏 Jam 是一种活动,参与者以团队协作的方式来开发有趣的计算机游戏。Jam 一般都很短,仅有三天,并且非常累。Opensource.com 在八月下旬[发布了][3] Open Jam 活动,足有 [45 支游戏][4] 进入到了竞赛中。 + +我们的俱乐部希望在我们的项目中创建和使用开放源码软件,所以 Open Jam 自然是我们想要参与的 Jam 了。我们提交的游戏是一个实验性的游戏,名为 [Mark My Words][5]。我们使用了多种自由和开放源码 (FOSS) 工具来开发它;在这篇文章中,我们将讨论一些我们使用的工具和我们注意到可能有潜在阻碍的地方。 + +### 音频工具 + +#### MilkyTracker + +[MilkyTracker][6] 是一个可用于编曲老式视频游戏中的音乐的软件包。它是一种[音乐声道器][7]music tracker,是一个强大的 MOD 和 XM 文件创建器,带有基于特征网格的模式编辑器。在我们的游戏中,我们使用它来编曲大多数的音乐片段。这个程序最好的地方是,它比我们其它的大多数工具消耗更少的硬盘空间和内存。虽然如此,MilkyTracker 仍然非常强大。 + +![](https://opensource.com/sites/default/files/u128651/mtracker.png) + +其用户界面需要一会来习惯,这里有对一些想试用 MilkyTracker 的音乐家的一些提示: + + * 转到 “Config > Misc.” ,设置编辑模式的控制风格为 “MilkyTracker”,这将给你提供几乎全部现代键盘快捷方式。 + * 用 `Ctrl+Z` 撤销 + * 用 `Ctrl+Y` 重做 + * 用空格键切换模式编辑方式 + * 用退格键删除先前的音符 + * 用插入键来插入一行 + * 默认情况下,一个音符将持续作用,直到它在该频道上被替换。你可以明确地结束一个音符,通过使用一个反引号(`)键来插入一个 KeyOff 音符 + * 在你开始谱写乐曲前,你需要创建或查找采样。我们建议在诸如 [Freesound][9] 或 [ccMixter][10] 这样的网站上查找采用 [Creative Commons][8] 协议的采样, + +另外,把 [MilkyTracker 文档页面][11] 放在手边。它含有数不清的教程和手册的链接。一个好的起点是在该项目 wiki 上的 [MilkyTracker 指南][12]。 + +#### LMMS + +我们的两个音乐家使用多用途的现代音乐创建工具 [LMMS][13]。它带有一个绝妙的采样和效果库,以及多种多样的灵活的插件来生成独特的声音。LMMS 的学习曲线令人吃惊的低,在某种程度上是因为其好用的节拍/低音线编辑器。 + +![](https://opensource.com/sites/default/files/u128651/lmms_plugins.png) + +我们对于想试试 LMMS 的音乐家有一个建议:使用插件。对于 [chiptune][14]式音乐,我们推荐 [sfxr][15]、[BitInvader][16] 和 [FreeBoy][17]。对于其它风格,[ZynAddSubFX][18] 是一个好的选择。它配备了各种合成仪器,可以根据您的需要进行更改。 + +### 图形工具 + +#### Tiled + +在开放源码游戏开发中,[Tiled][19] 是一个流行的贴片地图编辑器。我们使用它为来为我们在游戏场景中组合连续的、复古式的背景。 + +![](https://opensource.com/sites/default/files/u128651/tiled.png) + +Tiled 可以导出地图为 
XML、JSON 或普通的图片。它是稳定的、跨平台的。 + +Tiled 的功能之一允许你在地图上定义和放置随意的游戏对象,例如硬币和提升道具,但在 jam 期间我们没有使用它。你需要做的全部是以贴片集的方式加载对象的图像,然后使用“插入平铺”来放置它们。 + +一般来说,对于需要一个地图编辑器的项目,Tiled 是我们所推荐的软件中一个不可或缺的部分。 + +#### Piskel + +[Piskel][20] 是一个像素艺术编辑器,它的源文件代码以 [Apache 2.0 协议][21] 发布。在这次 Jam 期间,们的大多数的图像资源都使用 Piskel 来处理,我们当然也将在未来的工程中使用它。 + +在这个 Jam 期间,Piskel 极大地帮助我们的两个功能是洋葱皮Onion skin精灵序列图spritesheet导出。 + +##### 洋葱皮 + +洋葱皮功能将使 Piskel 以虚影显示你编辑的动画的前一帧和后一帧的,像这样: + +![](https://opensource.com/sites/default/files/u128651/onionshow.gif) + +洋葱皮是很方便的,因为它适合作为一个绘制指引和帮助你在整个动画进程中保持角色的一致形状和体积。 要启用它,只需单击屏幕右上角预览窗口下方的洋葱形图标即可。 + +![](https://opensource.com/sites/default/files/u128651/onionenable.png) + +##### 精灵序列图导出 + +Piskel 将动画导出为精灵序列图的能力也非常有用。精灵序列图是一个包含动画所有帧的光栅图像。例如,这是我们从 Piskel 导出的精灵序列图: + +![](https://opensource.com/sites/default/files/u128651/sprite-artist.png) + +该精灵序列图包含两帧。一帧位于图像的上半部分,另一帧位于图像的下半部分。精灵序列图通过从单个文件加载整个动画,大大简化了游戏的代码。这是上面精灵序列图的动画版本: + +![](https://opensource.com/sites/default/files/u128651/sprite-artist-anim.gif) + +##### Unpiskel.py + +在 Jam 期间,我们很多次想批量转换 Piskel 文件到 PNG 文件。由于 Piskel 文件格式基于 JSON,我们写一个基于 GPLv3 协议的名为 [unpiskel.py][22] 的 Python 小脚本来做转换。 + +它像这样被调用的: + +``` +python unpiskel.py input.piskel +``` + +这个脚本将从一个 Piskel 文件(这里是 `input.piskel`)中提取 PNG 数据帧和图层,并将它们各自存储。这些文件采用模式 `NAME_XX_YY.png` 命名,在这里 `NAME` 是 Piskel 文件的缩减名称,`XX` 是帧的编号,`YY` 是层的编号。 + +因为脚本可以从一个 shell 中调用,它可以用在整个文件列表中。 + +``` +for f in *.piskel; do python unpiskel.py "$f"; done +``` + +### Python、Pygame 和 cx_Freeze + +#### Python 和 Pygame + +我们使用 [Python][23] 语言来制作我们的游戏。它是一个脚本语言,通常被用于文本处理和桌面应用程序开发。它也可以用于游戏开发,例如像 [Angry Drunken Dwarves][24] 和 [Ren'Py][25] 这样的项目所展示的。这两个项目都使用一个称为 [Pygame][26] 的 Python 库来显示图形和产生声音,所以我们也决定在 Open Jam 中使用这个库。 + +Pygame 被证明是既稳定又富有特色,并且它对我们创建的街机式游戏来说是很棒的。在低分辨率时,库的速度足够快的,但是在高分辨率时,它只用 CPU 的渲染开始变慢。这是因为 Pygame 不使用硬件加速渲染。然而,开发者可以充分利用 OpenGL 基础设施的优势。 + +如果你正在寻找一个好的 2D 游戏编程库,Pygame 是值得密切注意的一个。它的网站有 [一个好的教程][27] 可以作为起步。务必看看它! + +#### cx_Freeze + +准备发行我们的游戏是有趣的。我们知道,Windows 用户不喜欢装一套 Python,并且要求他们来安装它可能很过分。除此之外,他们也可能必须安装 Pygame,在 Windows 上,这不是一个简单的工作。 + +很显然:我们必须放置我们的游戏到一个更方便的格式中。很多其他的 Open Jam 参与者使用专有的游戏引擎 Unity,它能够使他们的游戏在网页浏览器中来玩。这使得它们非常方便地来玩。便利性是一个我们的游戏中根本不存在的东西。但是,感谢生机勃勃的 Python 生态系统,我们有选择。已有的工具可以帮助 Python 程序员将他们的游戏做成 Windows 上的发布版本。我们考虑过的两个工具是 [cx_Freeze][28] 和 [Pygame2exe][29](它使用 [py2exe][30])。我们最终决定用 cx_Freeze,因为它是跨平台的。 + +在 cx_Freeze 中,你可以把一个单脚本游戏打包成发布版本,只要在 shell 中运行一个命令,像这样: + +``` +cxfreeze main.py --target-dir dist +``` + +`cxfreeze` 的这个调用将把你的脚本(这里是 `main.py`)和在你系统上的 Python 解释器捆绑到到 `dist` 目录。一旦完成,你需要做的是手动复制你的游戏的数据文件到 `dist` 目录。你将看到,`dist` 目录包含一个可以运行来开始你的游戏的可执行文件。 + +这里有使用 cx_Freeze 的更复杂的方法,允许你自动地复制数据文件,但是我们发现简单的调用 `cxfreeze` 足够满足我们的需要。感谢这个工具,我们使我们的游戏玩起来更便利一些。 + +### 庆祝开源 + +Open Jam 是庆祝开源模式的软件开发的重要活动。这是一个分析开源工具的当前状态和我们在未来工作中需求的一个机会。对于游戏开发者探求其工具的使用极限,学习未来游戏开发所必须改进的地方,游戏 Jam 或许是最好的时机。 + +开源工具使人们能够在不损害自由的情况下探索自己的创造力,而无需预先投入资金。虽然我们可能不会成为专业的游戏开发者,但我们仍然能够通过我们的简短的实验性游戏 [Mark My Words][5] 获得一点点体验。它是一个以语言学为主题的游戏,描绘了虚构的书写系统在其历史中的演变。还有很多其他不错的作品提交给了 Open Jam,它们都值得一试。 真的,[去看看][31]! + +在本文结束前,我们想要感谢所有的 [参加俱乐部的成员][32],使得这次经历真正的有价值。我们也想要感谢 [Michael Clayton][33]、[Jared Sprague][34] 和 [Opensource.com][35] 主办 Open Jam。简直酷毙了。 + +现在,我们对读者提出了一些问题。你是一个 FOSS 游戏开发者吗?你选择的工具是什么?务必在下面留下一个评论! 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/graphics-music-tools-game-dev + +作者:[Charlie Murphy][a] +译者:[robsean](https://github.com/robsean) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/rsg167 +[1]:http://mugeeks.org/ +[2]:https://itch.io/jam/open-jam-1 +[3]:https://opensource.com/article/17/8/open-jam-announcement +[4]:https://opensource.com/article/17/11/open-jam +[5]:https://mugeeksalpha.itch.io/mark-omy-words +[6]:http://milkytracker.titandemo.org/ +[7]:https://en.wikipedia.org/wiki/Music_tracker +[8]:https://creativecommons.org/ +[9]:https://freesound.org/ +[10]:http://ccmixter.org/view/media/home +[11]:http://milkytracker.titandemo.org/documentation/ +[12]:https://github.com/milkytracker/MilkyTracker/wiki/MilkyTracker-Guide +[13]:https://lmms.io/ +[14]:https://en.wikipedia.org/wiki/Chiptune +[15]:https://github.com/grimfang4/sfxr +[16]:https://lmms.io/wiki/index.php?title=BitInvader +[17]:https://lmms.io/wiki/index.php?title=FreeBoy +[18]:http://zynaddsubfx.sourceforge.net/ +[19]:http://www.mapeditor.org/ +[20]:https://www.piskelapp.com/ +[21]:https://github.com/piskelapp/piskel/blob/master/LICENSE +[22]:https://raw.githubusercontent.com/MUGeeksandGadgets/MarkMyWords/master/tools/unpiskel.py +[23]:https://www.python.org/ +[24]:https://www.sacredchao.net/~piman/angrydd/ +[25]:https://renpy.org/ +[26]:https://www.Pygame.org/ +[27]:http://Pygame.org/docs/tut/PygameIntro.html +[28]:https://anthony-tuininga.github.io/cx_Freeze/ +[29]:https://Pygame.org/wiki/Pygame2exe +[30]:http://www.py2exe.org/ +[31]:https://itch.io/jam/open-jam-1/entries +[32]:https://github.com/MUGeeksandGadgets/MarkMyWords/blob/3e1e8aed12ebe13acccf0d87b06d4f3bd124b9db/README.md#credits +[33]:https://twitter.com/mwcz +[34]:https://twitter.com/caramelcode +[35]:https://opensource.com/ diff --git a/published/201812/20180131 For your first HTML code lets help Batman write a love letter.md b/published/201812/20180131 For your first HTML code lets help Batman write a love letter.md new file mode 100644 index 0000000000..4272904f5c --- /dev/null +++ b/published/201812/20180131 For your first HTML code lets help Batman write a love letter.md @@ -0,0 +1,869 @@ +编写你的第一行 HTML 代码,来帮助蝙蝠侠写一封情书 +====== + +![](https://cdn-images-1.medium.com/max/1000/1*kZxbQJTdb4jn_frfqpRg9g.jpeg) + +在一个美好的夜晚,你的肚子拒绝消化你在晚餐吃的大块披萨,所以你不得不在睡梦中冲进洗手间。 + +在浴室里,当你在思考为什么会发生这种情况时,你听到一个来自通风口的低沉声音:“嘿,我是蝙蝠侠。” + +这时,你会怎么做呢? 
+ +在你恐慌并处于关键时刻之前,蝙蝠侠说:“我需要你的帮助。我是一个超级极客,但我不懂 HTML。我需要用 HTML 写一封情书,你愿意帮助我吗?” + +谁会拒绝蝙蝠侠的请求呢,对吧?所以让我们用 HTML 来写一封蝙蝠侠的情书。 + +### 你的第一个 HTML 文件 + +HTML 网页与你电脑上的其它文件一样。就同一个 .doc 文件以 MS Word 打开,.jpg 文件在图像查看器中打开一样,一个 .html 文件在浏览器中打开。 + +那么,让我们来创建一个 .html 文件。你可以在 Notepad 或其它任何编辑器中完成此任务,但我建议使用 VS Code。[在这里下载并安装 VS Code][2]。它是免费的,也是我唯一喜欢的微软产品。 + +在系统中创建一个目录,将其命名为 “HTML Practice”(不带引号)。在这个目录中,再创建一个名为 “Batman's Love Letter”(不带引号)的目录,这将是我们的项目根目录。这意味着我们所有与这个项目相关的文件都会在这里。 + +打开 VS Code,按下 `ctrl+n` 创建一个新文件,按下 `ctrl+s` 保存文件。切换到 “Batman's Love Letter” 文件夹并将其命名为 “loveletter.html”,然后单击保存。 + +现在,如果你在文件资源管理器中双击它,它将在你的默认浏览器中打开。我建议使用 Firefox 来进行 web 开发,但 Chrome 也可以。 + +让我们将这个过程与我们已经熟悉的东西联系起来。还记得你第一次拿到电脑吗?我做的第一件事是打开 MS Paint 并绘制一些东西。你在 Paint 中绘制一些东西并将其另存为图像,然后你可以在图像查看器中查看该图像。之后,如果要再次编辑该图像,你在 Paint 中重新打开它,编辑并保存它。 + +我们目前的流程非常相似。正如我们使用 Paint 创建和编辑图像一样,我们使用 VS Code 来创建和编辑 HTML 文件。就像我们使用图像查看器查看图像一样,我们使用浏览器来查看我们的 HTML 页面。 + +### HTML 中的段落 + +我们有一个空的 HTML 文件,以下是蝙蝠侠想在他的情书中写的第一段。 + +“After all the battles we fought together, after all the difficult times we saw together, and after all the good and bad moments we’ve been through, I think it’s time I let you know how I feel about you.” + +复制这些到 VS Code 中的 loveletter.html。单击 “View -> Toggle Word Wrap (alt+z)” 自动换行。 + +保存并在浏览器中打开它。如果它已经打开,单击浏览器中的刷新按钮。 + +瞧!那是你的第一个网页! + +我们的第一段已准备就绪,但这不是在 HTML 中编写段落的推荐方法。我们有一种特定的方法让浏览器知道一个文本是一个段落。 + +如果你用 `

` 和 `

` 来包裹文本,那么浏览器将识别 `

` 和 `

` 中的文本是一个段落。我们这样做: + +``` +

After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.

+``` + +通过在 `

` 和 `

`中编写段落,你创建了一个 HTML 元素。一个网页就是 HTML 元素的集合。 + +让我们首先来认识一些术语:`

` 是开始标签,`

` 是结束标签,“p” 是标签名称。元素开始和结束标签之间的文本是元素的内容。 + +### “style” 属性 + +在上面,你将看到文本覆盖屏幕的整个宽度。 + +我们不希望这样。没有人想要阅读这么长的行。让我们设定段落宽度为 550px。 + +我们可以通过使用元素的 `style` 属性来实现。你可以在其 `style` 属性中定义元素的样式(例如,在我们的示例中为宽度)。以下行将在 `p` 元素上创建一个空样式属性: + +``` +

...

+``` + +你看到那个空的 `""` 了吗?这就是我们定义元素外观的地方。现在我们要将宽度设置为 550px。我们这样做: + +``` +

+ After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

+``` + +我们将 `width` 属性设置为 `550px`,用冒号 `:` 分隔,以分号 `;` 结束。 + +另外,注意我们如何将 `

` 和 `

` 放在单独的行中,文本内容用一个制表符缩进。像这样设置代码使其更具可读性。 + +### HTML 中的列表 + +接下来,蝙蝠侠希望列出他所钦佩的人的一些优点,例如: + +``` +You complete my darkness with your light. I love: +- the way you see good in the worst things +- the way you handle emotionally difficult situations +- the way you look at Justice +I have learned a lot from you. You have occupied a special place in my heart over time. +``` + +这看起来很简单。 + +让我们继续,在 `

` 下面复制所需的文本: + +``` +

+ After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

+

+ You complete my darkness with your light. I love: + - the way you see good in the worse + - the way you handle emotionally difficult situations + - the way you look at Justice + I have learned a lot from you. You have occupied a special place in my heart over the time. +

+``` + +保存并刷新浏览器。 + +![](https://cdn-images-1.medium.com/max/1000/1*M0Ae5ZpRTucNyucfaaz4uw.jpeg) + +哇!这里发生了什么,我们的列表在哪里? + +如果你仔细观察,你会发现没有显示换行符。在代码中我们在新的一行中编写列表项,但这些项在浏览器中显示在一行中。 + +如果你想在 HTML(新行)中插入换行符,你必须使用 `
`。让我们来使用 `
`,看看它长什么样: + +``` +

+ After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

+

+ You complete my darkness with your light. I love:
+ - the way you see good in the worse
+ - the way you handle emotionally difficult situations
+ - the way you look at Justice
+ I have learned a lot from you. You have occupied a special place in my heart over the time. +

+``` + +保存并刷新: + +![](https://cdn-images-1.medium.com/max/1000/1*Mj4Sr_jUliidxFpEtu0pXw.jpeg) + +好的,现在它看起来就像我们想要的那样! + +另外,注意我们没有写一个 `
`。有些标签不需要结束标签(它们被称为自闭合标签)。 + +还有一件事:我们没有在两个段落之间使用 `
`,但第二个段落仍然是从一个新行开始,这是因为 `

` 元素会自动插入换行符。 + +我们使用纯文本编写列表,但是有两个标签可以供我们使用来达到相同的目的:`

    ` and `
  • `。 + +让我们解释一下名字的意思:ul 代表无序列表Unordered List,li 代表列表项目List Item。让我们使用它们来展示我们的列表: + +``` +

    + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

    +``` + +``` +

    + You complete my darkness with your light. I love: +

      +
    • the way you see good in the worse
    • +
    • the way you handle emotionally difficult situations
    • +
    • the way you look at Justice
    • +
    + I have learned a lot from you. You have occupied a special place in my heart over the time. +

    +``` + +在复制代码之前,注意差异部分: + +* 我们删除了所有的 `
    `,因为每个 `
  • ` 会自动显示在新行中 +* 我们将每个列表项包含在 `
  • ` 和 `
  • ` 之间 +* 我们将所有列表项的集合包裹在 `
      ` 和 `
    ` 之间 +* 我们没有像 `

    ` 元素那样定义 `

      ` 元素的宽度。这是因为 `
        ` 是 `

        ` 的子节点,`

        ` 已经被约束到 550px,所以 `

          ` 不会超出这个范围。 + +让我们保存文件并刷新浏览器以查看结果: + +![](https://cdn-images-1.medium.com/max/1000/1*aPlMpYVZESPwgUO3Iv-qCA.jpeg) + +你会立即注意到在每个列表项之前显示了重点标志。我们现在不需要在每个列表项之前写 “-”。 + +经过仔细检查,你会注意到最后一行超出 550px 宽度。这是为什么?因为 HTML 不允许 `
            ` 元素出现在 `

            ` 元素中。让我们将第一行和最后一行放在单独的 `

            ` 元素中: + +``` +

            + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

            +``` + +``` +

            + You complete my darkness with your light. I love: +

            +``` + +``` +
              +
            • the way you see good in the worse
            • +
            • the way you handle emotionally difficult situations
            • +
            • the way you look at Justice
            • +
            +``` + +``` +

            + I have learned a lot from you. You have occupied a special place in my heart over the time. +

            +``` + +保存并刷新。 + +注意,这次我们还定义了 `
              ` 元素的宽度。那是因为我们现在已经将 `
                ` 元素放在了 `

                ` 元素之外。 + +定义情书中所有元素的宽度会变得很麻烦。我们有一个特定的元素用于此目的:`

                ` 元素。一个 `
                ` 元素就是一个通用容器,用于对内容进行分组,以便轻松设置样式。 + +让我们用 `
                ` 元素包装整个情书,并为其赋予宽度:550px 。 + +``` +
                +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +
                +``` + +棒极了,我们的代码现在看起来简洁多了。 + +### HTML 中的标题 + +到目前为止,蝙蝠侠对结果很高兴,他希望在情书上标题。他想写一个标题: “Bat Letter”。当然,你已经看到这个名字了,不是吗?:D + +你可以使用 `

                `、`

                `、`

                `、`

                `、`

                ` 和 `
                ` 标签来添加标题,`

                ` 是最大的标题和最主要的标题,`

                ` 是最小的标题。 + +![](https://cdn-images-1.medium.com/max/1000/1*Ud-NzfT-SrMgur1WX4LCkQ.jpeg) + +让我们在第二段之前使用 `

                ` 做主标题和一个副标题: + +``` +
                +

                Bat Letter

                +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +
                +``` + +保存,刷新。 + +![](https://cdn-images-1.medium.com/max/1000/1*rzyIl-gHug3nQChqfscU3w.jpeg) + +### HTML 中的图像 + +我们的情书尚未完成,但在继续之前,缺少一件大事:蝙蝠侠标志。你见过是蝙蝠侠的东西但没有蝙蝠侠的标志吗? + +并没有。 + +所以,让我们在情书中添加一个蝙蝠侠标志。 + +在 HTML 中包含图像就像在一个 Word 文件中包含图像一样。在 MS Word 中,你到 “菜单 -> 插入 -> 图像 -> 然后导航到图像位置为止 -> 选择图像 -> 单击插入”。 + +在 HTML 中,我们使用 `` 标签让浏览器知道我们需要加载的图像,而不是单击菜单。我们在 `src` 属性中写入文件的位置和名称。如果图像在项目根目录中,我们可以简单地在 `src` 属性中写入图像文件的名称。 + +在我们深入编码之前,从[这里][3]下载蝙蝠侠标志。你可能希望裁剪图像中的额外空白区域。复制项目根目录中的图像并将其重命名为 “bat-logo.jpeg”。 + +``` +
```

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +
```

我们在第 3 行包含了 `<img>` 标签。这个标签也是一个自闭合的标签,所以我们不需要写 `</img>`。在 `src` 属性中,我们给出了图像文件的名称。这个名称应与图像文件名完全相同,包括扩展名(.jpeg)及其大小写。

保存并刷新,查看结果。

![](https://cdn-images-1.medium.com/max/1000/1*uMNWAISOACJlzDOONcrGXw.jpeg)

该死的!刚刚发生了什么?

当使用 `<img>` 标签包含图像时,默认情况下,图像会以其原始分辨率显示。在我们的例子中,图像比 550px 宽得多。
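补充一个示意:除了接下来要用的百分比,你也可以直接给 `style` 属性一个固定的像素宽度(下面的写法只是举例,并非原文的做法):

```
<img src="bat-logo.jpeg" style="width:550px;">
```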
让我们使用 `style` 属性定义它的宽度:

```

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +
                +``` + +你会注意到,这次我们定义宽度使用了 “%” 而不是 “px”。当我们在 “%” 中定义宽度时,它将占据父元素宽度的百分比。因此,100% 的 550px 将为我们提供 550px。 + +保存并刷新,查看结果。 + +![](https://cdn-images-1.medium.com/max/1000/1*5c0ngx3BFVlyyP6UNtfYyg.jpeg) + +太棒了!这让蝙蝠侠的脸露出了羞涩的微笑 :)。 + +### HTML 中的粗体和斜体 + +现在蝙蝠侠想在最后几段中承认他的爱。他有以下文本供你用 HTML 编写: + +“I have a confession to make + +It feels like my chest _does_ have a heart. You make my heart beat. Your smile brings a smile to my face, your pain brings pain to my heart. + +I don’t show my emotions, but I think this man behind the mask is falling for you.” + +当阅读到这里时,你会问蝙蝠侠:“等等,这是给谁的?”蝙蝠侠说: + +“这是给超人的。” + +![](https://cdn-images-1.medium.com/max/1000/1*UNDvfIZQJ1Q_goHc-F-IPA.jpeg) + +你说:哦!我还以为是给神奇女侠的呢。 + +蝙蝠侠说:不,这是给超人的,请在最后写上 “I love you Superman.”。 + +好的,我们来写: + + +``` +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
```

这封信差不多完成了,蝙蝠侠还想再做两处改动:他希望最后一段第一句中的 “does” 一词以斜体显示,而 “I love you Superman” 这句话以粗体显示。

我们使用 `<i>` 和 `<b>` 标签来显示斜体和粗体文本。
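顺带一提(信里用的就是 `<i>` 和 `<b>`):HTML 还提供了更语义化的 `<em>`(强调)和 `<strong>`(重要)标签,它们在浏览器里通常同样呈现为斜体和粗体,例如(仅作示意):

```
<p>It feels like my chest <em>does</em> have a heart. <strong>I love you Superman.</strong></p>
```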
让我们来更新这些更改:

```

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
```

![](https://cdn-images-1.medium.com/max/1000/1*6hZdQJglbHUcEEHzouk2eA.jpeg)

### HTML 中的样式

你可以通过三种方式设置样式,也就是定义 HTML 元素的外观(三种写法的对比示意见下面的代码):

* 内联样式:我们使用元素的 `style` 属性来编写样式。这是我们迄今为止使用的方式,但这并不是一个好的实践。
* 嵌入式样式:我们把所有样式写在一个由 `<style></style>` 包裹的 “style” 元素中。
* 链接样式表:我们在一个以 .css 为扩展名的单独文件中编写所有元素的样式,这个文件称为样式表。
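下面用同一条 “宽度为 550px” 的规则做一个对比示意(元素和文件名都是假设的),展示这三种写法分别是什么样子:

```
<!-- 内联样式:写在元素自己的 style 属性里 -->
<div style="width:550px;">...</div>

<!-- 嵌入式样式:写在页面的 <style> 元素里 -->
<style>
  div{
    width:550px;
  }
</style>

<!-- 链接样式表:规则写在单独的 style.css 文件里,再用 <link> 引入 -->
<link rel="stylesheet" type="text/css" href="style.css">
```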
先来看看如何给 `<div>` 定义内联样式:

```
<div style="width:550px;">
```

我们可以在 `<style>` 里面写同样的样式:

```
div{
  width:550px;
}
```

在嵌入式样式中,我们编写的样式和元素本身是分开的,所以需要一种方法把元素和它的样式关联起来。第一个单词 “div” 起的就是这个作用:它让浏览器知道花括号 `{...}` 里面的所有样式都属于 “div” 元素。由于这种语法确定了样式要应用到哪个元素上,所以它被称为选择器。

我们编写样式的方式保持不变:属性(`width`)和值(`550px`)用冒号(`:`)分隔,以分号(`;`)结尾。

让我们把内联样式从 `<div>` 和 `<img>` 元素中删除,改写到 `<style>` 元素里:

```
<style>
  div{
    width:550px;
  }
  img{
    width:100%;
  }
</style>

<div>

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                +``` + +保存并刷新,结果应保持不变。 + +但是有一个大问题,如果我们的 HTML 文件中有多个 `
                ` 和 `` 元素该怎么办?这样我们在 ` +``` + +``` +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +``` + +``` +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                +``` + +HTML 已经准备好了嵌入式样式。 + +但是,你可以看到,随着我们包含越来越多的样式,`` 将变得很大。这可能很快会混乱我们的主 HTML 文件。 + +因此,让我们更进一步,通过将 ``。 + +我们需要使用 HTML 文件中的 `` 标签来将新创建的 CSS 文件链接到 HTML 文件。以下是我们如何做到这一点: + +``` + +``` + +我们使用 `` 元素在 HTML 文档中包含外部资源,它主要用于链接样式表。我们使用的三个属性是: + +* `rel`:关系。链接文件与文档的关系。具有 .css 扩展名的文件称为样式表,因此我们保留 rel=“stylesheet”。 +* `type`:链接文件的类型;对于一个 CSS 文件来说它是 “text/css”。 +* `href`:超文本参考。链接文件的位置。 + +link 元素的结尾没有 ``。因此,`` 也是一个自闭合的标签。 + +``` + +``` + +如果只是得到一个女朋友,那么很容易:D + +可惜没有那么简单,让我们继续前进。 + +这是我们 “loveletter.html” 的内容: + +``` + +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                +``` + +“style.css” 内容: + +``` +#letter-container{ + width:550px; +} +#header-bat-logo{ + width:100%; +} +``` + +保存文件并刷新,浏览器中的输出应保持不变。 + +### 一些手续 + +我们的情书已经准备好给蝙蝠侠,但还有一些正式的片段。 + +与其他任何编程语言一样,HTML 自出生以来(1990 年)经历过许多版本,当前版本是 HTML5。 + +那么,浏览器如何知道你使用哪个版本的 HTML 来编写页面呢?要告诉浏览器你正在使用 HTML5,你需要在页面顶部包含 ``。对于旧版本的 HTML,这行不同,但你不需要了解它们,因为我们不再使用它们了。 + +此外,在之前的 HTML 版本中,我们曾经将整个文档封装在 `` 标签内。整个文件分为两个主要部分:头部在 `` 里面,主体在 `` 里面。这在 HTML5 中不是必须的,但由于兼容性原因,我们仍然这样做。让我们用 ``, ``、 `` 和 `` 更新我们的代码: + +``` + + + + + + +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                + + +``` + +主要内容在 `` 里面,元信息在 `` 里面。所以我们把 `
                ` 保存在 `` 里面并加载 `` 里面的样式表。 + +保存并刷新,你的 HTML 页面应显示与之前相同的内容。 + +### HTML 的标题 + +我发誓,这是最后一次改变。 + +你可能已经注意到选项卡的标题正在显示 HTML 文件的路径: + +![](https://cdn-images-1.medium.com/max/1000/1*PASKm4ji29hbcZXVSP8afg.jpeg) + +我们可以使用 `` 标签来定义 HTML 文件的标题。标题标签也像链接标签一样在 `<head>` 内部。让我们我们在标题中加上 “Bat Letter”: + +``` +<!DOCTYPE html> +<html> +<head> + <title>Bat Letter + + + +
                +

                Bat Letter

                + +

                + After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. +

                +

                You are the light of my life

                +

                + You complete my darkness with your light. I love: +

                +
                  +
                • the way you see good in the worse
                • +
                • the way you handle emotionally difficult situations
                • +
                • the way you look at Justice
                • +
                +

                + I have learned a lot from you. You have occupied a special place in my heart over the time. +

                +

                I have a confession to make

                +

                + It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. +

                +

                + I don't show my emotions, but I think this man behind the mask is falling for you. +

                +

                I love you Superman.

                +

                + Your not-so-secret-lover,
                + Batman +

                +
                + + +``` + +保存并刷新,你将看到在选项卡上显示的是 “Bat Letter” 而不是文件路径。 + +蝙蝠侠的情书现在已经完成。 + +恭喜!你用 HTML 制作了蝙蝠侠的情书。 + +![](https://cdn-images-1.medium.com/max/1000/1*qC8qtrYtxAB6cJfm9aVOOQ.jpeg) + +### 我们学到了什么 + +我们学习了以下新概念: + + * 一个 HTML 文档的结构 + * 在 HTML 中如何写元素(`

<p></p>`)
  * 如何使用 style 属性在元素内编写样式(这称为内联样式,应尽可能避免这种情况)
  * 如何在 `<style></style>` 中编写元素的样式(这称为嵌入式样式)
  * 在 HTML 中如何使用 `<link>` 链接写在单独文件中的样式(这称为链接样式表)
  * 什么是标签名称、属性、开始标签和结束标签
  * 如何使用 id 属性为一个元素赋予 id
  * CSS 中的标签选择器和 id 选择器

我们学习了以下 HTML 标签:

  * `<p>`:用于段落
  * `<br/>`:用于换行
  * `<ul>`、`<li>`:显示列表
  * `<div>`:用于分组我们信件的元素
  * `<h1>`、`<h2>
                  `:用于标题和子标题 + * ``:用于插入图像 + * ``、``:用于粗体和斜体文字样式 + * `. - -* Linked stylesheet: We write styles of all the elements in a separate file with .css extension. This file is called Stylesheet. - -Let’s have a look at how we defined the inline style of the “div” until now: - -``` -
                  -``` - -We can write this same style inside `` like this: - -``` -div{ - width:550px; -} -``` - -In embedded styling, the styles we write are separate from the elements. So we need a way to relate the element and its style. The first word “div” does exactly that. It lets the browser know that whatever style is inside the curly braces `{…}` belongs to the “div” element. Since this phrase determines which element to apply the style to, it’s called a selector. - -The way we write style remains same: property(width) and value(550px) separated by a colon(:) and ended by a semicolon(;). - -Let’s remove inline style from our “div” and “img” element and write it inside the ` -``` - -``` -
                  -

                  Bat Letter

                  - -

                  - After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you. -

                  -``` - -``` -

                  You are the light of my life

                  -

                  - You complete my darkness with your light. I love: -

                  -
                    -
                  • the way you see good in the worse
                  • -
                  • the way you handle emotionally difficult situations
                  • -
                  • the way you look at Justice
                  • -
                  -

                  - I have learned a lot from you. You have occupied a special place in my heart over the time. -

                  -

                  I have a confession to make

                  -

                  - It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart. -

                  -

                  - I don't show my emotions, but I think this man behind the mask is falling for you. -

                  -

                  I love you Superman.

                  -

                  - Your not-so-secret-lover,
                  - Batman -

                  -
                  -``` - -Save and refresh, and the result should remain the same. - -There is one big problem though — what if there is more than one “div” and “img” element in our HTML file? The styles that we defined for div and img inside the “style” element will apply to every div and img on the page. - -If you add another div in your code in the future, then that div will also become 550px wide. We don’t want that. - -We want to apply our styles to the specific div and img that we are using right now. To do this, we need to give our div and img element unique ids. Here’s how you can give an id to an element using its “id” attribute: - -``` -
                  -``` - -and here’s how to use th diff --git a/sources/tech/20180202 Tips for success when getting started with Ansible.md b/sources/tech/20180202 Tips for success when getting started with Ansible.md deleted file mode 100644 index 539db2ac86..0000000000 --- a/sources/tech/20180202 Tips for success when getting started with Ansible.md +++ /dev/null @@ -1,73 +0,0 @@ -Tips for success when getting started with Ansible -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus-big-data.png?itok=L34b2exg) - -Ansible is an open source automation tool used to configure servers, install software, and perform a wide variety of IT tasks from one central location. It is a one-to-many agentless mechanism where all instructions are run from a control machine that communicates with remote clients over SSH, although other protocols are also supported. - -While targeted for system administrators with privileged access who routinely perform tasks such as installing and configuring applications, Ansible can also be used by non-privileged users. For example, a database administrator using the `mysql` login ID could use Ansible to create databases, add users, and define access-level controls. - -Let's go over a very simple example where a system administrator provisions 100 servers each day and must run a series of Bash commands on each one before handing it off to users. - -![](https://opensource.com/sites/default/files/u128651/mapping-bash-commands-to-ansible.png) - -This is a simple example, but should illustrate how easily commands can be specified in yaml files and executed on remote servers. In a heterogeneous environment, conditional statements can be added so that certain commands are only executed in certain servers (e.g., "only execute `yum` commands in systems that are not Ubuntu or Debian"). - -One important feature in Ansible is that a playbook describes a desired state in a computer system, so a playbook can be run multiple times against a server without impacting its state. If a certain task has already been implemented (e.g., "user `sysman` already exists"), then Ansible simply ignores it and moves on. - -### Definitions - - * **Tasks:**``A task is the smallest unit of work. It can be an action like "Install a database," "Install a web server," "Create a firewall rule," or "Copy this configuration file to that server." - * **Plays:**``A play is made up of tasks. For example, the play: "Prepare a database to be used by a web server" is made up of tasks: 1) Install the database package; 2) Set a password for the database administrator; 3) Create a database; and 4) Set access to the database. - * **Playbook:**``A playbook is made up of plays. A playbook could be: "Prepare my website with a database backend," and the plays would be 1) Set up the database server; and 2) Set up the web server. - * **Roles:**``Roles are used to save and organize playbooks and allow sharing and reuse of playbooks. Following the previous examples, if you need to fully configure a web server, you can use a role that others have written and shared to do just that. Since roles are highly configurable (if written correctly), they can be easily reused to suit any given deployment requirements. - * **Ansible Galaxy:**``Ansible [Galaxy][1] is an online repository where roles are uploaded so they can be shared with others. It is integrated with GitHub, so roles can be organized into Git repositories and then shared via Ansible Galaxy. 
- - - -These definitions and their relationships are depicted here: - -![](https://opensource.com/sites/default/files/u128651/ansible-definitions.png) - -Please note this is just one way to organize the tasks that need to be executed. We could have split up the installation of the database and the web server into separate playbooks and into different roles. Most roles in Ansible Galaxy install and configure individual applications. You can see examples for installing [mysql][2] and installing [httpd][3]. - -### Tips for writing playbooks - -The best source for learning Ansible is the official [documentation][4] site. And, as usual, online search is your friend. I recommend starting with simple tasks, like installing applications or creating users. Once you are ready, follow these guidelines: - - * When testing, use a small subset of servers so that your plays execute faster. If they are successful in one server, they will be successful in others. - * Always do a dry run to make sure all commands are working (run with `--check-mode` flag). - * Test as often as you need to without fear of breaking things. Tasks describe a desired state, so if a desired state is already achieved, it will simply be ignored. - * Be sure all host names defined in `/etc/ansible/hosts` are resolvable. - * Because communication to remote hosts is done using SSH, keys have to be accepted by the control machine, so either 1) exchange keys with remote hosts prior to starting; or 2) be ready to type in "Yes" to accept SSH key exchange requests for each remote host you want to manage. - * Although you can combine tasks for different Linux distributions in one playbook, it's cleaner to write a separate playbook for each distro. - - - -### In the final analysis - -Ansible is a great choice for implementing automation in your data center: - - * It's agentless, so it is simpler to install than other automation tools. - * Instructions are in YAML (though JSON is also supported) so it's easier than writing shell scripts. - * It's open source software, so contribute back to it and make it even better! - - - -How have you used Ansible to automate your data center? Share your experience in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/tips-success-when-getting-started-ansible - -作者:[Jose Delarosa][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jdelaros1 -[1]:https://galaxy.ansible.com/ -[2]:https://galaxy.ansible.com/bennojoy/mysql/ -[3]:https://galaxy.ansible.com/xcezx/httpd/ -[4]:http://docs.ansible.com/ diff --git a/sources/tech/20180205 Rancher - Container Management Application.md b/sources/tech/20180205 Rancher - Container Management Application.md deleted file mode 100644 index f1900aff9d..0000000000 --- a/sources/tech/20180205 Rancher - Container Management Application.md +++ /dev/null @@ -1,221 +0,0 @@ -Rancher - Container Management Application -====== -Docker is a cutting-edge software used for containerization, that is used in most of IT companies to reduce infrastructure cost. - -By default docker comes without any GUI, which is easy for Linux administrator to manage it and it’s very difficult for developers to manage. When it’s come to production then it’s very difficult for Linux admin too. 
So, what would be the best solution to manage the docker without any trouble. - -The only way is GUI. The Docker API has allowed third party applications to interfacing with Docker. There are many docker GUI applications available in the market. We already wrote an article about Portainer application. Today we are going to discuss about Rancher. - -Containers make software development easier, enabling you to write code faster and run it better. However, running containers in production can be hard. - -**Suggested Read :** [Portainer – A Simple Docker Management GUI][1] - -### What is Rancher - -[Rancher][2] is a complete container management platform that makes it easy to deploy and run containers in production on any infrastructure. It provides infrastructure services such as multi-host networking, global and local load balancing, and volume snapshots. It integrates native Docker management capabilities such as Docker Machine and Docker Swarm. It offers a rich user experience that enables devops admins to operate Docker in production at large scale. - -Navigate to following article for docker installation on Linux. - -**Suggested Read :** -**(#)** [How to install Docker in Linux][3] -**(#)** [How to play with Docker images on Linux][4] -**(#)** [How to play with Docker containers on Linux][5] -**(#)** [How to Install, Run Applications inside Docker Containers][6] - -### Rancher Features - - * Set up Kubernetes in two minutes - * Launch apps with single click (90 popular Docker applications) - * Deploy and manage Docker easily - * complete container management platform for production environment - * Quickly deploy containers in production - * Automate container deployment and operations with a robust technology - * Modular infrastructure services - * Rich set of orchestration tools - * Rancher supports multiple authentication mechanisms - - - -### How to install Rancher - -Rancher installation is very simple since it’s runs as a lightweight Docker containers. Rancher is deployed as a set of Docker containers. Running Rancher is as simple as launching two containers. One container as the management server and another container on a node as an agent. Simple run the following command to deploy rancher on Linux. - -Rancher server offers two different package tags like `stable` & `latest`. The below commands will pull appropriate build rancher image and install on your system. It will only take a couple of minutes for Rancher server to start up. - - * `stable` : This tag will be their latest development builds. These builds will have been validated through rancher CI automation framework which is not advisable for deployment in production. - * `latest` : It’s a latest stable release version which is recommend for production environment. - - - -Rancher installation comes with many varieties. In this tutorial we are going to discuss about two variants. - - * Install rancher server in a single container (Inbuilt Rancher Database) - * Install rancher server in a single container (External Database) - - - -### Method-1 - -Run the following commands to install rancher server in a single container (Inbuilt Rancher Database). -``` -$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:stable - -$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:latest - -``` - -### Method-2 - -Instead of using the internal database that comes with Rancher server, you can start Rancher server pointing to an external database. 
First create required database, database user for the same. -``` -> CREATE DATABASE IF NOT EXISTS cattle COLLATE = 'utf8_general_ci' CHARACTER SET = 'utf8'; -> GRANT ALL ON cattle.* TO 'cattle'@'%' IDENTIFIED BY 'cattle'; -> GRANT ALL ON cattle.* TO 'cattle'@'localhost' IDENTIFIED BY 'cattle'; - -``` - -Run the following command to start Rancher connecting to an external database. -``` -$ sudo docker run -d --restart=unless-stopped -p 8080:8080 rancher/server \ - --db-host myhost.example.com --db-port 3306 --db-user username --db-pass password --db-name cattle - -``` - -If you want to test Rancher 2.0 use the following command to start. -``` -$ sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/server:preview - -``` - -### Access & Setup Rancher Through GUI - -Navigate to the following URL `http://hostname:8080` or `http://server_ip:8080` to access rancher GUI. -[![][7]![][7]][8] - -### How To Register the Host - -Register your host URL which allow hosts to connect to the Rancher API. It’s one time setup. - -To do, Click “Add a Host” link under the main menu or Go to >> Infrastructure >> Add Hosts then hit `save` button. -[![][7]![][7]][9] - -By default access control authentication is disabled in rancher so first we have to enable the access control authentication through available method, otherwise anyone can access the GUI. - -Go to >> Admin >> Access Control and input the following values and finally hit `Enable Authentication` button to enable it. In my case i’m enabling via `local authentication` - - * **`Login UserName`** Input your descried login username - * **`Full Name`** Input your full name - * **`Password`** Input your descried password - * **`Confirm Password`**Confirm the password once again - - - -[![][7]![][7]][10] - -Logout and login back with your new login credential. -[![][7]![][7]][11] - -Now, i can see the local authentication is enabled. -[![][7]![][7]][12] - -### How To Add Hosts - -After register your host, it will take you to next page where you can choose Linux machines from varies cloud providers. We are going to add the host that is running Rancher server, so select the `custom` option and input the required information. - -Enter your server public IP address in the 4th step and run the command which is displaying in the 5th step into your terminal then finally hit `close` button. 
-``` -$ sudo docker run -e CATTLE_AGENT_IP="192.168.1.112" --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v1.2.9 http://192.168.1.112:8080/v1/scripts/3F8217A1DCF01A7B7F8A:1514678400000:D7WeLUcEUnqZOt8rWjrvoaUE - -INFO: Running Agent Registration Process, CATTLE_URL=http://192.168.1.112:8080/v1 -INFO: Attempting to connect to: http://66.70.189.137:8080/v1 -INFO: http://192.168.1.112:8080/v1 is accessible -INFO: Inspecting host capabilities -INFO: Boot2Docker: false -INFO: Host writable: true -INFO: Token: xxxxxxxx -INFO: Running registration -INFO: Printing Environment -INFO: ENV: CATTLE_ACCESS_KEY=A35151AB87C15633DFB4 -INFO: ENV: CATTLE_AGENT_IP=192.168.1.112 -INFO: ENV: CATTLE_HOME=/var/lib/cattle -INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken -INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx -INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx -INFO: ENV: CATTLE_URL=http://192.168.1.112:8080/v1 -INFO: ENV: DETECTED_CATTLE_AGENT_IP=172.17.0.1 -INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v1.2.9 -INFO: Deleting container rancher-agent -INFO: Launched Rancher Agent: 3415a1fd101f3c57d9cff6aef373c0ce66a3e20772122d2ca832039dcefd92fd - -``` - -[![][7]![][7]][13] - -Wait few seconds then the newly added host will be visible. To bring this Go to Infrastructure >> Hosts page. -[![][7]![][7]][14] - -### How To View Containers - -Just navigate the following location to view a list of running containers. Go to >> Infrastructure >> Containers. -[![][7]![][7]][15] - -### How To Create Container - -It’s very simple, just navigate the following location to create a container. - -Go to >> Infrastructure >> Containers >> “Add Container” and input the required information as per your requirement. To test this, i’m going to create Centos container with latest OS. -[![][7]![][7]][16] - -The same has been listed here. Infrastructure >> Containers -[![][7]![][7]][17] - -Hit on the `Container` name to view the container performances information like CPU, memory, network and storage. -[![][7]![][7]][18] - -To manage the container such as stop, start, clone, restart, etc. Choose the particular container then hit `Three dot's` button in the left side of the container or `Actions` button to perform. -[![][7]![][7]][19] - -If you want console access of the container, just hit `Execute Shell` option in the action button. -[![][7]![][7]][20] - -### How To Deploy Container From Application Catalog - -Rancher provides a catalog of application templates that make it easy to deploy in single click. It’s maintain popular applications (nearly 90) contributed by the Rancher community. -[![][7]![][7]][21] - -Go to >> Catalog >> All >> Choose the required application >> Finally hit “Launch” button to deploy. 
-[![][7]![][7]][22] - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/rancher-a-complete-container-management-platform-for-production-environment/ - -作者:[Magesh Maruthamuthu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/magesh/ -[1]:https://www.2daygeek.com/portainer-a-simple-docker-management-gui/ -[2]:http://rancher.com/ -[3]:https://www.2daygeek.com/install-docker-on-centos-rhel-fedora-ubuntu-debian-oracle-archi-scentific-linux-mint-opensuse/ -[4]:https://www.2daygeek.com/list-search-pull-download-remove-docker-images-on-linux/ -[5]:https://www.2daygeek.com/create-run-list-start-stop-attach-delete-interactive-daemonized-docker-containers-on-linux/ -[6]:https://www.2daygeek.com/install-run-applications-inside-docker-containers/ -[7]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[8]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-1.png -[9]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-2.png -[10]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-3.png -[11]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-3a.png -[12]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-4.png -[13]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-5.png -[14]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-6.png -[15]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-7.png -[16]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-8.png -[17]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-9.png -[18]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-10.png -[19]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-11.png -[20]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-12.png -[21]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-13.png -[22]:https://www.2daygeek.com/wp-content/uploads/2018/02/Install-rancher-container-management-application-in-linux-14.png diff --git a/sources/tech/20180206 Power(Shell) to the people.md b/sources/tech/20180206 Power(Shell) to the people.md deleted file mode 100644 index 941dceffe5..0000000000 --- a/sources/tech/20180206 Power(Shell) to the people.md +++ /dev/null @@ -1,156 +0,0 @@ -Power(Shell) to the people -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lightbulbs.png?itok=pwp22hTw) - -Earlier this year, [PowerShell Core][1] [became generally available][2] under an Open Source ([MIT][3]) license. PowerShell is hardly a new technology. 
From its first release for Windows in 2006, PowerShell's creators [sought][4] to incorporate the power and flexibility of Unix shells while remedying their perceived deficiencies, particularly the need for text manipulation to derive value from combining commands. - -Five major releases later, PowerShell Core allows the same innovative shell and command environment to run natively on all major operating systems, including OS X and Linux. Some (read: almost everyone) may still scoff at the audacity and/or the temerity of this Windows-born interloper to offer itself to platforms that have had strong shell environments since time immemorial (at least as defined by a millennial). In this post, I hope to make the case that PowerShell can provide advantages to even seasoned users. - -### Consistency across platforms - -If you plan to port your scripts from one execution environment to another, you need to make sure you use only the commands and syntaxes that work. For example, on GNU systems, you would obtain yesterday's date as follows: -``` -date --date="1 day ago" - -``` - -On BSD systems (such as OS X), the above syntax wouldn't work, as the BSD date utility requires the following syntax: -``` -date -v -1d - -``` - -Because PowerShell is licensed under a permissive license and built for all platforms, you can ship it with your application. Thus, when your scripts run in the target environment, they'll be running on the same shell using the same command implementations as the environment in which you tested your scripts. - -### Objects and structured data - -*nix commands and utilities rely on your ability to consume and manipulate unstructured data. Those who have lived for years with `sed` `grep` and `awk` may be unbothered by this statement, but there is a better way. - -Let's redo the yesterday's date example in PowerShell. To get the current date, run the `Get-Date` cmdlet (pronounced "commandlet"): -``` -> Get-Date                         - - - -Sunday, January 21, 2018 8:12:41 PM - -``` - -The output you see isn't really a string of text. Rather, it is a string representation of a .Net Core object. Just like any other object in any other OOP environment, it has a type and most often, methods you can call. - -Let's prove this: -``` -> $(Get-Date).GetType().FullName - -System.DateTime - -``` - -The `$(...)` syntax behaves exactly as you'd expect from POSIX shells—the result of the evaluation of the command in parentheses is substituted for the entire expression. In PowerShell, however, the $ is strictly optional in such expressions. And, most importantly, the result is a .Net object, not text. So we can call the `GetType()` method on that object to get its type object (similar to `Class` object in Java), and the `FullName` [property][5] to get the full name of the type. - -So, how does this object-orientedness make your life easier? - -First, you can pipe any object to the `Get-Member` cmdlet to see all the methods and properties it has to offer. 
-``` -> (Get-Date) | Get-Member -PS /home/yevster/Documents/ArticlesInProgress> $(Get-Date) | Get-Member         - - -   TypeName: System.DateTime - - -Name                 MemberType     Definition                                 -----                 ----------     ----------                                 -Add                  Method         datetime Add(timespan value)               -AddDays              Method         datetime AddDays(double value)             -AddHours             Method         datetime AddHours(double value)             -AddMilliseconds      Method         datetime AddMilliseconds(double value)     -AddMinutes           Method         datetime AddMinutes(double value)           -AddMonths            Method         datetime AddMonths(int months)             -AddSeconds           Method         datetime AddSeconds(double value)           -AddTicks             Method         datetime AddTicks(long value)               -AddYears             Method         datetime AddYears(int value)               -CompareTo            Method         int CompareTo(System.Object value), int ... -``` - -You can quickly see that the DateTime object has an `AddDays` that you can quickly use to get yesterday's date: -``` -> (Get-Date).AddDays(-1) - - -Saturday, January 20, 2018 8:24:42 PM -``` - -To do something slightly more exciting, let's call Yahoo's weather service (because it doesn't require an API token) and get your local weather. -``` -$city="Boston" -$state="MA" -$url="https://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20weather.forecast%20where%20woeid%20in%20(select%20woeid%20from%20geo.places(1)%20where%20text%3D%22${city}%2C%20${state}%22)&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys" -``` - -Now, we could do things the old-fashioned way and just run `curl $url` to get a giant blob of JSON, or... -``` -$weather=(Invoke-RestMethod $url) -``` - -If you look at the type of `$weather` (by running `echo $weather.GetType().FullName`), you will see that it's a `PSCustomObject`. It's a dynamic object that reflects the structure of the JSON. - -And PowerShell will be thrilled to help you navigate through it with its tab completion. Just type `$weather.` (making sure to include the ".") and press Tab. You will see all the root-level JSON keys. Type one, followed by a "`.`", press Tab again, and you'll see its children (if any). - -Thus, you can easily navigate to the data you want: -``` -> echo $weather.query.results.channel.atmosphere.pressure                                                               -1019.0 - - -> echo $weather.query.results.channel.wind.chill                                                                       -41 -``` - -And if you have JSON or CSV lying around (or returned by an outside command) as unstructured data, just pipe it into the `ConvertFrom-Json` or `ConvertFrom-CSV` cmdlet, respectively, and you can have your data in nice clean objects. - -### Computing vs. automation - -We use shells for two purposes. One is for computing, to run individual commands and to manually respond to their output. The other is automation, to write scripts that execute multiple commands and respond to their output programmatically. - -A problem that most of us have learned to overlook is that these two purposes place different and conflicting requirements on the shell. Computing requires the shell to be laconic. The fewer keystrokes a user can get away with, the better. 
It's unimportant if what the user has typed is barely legible to another human being. Scripts, on the other hand, are code. Readability and maintainability are key. And here, POSIX utilities often fail us. While some commands do offer both laconic and readable syntaxes (e.g. `-f` and `--force`) for some of their parameters, the command names themselves err on the side of brevity, not readability. - -PowerShell includes several mechanisms to eliminate that Faustian tradeoff. - -First, tab completion eliminates typing of argument names. For instance, type `Get-Random -Mi`, press Tab and PowerShell will complete the argument for you: `Get-Random -Minimum`. But if you really want to be laconic, you don't even need to press Tab. For instance, PowerShell will understand -``` -Get-Random -Mi 1 -Ma 10 -``` - -because `Mi` and `Ma` each have unique completions. - -You may have noticed that all PowerShell cmdlet names have a verb-noun structure. This can help script readability, but you probably don't want to keep typing `Get-` over and over in the command line. So don't! If you type a noun without a verb, PowerShell will look for a `Get-` command with that noun. - -Caution: although PowerShell is not case-sensitive, it's a good practice to capitalize the first letter of the noun when you intend to use a PowerShell command. For example, typing `date` will call your system's `date` utility. Typing `Date` will call PowerShell's `Get-Date` cmdlet. - -And if that's not enough, PowerShell has aliases to create simple names. For example, if you type `alias -name cd`, you will discover the `cd` command in PowerShell is itself an alias for the `Set-Location` command. - -So to review—you get powerful tab completion, aliases, and noun completions to keep your command names short, automatic and consistent parameter name truncation, while still enjoying a rich, readable syntax for scripting. - -### So... friends? - -There are just some of the advantages of PowerShell. There are more features and cmdlets I haven't discussed (check out [Where-Object][6] or its alias `?` if you want to make `grep` cry). And hey, if you really feel homesick, PowerShell will be happy to launch your old native utilities for you. But give yourself enough time to get acclimated in PowerShell's object-oriented cmdlet world, and you may find yourself choosing to forget the way back. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/powershell-people - -作者:[Yev Bronshteyn][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/yevster -[1]:https://github.com/PowerShell/PowerShell/blob/master/README.md -[2]:https://blogs.msdn.microsoft.com/powershell/2018/01/10/powershell-core-6-0-generally-available-ga-and-supported/ -[3]:https://spdx.org/licenses/MIT -[4]:http://www.jsnover.com/Docs/MonadManifesto.pdf -[5]:https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/properties -[6]:https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/where-object?view=powershell-6 diff --git a/sources/tech/20180209 A review of Virtual Labs virtualization solutions for MOOCs - WebLog Pro Olivier Berger.md b/sources/tech/20180209 A review of Virtual Labs virtualization solutions for MOOCs - WebLog Pro Olivier Berger.md deleted file mode 100644 index 0cb3755ca1..0000000000 --- a/sources/tech/20180209 A review of Virtual Labs virtualization solutions for MOOCs - WebLog Pro Olivier Berger.md +++ /dev/null @@ -1,255 +0,0 @@ -A review of Virtual Labs virtualization solutions for MOOCs – WebLog Pro Olivier Berger -====== -### 1 Introduction - -This is a memo that tries to capture some of the experience gained in the [FLIRT project][3] on the topic of Virtual Labs for MOOCs (Massive Open Online Courses). - -In this memo, we try to draw an overview of some benefits and concerns with existing approaches at using virtualization techniques for running Virtual Labs, as distributions of tools made available for distant learners. - -We describe 3 main technical architectures: (1) running Virtual Machine images locally on a virtual machine manager, or (2) displaying the remote execution of similar virtual machines on a IaaS cloud, and (3) the potential of connecting to the remote execution of minimized containers on a remote PaaS cloud. - -We then elaborate on some perspectives for locally running ports of applications to the WebAssembly virtual machine of the modern Web browsers. - -Disclaimer: This memo doesn’t intend to point to extensive literature on the subject, so part of our analysis may be biased by our particular context. - -### 2 Context : MOOCs - -Many MOOCs (Massive Open Online Courses) include a kind of “virtual laboratory” for learners to experiment with tools, as a way to apply the knowledge, practice, and be more active in the learning process. In quite a few (technical) disciplines, this can consist in using a set of standard applications in a professional domain, which represent typical tools that would be used in real life scenarii. - -Our main perspective will be that of a MOOC editor and of MOOC production teams which want to make “virtual labs” available for MOOC participants. - -Such a “virtual lab” would typically contain installations of existing applications, pre-installed and configured, and loaded with scenario data in order to perform a lab. - -The main constraint here is that such labs would typically be fabricated with limited software development expertise and funds[1][4]. Thus we consider here only the assembly of existing “normal” applications and discard the option of developping novel “serious games” and simulator applications for such MOOCs. 
- -#### 2.1 The FLIRT project - -The [FLIRT project][5] groups a consortium of 19 partners in Industry, SMEs and Academia to work on a collection of MOOCs and SPOCs for professional development in Networks and Telecommunications. Lead by Institut Mines Telecom, it benefits from the funding support of the French “Investissements d’avenir” programme. - -As part of the FLIRT roadmap, we’re leading an “innovation task” focused on Virtual Labs in the context of the Cloud. This memo was produced as part of this task. - -#### 2.2 Some challenges in virtual labs design for distant learning - -Virtual Labs used in distance learning contexts require the use of software applications in autonomy, either running on a personal, or professional computer. In general, the technical skills of participants may be diverse. So much for the quality (bandwith, QoS, filtering, limitations: firewaling) of the hardware and networks they use at home or at work. It’s thus very optimistic to seek for one solution fits all strategy. - -Most of the time there’s a learning curve on getting familiar with the tools which students will have to use, which constitutes as many challenges to overcome for beginners. These tools may not be suited for beginners, but they will still be selected by the trainers as they’re representative of the professional context being taught. - -In theory, this usability challenge should be addressed by devising an adapted pedagogical approach, especially in a context of distance learning, so that learners can practice the labs on their own, without the presence of a tutor or professor. Or some particular prerequisite skills could be required (“please follow System Administration 101 before applying to this course”). - -Unfortunately there are many cases where instructors basically just translate to a distant learning scenario, previous lab resources that had previously been devised for in presence learning. This lets learner faced with many challenges to overcome. The only support resource is often a regular forum on the MOOC’s LMS (Learning Management System). - -My intuition[2][6] is that developing ad-hoc simulators for distant education would probably be more efficient and easy to use for learners. But that would require a too high investment for the designers of the courses. - -In the context of MOOCs which are mainly free to participate to, not much investment is possible in devising ad-hoc lab applications, and instructors have to rely on existing applications, tools and scenarii to deliver a cheap enough environment. Furthermore, technical or licensing constraints[3][7] may lead to selecting lab tools which may not be easy to learn, but have the great advantage or being freely redistributable[4][8]. - -### 3 Virtual Machines for Virtual Labs - -The learners who will try unattended learning in such typical virtual labs will face difficulties in making specialized applications run. They must overcome the technical details of downloading, installing and configuring programs, before even trying to perform a particular pedagogical scenario linked to the matter studied. - -To diminish these difficulties, one traditional approach for implementing labs in MOOCs has been to assemble in advance a Virtual Machine image. This already made image can then be downloaded and run with a virtual machine simulator (like [VirtualBox][9][5][10]). - -The pre-loaded VM will already have everything ready for use, so that the learners don’t have to install anything on their machines. 
- -An alternative is to let learners download and install the needed software tools themselves, but this leads to so many compatibility issues or technical skill prerequisites, that this is often not advised, and mentioned only as a fallback option. - -#### 3.1 Downloading and installation issues - -Experience shows[2][11] that such virtual machines also bring some issues. Even if installation of every piece of software is no longer required, learners still need to be able to run the VM simulator on a wide range of diverse hardware, OSes and configurations. Even managing to download the VMs, still causes many issues (lack admin privileges, weight vs download speed, memory or CPU load, disk space, screen configurations, firewall filtering, keayboard layout, etc.). - -These problems aren’t generally faced by the majority of learners, but the impacted minority is not marginal either, and they generally will produce a lot of support requests for the MOOC team (usually in the forums), which needs to be anticipated by the community managers. - -The use of VMs is no show stopper for most, but can be a serious problem for a minority of learners, and is then no silver bullet. - -Some general usability issues may also emerge if users aren’t used to the look and feel of the enclosed desktop. For instance, the VM may consist of a GNU/Linux desktop, whereas users would use a Windows or Mac OS system. - -#### 3.2 Fabrication issues for the VM images - -On the MOOC team’s side, the fabrication of a lightweight, fast, tested, license-free and easy to use VM image isn’t necessarily easy. - -Software configurations tend to rot as time passes, and maintenance may not be easy when the later MOOC editions evolutions lead to the need to maintain the virtual lab scenarii years later. - -Ideally, this would require adopting an “industrial” process in building (and testing) the lab VMs, but this requires quite an expertise (system administration, packaging, etc.) that may or not have been anticipated at the time of building the MOOC (unlike video editing competence, for instance). - -Our experiment with the [Vagrant][12] technology [[0][13]] and Debian packaging was interesting in this respect, as it allowed us to use a well managed “script” to precisely control the build of a minimal VM image. - -### 4 Virtual Labs as a Service - -To overcome the difficulties in downloading and running Virtual Machines on one’s local computer, we have started exploring the possibility to run these applications in a kind of Software as a Service (SaaS) context, “on the cloud”. - -But not all applications typically used in MOOC labs are already available for remote execution on the cloud (unless the course deals precisely with managing email in GMail). - -We have then studied the option to use such an approach not for a single application, but for a whole virtual “desktop” which would be available on the cloud. - -#### 4.1 IaaS deployments - -A way to achieve this goal is to deploy Virtual Machine images quite similar to the ones described above, on the cloud, in an Infrastructure as a Service (IaaS) context[6][14], to offer access to remote desktops for every learners. - -There are different technical options to achieve this goal, but a simplified description of the architecture can be seen as just running Virtual Machines on a single IaaS platform instead of on each learner’s computer. 
Access to the desktop and application interfaces is made possible with the use of Web pages (or other dedicated lightweight clients) which will display a “full screen” display of the remote desktop running for the user on the cloud VM. Under the hood, the remote display of a Linux desktop session is made with technologies like [VNC][15] and [RDP][16] connecting to a [Guacamole][17] server on the remote VM. - -In the context of the FLIRT project, we have made early experiments with such an architecture. We used the CloVER solution by our partner [ProCAN][18] which provides a virtual desktops broker between [OpenEdX][19] and an [OpenStack][20] IaaS public platform. - -The expected benefit is that users don’t have to install anything locally, as the only tool needed locally is a Web browser (displaying a full-screen [HTML5 canvas][21] displaying the remote desktop run by the Guacamole server running on the cloud VM. - -But there are still some issues with such an approach. First, the cost of operating such an infrastructure : Virtual Machines need to be hosted on a IaaS platform, and that cost of operation isn’t null[7][22] for the MOOC editor, compared to the cost of VirtualBox and a VM running on the learner’s side (basically zero for the MOOC editor). - -Another issue, which could be more problematic lies in the need for a reliable connection to the Internet during the whole sequences of lab execution by the learners[8][23]. Even if Guacamole is quite efficient at compressing rendering traffic, some basic connectivity is needed during the whole Lab work sessions, preventing some mobile uses for instance. - -One other potential annoyance is the potential delays for making a VM available to a learner (provisioning a VM), when huge VMs images need to be copied inside the IaaS platform when a learner connects to the Virtual Lab activity for the first time (several minutes delays). This may be worse if the VM image is too big (hence the need for optimization of the content[9][24]). - -However, the fact that all VMs are running on a platform under the control of the MOOC editor allows new kind of features for the MOOC. For instance, learners can submit results of their labs directly to the LMS without the need to upload or copy-paste results manually. This can help monitor progress or perform evaluation or grading. - -The fact that their VMs run on the same platform also allows new kinds of pedagogical scenarii, as VMs of multiple learners can be interconnected, allowing cooperative activities between learners. The VM images may then need to be instrumented and deployed in particular configurations, which may require the use of a dedicated broker like CloVER to manage such scenarii. - -For the records, we have yet to perform a rigorous benchmarking of such a solution in order to evaluate its benefits, or constraints given particular contexts. In FLIRT, our main focus will be in the context of SPOCs for professional training (a bit different a context than public MOOCs). - -Still this approach doesn’t solve the VMs fabrication issues for the MOOC staff. Installing software inside a VM, be it local inside a VirtualBox simulator of over the cloud through a remote desktop display, makes not much difference. This relies mainly on manual operations and may not be well managed in terms of quality of the process (reproducibility, optimization). 
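-
-To make the contrast with manual fabrication concrete, and in the spirit of the Vagrant experiment mentioned in section 3.2, a scripted build can be driven from a few commands; this is only a minimal sketch, where the base box and the exported file name are illustrative and the actual lab tools would be declared in the Vagrantfile’s provisioning section:
-
-```
-# Declare a base box in a Vagrantfile, then build the VM from that scripted description
-vagrant init debian/stretch64
-vagrant up --provider virtualbox
-# Export the resulting image so it can be redistributed to learners
-vagrant package --output networking-lab.box
-```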
-
-#### 4.2 PaaS deployments using containers
-
-Some key issues in the IaaS context described above are the cost of operating full VMs and the long provisioning delays.
-
-We’re experimenting with new options to address these issues, through the use of [Linux containers][25] running on a PaaS (Platform as a Service) platform, instead of full-fledged Virtual Machines[10][26].
-
-The main difference, with containers instead of Virtual Machines, lies in the reduced size of images and much lower CPU load requirements, as containers remove the need for one layer of virtualization. Also, the deduplication techniques at the heart of some virtual file-systems used by container platforms lead to really fast provisioning, avoiding the need to wait for the labs to start.
-
-The traditional making of VMs, done by installing packages and taking a snapshot, was affordable for the regular teacher, but involved manual operations. In this respect, one other major benefit of containers is the potential for better industrialization of the virtual lab fabrication, as they are generally not assembled manually. Instead, one uses a “scripting” approach in describing which applications and their dependencies need to be put inside a container image. But this requires new competence from the Lab creators, like learning the [Docker][27] technology (and the [OpenShift][28] PaaS, for instance), which may be quite specialized. Whereas Docker containers tend to be quite popular in Software Development faculties (through the “[devops][29]” hype), they may be a bit new to instructors from other fields.
-
-The learning curve for mastering the automation of the whole container-based labs installation needs to be evaluated. There’s a trade-off to consider in adopting technology like Vagrant or Docker: acquiring container/PaaS expertise vs quality of industrialization and optimization. The production of a MOOC should then require careful planning if one has to hire or contract with a PaaS expert for setting up the Virtual Labs.
-
-We may also expect interesting pedagogical benefits. As containers are lightweight, and platforms allow one to “easily” deploy multiple interlinked containers (over dedicated virtual networks), this enables the setup of more realistic scenarii, where each learner may be provided with multiple “nodes” over virtual networks (all running their individual containers). This would be particularly interesting for Computer Networks or Security teaching, for instance, where each learner may have access both to client and server nodes, to study client-server protocols. This is particularly interesting for us in the context of our FLIRT project, where we produce a collection of Computer Networks courses.
-
-Still, this mode of operation relies on a good connectivity of the learners to the Cloud. In distance learning situations with poor connectivity, the PaaS architecture doesn’t solve that particular issue any better than the previous IaaS architecture.
-
-### 5 Future server-less Virtual Labs with WebAssembly
-
-As we have seen, the IaaS or PaaS based Virtual Labs running on the Cloud offer alternatives to installing local virtual machines on the learner’s computer. But they both require learners to be connected for the whole duration of the Lab, as the applications would be executed on the remote servers, on the Cloud (either inside VMs or containers).
-
-We have been thinking of another alternative which could allow the deployment of some Virtual Labs on the local computers of the learners without the hassles of downloading and installing a Virtual Machine manager and VM image. We envision the possibility of using the infrastructure provided by modern Web browsers to allow running the lab’s applications.
-
-At the time of writing, this architecture is still highly experimental. The main idea is to rebuild the applications needed for the Lab so that they can be run in the “generic” virtual machine present in the modern browsers, the [WebAssembly][30] and Javascript execution engine.
-
-WebAssembly is a modern language which aims for maximum portability, and as its name hints, is a kind of assembly language for the Web platform. What is of interest for us is that WebAssembly is supported by most modern Web browsers, making it a very interesting target for portability.
-
-Emerging toolchains allow recompiling applications written in languages like C or C++ so that they can be run on the WebAssembly virtual machine in the browser. This is interesting as it doesn’t require modifying the source code of these programs. Of course, there are limitations, in the kind of underlying APIs and libraries compatible with that platform, and on the sandboxing of the WebAssembly execution engine enforced by the Web browser.
-
-Historically, WebAssembly has been developed so as to allow running games written in C++ for a framework like Unity, in the Web browser.
-
-In some contexts, for instance for tools with an interactive GUI which process data retrieved from files and don’t need very specific interaction with the underlying operating system, it seems possible to port these programs to WebAssembly for running them inside the Web browser.
-
-We have to experiment further with this technology to validate its potential for running Virtual Labs in the context of a Web browser.
-
-We used a similar approach in the past in porting a Relational Database course lab to the Web browser, for standalone execution. A real database would run in the minimal SQLite RDBMS, recompiled to JavaScript[11][31]. Instead of having to download, install and run a VM with a RDBMS, the students would only connect to a Web page, which would load the DBMS in memory, and allow performing the lab SQL queries locally, disconnected from any third party server.
-
-In a similar manner, we can think, for instance, of a Lab scenario where the Internet packet inspector features of the Wireshark tool would run inside the WebAssembly virtual machine, to allow dissecting provided capture files directly in the Web browser, without having to install Wireshark.
-
-We expect to publish a report on that last experiment in the future with more details and results.
-
-### 6 Conclusion
-
-The most promising architecture for Virtual Lab deployments seems to be the use of containers on a PaaS platform for deploying virtual desktops or virtual application GUIs available in the Web browser.
-
-This would allow the controlled fabrication of Virtual Labs containing the exact bits needed for learners to practice while minimizing the delays.
-
-Still, the need for always-on connectivity can be a problem.
-
-Also, exploiting the potential of inter-networked containers, allowing the kind of multiple-node and collaborative scenarii we described, would require a lot of expertise to develop, as well as management platforms for the MOOC operators, which aren’t yet mature.
- -We hope to be able to report on our progress in the coming months and years on those aspects. - -### 7 References - - - -[0] -Olivier Berger, J Paul Gibson, Claire Lecocq and Christian Bac “Designing a virtual laboratory for a relational database MOOC”. International Conference on Computer Supported Education, SCITEPRESS, 23-25 may 2015, Lisbonne, Portugal, 2015, vol. 7, pp. 260-268, ISBN 978-989-758-107-6 – [DOI: 10.5220/0005439702600268][1] ([preprint (HTML)][2]) - -### 8 Copyright - - [![Creative Commons License](https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png)][45] - -This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][46] - -. - -### Footnotes: - -[1][32] – The FLIRT project also works on business model aspects of MOOC or SPOC production in the context of professional development, but the present memo starts from a minimalitic hypothesis where funding for course production is quite limited. - -[2][33] – research-based evidence needed - -[3][34] – In typical MOOCs which are free to participate, the VM should include only gratis tools, which typically means a GNU/Linux distribution loaded with applications available under free and open source licenses. - -[4][35] – Typically, Free and Open Source software, aka Libre Software - -[5][36] – VirtualBox is portable on many operating systems, making it a very popular solution for such a need - -[6][37] – the IaaS platform could typically be an open cloud for MOOCs or a private cloud for SPOCs (for closer monitoring of student activity or security control reasons). - -[7][38] – Depending of the expected use of the lab by learners, this cost may vary a lot. The size and configuration required for the included software may have an impact (hence the need to minimize the footprint of the VM images). With diminishing costs in general this may not be a show stopper. Refer to marketing figures of commercial IaaS offerings for accurate figures. Attention to additional licensing costs if the OS of the VM isn’t free software, or if other licenses must be provided for every learners. - -[8][39] – The needs for always-on connectivity may not be a problem for professional development SPOCs where learners connect from enterprise networks for instance. It may be detrimental when MOOCs are very popular in southern countries where high bandwidth is both unreliable and expensive. - -[9][40] – In this respect, providing a full Linux desktop inside the VM doesn’t necessarily make sense. Instead, running applications full-screen may be better, avoiding installation of whole desktop environments like Gnome or XFCE… but which has usability consequences. Careful tuning and testing is needed in any case. - -[10][41] – The availability of container based architectures is quite popular in the industry, but has not yet been deployed to a large scale in the context of large public MOOC hosting platforms, to our knowledge, at the time of writing. There are interesting technical challenges which the FLIRT project tries to tackle together with its partner ProCAN. 
- -[11][42] – See the corresponding paragraph [http://www-inf.it-sudparis.eu/PROSE/csedu2015/#standalone-sql-env][43] in [0][44] - - --------------------------------------------------------------------------------- - -via: https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/ - -作者:[Author;Olivier Berger;Télécom Sudparis][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www-public.tem-tsp.eu -[1]:http://dx.doi.org/10.5220/0005439702600268 -[2]:http://www-inf.it-sudparis.eu/PROSE/csedu2015/ -[3]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#org50fdc1a -[4]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.1 -[5]:http://flirtmooc.wixsite.com/flirt-mooc-telecom -[6]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.2 -[7]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.3 -[8]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.4 -[9]:http://virtualbox.org -[10]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.5 -[11]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.2 -[12]:https://www.vagrantup.com/ -[13]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#orgde5af50 -[14]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.6 -[15]:https://en.wikipedia.org/wiki/Virtual_Network_Computing -[16]:https://en.wikipedia.org/wiki/Remote_Desktop_Protocol -[17]:http://guacamole.apache.org/ -[18]:https://www.procan-group.com/ -[19]:https://open.edx.org/ -[20]:http://openstack.org/ -[21]:https://en.wikipedia.org/wiki/Canvas_element -[22]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.7 -[23]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.8 -[24]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.9 -[25]:https://www.redhat.com/en/topics/containers -[26]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.10 -[27]:https://en.wikipedia.org/wiki/Docker_(software) -[28]:https://www.openshift.com/ -[29]:https://en.wikipedia.org/wiki/DevOps -[30]:http://webassembly.org/ -[31]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fn.11 -[32]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.1 -[33]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.2 -[34]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.3 -[35]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.4 
-[36]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.5 -[37]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.6 -[38]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.7 -[39]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.8 -[40]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.9 -[41]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.10 -[42]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#fnr.11 -[43]:http://www-inf.it-sudparis.eu/PROSE/csedu2015/#standalone-sql-env -[44]:https://www-public.tem-tsp.eu/~berger_o/weblog/a-review-of-virtual-labs-virtualization-solutions-for-moocs/#orgde5af50 -[45]:http://creativecommons.org/licenses/by-nc-sa/4.0/ -[46]:http://creativecommons.org/licenses/by-nc-sa/4.0/ diff --git a/sources/tech/20180217 Louis-Philippe Véronneau .md b/sources/tech/20180217 Louis-Philippe Véronneau .md deleted file mode 100644 index bab2f4c169..0000000000 --- a/sources/tech/20180217 Louis-Philippe Véronneau .md +++ /dev/null @@ -1,59 +0,0 @@ -Louis-Philippe Véronneau - -====== -I've been watching [Critical Role][1]1 for a while now and since I've started my master's degree I haven't had much time to sit down and watch the show on YouTube as I used to do. - -I thus started listening to the podcasts instead; that way, I can listen to the show while I'm doing other productive tasks. Pretty quickly, I grew tired of manually downloading every episode each time I finished the last one. To make things worst, the podcast is hosted on PodBean and they won't let you download episodes on a mobile device without their app. Grrr. - -After the 10th time opening the terminal on my phone to download the podcast using some `wget` magic I decided enough was enough: I was going to write a dumb script to download them all in one batch. - -I'm a little ashamed to say it took me more time than I had intended... The PodBean website uses semi-randomized URLs, so I could not figure out a way to guess the paths to the hosted audio files. I considered using `youtube-dl` to get the DASH version of the show on YouTube, but Google has been heavily throttling DASH streams recently. Not cool Google. - -I then had the idea to use iTune's RSS feed to get the audio files. Surely they would somehow be included there? Of course Apple doesn't give you a simple RSS feed link on the iTunes podcast page, so I had to rummage around and eventually found out this is the link you have to use: -``` -https://itunes.apple.com/lookup?id=1243705452&entity=podcast - -``` - -Surprise surprise, from the json file this links points to, I found out the main Critical Role podcast page [has a proper RSS feed][2]. To my defense, the RSS button on the main podcast page brings you to some PodBean crap page. - -Anyway, once you have the RSS feed, it's only a matter of using `grep` and `sed` until you get what you want. - -Around 20 minutes later, I had downloaded all the episodes, for a total of 22Gb! Victory dance! - -Video clip loop of the Critical Role doing a victory dance. - -### Script - -Here's the bash script I wrote. You will need `recode` to run it, as the RSS feed includes some HTML entities. 
-``` -# Get the whole RSS feed -wget -qO /tmp/criticalrole.rss http://criticalrolepodcast.geekandsundry.com/feed/ - -# Extract the URLS and the episode titles -mp3s=( $(grep -o "http.\+mp3" /tmp/criticalrole.rss) ) -titles=( $(tail -n +45 /tmp/criticalrole.rss | grep -o ".\+" \ - | sed -r 's@@@g; s@ @\\@g' | recode html..utf8) ) - -# Download all the episodes under their titles -for i in ${!titles[*]} -do - wget -qO "$(sed -e "s@\\\@\\ @g" <<< "${titles[$i]}").mp3" ${mp3s[$i]} -done - -``` - -1 - For those of you not familiar with Critical Role, it's web series where a group of voice actresses and actors from LA play Dungeons & Dragons. It's so good even people like me who never played D&D can enjoy it.. - --------------------------------------------------------------------------------- - -via: https://veronneau.org/downloading-all-the-critical-role-podcasts-in-one-batch.html - -作者:[Louis-Philippe Véronneau][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://veronneau.org/ -[1]:https://en.wikipedia.org/wiki/Critical_Role -[2]:http://criticalrolepodcast.geekandsundry.com/feed/ diff --git a/sources/tech/20180226 -Getting to Done- on the Linux command line.md b/sources/tech/20180226 -Getting to Done- on the Linux command line.md deleted file mode 100644 index c325a0d884..0000000000 --- a/sources/tech/20180226 -Getting to Done- on the Linux command line.md +++ /dev/null @@ -1,126 +0,0 @@ -'Getting to Done' on the Linux command line -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah) -There is a lot of talk about getting things done at the command line. How many articles are there about using obscure flags with `ls`, nifty regular expressions with Sed and Awk, and how to parse out lots of text with Perl? That isn't what this is about. - -This is about [Getting _to_ Done][1], making sure that the stuff we have to do actually gets tracked and done using tools that don't require a graphical desktop, a web browser, or an internet connection. To do this, we'll look at four ways of tracking your to-do list: plaintext files, Todo.txt, TaskWarrior, and Org-mode. - -### Plain (and simple) text - - -![plaintext][3] - -I like to use Vim, but you can use Nano too. - -The most straightforward way to manage your to-do list is using a plaintext file in your editor of choice. Just open an empty file and add tasks, one per line. When you are done, delete the line. Simple, effective, and it doesn't matter what you use to do it. There are a couple of drawbacks to this method, though. Once you delete a line and save the file, it is gone forever. That can be a problem if you have to report on what you have done this week or last week. And while using a simple file is flexible, it can also get cluttered really easily. - -### Todo.txt: Plaintext leveled up - - -![todo.txt screen][5] - -Neat, organized, and easy to use - -That leads us to the [Todo.txt][6] file format and application. Installation is simple—[download][7] the latest release from GitHub and run `sudo make install` from the unpacked archive. - - -![Installing todo.txt][9] - -It works from a Git clone as well. 
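-
-For readers who want the explicit steps behind that screenshot, here is a minimal sketch of the release-tarball route described above (the archive and directory names depend on the release you download):
-
-```
-# Unpack the downloaded release archive and install system-wide
-tar xzf todo.txt_cli-*.tar.gz
-cd todo.txt_cli-*/
-sudo make install
-```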
- -Todo.txt makes it very easy to add tasks, list tasks, and mark them as done: - -| `todo.sh add "Some Task"` | add "Some Task" to my todo list | -| `todo.sh ls` | list all my tasks | -| `todo.sh ls due:2018-02-15` | list all tasks due on February 15, 2018 | -| `todo.sh do 3` | mark task number 3 as "done" | - -The actual list is still in plaintext, and you can edit it with your favorite text editor as long as you follow the [correct format][10]. - -There is also a very robust help built into the application. - - -![Syntax highlighting in todo.txt][12] - -You can even get syntax highlighting. - -There is also a large selection of add-ons, as well as specifications for writing your own. There are even browser extensions, mobile apps, and desktop apps that support the Todo.txt format. - - -![GNOME extensions in todo.txt][14] - -Even GNOME extensions. - -The biggest drawback to Todo.txt is the lack of an automatic or built-in synchronization mechanism. Most (if not all) of the browser extensions and mobile apps require Dropbox to perform synchronization between the app and the copy on your desktop. If you would like something with sync built-in, we have... - -### Taskwarrior: Now we're cooking with Python - -[Taskwarrior][15] is a Python application with many of the same features as Todo.txt. However, it stores the data in a database and has built-in synchronization capabilities. It also keeps track of what is next, notes how old tasks are, and will warn you if you have something more important to do than what you just did. - -[Installation][16] of Taskwarrior can be done either with your distribution's package manager, through Python's `pip` utility, or built from source. Using it is also pretty straightforward, with commands similar to Todo.txt: - -| `task add "Some Task"` | Add "Some Task" to the list | -| `task list` | List all tasks | -| `task list due ``:today` | List all tasks due on today's date | -| `task do 3` | Complete task number 3 | - -Taskwarrior also has some pretty nice text user interfaces. - -![Taskwarrior in Vit][18] - -I like Vit, which was inspired by Vim. - -Unlike Todo.txt, Taskwarrior can synchronize with a local or remote server. A very basic synchronization server called `taskd` is available if you wish to run your own, and there are several services available if you do not. - -Taskwarrior also has a thriving and extensive ecosystem of add-ons and extensions, as well as mobile and desktop apps. - -![Taskwarrior on GNOME][20] - -Taskwarrior looks really nice on GNOME. - -The only disadvantage to Taskwarrior is that, unlike the other programs listed here, you cannot directly modify the to-do list itself. You can export the task list to various formats, modify the export, and then re-import the files, but it is a lot clunkier than just opening the file directly in a text editor. - -Which brings us to the most powerful of them all... - -### Emacs Org-mode: Hulk smash tasks - -![Org-mode][22] - -Emacs has everything. - -Emacs [Org-mode][23] is by far the most powerful, most flexible open source to-do list manager out there. It supports multiple files, uses plaintext, is almost infinitely customizable, and understands calendars, due dates, and schedules. It is also significantly more complicated to set up than the other applications listed here. But once it is set up, it does everything the other applications do and more. If you are familiar with or a fan of [Bullet Journals][24], Org-mode is possibly the closest you can get on a computer. 
- -Org-mode will run anywhere Emacs runs, and there are a few mobile applications that can interact with it as well. Unfortunately, there are no desktop apps or browser extensions that support Org. Despite all that, Org-mode is still one of the best applications for tracking your to-do list, since it is so very powerful. - -### Choose your tool - -In the end, the goal of all these programs is to help you track what you need to do and make sure you don't forget to do something. While they all have the same basic functions, choosing which one is right for you depends on a lot of factors. Do you want synchronization built-in or not? Do you need a mobile app? Do any of the add-ons include a "must have" feature? Whatever your choice, remember that the program alone cannot make you more organized, but it can help. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/getting-to-done-agile-linux-command-line - -作者:[Kevin Sonney][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[1]:https://www.scruminc.com/getting-done/ -[3]:https://opensource.com/sites/default/files/u128651/plain-text.png (plaintext) -[5]:https://opensource.com/sites/default/files/u128651/todo-txt.png (todo.txt screen) -[6]:http://todotxt.org/ -[7]:https://github.com/todotxt/todo.txt-cli/releases -[9]:https://opensource.com/sites/default/files/u128651/todo-txt-install.png (Installing todo.txt) -[10]:https://github.com/todotxt/todo.txt -[12]:https://opensource.com/sites/default/files/u128651/todo-txt-vim.png (Syntax highlighting in todo.txt) -[14]:https://opensource.com/sites/default/files/u128651/tod-txt-gnome.png (GNOME extensions in todo.txt) -[15]:https://taskwarrior.org/ -[16]:https://taskwarrior.org/download/ -[18]:https://opensource.com/sites/default/files/u128651/taskwarrior-vit.png (Taskwarrior in Vit) -[20]:https://opensource.com/sites/default/files/u128651/taskwarrior-gnome.png (Taskwarrior on GNOME) -[22]:https://opensource.com/sites/default/files/u128651/emacs-org-mode.png (Org-mode) -[23]:https://orgmode.org/ -[24]:http://bulletjournal.com/ diff --git a/sources/tech/20180302 How to manage your workstation configuration with Ansible.md b/sources/tech/20180302 How to manage your workstation configuration with Ansible.md deleted file mode 100644 index fd24cd48ed..0000000000 --- a/sources/tech/20180302 How to manage your workstation configuration with Ansible.md +++ /dev/null @@ -1,170 +0,0 @@ -How to manage your workstation configuration with Ansible -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb) - -Configuration management is a very important aspect of both server administration and DevOps. The "infrastructure as code" methodology makes it easy to deploy servers in various configurations and dynamically scale an organization's resources to keep up with user demands. But less attention is paid to individual administrators who want to automate the setup of their own laptops and desktops (workstations). - -In this series, I'll show you how to automate your workstation setup via [Ansible][1] , which will allow you to easily restore your entire configuration if you want or need to reload your machine. 
In addition, if you have multiple workstations, you can use this same approach to make the configuration identical on each. In this first article, we'll set up basic configuration management for our personal or work computers and set the foundation for the rest of the series. By the end of this article, you'll have a working setup to benefit from right away. Each article will automate more things and grow in complexity. - -### Why Ansible? - -Many configuration management solutions are available, including Salt Stack, Chef, and Puppet. I prefer Ansible because it's lighter in terms of resource utilization, its syntax is easier to read, and when harnessed properly it can revolutionize your configuration management. Ansible's lightweight nature is especially relevant to the topic at hand, because we may not want to run an entire server just to automate the setup of our laptops and desktops. Ideally, we want something fast; something we can use to get up and running quickly should we need to restore our workstations or synchronize our configuration between multiple machines. My specific method for Ansible (which I'll demonstrate in this article) is perfect for this—there's no server to maintain. You just download your configuration and run it. - -### My approach - -Typically, Ansible is run from a central server. It utilizes an inventory file, which is a text file that contains a list of all the hosts and their IP addresses or domain names we want Ansible to manage. This is great for static environments, but it is not ideal for workstations. The reason being we really don't know what the status of our workstations will be at any one moment. Perhaps I powered down my desktop or my laptop may be suspended and stowed in my bag. In either case, the Ansible server would complain, as it can't reach my machines if they are offline. We need something that's more of an on-demand approach, and the way we'll accomplish that is by utilizing `ansible-pull`. The `ansible-pull` command, which is part of Ansible, allows you to download your configuration from a Git repository and apply it immediately. You won't need to maintain a server or an inventory list; you simply run the `ansible-pull` command, feed it a Git repository URL, and it will do the rest for you. - -### Getting started - -First, install Ansible on the computer you want it to manage. One problem is that a lot of distributions ship with an older version. I can tell you from experience you'll definitely want the latest version available. New features are introduced into Ansible quite frequently, and if you're running an older version, example syntax you find online may not be functional because it's using features that aren't implemented in the version you have installed. Even point releases have quite a few new features. One example of this is the `dconf` module, which is new to Ansible as of 2.4. If you try to utilize syntax that makes use of this module, unless you have 2.4 or newer it will fail. In Ubuntu and its derivatives, we can easily install the latest version of Ansible with the official personal package archive ([PPA][2]). The following commands will do the trick: -``` -sudo apt-get install software-properties-common - -sudo apt-add-repository ppa:ansible/ansible - -sudo apt-get update - -sudo apt-get install ansible - -``` - -If you're not using Ubuntu, [consult Ansible's documentation][3] on how to obtain it for your platform. - -Next, we'll need a Git repository to hold our configuration. 
The easiest way to satisfy this requirement is to create an empty repository on GitHub, or you can utilize your own Git server if you have one. To keep things simple, I'll assume you're using GitHub, so adjust the commands if you're using something else. Create a repository in GitHub; you'll end up with a repository URL that will be similar to this: -``` -git@github.com:/ansible.git - -``` - -Clone that repository to your local working directory (ignore any message that complains that the repository is empty): -``` -git clone git@github.com:/ansible.git - -``` - -Now we have an empty repository we can work with. Change your working directory to be inside the repository (`cd ./ansible` for example) and create a file named `local.yml` in your favorite text editor. Place the following configuration in that file: -``` -- hosts: localhost - -  become: true - -  tasks: - -  - name: Install htop - -    apt: name=htop - -``` - -The file you just created is known as a **playbook** , and the instruction to install `htop` (a package I arbitrarily picked to serve as an example) is known as a **play**. The playbook itself is a file in the YAML format, which is a simple to read markup language. A full walkthrough of YAML is beyond the scope of this article, but you don't need to have an expert understanding of it to be proficient with Ansible. The configuration is easy to read; by simply looking at this file, you can easily glean that we're installing the `htop` package. Pay special attention to the `apt` module on the last line, which will only work on Debian-based systems. You can change this to `yum` instead of `apt` if you're using a Red Hat platform or change it to `dnf` if you're using Fedora. The `name` line simply gives information regarding our task and will be shown in the output. Therefore, you'll want to make sure the name is descriptive so it's easy to find if you need to troubleshoot multiple plays. - -Next, let's commit our new file to our repository: -``` -git add local.yml - -git commit -m "initial commit" - -git push origin master - -``` - -Now our new playbook should be present in our repository on GitHub. We can apply the playbook we created with the following command: -``` -sudo ansible-pull -U https://github.com//ansible.git - -``` - -If executed properly, the `htop` package should be installed on your system. You might've seen some warnings near the beginning that complain about the lack of an inventory file. This is fine, as we're not using an inventory file (nor do we need to for this use). At the end of the output, it will give you an overview of what it did. If `htop` was installed properly, you should see `changed=1` on the last line of the output. - -How did this work? The `ansible-pull` command uses the `-U` option, which expects a repository URL. I gave it the `https` version of the repository URL for security purposes because I don't want any hosts to have write access back to the repository (`https` is read-only by default). The `local.yml` playbook name is assumed, so we didn't need to provide a filename for the playbook—it will automatically run a playbook named `local.yml` if it finds it in the repository's root. Next, we used `sudo` in front of the command since we are modifying the system. - -Let's go ahead and add additional packages to our playbook. 
I'll add two additional packages so that it looks like this: -``` -- hosts: localhost - -  become: true - -  tasks: - -  - name: Install htop - -    apt: name=htop - - - -  - name: Install mc - -    apt: name=mc - -    - -  - name: Install tmux - -    apt: name=tmux - -``` - -I added additional plays (tasks) for installing two other packages, `mc` and `tmux`. It doesn't matter what packages you choose to have this playbook install; I just picked these arbitrarily. You should install whichever packages you want all your systems to have. The only caveat is that you have to know that the packages exist in the repository for your distribution ahead of time. - -Before we commit and apply this updated playbook, we should clean it up. It will work fine as it is, but (to be honest) it looks kind of messy. Let's try installing all three packages in just one play. Replace the contents of your `local.yml` with this: -``` -- hosts: localhost - -  become: true - -  tasks: - -  - name: Install packages - -    apt: name={{item}} - -    with_items: - -      - htop - -      - mc - -      - tmux - -``` - -Now that looks cleaner and more efficient. We used `with_items` to consolidate our package list into one play. If we want to add additional packages, we simply add another line with a hyphen and a package name. Consider `with_items` to be similar to a `for` loop. Every package we list will be installed. - -Commit our new changes back to the repository: -``` -git add local.yml - -git commit -m "added additional packages, cleaned up formatting" - -git push origin master - -``` - -Now we can run our playbook to benefit from the new configuration: -``` -sudo ansible-pull -U https://github.com//ansible.git - -``` - -Admittedly, this example doesn't do much yet; all it does is install a few packages. You could've installed these packages much faster just using your package manager. However, as this series continues, these examples will become more complex and we'll automate more things. By the end, the Ansible configuration you'll create will automate more and more tasks. For example, the one I use automates the installation of hundreds of packages, sets up `cron` jobs, handles desktop configuration, and more. - -From what we've accomplished so far, you can probably already see the big picture. All we had to do was create a repository, put a playbook in that repository, then utilize the `ansible-pull` command to pull down that repository and apply it to our machine. We didn't need to set up a server. In the future, if we want to change our config, we can pull down the repo, update it, then push it back to our repository and apply it. If we're setting up a new machine, we only need to install Ansible and apply the configuration. - -In the next article, we'll automate this even further via `cron` and some additional items. In the meantime, I've copied the code for this article into [my GitHub repository][4] so you can check your syntax against mine. I'll update the code as we go along. 
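-
-To make that cycle concrete, here is a rough sketch of the update-and-apply loop; the repository URL uses the same placeholder as the earlier examples, so substitute your own GitHub username:
-
-```
-# Update the playbook in your clone of the repository...
-cd ~/ansible
-git pull origin master
-# (edit local.yml to add or adjust plays, then record the change)
-git commit -am "update package list"
-git push origin master
-
-# ...and apply the latest configuration on any of your workstations
-sudo ansible-pull -U https://github.com/<github_user>/ansible.git
-```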
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/manage-workstation-ansible - -作者:[Jay LaCroix][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jlacroix -[1]:https://www.ansible.com/ -[2]:https://launchpad.net/ubuntu/+ppas -[3]:http://docs.ansible.com/ansible/latest/intro_installation.html -[4]:https://github.com/jlacroix82/ansible_article diff --git a/sources/tech/20180307 3 open source tools for scientific publishing.md b/sources/tech/20180307 3 open source tools for scientific publishing.md deleted file mode 100644 index 0bbc3578e9..0000000000 --- a/sources/tech/20180307 3 open source tools for scientific publishing.md +++ /dev/null @@ -1,78 +0,0 @@ -3 open source tools for scientific publishing -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_science.png?itok=WDKARWGV) -One industry that lags behind others in the adoption of digital or open source tools is the competitive and lucrative world of scientific publishing. Worth over £19B ($26B) annually, according to figures published by Stephen Buranyi in [The Guardian][1] last year, the system for selecting, publishing, and sharing even the most important scientific research today still bears many of the constraints of print media. New digital-era technologies present a huge opportunity to accelerate discovery, make science collaborative instead of competitive, and redirect investments from infrastructure development into research that benefits society. - -The non-profit [eLife initiative][2] was established by the funders of research, in part to encourage the use of these technologies to this end. In addition to publishing an open-access journal for important advances in life science and biomedical research, eLife has made itself into a platform for experimentation and showcasing innovation in research communication—with most of this experimentation based around the open source ethos. - -Working on open publishing infrastructure projects gives us the opportunity to accelerate the reach and adoption of the types of technology and user experience (UX) best practices that we consider important to the advancement of the academic publishing industry. Speaking very generally, the UX of open source products is often left undeveloped, which can in some cases dissuade people from using it. As part of our investment in OSS development, we place a strong emphasis on UX in order to encourage users to adopt these products. - -All of our code is open source, and we actively encourage community involvement in our projects, which to us means faster iteration, more experimentation, greater transparency, and increased reach for our work. - -The projects that we are involved in, such as the development of Libero (formerly known as [eLife Continuum][3]) and the [Reproducible Document Stack][4], along with our recent collaboration with [Hypothesis][5], show how OSS can be used to bring about positive changes in the assessment, publication, and communication of new discoveries. - -### Libero - -Libero is a suite of services and applications available to publishers that includes a post-production publishing system, a full front-end user interface pattern suite, Libero's Lens Reader, an open API, and search and recommendation engines. 
- -Last year, we took a user-driven approach to redesigning the front end of Libero, resulting in less distracting site “furniture” and a greater focus on research articles. We tested and iterated all the key functional areas of the site with members of the eLife community to ensure the best possible reading experience for everyone. The site’s new API also provides simpler access to content for machine readability, including text mining, machine learning, and online application development. - -The content on our website and the patterns that drive the new design are all open source to encourage future product development for both eLife and other publishers that wish to use it. - -### The Reproducible Document Stack - -In collaboration with [Substance][6] and [Stencila][7], eLife is also engaged in a project to create a Reproducible Document Stack (RDS)—an open stack of tools for authoring, compiling, and publishing computationally reproducible manuscripts online. - -Today, an increasing number of researchers are able to document their computational experiments through languages such as [R Markdown][8] and [Python][9]. These can serve as important parts of the experimental record, and while they can be shared independently from or alongside the resulting research article, traditional publishing workflows tend to relegate these assets as a secondary class of content. To publish papers, researchers using these languages often have little option but to submit their computational results as “flattened” outputs in the form of figures, losing much of the value and reusability of the code and data references used in the computation. And while electronic notebook solutions such as [Jupyter][10] can enable researchers to publish their code in an easily reusable and executable form, that’s still in addition to, rather than as an integral part of, the published manuscript. - -The [Reproducible Document Stack][11] project aims to address these challenges through development and publication of a working prototype of a reproducible manuscript that treats code and data as integral parts of the document, demonstrating a complete end-to-end technology stack from authoring through to publication. It will ultimately allow authors to submit their manuscripts in a format that includes embedded code blocks and computed outputs (statistical results, tables, or graphs), and have those assets remain both visible and executable throughout the publication process. Publishers will then be able to preserve these assets directly as integral parts of the published online article. - -### Open annotation with Hypothesis - -Most recently, we introduced open annotation in collaboration with [Hypothesis][12] to enable users of our website to make comments, highlight important sections of articles, and engage with the reading public online. - -Through this collaboration, the open source Hypothesis software was customized with new moderation features, single sign-on authentication, and user-interface customization options, giving publishers more control over its implementation on their sites. These enhancements are already driving higher-quality discussions around published scholarly content. - -The tool can be integrated seamlessly into publishers’ websites, with the scholarly publishing platform [PubFactory][13] and content solutions provider [Ingenta][14] already taking advantage of its improved feature set. [HighWire][15] and [Silverchair][16] are also offering their publishers the opportunity to implement the service. 
- -### Other industries and open source - -Over time, we hope to see more publishers adopt Hypothesis, Libero, and other projects to help them foster the discovery and reuse of important scientific research. But the opportunities for innovation eLife has been able to leverage because of these and other OSS technologies are also prevalent in other industries. - -The world of data science would be nowhere without the high-quality, well-supported open source software and the communities built around it; [TensorFlow][17] is a leading example of this. Thanks to OSS and its communities, all areas of AI and machine learning have seen rapid acceleration and advancement compared to other areas of computing. Similar is the explosion in usage of Linux as a cloud web host, followed by containerization with Docker, and now the growth of Kubernetes, one of the most popular open source projects on GitHub. - -All of these technologies enable organizations to do more with less and focus on innovation instead of reinventing the wheel. And in the end, that’s the real benefit of OSS: It lets us all learn from each other’s failures while building on each other's successes. - -We are always on the lookout for opportunities to engage with the best emerging talent and ideas at the interface of research and technology. Find out more about some of these engagements on [eLife Labs][18], or contact [innovation@elifesciences.org][19] for more information. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/scientific-publishing-software - -作者:[Paul Shanno][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/pshannon -[1]:https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science -[2]:https://elifesciences.org/about -[3]:https://elifesciences.org/inside-elife/33e4127f/elife-introduces-continuum-a-new-open-source-tool-for-publishing -[4]:https://elifesciences.org/for-the-press/e6038800/elife-supports-development-of-open-technology-stack-for-publishing-reproducible-manuscripts-online -[5]:https://elifesciences.org/for-the-press/81d42f7d/elife-enhances-open-annotation-with-hypothesis-to-promote-scientific-discussion-online -[6]:https://github.com/substance -[7]:https://github.com/stencila/stencila -[8]:https://rmarkdown.rstudio.com/ -[9]:https://www.python.org/ -[10]:http://jupyter.org/ -[11]:https://elifesciences.org/labs/7dbeb390/reproducible-document-stack-supporting-the-next-generation-research-article -[12]:https://github.com/hypothesis -[13]:http://www.pubfactory.com/ -[14]:http://www.ingenta.com/ -[15]:https://github.com/highwire -[16]:https://www.silverchair.com/community/silverchair-universe/hypothesis/ -[17]:https://www.tensorflow.org/ -[18]:https://elifesciences.org/labs -[19]:mailto:innovation@elifesciences.org diff --git a/sources/tech/20180307 Protecting Code Integrity with PGP - Part 4- Moving Your Master Key to Offline Storage.md b/sources/tech/20180307 Protecting Code Integrity with PGP - Part 4- Moving Your Master Key to Offline Storage.md deleted file mode 100644 index df00e7e05e..0000000000 --- a/sources/tech/20180307 Protecting Code Integrity with PGP - Part 4- Moving Your Master Key to Offline Storage.md +++ /dev/null @@ -1,167 +0,0 @@ -Protecting Code Integrity with PGP — Part 4: Moving Your Master Key to Offline Storage 
-====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/industry-1920.jpg?itok=gI3QraS8) -In this tutorial series, we're providing practical guidelines for using PGP. You can catch up on previous articles here: - -[Part 1: Basic Concepts and Tools][1] - -[Part 2: Generating Your Master Key][2] - -[Part 3: Generating PGP Subkeys][3] - -Here in part 4, we continue the series with a look at how and why to move your master key from your home directory to offline storage. Let's get started. - -### Checklist - - * Prepare encrypted detachable storage (ESSENTIAL) - - * Back up your GnuPG directory (ESSENTIAL) - - * Remove the master key from your home directory (NICE) - - * Remove the revocation certificate from your home directory (NICE) - - - - -#### Considerations - -Why would you want to remove your master [C] key from your home directory? This is generally done to prevent your master key from being stolen or accidentally leaked. Private keys are tasty targets for malicious actors -- we know this from several successful malware attacks that scanned users' home directories and uploaded any private key content found there. - -It would be very damaging for any developer to have their PGP keys stolen -- in the Free Software world, this is often tantamount to identity theft. Removing private keys from your home directory helps protect you from such events. - -##### Back up your GnuPG directory - -**!!!Do not skip this step!!!** - -It is important to have a readily available backup of your PGP keys should you need to recover them (this is different from the disaster-level preparedness we did with paperkey). - -##### Prepare detachable encrypted storage - -Start by getting a small USB "thumb" drive (preferably two!) that you will use for backup purposes. You will first need to encrypt them: - -For the encryption passphrase, you can use the same one as on your master key. - -##### Back up your GnuPG directory - -Once the encryption process is over, re-insert the USB drive and make sure it gets properly mounted. Find out the full mount point of the device, for example by running the mount command (under Linux, external media usually gets mounted under /media/disk, under Mac it's /Volumes). - -Once you know the full mount path, copy your entire GnuPG directory there: -``` -$ cp -rp ~/.gnupg [/media/disk/name]/gnupg-backup - -``` - -(Note: If you get any Operation not supported on socket errors, those are benign and you can ignore them.) - -You should now test to make sure everything still works: -``` -$ gpg --homedir=[/media/disk/name]/gnupg-backup --list-key [fpr] - -``` - -If you don't get any errors, then you should be good to go. Unmount the USB drive and distinctly label it, so you don't blow it away next time you need to use a random USB drive. Then, put in a safe place -- but not too far away, because you'll need to use it every now and again for things like editing identities, adding or revoking subkeys, or signing other people's keys. - -##### Remove the master key - -The files in our home directory are not as well protected as we like to think. 
They can be leaked or stolen via many different means: - - * By accident when making quick homedir copies to set up a new workstation - - * By systems administrator negligence or malice - - * Via poorly secured backups - - * Via malware in desktop apps (browsers, pdf viewers, etc) - - * Via coercion when crossing international borders - - - - -Protecting your key with a good passphrase greatly helps reduce the risk of any of the above, but passphrases can be discovered via keyloggers, shoulder-surfing, or any number of other means. For this reason, the recommended setup is to remove your master key from your home directory and store it on offline storage. - -###### Removing your master key - -Please see the previous section and make sure you have backed up your GnuPG directory in its entirety. What we are about to do will render your key useless if you do not have a usable backup! - -First, identify the keygrip of your master key: -``` -$ gpg --with-keygrip --list-key [fpr] - -``` - -The output will be something like this: -``` -pub rsa4096 2017-12-06 [C] [expires: 2019-12-06] - 111122223333444455556666AAAABBBBCCCCDDDD - Keygrip = AAAA999988887777666655554444333322221111 -uid [ultimate] Alice Engineer -uid [ultimate] Alice Engineer -sub rsa2048 2017-12-06 [E] - Keygrip = BBBB999988887777666655554444333322221111 -sub rsa2048 2017-12-06 [S] - Keygrip = CCCC999988887777666655554444333322221111 - -``` - -Find the keygrip entry that is beneath the pub line (right under the master key fingerprint). This will correspond directly to a file in your home .gnupg directory: -``` -$ cd ~/.gnupg/private-keys-v1.d -$ ls -AAAA999988887777666655554444333322221111.key -BBBB999988887777666655554444333322221111.key -CCCC999988887777666655554444333322221111.key - -``` - -All you have to do is simply remove the .key file that corresponds to the master keygrip: -``` -$ cd ~/.gnupg/private-keys-v1.d -$ rm AAAA999988887777666655554444333322221111.key - -``` - -Now, if you issue the --list-secret-keys command, it will show that the master key is missing (the # indicates it is not available): -``` -$ gpg --list-secret-keys -sec# rsa4096 2017-12-06 [C] [expires: 2019-12-06] - 111122223333444455556666AAAABBBBCCCCDDDD -uid [ultimate] Alice Engineer -uid [ultimate] Alice Engineer -ssb rsa2048 2017-12-06 [E] -ssb rsa2048 2017-12-06 [S] - -``` - -##### Remove the revocation certificate - -Another file you should remove (but keep in backups) is the revocation certificate that was automatically created with your master key. A revocation certificate allows someone to permanently mark your key as revoked, meaning it can no longer be used or trusted for any purpose. You would normally use it to revoke a key that, for some reason, you can no longer control -- for example, if you had lost the key passphrase. - -Just as with the master key, if a revocation certificate leaks into malicious hands, it can be used to destroy your developer digital identity, so it's better to remove it from your home directory. -``` -cd ~/.gnupg/openpgp-revocs.d -rm [fpr].rev - -``` - -Next time, you'll learn how to secure your subkeys as well. Stay tuned. - -Learn more about Linux through the free ["Introduction to Linux" ][4]course from The Linux Foundation and edX. 
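-
-As a final aside (this is not part of the original checklist, just a cheap sanity check): once the master key file and revocation certificate are gone from your home directory, confirm that day-to-day operations still work through your subkeys:
-
-```
-# Signing should still succeed via the [S] subkey even with the master key offline
-echo "test" | gpg --clearsign > /dev/null && echo "subkey signing OK"
-```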
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-4-moving-your-master-key-offline-storage - -作者:[Konstantin Ryabitsev][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/mricon -[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools -[2]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key -[3]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-3-generating-pgp-subkeys -[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md b/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md deleted file mode 100644 index 555a475651..0000000000 --- a/sources/tech/20180312 ddgr - A Command Line Tool To Search DuckDuckGo From The Terminal.md +++ /dev/null @@ -1,231 +0,0 @@ -ddgr – A Command Line Tool To Search DuckDuckGo From The Terminal -====== -Bash tricks are really awesome in Linux that makes everything is possible in Linux. - -It really works well for developers or system admins because they are spending most of the time with terminal. Did you know why they are preferring this tricks? - -These trick will improve their productivity and also make them to work fast. - -### What Is ddgr - -[ddgr][1] is a command-line utility to search DuckDuckGo from the terminal. ddgr works out of the box with several text-based browsers if the BROWSER environment variable is set. - -Make sure your system should have installed any text-based browsers. You may know about [googler][2] that allow users to perform Google searches from the Linux command line. - -It’s highly popular among cmdline users and they are expect the similar utility for privacy-aware DuckDuckGo, that’s why ddgr came to picture. - -Unlike the web interface, you can specify the number of search results you would like to see per page. - -**Suggested Read :** -**(#)** [Googler – Google Search From The Linux Command Line][2] -**(#)** [Buku – A Powerful Command-line Bookmark Manager for Linux][3] -**(#)** [SoCLI – Easy Way To Search And Browse Stack Overflow From The Terminal][4] -**(#)** [RTV (Reddit Terminal Viewer) – A Simple Terminal Viewer For Reddit][5] - -### What Is DuckDuckGo - -DDG stands for DuckDuckGo. DuckDuckGo (DDG) is an Internet search engine that really protecting users search and privacy. - -They didn’t filter users personalized search results and It’s showing the same search results to all users for a given search term. - -Most of the users prefer google search engine, if you really worrying about privacy then you can blindly go with DuckDuckGo. 
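Before moving on to the feature list, here is a minimal sketch of a fully terminal-based setup. It assumes the w3m text browser is installed (any text-based browser on your PATH will do); ddgr hands result URLs to whatever the BROWSER environment variable points at, as mentioned above:

```
$ export BROWSER=w3m        # assumption: w3m is installed
$ ddgr "linux privacy"      # search, then open results from the omniprompt
```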
- -### ddgr Features - - * Fast and clean (no ads, stray URLs or clutter), custom color - * Designed to deliver maximum readability at minimum space - * Specify the number of search results to show per page - * Navigate result pages from omniprompt, open URLs in browser - * Search and option completion scripts for Bash, Zsh and Fish - * DuckDuckGo Bang support (along with completion) - * Open the first result directly in browser (as in I’m Feeling Ducky) - * Non-stop searches: fire new searches at omniprompt without exiting - * Keywords (e.g. filetype:mime, site:somesite.com) support - * Limit search by time, specify region, disable safe search - * HTTPS proxy support, Do Not Track set, optionally disable User Agent - * Support custom url handler script or cmdline utility - * Comprehensive documentation, man page with handy usage examples - * Minimal dependencies - - - -### Prerequisites - -ddgr requires Python 3.4 or later. So, make sure you system should have Python 3.4 or later version. -``` -$ python3 --version -Python 3.6.3 - -``` - -### How To Install ddgr In Linux - -We can easily install ddgr using the following command based on the distributions. - -For **`Fedora`** , use [DNF Command][6] to install ddgr. -``` -# dnf install ddgr - -``` - -Alternatively we can use [SNAP Command][7] to install ddgr. -``` -# snap install ddgr - -``` - -For **`LinuxMint/Ubuntu`** , use [APT-GET Command][8] or [APT Command][9] to install ddgr. -``` -$ sudo add-apt-repository ppa:twodopeshaggy/jarun -$ sudo apt-get update -$ sudo apt-get install ddgr - -``` - -For **`Arch Linux`** based systems, use [Yaourt Command][10] or [Packer Command][11] to install ddgr from AUR repository. -``` -$ yaourt -S ddgr -or -$ packer -S ddgr - -``` - -For **`Debian`** , use [DPKG Command][12] to install ddgr. -``` -# wget https://github.com/jarun/ddgr/releases/download/v1.2/ddgr_1.2-1_debian9.amd64.deb -# dpkg -i ddgr_1.2-1_debian9.amd64.deb - -``` - -For **`CentOS 7`** , use [YUM Command][13] to install ddgr. -``` -# yum install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.el7.3.centos.x86_64.rpm - -``` - -For **`opensuse`** , use [zypper Command][14] to install ddgr. -``` -# zypper install https://github.com/jarun/ddgr/releases/download/v1.2/ddgr-1.2-1.opensuse42.3.x86_64.rpm - -``` - -### How To Launch ddgr - -Enter the `ddgr` command without any option on terminal to bring DuckDuckGo search. You will get the same output similar to below. -``` -$ ddgr - -``` - -![][16] - -### How To Search Using ddgr - -We can initiate the search through two ways. Either from omniprompt or directly from terminal. You can search any phrases which you want. - -Directly from terminal. -``` -$ ddgr 2daygeek - -``` - -![][17] - -From `omniprompt`. -![][18] - -### Omniprompt Shortcut - -Enter `?` to obtain the `omniprompt`, which will show you list of keywords and shortcut to work further with ddgr. -![][19] - -### How To Move Next,Previous, and Fist Page - -It allows user to move next page or previous page or first page. - - * `n:` Move to next set of search results - * `p:` Move to previous set of search results - * `f:` Jump to the first page - - - -![][20] - -### How To Initiate A New Search - -“ **d** ” option allow users to initiate a new search from omniprompt. For example, i searched about `2daygeek website` and now i’m going to initiate a new search with phrase “ **Magesh Maruthamuthu** “. - -From `omniprompt`. -``` -ddgr (? 
for help) d magesh maruthmuthu - -``` - -![][21] - -### Show Complete URLs In Search Result - -By default it shows only an article heading, add the “ **x** ” option in search to show complete article urls in search result. -``` -$ ddgr -n 5 -x 2daygeek - -``` - -![][22] - -### Limit Search Results - -By default search results shows 10 results per page. If you want to limit the page results for your convenience, you can do by passing `--num or -n` argument with ddgr. -``` -$ ddgr -n 5 2daygeek - -``` - -![][23] - -### Website Specific Search - -To search specific pages from the particular website, use the below format. This will fetch the results for given keywords from the website. For example, We are going search about “ **Package Manager** ” from 2daygeek website. See the results. -``` -$ ddgr -n 5 --site 2daygeek "package manager" - -``` - -![][24] - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/ddgr-duckduckgo-search-from-the-command-line-in-linux/ - -作者:[Magesh Maruthamuthu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/magesh/ -[1]:https://github.com/jarun/ddgr -[2]:https://www.2daygeek.com/googler-google-search-from-the-command-line-on-linux/ -[3]:https://www.2daygeek.com/buku-command-line-bookmark-manager-linux/ -[4]:https://www.2daygeek.com/socli-search-and-browse-stack-overflow-from-linux-terminal/ -[5]:https://www.2daygeek.com/rtv-reddit-terminal-viewer-a-simple-terminal-viewer-for-reddit/ -[6]:https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ -[7]:https://www.2daygeek.com/snap-command-examples/ -[8]:https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ -[9]:https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ -[10]:https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/ -[11]:https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/ -[12]:https://www.2daygeek.com/dpkg-command-to-manage-packages-on-debian-ubuntu-linux-mint-systems/ -[13]:https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ -[14]:https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ -[15]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[16]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux1.png -[17]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-3.png -[18]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-2.png -[19]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-4.png -[20]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-5a.png -[21]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-6a.png -[22]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-7a.png -[23]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-8.png -[24]:https://www.2daygeek.com/wp-content/uploads/2018/03/ddgr-duckduckgo-command-line-search-for-linux-9a.png diff --git 
a/sources/tech/20180314 Protecting Code Integrity with PGP - Part 5- Moving Subkeys to a Hardware Device.md b/sources/tech/20180314 Protecting Code Integrity with PGP - Part 5- Moving Subkeys to a Hardware Device.md deleted file mode 100644 index ac862ff64e..0000000000 --- a/sources/tech/20180314 Protecting Code Integrity with PGP - Part 5- Moving Subkeys to a Hardware Device.md +++ /dev/null @@ -1,303 +0,0 @@ -Protecting Code Integrity with PGP — Part 5: Moving Subkeys to a Hardware Device -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/pgp-keys.jpg?itok=aS6IWGpq) - -In this tutorial series, we're providing practical guidelines for using PGP. If you missed the previous article, you can catch up with the links below. But, in this article, we'll continue our discussion about securing your keys and look at some tips for moving your subkeys to a specialized hardware device. - -[Part 1: Basic Concepts and Tools][1] - -[Part 2: Generating Your Master Key][2] - -[Part 3: Generating PGP Subkeys][3] - -[Part 4: Moving Your Master Key to Offline Storage][4] - -### Checklist - - * Get a GnuPG-compatible hardware device (NICE) - - * Configure the device to work with GnuPG (NICE) - - * Set the user and admin PINs (NICE) - - * Move your subkeys to the device (NICE) - - - - -### Considerations - -Even though the master key is now safe from being leaked or stolen, the subkeys are still in your home directory. Anyone who manages to get their hands on those will be able to decrypt your communication or fake your signatures (if they know the passphrase). Furthermore, each time a GnuPG operation is performed, the keys are loaded into system memory and can be stolen from there by sufficiently advanced malware (think Meltdown and Spectre). - -The best way to completely protect your keys is to move them to a specialized hardware device that is capable of smartcard operations. - -#### The benefits of smartcards - -A smartcard contains a cryptographic chip that is capable of storing private keys and performing crypto operations directly on the card itself. Because the key contents never leave the smartcard, the operating system of the computer into which you plug in the hardware device is not able to retrieve the private keys themselves. This is very different from the encrypted USB storage device we used earlier for backup purposes -- while that USB device is plugged in and decrypted, the operating system is still able to access the private key contents. Using external encrypted USB media is not a substitute to having a smartcard-capable device. - -Some other benefits of smartcards: - - * They are relatively cheap and easy to obtain - - * They are small and easy to carry with you - - * They can be used with multiple devices - - * Many of them are tamper-resistant (depends on manufacturer) - - - - -#### Available smartcard devices - -Smartcards started out embedded into actual wallet-sized cards, which earned them their name. You can still buy and use GnuPG-capable smartcards, and they remain one of the cheapest available devices you can get. However, actual smartcards have one important downside: they require a smartcard reader, and very few laptops come with one. - -For this reason, manufacturers have started providing small USB devices, the size of a USB thumb drive or smaller, that either have the microsim-sized smartcard pre-inserted, or that simply implement the smartcard protocol features on the internal chip. 
Here are a few recommendations: - - * [Nitrokey Start][5]: Open hardware and Free Software: one of the cheapest options for GnuPG use, but with fewest extra security features - - * [Nitrokey Pro][6]: Similar to the Nitrokey Start, but is tamper-resistant and offers more security features (but not U2F, see the Fido U2F section of the guide) - - * [Yubikey 4][7]: Proprietary hardware and software, but cheaper than Nitrokey Pro and comes available in the USB-C form that is more useful with newer laptops; also offers additional security features such as U2F - - - - -Our recommendation is to pick a device that is capable of both smartcard functionality and U2F, which, at the time of writing, means a Yubikey 4. - -#### Configuring your smartcard device - -Your smartcard device should Just Work (TM) the moment you plug it into any modern Linux or Mac workstation. You can verify it by running: -``` -$ gpg --card-status - -``` - -If you didn't get an error, but a full listing of the card details, then you are good to go. Unfortunately, troubleshooting all possible reasons why things may not be working for you is way beyond the scope of this guide. If you are having trouble getting the card to work with GnuPG, please seek support via your operating system's usual support channels. - -##### PINs don't have to be numbers - -Note, that despite having the name "PIN" (and implying that it must be a "number"), neither the user PIN nor the admin PIN on the card need to be numbers. - -Your device will probably have default user and admin PINs set up when it arrives. For Yubikeys, these are 123456 and 12345678, respectively. If those don't work for you, please check any accompanying documentation that came with your device. - -##### Quick setup - -To configure your smartcard, you will need to use the GnuPG menu system, as there are no convenient command-line switches: -``` -$ gpg --card-edit -[...omitted...] -gpg/card> admin -Admin commands are allowed -gpg/card> passwd - -``` - -You should set the user PIN (1), Admin PIN (3), and the Reset Code (4). Please make sure to record and store these in a safe place -- especially the Admin PIN and the Reset Code (which allows you to completely wipe the smartcard). You so rarely need to use the Admin PIN, that you will inevitably forget what it is if you do not record it. - -Getting back to the main card menu, you can also set other values (such as name, sex, login data, etc), but it's not necessary and will additionally leak information about your smartcard should you lose it. - -#### Moving the subkeys to your smartcard - -Exit the card menu (using "q") and save all changes. Next, let's move your subkeys onto the smartcard. You will need both your PGP key passphrase and the admin PIN of the card for most operations. Remember, that [fpr] stands for the full 40-character fingerprint of your key. -``` -$ gpg --edit-key [fpr] - -Secret subkeys are available. - -pub rsa4096/AAAABBBBCCCCDDDD - created: 2017-12-07 expires: 2019-12-07 usage: C - trust: ultimate validity: ultimate -ssb rsa2048/1111222233334444 - created: 2017-12-07 expires: never usage: E -ssb rsa2048/5555666677778888 - created: 2017-12-07 expires: never usage: S -[ultimate] (1). Alice Engineer -[ultimate] (2) Alice Engineer - -gpg> - -``` - -Using --edit-key puts us into the menu mode again, and you will notice that the key listing is a little different. From here on, all commands are done from inside this menu mode, as indicated by gpg>. 
- -First, let's select the key we'll be putting onto the card -- you do this by typing key 1 (it's the first one in the listing, our [E] subkey): -``` -gpg> key 1 - -``` - -The output should be subtly different: -``` -pub rsa4096/AAAABBBBCCCCDDDD - created: 2017-12-07 expires: 2019-12-07 usage: C - trust: ultimate validity: ultimate -ssb* rsa2048/1111222233334444 - created: 2017-12-07 expires: never usage: E -ssb rsa2048/5555666677778888 - created: 2017-12-07 expires: never usage: S -[ultimate] (1). Alice Engineer -[ultimate] (2) Alice Engineer - -``` - -Notice the * that is next to the ssb line corresponding to the key -- it indicates that the key is currently "selected." It works as a toggle, meaning that if you type key 1 again, the * will disappear and the key will not be selected any more. - -Now, let's move that key onto the smartcard: -``` -gpg> keytocard -Please select where to store the key: - (2) Encryption key -Your selection? 2 - -``` - -Since it's our [E] key, it makes sense to put it into the Encryption slot. When you submit your selection, you will be prompted first for your PGP key passphrase, and then for the admin PIN. If the command returns without an error, your key has been moved. - -**Important:** Now type key 1 again to unselect the first key, and key 2 to select the [S] key: -``` -gpg> key 1 -gpg> key 2 -gpg> keytocard -Please select where to store the key: - (1) Signature key - (3) Authentication key -Your selection? 1 - -``` - -You can use the [S] key both for Signature and Authentication, but we want to make sure it's in the Signature slot, so choose (1). Once again, if your command returns without an error, then the operation was successful. - -Finally, if you created an [A] key, you can move it to the card as well, making sure first to unselect key 2. Once you're done, choose "q": -``` -gpg> q -Save changes? (y/N) y - -``` - -Saving the changes will delete the keys you moved to the card from your home directory (but it's okay, because we have them in our backups should we need to do this again for a replacement smartcard). - -##### Verifying that the keys were moved - -If you perform --list-secret-keys now, you will see a subtle difference in the output: -``` -$ gpg --list-secret-keys -sec# rsa4096 2017-12-06 [C] [expires: 2019-12-06] - 111122223333444455556666AAAABBBBCCCCDDDD -uid [ultimate] Alice Engineer -uid [ultimate] Alice Engineer -ssb> rsa2048 2017-12-06 [E] -ssb> rsa2048 2017-12-06 [S] - -``` - -The > in the ssb> output indicates that the subkey is only available on the smartcard. If you go back into your secret keys directory and look at the contents there, you will notice that the .key files there have been replaced with stubs: -``` -$ cd ~/.gnupg/private-keys-v1.d -$ strings *.key - -``` - -The output should contain shadowed-private-key to indicate that these files are only stubs and the actual content is on the smartcard. - -#### Verifying that the smartcard is functioning - -To verify that the smartcard is working as intended, you can create a signature: -``` -$ echo "Hello world" | gpg --clearsign > /tmp/test.asc -$ gpg --verify /tmp/test.asc - -``` - -This should ask for your smartcard PIN on your first command, and then show "Good signature" after you run gpg --verify. - -Congratulations, you have successfully made it extremely difficult to steal your digital developer identity! - -### Other common GnuPG operations - -Here is a quick reference for some common operations you'll need to do with your PGP key. 
- -In all of the below commands, the [fpr] is your key fingerprint. - -#### Mounting your master key offline storage - -You will need your master key for any of the operations below, so you will first need to mount your backup offline storage and tell GnuPG to use it. First, find out where the media got mounted, for example, by looking at the output of the mount command. Then, locate the directory with the backup of your GnuPG directory and tell GnuPG to use that as its home: -``` -$ export GNUPGHOME=/media/disk/name/gnupg-backup -$ gpg --list-secret-keys - -``` - -You want to make sure that you see sec and not sec# in the output (the # means the key is not available and you're still using your regular home directory location). - -##### Updating your regular GnuPG working directory - -After you make any changes to your key using the offline storage, you will want to import these changes back into your regular working directory: -``` -$ gpg --export | gpg --homedir ~/.gnupg --import -$ unset GNUPGHOME - -``` - -#### Extending key expiration date - -The master key we created has the default expiration date of 2 years from the date of creation. This is done both for security reasons and to make obsolete keys eventually disappear from keyservers. - -To extend the expiration on your key by a year from current date, just run: -``` -$ gpg --quick-set-expire [fpr] 1y - -``` - -You can also use a specific date if that is easier to remember (e.g. your birthday, January 1st, or Canada Day): -``` -$ gpg --quick-set-expire [fpr] 2020-07-01 - -``` - -Remember to send the updated key back to keyservers: -``` -$ gpg --send-key [fpr] - -``` - -#### Revoking identities - -If you need to revoke an identity (e.g., you changed employers and your old email address is no longer valid), you can use a one-liner: -``` -$ gpg --quick-revoke-uid [fpr] 'Alice Engineer ' - -``` - -You can also do the same with the menu mode using gpg --edit-key [fpr]. - -Once you are done, remember to send the updated key back to keyservers: -``` -$ gpg --send-key [fpr] - -``` - -Next time, we'll look at how Git supports multiple levels of integration with PGP. - -Learn more about Linux through the free ["Introduction to Linux" ][8]course from The Linux Foundation and edX. 
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-5-moving-subkeys-hardware-device - -作者:[KONSTANTIN RYABITSEV][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/mricon -[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools -[2]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key -[3]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-3-generating-pgp-subkeys -[4]:https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-4-moving-your-master-key-offline-storage -[5]:https://shop.nitrokey.com/shop/product/nitrokey-start-6 -[6]:https://shop.nitrokey.com/shop/product/nitrokey-pro-3 -[7]:https://www.yubico.com/product/yubikey-4-series/ -[8]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180321 Protecting Code Integrity with PGP - Part 6- Using PGP with Git.md b/sources/tech/20180321 Protecting Code Integrity with PGP - Part 6- Using PGP with Git.md deleted file mode 100644 index 0169d96ad6..0000000000 --- a/sources/tech/20180321 Protecting Code Integrity with PGP - Part 6- Using PGP with Git.md +++ /dev/null @@ -1,318 +0,0 @@ -Protecting Code Integrity with PGP — Part 6: Using PGP with Git -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/global-network.jpg?itok=h_hhZc36) -In this tutorial series, we're providing practical guidelines for using PGP, including basic concepts and generating and protecting your keys. If you missed the previous articles, you can catch up below. In this article, we look at Git's integration with PGP, starting with signed tags, then introducing signed commits, and finally adding support for signed pushes. - -[Part 1: Basic Concepts and Tools][1] - -[Part 2: Generating Your Master Key][2] - -[Part 3: Generating PGP Subkeys][3] - -[Part 4: Moving Your Master Key to Offline Storage][4] - -[Part 5: Moving Subkeys to a Hardware Device][5] - -One of the core features of Git is its decentralized nature -- once a repository is cloned to your system, you have full history of the project, including all of its tags, commits and branches. However, with hundreds of cloned repositories floating around, how does anyone verify that the repository you downloaded has not been tampered with by a malicious third party? You may have cloned it from GitHub or some other official-looking location, but what if someone had managed to trick you? - -Or what happens if a backdoor is discovered in one of the projects you've worked on, and the "Author" line in the commit says it was done by you, while you're pretty sure you had [nothing to do with it][6]? - -To address both of these issues, Git introduced PGP integration. Signed tags prove the repository integrity by assuring that its contents are exactly the same as on the workstation of the developer who created the tag, while signed commits make it nearly impossible for someone to impersonate you without having access to your PGP keys. 
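As a small preview of what this integration looks like from the user's side (both commands are explained in detail later in this article, and the tag name here is only a placeholder):

```
$ git tag -v v1.0                 # verify the PGP signature on an annotated tag
$ git log --show-signature -1     # show and check the signature on the latest commit
```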
- -### Checklist - - * Understand signed tags, commits, and pushes (ESSENTIAL) - - * Configure git to use your key (ESSENTIAL) - - * Learn how tag signing and verification works (ESSENTIAL) - - * Configure git to always sign annotated tags (NICE) - - * Learn how commit signing and verification works (ESSENTIAL) - - * Configure git to always sign commits (NICE) - - * Configure gpg-agent options (ESSENTIAL) - - - - -### Considerations - -Git implements multiple levels of integration with PGP, first starting with signed tags, then introducing signed commits, and finally adding support for signed pushes. - -#### Understanding Git Hashes - -Git is a complicated beast, but you need to know what a "hash" is in order to have a good grasp on how PGP integrates with it. We'll narrow it down to two kinds of hashes: tree hashes and commit hashes. - -##### Tree hashes - -Every time you commit a change to a repository, git records checksum hashes of all objects in it -- contents (blobs), directories (trees), file names and permissions, etc, for each subdirectory in the repository. It only does this for trees and blobs that have changed with each commit, so as not to re-checksum the entire tree unnecessarily if only a small part of it was touched. - -Then it calculates and stores the checksum of the toplevel tree, which will inevitably be different if any part of the repository has changed. - -##### Commit hashes - -Once the tree hash has been created, git will calculate the commit hash, which will include the following information about the repository and the change being made: - - * The checksum hash of the tree - - * The checksum hash of the tree before the change (parent) - - * Information about the author (name, email, time of authorship) - - * Information about the committer (name, email, time of commit) - - * The commit message - - - - -##### Hashing function - -At the time of writing, git still uses the SHA1 hashing mechanism to calculate checksums, though work is under way to transition to a stronger algorithm that is more resistant to collisions. Note, that git already includes collision avoidance routines, so it is believed that a successful collision attack against git remains impractical. - -#### Annotated tags and tag signatures - -Git tags allow developers to mark specific commits in the history of each git repository. Tags can be "lightweight" \-- more or less just a pointer at a specific commit, or they can be "annotated," which becomes its own object in the git tree. An annotated tag object contains all of the following information: - - * The checksum hash of the commit being tagged - - * The tag name - - * Information about the tagger (name, email, time of tagging) - - * The tag message - - - - -A PGP-signed tag is simply an annotated tag with all these entries wrapped around in a PGP signature. 
When a developer signs their git tag, they effectively assure you of the following: - - * Who they are (and why you should trust them) - - * What the state of their repository was at the time of signing: - - * The tag includes the hash of the commit - - * The commit hash includes the hash of the toplevel tree - - * Which includes hashes of all files, contents, and subtrees - * It also includes all information about authorship - - * Including exact times when changes were made - - - - -When you clone a git repository and verify a signed tag, that gives you cryptographic assurance that all contents in the repository, including all of its history, are exactly the same as the contents of the repository on the developer's computer at the time of signing. - -#### Signed commits - -Signed commits are very similar to signed tags -- the contents of the commit object are PGP-signed instead of the contents of the tag object. A commit signature also gives you full verifiable information about the state of the developer's tree at the time the signature was made. Tag signatures and commit PGP signatures provide exact same security assurances about the repository and its entire history. - -#### Signed pushes - -This is included here for completeness' sake, since this functionality needs to be enabled on the server receiving the push before it does anything useful. As we saw above, PGP-signing a git object gives verifiable information about the developer's git tree, but not about their intent for that tree. - -For example, you can be working on an experimental branch in your own git fork trying out a promising cool feature, but after you submit your work for review, someone finds a nasty bug in your code. Since your commits are properly signed, someone can take the branch containing your nasty bug and push it into master, introducing a vulnerability that was never intended to go into production. Since the commit is properly signed with your key, everything looks legitimate and your reputation is questioned when the bug is discovered. - -Ability to require PGP-signatures during git push was added in order to certify the intent of the commit, and not merely verify its contents. - -#### Configure git to use your PGP key - -If you only have one secret key in your keyring, then you don't really need to do anything extra, as it becomes your default key. - -However, if you happen to have multiple secret keys, you can tell git which key should be used ([fpr] is the fingerprint of your key): -``` -$ git config --global user.signingKey [fpr] - -``` - -NOTE: If you have a distinct gpg2 command, then you should tell git to always use it instead of the legacy gpg from version 1: -``` -$ git config --global gpg.program gpg2 - -``` - -#### How to work with signed tags - -To create a signed tag, simply pass the -s switch to the tag command: -``` -$ git tag -s [tagname] - -``` - -Our recommendation is to always sign git tags, as this allows other developers to ensure that the git repository they are working with has not been maliciously altered (e.g. in order to introduce backdoors). - -##### How to verify signed tags - -To verify a signed tag, simply use the verify-tag command: -``` -$ git verify-tag [tagname] - -``` - -If you are verifying someone else's git tag, then you will need to import their PGP key. Please refer to the "Trusted Team communication" document in the same repository for guidance on this topic. 
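As a rough sketch of that flow, using the same placeholder fingerprint as earlier in this series and a made-up tag name (in practice, fetch and validate the maintainer's real key first):

```
$ gpg --recv-keys 111122223333444455556666AAAABBBBCCCCDDDD   # placeholder fingerprint
$ git verify-tag v1.0                                         # placeholder tag name
```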
- -##### Verifying at pull time - -If you are pulling a tag from another fork of the project repository, git should automatically verify the signature at the tip you're pulling and show you the results during the merge operation: -``` -$ git pull [url] tags/sometag - -``` - -The merge message will contain something like this: -``` -Merge tag 'sometag' of [url] - -[Tag message] - -# gpg: Signature made [...] -# gpg: Good signature from [...] - -``` - -#### Configure git to always sign annotated tags - -Chances are, if you're creating an annotated tag, you'll want to sign it. To force git to always sign annotated tags, you can set a global configuration option: -``` -$ git config --global tag.forceSignAnnotated true - -``` - -Alternatively, you can just train your muscle memory to always pass the -s switch: -``` -$ git tag -asm "Tag message" tagname - -``` - -#### How to work with signed commits - -It is easy to create signed commits, but it is much more difficult to incorporate them into your workflow. Many projects use signed commits as a sort of "Committed-by:" line equivalent that records code provenance -- the signatures are rarely verified by others except when tracking down project history. In a sense, signed commits are used for "tamper evidence," and not to "tamper-proof" the git workflow. - -To create a signed commit, you just need to pass the -S flag to the git commit command (it's capital -S due to collision with another flag): -``` -$ git commit -S - -``` - -Our recommendation is to always sign commits and to require them of all project members, regardless of whether anyone is verifying them (that can always come at a later time). - -##### How to verify signed commits - -To verify a single commit you can use verify-commit: -``` -$ git verify-commit [hash] - -``` - -You can also look at repository logs and request that all commit signatures are verified and shown: -``` -$ git log --pretty=short --show-signature - -``` - -##### Verifying commits during git merge - -If all members of your project sign their commits, you can enforce signature checking at merge time (and then sign the resulting merge commit itself using the -S flag): -``` -$ git merge --verify-signatures -S merged-branch - -``` - -Note, that the merge will fail if there is even one commit that is not signed or does not pass verification. As it is often the case, technology is the easy part -- the human side of the equation is what makes adopting strict commit signing for your project difficult. - -##### If your project uses mailing lists for patch management - -If your project uses a mailing list for submitting and processing patches, then there is little use in signing commits, because all signature information will be lost when sent through that medium. It is still useful to sign your commits, just so others can refer to your publicly hosted git trees for reference, but the upstream project receiving your patches will not be able to verify them directly with git. - -You can still sign the emails containing the patches, though. - -#### Configure git to always sign commits - -You can tell git to always sign commits: -``` -git config --global commit.gpgSign true - -``` - -Or you can train your muscle memory to always pass the -S flag to all git commit operations (this includes --amend). - -#### Configure gpg-agent options - -The GnuPG agent is a helper tool that will start automatically whenever you use the gpg command and run in the background with the purpose of caching the private key passphrase. 
This way you only have to unlock your key once to use it repeatedly (very handy if you need to sign a bunch of git operations in an automated script without having to continuously retype your passphrase). - -There are two options you should know in order to tweak when the passphrase should be expired from cache: - - * default-cache-ttl (seconds): If you use the same key again before the time-to-live expires, the countdown will reset for another period. The default is 600 (10 minutes). - - * max-cache-ttl (seconds): Regardless of how recently you've used the key since initial passphrase entry, if the maximum time-to-live countdown expires, you'll have to enter the passphrase again. The default is 30 minutes. - - - - -If you find either of these defaults too short (or too long), you can edit your ~/.gnupg/gpg-agent.conf file to set your own values: -``` -# set to 30 minutes for regular ttl, and 2 hours for max ttl -default-cache-ttl 1800 -max-cache-ttl 7200 - -``` - -##### Bonus: Using gpg-agent with ssh - -If you've created an [A] (Authentication) key and moved it to the smartcard, you can use it with ssh for adding 2-factor authentication for your ssh sessions. You just need to tell your environment to use the correct socket file for talking to the agent. - -First, add the following to your ~/.gnupg/gpg-agent.conf: -``` -enable-ssh-support - -``` - -Then, add this to your .bashrc: -``` -export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket) - -``` - -You will need to kill the existing gpg-agent process and start a new login session for the changes to take effect: -``` -$ killall gpg-agent -$ bash -$ ssh-add -L - -``` - -The last command should list the SSH representation of your PGP Auth key (the comment should say cardno:XXXXXXXX at the end to indicate it's coming from the smartcard). - -To enable key-based logins with ssh, just add the ssh-add -L output to ~/.ssh/authorized_keys on remote systems you log in to. Congratulations, you've just made your ssh credentials extremely difficult to steal. - -As a bonus, you can get other people's PGP-based ssh keys from public keyservers, should you need to grant them ssh access to anything: -``` -$ gpg --export-ssh-key [keyid] - -``` - -This can come in super handy if you need to allow developers access to git repositories over ssh. Next time, we'll provide tips for protecting your email accounts as well as your PGP keys. 
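To make that last point about git-over-ssh access concrete, here is a hedged sketch of the server-side step; the key ID is a placeholder for the developer's actual key:

```
# Run as the account developers will ssh into on the git server (sketch only).
$ gpg --recv-keys AAAABBBBCCCCDDDD                                # placeholder key ID
$ gpg --export-ssh-key AAAABBBBCCCCDDDD >> ~/.ssh/authorized_keys
```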
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-6-using-pgp-git - -作者:[KONSTANTIN RYABITSEV][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/mricon -[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools -[2]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key -[3]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-3-generating-pgp-subkeys -[4]:https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-4-moving-your-master-key-offline-storage -[5]:https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-5-moving-subkeys-hardware-device -[6]:https://github.com/jayphelps/git-blame-someone-else diff --git a/sources/tech/20180324 Memories of writing a parser for man pages.md b/sources/tech/20180324 Memories of writing a parser for man pages.md deleted file mode 100644 index fb53aed395..0000000000 --- a/sources/tech/20180324 Memories of writing a parser for man pages.md +++ /dev/null @@ -1,109 +0,0 @@ -Memories of writing a parser for man pages -====== - -I generally enjoy being bored, but sometimes enough is enough—that was the case a Sunday afternoon of 2015 when I decided to start an open source project to overcome my boredom. - -In my quest for ideas, I stumbled upon a request to build a [“Man page viewer built with web standards”][1] by [Mathias Bynens][2] and without thinking too much, I started coding a man page parser in JavaScript, which after a lot of back and forths, ended up being [Jroff][3]. - -Back then, I was familiar with manual pages as a concept and used them a fair amount of times, but that was all I knew, I had no idea how they were generated or if there was a standard in place. Two years later, here are some thoughts on the matter. - -### How man pages are written - -The first thing that surprised me at the time, was the notion that manpages at their core are just plain text files stored somewhere in the system (you can check this directory using the `manpath` command). - -This files not only contain the documentation, but also formatting information using a typesetting system from the 1970s called `troff`. - -> troff, and its GNU implementation groff, are programs that process a textual description of a document to produce typeset versions suitable for printing. **It’s more ‘What you describe is what you get’ rather than WYSIWYG.** -> -> — extracted from [troff.org][4] - -If you are totally unfamiliar with typesetting formats, you can think of them as Markdown on steroids, but in exchange for the flexibility you have a more complex syntax: - -![groff-compressor][5] - -The `groff` file can be written manually, or generated from other formats such as Markdown, Latex, HTML, and so on with many different tools. - -Why `groff` and man pages are tied together has to do with history, the format has [mutated along time][6], and his lineage is composed of a chain of similarly-named programs: RUNOFF > roff > nroff > troff > groff. 
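If you want to poke at this machinery yourself, the following sketch works on most Linux systems (paths and compression vary between distributions, so treat it as illustrative):

```
$ manpath                                 # directories searched for manual pages
$ man -w ls                               # path to the roff source of ls(1)
$ zcat "$(man -w ls)" | head              # raw troff/groff markup (assumes a gzipped page)
$ zcat "$(man -w ls)" | groff -man -Tutf8 | less   # render it with groff directly
```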
- -But this doesn’t necessarily mean that `groff` is strictly related to man pages, it’s a general-purpose format that has been used to [write books][7] and even for [phototypesetting][8]. - -Moreover, It’s worth noting that `groff` can also call a postprocessor to convert its intermediate output to a final format, which is not necessarily ascii for terminal display! some of the supported formats are: TeX DVI, HTML, Canon, HP LaserJet4 compatible, PostScript, utf8 and many more. - -### Macros - -Other of the cool features of the format is its extensibility, you can write macros that enhance the basic functionalities. - -With the vast history of *nix systems, there are several macro packages that group useful macros together for specific functionalities according to the output that you want to generate, examples of macro packages are `man`, `mdoc`, `mom`, `ms`, `mm`, and the list goes on. - -Manual pages are conventionally written using `man` and `mdoc`. - -You can easily distinguish native `groff` commands from macros by the way standard `groff` packages capitalize their macro names. For `man`, each macro’s name is uppercased, like .PP, .TH, .SH, etc. For `mdoc`, only the first letter is uppercased: .Pp, .Dt, .Sh. - -![groff-example][9] - -### Challenges - -Whether you are considering to write your own `groff` parser, or just curious, these are some of the problems that I have found more challenging. - -#### Context-sensitive grammar - -Formally, `groff` has a context-free grammar, unfortunately, since macros describe opaque bodies of tokens, the set of macros in a package may not itself implement a context-free grammar. - -This kept me away (for good or bad) from the parser generators that were available at the time. - -#### Nested macros - -Most of the macros in `mdoc` are callable, this roughly means that macros can be used as arguments of other macros, for example, consider this: - - * The macro `Fl` (Flag) adds a dash to its argument, so `Fl s` produces `-s` - * The macro `Ar` (Argument) provides facilities to define arguments - * The `Op` (Optional) macro wraps its argument in brackets, as this is the standard idiom to define something as optional. - * The following combination `.Op Fl s Ar file` produces `[-s file]` because `Op` macros can be nested. - - - -#### Lack of beginner-friendly resources - -Something that really confused me was the lack of a canonical, well defined and clear source to look at, there’s a lot of information in the web which assumes a lot about the reader that it takes time to grasp. - -### Interesting macros - -To wrap up, I will offer to you a very short list of macros that I found interesting while developing jroff: - -**man** - - * TH: when writing manual pages with `man` macros, your first line that is not a comment must be this macro, it accepts five parameters: title section date source manual - * BI: bold alternating with italics (especially useful for function specifications) - * BR: bold alternating with Roman (especially useful for referring to other manual pages) - - - -**mdoc** - - * .Dd, .Dt, .Os: similar to how `man` macros require the `.TH` the `mdoc` macros require these three macros, in that particular order. Their initials stand for: Document date, Document title and Operating system. - * .Bl, .It, .El: these three macros are used to create list, their names are self-explanatory: Begin list, Item and End list. 
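To see a few of the mdoc macros above in action, here is a tiny self-contained sketch that pipes a minimal document through groff; the .Nm and .Nd macros (not covered above) simply name the tool and give it a one-line description:

```
$ printf '%s\n' '.Dd March 24, 2018' '.Dt DEMO 1' '.Os' \
    '.Sh NAME' '.Nm demo' '.Nd a minimal mdoc example' | groff -mdoc -Tutf8
```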
- - - - --------------------------------------------------------------------------------- - -via: https://monades.roperzh.com/memories-writing-parser-man-pages/ - -作者:[Roberto Dip][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://monades.roperzh.com -[1]:https://github.com/h5bp/lazyweb-requests/issues/114 -[2]:https://mathiasbynens.be/ -[3]:jroff -[4]:https://www.troff.org/ -[5]:https://user-images.githubusercontent.com/4419992/37868021-2e74027c-2f7f-11e8-894b-80829ce39435.gif -[6]:https://manpages.bsd.lv/history.html -[7]:https://rkrishnan.org/posts/2016-03-07-how-is-gopl-typeset.html -[8]:https://en.wikipedia.org/wiki/Phototypesetting -[9]:https://user-images.githubusercontent.com/4419992/37866838-e602ad78-2f6e-11e8-97a9-2a4494c766ae.jpg diff --git a/sources/tech/20180326 Manage your workstation with Ansible- Automating configuration.md b/sources/tech/20180326 Manage your workstation with Ansible- Automating configuration.md deleted file mode 100644 index 21821a070c..0000000000 --- a/sources/tech/20180326 Manage your workstation with Ansible- Automating configuration.md +++ /dev/null @@ -1,236 +0,0 @@ -Manage your workstation with Ansible: Automating configuration -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/robot_arm_artificial_ai.png?itok=8CUU3U_7) -Ansible is an amazing automation and configuration management tool. It is mainly used for servers and cloud deployments, and it gets far less attention for its use in workstations, both desktops and laptops, which is the focus of this series. - -In the [first part][1] of this series, I showed you basic usage of the `ansible-pull` command, and we created a playbook that installs a handful of packages. That wasn't extremely useful by itself, but it set the stage for further automation. - -In this article, everything comes together full circle, and by the end we will have a fully working solution for automating workstation configuration. This time, we'll set up our Ansible configuration such that future changes we make will automatically be applied to our workstations. At this point, I'm assuming you already worked through part one. If you haven't, feel free to do that now and then return to this article when you're done. You should already have a GitHub repository with the code from the first article inside it. We're going to build directly on what we did before. - -First, we need to do some reorganization because we're going to do more than just install packages. At this point, we currently have a playbook named `local.yml` with the following content: -``` -- hosts: localhost - -  become: true - -  tasks: - -  - name: Install packages - -    apt: name={{item}} - -    with_items: - -      - htop - -      - mc - -      - tmux - -``` - -That's all well and good if we only want to perform one task. As we add new things to our configuration, this file will become quite large and get very cluttered. It's better to organize our plays into individual files with each responsible for a different type of configuration. To achieve this, create what's called a taskbook, which is very much like a playbook but the contents are more streamlined. 
Let's create a directory for our taskbooks inside our Git repository: -``` -mkdir tasks - -``` - -The code inside our current `local.yml` playbook lends itself well to become a taskbook for installing packages. Let's move this file into the `tasks` directory we just created with a new name: -``` -mv local.yml tasks/packages.yml - -``` - -Now, we can edit our `packages.yml` taskbook and strip it down quite a bit. In fact, we can strip everything except for the individual task itself. Let's make `packages.yml` look like this: -``` -- name: Install packages - -  apt: name={{item}} - -  with_items: - -    - htop - -    - mc - -    - tmux - -``` - -As you can see, it uses the same syntax, but we stripped out everything that isn't necessary to the task it's performing. Now we have a dedicated taskbook for installing packages. However, we still need a file named `local.yml`, since `ansible-pull` still expects to find a file with that name. So we'll create a fresh one with this content in the root of our repository (not in the `tasks` directory): -``` -- hosts: localhost - -  become: true - -  pre_tasks: - -    - name: update repositories - -      apt: update_cache=yes - -      changed_when: False - - - -  tasks: - -    - include: tasks/packages.yml - -``` - -This new `local.yml` acts as an index that will import all our taskbooks. I've added a few new things to this file that you haven't seen yet in this series. First, at the beginning of the file, I added `pre_tasks`, which allows us to have Ansible perform a task before all the other tasks run. In this case, we're telling Ansible to update our distribution's repository index. This line does that for us: -``` -apt: update_cache=yes - -``` - -Normally the `apt` module allows us to install packages, but we can also tell it to update our repository index. The idea is that we want all our individual plays to work with a fresh index each time Ansible runs. This will help ensure we don't have an issue with a stale index while attempting to install a package. Note that the `apt` module works only with Debian, Ubuntu, and their derivatives. If you're running a different distribution, you'll want to use a module specific to your distribution rather than `apt`. See the documentation for Ansible if you need to use a different module. - -The following line is also worth further explanation: -``` -changed_when: False - -``` - -This line on an individual task stops Ansible from reporting the results of the play as changed even when it results in a change in the system. In this case, we don't care if the repository index contains new data; it almost always will, since repositories are always changing. We don't care about changes to `apt` repositories, as index changes are par for the course. If we omit this line, we'll see the summary at the end of the process report that something has changed, even if it was merely about the repository being updated. It's better to ignore these types of changes. - -Next is our normal tasks section, and we import the taskbook we created. Each time we add another taskbook, we add another line here: -``` -tasks: - -  - include: tasks/packages.yml - -``` - -If you were to run the `ansible-pull` command here, it should essentially do the same thing as it did in the last article. The difference is that we have improved our organization and we can more efficiently expand on it. 
The `ansible-pull` command syntax, to save you from finding the previous article, is this: -``` -sudo ansible-pull -U https://github.com//ansible.git - -``` - -If you recall, the `ansible-pull` command pulls down a Git repository and applies the configuration it contains. - -Now that our foundation is in place, we can expand upon our Ansible config and add features. Specifically, we'll add configuration to automate the deployment of future changes to our workstations. To support this goal, the first thing we should do is to create a user specifically to apply our Ansible configuration. This isn't required—we can continue to run our Ansible configuration under our own user. But using a separate user segregates this to a system process that will run in the background, without our involvement. - -We could create this user with the normal method, but since we're using Ansible, we should shy away from manual changes. Instead, we'll create a taskbook to handle user creation. This taskbook will create just one user for now, but you can always add additional plays to this taskbook to add additional users. I'll call this user `ansible`, but you can name it something else if you wish (if you do, make sure to update all occurrences). Let's create a taskbook named `users.yml` and place this code inside of it: -``` -- name: create ansible user - -  user: name=ansible uid=900 - -``` - -Next, we need to edit our `local.yml` file and append this new taskbook to the file, so it will look like this: -``` -- hosts: localhost - -  become: true - -  pre_tasks: - -    - name: update repositories - -      apt: update_cache=yes - -      changed_when: False - - - -  tasks: - -    - include: tasks/users.yml - -    - include: tasks/packages.yml - -``` - -Now when we run our `ansible-pull` command, a user named `ansible` will be created on the system. Note that I specifically declared `User ID 900` for this user by using the `UID` option. This isn't required, but it's recommended. The reason is that UIDs under 1,000 are typically not shown on the login screen, which is great because there's no reason we would need to log into a desktop session as our `ansible` user. UID 900 is arbitrary; it should be any number under 1,000 that's not already in use. You can find out if UID 900 is in use on your system with the following command: -``` -cat /etc/passwd |grep 900 - -``` - -However, you shouldn't run into a problem with this UID because I've never seen it used by default in any distribution I've used so far. - -Now, we have an `ansible` user that will later be used to apply our Ansible configuration automatically. Next, we can create the actual cron job that will be used to automate this. Rather than place this in the `users.yml` taskbook we just created, we should separate this into its own file. Create a taskbook named `cron.yml` in the tasks directory and place the following code inside: -``` -- name: install cron job (ansible-pull) - -  cron: user="ansible" name="ansible provision" minute="*/10" job="/usr/bin/ansible-pull -o -U https://github.com//ansible.git > /dev/null" - -``` - -The syntax for the cron module should be fairly self-explanatory. With this play, we're creating a cron job to be run as the `ansible` user. The job will execute every 10 minutes, and the command it will execute is this: -``` -/usr/bin/ansible-pull -o -U https://github.com//ansible.git > /dev/null - -``` - -Also, we can put additional cron jobs we want all our workstations to have into this one file. 
We just need to add additional plays to add new cron jobs. However, simply adding a new taskbook for cron isn't enough—we'll also need to add it to our `local.yml` file so it will be called. Place the following line with the other includes: -``` -- include: tasks/cron.yml - -``` - -Now when `ansible-pull` is run, it will set up a new cron job that will be run as the `ansible` user every 10 minutes. But, having an Ansible job running every 10 minutes isn't ideal because it will take considerable CPU power. It really doesn't make sense for Ansible to run every 10 minutes unless we've changed something in the Git repository. - -However, we've already solved this problem. Notice the `-o` option I added to the `ansible-pull` command in the cron job that we've never used before. This option tells Ansible to run only if the repository has changed since the last time `ansible-pull` was called. If the repository hasn't changed, it won't do anything. This way, you're not wasting valuable CPU for no reason. Sure, some CPU will be used when it pulls down the repository, but not nearly as much as it would use if it were applying your entire configuration all over again. When `ansible-pull` does run, it will go through all the tasks in the Playbook and taskbooks, but at least it won't run for no purpose. - -Although we've added all the required components to automate `ansible-pull`, it actually won't work properly yet. The `ansible-pull` command will run with `sudo`, which would give it access to perform system-level commands. However, our `ansible` user is not set up to perform tasks as `sudo`. Therefore, when the cron job triggers, it will fail. Normally we could just use `visudo` and manually set the `ansible` user up to have this access. However, we should do things the Ansible way from now on, and this is a great opportunity to show you how the `copy` module works. The `copy` module allows you to copy a file from your Ansible repository to somewhere else in the filesystem. In our case, we'll copy a config file for `sudo` to `/etc/sudoers.d/` so that the `ansible` user can perform administrative tasks. - -Open up the `users.yml` taskbook, and add the following play to the bottom: -``` -- name: copy sudoers_ansible - -  copy: src=files/sudoers_ansible dest=/etc/sudoers.d/ansible owner=root group=root mode=0440 - -``` - -The `copy` module, as we can see, copies a file from our repository to somewhere else. In this case, we're grabbing a file named `sudoers_ansible` (which we will create shortly) and copying it to `/etc/sudoers.d/ansible` with `root` as the owner. - -Next, we need to create the file that we'll be copying. In the root of your Ansible repository, create a `files` directory:​ -``` -mkdir files - -``` - -Then, in the `files` directory we just created, create the `sudoers_ansible` file with the following content: -``` -ansible ALL=(ALL) NOPASSWD: ALL - -``` - -Creating a file in `/etc/sudoers.d`, like we're doing here, allows us to configure `sudo` for a specific user. Here we're allowing the `ansible` user full access to the system via `sudo` without a password prompt. This will allow `ansible-pull` to run as a background task without us needing to run it manually. - -Now, you can run `ansible-pull` again to pull down the latest changes: -``` -sudo ansible-pull -U https://github.com//ansible.git - -``` - -From this point forward, the cron job for `ansible-pull` will run every 10 minutes in the background and check your repository for changes. 
If it finds changes, it will run your playbook and apply your taskbooks. - -So now we have a fully working solution. When you first set up a new laptop or desktop, you'll run the `ansible-pull` command manually, but only the first time. From that point forward, the `ansible` user will take care of subsequent runs in the background. When you want to make a change to your workstation machines, you simply pull down your Git repository, make the changes, then push those changes back to the repository. Then, the next time the cron job fires on each machine, it will pull down those changes and apply them. You now only have to make changes once, and all your workstations will follow suit. This method may be a bit unconventional though. Normally, you'd have an `inventory` file with your machines listed and several roles each machine could be a member of. However, the `ansible-pull` method, as described in this article, is a very efficient way of managing workstation configuration. - -I have updated the code in my [GitHub repository][2] for this article, so feel free to browse the code there and check your syntax against mine. Also, I moved the code from the previous article into its own directory in that repository. - -In part 3, we'll close out the series by using Ansible to configure GNOME desktop settings. I'll show you how to set your wallpaper and lock screen, apply a desktop theme, and more. - -In the meantime, it's time for a little homework assignment. Most of us have configuration files we like to maintain for various applications we use. This could be configuration files for Bash, Vim, or whatever tools you use. I challenge you now to automate copying these configuration files to your machines via the Ansible repository we've been working on. In this article, I've shown you how to copy a file, so take a look at that and see if you can apply that knowledge to your personal files. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/manage-your-workstation-configuration-ansible-part-2 - -作者:[Jay LaCroix][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jlacroix -[1]:https://opensource.com/article/18/3/manage-workstation-ansible -[2]:https://github.com/jlacroix82/ansible_article.git diff --git a/sources/tech/20180328 What NASA Has Been Doing About Open Science.md b/sources/tech/20180328 What NASA Has Been Doing About Open Science.md deleted file mode 100644 index 96d3aaa4a0..0000000000 --- a/sources/tech/20180328 What NASA Has Been Doing About Open Science.md +++ /dev/null @@ -1,114 +0,0 @@ -What NASA Has Been Doing About Open Science -====== -![][1] - -We have recently started a new [Science category][2] on It’s FOSS. We covered [how open source approach is impacting Science][3] in the last article. In this open science article, we discuss [NASA][4]‘s actively growing efforts that involve their dynamic role in boosting scientific research by encouraging open source practices. - -### How NASA is using Open Source approach to improve science - -It was a great [initiative][5] by NASA that they made their entire research library freely available on the public domain. - -Yes! Entire research library for everyone to access and get benefit from it in their research. 
- -Their open science resources can now be mainly classified into these three categories as follows: - - * Open Source NASA - * Open API - * Open Data - - - -#### 1\. Open Source NASA - -Here’s an interesting interview of [Chris Wanstrath][6], co-founder and CEO of [GitHub][7], about how it all began to form many years ago: - -Uniquely named “[code.nasa.gov][8]“, NASA now has precisely 365 scientific software available as [open source via GitHub][9] as of the time of this post. So if you are a developer who loves coding, you can study each one of them every day for a year’s time! - -Even if you are not a developer, you can still browse through the fantastic collection of open source packages enlisted on the portal! - -One of the interesting open source packages listed here is the source code of [Apollo 11][10]‘s guidance computer. The Apollo 11 spaceflight took [the first two humans to the Moon][11], namely, [Neil Armstrong][12] and [Edwin Aldrin][13] ! If you want to know more about Edwin Aldrin, you might want to pay a visit [here][14]. - -##### Licenses being used by NASA’s Open Source Initiative: - -Here are the different [open source licenses][15] categorized as under: - -#### 2\. Open API - -An Open [Application Programming Interface][16] or API plays a significant role in practicing Open Science. Just like [The Open Source Initiative][17], there is also one for API, called [The Open API Initiative][18]. Here is a simple illustration of how an API bridges an application with its developer: - -![][19] - -Do check out the link in the caption in the image above. The API has been explained in a straightforward manner. It concludes with five exciting takeaways in the end. - -![][20] - -Makes one wonder how different [an open vs a proprietary API][21] would be. - -![][22] - -Targeted towards application developers, [NASA’s open API][23] is an initiative to significantly improve the accessibility of data that could also contain image content. The site has a live editor, allowing you check out the [API][16] behind [Astronomy Picture of the Day (APOD)][24]. - -#### 3\. Open Data - -![][25] - -In [our first science article][3], we shared with you the various open data models of three countries mentioned under the “Open Science” section, namely, France, India and the U.S. NASA also has a similar approach towards the same idea. This is a very important ideology that is being adopted by [many countries][26]. - -[NASA’s Open Data Portal][27] focuses on openness by having an ever-growing catalog of data, available for anyone to access freely. The inclusion of datasets within this collection is an essential and radical step towards the development of research of any kind. NASA have even taken a fantastic initiative to ask for dataset suggestions for submission on their portal and that’s really very innovative, considering the growing trends of [data science][28], [AI and deep learning][29]. - -The following video shows students and scientists coming up with their own definitions of Data Science based on their personal experience and research. That is really encouraging! [Dr. Murtaza Haider][30], Ted Rogers School of Management, Ryerson University, mentions the difference Open Source is making in the field of Data Science before the video ends. He explains in very simple terms, how development models transitioned from a closed source approach to an open one. The vision has proved to be sufficiently true enough in today’s time. - -![][31] - -Now anyone can suggest a dataset of any kind on NASA. 
Coming back to the video above, NASA’s initiative can be related so much with submitting datasets and working on analyzing them for better Data Science! - -![][32] - -You just need to signup for free. This initiative will have a very positive effect in the future, considering open discussions on the public forum and the significance of datasets in every type of analytical field that could exist. Statistical studies will also significantly improve for sure. We will talk about these concepts in detail in a future article and also about their relativeness to an open source model. - -So thus concludes an exploration into NASA’s open science model. See you soon in another open science article! - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/nasa-open-science/ - -作者:[Avimanyu Bandyopadhyay][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/avimanyu/ -[1]:https://itsfoss.com/wp-content/uploads/2018/03/tux-in-space.jpg -[2]:https://itsfoss.com/category/science/ -[3]:https://itsfoss.com/open-source-impact-on-science/ -[4]:https://www.nasa.gov/ -[5]:https://futurism.com/free-science-nasa-just-opened-its-entire-research-library-to-the-public/ -[6]:http://chriswanstrath.com/ -[7]:https://github.com/ -[8]:http://code.nasa.gov -[9]:https://github.com/open-source -[10]:https://www.nasa.gov/mission_pages/apollo/missions/apollo11.html -[11]:https://www.space.com/16758-apollo-11-first-moon-landing.html -[12]:https://www.jsc.nasa.gov/Bios/htmlbios/armstrong-na.html -[13]:https://www.jsc.nasa.gov/Bios/htmlbios/aldrin-b.html -[14]:https://buzzaldrin.com/the-man/ -[15]:https://itsfoss.com/open-source-licenses-explained/ -[16]:https://en.wikipedia.org/wiki/Application_programming_interface -[17]:https://opensource.org/ -[18]:https://www.openapis.org/ -[19]:https://itsfoss.com/wp-content/uploads/2018/03/api-bridge.jpeg -[20]:https://itsfoss.com/wp-content/uploads/2018/03/open-api-diagram.jpg -[21]:http://www.apiacademy.co/resources/api-strategy-lesson-201-private-apis-vs-open-apis/ -[22]:https://itsfoss.com/wp-content/uploads/2018/03/nasa-open-api-live-example.jpg -[23]:https://api.nasa.gov/ -[24]:https://apod.nasa.gov/apod/astropix.html -[25]:https://itsfoss.com/wp-content/uploads/2018/03/nasa-open-data-portal.jpg -[26]:https://www.xbrl.org/the-standard/why/ten-countries-with-open-data/ -[27]:https://data.nasa.gov/ -[28]:https://en.wikipedia.org/wiki/Data_science -[29]:https://www.kdnuggets.com/2017/07/ai-deep-learning-explained-simply.html -[30]:https://www.ryerson.ca/tedrogersschool/bm/programs/real-estate-management/murtaza-haider/ -[31]:https://itsfoss.com/wp-content/uploads/2018/03/suggest-dataset-nasa-1.jpg -[32]:https://itsfoss.com/wp-content/uploads/2018/03/suggest-dataset-nasa-2-1.jpg diff --git a/sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md b/sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md deleted file mode 100644 index 5f409956f7..0000000000 --- a/sources/tech/20180329 Python ChatOps libraries- Opsdroid and Errbot.md +++ /dev/null @@ -1,241 +0,0 @@ -Python ChatOps libraries: Opsdroid and Errbot -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd) -This article was co-written with [Lacey Williams 
Henschel][1]. - -ChatOps is conversation-driven development. The idea is you can write code that is executed in response to something typed in a chat window. As a developer, you could use ChatOps to merge pull requests from Slack, automatically assign a support ticket to someone from a received Facebook message, or check the status of a deployment through IRC. - -In the Python world, the most widely used ChatOps libraries are Opsdroid and Errbot. In this month's Python column, let's chat about what it's like to use them, what each does well, and how to get started with them. - -### Opsdroid - -[Opsdroid][2] is a relatively young (since 2016) open source chatbot library written in Python. It has good documentation, a great tutorial, and includes plugins to help you connect to popular chat services. - -#### What's built in - -The library itself doesn't ship with everything you need to get started, but this is by design. The lightweight framework encourages you to enable its existing connectors (what Opsdroid calls the plugins that help you connect to chat services) or write your own, but it doesn't weigh itself down by shipping with connectors you may not need. You can easily enable existing Opsdroid connectors for: - - -+ The command line -+ Cisco Spark -+ Facebook -+ GitHub -+ Matrix -+ Slack -+ Telegram -+ Twitter -+ Websockets - - -Opsdroid calls the functions the chatbot performs "skills." Skills are `async` Python functions and use Opsdroid's matching decorators, called "matchers." You can configure your Opsdroid project to use skills from the same codebase your configuration file is in or import skills from outside public or private repositories. - -You can enable some existing Opsdroid skills as well, including [seen][3], which tells you when a specific user was last seen by the bot, and [weather][4], which will report the weather to the user. - -Finally, Opdroid allows you to configure databases using its existing database modules. Current databases with Opsdroid support include: - - -+ Mongo -+ Redis -+ SQLite - - -You configure databases, skills, and connectors in the `configuration.yaml` file in your Opsdroid project. - -#### Opsdroid pros - -**Docker support:** Opsdroid is meant to work well in Docker from the get-go. Docker instructions are part of its [installation documentation][5]. Using Opsdroid with Docker Compose is also simple: Set up Opsdroid as a service and when you run `docker-compose up`, your Opsdroid service will start and your chatbot will be ready to chat. -``` -version: "3" - - - -services: - -  opsdroid: - -    container_name: opsdroid - -    build: - -      context: . - -      dockerfile: Dockerfile - -``` - -**Lots of connectors:** Opsdroid supports nine connectors to services like Slack and GitHub out of the box; all you need to do is enable those connectors in your configuration file and pass necessary tokens or API keys. For example, to enable Opsdroid to post in a Slack channel named `#updates`, add this to the `connectors` section of your configuration file: -``` - - name: slack - -    api-token: "this-is-my-token" - -    default-room: "#updates" - -``` - -You will have to [add a bot user][6] to your Slack workspace before configuring Opsdroid to connect to Slack. - -If you need to connect to a service that Opsdroid does not support, there are instructions for adding your own connectors in the [docs][7]. - -**Pretty good docs.** Especially for a young-ish library in active development, Opsdroid's docs are very helpful. 
The docs include a [tutorial][8] that leads you through creating a couple of different basic skills. The Opsdroid documentation on [skills][9], [connectors][7], [databases][10], and [matchers][11] is also clear. - -The repositories for its supported skills and connectors provide helpful example code for when you start writing your own custom skills and connectors. - -**Natural language processing:** Opsdroid supports regular expressions for its skills, but also several NLP APIs, including [Dialogflow][12], [luis.ai][13], [Recast.AI][14], and [wit.ai][15]. - -#### Possible Opsdroid concern - -Opsdroid doesn't yet enable the full features of some of its connectors. For example, the Slack API allows you to add color bars, images, and other "attachments" to your message. The Opsdroid Slack connector doesn't enable the "attachments" feature, so you would need to write a custom Slack connector if those features were important to you. If a connector is missing a feature you need, though, Opsdroid would welcome your [contribution][16]. The docs could use some more examples, especially of expected use cases. - -#### Example usage - -`hello/__init__.py` -``` -from opsdroid.matchers import match_regex - -import random - - - - - -@match_regex(r'hi|hello|hey|hallo') - -async def hello(opsdroid, config, message): - -    text = random.choice(["Hi {}", "Hello {}", "Hey {}"]).format(message.user) - -    await message.respond(text) - -``` - -`configuration.yaml` -``` -connectors: - -  - name: websocket - - - -skills: - - - -  - name: hello - -    repo: "https://github.com//hello-skill" - -``` - -### Errbot - -[Errbot][17] is a batteries-included open source chatbot. Errbot was released in 2012 and has everything anyone would expect from a mature project, including good documentation, a great tutorial, and plenty of plugins to help you connect to existing popular chat services. - -#### What's built in - -Unlike Opsdroid, which takes a more lightweight approach, Errbot ships with everything you need to build a customized bot safely. - -Errbot includes support for XMPP, IRC, Slack, Hipchat, and Telegram services natively. It lists support for 10 other services through community-supplied backends. - -#### Errbot pros - -**Good docs:** Errbot's docs are mature and easy to use. - -**Dynamic plugin architecture:** Errbot allow you to securely install, uninstall, update, enable, and disable plugins by chatting with the bot. This makes development and adding features easy. For the security conscious, this can all be locked down thanks to Errbot's granular permission system. - -Errbot uses your plugin docstrings to generate documentation for available commands when someone types `!help`, which makes it easier to know what each command does. - -**Built-in administration and security:** Errbot allows you to restrict lists of users who have administrative rights and even has fine-grained access controls. For example, you can restrict which commands may be called by specific users and/or specific rooms. - -**Extensive plugin framework:** Errbot supports hooks, callbacks, subcommands, webhooks, polling, and many [more features][18]. If those aren't enough, you can even write [Dynamic plugins][19]. This feature is useful if you want to enable chat commands based on what commands are available on a remote server. - -**Ships with a testing framework:** Errbot supports [pytest][20] and ships with some useful utilities that make testing your plugins easy and possible. 
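For a sense of what that looks like in practice, a plugin test is usually just a small pytest module. The sketch below is illustrative only — it assumes the `Hello` plugin from the example usage further down sits in the current directory and answers a `!hello` command:

```
# Illustrative test module for the Hello plugin
pytest_plugins = ["errbot.backends.test"]  # pulls in Errbot's pytest helpers
extra_plugin_dir = "."                     # where the plugin under test lives

def test_hello_replies_with_a_greeting(testbot):
    testbot.push_message("!hello")   # simulate a user sending the command
    reply = testbot.pop_message()    # read the bot's reply off the queue
    assert any(greeting in reply for greeting in ("Hi", "Hello", "Hey"))
```

The `testbot` fixture runs the exchange against Errbot's dummy in-memory backend, so no real chat service is involved.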
Its "[testing your plugins][21]" docs are well thought out and provide enough to get started. - -#### Possible Errbot concerns - -**Initial !:** By default, Errbot commands are issued starting with an exclamation mark (`!help` and `!hello`). Some people may like this, but others may find it annoying. Thankfully, this is easy to turn off. - -**Plugin metadata:** At first, Errbot's [Hello World][22] plugin example seems easy to use. However, I couldn't get my plugin to load until I read further into the tutorial and discovered that I also needed a `.plug` file, a file Errbot uses to load plugins. This is a pretty minor nitpick, but it wasn't obvious to me until I dug further into the docs. - -#### Example usage - -`hello.py` -``` -import random - -from errbot import BotPlugin, botcmd - - - -class Hello(BotPlugin): - - - -    @botcmd - -    def hello(self, msg, args): - -        text = random.choice(["Hi {}", "Hello {}", "Hey {}"]).format(message.user) - -        return text - -``` - -`hello.plug` -``` -[Core] - -Name = Hello - -Module = hello - - - -[Python] - -Version = 2+ - - - -[Documentation] - -Description = Example "Hello" plugin - -``` - -Have you used Errbot or Opsdroid? If so, please leave a comment with your impressions on these tools. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/python-chatops-libraries-opsdroid-and-errbot - -作者:[Jeff Triplett][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/laceynwilliams -[1]:https://opensource.com/users/laceynwilliams -[2]:https://opsdroid.github.io/ -[3]:https://github.com/opsdroid/skill-seen -[4]:https://github.com/opsdroid/skill-weather -[5]:https://opsdroid.readthedocs.io/en/stable/#docker -[6]:https://api.slack.com/bot-users -[7]:https://opsdroid.readthedocs.io/en/stable/extending/connectors/ -[8]:https://opsdroid.readthedocs.io/en/stable/tutorials/introduction/ -[9]:https://opsdroid.readthedocs.io/en/stable/extending/skills/ -[10]:https://opsdroid.readthedocs.io/en/stable/extending/databases/ -[11]:https://opsdroid.readthedocs.io/en/stable/matchers/overview/ -[12]:https://opsdroid.readthedocs.io/en/stable/matchers/dialogflow/ -[13]:https://opsdroid.readthedocs.io/en/stable/matchers/luis.ai/ -[14]:https://opsdroid.readthedocs.io/en/stable/matchers/recast.ai/ -[15]:https://opsdroid.readthedocs.io/en/stable/matchers/wit.ai/ -[16]:https://opsdroid.readthedocs.io/en/stable/contributing/ -[17]:http://errbot.io/en/latest/ -[18]:http://errbot.io/en/latest/features.html#extensive-plugin-framework -[19]:http://errbot.io/en/latest/user_guide/plugin_development/dynaplugs.html -[20]:http://pytest.org/ -[21]:http://errbot.io/en/latest/user_guide/plugin_development/testing.html -[22]:http://errbot.io/en/latest/index.html#simple-to-build-upon diff --git a/sources/tech/20180331 Emacs -4- Automated emails to org-mode and org-mode syncing.md b/sources/tech/20180331 Emacs -4- Automated emails to org-mode and org-mode syncing.md deleted file mode 100644 index 4efe606f51..0000000000 --- a/sources/tech/20180331 Emacs -4- Automated emails to org-mode and org-mode syncing.md +++ /dev/null @@ -1,72 +0,0 @@ -Emacs #4: Automated emails to org-mode and org-mode syncing -====== -This is fourth in [a series on Emacs and org-mode][1]. - -Hopefully by now you’ve started to see how powerful and useful org-mode is. 
If you’re like me, you’re thinking: - -“I’d really like to have this in sync across all my devices.” - -and, perhaps: - -“Can I forward emails into org-mode?” - -This being Emacs, the answers, of course, are “Yes.” - -### Syncing - -Since org-mode just uses text files, syncing is pretty easily accomplished using any number of tools. I use git with git-remote-gcrypt. Due to some limitations of git-remote-gcrypt, each machine tends to push to its own branch, and to master on command. Each machine merges from all the other branches and pushes the result to master after a merge. A cron job causes pushes to the machine’s branch to happen, and a bit of elisp coordinates it all — making sure to save buffers before a sync, refresh them from disk after, etc. - -The code for this post is somewhat more extended, so I will be linking to it on github rather than posting inline. - -I have a directory $HOME/org where all my org-stuff lives. In ~/org lives [a Makefile][2] that handles the syncing. It defines these targets: - - * push: adds, commits, and pushes to a branch named after the machine’s hostname - * fetch: does a simple git fetch - * sync: adds, commits, pulls remote changes, merges, and (assuming the merge was successful) pushes to the branch named after the machine’s hostname plus master - - - -Now, in my user’s crontab, I have this: -``` -*/15 * * * * make -C $HOME/org push fetch 2>&1 | logger --tag 'orgsync' - -``` - -The [accompanying elisp code][3] defines a shortcut (C-c s) to cause a sync to occur. Thanks to the cronjob, as long as files were saved — even if I didn’t explicitly sync on the other boxen — they’ll be pulled in. - -I have found this setup to work really well. - -### Emailing to org-mode - -Before going down this path, one should ask the question: do you really need it? I use org-mode with mu4e, and the integration is excellent; any org task can link to an email by message-id, and this is ideal — it lets a person do things like make a reminder to reply to a message in a week. - -However, org is not just about reminders. It’s also a knowledge base, authoring system, etc. And, not all of my mail clients use mu4e. (Note: things like MobileOrg exist for mobile devices). I don’t actually use this as much as I thought I would, but it has its uses and I thought I’d document it here too. - -Now I didn’t want to just be able to accept plain text email. I wanted to be able to handle attachments, HTML mail, etc. This quickly starts to sound problematic — but with tools like ripmime and pandoc, it’s not too bad. - -The first step is to set up some way to get mail into a specific folder. A plus-extension, special user, whatever. I then use a [fetchmail configuration][4] to pull it down and run my [insorgmail][5] script. - -This script is where all the interesting bits happen. It starts with ripmime to process the message. HTML bits are converted from HTML to org format using pandoc. an org hierarchy is made to represent the structure of the email as best as possible. emails can get pretty complicated, with HTML and the rest, but I have found this does an acceptable job with my use cases. - -### Up next… - -My last post on org-mode will talk about using it to write documents and prepare slides — a use for which I found myself surprisingly pleased with it, but which needed a bit of tweaking. 
- - - --------------------------------------------------------------------------------- - -via: http://changelog.complete.org/archives/9898-emacs-4-automated-emails-to-org-mode-and-org-mode-syncing - -作者:[John Goerzen][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://changelog.complete.org/ -[1]:https://changelog.complete.org/archives/tag/emacs2018 -[2]:https://github.com/jgoerzen/public-snippets/blob/master/emacs/org-tools/Makefile -[3]:https://github.com/jgoerzen/public-snippets/blob/master/emacs/org-tools/emacs-config.org -[4]:https://github.com/jgoerzen/public-snippets/blob/master/emacs/org-tools/fetchmailrc.orgmail -[5]:https://github.com/jgoerzen/public-snippets/blob/master/emacs/org-tools/insorgmail diff --git a/sources/tech/20180402 An introduction to the Flask Python web app framework.md b/sources/tech/20180402 An introduction to the Flask Python web app framework.md new file mode 100644 index 0000000000..4b07338bc5 --- /dev/null +++ b/sources/tech/20180402 An introduction to the Flask Python web app framework.md @@ -0,0 +1,451 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (An introduction to the Flask Python web app framework) +[#]: via: (https://opensource.com/article/18/4/flask) +[#]: author: (Nicholas Hunt-Walker https://opensource.com/users/nhuntwalker) +[#]: url: ( ) + +An introduction to the Flask Python web app framework +====== +In the first part in a series comparing Python frameworks, learn about Flask. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd) + +If you're developing a web app in Python, chances are you're leveraging a framework. A [framework][1] "is a code library that makes a developer's life easier when building reliable, scalable, and maintainable web applications" by providing reusable code or extensions for common operations. There are a number of frameworks for Python, including [Flask][2], [Tornado][3], [Pyramid][4], and [Django][5]. New Python developers often ask: Which framework should I use? + + * New visitors to the site should be able to register new accounts. + * Registered users can log in, log out, see information for their profiles, and edit their information. + * Registered users can create new task items, see their existing tasks, and edit existing tasks. + + + +This series is designed to help developers answer that question by comparing those four frameworks. To compare their features and operations, I'll take each one through the process of constructing an API for a simple To-Do List web application. The API is itself fairly straightforward: + +All this rounds out to a compact set of API endpoints that each backend must implement, along with the allowed HTTP methods: + + * `GET /` + * `POST /accounts` + * `POST /accounts/login` + * `GET /accounts/logout` + * `GET, PUT, DELETE /accounts/` + * `GET, POST /accounts//tasks` + * `GET, PUT, DELETE /accounts//tasks/` + + + +Each framework has a different way to put together its routes, models, views, database interaction, and overall application configuration. I'll describe those aspects of each framework in this series, which will begin with Flask. + +### Flask startup and configuration + +Like most widely used Python libraries, the Flask package is installable from the [Python Package Index][6] (PPI). 
First create a directory to work in (something like `flask_todo` is a fine directory name) then install the `flask` package. You'll also want to install `flask-sqlalchemy` so your Flask application has a simple way to talk to a SQL database. + +I like to do this type of work within a Python 3 virtual environment. To get there, enter the following on the command line: + +``` +$ mkdir flask_todo +$ cd flask_todo +$ pipenv install --python 3.6 +$ pipenv shell +(flask-someHash) $ pipenv install flask flask-sqlalchemy +``` + +If you want to turn this into a Git repository, this is a good place to run `git init`. It'll be the root of the project, and if you want to export the codebase to a different machine, it will help to have all the necessary setup files here. + +A good way to get moving is to turn the codebase into an installable Python distribution. At the project's root, create `setup.py` and a directory called `todo` to hold the source code. + +The `setup.py` should look something like this: + +``` +from setuptools import setup, find_packages + +requires = [ +    'flask', +    'flask-sqlalchemy', +    'psycopg2', +] + +setup( +    name='flask_todo', +    version='0.0', +    description='A To-Do List built with Flask', +    author='', +    author_email='', +    keywords='web flask', +    packages=find_packages(), +    include_package_data=True, +    install_requires=requires +) +``` + +This way, whenever you want to install or deploy your project, you'll have all the necessary packages in the `requires` list. You'll also have everything you need to set up and install the package in `site-packages`. For more information on how to write an installable Python distribution, check out [the docs on setup.py][7]. + +Within the `todo` directory containing your source code, create an `app.py` file and a blank `__init__.py` file. The `__init__.py` file allows you to import from `todo` as if it were an installed package. The `app.py` file will be the application's root. This is where all the `Flask` application goodness will go, and you'll create an environment variable that points to that file. If you're using `pipenv` (like I am), you can locate your virtual environment with `pipenv --venv` and set up that environment variable in your environment's `activate` script. + +``` +# in your activate script, probably at the bottom (but anywhere will do) + +export FLASK_APP=$VIRTUAL_ENV/../todo/app.py +export DEBUG='True' +``` + +When you installed `Flask`, you also installed the `flask` command-line script. Typing `flask run` will prompt the virtual environment's Flask package to run an HTTP server using the `app` object in whatever script the `FLASK_APP` environment variable points to. The script above also includes an environment variable named `DEBUG` that will be used a bit later. + +Let's talk about this `app` object. + +In `todo/app.py`, you'll create an `app` object, which is an instance of the `Flask` object. It'll act as the central configuration object for the entire application. It's used to set up pieces of the application required for extended functionality, e.g., a database connection and help with authentication. + +It's regularly used to set up the routes that will become the application's points of interaction. To explain what this means, let's look at the code it corresponds to. + +``` +from flask import Flask + +app = Flask(__name__) + +@app.route('/') +def hello_world(): +    """Print 'Hello, world!' as the response body.""" +    return 'Hello, world!' 
+``` + +This is the most basic complete Flask application. `app` is an instance of `Flask`, taking in the `__name__` of the script file. This lets Python know how to import from files relative to this one. The `app.route` decorator decorates the first **view** function; it can specify one of the routes used to access the application. (We'll look at this later.) + +Any view you specify must be decorated by `app.route` to be a functional part of the application. You can have as many functions as you want scattered across the application, but in order for that functionality to be accessible from anything external to the application, you must decorate that function and specify a route to make it into a view. + +In the example above, when the app is running and accessed at `http://domainname/`, a user will receive `"Hello, World!"` as a response. + +### Connecting the database in Flask + +While the code example above represents a complete Flask application, it doesn't do anything interesting. One interesting thing a web application can do is persist user data, but it needs the help of and connection to a database. + +Flask is very much a "do it yourself" web framework. This means there's no built-in database interaction, but the `flask-sqlalchemy` package will connect a SQL database to a Flask application. The `flask-sqlalchemy` package needs just one thing to connect to a SQL database: The database URL. + +Note that a wide variety of SQL database management systems can be used with `flask-sqlalchemy`, as long as the DBMS has an intermediary that follows the [DBAPI-2 standard][8]. In this example, I'll use PostgreSQL (mainly because I've used it a lot), so the intermediary to talk to the Postgres database is the `psycopg2` package. Make sure `psycopg2` is installed in your environment and include it in the list of required packages in `setup.py`. You don't have to do anything else with it; `flask-sqlalchemy` will recognize Postgres from the database URL. + +Flask needs the database URL to be part of its central configuration through the `SQLALCHEMY_DATABASE_URI` attribute. A quick and dirty solution is to hardcode a database URL into the application. + +``` +# top of app.py +from flask import Flask +from flask_sqlalchemy import SQLAlchemy + +app = Flask(__name__) +app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres://localhost:5432/flask_todo' +db = SQLAlchemy(app) +``` + +However, this is not a sustainable solution. If you change databases or don't want your database URL visible in source control, you'll have to take extra steps to ensure your information is appropriate for the environment. + +You can make things simpler by using environment variables. They will ensure that, no matter what machine the code runs on, it always points at the right stuff if that stuff is configured in the running environment. It also ensures that, even though you need that information to run the application, it never shows up as a hardcoded value in source control. + +In the same place you declared `FLASK_APP`, declare a `DATABASE_URL` pointing to the location of your Postgres database. Development tends to work locally, so just point to your local database. + +``` +# also in your activate script + +export DATABASE_URL='postgres://localhost:5432/flask_todo' +``` + +Now in `app.py`, include the database URL in your app configuration. + +``` +app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', '') +db = SQLAlchemy(app) +``` + +And just like that, your application has a database connection! 
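If you want to confirm the URL was actually picked up before moving on, one quick, optional check is to open a `flask shell` session and inspect the configuration — a small sketch, assuming the environment variables above are exported and the module layout matches this article:

```
# Run inside `flask shell`; purely illustrative.
from todo.app import app, db

print(app.config['SQLALCHEMY_DATABASE_URI'])  # should echo your DATABASE_URL
print(db.engine.url)                          # the engine flask-sqlalchemy built from it
```

(The Postgres database itself still needs to exist, of course — `createdb flask_todo`, or whatever tool you prefer, takes care of that.)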
+ +### Defining objects in Flask + +Having a database to talk to is a good first step. Now it's time to define some objects to fill that database. + +In application development, a "model" refers to the data representation of some real or conceptual object. For example, if you're building an application for a car dealership, you may define a `Car` model that encapsulates all of a car's attributes and behavior. + +In this case, you're building a To-Do List with Tasks, and each Task belongs to a User. Before you think too deeply about how they're related to each other, start by defining objects for Tasks and Users. + +The `flask-sqlalchemy` package leverages [SQLAlchemy][9] to set up and inform the database structure. You'll define a model that will live in the database by inheriting from the `db.Model` object and define the attributes of those models as `db.Column` instances. For each column, you must specify a data type, so you'll pass that data type into the call to `db.Column` as the first argument. + +Because the model definition occupies a different conceptual space than the application configuration, make `models.py` to hold model definitions separate from `app.py`. The Task model should be constructed to have the following attributes: + + * `id`: a value that's a unique identifier to pull from the database + * `name`: the name or title of the task that the user will see when the task is listed + * `note`: any extra comments that a person might want to leave with their task + * `creation_date`: the date and time the task was created + * `due_date`: the date and time the task is due to be completed (if at all) + * `completed`: a way to indicate whether or not the task has been completed + + + +Given this attribute list for Task objects, the application's `Task` object can be defined like so: + +``` +from .app import db +from datetime import datetime + +class Task(db.Model): +    """Tasks for the To Do list.""" +    id = db.Column(db.Integer, primary_key=True) +    name = db.Column(db.Unicode, nullable=False) +    note = db.Column(db.Unicode) +    creation_date = db.Column(db.DateTime, nullable=False) +    due_date = db.Column(db.DateTime) +    completed = db.Column(db.Boolean, default=False) + +    def __init__(self, *args, **kwargs): +        """On construction, set date of creation.""" +        super().__init__(*args, **kwargs) +        self.creation_date = datetime.now() +``` + +Note the extension of the class constructor method. At the end of the day, any model you construct is still a Python object and therefore must go through construction in order to be instantiated. It's important to ensure that the creation date of the model instance reflects its actual date of creation. You can explicitly set that relationship by effectively saying, "when an instance of this model is constructed, record the date and time and set it as the creation date." + +### Model relationships + +In a given web application, you may want to be able to express relationships between objects. In the To-Do List example, users own multiple tasks, and each task is owned by only one user. This is an example of a "many-to-one" relationship, also known as a foreign key relationship, where the tasks are the "many" and the user owning those tasks is the "one." + +In Flask, a many-to-one relationship can be specified using the `db.relationship` function. First, build the User object. 
+ +``` +class User(db.Model): +    """The User object that owns tasks.""" +    id = db.Column(db.Integer, primary_key=True) +    username = db.Column(db.Unicode, nullable=False) +    email = db.Column(db.Unicode, nullable=False) +    password = db.Column(db.Unicode, nullable=False) +    date_joined = db.Column(db.DateTime, nullable=False) +    token = db.Column(db.Unicode, nullable=False) + +    def __init__(self, *args, **kwargs): +        """On construction, set date of creation.""" +        super().__init__(*args, **kwargs) +        self.date_joined = datetime.now() +        self.token = secrets.token_urlsafe(64) +``` + +It looks very similar to the Task object; you'll find that most objects have the same basic format of class attributes as table columns. Every once in a while, you'll run into something a little different, including some multiple-inheritance magic, but this is the norm. + +Now that the `User` model is created, you can set up the foreign key relationship. For the "many," set fields for the `user_id` of the `User` that owns this task, as well as the `user` object with that ID. Also make sure to include a keyword argument (`back_populates`) that updates the User model when the task gets a user as an owner. + +For the "one," set a field for the `tasks` the specific user owns. Similar to maintaining the two-way relationship on the Task object, set a keyword argument on the User's relationship field to update the Task when it is assigned to a user. + +``` +# on the Task object +user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False) +user = db.relationship("user", back_populates="tasks") + +# on the User object +tasks = db.relationship("Task", back_populates="user") +``` + +### Initializing the database + +Now that the models and model relationships are set, start setting up your database. Flask doesn't come with its own database-management utility, so you'll have to write your own (to some degree). You don't have to get fancy with it; you just need something to recognize what tables are to be created and some code to create them (or drop them should the need arise). If you need something more complex, like handling updates to database tables (i.e., database migrations), you'll want to look into a tool like [Flask-Migrate][10] or [Flask-Alembic][11]. + +Create a script called `initializedb.py` next to `setup.py` for managing the database. (Of course, it doesn't need to be called this, but why not give names that are appropriate to a file's function?) Within `initializedb.py`, import the `db` object from `app.py` and use it to create or drop tables. `initializedb.py` should end up looking something like this: + +``` +from todo.app import db +import os + +if bool(os.environ.get('DEBUG', '')): +    db.drop_all() +db.create_all() +``` + +If a `DEBUG` environment variable is set, drop tables and rebuild. Otherwise, just create the tables once and you're good to go. + +### Views and URL config + +The last bits needed to connect the entire application are the views and routes. In web development, a "view" (in concept) is functionality that runs when a specific access point (a "route") in your application is hit. These access points appear as URLs: paths to functionality in an application that return some data or handle some data that has been provided. The views will be logical structures that handle specific HTTP requests from a given client and return some HTTP response to that client. 
+ +In Flask, views appear as functions; for example, see the `hello_world` view above. For simplicity, here it is again: + +``` +@app.route('/') +def hello_world(): +    """Print 'Hello, world!' as the response body.""" +    return 'Hello, world!' +``` + +When the route of `http://domainname/` is accessed, the client receives the response, "Hello, world!" + +With Flask, a function is marked as a view when it is decorated by `app.route`. In turn, `app.route` adds to the application's central configuration a map from the specified route to the function that runs when that route is accessed. You can use this to start building out the rest of the API. + +Start with a view that handles only `GET` requests, and respond with the JSON representing all the routes that will be accessible and the methods that can be used to access them. + +``` +from flask import jsonify + +@app.route('/api/v1', methods=["GET"]) +def info_view(): +    """List of routes for this API.""" +    output = { +        'info': 'GET /api/v1', +        'register': 'POST /api/v1/accounts', +        'single profile detail': 'GET /api/v1/accounts/', +        'edit profile': 'PUT /api/v1/accounts/', +        'delete profile': 'DELETE /api/v1/accounts/', +        'login': 'POST /api/v1/accounts/login', +        'logout': 'GET /api/v1/accounts/logout', +        "user's tasks": 'GET /api/v1/accounts//tasks', +        "create task": 'POST /api/v1/accounts//tasks', +        "task detail": 'GET /api/v1/accounts//tasks/', +        "task update": 'PUT /api/v1/accounts//tasks/', +        "delete task": 'DELETE /api/v1/accounts//tasks/' +    } +    return jsonify(output) +``` + +Since you want your view to handle one specific type of HTTP request, use `app.route` to add that restriction. The `methods` keyword argument will take a list of strings as a value, with each string a type of possible HTTP method. In practice, you can use `app.route` to restrict to one or more types of HTTP request or accept any by leaving the `methods` keyword argument alone. + +Whatever you intend to return from your view function **must** be a string or an object that Flask turns into a string when constructing a properly formatted HTTP response. The exceptions to this rule are when you're trying to handle redirects and exceptions thrown by your application. What this means for you, the developer, is that you need to be able to encapsulate whatever response you're trying to send back to the client into something that can be interpreted as a string. + +A good structure that contains complexity but can still be stringified is a Python dictionary. Therefore, I recommend that, whenever you want to send some data to the client, you choose a Python `dict` with whatever key-value pairs you need to convey information. To turn that dictionary into a properly formatted JSON response, headers and all, pass it as an argument to Flask's `jsonify` function (`from flask import jsonify`). + +The view function above takes what is effectively a listing of every route that this API intends to handle and sends it to the client whenever the `http://domainname/api/v1` route is accessed. Note that, on its own, Flask supports routing to exactly matching URIs, so accessing that same route with a trailing `/` would create a 404 error. 
If you wanted to handle both with the same view function, you'd need stack decorators like so: + +``` +@app.route('/api/v1', methods=["GET"]) +@app.route('/api/v1/', methods=["GET"]) +def info_view(): +    # blah blah blah more code +``` + +An interesting case is that if the defined route had a trailing slash and the client asked for the route without the slash, you wouldn't need to double up on decorators. Flask would redirect the client's request appropriately. It's odd that it doesn't work both ways. + +### Flask requests and the DB + +At its base, a web framework's job is to handle incoming HTTP requests and return HTTP responses. The previously written view doesn't really have much to do with HTTP requests aside from the URI that was accessed. It doesn't process any data. Let's look at how Flask behaves when data needs handling. + +The first thing to know is that Flask doesn't provide a separate `request` object to each view function. It has **one** global request object that every view function can use, and that object is conveniently named `request` and is importable from the Flask package. + +The next thing is that Flask's route patterns can have a bit more nuance. One scenario is a hardcoded route that must be matched perfectly to activate a view function. Another scenario is a route pattern that can handle a range of routes, all mapping to one view by allowing a part of that route to be variable. If the route in question has a variable, the corresponding value can be accessed from the same-named variable in the view's parameter list. + +``` +@app.route('/a/sample//route) +def some_view(variable): +    # some code blah blah blah +``` + +To communicate with the database within a view, you must use the `db` object that was populated toward the top of the script. Its `session` attribute is your connection to the database when you want to make changes. If you just want to query for objects, the objects built from `db.Model` have their own database interaction layer through the `query` attribute. + +Finally, any response you want from a view that's more complex than a string must be built deliberately. Previously you built a response using a "jsonified" dictionary, but certain assumptions were made (e.g., 200 status code, status message "OK," Content-Type of "text/plain"). Any special sauce you want in your HTTP response must be added deliberately. + +Knowing these facts about working with Flask views allows you to construct a view whose job is to create new `Task` objects. Let's look at the code (below) and address it piece by piece. 
+ +``` +from datetime import datetime +from flask import request, Response +from flask_sqlalchemy import SQLAlchemy +import json + +from .models import Task, User + +app = Flask(__name__) +app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', '') +db = SQLAlchemy(app) + +INCOMING_DATE_FMT = '%d/%m/%Y %H:%M:%S' + +@app.route('/api/v1/accounts//tasks', methods=['POST']) +def create_task(username): +    """Create a task for one user.""" +    user = User.query.filter_by(username=username).first() +    if user: +        task = Task( +            name=request.form['name'], +            note=request.form['note'], +            creation_date=datetime.now(), +            due_date=datetime.strptime(due_date, INCOMING_DATE_FMT) if due_date else None, +            completed=bool(request.form['completed']), +            user_id=user.id, +        ) +        db.session.add(task) +        db.session.commit() +        output = {'msg': 'posted'} +        response = Response( +            mimetype="application/json", +            response=json.dumps(output), +            status=201 +        ) +        return response +``` + +Let's start with the `@app.route` decorator. The route is `'/api/v1/accounts//tasks'`, where `` is a route variable. Put angle brackets around any part of the route you want to be variable, then include that part of the route on the next line in the parameter list **with the same name**. The only parameters that should be in the parameter list should be the variables in your route. + +Next comes the query: + +``` +user = User.query.filter_by(username=username).first() +``` + +To look for one user by username, conceptually you need to look at all the User objects stored in the database and find the users with the username matching the one that was requested. With Flask, you can ask the `User` object directly through the `query` attribute for the instance matching your criteria. This type of query would provide a list of objects (even if it's only one object or none at all), so to get the object you want, just call `first()`. + +``` +task = Task( +    name=request.form['name'], +    note=request.form['note'], +    creation_date=datetime.now(), +    due_date=datetime.strptime(due_date, INCOMING_DATE_FMT) if due_date else None, +    completed=bool(request.form['completed']), +    user_id=user.id, +) +``` + +Whenever data is sent to the application, regardless of the HTTP method used, that data is stored on the `form` attribute of the `request` object. The name of the field on the frontend will be the name of the key mapped to that data in the `form` dictionary. It'll always come in the form of a string, so if you want your data to be a specific data type, you'll have to make it explicit by casting it as the appropriate type. + +The other thing to note is the assignment of the current user's user ID to the newly instantiated `Task`. This is how that foreign key relationship is maintained. + +``` +db.session.add(task) +db.session.commit() +``` + +Creating a new `Task` instance is great, but its construction has no inherent connection to tables in the database. In order to insert a new row into the corresponding SQL table, you must use the `session` attached to the `db` object. The `db.session.add(task)` stages the new `Task` instance to be added to the table, but doesn't add it yet. While it's done only once here, you can add as many things as you want before committing. 
The `db.session.commit()` takes all the staged changes, or "commits," and applies them to the corresponding tables in the database. + +``` +output = {'msg': 'posted'} +response = Response( +    mimetype="application/json", +    response=json.dumps(output), +    status=201 +) +``` + +The response is an actual instance of a `Response` object with its `mimetype`, body, and `status` set deliberately. The goal for this view is to alert the user they created something new. Seeing how this view is supposed to be part of a backend API that sends and receives JSON, the response body must be JSON serializable. A dictionary with a simple string message should suffice. Ensure that it's ready for transmission by calling `json.dumps` on your dictionary, which will turn your Python object into valid JSON. This is used instead of `jsonify`, as `jsonify` constructs an actual response object using its input as the response body. In contrast, `json.dumps` just takes a given Python object and converts it into a valid JSON string if possible. + +By default, the status code of any response sent with Flask will be `200`. That will work for most circumstances, where you're not trying to send back a specific redirection-level or error-level message. Since this case explicitly lets the frontend know when a new item has been created, set the status code to be `201`, which corresponds to creating a new thing. + +And that's it! That's a basic view for creating a new `Task` object in Flask given the current setup of your To-Do List application. Similar views could be constructed for listing, editing, and deleting tasks, but this example offers an idea of how it could be done. + +### The bigger picture + +There is much more that goes into an application than one view for creating new things. While I haven't discussed anything about authorization/authentication systems, testing, database migration management, cross-origin resource sharing, etc., the details above should give you more than enough to start digging into building your own Flask applications. + +Learn more Python at [PyCon Cleveland 2018][12]. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/4/flask + +作者:[Nicholas Hunt-Walker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nhuntwalker +[b]: https://github.com/lujun9972 +[1]: https://www.fullstackpython.com/web-frameworks.html +[2]: http://flask.pocoo.org/ +[3]: http://www.tornadoweb.org/en/stable/ +[4]: https://trypyramid.com/ +[5]: https://www.djangoproject.com/ +[6]: https://pypi.python.org +[7]: https://docs.python.org/3/distutils/setupscript.html +[8]: https://www.python.org/dev/peps/pep-0249/ +[9]: https://www.sqlalchemy.org/ +[10]: https://flask-migrate.readthedocs.io/en/latest/ +[11]: https://flask-alembic.readthedocs.io/en/stable/ +[12]: https://us.pycon.org/2018/ diff --git a/sources/tech/20180404 Emacs -5- Documents and Presentations with org-mode.md b/sources/tech/20180404 Emacs -5- Documents and Presentations with org-mode.md deleted file mode 100644 index 06c8dc8856..0000000000 --- a/sources/tech/20180404 Emacs -5- Documents and Presentations with org-mode.md +++ /dev/null @@ -1,177 +0,0 @@ -Emacs #5: Documents and Presentations with org-mode -====== - -### 1 About org-mode exporting - -#### 1.1 Background - -org-mode isn't just an agenda-making program. It can also export to lots of formats: LaTeX, PDF, Beamer, iCalendar (agendas), HTML, Markdown, ODT, plain text, man pages, and more complicated formats such as a set of web pages. - -This isn't just some afterthought either; it's a core part of the system and integrates very well. - -One file can be source code, automatically-generated output, task list, documentation, and presentation, all at once. - -Some use org-mode as their preferred markup format, even for things like LaTeX documents. The org-mode manual has an extensive [section on exporting][13]. - -#### 1.2 Getting started - -From any org-mode document, just hit C-c C-e. From there will come up a menu, letting you choose various export formats and options. These are generally single-key options so it's easy to set and execute. For instance, to export a document to a PDF, use C-c C-e l p or for HTML export, C-c C-e h h. - -There are lots of settings available for all of these export options; see the manual. It is, in fact, quite possible to use LaTeX-format equations in both LaTeX and HTML modes, to insert arbitrary preambles and settings for different modes, etc. - -#### 1.3 Add-on packages - -ELPA containts many addition exporters for org-mode as well. Check there for details. - -### 2 Beamer slides with org-mode - -#### 2.1 About Beamer - -[Beamer][14] is a LaTeX environment for making presentations. Its features include: - -* Automated generating of structural elements in the presentation (see, for example, [the Marburg theme][1]). This provides a visual reference for the audience of where they are in the presentation. - -* Strong help for structuring the presentation - -* Themes - -* Full LaTeX available - -#### 2.2 Benefits of Beamer in org-mode - -org-mode has a lot of benefits for working with Beamer. Among them: - -* org-mode's very easy and strong support for visualizing and changing the structure makes it very quick to reorganize your material. - -* Combined with org-babel, live source code (with syntax highlighting) and results can be embedded. - -* The syntax is often easier to work with. 
- -I have completely replaced my usage of LibreOffice/Powerpoint/GoogleDocs with org-mode and beamer. It is, in fact, rather frustrating when I have to use one of those tools, as they are nowhere near as strong as org-mode for visualizing a presentation structure. - -#### 2.3 Headline Levels - -org-mode's Beamer export will convert sections of your document (defined by headings) into slides. The question, of course, is: which sections? This is governed by the H [export setting][15] (org-export-headline-levels). - -There are many ways to go, which suit people. I like to have my presentation like this: - -``` -#+OPTIONS: H:2 -#+BEAMER_HEADER: \AtBeginSection{\frame{\sectionpage}} -``` - -This gives a standalone section slide for each major topic, to highlight major transitions, and then takes the level 2 (two asterisks) headings to set the slide. Many Beamer themes expect a third level of indirection, so you would set H:3 for them. - -#### 2.4 Themes and settings - -You can configure many Beamer and LaTeX settings in your document by inserting lines at the top of your org file. This document, for instance, defines: - -``` -#+TITLE: Documents and presentations with org-mode -#+AUTHOR: John Goerzen -#+BEAMER_HEADER: \institute{The Changelog} -#+PROPERTY: comments yes -#+PROPERTY: header-args :exports both :eval never-export -#+OPTIONS: H:2 -#+BEAMER_THEME: CambridgeUS -#+BEAMER_COLOR_THEME: default -``` - -#### 2.5 Advanced settings - -I like to change some colors, bullet formatting, and the like. I round out my document with: - -``` -# We can't just +BEAMER_INNER_THEME: default because that picks the theme default. -# Override per https://tex.stackexchange.com/questions/11168/change-bullet-style-formatting-in-beamer -#+BEAMER_INNER_THEME: default -#+LaTeX_CLASS_OPTIONS: [aspectratio=169] -#+BEAMER_HEADER: \definecolor{links}{HTML}{0000A0} -#+BEAMER_HEADER: \hypersetup{colorlinks=,linkcolor=,urlcolor=links} -#+BEAMER_HEADER: \setbeamertemplate{itemize items}[default] -#+BEAMER_HEADER: \setbeamertemplate{enumerate items}[default] -#+BEAMER_HEADER: \setbeamertemplate{items}[default] -#+BEAMER_HEADER: \setbeamercolor*{local structure}{fg=darkred} -#+BEAMER_HEADER: \setbeamercolor{section in toc}{fg=darkred} -#+BEAMER_HEADER: \setlength{\parskip}{\smallskipamount} -``` - -Here, aspectratio=169 sets a 16:9 aspect ratio, and the remaining are standard LaTeX/Beamer configuration bits. - -#### 2.6 Shrink (to fit) - -Sometimes you've got some really large code examples and you might prefer to just shrink the slide to fit. - -Just type C-c C-x p, set the BEAMER_opt property to shrink=15\. - -(Or a larger value of shrink). The previous slide uses this here. - -#### 2.7 Result - -Here's the end result: - - [![screenshot1](https://farm1.staticflickr.com/889/26366340577_fbde8ff266_o.png)][16] - -### 3 Interactive Slides - -#### 3.1 Interactive Emacs Slideshows - -With the [org-tree-slide package][17], you can display your slideshow from right within Emacs. Just run M-x org-tree-slide-mode. Then, use C-> and C-< to move between slides. - -You might find C-c C-x C-v (which is org-toggle-inline-images) helpful to cause the system to display embedded images. - -#### 3.2 HTML Slideshows - -There are a lot of ways to export org-mode presentations to HTML, with various levels of JavaScript integration. See the [non-beamer presentations section][18] of the org-mode wiki for details. 
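As one concrete (and hedged) example of the HTML route: if the third-party ox-reveal exporter is installed from a package archive (it is not part of stock org-mode), a reveal.js slideshow can be driven by a few header lines, much like the Beamer headers shown earlier, and then exported with M-x org-reveal-export-to-html. The URL below is only an example; a local clone of reveal.js works as well:

```
#+REVEAL_ROOT: https://cdn.jsdelivr.net/npm/reveal.js
#+REVEAL_THEME: black
#+OPTIONS: num:nil toc:nil
```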
- -### 4 Miscellaneous - -#### 4.1 Additional resources to accompany this post - -* [orgmode.org beamer tutorial][2] - -* [LaTeX wiki][3] - -* [Generating section title slides][4] - -* [Shrinking content to fit on slide][5] - -* A great resource: refcard-org-beamer See its [Github repo][6] Make sure to check out both the PDF and the .org file - -* A nice [Theme matrix][7] - -#### 4.2 Up next in my Emacs series… - -mu4e for email! - - --------------------------------------------------------------------------------- - -via: http://changelog.complete.org/archives/9900-emacs-5-documents-and-presentations-with-org-mode - -作者:[John Goerzen][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://changelog.complete.org/archives/author/jgoerzen -[1]:https://hartwork.org/beamer-theme-matrix/all/beamer-albatross-Marburg-1.png -[2]:https://orgmode.org/worg/exporters/beamer/tutorial.html -[3]:https://en.wikibooks.org/wiki/LaTeX/Presentations -[4]:https://tex.stackexchange.com/questions/117658/automatically-generate-section-title-slides-in-beamer/117661 -[5]:https://tex.stackexchange.com/questions/78514/content-doesnt-fit-in-one-slide -[6]:https://github.com/fniessen/refcard-org-beamer -[7]:https://hartwork.org/beamer-theme-matrix/ -[8]:https://changelog.complete.org/archives/tag/emacs2018 -[9]:https://github.com/jgoerzen/public-snippets/blob/master/emacs/emacs-org-beamer/emacs-org-beamer.org -[10]:http://changelog.complete.org/archives/9900-emacs-5-documents-and-presentations-with-org-mode -[11]:https://github.com/jgoerzen/public-snippets/raw/master/emacs/emacs-org-beamer/emacs-org-beamer.pdf -[12]:https://github.com/jgoerzen/public-snippets/raw/master/emacs/emacs-org-beamer/emacs-org-beamer-document.pdf -[13]:https://orgmode.org/manual/Exporting.html#Exporting -[14]:https://en.wikipedia.org/wiki/Beamer_(LaTeX) -[15]:https://orgmode.org/manual/Export-settings.html#Export-settings -[16]:https://www.flickr.com/photos/jgoerzen/26366340577/in/dateposted/ -[17]:https://orgmode.org/worg/org-tutorials/non-beamer-presentations.html#org-tree-slide -[18]:https://orgmode.org/worg/org-tutorials/non-beamer-presentations.html diff --git a/sources/tech/20180406 MX Linux- A Mid-Weight Distro Focused on Simplicity.md b/sources/tech/20180406 MX Linux- A Mid-Weight Distro Focused on Simplicity.md index 8cf35ea8bf..046bf9f0e8 100644 --- a/sources/tech/20180406 MX Linux- A Mid-Weight Distro Focused on Simplicity.md +++ b/sources/tech/20180406 MX Linux- A Mid-Weight Distro Focused on Simplicity.md @@ -1,3 +1,5 @@ +translating by qfzy1233 + MX Linux: A Mid-Weight Distro Focused on Simplicity ====== diff --git a/sources/tech/20180407 12 Best GTK Themes for Ubuntu and other Linux Distributions.md b/sources/tech/20180407 12 Best GTK Themes for Ubuntu and other Linux Distributions.md deleted file mode 100644 index 1bd3f8d0d7..0000000000 --- a/sources/tech/20180407 12 Best GTK Themes for Ubuntu and other Linux Distributions.md +++ /dev/null @@ -1,172 +0,0 @@ -12 Best GTK Themes for Ubuntu and other Linux Distributions -====== -**Brief: Let’s have a look at some of the beautiful GTK themes that you can use not only in Ubuntu but other Linux distributions that use GNOME.** - -For those of us that use Ubuntu proper, the move from Unity to Gnome as the default desktop environment has made theming and customizing easier than ever. 
Gnome has a fairly large tweaking community, and there is no shortage of fantastic GTK themes for users to choose from. With that in mind, I went ahead and found some of my favorite themes that I have come across in recent months. These are what I believe offer some of the best experiences that you can find. - -### Best themes for Ubuntu and other Linux distributions - -This is not an exhaustive list and may exclude some of the themes you already use and love, but hopefully, you find at least one theme that you enjoy that you did not already know about. All themes present should work on any Gnome 3 setup, Ubuntu or not. I lost some screenshots so I have taken images from the official websites. - -The themes listed here are in no particular order. - -But before you see the best GNOME themes, you should learn [how to install themes in Ubuntu GNOME][1]. - -#### 1\. Arc-Ambiance - -![][2] - -Arc and Arc variant themes have been around for quite some time now, and are widely regarded as some of the best themes you can find. In this example, I have selected Arc-Ambiance because of its modern take on the default Ambiance theme in Ubuntu. - -I am a fan of both the Arc theme and the default Ambiance theme, so needless to say, I was pumped when I came across a theme that merged the best of both worlds. If you are a fan of the arc themes but not a fan of this one in particular, Gnome look has plenty of other options that will most certainly suit your taste. - -[Arc-Ambiance Theme][3] - -#### 2\. Adapta Colorpack - -![][4] - -The Adapta theme has been one of my favorite flat themes I have ever found. Like Arc, Adapata is widely adopted by many-a-linux user. I have selected this color pack because in one download you have several options to choose from. In fact, there are 19 to choose from. Yep. You read that correctly. 19! - -So, if you are a fan of the flat/material design language that we see a lot of today, then there is most likely a variant in this theme pack that will satisfy you. - -[Adapta Colorpack Theme][5] - -#### 3\. Numix Collection - -![][6] - -Ah, Numix! Oh, the years we have spent together! For those of us that have been theming our DE for the last couple of years, you must have come across the Numix themes or icon packs at some point in time. Numix was probably the first modern theme for Linux that I fell in love with, and I am still in love with it today. And after all these years, it still hasn’t lost its charm. - -The gray tone throughout the theme, especially with the default pinkish-red highlight color, makes for a genuinely clean and complete experience. You would be hard pressed to find a theme pack as polished as Numix. And in this offering, you have plenty of options to choose from, so go crazy! - -[Numix Collection Theme][7] - -#### 4\. Hooli - -![][8] - -Hooli is a theme that has been out for some time now, but only recently came across my radar. I am a fan of most flat themes but have usually strayed away from themes that come to close to the material design language. Hooli, like Adapta, takes notes from that design language, but does it in a way that I think sets it apart from the rest. The green highlight color is one of my favorite parts about the theme, and it does a good job at not overpowering the entire theme. - -[Hooli Theme][9] - -#### 5\. Arrongin/Telinkrin - -![][10] - -Bonus: Two themes in one! And they are relatively new contenders in the theming realm. 
They both take notes from Ubuntu’s soon to be finished “[communitheme][11]” and bring it to your desktop today. The only real difference I can find between the offerings are the colors. Arrongin is centered around an Ubuntu-esq orange color, while Telinkrin uses a slightly more KDE Breeze-esq blue. I personally prefer the blue, but both are great options! - -[Arrongin/Telinkrin Themes][12] - -#### 6\. Gnome-osx - -![][13] - -I have to admit, usually, when I see that a theme has “osx” or something similar in the title, I don’t expect much. Most Apple inspired themes seem to have so much in common that I can’t really find a reason to use them. There are two themes I can think of that break this mold: the Arc-osc them and the Gnome-osx theme that we have here. - -The reason I like the Gnome-osx theme is because it truly does look at home on the Gnome desktop. It does a great job at blending into the DE without being too flat. So for those of you that enjoy a slightly less flat theme, and you like the red, yellow, and green button scheme for the close, minimize, and maximize buttons, than this theme is perfect for you. - -[Gnome-osx Theme][14] - -#### 7\. Ultimate Maia - -![][15] - -There was a time when I used Manjaro Gnome. Since then I have reverted back to Ubuntu, but one thing I wish I could have brought with me was the Manjaro theme. If you feel the same about the Manjaro theme as I do, then you are in luck because you can bring it to ANY distro you want that is running Gnome! - -The rich green color, the Breeze-esq close, minimize, maximize buttons, and the over-all polish of the theme makes for one compelling option. It even offers some other color variants of you are not a fan of the green. But let’s be honest…who isn’t a fan of that Manjaro green color? - -[Ultimate Maia Theme][16] - -#### 8\. Vimix - -![][17] - -This was a theme I easily got excited about. It is modern, pulls from the macOS red, yellow, green buttons without directly copying them, and tones down the vibrancy of the theme, making for one unique alternative to most other themes. It comes with three dark variants and several colors to choose from so most of us will find something we like. - -[Vimix Theme][18] - -#### 9\. Ant - -![][19] - -Like Vimix, Ant pulls inspiration from macOS for the button colors without directly copying the style. Where Vimix tones down the color options, Ant adds a richness to the colors that looks fantastic on my System 76 Galago Pro screen. The variation between the three theme options is pretty dramatic, and though it may not be to everyone’s taste, it is most certainly to mine. - -[Ant Theme][20] - -#### 10\. Flat Remix - -![][21] - -If you haven’t noticed by this point, I am a sucker for someone who pays attention to the details in the close, minimize, maximize buttons. The color theme that Flat Remix uses is one I have not seen anywhere else, with a red, blue, and orange color way. Add that on top of a theme that looks almost like a mix between Arc and Adapta, and you have Flat Remix. - -I am personally a fan of the dark option, but the light alternative is very nice as well. So if you like subtle transparencies, a cohesive dark theme, and a touch of color here and there, Flat Remix is for you. - -[Flat Remix Theme][22] - -#### 11\. Paper - -![][23] - -[Paper][24] has been around for some time now. I remember using it for the first back in 2014. 
I would say, at this point, Paper is more known for its icon pack than for its GTK theme, but that doesn’t mean that the theme isn’t a wonderful option in and of its self. Even though I adored the Paper icons from the beginning, I can’t say that I was a huge fan of the Paper theme when I first tried it out. - -I felt like the bright colors and fun approach to a theme made for an “immature” experience. Now, years later, Paper has grown on me, to say the least, and the light hearted approach that the theme takes is one I greatly appreciate. - -[Paper Theme][25] - -#### 12\. Pop - -![][26] - -Pop is one of the newer offerings on this list. Created by the folks over at [System 76][27], the Pop GTK theme is a fork of the Adapta theme listed earlier and comes with a matching icon pack, which is a fork of the previously mentioned Paper icon pack. - -The theme was released soon after System 76 announced that they were releasing [their own distribution,][28] Pop!_OS. You can read my [Pop!_OS review][29] to know more about it. Needless to say, I think Pop is a fantastic theme with a superb amount of polish and offers a fresh feel to any Gnome desktop. - -[Pop Theme][30] - -#### Conclusion - -Obviously, there way more themes to choose from than we could feature in one article, but these are some of the most complete and polished themes I have used in recent months. If you think we missed any that you really like or you just really dislike one that I featured above, then feel free to let me know in the comment section below and share why you think your favorite themes are better! - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/best-gtk-themes/ - -作者:[Phillip Prado][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/phillip/ -[1]:https://itsfoss.com/install-themes-ubuntu/ -[2]:https://itsfoss.com/wp-content/uploads/2018/03/arcambaince-300x225.png -[3]:https://www.gnome-look.org/p/1193861/ -[4]:https://itsfoss.com/wp-content/uploads/2018/03/adapta-300x169.jpg -[5]:https://www.gnome-look.org/p/1190851/ -[6]:https://itsfoss.com/wp-content/uploads/2018/03/numix-300x169.png -[7]:https://www.gnome-look.org/p/1170667/ -[8]:https://itsfoss.com/wp-content/uploads/2018/03/hooli2-800x500.jpg -[9]:https://www.gnome-look.org/p/1102901/ -[10]:https://itsfoss.com/wp-content/uploads/2018/03/AT-800x590.jpg -[11]:https://itsfoss.com/ubuntu-community-theme/ -[12]:https://www.gnome-look.org/p/1215199/ -[13]:https://itsfoss.com/wp-content/uploads/2018/03/gosx-800x473.jpg -[14]:https://www.opendesktop.org/s/Gnome/p/1171688/ -[15]:https://itsfoss.com/wp-content/uploads/2018/03/ultimatemaia-800x450.jpg -[16]:https://www.opendesktop.org/s/Gnome/p/1193879/ -[17]:https://itsfoss.com/wp-content/uploads/2018/03/vimix-800x450.jpg -[18]:https://www.gnome-look.org/p/1013698/ -[19]:https://itsfoss.com/wp-content/uploads/2018/03/ant-800x533.png -[20]:https://www.opendesktop.org/p/1099856/ -[21]:https://itsfoss.com/wp-content/uploads/2018/03/flatremix-800x450.png -[22]:https://www.opendesktop.org/p/1214931/ -[23]:https://itsfoss.com/wp-content/uploads/2018/04/paper-800x450.jpg -[24]:https://itsfoss.com/install-paper-theme-linux/ -[25]:https://snwh.org/paper/download -[26]:https://itsfoss.com/wp-content/uploads/2018/04/pop-800x449.jpg -[27]:https://system76.com/ 
-[28]:https://itsfoss.com/system76-popos-linux/ -[29]:https://itsfoss.com/pop-os-linux-review/ -[30]:https://github.com/pop-os/gtk-theme/blob/master/README.md diff --git a/sources/tech/20180409 How to create LaTeX documents with Emacs.md b/sources/tech/20180409 How to create LaTeX documents with Emacs.md deleted file mode 100644 index 7dc16bcf10..0000000000 --- a/sources/tech/20180409 How to create LaTeX documents with Emacs.md +++ /dev/null @@ -1,281 +0,0 @@ -How to create LaTeX documents with Emacs -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) - -In his excellent article, [An introduction to creating documents in LaTeX][1], author [Aaron Cocker][2] introduces the [LaTeX typesetting system][3] and explains how to create a LaTeX document using [TeXstudio][4]. He also lists a few LaTeX editors that many users find helpful in creating LaTeX documents. - -This comment on the article by [Greg Pittman][5] caught my attention: "LaTeX seems like an awful lot of typing when you first start...". This is true. LaTeX involves a lot of typing and debugging, if you missed a special character like an exclamation mark, which can discourage many users, especially beginners. In this article, I will introduce you to [GNU Emacs][6] and describe how to use it to create LaTeX documents. - -### Creating your first document - -Launch Emacs by typing: -``` -emacs -q --no-splash helloworld.org - -``` - -The `-q` flag ensures that no Emacs initializations will load. The `--no-splash-screen` flag prevents splash screens to ensure that only one window is open, with the file `helloworld.org`. - -![Emacs startup screen][8] - -GNU Emacs with the helloworld.org file opened in a buffer window - -Let's add some LaTeX headers the Emacs way: Go to **Org** in the menu bar and select **Export/Publish**. - -![template_flow.png][10] - -Inserting a default template - -In the next window, Emacs offers options to either export or insert a template. Insert the template by entering **#** ([#] Insert template). This will move a cursor to a mini-buffer, where the prompt reads **Options category:**. At this time you may not know the category names; press Tab to see possible completions. Type "default" and press Enter. The following content will be inserted: -``` -#+TITLE: helloworld - -#+DATE: <2018-03-12 Mon> - -#+AUTHOR: - -#+EMAIL: makerpm@nubia - -#+OPTIONS: ':nil *:t -:t ::t <:t H:3 \n:nil ^:t arch:headline - -#+OPTIONS: author:t c:nil creator:comment d:(not "LOGBOOK") date:t - -#+OPTIONS: e:t email:nil f:t inline:t num:t p:nil pri:nil stat:t - -#+OPTIONS: tags:t tasks:t tex:t timestamp:t toc:t todo:t |:t - -#+CREATOR: Emacs 25.3.1 (Org mode 8.2.10) - -#+DESCRIPTION: - -#+EXCLUDE_TAGS: noexport - -#+KEYWORDS: - -#+LANGUAGE: en - -#+SELECT_TAGS: export - -``` - -Change the title, date, author, and email as you wish. Mine looks like this: -``` -#+TITLE: Hello World! My first LaTeX document - -#+DATE: \today - -#+AUTHOR: Sachin Patil - -#+EMAIL: psachin@redhat.com - -``` - -We don't want to create a Table of Contents yet, so change the value of `toc` from `t` to `nil` inline, as shown below: -``` -#+OPTIONS: tags:t tasks:t tex:t timestamp:t toc:nil todo:t |:t - -``` - -Let's add a section and paragraphs. A section starts with an asterisk (*). 
We'll copy the content of some paragraphs from Aaron's post (from the [Lipsum Lorem Ipsum generator][11]): -``` -* Introduction - - - -  \paragraph{} - -  Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras lorem - -  nisi, tincidunt tempus sem nec, elementum feugiat ipsum. Nulla in - -  diam libero. Nunc tristique ex a nibh egestas sollicitudin. - - - -  \paragraph{} - -  Mauris efficitur vitae ex id egestas. Vestibulum ligula felis, - -  pulvinar a posuere id, luctus vitae leo. Sed ac imperdiet orci, non - -  elementum leo. Nullam molestie congue placerat. Phasellus tempor et - -  libero maximus commodo. - -``` - - -![helloworld_file.png][13] - -The helloworld.org file - -With the content in place, we'll export the content as a PDF. Select **Export/Publish** from the **Org** menu again, but this time, type **l** (export to LaTeX), followed by **o** (as PDF file and open). This not only opens PDF file for you to view, but also saves the file as `helloworld.pdf` in the same path as `helloworld.org`. - -![org_to_pdf.png][15] - -Exporting helloworld.org to helloworld.pdf - -![org_and_pdf_file.png][17] - -Opening the helloworld.pdf file - -You can also export org to PDF by pressing `Alt + x`, then typing "org-latex-export-to-pdf". Use Tab to auto-complete. - -Emacs also creates the `helloworld.tex` file to give you control over the content. - -![org_tex_pdf.png][19] - -Emacs with LaTeX, org, and PDF files open in three different windows - -You can compile the `.tex` file to `.pdf` using the command: -``` -pdflatex helloworld.tex - -``` - -You can also export the `.org` file to HTML or as a simple text file. What I like about .org files is they can be pushed to [GitHub][20], where they are rendered just like any other markdown formats. - -### Creating a LaTeX Beamer presentation - -Let's go a step further and create a LaTeX [Beamer][21] presentation using the same file with some modifications as shown below: -``` -#+TITLE: LaTeX Beamer presentation - -#+DATE: \today - -#+AUTHOR: Sachin Patil - -#+EMAIL: psachin@redhat.com - -#+OPTIONS: ':nil *:t -:t ::t <:t H:3 \n:nil ^:t arch:headline - -#+OPTIONS: author:t c:nil creator:comment d:(not "LOGBOOK") date:t - -#+OPTIONS: e:t email:nil f:t inline:t num:t p:nil pri:nil stat:t - -#+OPTIONS: tags:t tasks:t tex:t timestamp:t toc:nil todo:t |:t - -#+CREATOR: Emacs 25.3.1 (Org mode 8.2.10) - -#+DESCRIPTION: - -#+EXCLUDE_TAGS: noexport - -#+KEYWORDS: - -#+LANGUAGE: en - -#+SELECT_TAGS: export - -#+LATEX_CLASS: beamer - -#+BEAMER_THEME: Frankfurt - -#+BEAMER_INNER_THEME: rounded - - - - - -* Introduction - -*** Programming - -    - Python - -    - Ruby - - - -*** Paragraph one - - - -    Lorem ipsum dolor sit amet, consectetur adipiscing - -    elit. Cras lorem nisi, tincidunt tempus sem nec, elementum feugiat - -    ipsum. Nulla in diam libero. Nunc tristique ex a nibh egestas - -    sollicitudin. - - - -*** Paragraph two - - - -    Mauris efficitur vitae ex id egestas. Vestibulum - -    ligula felis, pulvinar a posuere id, luctus vitae leo. Sed ac - -    imperdiet orci, non elementum leo. Nullam molestie congue - -    placerat. Phasellus tempor et libero maximus commodo. - - - -* Thanks - -*** Links - -    - Link one - -    - Link two - -``` - -We have added three more lines to the header: -``` -#+LATEX_CLASS: beamer - -#+BEAMER_THEME: Frankfurt - -#+BEAMER_INNER_THEME: rounded - -``` - -To export to PDF, press `Alt + x` and type "org-beamer-export-to-pdf". 
- -![latex_beamer_presentation.png][23] - -The Latex Beamer presentation, created using Emacs and Org mode - -I hope you enjoyed creating this LaTeX and Beamer document using Emacs (note that it's faster to use keyboard shortcuts than a mouse). Emacs Org-mode offers much more than I can cover in this post; you can learn more at [orgmode.org][24]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/how-create-latex-documents-emacs - -作者:[Sachin Patil][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/psachin -[1]:https://opensource.com/article/17/6/introduction-latex -[2]:https://opensource.com/users/aaroncocker -[3]:https://www.latex-project.org -[4]:http://www.texstudio.org/ -[5]:https://opensource.com/users/greg-p -[6]:https://www.gnu.org/software/emacs/ -[7]:/file/392261 -[8]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/emacs_startup.png?itok=UnT4PgK5 (Emacs startup screen) -[9]:/file/392266 -[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/insert_template_flow.png?itok=V_c2KipO (template_flow.png) -[11]:https://www.lipsum.com/feed/html -[12]:/file/392271 -[13]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/helloworld_file.png?itok=o8IX0TsJ (helloworld_file.png) -[14]:/file/392276 -[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/org_to_pdf.png?itok=fNnC1Y-L (org_to_pdf.png) -[16]:/file/392281 -[17]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/org_and_pdf_file.png?itok=HEhtw-cu (org_and_pdf_file.png) -[18]:/file/392286 -[19]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/org_tex_pdf.png?itok=poZZV_tj (org_tex_pdf.png) -[20]:https://github.com -[21]:https://www.sharelatex.com/learn/Beamer -[22]:/file/392291 -[23]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/latex_beamer_presentation.png?itok=rsPSeIuM (latex_beamer_presentation.png) -[24]:https://orgmode.org/worg/org-tutorials/org-latex-export.html diff --git a/sources/tech/20180411 How To Setup Static File Server Instantly.md b/sources/tech/20180411 How To Setup Static File Server Instantly.md deleted file mode 100644 index b388b389fa..0000000000 --- a/sources/tech/20180411 How To Setup Static File Server Instantly.md +++ /dev/null @@ -1,171 +0,0 @@ -How To Setup Static File Server Instantly -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/04/serve-720x340.png) -Ever wanted to share your files or project over network, but don’t know how to do? No worries! Here is a simple utility named **“serve”** to share your files instantly over network. This simple utility will instantly turn your system into a static file server, allowing you to serve your files over network. You can access the files from any devices regardless of their operating system. All you need is a web browser. This utility also can be used to serve static websites. 
It is formerly known as “list” and “micro-list”, but now the name has been changed to “serve”, which is much more suitable for the purpose of this utility. - -### Setup Static File Server Using Serve - -To install “serve”, you need to install NodeJS and NPM first. Refer the following link to install NodeJS and NPM in your Linux box. - -Once NodeJS and NPM installed, run the following command to install “serve”. -``` -$ npm install -g serve - -``` - -Done! Now is the time to serve the files or folders. - -The typical syntax to use “serve” is: -``` -$ serve [options] - -``` - -### Serve Specific files or folders - -For example, let us share the contents of the **Documents** directory. To do so, run: -``` -$ serve Documents/ - -``` - -Sample output would be: - -![][2] - -As you can see in the above screenshot, the contents of the given directory have been served over network via two URLs. - -To access the contents from the local system itself, all you have to do is open your web browser and navigate to **** URL. - -![][3] - -The Serve utility displays the contents of the given directory in a simple layout. You can download (right click on the files and choose “Save link as..”) or just view them in the browser. - -If you want to open local address automatically in the browser, use **-o** flag. -``` -$ serve -o Documents/ - -``` - -Once you run the above command, The Serve utility will open your web browser automatically and display the contents of the shared item. - -Similarly, to access the shared directory from a remote system over network, type **** in the browser’s address bar. Replace 192.168.43.192 with your system’s IP. - -**Serve contents via different port** - -As you may noticed, The serve utility uses port **5000** by default. So, make sure the port 5000 is allowed in your firewall or router. If it is blocked for some reason, you can serve the contents using different port using **-p** flag. -``` -$ serve -p 1234 Documents/ - -``` - -The above command will serve the contents of Documents directory via port **1234**. - -![][4] - -To serve a file, instead of a folder, just give it’s full path like below. -``` -$ serve Documents/Papers/notes.txt - -``` - -The contents of the shared directory can be accessed by any user on the network as long as they know the path. - -**Serve the entire $HOME directory** - -Open your Terminal and type: -``` -$ serve - -``` - -This will share the contents of your entire $HOME directory over network. - -To stop the sharing, press **CTRL+C**. - -**Serve selective files or folders** - -You may not want to share all files or directories, but only a few in a directory. You can do this by excluding the files or directories using **-i** flag. -``` -$ serve -i Downloads/ - -``` - -The above command will serve entire file system except **Downloads** directory. - -**Serve contents only on localhost** - -Sometimes, you want to serve the contents only on the local system itself, not on the entire network. To do so, use **-l** flag as shown below: -``` -$ serve -l Documents/ - -``` - -This command will serve the **Documents** directory only on localhost. - -![][5] - -This can be useful when you’re working on a shared server. All users in the in the system can access the share, but not the remote users. - -**Serve content using SSL** - -Since we serve the contents over the local network, we need not to use SSL. However, Serve utility has the ability to shares contents using SSL using **–ssl** flag. 
-``` -$ serve --ssl Documents/ - -``` - -![][6] - -To access the shares via web browser use “ or “. - -![][7] - -**Serve contents with authentication** - -In all above examples, we served the contents without any authentication. So anyone on the network can access them without any authentication. You might feel some contents should be accessed with username and password. - -To do so, use: -``` -$ SERVE_USER=ostechnix SERVE_PASSWORD=123456 serve --auth - -``` - -Now the users need to enter the username (i.e **ostechnix** in our case) and password (123456) to access the shares. - -![][8] - -The Serve utility has some other features, such as disable [**Gzip compression**][9], setup * CORS headers to allow requests from any origin, prevent copy address automatically to clipboard etc. You can read the complete help section by running the following command: -``` -$ serve help - -``` - -And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-setup-static-file-server-instantly/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-1.png -[3]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-2.png -[4]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-4.png -[5]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-3.png -[6]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-6.png -[7]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-5-1.png -[8]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-7-1.png -[9]:https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/ diff --git a/sources/tech/20180412 A new approach to security instrumentation.md b/sources/tech/20180412 A new approach to security instrumentation.md deleted file mode 100644 index 0a6a98c0f2..0000000000 --- a/sources/tech/20180412 A new approach to security instrumentation.md +++ /dev/null @@ -1,82 +0,0 @@ -Translating by hopefully2333 - -A new approach to security instrumentation -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx) - -How many of us have ever uttered the following phrase: “I hope this works!”? - -Without a doubt, most of us have, likely more than once. It’s not a phrase that inspires confidence, as it reveals doubts about our abilities or the functionality of whatever we are testing. Unfortunately, this very phrase defines our traditional security model all too well. We operate based on the assumption and the hope that the controls we put in place—from vulnerability scanning on web applications to anti-virus on endpoints—prevent malicious actors and software from entering our systems and damaging or stealing our information. - -Penetration testing took a step to combat relying on assumptions by actively trying to break into the network, inject malicious code into a web application, or spread “malware” by sending out phishing emails. 
Composed of finding and poking holes in our different security layers, pen testing fails to account for situations in which holes are actively opened. In security experimentation, we intentionally create chaos in the form of controlled, simulated incident behavior to objectively instrument our ability to detect and deter these types of activities. - -> “Security experimentation provides a methodology for the experimentation of the security of distributed systems to build confidence in the ability to withstand malicious conditions.” - -When it comes to security and complex distributed systems, a common adage in the chaos engineering community reiterates that “hope is not an effective strategy.” How often do we proactively instrument what we have designed or built to determine if the controls are failing? Most organizations do not discover that their security controls are failing until a security incident results from that failure. We believe that “Security incidents are not detective measures” and “Hope is not an effective strategy” should be the mantras of IT professionals operating effective security practices. - -The industry has traditionally emphasized preventative security measures and defense-in-depth, whereas our mission is to drive new knowledge and insights into the security toolchain through detective experimentation. With so much focus on the preventative mechanisms, we rarely attempt beyond one-time or annual pen testing requirements to validate whether or not those controls are performing as designed. - -With all of these constantly changing, stateless variables in modern distributed systems, it becomes next to impossible for humans to adequately understand how their systems behave, as this can change from moment to moment. One way to approach this problem is through robust systematic instrumentation and monitoring. For instrumentation in security, you can break down the domain into two primary buckets: **testing** , and what we call **experimentation**. Testing is the validation or assessment of a previously known outcome. In plain terms, we know what we are looking for before we go looking for it. On the other hand, experimentation seeks to derive new insights and information that was previously unknown. While testing is an important practice for mature security teams, the following example should help further illuminate the differences between the two, as well as provide a more tangible depiction of the added value of experimentation. - -### Example scenario: Craft beer delivery - -Consider a simple web service or web application that takes orders for craft beer deliveries. - -This is a critical service for this craft beer delivery company, whose orders come in from its customers' mobile devices, the web, and via its API from restaurants that serve its craft beer. This critical service runs in the company's AWS EC2 environment and is considered by the company to be secure. The company passed its PCI compliance with flying colors last year and annually performs third-party penetration tests, so it assumes that its systems are secure. - -This company also prides itself on its DevOps and continuous delivery practices by deploying sometimes twice in the same day. 
- -After learning about chaos engineering and security experimentation, the company's development teams want to determine, on a continuous basis, how resilient and effective its security systems are to real-world events, and furthermore, to ensure that they are not introducing new problems into the system that the security controls are not able to detect. - -The team wants to start small by evaluating port security and firewall configurations for their ability to detect, block, and alert on misconfigured changes to the port configurations on their EC2 security groups. - - * The team begins by performing a summary of their assumptions about the normal state. - * Develops a hypothesis for port security in their EC2 instances - * Selects and configures the YAML file for the Unauthorized Port Change experiment. - * This configuration would designate the objects to randomly select from for targeting, as well as the port ranges and number of ports that should be changed. - * The team also configures when to run the experiment and shrinks the scope of its blast radius to ensure minimal business impact. - * For this first test, the team has chosen to run the experiment in their stage environments and run a single run of the test. - * In true Game Day style, the team has elected a Master of Disaster to run the experiment during a predefined two-hour window. During that window of time, the Master of Disaster will execute the experiment on one of the EC2 Instance Security Groups. - * Once the Game Day has finished, the team begins to conduct a thorough, blameless post-mortem exercise where the focus is on the results of the experiment against the steady state and the original hypothesis. The questions would be something similar to the following: - - - -### Post-mortem questions - - * Did the firewall detect the unauthorized port change? - * If the change was detected, was it blocked? - * Did the firewall report log useful information to the log aggregation tool? - * Did the SIEM throw an alert on the unauthorized change? - * If the firewall did not detect the change, did the configuration management tool discover the change? - * Did the configuration management tool report good information to the log aggregation tool? - * Did the SIEM finally correlate an alert? - * If the SIEM threw an alert, did the Security Operations Center get the alert? - * Was the SOC analyst who got the alert able to take action on the alert, or was necessary information missing? - * If the SOC alert determined the alert to be credible, was Security Incident Response able to conduct triage activities easily from the data? - - - -The acknowledgment and anticipation of failure in our systems have already begun unraveling our assumptions about how our systems work. Our mission is to take what we have learned and apply it more broadly to begin to truly address security weaknesses proactively, going beyond the reactive processes that currently dominate traditional security models. - -As we continue to explore this new domain, we will be sure to post our findings. For those interested in learning more about the research or getting involved, please feel free to contact [Aaron Rinehart][1] or [Grayson Brewer][2]. - -Special thanks to Samuel Roden for the insights and thoughts provided in this article. 
- -**[See our related story,[Is the term DevSecOps necessary?][3]]** - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/new-approach-security-instrumentation - -作者:[Aaron Rinehart][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/aaronrinehart -[1]:https://twitter.com/aaronrinehart -[2]:https://twitter.com/BrewerSecurity -[3]:https://opensource.com/article/18/4/devsecops diff --git a/sources/tech/20180412 NGINX Unit 1.0 An App Server That Supports Go.md b/sources/tech/20180412 NGINX Unit 1.0 An App Server That Supports Go.md deleted file mode 100644 index 2f86fe2f01..0000000000 --- a/sources/tech/20180412 NGINX Unit 1.0 An App Server That Supports Go.md +++ /dev/null @@ -1,108 +0,0 @@ -Announcing NGINX Unit 1.0 -============================================================ - -Today, April 12, marks a significant milestone in the development of [NGINX Unit][8], our dynamic web and application server. Approximately six months after its [first public release][9], we’re now happy to announce that NGINX Unit is generally available and production‑ready. NGINX Unit is our new open source initiative led by Igor Sysoev, creator of the original NGINX Open Source software, which is now used by more than [409 million websites][10]. - -“I set out to make an application server which will be remotely and dynamically configured, and able to switch dynamically from one language or application version to another,” explains Igor. “Dynamic configuration and switching I saw as being certainly the main problem. People want to reconfigure servers without interrupting client processing.” - -NGINX Unit is dynamically configured using a REST API; there is no static configuration file. All configuration changes happen directly in memory. Configuration changes take effect without requiring process reloads or service interruptions. - -![NGINX runs Go, Perl, PHP, Python, and Ruby together on the same server](https://cdn-1.wp.nginx.com/wp-content/uploads/2017/09/dia-FM-2018-04-11-what-is-nginx-unit-01_1024x725-1024x725.png) -NGINX Unit runs multiple languages simultaneously - -“The dynamic switching requires that we can run different languages and language versions in one server,” continues Igor. - -As of Release 1.0, NGINX Unit supports Go, Perl, PHP, Python, and Ruby on the same server. Multiple language versions are also supported, so you can, for instance, run applications written for PHP 5 and PHP 7 on the same server. Support for additional languages, including Java, is planned for future NGINX Unit releases. - -Note: We have an additional blog post on [how to configure NGINX, NGINX Unit, and WordPress][11] to work together. - -Igor studied at Moscow State Technical University, which was a pioneer in the Russian space program, and April 12 has a special significance. “This is the anniversary of the first manned spaceflight in history, made by [Yuri Gagarin][12]. The first public version of NGINX [0.1.0] was released on [[October 4, 2004][7],] the anniversary of the [Sputnik][13] launch, and NGINX 1.0 was launched on April 12, 2011.” - -### What Is NGINX Unit? - -NGINX Unit is a dynamic web and application server, suitable for both stand‑alone applications and distributed, microservices application architectures. 
It launches and scales application processes on demand, executing each application instance in its own secure sandbox. - -NGINX Unit manages and routes all incoming network transactions to the application through a separate “router” process, so it can rapidly implement configuration changes without interrupting service. - -“The configuration is in JSON format, so users can edit it manually, and it’s very suitable for scripting. We hope to add capabilities to [NGINX Controller][14] and [NGINX Amplify][15] to work with Unit configuration too,” explains Igor. - -The NGINX Unit configuration process is described thoroughly in the [documentation][16]. - -“Now Unit can run Python, PHP, Ruby, Perl and Go – five languages. For example, during our beta, one of our users used Unit to run a number of different PHP platform versions on a single host,” says Igor. - -NGINX Unit’s ability to run multiple language runtimes is based on its internal separation between the router process, which terminates incoming HTTP requests, and groups of application processes, which implement the application runtime and execute application code. - -![NGINX Unit architecture](https://cdn-1.wp.nginx.com/wp-content/uploads/2018/04/dia-FM-2018-04-11-Unit-1.0.0-blog-router-process-01-horiz_1024x576-1024x576.png) -NGINX Unit architecture - -The router process is persistent – it never restarts – meaning that configuration updates can be implemented seamlessly, without any interruption in service. Each application process is deployed in its own sandbox (with support for [Linux control groups][17] [cgroups] under active development), so that NGINX Unit provides secure isolation for user code. - -### What’s Next for NGINX Unit? - -The next milestones for the NGINX Unit engineering team after Release 1.0 are concerned with HTTP maturity, serving static content, and additional language support. - -“We plan to add SSL and HTTP/2 capabilities in Unit,” says Igor. “Also, we plan to support routing in configurations; currently, we have direct mapping from one listen port to one application. We plan to add routing using URIs and hostnames, etc.” - -“In addition, we want to add more language support to Unit. We are completing the Ruby implementation, and next we will consider Node.js and Java. Java will be added in a Tomcat‑compatible fashion.” - -The end goal for NGINX Unit is to create an open source platform for distributed, polyglot applications which can run application code securely, reliably, and with the best possible performance. The platform will self‑manage, with capabilities such as autoscaling to meet SLAs within resource constraints, and service discovery and internal load balancing to make it easy to create a [service mesh][18]. - -### NGINX Unit and the NGINX Application Platform - -An NGINX Unit platform will typically be delivered with a front‑end tier of NGINX Open Source or NGINX Plus reverse proxies to provide ingress control, edge load balancing, and security. The joint platform (NGINX Unit and NGINX or NGINX Plus) can then be managed fully using NGINX Controller to monitor, configure, and control the entire platform. 
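To give the JSON-based configuration described above a more concrete shape, here is a hedged sketch of a minimal configuration along the lines of the examples in the Unit documentation, with one listener routed to a Python application. The application name, port, and paths are illustrative only:

```
{
    "listeners": {
        "*:8300": {
            "application": "example_app"
        }
    },
    "applications": {
        "example_app": {
            "type": "python",
            "processes": 2,
            "path": "/srv/example",
            "module": "wsgi"
        }
    }
}
```

A configuration object like this is uploaded with curl over Unit's control socket (the exact socket path and endpoint are covered in the configuration documentation linked above), and the change takes effect without restarting the server. With that picture in mind, the diagram below shows how NGINX Unit slots into the broader platform alongside NGINX Plus and NGINX Controller.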
- -![NGINX Application Platform for microservices and monolithic applications with NGINX Controller, NGINX Plus, and NGINX Unit](https://cdn-1.wp.nginx.com/wp-content/uploads/2018/03/nginx.com-NAP-diagram-01ag_Main-Products-print-Roboto-white-1024x1008.png) -The NGINX Application Platform is our vision for building microservices - -Together, these three components – NGINX Plus, NGINX Unit, and NGINX Controller – make up the [NGINX Application Platform][19]. The NGINX Application Platform is a product suite that delivers load balancing, caching, API management, a WAF, and application serving, with rich management and control planes that simplify the tasks of operating monolithic, microservices, and transitional applications. - -### Getting Started with NGINX Unit - -NGINX Unit is free and open source. Please see the [installation instructions][20] to get started. We have prebuilt packages for most operating systems, including Ubuntu and Red Hat Enterprise Linux. We also make a [Docker container][21] available on Docker Hub. - -The source code is available in our [Mercurial repository][22] and [mirrored on GitHub][23]. The code is available under the Apache 2.0 license. You can compile NGINX Unit yourself on most popular Linux and Unix systems. - -If you have any questions, please use the [GitHub issues board][24] or the [NGINX Unit mailing list][25]. We’d love to hear how you are using NGINX Unit, and we welcome [code contributions][26] too. - -We’re also happy to extend technical support for NGINX Unit to NGINX Plus customers with Professional or Enterprise support contracts. Please refer to our [Support page][27] for details of the support services we can offer. - --------------------------------------------------------------------------------- - -via: https://www.nginx.com/blog/nginx-unit-1-0-released/ - -作者:[www.nginx.com ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:www.nginx.com -[1]:https://twitter.com/intent/tweet?text=Announcing+NGINX+Unit+1.0+by+%40nginx+https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F -[2]:http://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&title=Announcing+NGINX+Unit+1.0&summary=Today%2C+April+12%2C+marks+a+significant+milestone+in+the+development+of+NGINX%26nbsp%3BUnit%2C+our+dynamic+web+and+application+server.+Approximately+six+months+after+its+first+public+release%2C+we%E2%80%99re+now+happy+to+announce+that+NGINX%26nbsp%3BUnit+is+generally+available+and+production%26%238209%3Bready.+NGINX%26nbsp%3BUnit+is+our+new+open+source+initiative+led+by+Igor%26nbsp%3BSysoev%2C+creator+of+the+original+NGINX+Open+Source+%5B%26hellip%3B%5D -[3]:https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&t=Announcing%20NGINX%20Unit%201.0&text=Today,%20April%2012,%20marks%20a%20significant%20milestone%20in%20the%20development%20of%20NGINX%C2%A0Unit,%20our%20dynamic%20web%20and%20application%20server.%20Approximately%20six%20months%20after%20its%20first%20public%20release,%20we%E2%80%99re%20now%20happy%20to%20announce%20that%20NGINX%C2%A0Unit%20is%20generally%20available%20and%20production%E2%80%91ready.%20NGINX%C2%A0Unit%20is%20our%20new%20open%20source%20initiative%20led%20by%20Igor%C2%A0Sysoev,%20creator%20of%20the%20original%20NGINX%20Open%20Source%20[%E2%80%A6] 
-[4]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F -[5]:https://plus.google.com/share?url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F -[6]:http://www.reddit.com/submit?url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&title=Announcing+NGINX+Unit+1.0&text=Today%2C+April+12%2C+marks+a+significant+milestone+in+the+development+of+NGINX%26nbsp%3BUnit%2C+our+dynamic+web+and+application+server.+Approximately+six+months+after+its+first+public+release%2C+we%E2%80%99re+now+happy+to+announce+that+NGINX%26nbsp%3BUnit+is+generally+available+and+production%26%238209%3Bready.+NGINX%26nbsp%3BUnit+is+our+new+open+source+initiative+led+by+Igor%26nbsp%3BSysoev%2C+creator+of+the+original+NGINX+Open+Source+%5B%26hellip%3B%5D -[7]:http://nginx.org/en/CHANGES -[8]:https://www.nginx.com/products/nginx-unit/ -[9]:https://www.nginx.com/blog/introducing-nginx-unit/ -[10]:https://news.netcraft.com/archives/2018/03/27/march-2018-web-server-survey.html -[11]:https://www.nginx.com/blog/installing-wordpress-with-nginx-unit/ -[12]:https://en.wikipedia.org/wiki/Yuri_Gagarin -[13]:https://en.wikipedia.org/wiki/Sputnik_1 -[14]:https://www.nginx.com/products/nginx-controller/ -[15]:https://www.nginx.com/products/nginx-amplify/ -[16]:http://unit.nginx.org/configuration/ -[17]:https://en.wikipedia.org/wiki/Cgroups -[18]:https://www.nginx.com/blog/what-is-a-service-mesh/ -[19]:https://www.nginx.com/products -[20]:http://unit.nginx.org/installation/ -[21]:https://hub.docker.com/r/nginx/unit/ -[22]:http://hg.nginx.org/unit -[23]:https://github.com/nginx/unit -[24]:https://github.com/nginx/unit/issues -[25]:http://mailman.nginx.org/mailman/listinfo/unit -[26]:https://unit.nginx.org/contribution/ -[27]:https://www.nginx.com/support -[28]:https://www.nginx.com/blog/tag/releases/ -[29]:https://www.nginx.com/blog/tag/nginx-unit/ diff --git a/sources/tech/20180416 Cgo and Python.md b/sources/tech/20180416 Cgo and Python.md index e5688d43c8..422f510949 100644 --- a/sources/tech/20180416 Cgo and Python.md +++ b/sources/tech/20180416 Cgo and Python.md @@ -1,3 +1,4 @@ +translating by name1e5s Cgo and Python ============================================================ diff --git a/sources/tech/20180416 How To Resize Active-Primary root Partition Using GParted Utility.md b/sources/tech/20180416 How To Resize Active-Primary root Partition Using GParted Utility.md deleted file mode 100644 index f75baf6166..0000000000 --- a/sources/tech/20180416 How To Resize Active-Primary root Partition Using GParted Utility.md +++ /dev/null @@ -1,181 +0,0 @@ -How To Resize Active/Primary root Partition Using GParted Utility -====== -Today we are going to discuss about disk partition. It’s one of the best topics in Linux. This allow users to resize the active root partition in Linux. - -In this article we will teach you how to resize the active root partition on Linux using Gparted utility. - -Just imagine, our system has 30GB disk and we didn’t configure properly while installation the Ubuntu operating system. - -We need to install another OS in that so we want to make secondary partition on that. - -Its not advisable to resize active partition. However, we are going to perform this as there is no way to free up the system. - -Make sure you should take backup of important data before performing this action because if something goes wrong (For example, if power got failure or your system got rebooted), you can retain your data. 
- -### What Is Gparted - -[GParted][1] is a free partition manager that enables you to resize, copy, and move partitions without data loss. We can use all of the features of the GParted application is by using the GParted Live bootable image. GParted Live enables you to use GParted on GNU/Linux as well as other operating systems, such as Windows or Mac OS X. - -### 1) Check Disk Space Usage Using df Command - -I just want to show you about my partition using df command. The df command output clearly showing that i only have one partition. -``` -$ df -h -Filesystem Size Used Avail Use% Mounted on -/dev/sda1 30G 3.4G 26.2G 16% / -none 4.0K 0 4.0K 0% /sys/fs/cgroup -udev 487M 4.0K 487M 1% /dev -tmpfs 100M 844K 99M 1% /run -none 5.0M 0 5.0M 0% /run/lock -none 498M 152K 497M 1% /run/shm -none 100M 52K 100M 1% /run/user - -``` - -### 2) Check Disk Partition Using fdisk Command - -I’m going to verify this using fdisk command. -``` -$ sudo fdisk -l -[sudo] password for daygeek: - -Disk /dev/sda: 33.1 GB, 33129218048 bytes -255 heads, 63 sectors/track, 4027 cylinders, total 64705504 sectors -Units = sectors of 1 * 512 = 512 bytes -Sector size (logical/physical): 512 bytes / 512 bytes -I/O size (minimum/optimal): 512 bytes / 512 bytes -Disk identifier: 0x000473a3 - - Device Boot Start End Blocks Id System -/dev/sda1 * 2048 62609407 31303680 83 Linux -/dev/sda2 62611454 64704511 1046529 5 Extended -/dev/sda5 62611456 64704511 1046528 82 Linux swap / Solaris - -``` - -### 3) Download GParted live ISO Image - -Use the below command to download GParted live ISO to perform this action. -``` -$ wget https://downloads.sourceforge.net/gparted/gparted-live-0.31.0-1-amd64.iso - -``` - -### 4) Boot Your System With GParted Live Installation Media - -Boot your system with GParted live installation media (like Burned CD/DVD or USB or ISO image). You will get the output similar to below screen. Here choose **GParted Live (Default settings)** and hit **Enter**. -![][3] - -### 5) Keyboard Selection - -By default it chooses the second option, just hit **Enter**. -![][4] - -### 6) Language Selection - -By default it chooses **33** for US English, just hit **Enter**. -![][5] - -### 7) Mode Selection (GUI or Command-Line) - -By default it chooses **0** for GUI mode, just hit **Enter**. -![][6] - -### 8) Loaded GParted Live Screen - -Now, GParted live screen is loaded. It is showing the list of partition which was created by me earlier. -![][7] - -### 9) How To Resize The root Partition - -Choose the root partition you want to resize, only one partition is available here so i’m going to edit that partition to install another OS. -![][8] - -To do so, press **Resize/Move** button to resize the partition. -![][9] - -Here, enter the size which you want take out from this partition in first box. I’m going to claim **10GB** so, i added **10240MB** and leave rest of boxes as default, then hit **Resize/Move** button -![][10] - -It will ask you once again to confirm to resize the partition because you are editing the live system partition, then hit **Ok**. -![][11] - -It has been successfully shrink the partition from 30GB to 20GB. Also shows Unallocated disk space of 10GB. -![][12] - -Finally click `Apply` button to perform remaining below operations. -![][13] - - * **`e2fsck`** e2fsck is a file system check utility that automatically repair the file system for bad sectors, I/O errors related to HDD. - * **`resize2fs`** The resize2fs program will resize ext2, ext3, or ext4 file systems. 
It can be used to enlarge or shrink an unmounted file system located on device. - * **`e2image`** The e2image program will save critical ext2, ext3, or ext4 filesystem metadata located on device to a file specified by image-file. - - - -**`e2fsck`** e2fsck is a file system check utility that automatically repair the file system for bad sectors, I/O errors related to HDD. -![][14] - -**`resize2fs`** The resize2fs program will resize ext2, ext3, or ext4 file systems. It can be used to enlarge or shrink an unmounted file system located on device. -![][15] - -**`e2image`** The e2image program will save critical ext2, ext3, or ext4 filesystem metadata located on device to a file specified by image-file. -![][16] - -All the operation got completed and close the dialog box. -![][17] - -Now, i could able to see **10GB** of Unallocated disk partition. -![][18] - -Reboot the system to check this. -![][19] - -### 10) Check Free Space - -Login to the system back and use fdisk command to see the available space in the partition. Yes i could see **10GB** of Unallocated disk space on this partition. -``` -$ sudo parted /dev/sda print free -[sudo] password for daygeek: -Model: ATA VBOX HARDDISK (scsi) -Disk /dev/sda: 32.2GB -Sector size (logical/physical): 512B/512B -Partition Table: msdos -Disk Flags: - -Number Start End Size Type File system Flags - 32.3kB 10.7GB 10.7GB Free Space - 1 10.7GB 32.2GB 21.5GB primary ext4 boot - -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility/ - -作者:[Magesh Maruthamuthu][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/magesh/ -[1]:https://gparted.org/ -[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-1.png -[4]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-2.png -[5]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-3.png -[6]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-4.png -[7]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-5.png -[8]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-6.png -[9]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-7.png -[10]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-8.png -[11]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-9.png -[12]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-10.png 
-[13]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-11.png -[14]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-12.png -[15]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-13.png -[16]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-14.png -[17]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-15.png -[18]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-16.png -[19]:https://www.2daygeek.com/wp-content/uploads/2014/08/how-to-resize-active-primary-root-partition-in-linux-using-gparted-utility-17.png diff --git a/sources/tech/20180417 How To Browse Stack Overflow From Terminal.md b/sources/tech/20180417 How To Browse Stack Overflow From Terminal.md deleted file mode 100644 index 1ebf17ef68..0000000000 --- a/sources/tech/20180417 How To Browse Stack Overflow From Terminal.md +++ /dev/null @@ -1,138 +0,0 @@ -How To Browse Stack Overflow From Terminal -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/04/how2-720x340.png) -A while ago, we have written about [**SoCLI**][1], a python script to search and browse Stack Overflow website from command line. Today, we will discuss about a similar tool named **“how2”**. It is a command line utility to browse Stack Overflow from Terminal. You can query in the plain English as the way you do in [**Google search**][2] and it uses Google and Stackoverflow APIs to search for the given queries. It is free and open source utility written using NodeJS. - -### Browse Stack Overflow From Terminal Using how2 - -Since how2 is a NodeJS package, we can install it using Npm package manager. If you haven’t installed Npm and NodeJS already, refer the following guide. - -After installing Npm and NodeJS, run the following command to install how2 utility. -``` -$ npm install -g how2 - -``` - -Now let us see how to browse Stack Overflow uisng this program. The typical usage to search through Stack Overflow site using “how2” utility is: -``` -$ how2 - -``` - -For example, I am going to search for how to create tgz archive. -``` -$ how2 create archive tgz - -``` - -Oops! I get the following error. -``` -/home/sk/.nvm/versions/node/v9.11.1/lib/node_modules/how2/node_modules/devnull/transports/transport.js:59 -Transport.prototype.__proto__ = EventEmitter.prototype; - ^ - - TypeError: Cannot read property 'prototype' of undefined - at Object. (/home/sk/.nvm/versions/node/v9.11.1/lib/node_modules/how2/node_modules/devnull/transports/transport.js:59:46) - at Module._compile (internal/modules/cjs/loader.js:654:30) - at Object.Module._extensions..js (internal/modules/cjs/loader.js:665:10) - at Module.load (internal/modules/cjs/loader.js:566:32) - at tryModuleLoad (internal/modules/cjs/loader.js:506:12) - at Function.Module._load (internal/modules/cjs/loader.js:498:3) - at Module.require (internal/modules/cjs/loader.js:598:17) - at require (internal/modules/cjs/helpers.js:11:18) - at Object. (/home/sk/.nvm/versions/node/v9.11.1/lib/node_modules/how2/node_modules/devnull/transports/stream.js:8:17) - at Module._compile (internal/modules/cjs/loader.js:654:30) - -``` - -I may be a bug. 
I hope it gets fixed in the future versions. However, I find a workaround posted [**here**][3]. - -To fix this error temporarily, you need to edit the **transport.js** file using command: -``` -$ vi /home/sk/.nvm/versions/node/v9.11.1/lib/node_modules/how2/node_modules/devnull/transports/transport.js - -``` - -The actual path of this file will be displayed in your error output. Replace the above file path with your own. Then find the following line: -``` -var EventEmitter = process.EventEmitter; - -``` - -and replace it with following line: -``` -var EventEmitter = require('events'); - -``` - -Press ESC and type **:wq** to save and quit the file. - -Now search again the query. -``` -$ how2 create archive tgz - -``` - -Here is the sample output from my Ubuntu system. - -[![][4]][5] - -If the answer you’re looking for is not displayed in the above output, press **SPACE BAR** key to start the interactive search where you can go through all suggested questions and answers from the Stack Overflow site. - -[![][4]][6] - -Use UP/DOWN arrows to move between the results. Once you got the right answer/question, hit SPACE BAR or ENTER key to open it in the Terminal. - -[![][4]][7] - -To go back and exit, press **ESC**. - -**Search answers for specific language** - -If you don’t specify a language it **defaults to Bash** unix command line and give you immediately the most likely answer as above. You can also narrow the results to a specific language, for example perl, python, c, Java etc. - -For instance, to search for queries related to “Python” language only using **-l** flag as shown below. -``` -$ how2 -l python linked list - -``` - -[![][4]][8] - -To get a quick help, type: -``` -$ how2 -h - -``` - -### Conclusion - -The how2 utility is a basic command line program to quickly search for questions and answers from Stack Overflow without leaving your Terminal and it does this job pretty well. However, it is just CLI browser for Stack overflow. For some advanced features such as searching most voted questions, searching queries using multiple tags, colored interface, submitting a new question and viewing questions stats etc., **SoCLI** is good to go. - -And, that’s all for now. Hope this was useful. I will be soon here with another useful guide. Until then, stay tuned with OSTechNix! - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-browse-stack-overflow-from-terminal/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/search-browse-stack-overflow-website-commandline/ -[2]:https://www.ostechnix.com/google-search-navigator-enhance-keyboard-navigation-in-google-search/ -[3]:https://github.com/santinic/how2/issues/79 -[4]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[5]:http://www.ostechnix.com/wp-content/uploads/2018/04/stack-overflow-1.png -[6]:http://www.ostechnix.com/wp-content/uploads/2018/04/stack-overflow-2.png -[7]:http://www.ostechnix.com/wp-content/uploads/2018/04/stack-overflow-3.png -[8]:http://www.ostechnix.com/wp-content/uploads/2018/04/stack-overflow-4.png diff --git a/sources/tech/20180419 Migrating to Linux- Network and System Settings.md b/sources/tech/20180419 Migrating to Linux- Network and System Settings.md deleted file mode 100644 index f3930f1777..0000000000 --- a/sources/tech/20180419 Migrating to Linux- Network and System Settings.md +++ /dev/null @@ -1,116 +0,0 @@ -Migrating to Linux: Network and System Settings -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/animals-birds-flock-55832.jpg?itok=NUGAyhDO) -In this series, we provide an overview of fundamentals to help you successfully make the transition to Linux from another operating system. If you missed the earlier articles in the series, you can find them here: - -[Part 1 - An Introduction][1] - -[Part 2 - Disks, Files, and Filesystems][2] - -[Part 3 - Graphical Environments][3] - -[Part 4 - The Command Line][4] - -[Part 5 - Using sudo][5] - -[Part 6 - Installing Software][6] - -Linux gives you a lot of control over network and system settings. On your desktop, Linux lets you tweak just about anything on the system. Most of these settings are exposed in plain text files under the /etc directory. Here I describe some of the most common settings you’ll use on your desktop Linux system. - -A lot of settings can be found in the Settings program, and the available options will vary by Linux distribution. Usually, you can change the background, tweak sound volume, connect to printers, set up displays, and more. While I won't talk about all of the settings here, you can certainly explore what's in there. - -### Connect to the Internet - -Connecting to the Internet in Linux is often fairly straightforward. If you are wired through an Ethernet cable, Linux will usually get an IP address and connect automatically when the cable is plugged in or at startup if the cable is already connected. - -If you are using wireless, in most distributions there is a menu, either in the indicator panel or in settings (depending on your distribution), where you can select the SSID for your wireless network. If the network is password protected, it will usually prompt you for the password. Afterward, it connects, and the process is fairly smooth. - -You can adjust network settings in the graphical environment by going into settings. Sometimes this is called System Settings or just Settings. Often you can easily spot the settings program because its icon is a gear or a picture of tools (Figure 1). 
- - -![Network Settings][8] - -Figure 1: Gnome Desktop Network Settings Indicator Icon. - -[Used with permission][9] - -### Network Interface Names - -Under Linux, network devices have names. Historically, these are given names like eth0 and wlan0 -- or Ethernet and wireless, respectively. Newer Linux systems have been using different names that appear more esoteric, like enp4s0 and wlp5s0. If the name starts with en, it's a wired Ethernet interface. If it starts with wl, it's a wireless interface. The rest of the letters and numbers reflect how the device is connected to hardware. - -### Network Management from the Command Line - -If you want more control over your network settings, or if you are managing network connections without a graphical desktop, you can also manage the network from the command line. - -Note that the most common service used to manage networks in a graphical desktop is the Network Manager, and Network Manager will often override setting changes made on the command line. If you are using the Network Manager, it's best to change your settings in its interface so it doesn't undo the changes you make from the command line or someplace else. - -Changing settings in the graphical environment is very likely to be interacting with the Network Manager, and you can also change Network Manager settings from the command line using the tool called nmtui. The nmtui tool provides all the settings that you find in the graphical environment but gives it in a text-based semi-graphical interface that works on the command line (Figure 2). - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-2_0.png?itok=1QVjDdbJ) - -On the command line, there is an older tool called ifconfig to manage networks and a newer one called ip. On some distributions, ifconfig is considered to be deprecated and is not even installed by default. On other distributions, ifconfig is still in use. - -Here are some commands that will allow you to display and change network settings: - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screen_shot_2018-04-17_at_3.11.48_pm.png?itok=EZsjb-GQ) - -### Process and System Information - -In Windows, you can go into the Task Manager to see a list of the all the programs and services that are running. You can also stop programs from running. And you can view system performance in some of the tabs displayed there. - -You can do similar things in Linux both from the command line and from graphical tools. In Linux, there are a few graphical tools available depending on your distribution. The most common ones are System Monitor or KSysGuard. In these tools, you can see system performance, see a list of processes, and even kill processes (Figure 3). - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-3_2.png?itok=ePeXj9PA) - -In these tools, you can also view global network traffic on your system (Figure 4). - - -![System Monitor][11] - -Figure 4: Screenshot of Gnome System Monitor. - -[Used with permission][9] - -### Managing Process and System Usage - -There are also quite a few tools you can use from the command line. The command ps can be used to list processes on your system. By default, it will list processes running in your current terminal session. But you can list other processes by giving it various command line options. You can get more help on ps with the commands info ps, or man ps. 
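For instance, a few common ways to invoke ps look like this (these are just illustrative starting points; see the man page for the full range of options):
```
# Processes started from the current terminal session
$ ps

# Every process on the system, BSD-style output
$ ps aux

# Every process with full command lines, System V style
$ ps -ef

# Narrow the list down to a particular program (firefox is just an example)
$ ps aux | grep firefox

```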
- -Most folks though want to get a list of processes because they would like to stop the one that is using up too much memory or CPU time. In this case, there are two commands that make this task much easier. These are top and htop (Figure 5). - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-5_0.png?itok=2nm5EmAl) - -The top and htop tools work very similarly to each other. These commands update their list every second or two and re-sort the list so that the task using the most CPU is at the top. You can also change the sorting to sort by other resources as well such as memory usage. - -In either of these programs (top and htop), you can type '?' to get help, and 'q' to quit. With top, you can press 'k' to kill a process and then type in the unique PID number for the process to kill it. - -With htop, you can highlight a task by pressing down arrow or up arrow to move the highlight bar, and then press F9 to kill the task followed by Enter to confirm. - -The information and tools provided in this series will help you get started with Linux. With a little time and patience, you'll feel right at home. - -Learn more about Linux through the free ["Introduction to Linux" ][12]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/2018/4/migrating-linux-network-and-system-settings - -作者:[John Bonesio][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/johnbonesio -[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction -[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems -[3]:https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments -[4]:https://www.linux.com/blog/learn/2018/1/migrating-linux-command-line -[5]:https://www.linux.com/blog/learn/2018/3/migrating-linux-using-sudo -[6]:https://www.linux.com/blog/learn/2018/3/migrating-linux-installing-software -[7]:https://www.linux.com/files/images/figure-1png-2 -[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-1_2.png?itok=J-C6q-t5 (Network Settings) -[9]:https://www.linux.com/licenses/category/used-permission -[10]:https://www.linux.com/files/images/figure-4png-1 -[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-4_1.png?itok=boI-L1mF (System Monitor) -[12]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180420 How To Remove Password From A PDF File in Linux.md b/sources/tech/20180420 How To Remove Password From A PDF File in Linux.md deleted file mode 100644 index 0e4318c858..0000000000 --- a/sources/tech/20180420 How To Remove Password From A PDF File in Linux.md +++ /dev/null @@ -1,220 +0,0 @@ -How To Remove Password From A PDF File in Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/04/Remove-Password-From-A-PDF-File-720x340.png) -Today I happen to share a password protected PDF file to one of my friend. I knew the password of that PDF file, but I didn’t want to disclose it. Instead, I just wanted to remove the password and send the file to him. 
I started to looking for some easy ways to remove the password protection from the pdf files on Internet. After a quick google search, I came up with four methods to remove password from a PDF file in Linux. The funny thing is I had already done it few years ago and I almost forgot it. If you’re wondering how to remove password from a PDF file in Linux, read on! It is not that difficult. - -### Remove Password From A PDF File in Linux - -**Method 1 – Using Qpdf** - -The **Qpdf** is a PDF transformation software which is used to encrypt and decrypt PDF files, convert PDF files to another equivalent pdf files. Qpdf is available in the default repositories of most Linux distributions, so you can install it using the default package manager. - -For example, Qpdf can be installed on Arch Linux and its variants using [**pacman**][1] as shown below. -``` -$ sudo pacman -S qpdf - -``` - -On Debian, Ubuntu, Linux Mint: -``` -$ sudo apt-get install qpdf - -``` - -Now let us remove the password from a pdf file using qpdf. - -I have a password-protected PDF file named **“secure.pdf”**. Whenever I open this file, it prompts me to enter the password to display its contents. - -![][3] - -I know the password of the above pdf file. However, I don’t want to share the password with anyone. So what I am going to do is to simply remove the password of the PDF file using Qpdf utility with following command. -``` -$ qpdf --password='123456' --decrypt secure.pdf output.pdf - -``` - -Quite easy, isn’t it? Yes, it is! Here, **123456** is the password of the **secure.pdf** file. Replace the password with your own. - -**Method 2 – Using Pdftk** - -**Pdftk** is yet another great software for manipulating pdf documents. Pdftk can do almost all sort of pdf operations, such as; - - * Encrypt and decrypt pdf files. - * Merge PDF documents. - * Collate PDF page Scans. - * Split PDF pages. - * Rotate PDF files or pages. - * Fill PDF forms with X/FDF data and/or flatten forms. - * Generate FDF data stencils from PDF forms. - * Apply a background watermark or a foreground stamp. - * Report PDF metrics, bookmarks and metadata. - * Add/update PDF bookmarks or metadata. - * Attach files to PDF pages or the PDF document. - * Unpack PDF attachments. - * Burst a PDF file into single pages. - * Compress and decompress page streams. - * Repair corrupted PDF file. - - - -Pddftk is available in AUR, so you can install it using any AUR helper programs on Arch Linux its derivatives. - -Using [**Pacaur**][4]: -``` -$ pacaur -S pdftk - -``` - -Using [**Packer**][5]: -``` -$ packer -S pdftk - -``` - -Using [**Trizen**][6]: -``` -$ trizen -S pdftk - -``` - -Using [**Yay**][7]: -``` -$ yay -S pdftk - -``` - -Using [**Yaourt**][8]: -``` -$ yaourt -S pdftk - -``` - -On Debian, Ubuntu, Linux Mint, run: -``` -$ sudo apt-get instal pdftk - -``` - -On CentOS, Fedora, Red Hat: - -First, Install EPEL repository: -``` -$ sudo yum install epel-release - -``` - -Or -``` -$ sudo dnf install epel-release - -``` - -Then install PDFtk application using command: -``` -$ sudo yum install pdftk - -``` - -Or -``` -$ sudo dnf install pdftk - -``` - -Once pdftk installed, you can remove the password from a pdf document using command: -``` -$ pdftk secure.pdf input_pw 123456 output output.pdf - -``` - -Replace ‘123456’ with your correct password. This command decrypts the “secure.pdf” file and create an equivalent non-password protected file named “output.pdf”. 
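As a side note, pdftk can also work in the opposite direction. If you ever need to password-protect the resulting file again, a command along these lines should do it (the output file name and the sample password are only placeholders):
```
# "protected.pdf" and the password 123456 are placeholders - use your own
$ pdftk output.pdf output protected.pdf user_pw 123456

```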
- -**Also read:** - -**Method 3 – Using Poppler** - -**Poppler** is a PDF rendering library based on the xpdf-3.0 code base. It contains the following set of command line utilities for manipulating PDF documents. - - * **pdfdetach** – lists or extracts embedded files. - * **pdffonts** – font analyzer. - * **pdfimages** – image extractor. - * **pdfinfo** – document information. - * **pdfseparate** – page extraction tool. - * **pdfsig** – verifies digital signatures. - * **pdftocairo** – PDF to PNG/JPEG/PDF/PS/EPS/SVG converter using Cairo. - * **pdftohtml** – PDF to HTML converter. - * **pdftoppm** – PDF to PPM/PNG/JPEG image converter. - * **pdftops** – PDF to PostScript (PS) converter. - * **pdftotext** – text extraction. - * **pdfunite** – document merging tool. - - - -For the purpose of this guide, we only use the “pdftops” utility. - -To install Poppler on Arch Linux based distributions, run: -``` -$ sudo pacman -S poppler - -``` - -On Debian, Ubuntu, Linux Mint: -``` -$ sudo apt-get install poppler-utils - -``` - -On RHEL, CentOS, Fedora: -``` -$ sudo yum install poppler-utils - -``` - -Once Poppler installed, run the following command to decrypt the password protected pdf file and create a new equivalent file named output.pdf. -``` -$ pdftops -upw 123456 secure.pdf output.pdf - -``` - -Again, replace ‘123456’ with your pdf password. - -As you might noticed in all above methods, we just converted the password protected pdf file named “secure.pdf” to another equivalent pdf file named “output.pdf”. Technically speaking, we really didn’t remove the password from the source file, instead we decrypted it and saved it as another equivalent pdf file without password protection. - -**Method 4 – Print to a file -** - -This is the easiest method in all of the above methods. You can use your existing PDF viewer such as Atril document viewer, Evince etc., and print the password protected pdf file to another file. - -Open the password protected file in your PDF viewer application. Go to **File - > Print**. And save the pdf file in any location of your choice. - -![][9] - -And, that’s all. Hope this was useful. Do you know/use any other methods to remove the password protection from PDF files? Let us know in the comment section below. - -More good stuffs to come. Stay tuned! - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-remove-password-from-a-pdf-file-in-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/getting-started-pacman/ -[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:http://www.ostechnix.com/wp-content/uploads/2018/04/Remove-Password-From-A-PDF-File-1.png -[4]:https://www.ostechnix.com/install-pacaur-arch-linux/ -[5]:https://www.ostechnix.com/install-packer-arch-linux-2/ -[6]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/ -[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[8]:https://www.ostechnix.com/install-yaourt-arch-linux/ diff --git a/sources/tech/20180422 Command Line Tricks For Data Scientists - kade killary.md b/sources/tech/20180422 Command Line Tricks For Data Scientists - kade killary.md deleted file mode 100644 index aea3b6b035..0000000000 --- a/sources/tech/20180422 Command Line Tricks For Data Scientists - kade killary.md +++ /dev/null @@ -1,526 +0,0 @@ -Command Line Tricks For Data Scientists • kade killary -====== - -![](https://i.imgur.com/0mzQMcB.png) - -For many data scientists, data manipulation begins and ends with Pandas or the Tidyverse. In theory, there is nothing wrong with this notion. It is, after all, why these tools exist in the first place. Yet, these options can often be overkill for simple tasks like delimiter conversion. - -Aspiring to master the command line should be on every developer’s list, especially data scientists. Learning the ins and outs of your shell will undeniably make you more productive. Beyond that, the command line serves as a great history lesson in computing. For instance, awk - a data-driven scripting language. Awk first appeared in 1977 with the help of [Brian Kernighan][1], the K in the legendary [K&R book][2]. Today, some near 50 years later, awk remains relevant with [new books][3] still appearing every year! Thus, it’s safe to assume that an investment in command line wizardry won’t depreciate any time soon. - -### What We’ll Cover - - * ICONV - * HEAD - * TR - * WC - * SPLIT - * SORT & UNIQ - * CUT - * PASTE - * JOIN - * GREP - * SED - * AWK - - - -### ICONV - -File encodings can be tricky. For the most part files these days are all UTF-8 encoded. To understand some of the magic behind UTF-8, check out this [excellent video][4]. Nonetheless, there are times where we receive a file that isn’t in this format. This can lead to some wonky attempts at swapping the encoding schema. Here, `iconv` is a life saver. Iconv is a simple program that will take text in one encoding and output the text in another. -``` -# Converting -f (from) latin1 (ISO-8859-1) -# -t (to) standard UTF_8 - -iconv -f ISO-8859-1 -t UTF-8 < input.txt > output.txt - -``` - - * Useful options: - - * `iconv -l` list all known encodings - * `iconv -c` silently discard characters that cannot be converted - - - -### HEAD - -If you are a frequent Pandas user then `head` will be familiar. Often when dealing with new data the first thing we want to do is get a sense of what exists. This leads to firing up Pandas, reading in the data and then calling `df.head()` \- strenuous, to say the least. 
Head, without any flags, will print out the first 10 lines of a file. The true power of `head` lies in testing out cleaning operations. For instance, if we wanted to change the delimiter of a file from commas to pipes. One quick test would be: `head mydata.csv | sed 's/,/|/g'`. -``` -# Prints out first 10 lines - -head filename.csv - -# Print first 3 lines - -head -n 3 filename.csv - -``` - - * Useful options: - - * `head -n` print a specific number of lines - * `head -c` print a specific number of bytes - - - -### TR - -Tr is analogous to translate. This powerful utility is a workhorse for basic file cleaning. An ideal use case is for swapping out the delimiters within a file. -``` -# Converting a tab delimited file into commas - -cat tab_delimited.txt | tr "\t" "," comma_delimited.csv - -``` - -Another feature of `tr` is all the built in `[:class:]` variables at your disposal. These include: -``` -[:alnum:] all letters and digits -[:alpha:] all letters -[:blank:] all horizontal whitespace -[:cntrl:] all control characters -[:digit:] all digits -[:graph:] all printable characters, not including space -[:lower:] all lower case letters -[:print:] all printable characters, including space -[:punct:] all punctuation characters -[:space:] all horizontal or vertical whitespace -[:upper:] all upper case letters -[:xdigit:] all hexadecimal digits - -``` - -You can chain a variety of these together to compose powerful programs. The following is a basic word count program you could use to check your READMEs for overuse. -``` -cat README.md | tr "[:punct:][:space:]" "\n" | tr "[:upper:]" "[:lower:]" | grep . | sort | uniq -c | sort -nr - -``` - -Another example using basic regex: -``` -# Converting all upper case letters to lower case - -cat filename.csv | tr '[A-Z]' '[a-z]' - -``` - - * Useful options: - - * `tr -d` delete characters - * `tr -s` squeeze characters - * `\b` backspace - * `\f` form feed - * `\v` vertical tab - * `\NNN` character with octal value NNN - - - -### WC - -Word count. Its value is primarily derived from the `-l` flag, which will give you the line count. -``` -# Will return number of lines in CSV - -wc -l gigantic_comma.csv - -``` - -This tool comes in handy to confirm the output of various commands. So, if we were to convert the delimiters within a file and then run `wc -l` we would expect the total lines to be the same. If not, then we know something went wrong. - - * Useful options: - - * `wc -c` print the byte counts - * `wc -m` print the character counts - * `wc -L` print length of longest line - * `wc -w` print word counts - - - -### SPLIT - -File sizes can range dramatically. And depending on the job, it could be beneficial to split up the file - thus `split`. The basic syntax for split is: -``` -# We will split our CSV into new_filename every 500 lines - -split -l 500 filename.csv new_filename_ - -# filename.csv -# ls output -# new_filename_aaa -# new_filename_aab -# new_filename_aac - -``` - -Two quirks are the naming convention and lack of file extensions. The suffix convention can be numeric via the `-d` flag. To add file extensions, you’ll need to run the following `find` command. It will change the names of ALL files within the current directory by appending `.csv`, so be careful. -``` -find . 
-type f -exec mv '{}' '{}'.csv \; - -# ls output -# filename.csv.csv -# new_filename_aaa.csv -# new_filename_aab.csv -# new_filename_aac.csv - -``` - - * Useful options: - - * `split -b` split by certain byte size - * `split -a` generate suffixes of length N - * `split -x` split using hex suffixes - - - -### SORT & UNIQ - -The preceding commands are obvious: they do what they say they do. These two provide the most punch in tandem (i.e. unique word counts). This is due to `uniq`, which only operates on duplicate adjacent lines. Thus, the reason to `sort` before piping the output through. One interesting note is that `sort -u` will achieve the same results as the typical `sort file.txt | uniq` pattern. - -Sort does have a sneakily useful ability for data scientists: the ability to sort an entire CSV based on a particular column. -``` -# Sorting a CSV file by the second column alphabetically - -sort -t"," -k2,2 filename.csv - -# Numerically - -sort -t"," -k2n,2 filename.csv - -# Reverse order - -sort -t"," -k2nr,2 filename.csv - -``` - -The `-t` option here is to specify the comma as our delimiter. More often than not spaces or tabs are assumed. Furthermore, the `-k` flag is for specifying our key. The syntax for this is `-km,n`, with `m` being the starting field and `n` being the last. - - * Useful options: - - * `sort -f` ignore case - * `sort -r` reverse sort order - * `sort -R` scramble order - * `uniq -c` count number of occurrences - * `uniq -d` only print duplicate lines - - - -### CUT - -Cut is for removing columns. To illustrate, if we only wanted the first and third columns. -``` -cut -d, -f 1,3 filename.csv - -``` - -To select every column other than the first. -``` -cut -d, -f 2- filename.csv - -``` - -In combination with other commands, `cut` serves as a filter. -``` -# Print first 10 lines of column 1 and 3, where "some_string_value" is present - -head filename.csv | grep "some_string_value" | cut -d, -f 1,3 - -``` - -Finding out the number of unique values within the second column. -``` -cat filename.csv | cut -d, -f 2 | sort | uniq | wc -l - -# Count occurences of unique values, limiting to first 10 results - -cat filename.csv | cut -d, -f 2 | sort | uniq -c | head - -``` - -### PASTE - -Paste is a niche command with an interesting function. If you have two files that you need merged, and they are already sorted, `paste` has you covered. -``` -# names.txt -adam -john -zach - -# jobs.txt -lawyer -youtuber -developer - -# Join the two into a CSV - -paste -d ',' names.txt jobs.txt > person_data.txt - -# Output -adam,lawyer -john,youtuber -zach,developer - -``` - -For a more SQL_-esque variant, see below. - -### JOIN - -Join is a simplistic, quasi-tangential, SQL. The largest differences being that `join` will return all columns and matches can only be on one field. By default, `join` will try and use the first column as the match key. For a different result, the following syntax is necessary: -``` -# Join the first file (-1) by the second column -# and the second file (-2) by the first - -join -t"," -1 2 -2 1 first_file.txt second_file.txt - -``` - -The standard join is an inner join. However, an outer join is also viable through the `-a` flag. Another noteworthy quirk is the `-e` flag, which can be used to substitute a value if a missing field is found. -``` -# Outer join, replace blanks with NULL in columns 1 and 2 -# -o which fields to substitute - 0 is key, 1.1 is first column, etc... 
- -join -t"," -1 2 -a 1 -a2 -e ' NULL' -o '0,1.1,2.2' first_file.txt second_file.txt - -``` - -Not the most user-friendly command, but desperate times, desperate measures. - - * Useful options: - - * `join -a` print unpairable lines - * `join -e` replace missing input fields - * `join -j` equivalent to `-1 FIELD -2 FIELD` - - - -### GREP - -Global search for a regular expression and print, or `grep`; likely, the most well known command, and with good reason. Grep has a lot of power, especially for finding your way around large codebases. Within the realm of data science, it acts as a refining mechanism for other commands. Although its standard usage is valuable as well. -``` -# Recursively search and list all files in directory containing 'word' - -grep -lr 'word' . - -# List number of files containing word - -grep -lr 'word' . | wc -l - -``` - -Count total number of lines containing word / pattern. -``` -grep -c 'some_value' filename.csv - -# Same thing, but in all files in current directory by file name - -grep -c 'some_value' * - -``` - -Grep for multiple values using the or operator - `\|`. -``` -grep "first_value\|second_value" filename.csv - -``` - - * Useful options - - * `alias grep="grep --color=auto"` make grep colorful - * `grep -E` use extended regexps - * `grep -w` only match whole words - * `grep -l` print name of files with match - * `grep -v` inverted matching - - - -### THE BIG GUNS - -Sed and Awk are the two most powerful commands in this article. For brevity, I’m not going to go into exhausting detail about either. Instead, I will cover a variety of commands that prove their impressive might. If you want to know more, [there is a book][5] just for that. - -### SED - -At its core `sed` is a stream editor. It excels at substitutions, but can also be leveraged for all out refactoring. - -The most basic `sed` command consists of `s/old/new/g`. This translates to search for old value, replace with new globally. Without the `/g` our command would terminate after the first occurrence. - -To get a quick taste of the power lets dive into an example. In this scenario you’ve been given the following file: -``` -balance,name -$1,000,john -$2,000,jack - -``` - -The first thing we may want to do is remove the dollar signs. The `-i` flag indicates in-place. The `''` is to indicate a zero-length file extension, thus overwriting our initial file. Ideally, you would test each of these individually and then output to a new file. -``` -sed -i '' 's/\$//g' data.txt - -# balance,name -# 1,000,john -# 2,000,jack - -``` - -Next up, the commas in our `balance` column values. -``` -sed -i '' 's/\([0-9]\),\([0-9]\)/\1\2/g' data.txt - -# balance,name -# 1000,john -# 2000,jack - -``` - -Lastly, Jack up and decided to quit one day. So, au revoir, mon ami. -``` -sed -i '' '/jack/d' data.txt - -# balance,name -# 1000,john - -``` - -As you can see, `sed` packs quite a punch, but the fun doesn’t stop there. - -### AWK - -The best for last. Awk is much more than a simple command: it is a full-blown language. Of everything covered in this article, `awk` is by far the coolest. If you find yourself impressed there are loads of great resources - see [here][6], [here][7] and [here][8]. - -Common use cases for `awk` include: - - * Text processing - * Formatted text reports - * Performing arithmetic operations - * Performing string operations - - - -Awk can parallel `grep` in its most nascent form. -``` -awk '/word/' filename.csv - -``` - -Or with a little more magic the combination of `grep` and `cut`. 
Here, `awk` prints the third and fourth column, tab separated, for all lines with our word. `-F,` merely changes our delimiter to a comma. -``` -awk -F, '/word/ { print $3 "\t" $4 }' filename.csv - -``` - -Awk comes with a lot of nifty variables built-in. For instance, `NF` \- number of fields - and `NR` \- number of records. To get the fifty-third record in a file: -``` -awk -F, 'NR == 53' filename.csv - -``` - -An added wrinkle is the ability to filter based off of one or more values. The first example, below, will print the line number and columns for records where the first column equals string. -``` -awk -F, ' $1 == "string" { print NR, $0 } ' filename.csv - -# Filter based off of numerical value in second column - -awk -F, ' $2 == 1000 { print NR, $0 } ' filename.csv - -``` - -Multiple numerical expressions: -``` -# Print line number and columns where column three greater -# than 2005 and column five less than one thousand - -awk -F, ' $3 >= 2005 && $5 <= 1000 { print NR, $0 } ' filename.csv - -``` - -Sum the third column: -``` -awk -F, '{ x+=$3 } END { print x }' filename.csv - -``` - -The sum of the third column, for values where the first column equals “something”. -``` -awk -F, '$1 == "something" { x+=$3 } END { print x }' filename.csv - -``` - -Get the dimensions of a file: -``` -awk -F, 'END { print NF, NR }' filename.csv - -# Prettier version - -awk -F, 'BEGIN { print "COLUMNS", "ROWS" }; END { print NF, NR }' filename.csv - -``` - -Print lines appearing twice: -``` -awk -F, '++seen[$0] == 2' filename.csv - -``` - -Remove duplicate lines: -``` -# Consecutive lines -awk 'a !~ $0; {a=$0}'] - -# Nonconsecutive lines -awk '! a[$0]++' filename.csv - -# More efficient -awk '!($0 in a) {a[$0];print} - -``` - -Substitute multiple values using built-in function `gsub()`. -``` -awk '{gsub(/scarlet|ruby|puce/, "red"); print}' - -``` - -This `awk` command will combine multiple CSV files, ignoring the header and then append it at the end. -``` -awk 'FNR==1 && NR!=1{next;}{print}' *.csv > final_file.csv - -``` - -Need to downsize a massive file? Welp, `awk` can handle that with help from `sed`. Specifically, this command breaks one big file into multiple smaller ones based on a line count. This one-liner will also add an extension. -``` -sed '1d;$d' filename.csv | awk 'NR%NUMBER_OF_LINES==1{x="filename-"++i".csv";}{print > x}' - -# Example: splitting big_data.csv into data_(n).csv every 100,000 lines - -sed '1d;$d' big_data.csv | awk 'NR%100000==1{x="data_"++i".csv";}{print > x}' - -``` - -### CLOSING - -The command line boasts endless power. The commands covered in this article are enough to elevate you from zero to hero in no time. Beyond those covered, there are many utilities to consider for daily data operations. [Csvkit][9], [xsv][10] and [q][11] are three of note. If you’re looking to take an even deeper dive into command line data science, then look no further than [this book][12]. It’s also available online [for free][13]! 
- --------------------------------------------------------------------------------- - -via: http://kadekillary.work/post/cli-4-ds/ - -作者:[Kade Killary][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://kadekillary.work/authors/kadekillary -[1]:https://en.wikipedia.org/wiki/Brian_Kernighan -[2]:https://en.wikipedia.org/wiki/The_C_Programming_Language -[3]:https://www.amazon.com/Learning-AWK-Programming-cutting-edge-text-processing-ebook/dp/B07BT98HDS -[4]:https://www.youtube.com/watch?v=MijmeoH9LT4 -[5]:https://www.amazon.com/sed-awk-Dale-Dougherty/dp/1565922255/ref=sr_1_1?ie=UTF8&qid=1524381457&sr=8-1&keywords=sed+and+awk -[6]:https://www.amazon.com/AWK-Programming-Language-Alfred-Aho/dp/020107981X/ref=sr_1_1?ie=UTF8&qid=1524388936&sr=8-1&keywords=awk -[7]:http://www.grymoire.com/Unix/Awk.html -[8]:https://www.tutorialspoint.com/awk/index.htm -[9]:http://csvkit.readthedocs.io/en/1.0.3/ -[10]:https://github.com/BurntSushi/xsv -[11]:https://github.com/harelba/q -[12]:https://www.amazon.com/Data-Science-Command-Line-Time-Tested/dp/1491947853/ref=sr_1_1?ie=UTF8&qid=1524390894&sr=8-1&keywords=data+science+at+the+command+line -[13]:https://www.datascienceatthecommandline.com/ diff --git a/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md b/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md deleted file mode 100644 index 4e69d04837..0000000000 --- a/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md +++ /dev/null @@ -1,294 +0,0 @@ -Things to do After Installing Ubuntu 18.04 -====== -**Brief: This list of things to do after installing Ubuntu 18.04 helps you get started with Bionic Beaver for a smoother desktop experience.** - -[Ubuntu][1] 18.04 Bionic Beaver releases today. You are perhaps already aware of the [new features in Ubuntu 18.04 LTS][2] release. If not, here’s the video review of Ubuntu 18.04 LTS: - -[Subscribe to YouTube Channel for more Ubuntu Videos][3] - -If you opted to install Ubuntu 18.04, I have listed out a few recommended steps that you can follow to get started with it. - -### Things to do after installing Ubuntu 18.04 Bionic Beaver - -![Things to do after installing Ubuntu 18.04][4] - -I should mention that the list of things to do after installing Ubuntu 18.04 depends a lot on you and your interests and needs. If you are a programmer, you’ll focus on installing programming tools. If you are a graphic designer, you’ll focus on installing graphics tools. - -Still, there are a few things that should be applicable to most Ubuntu users. This list is composed of those things plus a few of my of my favorites. - -Also, this list is for the default [GNOME desktop][5]. If you are using some other flavor like [Kubuntu][6], Lubuntu etc then the GNOME-specific stuff won’t be applicable to your system. - -You don’t have to follow each and every point on the list blindly. You should see if the recommended action suits your requirements or not. - -With that said, let’s get started with this list of things to do after installing Ubuntu 18.04. - -#### 1\. Update the system - -This is the first thing you should do after installing Ubuntu. Update the system without fail. It may sound strange because you just installed a fresh OS but still, you must check for the updates. 
- -In my experience, if you don’t update the system right after installing Ubuntu, you might face issues while trying to install a new program. - -To update Ubuntu 18.04, press Super Key (Windows Key) to launch the Activity Overview and look for Software Updater. Run it to check for updates. - -![Software Updater in Ubuntu 17.10][7] - -**Alternatively** , you can use these famous commands in the terminal ( Use Ctrl+Alt+T): -``` -sudo apt update && sudo apt upgrade - -``` - -#### 2\. Enable additional repositories for more software - -[Ubuntu has several repositories][8] from where it provides software for your system. These repositories are: - - * Main – Free and open-source software supported by Ubuntu team - * Universe – Free and open-source software maintained by the community - * Restricted – Proprietary drivers for devices. - * Multiverse – Software restricted by copyright or legal issues. - * Canonical Partners – Software packaged by Ubuntu for their partners - - - -Enabling all these repositories will give you access to more software and proprietary drivers. - -Go to Activity Overview by pressing Super Key (Windows key), and search for Software & Updates: - -![Software and Updates in Ubuntu 17.10][9] - -Under the Ubuntu Software tab, make sure you have checked all of the Main, Universe, Restricted and Multiverse repository checked. - -![Setting repositories in Ubuntu 18.04][10] - -Now move to the **Other Software** tab, check the option of **Canonical Partners**. - -![Enable Canonical Partners repository in Ubuntu 17.10][11] - -You’ll have to enter your password in order to update the software sources. Once it completes, you’ll find more applications to install in the Software Center. - -#### 3\. Install media codecs - -In order to play media files like MP#, MPEG4, AVI etc, you’ll need to install media codecs. Ubuntu has them in their repository but doesn’t install it by default because of copyright issues in various countries. - -As an individual, you can install these media codecs easily using the Ubuntu Restricted Extra package. Click on the link below to install it from the Software Center. - -[Install Ubuntu Restricted Extras][12] - -Or alternatively, use the command below to install it: -``` -sudo apt install ubuntu-restricted-extras - -``` - -#### 4\. Install software from the Software Center - -Now that you have setup the repositories and installed the codecs, it is time to get software. If you are absolutely new to Ubuntu, please follow this [guide to installing software in Ubuntu][13]. - -There are several ways to install software. The most convenient way is to use the Software Center that has thousands of software available in various categories. You can install them in a few clicks from the software center. - -![Software Center in Ubuntu 17.10 ][14] - -It depends on you what kind of software you would like to install. I’ll suggest some of my favorites here. - - * **VLC** – media player for videos - * **GIMP** – Photoshop alternative for Linux - * **Pinta** – Paint alternative in Linux - * **Calibre** – eBook management tool - * **Chromium** – Open Source web browser - * **Kazam** – Screen recorder tool - * [**Gdebi**][15] – Lightweight package installer for .deb packages - * **Spotify** – For streaming music - * **Skype** – For video messaging - * **Kdenlive** – [Video editor for Linux][16] - * **Atom** – [Code editor][17] for programming - - - -You may also refer to this list of [must-have Linux applications][18] for more software recommendations. - -#### 5\. 
Install software from the Web - -Though Ubuntu has thousands of applications in the software center, you may not find some of your favorite applications despite the fact that they support Linux. - -Many software vendors provide ready to install .deb packages. You can download these .deb files from their website and install it by double-clicking on it. - -[Google Chrome][19] is one such software that you can download from the web and install it. - -#### 6\. Opt out of data collection in Ubuntu 18.04 (optional) - -Ubuntu 18.04 collects some harmless statistics about your system hardware and your system installation preference. It also collects crash reports. - -You’ll be given the option to not send this data to Ubuntu servers when you log in to Ubuntu 18.04 for the first time. - -![Opt out of data collection in Ubuntu 18.04][20] - -If you miss it that time, you can disable it by going to System Settings -> Privacy and then set the Problem Reporting to Manual. - -![Privacy settings in Ubuntu 18.04][21] - -#### 7\. Customize the GNOME desktop (Dock, themes, extensions and more) - -The GNOME desktop looks good in Ubuntu 18.04 but doesn’t mean you cannot change it. - -You can do a few visual changes from the System Settings. You can change the wallpaper of the desktop and the lock screen, you can change the position of the dock (launcher on the left side), change power settings, Bluetooth etc. In short, you can find many settings that you can change as per your need. - -![Ubuntu 17.10 System Settings][22] - -Changing themes and icons are the major way to change the looks of your system. I advise going through the list of [best GNOME themes][23] and [icons for Ubuntu][24]. Once you have found the theme and icon of your choice, you can use them with GNOME Tweaks tool. - -You can install GNOME Tweaks via the Software Center or you can use the command below to install it: -``` -sudo apt install gnome-tweak-tool - -``` - -Once it is installed, you can easily [install new themes and icons][25]. - -![Change theme is one of the must to do things after installing Ubuntu 17.10][26] - -You should also have a look at [use GNOME extensions][27] to further enhance the looks and capabilities of your system. I made this video about using GNOME extensions in 17.10 and you can follow the same for Ubuntu 18.04. - -If you are wondering which extension to use, do take a look at this list of [best GNOME extensions][28]. - -I also recommend reading this article on [GNOME customization in Ubuntu][29] so that you can know the GNOME desktop in detail. - -#### 8\. Prolong your battery and prevent overheating - -Let’s move on to [prevent overheating in Linux laptops][30]. TLP is a wonderful tool that controls CPU temperature and extends your laptops’ battery life in the long run. - -Make sure that you haven’t installed any other power saving application such as [Laptop Mode Tools][31]. You can install it using the command below in a terminal: -``` -sudo apt install tlp tlp-rdw - -``` - -Once installed, run the command below to start it: -``` -sudo tlp start - -``` - -#### 9\. Save your eyes with Nightlight - -Nightlight is my favorite feature in GNOME desktop. Keeping [your eyes safe at night][32] from the computer screen is very important. Reducing blue light helps reducing eye strain at night. - -![flux effect][33] - -GNOME provides a built-in Night Light option, which you can activate in the System Settings. - -Just go to System Settings-> Devices-> Displays and turn on the Night Light option. 
- -![Enabling night light is a must to do in Ubuntu 17.10][34] - -#### 9\. Disable automatic suspend for laptops - -Ubuntu 18.04 comes with a new automatic suspend feature for laptops. If the system is running on battery and is inactive for 20 minutes, it will go in suspend mode. - -I understand that the intention is to save battery life but it is an inconvenience as well. You can’t keep the power plugged in all the time because it’s not good for the battery life. And you may need the system to be running even when you are not using it. - -Thankfully, you can change this behavior. Go to System Settings -> Power. Under Suspend & Power Button section, either turn off the Automatic Suspend option or extend its time period. - -![Disable automatic suspend in Ubuntu 18.04][35] - -You can also change the screen dimming behavior in here. - -#### 10\. System cleaning - -I have written in detail about [how to clean up your Ubuntu system][36]. I recommend reading that article to know various ways to keep your system free of junk. - -Normally, you can use this little command to free up space from your system: -``` -sudo apt autoremove - -``` - -It’s a good idea to run this command every once a while. If you don’t like the command line, you can use a GUI tool like [Stacer][37] or [Bleach Bit][38]. - -#### 11\. Going back to Unity or Vanilla GNOME (not recommended) - -If you have been using Unity or GNOME in the past, you may not like the new customized GNOME desktop in Ubuntu 18.04. Ubuntu has customized GNOME so that it resembles Unity but at the end of the day, it is neither completely Unity nor completely GNOME. - -So if you are a hardcore Unity or GNOMEfan, you may want to use your favorite desktop in its ‘real’ form. I wouldn’t recommend but if you insist here are some tutorials for you: - -#### 12\. Can’t log in to Ubuntu 18.04 after incorrect password? Here’s a workaround - -I noticed a [little bug in Ubuntu 18.04][39] while trying to change the desktop session to Ubuntu Community theme. It seems if you try to change the sessions at the login screen, it rejects your password first and at the second attempt, the login gets stuck. You can wait for 5-10 minutes to get it back or force power it off. - -The workaround here is that after it displays the incorrect password message, click Cancel, then click your name, then enter your password again. - -#### 13\. Experience the Community theme (optional) - -Ubuntu 18.04 was supposed to have a dashing new theme developed by the community. The theme could not be completed so it could not become the default look of Bionic Beaver release. I am guessing that it will be the default theme in Ubuntu 18.10. - -![Ubuntu 18.04 Communitheme][40] - -You can try out the aesthetic theme even today. [Installing Ubuntu Community Theme][41] is very easy. Just look for it in the software center, install it, restart your system and then at the login choose the Communitheme session. - -#### 14\. Get Windows 10 in Virtual Box (if you need it) - -In a situation where you must use Windows for some reasons, you can [install Windows in virtual box inside Linux][42]. It will run as a regular Ubuntu application. - -It’s not the best way but it still gives you an option. You can also [use WINE to run Windows software on Linux][43]. In both cases, I suggest trying the alternative native Linux application first before jumping to virtual machine or WINE. - -#### What do you do after installing Ubuntu? - -Those were my suggestions for getting started with Ubuntu. 
There are many more tutorials that you can find under [Ubuntu 18.04][44] tag. You may go through them as well to see if there is something useful for you. - -Enough from myside. Your turn now. What are the items on your list of **things to do after installing Ubuntu 18.04**? The comment section is all yours. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:https://www.ubuntu.com/ -[2]:https://itsfoss.com/ubuntu-18-04-release-features/ -[3]:https://www.youtube.com/c/itsfoss?sub_confirmation=1 -[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/things-to-after-installing-ubuntu-18-04-featured-800x450.jpeg -[5]:https://www.gnome.org/ -[6]:https://kubuntu.org/ -[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-update-ubuntu-17-10.jpg -[8]:https://help.ubuntu.com/community/Repositories/Ubuntu -[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-updates-ubuntu-17-10.jpg -[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/repositories-ubuntu-18.png -[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-repository-ubuntu-17-10.jpeg -[12]:apt://ubuntu-restricted-extras -[13]:https://itsfoss.com/remove-install-software-ubuntu/ -[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/Ubuntu-software-center-17-10-800x551.jpeg -[15]:https://itsfoss.com/gdebi-default-ubuntu-software-center/ -[16]:https://itsfoss.com/best-video-editing-software-linux/ -[17]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ -[18]:https://itsfoss.com/essential-linux-applications/ -[19]:https://www.google.com/chrome/ -[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/opt-out-of-data-collection-ubuntu-18-800x492.jpeg -[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/privacy-ubuntu-18-04-800x417.png -[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/System-Settings-Ubuntu-17-10-800x573.jpeg -[23]:https://itsfoss.com/best-gtk-themes/ -[24]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/ -[25]:https://itsfoss.com/install-themes-ubuntu/ -[26]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/GNOME-Tweak-Tool-Ubuntu-17-10.jpeg -[27]:https://itsfoss.com/gnome-shell-extensions/ -[28]:https://itsfoss.com/best-gnome-extensions/ -[29]:https://itsfoss.com/gnome-tricks-ubuntu/ -[30]:https://itsfoss.com/reduce-overheating-laptops-linux/ -[31]:https://wiki.archlinux.org/index.php/Laptop_Mode_Tools -[32]:https://itsfoss.com/night-shift-flux-ubuntu-linux/ -[33]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/03/flux-eyes-strain.jpg -[34]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/Enable-Night-Light-Feature-Ubuntu-17-10-800x396.jpeg -[35]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/disable-automatic-suspend-ubuntu-18-800x586.jpeg -[36]:https://itsfoss.com/free-up-space-ubuntu-linux/ -[37]:https://itsfoss.com/optimize-ubuntu-stacer/ -[38]:https://itsfoss.com/bleachbit-2-release/ 
-[39]:https://gitlab.gnome.org/GNOME/gnome-shell/issues/227 -[40]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/ubunt-18-theme.jpeg -[41]:https://itsfoss.com/ubuntu-community-theme/ -[42]:https://itsfoss.com/install-windows-10-virtualbox-linux/ -[43]:https://itsfoss.com/use-windows-applications-linux/ -[44]:https://itsfoss.com/tag/ubuntu-18-04/ diff --git a/sources/tech/20180428 A Beginners Guide To Flatpak.md b/sources/tech/20180428 A Beginners Guide To Flatpak.md deleted file mode 100644 index db1dfa8181..0000000000 --- a/sources/tech/20180428 A Beginners Guide To Flatpak.md +++ /dev/null @@ -1,311 +0,0 @@ -A Beginners Guide To Flatpak -====== - -![](https://www.ostechnix.com/wp-content/uploads/2016/06/flatpak-720x340.jpg) - -A while, we have written about [**Ubuntu’s Snaps**][1]. Snaps are introduced by Canonical for Ubuntu operating system, and later it was adopted by other Linux distributions such as Arch, Gentoo, and Fedora etc. A snap is a single binary package bundled with all required libraries and dependencies, and you can install it on any Linux distribution, regardless of its version and architecture. Similar to Snaps, there is also another tool called **Flatpak**. As you may already know, packaging distributed applications for different Linux distributions are quite time consuming and difficult process. Each distributed application has different set of libraries and dependencies for various Linux distributions. But, Flatpak, the new framework for desktop applications that completely reduces this burden. Now, you can build a single Flatpak app and install it on various operating systems. How cool, isn’t it? - -Also, the users don’t have to worry about the libraries and dependencies, everything is bundled within the app itself. Most importantly, Flaptpak apps are sandboxed and isolated from the rest of the host operating system, and other applications. Another notable feature is we can install multiple versions of the same application at the same time in the same system. For example, you can install VLC player version 2.1, 2.2, and 2.3 on the same system. So, the developers can test different versions of same application at a time. - -In this tutorial, we will see how to install Flatpak in GNU/Linux. - -### Install Flatpak - -Flatpak is available for many popular Linux distributions such as Arch Linux, Debian, Fedora, Gentoo, Red Hat, Linux Mint, openSUSE, Solus, Mageia and Ubuntu distributions. - -To install Flatpak on Arch Linux, run: -``` -$ sudo pacman -S flatpak - -``` - -Flatpak is available in the default repositories of Debian Stretch and newer. To install it, run: -``` -$ sudo apt install flatpak - -``` - -On Fedora, Flatpak is installed by default. All you have to do is enable enable Flathub as described in the next section. - -Just in case, it is not installed for any reason, run: -``` -$ sudo dnf install flatpak - -``` - -On RHEL 7, run: -``` -$ sudo yum install flatpak - -``` - -On Linux Mint 18.3, flatpak is installed by default. So, no setup required. - -On openSUSE Tumbleweed, Flatpak can also be installed using Zypper: -``` -$ sudo zypper install flatpak - -``` - -On Ubuntu, add the following repository and install Flatpak as shown below. -``` -$ sudo add-apt-repository ppa:alexlarsson/flatpak - -$ sudo apt update - -$ sudo apt install flatpak - -``` - -The Flatpak plugin for the Software app makes it possible to install apps without needing the command line. 
To install this plugin, run: -``` -$ sudo apt install gnome-software-plugin-flatpak - -``` - -For other Linux distributions, refer the official installation [**link**][2]. - -### Getting Started With Flatpak - -There are many popular applications such as Gimp, Kdenlive, Steam, Spotify, Visual studio code etc., available as flatpaks. - -Let us now see the basic usage of flatpak command. - -First of all, we need to add remote repositories. - -#### Adding Remote Repositories** - -**Enable Flathub Repository:** - -**Flathub** is nothing but a central repository where all flatpak applications available to users. To enable it, just run: -``` -$ sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo - -``` - -Flathub is enough to install most popular apps. Just in case you wanted to try some GNOME apps, add the GNOME repository. - -**Enable GNOME Repository:** - -The GNOME repository contains all GNOME core applications. GNOME flatpak repository itself is available as two versions, **stable** and **nightly**. - -To add GNOME stable repository, run the following commands: -``` -$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg - -$ sudo flatpak remote-add --gpg-import=gnome-sdk.gpg --if-not-exists gnome-apps https://sdk.gnome.org/repo-apps/ - -``` - -Applications in this repository require the **3.20 version of the org.gnome.Platform runtime**. - -To install the stable runtimes, run: -``` -$ sudo flatpak remote-add --gpg-import=gnome-sdk.gpg gnome https://sdk.gnome.org/repo/ - -``` - -To add the GNOME nightly apps repository, run: -``` -$ wget https://sdk.gnome.org/nightly/keys/nightly.gpg - -$ sudo flatpak remote-add --gpg-import=nightly.gpg --if-not-exists gnome-nightly-apps https://sdk.gnome.org/nightly/repo-apps/ - -``` - -Applications in this repository require the **nightly version of the org.gnome.Platform runtime**. - -To install the nightly runtimes, run: -``` -$ sudo flatpak remote-add --gpg-import=nightly.gpg gnome-nightly https://sdk.gnome.org/nightly/repo/ - -``` - -#### Listing Remotes - -To list all configured remote repositories, run: -``` -$ flatpak remotes -Name Options -flathub system -gnome system -gnome-apps system -gnome-nightly system -gnome-nightly-apps system - -``` - -As you can see, the above command lists the remotes that you have added in your system. It also lists whether the remote has been added per-user or system-wide. - -#### Removing Remotes - -To remove a remote, for example flathub, simply do; -``` -$ sudo flatpak remote-delete flathub - -``` - -Here **flathub** is remote name. - -#### Installing Flatpak Applications - -In this section, we will see how to install flatpak apps. To install a flatpak application - -To install an application, simply do: -``` -$ sudo flatpak install flathub com.spotify.Client - -``` - -All the apps in the GNOME stable repository uses the version name of “stable”. - -To install any Stable GNOME applications, for example **Evince** , run: -``` -$ sudo flatpak install gnome-apps org.gnome.Evince stable - -``` - -All the apps in the GNOME nightly repository uses the version name of “master”. - -For example, to install gedit, run: -``` -$ sudo flatpak install gnome-nightly-apps org.gnome.gedit master - -``` - -If you don’t want to install apps system-wide, you also can install flatpak apps per-user like below. -``` -$ flatpak install --user - -``` - -All installed apps will be stored in **$HOME/.var/app/** location. 
-``` -$ ls $HOME/.var/app/ -com.spotify.Client - -``` - -#### Running Flatpak Applications - -You can launch the installed applications at any time from the application launcher. From command line, you can run it, for example Spotify, using command: -``` -$ flatpak run com.spotify.Client - -``` - -#### Listing Applications - -To view the installed applications and runtimes, run: -``` -$ flatpak list - -``` - -To view only the applications, not run times, use this command instead. -``` -$ flatpak list --app - -``` - -You can also view the list of available applications and runtimes from all remotes using command: -``` -$ flatpak remote-ls - -``` - -To list only applications not runtimes, run: -``` -$ flatpak remote-ls --app - -``` - -To list applications and runtimes from a specific repository, for example **gnome-apps** , run: -``` -$ flatpak remote-ls gnome-apps - -``` - -To list only the applications from a remote repository, run: -``` -$ flatpak remote-ls flathub --app - -``` - -#### Updating Applications - -To update all your flatpak applications, run: -``` -$ flatpak update - -``` - -To update a specific application, we do: -``` -$ flatpak update com.spotify.Client - -``` - -#### Getting Details Of Applications - -To display the details of a installed application, run: -``` -$ flatpak info io.github.mmstick.FontFinder - -``` - -Sample output: -``` -Ref: app/io.github.mmstick.FontFinder/x86_64/stable -ID: io.github.mmstick.FontFinder -Arch: x86_64 -Branch: stable -Origin: flathub -Date: 2018-04-11 15:10:31 +0000 -Subject: Workaround appstream issues (391ef7f5) -Commit: 07164e84148c9fc8b0a2a263c8a468a5355b89061b43e32d95008fc5dc4988f4 -Parent: dbff9150fce9fdfbc53d27e82965010805f16491ec7aa1aa76bf24ec1882d683 -Location: /var/lib/flatpak/app/io.github.mmstick.FontFinder/x86_64/stable/07164e84148c9fc8b0a2a263c8a468a5355b89061b43e32d95008fc5dc4988f4 -Installed size: 2.5 MB -Runtime: org.gnome.Platform/x86_64/3.28 - -``` - -#### Removing Applications - -To remove a flatpak application, run: -``` -$ sudo flatpak uninstall com.spotify.Client - -``` - -For details, refer flatpak help section. -``` -$ flatpak --help - -``` - -And, that’s all for now. Hope you had basic idea about Flatpak. - -If you find this guide useful, please share it on your social, professional networks and support OSTechNix. - -More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:http://www.ostechnix.com/introduction-ubuntus-snap-packages/ -[2]:https://flatpak.org/setup/ diff --git a/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md b/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md deleted file mode 100644 index 9bda5fa335..0000000000 --- a/sources/tech/20180429 The Easiest PDO Tutorial (Basics).md +++ /dev/null @@ -1,170 +0,0 @@ -The Easiest PDO Tutorial (Basics) -====== - -![](http://www.theitstuff.com/wp-content/uploads/2018/04/php-language.jpg) - -Approximately 80% of the web is powered by PHP. And similarly, high number goes for SQL as well. 
Up until PHP version 5.5, we had the **mysql_** commands for accessing mysql databases but they were eventually deprecated due to insufficient security. - -This happened with PHP 5.5 in 2013 and as I write this article, the year is 2018 and we are on PHP 7.2. The deprecation of mysql**_** brought 2 major ways of accessing the database, the **mysqli** and the **PDO** libraries. - -Now though the mysqli library was the official successor, PDO gained more fame due to a simple reason that mysqli could only support mysql databases whereas PDO could support 12 different types of database drivers. Also, PDO had several more features that made it the better choice for most developers. You can see some of the feature comparisons in the table below; - -| | PDO | MySQLi | -| Database support** | 12 drivers | Only MySQL | -| Paradigm | OOP | Procedural + OOP | -| Prepared Statements Client Side) | Yes | No | -| Named Parameters | Yes | No | - -Now I guess it is pretty clear why PDO is the choice for most developers, so let’s dig into it and hopefully we will try to cover most of the PDO you need in this article itself. - -### Connection - -The first step is connecting to the database and since PDO is completely Object Oriented, we will be using the instance of a PDO class. - -The first thing we do is we define the host, database name, user, password and the database charset. - -`$host = 'localhost';` - -`$db = 'theitstuff';` - -`$user = 'root';` - -`$pass = 'root';` - -`$charset = 'utf8mb4';` - -`$dsn = "mysql:host=$host;dbname=$db;charset=$charset";` - -`$conn = new PDO($dsn, $user, $pass);` - -After that, as you can see in the code above we have created the **DSN** variable, the DSN variable is simply a variable that holds the information about the database. For some people running mysql on external servers, you could also adjust your port number by simply supplying a **port=$port_number**. - -Finally, you can create an instance of the PDO class, I have used the **$conn** variable and I have supplied the **$dsn, $user, $pass** parameters. If you have followed this, you should now have an object named $conn that is an instance of the PDO connection class. Now it’s time to get into the database and run some queries. - -### A simple SQL Query - -Let us now run a simple SQL query. - -`$tis = $conn->query('SELECT name, age FROM students');` - -`while ($row = $tis->fetch())` - -`{` - -`echo $row['name']."\t";` - -`echo $row['age'];` - -`echo "
                  ";` - -`}` - -This is the simplest form of running a query with PDO. We first created a variable called **tis( **short for TheITStuff** )** and then you can see the syntax as we used the query function from the $conn object that we had created. - -We then ran a while loop and created a **$row** variable to fetch the contents from the **$tis** object and finally echoed out each row by calling out the column name. - -Easy wasn’t it ?. Now let’s get to the prepared statement. - -### Prepared Statements - -Prepared statements were one of the major reasons people started using PDO as it had prepared statements that could prevent SQL injections. - -There are 2 basic methods available, you could either use positional or named parameters. - -#### Position parameters - -Let us see an example of a query using positional parameters. - -`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");` - -`$tis->bindValue(1,'mike');` - -`$tis->bindValue(2,22);` - -`$tis->execute();` - -In the above example, we have placed 2 question marks and later used the **bindValue()** function to map the values into the query. The values are bound to the position of the question mark in the statement. - -I could also use variables instead of directly supplying values by using the **bindParam()** function and example for the same would be this. - -`$name='Rishabh'; $age=20;` - -`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(?, ?)");` - -`$tis->bindParam(1,$name);` - -`$tis->bindParam(2,$age);` - -`$tis->execute();` - -### Named Parameters - -Named parameters are also prepared statements that map values/variables to a named position in the query. Since there is no positional binding, it is very efficient in queries that use the same variable multiple time. - -`$name='Rishabh'; $age=20;` - -`$tis = $conn->prepare("INSERT INTO STUDENTS(name, age) values(:name, :age)");` - -`$tis->bindParam(':name', $name);` - -`$tis->bindParam(':age', $age);` - -`$tis->execute();` - -The only change you can notice is that I used **:name** and **:age** as placeholders and then mapped variables to them. The colon is used before the parameter and it is of extreme importance to let PDO know that the position is for a variable. - -You can similarly use **bindValue()** to directly map values using Named parameters as well. - -### Fetching the Data - -PDO is very rich when it comes to fetching data and it actually offers a number of formats in which you can get the data from your database. - -You can use the **PDO::FETCH_ASSOC** to fetch associative arrays, **PDO::FETCH_NUM** to fetch numeric arrays and **PDO::FETCH_OBJ** to fetch object arrays. - -`$tis = $conn->prepare("SELECT * FROM STUDENTS");` - -`$tis->execute();` - -`$result = $tis->fetchAll(PDO::FETCH_ASSOC);` - -You can see that I have used **fetchAll** since I wanted all matching records. If only one row is expected or desired, you can simply use **fetch.** - -Now that we have fetched the data it is time to loop through it and that is extremely easy. - -`foreach($result as $lnu){` - -`echo $lnu['name'];` - -`echo $lnu['age']."
                  ";` - -`}` - -You can see that since I had requested associative arrays, I am accessing individual members by their names. - -Though there is absolutely no problem in defining how you want your data delivered, you could actually set one as default when defining the connection variable itself. - -All you need to do is create an options array where you put in all your default configs and simply pass the array in the connection variable. - -`$options = [` - -` PDO::ATTR_DEFAULT_FETCH_MODE => PDO::FETCH_ASSOC,` - -`];` - -`$conn = new PDO($dsn, $user, $pass, $options);` - -This was a very brief and quick intro to PDO we will be making an advanced tutorial soon. If you had any difficulties understanding any part of the tutorial, do let me know in the comment section and I’ll be there for you. - - --------------------------------------------------------------------------------- - -via: http://www.theitstuff.com/easiest-pdo-tutorial-basics - -作者:[Rishabh Kandari][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://www.theitstuff.com/author/reevkandari diff --git a/sources/tech/20180503 11 Methods To Find System-Server Uptime In Linux.md b/sources/tech/20180503 11 Methods To Find System-Server Uptime In Linux.md deleted file mode 100644 index 7a30127b07..0000000000 --- a/sources/tech/20180503 11 Methods To Find System-Server Uptime In Linux.md +++ /dev/null @@ -1,193 +0,0 @@ -11 Methods To Find System/Server Uptime In Linux -====== -Do you want to know, how long your Linux system has been running without downtime? when the system is up and what date. - -There are multiple commands is available in Linux to check server/system uptime and most of users prefer the standard and very famous command called `uptime` to get this details. - -Server uptime is not important for some people but it’s very important for server administrators when the server running with mission-critical applications such as online shopping portal, netbanking portal, etc,. - -It must be zero downtime because if there is a down time then it will impact badly to million users. - -As i told, many commands are available to check server uptime in Linux. In this tutorial we are going teach you how to check this using below 11 methods. - -Uptime means how long the server has been up since its last shutdown or reboot. - -The uptime command the fetch the details from `/proc` files and print the server uptime, the `/proc` file is not directly readable by humans. - -The below commands will print how long the system has been running and up. It also shows some additional information. - -### Method-1 : Using uptime Command - -uptime command will tell how long the system has been running. It gives a one line display of the following information. - -The current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes. -``` -# uptime - - 08:34:29 up 21 days, 5:46, 1 user, load average: 0.06, 0.04, 0.00 - -``` - -### Method-2 : Using w Command - -w command provides a quick summary of every user logged into a computer, what each user is currently doing, -and what load all the activity is imposing on the computer itself. The command is a one-command combination of several other Unix programs: who, uptime, and ps -a. 
-``` -# w - - 08:35:14 up 21 days, 5:47, 1 user, load average: 0.26, 0.09, 0.02 -USER TTY FROM [email protected] IDLE JCPU PCPU WHAT -root pts/1 103.5.134.167 08:34 0.00s 0.01s 0.00s w - -``` - -### Method-3 : Using top Command - -Top command is one of the basic command to monitor real-time system processes in Linux. It display system information and running processes information like uptime, average load, tasks running, number of users logged in, number of CPUs & cpu utilization, Memory & swap information. Run top command then hit E to bring the memory utilization in MB. - -**Suggested Read :** [TOP Command Examples to Monitor Server Performance][1] -``` -# top -c - -top - 08:36:01 up 21 days, 5:48, 1 user, load average: 0.12, 0.08, 0.02 -Tasks: 98 total, 1 running, 97 sleeping, 0 stopped, 0 zombie -Cpu(s): 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st -Mem: 1872888k total, 1454644k used, 418244k free, 175804k buffers -Swap: 2097148k total, 0k used, 2097148k free, 1098140k cached - - PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND - 1 root 20 0 19340 1492 1172 S 0.0 0.1 0:01.04 /sbin/init - 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kthreadd] - 3 root RT 0 0 0 0 S 0.0 0.0 0:00.00 [migration/0] - 4 root 20 0 0 0 0 S 0.0 0.0 0:34.32 [ksoftirqd/0] - 5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 [stopper/0] - -``` - -### Method-4 : Using who Command - -who command displays a list of users who are currently logged into the computer. The who command is related to the command w, which provides the same information but also displays additional data and statistics. -``` -# who -b - -system boot 2018-04-12 02:48 - -``` - -### Method-5 : Using last Command - -The last command displays a list of last logged in users. Last searches back through the file /var/log/wtmp and displays a list of all users logged in (and out) since that file was created. -``` -# last reboot -F | head -1 | awk '{print $5,$6,$7,$8,$9}' - -Thu Apr 12 02:48:04 2018 - -``` - -### Method-6 : Using /proc/uptime File - -This file contains information detailing how long the system has been on since its last restart. The output of `/proc/uptime` is quite minimal. - -The first number is the total number of seconds the system has been up. The second number is how much of that time the machine has spent idle, in seconds. -``` -# cat /proc/uptime - -1835457.68 1809207.16 - -``` - -# date -d “$(Method-7 : Using tuptime Command - -Tuptime is a tool for report the historical and statistical running time of the system, keeping it between restarts. Like uptime command but with more interesting output. -``` -$ tuptime - -``` - -### Method-8 : Using htop Command - -htop is an interactive process viewer for Linux which was developed by Hisham using ncurses library. Htop have many of features and options compared to top command. 
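A quick aside before looking at htop's output: the raw seconds reported by Method-6 (`/proc/uptime`) can be turned into a human-readable boot time with standard tools. The following is a minimal sketch that assumes GNU `date`; it simply drops the fractional part of the uptime value.
```
# Read the whole seconds from /proc/uptime and ask date when that moment was,
# which corresponds to the last boot time.
date -d "$(cut -f1 -d. /proc/uptime) seconds ago"
```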
- -**Suggested Read :** [Monitor system resources using Htop command][2] -``` -# htop - - CPU[| 0.5%] Tasks: 48, 5 thr; 1 running - Mem[||||||||||||||||||||||||||||||||||||||||||||||||||| 165/1828MB] Load average: 0.10 0.05 0.01 - Swp[ 0/2047MB] Uptime: 21 days, 05:52:35 - - PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command -29166 root 20 0 110M 2484 1240 R 0.0 0.1 0:00.03 htop -29580 root 20 0 11464 3500 1032 S 0.0 0.2 55:15.97 /bin/sh ./OSWatcher.sh 10 1 - 1 root 20 0 19340 1492 1172 S 0.0 0.1 0:01.04 /sbin/init - 486 root 16 -4 10780 900 348 S 0.0 0.0 0:00.07 /sbin/udevd -d - 748 root 18 -2 10780 932 360 S 0.0 0.0 0:00.00 /sbin/udevd -d - -``` - -### Method-9 : Using glances Command - -Glances is a cross-platform curses-based system monitoring tool written in Python. We can say all in one place, like maximum of information in a minimum of space. It uses psutil library to get information from your system. - -Glances capable to monitor CPU, Memory, Load, Process list, Network interface, Disk I/O, Raid, Sensors, Filesystem (and folders), Docker, Monitor, Alert, System info, Uptime, Quicklook (CPU, MEM, LOAD), etc,. - -**Suggested Read :** [Glances (All in one Place)– An Advanced Real Time System Performance Monitoring Tool for Linux][3] -``` -glances - -ubuntu (Ubuntu 17.10 64bit / Linux 4.13.0-37-generic) - IP 192.168.1.6/24 Uptime: 21 days, 05:55:15 - -CPU [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 90.6%] CPU - 90.6% nice: 0.0% ctx_sw: 4K MEM \ 78.4% active: 942M SWAP - 5.9% LOAD 2-core -MEM [||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 78.0%] user: 55.1% irq: 0.0% inter: 1797 total: 1.95G inactive: 562M total: 12.4G 1 min: 4.35 -SWAP [|||| 5.9%] system: 32.4% iowait: 1.8% sw_int: 897 used: 1.53G buffers: 14.8M used: 749M 5 min: 4.38 - idle: 7.6% steal: 0.0% free: 431M cached: 273M free: 11.7G 15 min: 3.38 - -NETWORK Rx/s Tx/s TASKS 211 (735 thr), 4 run, 207 slp, 0 oth sorted automatically by memory_percent, flat view -docker0 0b 232b -enp0s3 12Kb 4Kb Systemd 7 Services loaded: 197 active: 196 failed: 1 -lo 616b 616b -_h478e48e 0b 232b CPU% MEM% VIRT RES PID USER NI S TIME+ R/s W/s Command - 63.8 18.9 2.33G 377M 2536 daygeek 0 R 5:57.78 0 0 /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 -DefaultGateway 83ms 78.5 10.9 3.46G 217M 2039 daygeek 0 S 21:07.46 0 0 /usr/bin/gnome-shell - 8.5 10.1 2.32G 201M 2464 daygeek 0 S 8:45.69 0 0 /usr/lib/firefox/firefox -new-window -DISK I/O R/s W/s 1.1 8.5 2.19G 170M 2653 daygeek 0 S 2:56.29 0 0 /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 -dm-0 0 0 1.7 7.2 2.15G 143M 2880 daygeek 0 S 7:10.46 0 0 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 -sda1 9.46M 12K 0.0 4.9 1.78G 97.2M 6125 daygeek 0 S 1:36.57 0 0 /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 - -``` - -### Method-10 : Using stat Command - -stat command displays the detailed status of a particular file or a file system. -``` -# stat /var/log/dmesg | grep Modify - -Modify: 2018-04-12 02:48:04.027999943 -0400 - -``` - -### Method-11 : Using procinfo Command - -procinfo gathers some system data from the /proc directory and prints it nicely formatted on the standard output device. 
-``` -# procinfo | grep Bootup - -Bootup: Fri Apr 20 19:40:14 2018 Load average: 0.16 0.05 0.06 1/138 16615 - -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/ - -作者:[Magesh Maruthamuthu][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/magesh/ -[1]:https://www.2daygeek.com/top-command-examples-to-monitor-server-performance/ -[2]:https://www.2daygeek.com/htop-command-examples-to-monitor-system-resources/ -[3]:https://www.2daygeek.com/install-glances-advanced-real-time-linux-system-performance-monitoring-tool-on-centos-fedora-ubuntu-debian-opensuse-arch-linux/ diff --git a/sources/tech/20180507 Modularity in Fedora 28 Server Edition.md b/sources/tech/20180507 Modularity in Fedora 28 Server Edition.md new file mode 100644 index 0000000000..0b5fb0415b --- /dev/null +++ b/sources/tech/20180507 Modularity in Fedora 28 Server Edition.md @@ -0,0 +1,76 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Modularity in Fedora 28 Server Edition) +[#]: via: (https://fedoramagazine.org/wp-content/uploads/2018/05/f28-server-modularity-816x345.jpg) +[#]: author: (Stephen Gallagher https://fedoramagazine.org/author/sgallagh/) + +Modularity in Fedora 28 Server Edition +====== + +![](https://fedoramagazine.org/wp-content/uploads/2018/05/f28-server-modularity-816x345.jpg) + +### What is Modularity? + +A classic conundrum that all open-source distributions have faced is the “too fast/too slow” problem. Users install an operating system in order to enable the use of their applications. A comprehensive distribution like Fedora has an advantage and a disadvantage to the large amount of available software. While the package the user wants may be available, it might not be available in the version needed. Here’s how Modularity can help solve that problem. + +Fedora sometimes moves too fast for some users. Its rapid release cycle and desire to carry the latest stable software can result in breaking compatibility with applications. If a user can’t run a web application because Fedora upgraded a web framework to an incompatible version, it can be very frustrating. The classic answer to the “too fast” problem has been “Fedora should have an LTS release.” However, this approach only solves half the problem and makes the flip side of this conundrum worse. + +There are also times when Fedora moves too slowly for some of its users. For example, a Fedora release may be poorly-timed alongside the release of other desirable software. Once a Fedora release is declared stable, packagers must abide by the [Stable Updates Policy][1] and not introduce incompatible changes into the system. + +Fedora Modularity addresses both sides of this problem. Fedora will still ship a standard release under its traditional policy. However, it will also ship a set of modules that define alternative versions of popular software. Those in the “too fast” camp still have the benefit of Fedora’s newer kernel and other general platform enhancements. In addition, they still have access to older frameworks or toolchains that support their applications. 
+ +In addition, those users who like to live closer to the edge can access newer software than was available at release time. + +### What is Modularity not? + +Modularity is not a drop-in replacement for [Software Collections][2]. These two technologies try to solve many of the same problems, but have distinct differences. + +Software Collections install different versions of packages in parallel on the system. However, their downside is that each installation exists in its own namespaced portion of the filesystem. Furthermore, each application that relies on them needs to be told where to find them. + +With Modularity, only one version of a package exists on the system, but the user can choose which one. The advantage is that this version lives in a standard location on the system. The package requires no special changes to applications that rely on it. Feedback from user studies shows most users don’t actually rely on parallel installation. Containerization and virtualization solve that problem. + +### Why not just use containers? + +This is another common question. Why would a user want modules when they could just use containers? The answer is, someone still has to maintain the software in the containers. Modules provide pre-packaged content for those containers that users don’t need to maintain, update and patch on their own. This is how Fedora takes the traditional value of a distribution and moves it into the new, containerized world. + +Here’s an example of how Modularity solves problems for users of Node.js and Review Board. + +### Node.js + +Many readers may be familiar with Node.js, a popular server-side JavaScript runtime. Node.js has an even/odd release policy. Its community supports even-numbered releases (6.x, 8.x, 10.x, etc.) for around 30 months. Meanwhile, they support odd-numbered releases that are essentially developer previews for 9 months. + +Due to this cycle, Fedora carried only the most recent even-numbered version of Node.js in its stable repositories. It avoided the odd-numbered versions entirely since their lifecycle was shorter than Fedora, and generally not aligned with a Fedora release. This didn’t sit well with some Fedora users, who wanted access to the latest and greatest enhancements. + +Thanks to Modularity, Fedora 28 shipped with not just one, but three versions of Node.js to satisfy both developers and stable deployments. Fedora 28’s traditional repository shipped with Node.js 8.x. This version was the most recent long-term stable version at release time. The Modular repositories (available by default on Fedora 28 Server edition) also made the older Node.js 6.x release and the newer Node.js 9.x development release available. + +Additionally, Node.js released 10.x upstream just days after Fedora 28\. In the past, users who wanted to deploy that version had to wait until Fedora 29, or use sources from outside Fedora. However, thanks again to Modularity, Node.js 10.x is already [available][3] in the Modular Updates-Testing repository for Fedora 28. + +### Review Board + +Review Board is a popular Django application for performing code reviews. Fedora included Review Board from Fedora 13 all the way until Fedora 21\. At that point, Fedora moved to Django 1.7\. Review Board was unable to keep up, due to backwards-incompatible changes in Django’s database support. It remained alive in EPEL for RHEL/CentOS 7, simply because those releases had fortunately frozen on Django 1.6\. Nevertheless, its time in Fedora was apparently over. 
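Selecting a stream, whether for the Node.js streams described above or for the Review Board module discussed next, is done through dnf's module subcommands on a Fedora 28 Server system. Here is a hedged sketch of that workflow; the exact stream names are illustrative and can be confirmed with the list command.
```
# Show which streams a module offers, then install one of them
sudo dnf module list nodejs
sudo dnf module install nodejs:10
```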
+ +However, with the advent of Modularity, Fedora could again ship the older Django as a non-default module stream. As a result, Review Board has been restored to Fedora as a module. Fedora carries both supported releases from upstream: 2.5.x and 3.0.x. + +### Putting the pieces together + +Fedora has always provided users with a wide range of software to use. Fedora Modularity now provides them with deeper choices for which versions of the software they need. The next few years will be very exciting for Fedora, as developers and users start putting together their software in new and exciting (or old and exciting) ways. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/wp-content/uploads/2018/05/f28-server-modularity-816x345.jpg + +作者:[Stephen Gallagher][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/sgallagh/ +[b]: https://github.com/lujun9972 +[1]: https://fedoraproject.org/wiki/Updates_Policy#Stable_Releases +[2]: https://www.softwarecollections.org +[3]: https://bodhi.fedoraproject.org/updates/FEDORA-MODULAR-2018-2b0846cb86 diff --git a/sources/tech/20180514 An introduction to the Pyramid web framework for Python.md b/sources/tech/20180514 An introduction to the Pyramid web framework for Python.md new file mode 100644 index 0000000000..a16e604774 --- /dev/null +++ b/sources/tech/20180514 An introduction to the Pyramid web framework for Python.md @@ -0,0 +1,617 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (An introduction to the Pyramid web framework for Python) +[#]: via: (https://opensource.com/article/18/5/pyramid-framework) +[#]: author: (Nicholas Hunt-Walker https://opensource.com/users/nhuntwalker) +[#]: url: ( ) + +An introduction to the Pyramid web framework for Python +====== +In the second part in a series comparing Python frameworks, learn about Pyramid. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pyramid.png?itok=hX73LWtl) + +In the [first article][1] in this four-part series comparing different Python web frameworks, I explained how to create a To-Do List web application in the [Flask][2] web framework. In this second article, I'll do the same task with the [Pyramid][3] web framework. Future articles will look at [Tornado][4] and [Django][5]; as I go along, I'll explore more of the differences among them. + +### Installing, starting up, and doing configuration + +Self-described as "the start small, finish big, stay finished framework," Pyramid is much like Flask in that it takes very little effort to get it up and running. In fact, you'll recognize many of the same patterns as you build out this application. The major difference between the two, however, is that Pyramid comes with several useful utilities, which I'll describe shortly. + +To get started, create a virtual environment and install the package. + +``` +$ mkdir pyramid_todo +$ cd pyramid_todo +$ pipenv install --python 3.6 +$ pipenv shell +(pyramid-someHash) $ pipenv install pyramid +``` + +As with Flask, it's smart to create a `setup.py` file to make the app you build an easily installable Python distribution. 
+ +``` +# setup.py +from setuptools import setup, find_packages + +requires = [ +    'pyramid', +    'paster_pastedeploy', +    'pyramid-ipython', +    'waitress' +] + +setup( +    name='pyramid_todo', +    version='0.0', +    description='A To-Do List build with Pyramid', +    author='', +    author_email='', +    keywords='web pyramid pylons', +    packages=find_packages(), +    include_package_data=True, +    install_requires=requires, +    entry_points={ +        'paste.app_factory': [ +            'main = todo:main', +        ] +    } +) +``` + +`entry_points` section near the end sets up entry points into the application that other services can use. This allows the `plaster_pastedeploy` package to access what will be the `main` function in the application for building an application object and serving it. (I'll circle back to this in a bit.) + +Thesection near the end sets up entry points into the application that other services can use. This allows thepackage to access what will be thefunction in the application for building an application object and serving it. (I'll circle back to this in a bit.) + +When you installed `pyramid`, you also gained a few Pyramid-specific shell commands; the main ones to pay attention to are `pserve` and `pshell`. `pserve` will take an INI-style configuration file specified as an argument and serve the application locally. `pshell` will also take a configuration file as an argument, but instead of serving the application, it'll open up a Python shell that is aware of the application and its internal configuration. + +The configuration file is pretty important, so it's worth a closer look. Pyramid can take its configuration from environment variables or a configuration file. To avoid too much confusion around what is where, in this tutorial you'll write most of your configuration in the configuration file, with only a select few, sensitive configuration parameters set in the virtual environment. + +Create a file called `config.ini` + +``` +[app:main] +use = egg:todo +pyramid.default_locale_name = en + +[server:main] +use = egg:waitress#main +listen = localhost:6543 +``` + +This says a couple of things: + + * The actual application will come from the `main` function located in the `todo` package installed in the environment + * To serve this app, use the `waitress` package installed in the environment and serve on localhost port 6543 + + + +When serving an application and working in development, it helps to set up logging so you can see what's going on. The following configuration will handle logging for the application: + +``` +# continuing on... +[loggers] +keys = root, todo + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = INFO +handlers = console + +[logger_todo] +level = DEBUG +handlers = +qualname = todo + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(asctime)s %(levelname)-5.5s [%(name)s:%(lineno)s][%(threadName)s] %(message)s +``` + +In short, this configuration asks to log everything to do with the application to the console. If you want less output, set the logging level to `WARN` so a message will fire only if there's a problem. + +Because Pyramid is meant for an application that grows, plan out a file structure that could support that growth. Web applications can, of course, be built however you want. 
In general, the conceptual blocks you'll want to cover will contain: + + * **Models** for containing the code and logic for dealing with data representations + * **Views** for code and logic pertaining to the request-response cycle + * **Routes** for the paths for access to the functionality of your application + * **Scripts** for any code that might be used in configuration or management of the application itself + + + +Given the above, the file structure can look like so: + +``` +setup.py +config.ini +todo/ +    __init__.py +    models.py +    routes.py +    views.py +    scripts/ +``` + +Much like Flask's `app` object, Pyramid has its own central configuration. It comes from its `config` module and is known as the `Configurator` object. This object will handle everything from route configuration to pointing to where models and views exist. All this is done in an inner directory called `todo` within an `__init__.py` file. + +``` +# todo/__init__.py + +from pyramid.config import Configurator + +def main(global_config, **settings): +    """Returns a Pyramid WSGI application.""" +    config = Configurator(settings=settings) +    config.scan() +    return config.make_wsgi_app() +``` + +The `main` function looks for some global configuration from your environment as well as any settings that came through the particular configuration file you provide when you run the application. It takes those settings and uses them to build an instance of the `Configurator` object, which (for all intents and purposes) is the factory for your application. Finally, `config.scan()` looks for any views you'd like to attach to your application that are marked as Pyramid views. + +Wow, that was a lot to configure. + +### Using routes and views + +Now that a chunk of the configuration is done, you can start adding functionality to the application. Functionality comes in the form of URL routes that external clients can hit, which then map to functions that Python can run. + +With Pyramid, all functionality must be added to the `Configurator` in some way, shape, or form. For example, say you want to build the same simple `hello_world` view that you built with Flask, mapping to the route of `/`. With Pyramid, you can register the `/` route with the `Configurator` using the `.add_route()` method. This method takes as arguments the name of the route that you want to add as well as the actual pattern that must be matched to access that route. For this case, add the following to your `Configurator`: + +``` +config.add_route('home', '/') +``` + +Until you create a view and attach it to that route, that path into your application sits open and alone. When you add the view, make sure to include the `request` object in the parameter list. Every Pyramid view must have the `request` object as its first parameter, as that's what's being passed as the first argument to the view when it's called by Pyramid. + +One similarity that Pyramid views share with Flask is that you can mark a function as a view with a decorator. Specifically, the `@view_config` decorator from `pyramid.view`. + +In `views.py`, build the view that you want to see in the world. + +``` +from pyramid.view import view_config + +@view_config(route_name="hello", renderer="string") +def hello_world(request): +    """Print 'Hello, world!' as the response body.""" +    return 'Hello, world!' +``` + +With the `@view_config` decorator, you have to at least specify the name of the route that will map to this particular view. 
You can stack `view_config` decorators on top of one another to map to multiple routes if you want, but you have to have at least one to connect view the view at all, and each one must include the name of a route. **[NOTE: Is "to connect view the view" phrased correctly?]** + +The other argument, `renderer`, is optional but not really. If you don't specify a renderer, you have to deliberately construct the HTTP response you want to send back to the client using the `Response` object from `pyramid.response`. By specifying the `renderer` as a string, Pyramid knows to take whatever is returned by this function and wrap it in that same `Response` object with the MIME type of `text/plain`. By default, Pyramid allows you to use `string` and `json` as renderers. If you've attached a templating engine to your application because you want to have Pyramid generate your HTML as well, you can point directly to your HTML template as your renderer. + +The first view is done. Here's what `__init__.py` looks like now with the attached route. + +``` +# in __init__.py +from pyramid.config import Configurator + +def main(global_config, **settings): +    """Returns a Pyramid WSGI application.""" +    config = Configurator(settings=settings) +    config.add_route('hello', '/') +    config.scan() +    return config.make_wsgi_app() +``` + +Spectacular! Getting here was no easy feat, but now that you're set up, you can add functionality with significantly less difficulty. + +### Smoothing a rough edge + +Right now the application only has one route, but it's easy to see that a large application can have many dozens or even hundreds of routes. Containing them all in the same `main` function with your central configuration isn't really the best idea, because it would become cluttered. Thankfully, it's fairly easy to include routes with a few tweaks to the application. + +**One** : In the `routes.py` file, create a function called `includeme` (yes, it must actually be named this) that takes a configurator object as an argument. + +``` +# in routes.py +def includeme(config): +    """Include these routes within the application.""" +``` + +**Two** : Move the `config.add_route` method call from `__init__.py` into the `includeme` function: + +``` +def includeme(config): +    """Include these routes within the application.""" +    config.add_route('hello', '/') +``` + +**Three** : Alert the Configurator that you need to include this `routes.py` file as part of its configuration. Because it's in the same directory as `__init__.py`, you can get away with specifying the import path to this file as `.routes`. + +``` +# in __init__.py +from pyramid.config import Configurator + +def main(global_config, **settings): +    """Returns a Pyramid WSGI application.""" +    config = Configurator(settings=settings) +    config.include('.routes') +    config.scan() +    return config.make_wsgi_app() +``` + +### Connecting the database + +As with Flask, you'll want to persist data by connecting a database. Pyramid will leverage [SQLAlchemy][6] directly instead of using a specially tailored package. + +First get the easy part out of the way. `psycopg2` and `sqlalchemy` are required to talk to the Postgres database and manage the models, so add them to `setup.py`. + +``` +# in setup.py +requires = [ +    'pyramid', +    'pyramid-ipython', +    'waitress', +    'sqlalchemy', +    'psycopg2' +] +# blah blah other code +``` + +Now, you have a decision to make about how you'll include the database's URL. 
There's no wrong answer here; what you do will depend on the application you're building and how public your codebase needs to be. + +The first option will keep as much configuration in one place as possible by hard-coding the database URL into the `config.ini` file. One drawback is this creates a security risk for applications with a public codebase. Anyone who can view the codebase will be able to see the full database URL, including username, password, database name, and port. Another is maintainability; if you needed to change environments or the application's database location, you'd have to modify the `config.ini` file directly. Either that or you'll have to maintain one configuration file for each new environment, which adds the potential for discontinuity and errors in the application. **If you choose this option** , modify the `config.ini` file under the `[app:main]` heading to include this key-value pair: + +``` +sqlalchemy.url = postgres://localhost:5432/pyramid_todo +``` + +The second option specifies the location of the database URL when you create the `Configurator`, pointing to an environment variable whose value can be set depending on the environment where you're working. One drawback is that you're further splintering the configuration, with some in the `config.ini` file and some directly in the Python codebase. Another drawback is that when you need to use the database URL anywhere else in the application (e.g., in a database management script), you have to code in a second reference to that same environment variable (or set up the variable in one place and import from that location). **If you choose this option** , add the following: + +``` +# in __init__.py +import os +from pyramid.config import Configurator + +SQLALCHEMY_URL = os.environ.get('DATABASE_URL', '') + +def main(global_config, **settings): +    """Returns a Pyramid WSGI application.""" +    settings['sqlalchemy.url'] = SQLALCHEMY_URL # <-- important! +    config = Configurator(settings=settings) +    config.include('.routes') +    config.scan() +    return config.make_wsgi_app() +``` + +### Defining objects + +OK, so now you have a database. Now you need `Task` and `User` objects. + +Because it uses SQLAlchemy directly, Pyramid differs somewhat from Flash on how objects are built. First, every object you want to construct must inherit from SQLAlchemy's [declarative base class][7]. It'll keep track of everything that inherits from it, enabling simpler management of the database. + +``` +# in models.py +from sqlalchemy.ext.declarative import declarative_base + +Base = declarative_base() + +class Task(Base): +    pass + +class User(Base): +    pass +``` + +The columns, data types for those columns, and model relationships will be declared in much the same way as with Flask, although they'll be imported directly from SQLAlchemy instead of some pre-constructed `db` object. Everything else is the same. 
+ +``` +# in models.py +from datetime import datetime +import secrets + +from sqlalchemy import ( +    Column, Unicode, Integer, DateTime, Boolean, relationship +) +from sqlalchemy.ext.declarative import declarative_base + +Base = declarative_base() + +class Task(Base): +    """Tasks for the To Do list.""" +    id = Column(Integer, primary_key=True) +    name = Column(Unicode, nullable=False) +    note = Column(Unicode) +    creation_date = Column(DateTime, nullable=False) +    due_date = Column(DateTime) +    completed = Column(Boolean, default=False) +    user_id = Column(Integer, ForeignKey('user.id'), nullable=False) +    user = relationship("user", back_populates="tasks") + +    def __init__(self, *args, **kwargs): +        """On construction, set date of creation.""" +        super().__init__(*args, **kwargs) +        self.creation_date = datetime.now() + +class User(Base): +    """The User object that owns tasks.""" +    id = Column(Integer, primary_key=True) +    username = Column(Unicode, nullable=False) +    email = Column(Unicode, nullable=False) +    password = Column(Unicode, nullable=False) +    date_joined = Column(DateTime, nullable=False) +    token = Column(Unicode, nullable=False) +    tasks = relationship("Task", back_populates="user") + +    def __init__(self, *args, **kwargs): +        """On construction, set date of creation.""" +        super().__init__(*args, **kwargs) +        self.date_joined = datetime.now() +        self.token = secrets.token_urlsafe(64) +``` + +Note that there's no `config.include` line for `models.py` anywhere because it's not needed. A `config.include` line is needed only if some part of the application's configuration needs to be changed. This has only created two objects, inheriting from some `Base` class that SQLAlchemy gave us. + +### Initializing the database + +Now that the models are done, you can write a script to talk to and initialize the database. In the `scripts` directory, create two files: `__init__.py` and `initializedb.py`. The first is simply to turn the `scripts` directory into a Python package. The second is the script needed for database management. + +`initializedb.py` needs a function to set up the necessary tables in the database. Like with Flask, this script must be aware of the `Base` object, whose metadata keeps track of every class that inherits from it. The database URL is required to point to and modify its tables. + +As such, this database initialization script will work: + +``` +# initializedb.py +from sqlalchemy import engine_from_config +from todo import SQLALCHEMY_URL +from todo.models import Base + +def main(): +    settings = {'sqlalchemy.url': SQLALCHEMY_URL} +    engine = engine_from_config(settings, prefix='sqlalchemy.') +    if bool(os.environ.get('DEBUG', '')): +        Base.metadata.drop_all(engine) +    Base.metadata.create_all(engine) +``` + +**Important note:** This will work only if you include the database URL as an environment variable in `todo/__init__.py` (the second option above). If the database URL was stored in the configuration file, you'll have to include a few lines to read that file. 
It will look something like this: + +``` +# alternate initializedb.py +from pyramid.paster import get_appsettings +from pyramid.scripts.common import parse_vars +from sqlalchemy import engine_from_config +import sys +from todo.models import Base + +def main(): +    config_uri = sys.argv[1] +    options = parse_vars(sys.argv[2:]) +    settings = get_appsettings(config_uri, options=options) +    engine = engine_from_config(settings, prefix='sqlalchemy.') +    if bool(os.environ.get('DEBUG', '')): +        Base.metadata.drop_all(engine) +    Base.metadata.create_all(engine) +``` + +Either way, in `setup.py`, add a console script that will access and run this function. + +``` +# bottom of setup.py +setup( +    # ... other stuff +    entry_points={ +        'paste.app_factory': [ +            'main = todo:main', +        ], +        'console_scripts': [ +            'initdb = todo.scripts.initializedb:main', +        ], +    } +) +``` + +When this package is installed, you'll have access to a new console script called `initdb`, which will construct the tables in your database. If the database URL is stored in the configuration file, you'll have to include the path to that file when you invoke the command. It'll look like `$ initdb /path/to/config.ini`. + +### Handling requests and the database + +Ok, here's where it gets a little deep. Let's talk about **transactions**. A "transaction," in an abstract sense, is any change made to an existing database. As with Flask, transactions are persisted no sooner than when they are committed. If changes have been made that haven't yet been committed, and you don't want those to occur (maybe there's an error thrown in the process), you can **rollback** a transaction and abort those changes. + +In Python, the [transaction package][8] allows you to interact with transactions as objects, which can roll together multiple changes into one single commit. `transaction` provides **transaction managers** , which give applications a straightforward, thread-aware way of handling transactions so all you need to think about is what to change. The `pyramid_tm` package will take the transaction manager from `transaction` and wire it up in a way that's appropriate for Pyramid's request-response cycle, attaching a transaction manager to every incoming request. + +Normally, with Pyramid the `request` object is populated when the route mapping to a view is accessed and the view function is called. Every view function will have a `request` object to work with**.** However, Pyramid allows you to modify its configuration to add whatever you might need to the `request` object. You can use the transaction manager that you'll be adding to the `request` to create a session with every request and add that session to the request. + +Yay, so why is this important? + +By attaching a transaction-managed session to the `request` object, when the view finishes processing the request, any changes made to the database session will be committed without you needing to explicitly commit**.** Here's what all these concepts look like in code. 
+ +``` +# __init__.py +import os +from pyramid.config import Configurator +from sqlalchemy import engine_from_config +from sqlalchemy.orm import sessionmaker +import zope.sqlalchemy + +SQLALCHEMY_URL = os.environ.get('DATABASE_URL', '') + +def get_session_factory(engine): +    """Return a generator of database session objects.""" +    factory = sessionmaker() +    factory.configure(bind=engine) +    return factory + +def get_tm_session(session_factory, transaction_manager): +    """Build a session and register it as a transaction-managed session.""" +    dbsession = session_factory() +    zope.sqlalchemy.register(dbsession, transaction_manager=transaction_manager) +    return dbsession + +def main(global_config, **settings): +    """Returns a Pyramid WSGI application.""" +    settings['sqlalchemy.url'] = SQLALCHEMY_URL +    settings['tm.manager_hook'] = 'pyramid_tm.explicit_manager' +    config = Configurator(settings=settings) +    config.include('.routes') +    config.include('pyramid_tm') +    session_factory = get_session_factory(engine_from_config(settings, prefix='sqlalchemy.')) +    config.registry['dbsession_factory'] = session_factory +    config.add_request_method( +        lambda request: get_tm_session(session_factory, request.tm), +        'dbsession', +        reify=True +    ) + +    config.scan() +    return config.make_wsgi_app() +``` + +That looks like a lot, but it only did was what was explained above, plus it added an attribute to the `request` object called `request.dbsession`. + +A few new packages were included here, so update `setup.py` with those packages. + +``` +# in setup.py +requires = [ +    'pyramid', +    'pyramid-ipython', +    'waitress', +    'sqlalchemy', +    'psycopg2', +    'pyramid_tm', +    'transaction', +    'zope.sqlalchemy' +] +# blah blah other stuff +``` + +### Revisiting routes and views + +You need to make some real views that handle the data within the database and the routes that map to them. + +Start with the routes. You created the `routes.py` file to handle your routes but didn't do much beyond the basic `/` route. Let's fix that. + +``` +# routes.py +def includeme(config): +    config.add_route('info', '/api/v1/') +    config.add_route('register', '/api/v1/accounts') +    config.add_route('profile_detail', '/api/v1/accounts/{username}') +    config.add_route('login', '/api/v1/accounts/login') +    config.add_route('logout', '/api/v1/accounts/logout') +    config.add_route('tasks', '/api/v1/accounts/{username}/tasks') +    config.add_route('task_detail', '/api/v1/accounts/{username}/tasks/{id}') +``` + +Now, it not only has static URLs like `/api/v1/accounts`, but it can handle some variable URLs like `/api/v1/accounts/{username}/tasks/{id}` where any variable in a URL will be surrounded by curly braces. + +To create the view to create an individual task in your application (like in the Flash example), you can use the `@view_config` decorator to ensure that it only takes incoming `POST` requests and check out how Pyramid handles data from the client. + +Take a look at the code, then check out how it differs from Flask's version. 
+ +``` +# in views.py +from datetime import datetime +from pyramid.view import view_config +from todo.models import Task, User + +INCOMING_DATE_FMT = '%d/%m/%Y %H:%M:%S' + +@view_config(route_name="tasks", request_method="POST", renderer='json') +def create_task(request): +    """Create a task for one user.""" +    response = request.response +    response.headers.extend({'Content-Type': 'application/json'}) +    user = request.dbsession.query(User).filter_by(username=request.matchdict['username']).first() +    if user: +        due_date = request.json['due_date'] +        task = Task( +            name=request.json['name'], +            note=request.json['note'], +            due_date=datetime.strptime(due_date, INCOMING_DATE_FMT) if due_date else None, +            completed=bool(request.json['completed']), +            user_id=user.id +        ) +        request.dbsession.add(task) +        response.status_code = 201 +        return {'msg': 'posted'} +``` + +To start, note on the `@view_config` decorator that the only type of request you want this view to handle is a "POST" request. If you want to specify one type of request or one set of requests, provide either the string noting the request or a tuple/list of such strings. + +``` +response = request.response +response.headers.extend({'Content-Type': 'application/json'}) +# ...other code... +response.status_code = 201 +``` + +The HTTP response sent to the client is generated based on `request.response`. Normally, you wouldn't have to worry about that object. It would just produce a properly formatted HTTP response and you'd never know the difference. However, because you want to do something specific, like modify the response's status code and headers, you need to access that response and its methods/attributes. + +Unlike with Flask, you don't need to modify the view function parameter list just because you have variables in the route URL. Instead, any time a variable exists in the route URL, it is collected in the `matchdict` attribute of the `request`. It will exist there as a key-value pair, where the key will be the variable (e.g., "username") and the value will be whatever value was specified in the route (e.g., "bobdobson"). Regardless of what value is passed in through the route URL, it'll always show up as a string in the `matchdict`. So, when you want to pull the username from the incoming request URL, access it with `request.matchdict['username']` + +``` +user = request.dbsession.query(User).filter_by(username=request.matchdict['username']).first() +``` + +Querying for objects when using `sqlalchemy` directly differs significantly from what the `flask-sqlalchemy` package allows. Recall that when you used `flask-sqlalchemy` to build your models, the models inherited from the `db.Model` object. That `db` object already contained a connection to the database, so that connection could perform a straightforward operation like `User.query.all()`. + +That simple interface isn't present here, as the models in the Pyramid app inherit from `Base`, which is generated from `declarative_base()`, coming directly from the `sqlalchemy` package. It has no direct awareness of the database it'll be accessing. That awareness was attached to the `request` object via the app's central configuration as the `dbsession` attribute. 
Here's the code from above that did that: + +``` +config.add_request_method( +    lambda request: get_tm_session(session_factory, request.tm), +    'dbsession', +    reify=True +) +``` + +With all that said, whenever you want to query OR modify the database, you must work through `request.dbsession`. In the case, you want to query your "users" table for a specific user by using their username as their identifier. As such, the `User` object is provided as an argument to the `.query` method, then the normal SQLAlchemy operations are done from there. + +An interesting thing about this way of querying the database is that you can query for more than just one object or list of one type of object. You can query for: + + * Object attributes on their own, e.g., `request.dbsession.query(User.username)` would query for usernames + * Tuples of object attributes, e.g., `request.dbsession.query(User.username, User.date_joined)` + * Tuples of multiple objects, e.g., `request.dbsession.query(User, Task)` + + + +The data sent along with the incoming request will be found within the `request.json` dictionary. + +The last major difference is, because of all the machinations necessary to attach the committing of a session's activity to Pyramid's request-response cycle, you don't have to call `request.dbsession.commit()` at the end of your view. It's convenient, but there is one thing to be aware of moving forward. If instead of a new add to the database, you wanted to edit a pre-existing object in the database, you couldn't use `request.dbsession.commit()`. Pyramid will throw an error, saying something along the lines of "commit behavior is being handled by the transaction manager, so you can't call it on your own." And if you don't do something that resembles committing your changes, your changes won't stick. + +The solution here is to use `request.dbsession.flush()`. The job of `.flush()` is to signal to the database that some changes have been made and need to be included with the next commit. + +### Planning for the future + +At this point, you've set up most of the important parts of Pyramid, analogous to what you constructed with Flask in part one. There's much more that goes into an application, but much of the meat is handled here. Other view functions will follow similar formatting, and of course, there's always the question of security (which Pyramid has built in!). + +One of the major differences I see in the setup of a Pyramid application is that it has a much more intense configuration step than there is with Flask. I broke down those configuration steps to explain more about what's going on when a Pyramid application is constructed. However, it'd be disingenuous to act like I've known all of this since I started programming. My first experience with the Pyramid framework was with Pyramid 1.7 and its scaffolding system of `pcreate`, which builds out most of the necessary configuration, so all you need to do is think about the functionality you want to build. + +As of Pyramid 1.8, `pcreate` has been deprecated in favor of [cookiecutter][9], which effectively does the same thing. The difference is that it's maintained by someone else, and there are cookiecutter templates for more than just Pyramid projects. Now that we've gone through the components of a Pyramid project, I'd never endorse building a Pyramid project from scratch again when a cookiecutter template is available. Why do the hard work if you don't have to? 
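For anyone who hasn't used cookiecutter before, the mechanics are minimal. The following is only a sketch — `gh:SomeAccount/some-template` is a placeholder rather than a real template name, and you'd substitute whichever template you actually want to use:

```
# install the cookiecutter tool into your virtual environment
pip install cookiecutter

# generate a new project skeleton from a template; cookiecutter will
# prompt for the project name and any other values the template defines
cookiecutter gh:SomeAccount/some-template
```

From there you install the generated package and start filling in models and views, rather than wiring up all of the configuration by hand.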
In fact, the [pyramid-cookiecutter-alchemy][10] template would accomplish much of what I've written here (and a little bit more). It's actually similar to the `pcreate` scaffold I used when I first learned Pyramid. + +Learn more Python at [PyCon Cleveland 2018][11]. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/5/pyramid-framework + +作者:[Nicholas Hunt-Walker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nhuntwalker +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/18/4/flask +[2]: http://flask.pocoo.org/ +[3]: https://trypyramid.com/ +[4]: http://www.tornadoweb.org/en/stable/ +[5]: https://www.djangoproject.com/ +[6]: https://www.sqlalchemy.org/ +[7]: http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/api.html#api-reference +[8]: http://zodb.readthedocs.io/en/latest/transactions.html +[9]: https://cookiecutter.readthedocs.io/en/latest/ +[10]: https://github.com/Pylons/pyramid-cookiecutter-alchemy +[11]: https://us.pycon.org/2018/ diff --git a/sources/tech/20180518 How to Manage Fonts in Linux.md b/sources/tech/20180518 How to Manage Fonts in Linux.md deleted file mode 100644 index 0faca7fa17..0000000000 --- a/sources/tech/20180518 How to Manage Fonts in Linux.md +++ /dev/null @@ -1,145 +0,0 @@ -How to Manage Fonts in Linux -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_main.jpg?itok=qcJks7-c) - -Not only do I write technical documentation, I write novels. And because I’m comfortable with tools like GIMP, I also create my own book covers (and do graphic design for a few clients). That artistic endeavor depends upon a lot of pieces falling into place, including fonts. - -Although font rendering has come a long way over the past few years, it continues to be an issue in Linux. If you compare the look of the same fonts on Linux vs. macOS, the difference is stark. This is especially true when you’re staring at a screen all day. But even though the rendering of fonts has yet to find perfection in Linux, one thing that the open source platform does well is allow users to easily manage their fonts. From selecting, adding, scaling, and adjusting, you can work with fonts fairly easily in Linux. - -Here, I’ll share some of the tips I’ve depended on over the years to help extend my “font-ability” in Linux. These tips will especially help those who undertake artistic endeavors on the open source platform. Because there are so many desktop interfaces available for Linux (each of which deal with fonts in a different way), when a desktop environment becomes central to the management of fonts, I’ll be focusing primarily on GNOME and KDE. - -With that said, let’s get to work. - -### Adding new fonts - -For the longest time, I have been a collector of fonts. Some might say I have a bit of an obsession. And since my early days of using Linux, I’ve always used the same process for adding fonts to my desktops. There are two ways to do this: - - * Make the fonts available on a per-user basis. - - * Make the fonts available system-wide. - - - - -Because my desktops never have other users (besides myself), I only ever work with fonts on a per-user basis. However, I will show you how to do both. First, let’s see how to add fonts on a per-user basis. 
The first thing you must do is find fonts. Both True Type Fonts (TTF) and Open Type Fonts (OTF) can be added. I add fonts manually. Do this is, I create a new hidden directory in ~/ called ~/.fonts. This can be done with the command: -``` -mkdir ~/.fonts - -``` - -With that folder created, I then move all of my TTF and OTF files into the directory. That’s it. Every font you add into that directory will now be available for use to your installed apps. But remember, those fonts will only be available to that one user. - -If you want to make that collection of fonts available to all, here’s what you do: - - 1. Open up a terminal window. - - 2. Change into the directory housing all of your fonts. - - 3. Copy all of those fonts with the commands sudo cp *.ttf *.TTF /usr/share/fonts/truetype/ and sudo cp *.otf *.OTF /usr/share/fonts/opentype - - - - -The next time a user logs in, they’ll have access to all those glorious fonts. - -### GUI Font Managers - -There are a few ways to manage your fonts in Linux, via GUI. How it’s done will depend on your desktop environment. Let’s examine KDE first. With the KDE that ships with Kubuntu 18.04, you’ll find a Font Management tool pre-installed. Open that tool and you can easily add, remove, enable, and disable fonts (as well as get information about all of the installed fonts. This tool also makes it easy for you to add and remove fonts for personal and system-wide use. Let’s say you want to add a particular font for personal usage. To do this, download your font and then open up the Font Management tool. In this tool (Figure 1), click on Personal Fonts and then click the + Add button. - - -![adding fonts][2] - -Figure 1: Adding personal fonts in KDE. - -[Used with permission][3] - -Navigate to the location of your fonts, select them, and click Open. Your fonts will then be added to the Personal section and are immediately available for you to use (Figure 2). - - -![KDE Font Manager][5] - -Figure 2: Fonts added with the KDE Font Manager. - -[Used with permission][3] - -To do the same thing in GNOME requires the installation of an application. Open up either GNOME Software or Ubuntu Software (depending upon the distribution you’re using) and search for Font Manager. Select Font Manager and then click the Install button. Once the software is installed, launch it from the desktop menu. With the tool open, let’s install fonts on a per-user basis. Here’s how: - - 1. Select User from the left pane (Figure 3). - - 2. Click the + button at the top of the window. - - 3. Navigate to and select the downloaded fonts. - - 4. Click Open. - - - - -![Adding fonts ][7] - -Figure 3: Adding fonts in GNOME. - -[Used with permission][3] - -### Tweaking fonts - -There are three concepts you must first understand: - - * **Font Hinting:** The use of mathematical instructions to adjust the display of a font outline so that it lines up with a rasterized grid. - - * **Anti-aliasing:** The technique used to add greater realism to a digital image by smoothing jagged edges on curved lines and diagonals. - - * **Scaling factor:** **** A scalable unit that allows you to multiple the point size of a font. So if you’re font is 12pt and you have an scaling factor of 1, the font size will be 12pt. If your scaling factor is 2, the font size will be 24pt. - - - - -Let’s say you’ve installed your fonts, but they don’t look quite as good as you’d like. How do you tweak the appearance of fonts? In both the KDE and GNOME desktops, you can make a few adjustments. 
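Before reaching for the desktop tools, it's worth confirming the system actually sees the fonts you copied in. The commands below rely on fontconfig, which ships with virtually every Linux desktop; the font name in the second command is only an example, so substitute one of your own:

```
# rebuild the font cache so newly copied fonts are picked up
fc-cache -fv

# confirm an added font is registered (substitute your own font name)
fc-list | grep -i "roboto"
```

If your new font shows up in that list, any remaining ugliness is a rendering problem rather than an installation problem.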
One thing to consider with the tweaking of fonts is that taste is very much subjective. You might find yourself having to continually tweak until you get the fonts looking exactly how you like (dictated by your needs and particular taste). Let’s first look at KDE. - -Open up the System Settings tool and clock on Fonts. In this section, you can not only change various fonts, you can also enable and configure both anti-aliasing and enable font scaling factor (Figure 4). - - -![Configuring fonts][9] - -Figure 4: Configuring fonts in KDE. - -[Used with permission][3] - -To configure anti-aliasing, select Enabled from the drop-down and then click Configure. In the resulting window (Figure 5), you can configure an exclude range, sub-pixel rendering type, and hinting style. - -Once you’ve made your changes, click Apply. Restart any running applications and the new settings will take effect. - -To do this in GNOME, you have to have either use Font Manager or GNOME Tweaks installed. For this, GNOME Tweaks is the better tool. If you open the GNOME Dash and cannot find Tweaks installed, open GNOME Software (or Ubuntu Software), and install GNOME Tweaks. Once installed, open it and click on the Fonts section. Here you can configure hinting, anti-aliasing, and scaling factor (Figure 6). - -![Tweaking fonts][11] - -Figure 6: Tweaking fonts in GNOME. - -[Used with permission][3] - -### Make your fonts beautiful - -And that’s the gist of making your fonts look as beautiful as possible in Linux. You may not see a macOS-like rendering of fonts, but you can certainly improve the look. Finally, the fonts you choose will have a large impact on how things look. Make sure you’re installing clean, well-designed fonts; otherwise, you’re fighting a losing battle. - -Learn more about Linux through the free ["Introduction to Linux" ][12] course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/5/how-manage-fonts-linux - -作者:[Jack Wallen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[2]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_1.jpg?itok=7yTTe6o3 (adding fonts) -[3]:https://www.linux.com/licenses/category/used-permission -[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_2.jpg?itok=_g0dyVYq (KDE Font Manager) -[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_3.jpg?itok=8o884QKs (Adding fonts ) -[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_4.jpg?itok=QJpPzFED (Configuring fonts) -[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_6.jpg?itok=4cQeIW9C (Tweaking fonts) -[12]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md b/sources/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md deleted file mode 100644 index 3ca8fba288..0000000000 --- a/sources/tech/20180518 What-s a hero without a villain- How to add one to your Python game.md +++ /dev/null @@ -1,275 +0,0 @@ -What's a hero without a villain? 
How to add one to your Python game -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/game-dogs-chess-play-lead.png?itok=NAuhav4Z) - -In the previous articles in this series (see [part 1][1], [part 2][2], [part 3][3], and [part 4][4]), you learned how to use Pygame and Python to spawn a playable character in an as-yet empty video game world. But, what's a hero without a villain? - -It would make for a pretty boring game if you had no enemies, so in this article, you'll add an enemy to your game and construct a framework for building levels. - -It might seem strange to jump ahead to enemies when there's still more to be done to make the player sprite fully functional, but you've learned a lot already, and creating villains is very similar to creating a player sprite. So relax, use the knowledge you already have, and see what it takes to stir up some trouble. - -For this exercise, you can download some pre-built assets from [Open Game Art][5]. Here are some of the assets I use: - - -+ Inca tileset -+ Some invaders -+ Sprites, characters, objects, and effects - - -### Creating the enemy sprite - -Yes, whether you realize it or not, you basically already know how to implement enemies. The process is very similar to creating a player sprite: - - 1. Make a class so enemies can spawn. - 2. Create an `update` function so enemies can detect collisions. - 3. Create a `move` function so your enemy can roam around. - - - -Start with the class. Conceptually, it's mostly the same as your Player class. You set an image or series of images, and you set the sprite's starting position. - -Before continuing, make sure you have a graphic for your enemy, even if it's just a temporary one. Place the graphic in your game project's `images` directory (the same directory where you placed your player image). - -A game looks a lot better if everything alive is animated. Animating an enemy sprite is done the same way as animating a player sprite. For now, though, keep it simple, and use a non-animated sprite. - -At the top of the `objects` section of your code, create a class called Enemy with this code: -``` -class Enemy(pygame.sprite.Sprite): - -    ''' - -    Spawn an enemy - -    ''' - -    def __init__(self,x,y,img): - -        pygame.sprite.Sprite.__init__(self) - -        self.image = pygame.image.load(os.path.join('images',img)) - -        self.image.convert_alpha() - -        self.image.set_colorkey(ALPHA) - -        self.rect = self.image.get_rect() - -        self.rect.x = x - -        self.rect.y = y - -``` - -If you want to animate your enemy, do it the [same way][4] you animated your player. - -### Spawning an enemy - -You can make the class useful for spawning more than just one enemy by allowing yourself to tell the class which image to use for the sprite and where in the world the sprite should appear. This means you can use this same enemy class to generate any number of enemy sprites anywhere in the game world. All you have to do is make a call to the class, and tell it which image to use and the X and Y coordinates of your desired spawn point. - -Again, this is similar in principle to spawning a player sprite. In the `setup` section of your script, add this code: -``` -enemy   = Enemy(20,200,'yeti.png')# spawn enemy - -enemy_list = pygame.sprite.Group()   # create enemy group - -enemy_list.add(enemy)                # add enemy to group - -``` - -In that sample code, `20` is the X position and `200` is the Y position. 
You might need to adjust these numbers, depending on how big your enemy sprite is, but try to get it to spawn in a place so that you can reach it with your player sprite. `Yeti.png` is the image used for the enemy. - -Next, draw all enemies in the enemy group to the screen. Right now, you have only one enemy, but you can add more later if you want. As long as you add an enemy to the enemies group, it will be drawn to the screen during the main loop. The middle line is the new line you need to add: -``` -    player_list.draw(world) - -    enemy_list.draw(world)  # refresh enemies - -    pygame.display.flip() - -``` - -Launch your game. Your enemy appears in the game world at whatever X and Y coordinate you chose. - -### Level one - -Your game is in its infancy, but you will probably want to add another level. It's important to plan ahead when you program so your game can grow as you learn more about programming. Even though you don't even have one complete level yet, you should code as if you plan on having many levels. - -Think about what a "level" is. How do you know you are at a certain level in a game? - -You can think of a level as a collection of items. In a platformer, such as the one you are building here, a level consists of a specific arrangement of platforms, placement of enemies and loot, and so on. You can build a class that builds a level around your player. Eventually, when you create more than one level, you can use this class to generate the next level when your player reaches a specific goal. - -Move the code you wrote to create an enemy and its group into a new function that will be called along with each new level. It requires some modification so that each time you create a new level, you can create several enemies: -``` -class Level(): - -    def bad(lvl,eloc): - -        if lvl == 1: - -            enemy = Enemy(eloc[0],eloc[1],'yeti.png') # spawn enemy - -            enemy_list = pygame.sprite.Group() # create enemy group - -            enemy_list.add(enemy)              # add enemy to group - -        if lvl == 2: - -            print("Level " + str(lvl) ) - - - -        return enemy_list - -``` - -The `return` statement ensures that when you use the `Level.bad` function, you're left with an `enemy_list` containing each enemy you defined. - -Since you are creating enemies as part of each level now, your `setup` section needs to change, too. Instead of creating an enemy, you must define where the enemy will spawn and what level it belongs to. -``` -eloc = [] - -eloc = [200,20] - -enemy_list = Level.bad( 1, eloc ) - -``` - -Run the game again to confirm your level is generating correctly. You should see your player, as usual, and the enemy you added in this chapter. - -### Hitting the enemy - -An enemy isn't much of an enemy if it has no effect on the player. It's common for enemies to cause damage when a player collides with them. - -Since you probably want to track the player's health, the collision check happens in the Player class rather than in the Enemy class. You can track the enemy's health, too, if you want. The logic and code are pretty much the same, but, for now, just track the player's health. - -To track player health, you must first establish a variable for the player's health. 
The first line in this code sample is for context, so add the second line to your Player class: -``` -        self.frame  = 0 - -        self.health = 10 - -``` - -In the `update` function of your Player class, add this code block: -``` -        hit_list = pygame.sprite.spritecollide(self, enemy_list, False) - -        for enemy in hit_list: - -            self.health -= 1 - -            print(self.health) - -``` - -This code establishes a collision detector using the Pygame function `sprite.spritecollide`, called `enemy_hit`. This collision detector sends out a signal any time the hitbox of its parent sprite (the player sprite, where this detector has been created) touches the hitbox of any sprite in `enemy_list`. The `for` loop is triggered when such a signal is received and deducts a point from the player's health. - -Since this code appears in the `update` function of your player class and `update` is called in your main loop, Pygame checks for this collision once every clock tick. - -### Moving the enemy - -An enemy that stands still is useful if you want, for instance, spikes or traps that can harm your player, but the game is more of a challenge if the enemies move around a little. - -Unlike a player sprite, the enemy sprite is not controlled by the user. Its movements must be automated. - -Eventually, your game world will scroll, so how do you get an enemy to move back and forth within the game world when the game world itself is moving? - -You tell your enemy sprite to take, for example, 10 paces to the right, then 10 paces to the left. An enemy sprite can't count, so you have to create a variable to keep track of how many paces your enemy has moved and program your enemy to move either right or left depending on the value of your counting variable. - -First, create the counter variable in your Enemy class. Add the last line in this code sample: -``` -        self.rect = self.image.get_rect() - -        self.rect.x = x - -        self.rect.y = y - -        self.counter = 0 # counter variable - -``` - -Next, create a `move` function in your Enemy class. Use an if-else loop to create what is called an infinite loop: - - * Move right if the counter is on any number from 0 to 100. - * Move left if the counter is on any number from 100 to 200. - * Reset the counter back to 0 if the counter is greater than 200. - - - -An infinite loop has no end; it loops forever because nothing in the loop is ever untrue. The counter, in this case, is always either between 0 and 100 or 100 and 200, so the enemy sprite walks right to left and right to left forever. - -The actual numbers you use for how far the enemy will move in either direction depending on your screen size, and possibly, eventually, the size of the platform your enemy is walking on. Start small and work your way up as you get used to the results. Try this first: -``` -    def move(self): - -        ''' - -        enemy movement - -        ''' - -        distance = 80 - -        speed = 8 - - - -        if self.counter >= 0 and self.counter <= distance: - -            self.rect.x += speed - -        elif self.counter >= distance and self.counter <= distance*2: - -            self.rect.x -= speed - -        else: - -            self.counter = 0 - - - -        self.counter += 1 - -``` - -You can adjust the distance and speed as needed. - -Will this code work if you launch your game now? - -Of course not, and you probably know why. You must call the `move` function in your main loop. 
The first line in this sample code is for context, so add the last two lines: -``` -    enemy_list.draw(world) #refresh enemy - -    for e in enemy_list: - -        e.move() - -``` - -Launch your game and see what happens when you hit your enemy. You might have to adjust where the sprites spawn so that your player and your enemy sprite can collide. When they do collide, look in the console of [IDLE][6] or [Ninja-IDE][7] to see the health points being deducted. - -![](https://opensource.com/sites/default/files/styles/panopoly_image_original/public/u128651/yeti.png?itok=4_GsDGor) - -You may notice that health is deducted for every moment your player and enemy are touching. That's a problem, but it's a problem you'll solve later, after you've had more practice with Python. - -For now, try adding some more enemies. Remember to add each enemy to the `enemy_list`. As an exercise, see if you can think of how you can change how far different enemy sprites move. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/pygame-enemy - -作者:[Seth Kenlon][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[1]:https://opensource.com/article/17/10/python-101 -[2]:https://opensource.com/article/17/12/game-framework-python -[3]:https://opensource.com/article/17/12/game-python-add-a-player -[4]:https://opensource.com/article/17/12/game-python-moving-player -[5]:https://opengameart.org -[6]:https://docs.python.org/3/library/idle.html -[7]:http://ninja-ide.org/ diff --git a/sources/tech/20180523 How to dual-boot Linux and Windows.md b/sources/tech/20180523 How to dual-boot Linux and Windows.md deleted file mode 100644 index 372097c866..0000000000 --- a/sources/tech/20180523 How to dual-boot Linux and Windows.md +++ /dev/null @@ -1,224 +0,0 @@ -translating by Auk7F7 -How to dual-boot Linux and Windows -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q) - -Even though Linux is a great operating system with widespread hardware and software support, the reality is that sometimes you have to use Windows, perhaps due to key apps that won't run under Linux. Thankfully, dual-booting Windows and Linux is very straightforward—and I'll show you how to set it up, with Windows 10 and Ubuntu 18.04, in this article. - -Before you get started, make sure you've backed up your computer. Although the dual-boot setup process is not very involved, accidents can still happen. So take the time to back up your important files in case chaos theory comes into play. In addition to backing up your files, consider taking an image backup of the disk as well, though that's not required and can be a more advanced process. - -### Prerequisites - -To get started, you will need the following five items: - -#### 1\. Two USB flash drives (or DVD-Rs) - -I recommend installing Windows and Ubuntu via flash drives since they're faster than DVDs. It probably goes without saying, but creating bootable media erases everything on the flash drive. Therefore, make sure the flash drives are empty or contain data you don't care about losing. - -If your machine doesn't support booting from USB, you can create DVD media instead. 
Unfortunately, because no two computers seem to have the same DVD-burning software, I can't walk you through that process. However, if your DVD-burning application has an option to burn from an ISO image, that's the option you need. - -#### 2\. A Windows 10 license - -If Windows 10 came with your PC, the license will be built into the computer, so you don't need to worry about entering it during installation. If you bought the retail edition, you should have a product key, which you will need to enter during the installation process. - -#### 3\. Windows 10 Media Creation Tool - -Download and launch the Windows 10 [Media Creation Tool][1]. Once you launch the tool, it will walk you through the steps required to create the Windows media on a USB or DVD-R. Note: Even if you already have Windows 10 installed, it's a good idea to create bootable media anyway, just in case something goes wrong and you need to reinstall it. - -#### 4\. Ubuntu 18.04 installation media - -Download the [Ubuntu 18.04][2] ISO image. - -#### 5\. Etcher software (for making a bootable Ubuntu USB drive) - -For creating bootable media for any Linux distribution, I recommend [Etcher][3]. Etcher works on all three major operating systems (Linux, MacOS, and Windows) and is careful not to let you overwrite your current operating system partition. - -Once you have downloaded and launched Etcher, click Select image, and point it to the Ubuntu ISO you downloaded in step 4. Next, click Select drive to choose your flash drive, and click Flash! to start the process of turning a flash drive into an Ubuntu installer. (If you're using a DVD-R, use your computer's DVD-burning software instead.) - -### Install Windows and Ubuntu - -You should be ready to begin. At this point, you should have accomplished the following: - - * Backed up your important files - * Created Windows installation media - * Created Ubuntu installation media - - - -There are two ways of going about the installation. First, if you already have Windows 10 installed, you can have the Ubuntu installer resize the partition, and the installation will proceed in the empty space. Or, if you haven't installed Windows 10, install it on a smaller partition you can set up during the installation process. (I'll describe how to do that below.) The second way is preferred and less error-prone. There's a good chance you won't have any issues either way, but installing Windows manually and giving it a smaller partition, then installing Ubuntu, is the easiest way to go. - -If you already have Windows 10 on your computer, skip the following Windows installation instructions and proceed to Installing Ubuntu. - -#### Installing Windows - -Insert the Windows installation media you created into your computer and boot from it. How you do this depends on your computer, but most have a key you can press to initiate the boot menu. On a Dell PC for example, that key is F12. If the flash drive doesn't show up as an option, you may need to restart the computer. Sometimes it will show up only if you've inserted the media before turning on the computer. If you see a message like, "press any key to boot from the installation media," press a key. You should see the following screen. Select your language and keyboard style and click Next. - -![Windows setup][5] - -Click on Install now to start the Windows installer. - -On the next screen, it will ask for your product key. If you don't have one because Windows 10 came with your PC, select "I don't have a product key." 
It should automatically activate after the installation once it catches up with updates. If you do have a product key, type that in and click Next. - - -![Enter product key][7] - - -Select which version of Windows you want to install. If you have a retail copy, the label will tell you what version you have. Otherwise, it is typically located with the documentation that came with your computer. In most cases, it's going to be either Windows 10 Home or Windows 10 Pro. Most PCs that come with the Home edition have a label that simply reads "Windows 10," while Pro is clearly marked. - - -![Select Windows version][10] - - -Accept the license agreement by checking the box, then click Next. - - -![Accept license terms][12] - - -After accepting the agreement, you have two installation options available. Choose the second option, Custom: Install Windows only (advanced). - - -![Select type of Windows installation][14] - - -The next screen should show your current hard disk configuration. - - -![Hard drive configuration][16] - - -Your results will probably look different than mine. I have never used this hard disk before, so it's completely unallocated. You will probably see one or more partitions for your current operating system. Highlight each partition and remove it. - -At this point, your screen will show your entire disk as unallocated. To continue, create a new partition. - - -![Create a new partition][18] - - -Here you can see that I divided the drive in half (or close enough) by creating a partition of 81,920MB (which is close to half of 160GB). Give Windows at least 40GB, preferably 64GB or more. Leave the rest of the drive unallocated, as that's where you'll install Ubuntu later. - -Your results will look similar to this: - - -![Leaving a partition with unallocated space][20] - - -Confirm the partitioning looks good to you and click Next. Windows will begin installing. - - -![Installing Windows][22] - - -If your computer successfully boots into Windows, you're all set to move on to the next step. - -![Windows desktop][24] - - -#### Installing Ubuntu - -Whether it was already there or you worked through the steps above, at this point you should have Windows installed. Now use the Ubuntu installation media you created earlier to boot into Ubuntu. Go ahead and insert the media and boot your computer from it. Again, the exact sequence of keys to access the boot menu varies from one computer to another, so check your documentation if you're not sure. If all goes well, you see the following screen once the media finishes loading: - - -![Ubuntu installation welcome screen][26] - - -Here, you can select between Try Ubuntu or Install Ubuntu. Don't install just yet; instead, click Try Ubuntu. After it finishes loading, you should see the Ubuntu desktop. - - -![Ubuntu desktop][28] - -By clicking Try Ubuntu, you have opted to try out Ubuntu before you install it. Here, in Live mode, you can play around with Ubuntu and make sure everything works before you commit to the installation. Ubuntu works with most PC hardware, but it's always better to test it out beforehand. Make sure you can access the internet and get audio and video playback. Going to YouTube and playing a video is a good way of doing all of that at once. If you need to connect to a wireless network, click on the networking icon at the top-right of the screen. There, you can find a list of wireless networks and connect to yours. 
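While you're still in the live session, a quick sanity check from a terminal can save headaches later. Neither step is required, and the output will vary with your hardware, but both commands are standard on the Ubuntu live image:

```
# this directory exists only when the live session booted in UEFI mode
ls /sys/firmware/efi

# list the disks and partitions the installer will see,
# including the Windows partitions created earlier
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT
```

If Windows was installed in UEFI mode but `/sys/firmware/efi` is missing (or the other way around), restart and boot the Ubuntu media in the matching mode so both systems end up sharing the same boot method.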
- -Once you're ready to go, double-click on the Install Ubuntu 18.04 LTS icon on the desktop to launch the installer. - -Choose the language you want to use for the installation process, then click Continue. - - -![Select language in Ubuntu][30] - - -Next, choose the keyboard layout. Once you've made your selection, click Continue. - - -![Select keyboard in Ubuntu][32] - -You have a few options on the screen below. One, you can choose a Normal or a Minimal installation. For most people, the Normal installation is ideal. Advanced users may want to do a Minimal install instead, which has fewer software applications installed by default. In addition, you can choose to download updates and whether or not to include third-party software and drivers. I recommend checking both of those boxes. When done, click Continue. - - -![Choose Ubuntu installation options][34] - -The next screen asks whether you want to erase the disk or set up a dual-boot. Since you're dual-booting, choose Install Ubuntu alongside Windows 10. Click Install Now. - - -![install Ubuntu alongside Windows][36] - - -The following screen may appear. If you installed Windows from scratch and left unallocated space on the disk, Ubuntu will automatically set itself up in the empty space, so you won't see this screen. If you already had Windows 10 installed and it's taking up the entire drive, this screen will appear and give you an option to select a disk at the top. If you have just one disk, you can choose how much space to steal from Windows and apply to Ubuntu. You can drag the vertical line in the middle left and right with your mouse to take space away from one and gives it to the other. Adjust this exactly the way you want it, then click Install Now. - - -![Allocate drive space][38] - - -You should see a confirmation screen indicating what Ubuntu plans on doing. If everything looks right, click Continue. - -Ubuntu is now installing in the background. You still have some configuration to do, though. While Ubuntu tries its best to figure out your location, you can click on the map to narrow it down to ensure your time zone and other things are set correctly. - -Next, fill in the user account information: your name, computer name, username, and password. Click Continue when you're done. - -There you have it! The installation is complete. Go ahead and reboot the PC. - -If all went according to plan, you should see a screen similar to this when your computer restarts. Choose Ubuntu or Windows 10; the other options are for troubleshooting, so I won't go into them. - -Try booting into both Ubuntu and Windows to test them out and make sure everything works as expected. If it does, you now have both Windows and Ubuntu installed on your computer. 
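One last tip before wrapping up: if the boot menu ever shows only Ubuntu, GRUB can rescan the disks and rebuild its menu. These are standard commands on Ubuntu 18.04, run from the Ubuntu side:

```
# check whether GRUB's probe can see the Windows installation
sudo os-prober

# regenerate the GRUB menu so a detected Windows entry is added
sudo update-grub
```

After a reboot, both entries should appear again.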
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/dual-boot-linux - -作者:[Jay LaCroix][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jlacroix -[1]:https://www.microsoft.com/en-us/software-download/windows10 -[2]:https://www.ubuntu.com/download/desktop -[3]:http://www.etcher.io -[4]:/file/397066 -[5]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_01.png (Windows setup) -[6]:/file/397076 -[7]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_03.png (Enter product key) -[8]:data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== (Click and drag to move) -[9]:/file/397081 -[10]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_04.png (Select Windows version) -[11]:/file/397086 -[12]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_05.png (Accept license terms) -[13]:/file/397091 -[14]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_06.png (Select type of Windows installation) -[15]:/file/397096 -[16]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_07.png (Hard drive configuration) -[17]:/file/397101 -[18]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_08.png (Create a new partition) -[19]:/file/397106 -[20]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_09.png (Leaving a partition with unallocated space) -[21]:/file/397111 -[22]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_10.png (Installing Windows) -[23]:/file/397116 -[24]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_11.png (Windows desktop) -[25]:/file/397121 -[26]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_12.png (Ubuntu installation welcome screen) -[27]:/file/397126 -[28]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_13.png (Ubuntu desktop) -[29]:/file/397131 -[30]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_15.png (Select language in Ubuntu) -[31]:/file/397136 -[32]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_16.png (Select keyboard in Ubuntu) -[33]:/file/397141 -[34]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_17.png (Choose Ubuntu installation options) -[35]:/file/397146 -[36]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_18.png (Install Ubuntu alongside Windows) -[37]:/file/397151 -[38]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_18b.png (Allocate drive space) diff --git a/sources/tech/20180525 How to Set Different Wallpaper for Each Monitor in Linux.md b/sources/tech/20180525 How to Set Different Wallpaper for Each Monitor in Linux.md deleted file mode 100644 index 386149400c..0000000000 --- a/sources/tech/20180525 How to Set Different Wallpaper for Each Monitor in Linux.md +++ /dev/null @@ -1,89 +0,0 @@ -How to Set Different Wallpaper for Each Monitor in Linux -====== -**Brief: If you want to display different wallpapers on multiple monitors on Ubuntu 18.04 or any other Linux distribution with GNOME, MATE or Budgie desktop environment, this nifty tool will help you achieve this.** - -Multi-monitor setup often leads to multiple issues on Linux but I am not going to discuss those 
issues in this article. I have rather a positive article on multiple monitor support on Linux. - -If you are using multiple monitor, perhaps you would like to setup a different wallpaper for each monitor. I am not sure about other Linux distributions and desktop environments, but Ubuntu with [GNOME desktop][1] doesn’t provide this functionality on its own. - -Fret not! In this quick tutorial, I’ll show you how to set a different wallpaper for each monitor on Linux distributions with GNOME desktop environment. - -### Setting up different wallpaper for each monitor on Ubuntu 18.04 and other Linux distributions - -![Different wallaper on each monitor in Ubuntu][2] - -I am going to use a nifty tool called [HydraPaper][3] for setting different backgrounds on different monitors. HydraPaper is a [GTK][4] based application to set different backgrounds for each monitor in [GNOME desktop environment][5]. - -It also supports on [MATE][6] and [Budgie][7] desktop environments. Which means Ubuntu MATE and [Ubuntu Budgie][8] users can also benefit from this application. - -#### Install HydraPaper on Linux using FlatPak - -HydraPaper can be installed easily using [FlatPak][9]. Ubuntu 18.04 already provides support for FlatPaks so all you need to do is to download the application file and double click on it to open it with the GNOME Software Center. - -You can refer to this article to learn [how to enable FlatPak support][10] on your distribution. Once you have the FlatPak support enabled, just download it from [FlatHub][11] and install it. - -[Download HydraPaper][12] - -#### Using HydraPaper for setting different background on different monitors - -Once installed, just look for HydraPaper in application menu and start the application. You’ll see images from your Pictures folder here because by default the application takes images from the Pictures folder of the user. - -You can add your own folder(s) where you keep your wallpapers. Do note that it doesn’t find images recursively. If you have nested folders, it will only show images from the top folder. - -![Setting up different wallpaper for each monitor on Linux][13] - -Using HydraPaper is absolutely simple. Just select the wallpapers for each monitor and click on the apply button at the top. You can easily identify external monitor(s) termed with HDMI. - -![Setting up different wallpaper for each monitor on Linux][14] - -You can also add selected wallpapers to ‘Favorites’ for quick access. Doing this will move the ‘favorite wallpapers’ from Wallpapers tab to Favorites tab. - -![Setting up different wallpaper for each monitor on Linux][15] - -You don’t need to start HydraPaper at each boot. Once you set different wallpaper for different monitor, the settings are saved and you’ll see your chosen wallpapers even after restart. This would be expected behavior of course but I thought I would mention the obvious. - -One big downside of HydraPaper is in the way it is designed to work. You see, HydraPaper combines your selected wallpapers into one single image and stretches it across the screens giving an impression of having different background on each display. And this becomes an issue when you remove the external display. - -For example, when I tried using my laptop without the external display, it showed me an background image like this. - -![Dual Monitor wallpaper HydraPaper][16] - -Quite obviously, this is not what I would expect. - -#### Did you like it? - -HydraPaper makes setting up different backgrounds on different monitors a painless task. 
It supports more than two monitors and monitors with different orientation. Simple interface with only the required features makes it an ideal application for those who always use dual monitors. - -How do you set different wallpaper for different monitor on Linux? Do you think HydraPaper is an application worth installing? - -Do share your views and if you find this article, please share it on various social media channels such as Twitter and [Reddit][17]. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/wallpaper-multi-monitor/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/abhishek/ -[1]:https://www.gnome.org/ -[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/multi-monitor-wallpaper-setup-800x450.jpeg -[3]:https://github.com/GabMus/HydraPaper -[4]:https://www.gtk.org/ -[5]:https://itsfoss.com/gnome-tricks-ubuntu/ -[6]:https://mate-desktop.org/ -[7]:https://budgie-desktop.org/home/ -[8]:https://itsfoss.com/ubuntu-budgie-18-review/ -[9]:https://flatpak.org -[10]:https://flatpak.org/setup/ -[11]:https://flathub.org -[12]:https://flathub.org/apps/details/org.gabmus.hydrapaper -[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/different-wallpaper-each-monitor-hydrapaper-2-800x631.jpeg -[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/different-wallpaper-each-monitor-hydrapaper-1.jpeg -[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/different-wallpaper-each-monitor-hydrapaper-3.jpeg -[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/hydra-paper-dual-monitor-800x450.jpeg -[17]:https://www.reddit.com/r/LinuxUsersGroup/ diff --git a/sources/tech/20180527 Streaming Australian TV Channels to a Raspberry Pi.md b/sources/tech/20180527 Streaming Australian TV Channels to a Raspberry Pi.md new file mode 100644 index 0000000000..ac756223f1 --- /dev/null +++ b/sources/tech/20180527 Streaming Australian TV Channels to a Raspberry Pi.md @@ -0,0 +1,209 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Streaming Australian TV Channels to a Raspberry Pi) +[#]: via: (https://blog.dxmtechsupport.com.au/streaming-australian-tv-channels-to-a-raspberry-pi/) +[#]: author: (James Mawson https://blog.dxmtechsupport.com.au/author/james-mawson/) + +Streaming Australian TV Channels to a Raspberry Pi +====== + +If you’re anything like me, it’s been years since you’ve even thought about hooking an antenna to your television. With so much of the good stuff available by streaming and download, it’s easy go a very long time without even thinking about free-to-air TV. + +But every now and again, something comes up – perhaps the cricket, news and current affairs shows, the FIFA World Cup – where the easiest thing would be to just chuck on the telly. + +When I first started tinkering with the Raspberry Pi as a gaming and media centre platform, the standard advice for watching broadcast TV always seemed to involve an antenna and a USB TV tuner. + +Which I guess is fine if you can be arsed. + +But what if you utterly can’t? + +What if you bitterly resent the idea of more clutter, more cords to add to the mess, more stuff to buy? 
What if every USB port is precious and jealously guarded for your keyboard, mouse, game controllers and removable storage? What if the wall port for your roof antenna is in a different room? + +That’s all a bit of a hassle for a thing you might use only a few times a year. + +In 2018, shouldn’t we just be able to stream free TV from the internet? + +It turns out that, yes, we can access legal and high quality TV streams from any Australian IP using [Freeview][1]. And thanks to a cool Kodi Add-on by [Matt Huisman][2], it’s now really easy to access this service from a Raspberry Pi. + +I’ve tested this to work on a Model 3 B+ running Retropie 4.4 and Kodi 17.6. But it should work similarly for other models and operating systems, so long as you’re using a reasonably up-to-date version of Kodi. + +Let’s jump right in. + +### If You Already Have Kodi Installed + +If you’re already using your Raspberry Pi to watch movies and TV shows, there’s a good chance you’ve already installed Kodi. + +Most Raspberry Pi operating systems intended for media centre use – such as OSMC or Xbian – come with Kodi installed by default. + +It’s fairly easy to get running on other Linux operating systems, and you might have already installed it there too. + +If your version of Kodi is more than a year or so old, it might be an idea to update it. The following instructions are written for the interface on Kodi 17 (Krypton). + +You can do that by typing the following commands at the command line: + +``` +sudo apt-get update +sudo apt-get upgrade +``` + +And now you can skip ahead to the next section. + +### Installing Kodi + +Installing Kodi on Retropie and other versions of Raspbian is fairly simple. Other Linux operating systems should be able to run it, perhaps with a bit of coaxing. + +You will need to be connected to the internet to install it. + +If you’re using something, such as Risc OS – you probably can’t install kodi. You will need to either swap in another SD card, or use a boot loader to boot into a media centre OS for your TV viewing. + +#### Installing Kodi on Retropie + +It’s really easy to install Kodi using the Retropie menu system. + +Here’s how: + + 1. Navigate to the Retropie main screen – that’s that horizontal menu where you can scroll left and right through all your different consoles + 2. Select “Retropie” + 3. Select “Retropie setup” + 4. Select “Manage Packages” + 5. Select “Manage Optional Packages” + 6. Scroll down and select “Kodi” + 7. Select “Install from Binary” + + + +This will take a minute or two to install. Once it’s installed, you can exit out of the Retropie Setup screen. When you next restart Retropie, you will see Kodi under the “Ports” section of the Retropie main screen. + +#### Installing Kodi on Raspbian + +If you’re running Raspbian without Retropie. But that’s okay, because it’s pretty easy to do it from the command line + +Just type: + +``` +sudo apt-get update +sudo apt-get install kodi +``` + +At this point you have a vanilla installation of Kodi. You can run it by typing: + +``` +kodi +``` + +It’s possible to delve a lot further into setting up Kodi from the command line. Check out [this guide][3] if you’re interested. + +If not, what you’ve just installed will work just fine. + +#### Installing Kodi on Other Versions of Linux + +If you’re using a different flavour of Linux, such as Pidora or Arch Linux ARM, then the above might or might not work – I’m not really sure, because I don’t really use these operating systems. 
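As a rough guide, most distributions carry Kodi in their standard repositories, so it usually comes down to the local package manager. On Arch Linux ARM, for instance, it would typically look something like the sketch below — though package names differ between distributions (the Raspberry Pi builds may use a platform-specific package such as `kodi-rbp`), so treat this as a starting point rather than gospel:

```
# refresh the package database and install Kodi from the standard repositories
sudo pacman -Syu kodi
```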
+ +If you get stuck, it might be worth a look at the [how-to guide][4] on the Kodi wiki. + +#### Dual Booting a Media Centre OS + +If your operating system of choice isn’t suitable for Kodi – or is just too confusing and difficult to figure out – it might be easiest to use a boot loader for multiple operating systems on the one SD card. + +You can set this up using an OS installer like [PINN][5]. + +Using PINN, you can install a media centre OS like [OSMC][6] to use Kodi – it will be installed with the operating system – and then your preferred OS for your other uses. + +It’s even possible to [move your existing OS over][7]. + +### Adding Australian TV Channels to Kodi + +With Kodi installed and running, you’ve got a pretty good media player for the files on your network and hard drive. + +But we need to install an add-on if we want to use it to chuck on the telly. This only takes a minute or so. + +#### Installing Matt Huisman’s Kodi Repository + +Ready? Let’s get started. + + 1. Open Kodi + 2. Click the cog icon at the top left to enter the settings + 3. Click “System Settings” + 4. Select “Add-ons” + 5. Make sure that “Unknown Sources” is enabled + 6. Right click anywhere on the screen to navigate back to the settings menu + 7. Click “File Manager” + 8. Click “Add Source” + 9. Double-click “Add Source” + 10. Select “” + 11. Type in exactly **** + 12. Select “OK” + 13. Click the text input underneath the label “Enter a name for this media source.” + 14. Type in exactly **MJH** + 15. Click “OK” + 16. Right click twice anywhere on the screen to navigate back to the main menu + 17. Select “Add-ons” + 18. Click “My Add-ons” + 19. Click “..” + 20. Click “Install from zip file” + 21. Click “MJH” + 22. Select “repository.matthuisman.zip” + + + +The repository is now installing. + +If you get stuck with any of this, here’s a video from Matt that starts by installing the repository. + + + +#### Installing the Freeview Australia Add-On + +We’re nearly there! Just a few more steps. + + 1. Right click anywhere on the screen a couple of times to navigate back to the main menu + 2. Select “Add-ons” + 3. Click “My add-ons” + 4. Click “..” + 5. Click “Install from repository” + 6. Click “MattHuisman.nz Repository” + 7. Click “Video add-ons” + 8. Click “AU Freeview” + 9. Click “Install” + + + +You can now have every free-to-air TV channel in your Add-ons main menu item. + +### Watching TV + +When you want to chuck the telly on, all you need to do is click “AU Freeview” in the Add-ons main menu item. This will give you a list of channels to browse through and select. + +If you want, you can also add individual channels to your Favourites menu by right clicking them and selecting “Add to favourites”. + +By default you will be watching Melbourne television. You can change the region by right clicking on “AU Freeview” and clicking “settings”. + +When you first tune in, it sometimes jumps a bit for a few seconds, but after that it’s pretty smooth. + +After spending a few minutes with this, you’ll quickly realise that free-to-air TV hasn’t improved in the years since you last looked at. Unfortunately, I don’t think there’s a fix for that. + +But at least it’s there now for when you want it. 
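A small footnote on that initial jumpiness: Kodi's stream buffering can be tuned through an `advancedsettings.xml` file in its userdata folder — usually `~/.kodi/userdata/` on a Raspberry Pi install. The cache elements below are the standard Kodi 17 ones, but the values are only a guess at a sensible starting point, so experiment to suit your connection and available memory:

```
# create the userdata folder if it doesn't exist yet, then write the cache settings
mkdir -p ~/.kodi/userdata
cat > ~/.kodi/userdata/advancedsettings.xml << 'EOF'
<advancedsettings>
  <cache>
    <memorysize>52428800</memorysize>
    <buffermode>1</buffermode>
    <readfactor>4</readfactor>
  </cache>
</advancedsettings>
EOF
```

Restart Kodi after saving the file so the new settings take effect.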
+ +-------------------------------------------------------------------------------- + +via: https://blog.dxmtechsupport.com.au/streaming-australian-tv-channels-to-a-raspberry-pi/ + +作者:[James Mawson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.dxmtechsupport.com.au/author/james-mawson/ +[b]: https://github.com/lujun9972 +[1]: http://www.freeview.com.au/ +[2]: https://www.matthuisman.nz/ +[3]: https://www.raspberrypi.org/forums/viewtopic.php?t=192499 +[4]: https://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux +[5]: https://github.com/procount/pinn +[6]: https://osmc.tv/ +[7]: https://github.com/procount/pinn/wiki/How-to-Create-a-Multi-Boot-SD-card-out-of-2-existing-OSes-using-PINN diff --git a/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md b/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md deleted file mode 100644 index d04ee6d742..0000000000 --- a/sources/tech/20180529 Manage your workstation with Ansible- Configure desktop settings.md +++ /dev/null @@ -1,196 +0,0 @@ -Manage your workstation with Ansible: Configure desktop settings -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ) - -In the [first article][1] of this series on using Ansible to configure a workstation, we set up a repository and configured a few basic things. In the [second part][2], we automated Ansible to apply settings automatically when changes are made to our repository. In this third (and final) article, we'll use Ansible to configure GNOME desktop settings. - -This configuration will work only on newer distributions (such as Ubuntu 18.04, which I'll use in my examples). Older versions of Ubuntu will not work, as they ship with a version of `python-psutils` that is too old for Ansible's `dconf` module to work properly. If you're using a newer version of your Linux distribution, you should have no issues. - -Before you begin, make sure you've worked through parts one and two of this series, as part three builds upon that groundwork. If you haven't already, download the GitHub repository you've been using in those first two articles. We'll add a few more features to it. - -### Set a wallpaper and lock screen - -First, we'll create a taskbook to hold our GNOME settings. In the root of the repository, you should have a file named `local.yml`. Add the following line to it: -``` -- include: tasks/gnome.yml - -``` - -The entire file should now look like this: -``` -- hosts: localhost - -  become: true - -  pre_tasks: - -    - name: update repositories - -      apt: update_cache=yes - -      changed_when: False - - - -  tasks: - -    - include: tasks/users.yml - -    - include: tasks/cron.yml - -    - include: tasks/packages.yml - -    - include: tasks/gnome.yml - -``` - -Basically, this added a reference to a file named `gnome.yml` that will be stored in the `tasks` directory inside the repository. We haven't created this file yet, so let's do that now. 
Create a `gnome.yml` file in the `tasks` directory, and place the following content inside:
```
- name: Install python-psutil package
  apt: name=python-psutil

- name: Copy wallpaper file
  copy: src=files/wallpaper.jpg dest=/home/jay/.wallpaper.jpg owner=jay group=jay mode=600

- name: Set GNOME Wallpaper
  become_user: jay
  dconf: key="/org/gnome/desktop/background/picture-uri" value="'file:///home/jay/.wallpaper.jpg'"
```

Note that this code refers to my username (`jay`) several times, so make sure to replace every occurrence of `jay` with the username you use on your machine. Also, if you're not using Ubuntu 18.04 (as I am), you'll have to change the `apt` line to match the package manager for your chosen distribution and to confirm the name of the `python-psutil` package for your distribution, as it may be different.

In the example tasks, I referred to a file named `wallpaper.jpg` inside the `files` directory. This file must exist or the Ansible configuration will fail. Inside the `tasks` directory, create a subdirectory named `files`. Find a wallpaper image you like, name it `wallpaper.jpg`, and place it inside the `files` directory. If the file is a PNG image instead of a JPG, change the file extension in both the code and in the repository. If you're not feeling creative, I have an example wallpaper file in the [GitHub repository][3] for this article series that you can use.

Once you've made all these changes, commit everything to your GitHub repository, and push those changes. To recap, you should've completed the following:

  * Modified the `local.yml` file to refer to the `tasks/gnome.yml` playbook
  * Created the `tasks/gnome.yml` playbook with the content mentioned above
  * Created a `files` directory inside the `tasks` directory, with an image file named `wallpaper.jpg` (or whatever you chose to call it).

Once you've completed those steps and pushed your changes back to the repository, the configuration should be automatically applied during its next scheduled run. (You may recall that we automated this in the previous article.) If you're in a hurry, you can apply the configuration immediately with the following command:
```
sudo ansible-pull -U https://github.com//ansible.git
```

If everything ran correctly, you should see your new wallpaper.

Let's take a moment to go through what the new GNOME taskbook does. First, we added a play to install the `python-psutil` package. If we don't add this, we can't use the `dconf` module, since it requires this package to be installed before we can modify GNOME settings. Next, we used the `copy` module to copy the wallpaper file to our `home` directory, and we named the resulting file starting with a period to hide it. If you'd prefer not to have this file in the root of your `home` directory, you can always instruct this section to copy it somewhere else—it will still work as long as you refer to it at the correct place. In the next play, we used the `dconf` module to change GNOME settings.
In this case, we adjusted the `/org/gnome/desktop/background/picture-uri` key and set it equal to `file:///home/jay/.wallpaper.jpg`. Note the quotes in this section of the playbook—you must always use two single-quotes in `dconf` values, and you must also include double-quotes if the value is a string. - -Now, let's take our configuration a step further and apply a background to the lock screen. Here's the GNOME taskbook again, but with two additional plays added: -``` -- name: Install python-psutil package - -  apt: name=python-psutil - - - -- name: Copy wallpaper file - -  copy: src=files/wallpaper.jpg dest=/home/jay/.wallpaper.jpg owner=jay group=jay mode=600 - - - -- name: Set GNOME wallpaper - -  dconf: key="/org/gnome/desktop/background/picture-uri" value="'file:///home/jay/.wallpaper.jpg'" - - - -- name: Copy lockscreenfile - -  copy: src=files/lockscreen.jpg dest=/home/jay/.lockscreen.jpg owner=jay group=jay mode=600 - - - -- name: Set lock screen background - -  become_user: jay - -  dconf: key="/org/gnome/desktop/screensaver/picture-uri" value="'file:///home/jay/.lockscreen.jpg'" - -``` - -As you can see, we're pretty much doing the same thing as we did with the wallpaper. We added two additional tasks, one to copy the lock screen image and place it in our `home` directory, and another to apply the setting to GNOME so it will be used. Again, be sure to change your username from `jay` and also name your desired lock screen picture `lockscreen.jpg` and copy it to the `files` directory. Once you've committed these changes to your repository, the new lock screen should be applied during the next scheduled Ansible run. - -### Apply a new desktop theme - -Setting the wallpaper and lock screen background is cool and all, but let's go even further and apply a desktop theme. First, let's add an instruction to our taskbook to install the package for the `arc` theme. Add the following code to the beginning of the GNOME taskbook: -``` -- name: Install arc theme - -  apt: name=arc-theme - -``` - -Then, at the bottom, add the following play: -``` -- name: Set GTK theme - -  become_user: jay - -  dconf: key="/org/gnome/desktop/interface/gtk-theme" value="'Arc'" - -``` - -Did you see GNOME's GTK theme change right before your eyes? We added a play to install the `arc-theme` package via the `apt` module and another play to apply this theme to GNOME. - -### Make other customizations - -Now that you've changed some GNOME settings, feel free to add additional customizations on your own. Any setting you can tweak in GNOME can be automated this way; setting the wallpapers and the theme were just a few examples. You may be wondering how to find the settings that you want to change. Here's a trick that works for me. - -First, take a snapshot of ALL your current `dconf` settings by running the following command on the machine you're managing: -``` -dconf dump / > before.txt - -``` - -This command exports all your current changes to a file named `before.txt`. Next, manually change the setting you want to automate, and capture the `dconf` settings again: -``` -dconf dump / > after.txt - -``` - -Now, you can use the `diff` command to see what's different between the two files: -``` -diff before.txt after.txt - -``` - -This should give you a list of keys that changed. 
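As a hypothetical example (the keys on your machine will differ), turning on the date display in the GNOME clock might surface a diff entry like this:

```
> [org/gnome/desktop/interface]
> clock-show-date=true
```

You could then turn that into a play in your GNOME taskbook, following the same pattern as the wallpaper and theme plays above:

```
- name: Show the date in the GNOME clock
  become_user: jay
  dconf: key="/org/gnome/desktop/interface/clock-show-date" value="true"
```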
While it's true that changing settings manually defeats the purpose of automation, what you're essentially doing is capturing the keys that change when you update your preferred settings, which then allows you to create Ansible plays to modify those settings so you'll never need to touch those settings again. If you ever need to restore your machine, your Ansible repository will take care of each and every one of your customizations. If you have multiple machines, or even a fleet of workstations, you only have to manually make the change once, and all other workstations will have the new settings applied and be completely in sync. - -### Wrapping up - -If you've followed along with this series, you should know how to set up Ansible to automate your workstation. These examples offer a useful baseline, and you can use the syntax and examples to make additional customizations. As you go along, you can continue to add new modifications, which will make your Ansible configuration grow over time. - -I've used Ansible in this way to automate everything, including my user account and password; configuration files for Vim, tmux, etc.; desktop packages; SSH settings; SSH keys; and basically everything I could ever want to customize. Using this series as a starting point will pave the way for you to completely automate your workstations. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/manage-your-workstation-ansible-part-3 - -作者:[Jay LaCroix][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jlacroix -[1]:https://opensource.com/article/18/3/manage-workstation-ansible -[2]:https://opensource.com/article/18/3/manage-your-workstation-configuration-ansible-part-2 -[3]:https://github.com/jlacroix82/ansible_article.git diff --git a/sources/tech/20180530 Introduction to the Pony programming language.md b/sources/tech/20180530 Introduction to the Pony programming language.md deleted file mode 100644 index 2f32ea6631..0000000000 --- a/sources/tech/20180530 Introduction to the Pony programming language.md +++ /dev/null @@ -1,94 +0,0 @@ -Introduction to the Pony programming language -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keys.jpg?itok=O4qaYCHK) - -At [Wallaroo Labs][1], where I'm the VP of engineering, we're are building a [high-performance, distributed stream processor][2] written in the [Pony][3] programming language. Most people haven't heard of Pony, but it has been an excellent choice for Wallaroo, and it might be an excellent choice for your next project, too. - -> "A programming language is just another tool. It's not about syntax. It's not about expressiveness. It's not about paradigms or models. It's about managing hard problems." —Sylvan Clebsch, creator of Pony - -I'm a contributor to the Pony project, but here I'll touch on why Pony is a good choice for applications like Wallaroo and share ways I've used Pony. If you are interested in a more in-depth look at why we use Pony to write Wallaroo, we have a [blog post][4] about that. - -### What is Pony? - -You can think of Pony as something like "Rust meets Erlang." Pony sports buzzworthy features. 
It is: - - * Type-safe - * Memory-safe - * Exception-safe - * Data-race-free - * Deadlock-free - - - -Additionally, it's compiled to efficient native code, and it's developed in the open and available under a two-clause BSD license. - -That's a lot of features, but here I'll focus on the few that were key to my company adopting Pony. - -### Why Pony? - -Writing fast, safe, efficient, highly concurrent programs is not easy with most of our existing tools. "Fast, efficient, and highly concurrent" is an achievable goal, but throw in "safe," and things get a lot harder. With Wallaroo, we wanted to accomplish all four, and Pony has made it easy to achieve. - -#### Highly concurrent - -Pony makes concurrency easy. Part of the way it does that is by providing an opinionated concurrency story. In Pony, all concurrency happens via the [actor model][5]. - -The actor model is most famous via the implementations in Erlang and Akka. The actor model has been around since the 1970s, and details vary widely from implementation to implementation. What doesn't vary is that all computation is executed by actors that communicate via asynchronous messaging. - -Think of the actor model this way: objects in object-oriented programming are state + synchronous methods and actors are state + asynchronous methods. - -When an actor receives a message, it executes a corresponding method. That method might operate on a state that is accessible by only that actor. The actor model allows us to use a mutable state in a concurrency-safe manner. Every actor is single-threaded. Two methods within an actor are never run concurrently. This means that, within a given actor, data updates cannot cause data races or other problems commonly associated with threads and mutable states. - -#### Fast and efficient - -Pony actors are scheduled with an efficient work-stealing scheduler. There's a single Pony scheduler per available CPU. The thread-per-core concurrency model is part of Pony's attempt to work in concert with the CPU to operate as efficiently as possible. The Pony runtime attempts to be as CPU-cache friendly as possible. The less your code thrashes the cache, the better it will run. Pony aims to help your code play nice with CPU caches. - -The Pony runtime also features per-actor heaps so that, during garbage collection, there's no "stop the world" garbage collection step. This means your program is always doing at least some work. As a result, Pony programs end up with very consistent performance and predictable latencies. - -#### Safe - -The Pony type system introduces a novel concept: reference capabilities, which make data safety part of the type system. The type of every variable in Pony includes information about how the data can be shared between actors. The Pony compiler uses the information to verify, at compile time, that your code is data-race- and deadlock-free. - -If this sounds a bit like Rust, it's because it is. Pony's reference capabilities and Rust's borrow checker both provide data safety; they just approach it in different ways and have different tradeoffs. - -### Is Pony right for you? - -Deciding whether to use a new programming language for a non-hobby project is hard. You must weigh the appropriateness of the tool against its immaturity compared to other solutions. So, what about Pony and you? - -Pony might be the right solution if you have a hard concurrency problem to solve. Concurrent applications are Pony's raison d'être. 
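To make the actor model discussion above a little more concrete, here is a minimal sketch of what an actor looks like in Pony. This is a toy example of my own, not code from Wallaroo:

```
// A counter actor: its state is reachable only from inside the actor,
// and its behaviors (declared with `be`) run one message at a time.
actor Counter
  let _out: OutStream
  var _count: U64 = 0

  new create(out: OutStream) =>
    _out = out

  be increment() =>
    _count = _count + 1

  be report() =>
    _out.print("count: " + _count.string())

actor Main
  new create(env: Env) =>
    // Behaviour calls are asynchronous messages; they return immediately.
    let counter = Counter(env.out)
    counter.increment()
    counter.increment()
    counter.report()
```

Because `increment()` and `report()` are behaviors, the runtime guarantees they never run concurrently within the same actor, so the mutable `_count` field needs no locks.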
If you can accomplish what you want in a single-threaded Python script, you probably don't need Pony. If you have a hard concurrency problem, you should consider Pony and its powerful data-race-free, concurrency-aware type system. - -You'll get a compiler that will prevent you from introducing many concurrency-related bugs and a runtime that will give you excellent performance characteristics. - -### Getting started with Pony - -If you're ready to get started with Pony, your first visit should be the [Learn section][6] of the Pony website. There you will find directions for installing the Pony compiler and resources for learning the language. - -If you like to contribute to the language you are using, we have some [beginner-friendly issues][7] waiting for you on GitHub. - -Also, I can't wait to start talking with you on [our IRC channel][8] and the [Pony mailing list][9]. - -To learn more about Pony, attend Sean Allen's talk, [Pony: How I learned to stop worrying and embrace an unproven technology][10], at the [20th OSCON][11], July 16-19, 2018, in Portland, Ore. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/pony - -作者:[Sean T Allen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/seantallen -[1]:http://www.wallaroolabs.com/ -[2]:https://github.com/wallaroolabs/wallaroo -[3]:https://www.ponylang.org/ -[4]:https://blog.wallaroolabs.com/2017/10/why-we-used-pony-to-write-wallaroo/ -[5]:https://en.wikipedia.org/wiki/Actor_model -[6]:https://www.ponylang.org/learn/ -[7]:https://github.com/ponylang/ponyc/issues?q=is%3Aissue+is%3Aopen+label%3A%22complexity%3A+beginner+friendly%22 -[8]:https://webchat.freenode.net/?channels=%23ponylang -[9]:https://pony.groups.io/g/user -[10]:https://conferences.oreilly.com/oscon/oscon-or/public/schedule/speaker/213590 -[11]:https://conferences.oreilly.com/oscon/oscon-or diff --git a/sources/tech/20180531 Qalculate- - The Best Calculator Application in The Entire Universe.md b/sources/tech/20180531 Qalculate- - The Best Calculator Application in The Entire Universe.md deleted file mode 100644 index 6cbea61308..0000000000 --- a/sources/tech/20180531 Qalculate- - The Best Calculator Application in The Entire Universe.md +++ /dev/null @@ -1,125 +0,0 @@ -Qalculate! – The Best Calculator Application in The Entire Universe -====== -I have been a GNU-Linux user and a [Debian][1] user for more than a decade. As I started using the desktop more and more, it seemed to me that apart from few web-based services most of my needs were being met with [desktop applications][2] within Debian itself. - -One of such applications was the need for me to calculate between different measurements of units. While there are and were many web-services which can do the same, I wanted something which could do all this and more on my desktop for both privacy reasons as well as not having to hunt for a web service for doing one thing or the other. My search ended when I found Qalculate!. - -### Qalculate! The most versatile calculator application - -![Qalculator is the best calculator app][3] - -This is what aptitude says about [Qalculate!][4] and I cannot put it in better terms: - -> Powerful and easy to use desktop calculator – GTK+ version -> -> Qalculate! 
is small and simple to use but with much power and versatility underneath. Features include customizable functions, units, arbitrary precision, plotting, and a graphical interface that uses a one-line fault-tolerant expression entry (although it supports optional traditional buttons). - -It also did have a KDE interface as well as in its previous avatar, but at least in Debian testing, it just shows only the GTK+ version which can be seen from the github [repo][5] as well. - -Needless to say that Qalculate! is available in Debian repository and hence can easily be installed [using apt command][6] or through software center in Debian based distributions like Ubuntu. It is also availale for Windows and macOS. - -#### Features of Qalculate! - -Now while it would be particularly long to go through the whole list of functionality it allows – allow me to list some of the functionality to be followed by a few screenshots of just a couple of functionalities that Qalculate! provides. The idea is basically to familiarize you with a couple of basic methods and then leave it up to you to enjoy exploring what all Qalculate! can do. - - * Algebra - * Calculus - * Combinatorics - * Complex_Numbers - * Data_Sets - * Date_&_Time - * Economics - * Exponents_&_Logarithms - * Geometry - * Logical - * Matrices_&_Vectors - * Miscellaneous - * Number_Theory - * Statistics - * Trigonometry - - - -#### Using Qalculate! - -Using Qalculate! is not complicated. You can even write in the simple natural language. However, I recommend [reading the manual][7] to utilize the full potential of Qalculate! - -![qalculate byte to gibibyte conversion ][8] - -![conversion from celcius degrees to fahreneit][9] - -#### qalc is the command line version of Qalculate! - -You can achieve the same results as Qalculate! with its command-line brethren qalc -``` -$ qalc 62499836 byte to gibibyte -62499836 * byte = approx. 0.058207508 gibibyte - -$ qalc 40 degree celsius to fahrenheit -(40 * degree) * celsius = 104 deg*oF - -``` - -I shared the command-line interface so that people who don’t like GUI interfaces and prefer command-line (CLI) or have headless nodes (no GUI) could also use qalculate, pretty common in server environments. - -If you want to use it in scripts, I guess libqalculate would be the way to go and seeing how qalculate-gtk, qalc depend on it seems it should be good enough. - -Just to share, you could also explore how to use plotting of series data but that and other uses will leave to you. Don’t forget to check the /usr/share/doc/qalculate/index.html to see all the different functionalities that Qalculate! has. - -Note:- Do note that though Debian prefers [gnuplot][10] to showcase the pretty graphs that can come out of it. - -#### Bonus Tip: You can thank the developer via command line in Debian - -If you use Debian and like any package, you can quickly thank the Debian Developer or maintainer maintaining the said package using: -``` -reportbug --kudos $PACKAGENAME - -``` - -Since I liked QaIculate!, I would like to give a big shout-out to the Debian developer and maintainer Vincent Legout for the fantastic work he has done. -``` -reportbug --kudos qalculate - -``` - -I would also suggest reading my detailed article on using reportbug tool for [bug reporting in Debian][11]. - -#### The opinion of a Polymer Chemist on Qalculate! - -Through my fellow author [Philip Prado][12], we contacted a Mr. Timothy Meyers, currently a student working in a polymer lab as a Polymer Chemist. - -His professional opinion on Qaclulate! 
is – - -> This looks like almost any scientist to use as any type of data calculations statistics could use this program issue would be do you know the commands and such to make it function -> -> I feel like there’s some Physics constants that are missing but off the top of my head I can’t think of what they are but I feel like there’s not very many [fluid dynamics][13] stuff in there and also some different like [light absorption][14] coefficients for different compounds but that’s just a chemist in me I don’t know if those are super necessary. [Free energy][15] might be one - -In the end, I just want to share this is a mere introduction to what Qalculate! can do and is limited by what you want to get done and your imagination. I hope you like Qalculate! - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/qalculate/ - -作者:[Shirish][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/shirish/ -[1]:https://www.debian.org/ -[2]:https://itsfoss.com/essential-linux-applications/ -[3]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/qalculate-app-featured-1-800x450.jpeg -[4]:https://qalculate.github.io/ -[5]:https://github.com/Qalculate -[6]:https://itsfoss.com/apt-command-guide/ -[7]:https://qalculate.github.io/manual/index.html -[8]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/qalculate-byte-conversion.png -[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/qalculate-gtk-weather-conversion.png -[10]:http://www.gnuplot.info/ -[11]:https://itsfoss.com/bug-report-debian/ -[12]:https://itsfoss.com/author/phillip/ -[13]:https://en.wikipedia.org/wiki/Fluid_dynamics -[14]:https://en.wikipedia.org/wiki/Absorption_(electromagnetic_radiation) -[15]:https://en.wikipedia.org/wiki/Gibbs_free_energy diff --git a/sources/tech/20180604 4 Firefox extensions worth checking out.md b/sources/tech/20180604 4 Firefox extensions worth checking out.md deleted file mode 100644 index 2091afe6fe..0000000000 --- a/sources/tech/20180604 4 Firefox extensions worth checking out.md +++ /dev/null @@ -1,109 +0,0 @@ -4 Firefox extensions worth checking out -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/firefox_blue_lead.jpg?itok=gYaubJUv) - -I've been a Firefox user since v2.0 came out about 12 years ago. There were times when it wasn't the best web browser out there, but still, I kept going back to it for one reason: My favorite extensions wouldn't work with anything else. - -Today, I like the current state of Firefox itself for being fast, customizable, and open source, but I also appreciate extensions for manifesting ideas the original developers never thought of: What if you want to browse without a mouse? What if you don't like staring at bright light coming out of the monitor at night? What about using a dedicated media player for YouTube and other video hosting websites for better performance and extended playback controls? And what if you need a more sophisticated way to disable trackers and speed up loading pages? 
- -Fortunately, there's an answer for each of these questions, and I'm going to give them to you in the form of my favorite extensions—all of which are free software or open source (i.e., distributed under the [GNU GPL][1], [MPL][2], or [Apache][3] license) and make an excellent browser even better. - -Although the terms add-on and extension have slightly different meanings, I'll use them interchangeably in this article. - -### Tridactyl - -![Tridactyl screenshot][5] - -Tridactyl's new tab page, showcasing link hinting. - -[Tridactyl][6] enables you to use your keyboard for most of your browsing activities. It's inspired by the now-defunct [Vimperator][7] and [Pentadactyl][8], which were inspired by the default keybindings of [Vim][9]. Since I'm already used to Vim and other command-line applications, I find features like being able to navigate with the keys `h/j/k/l`, interact with hyperlinks with `f/F`, and create custom keybindings and commands very convenient. - -Tridactyl's optional native messenger (for now, available only for GNU/Linux and Mac OSX), which was implemented recently, offers even more cool features to boot. With it, for example, you can hide some elements of the GUI of Firefox (à la Vimperator and Pentadactyl), open a link or the current page in an external program (I often use [mpv][10] and [youtube-dl][11] for videos) and edit the content of text areas with your favorite text editor by pressing `Ctrl-I` (or any key combination of your choice). - -Having said that, keep in mind that it's a relatively young project and may still be rough around the edges. On the other hand, its development is very active, and when you look past its childhood illnesses, it can be a pleasure to use. - -### Open With - -![Open With Screenshot][13] - -A context menu provided by Open With. I can open the current page with one of the external programs listed here. - -Speaking of interaction with external programs, sometimes it's nice to have the ability to do that with the mouse. That's where [Open With][14] comes in. - -Apart from the added context menu (shown in the screenshot), you can find your own defined commands by clicking on the extension's icon on the add-on bar. As its icon and the description on [its page on Mozilla Add-ons][14] suggest, it was primarily intended to work with other web browsers, but I can use it with mpv and youtube-dl with ease as well. - -Keyboard shortcuts are available here, too, but they're severely limited. There are no more than three different combinations that can be selected in a drop-down list in the extension's settings. In contrast, Tridactyl lets me assign commands to virtually anything that isn't blocked by Firefox. Open With is currently for the mouse, really. - -### Stylus - -![Stylus Screenshot][16] - -In this screenshot, I've just searched for and installed a dark theme for the site I'm currently on with Stylus. Even the popup has custom style (called Deepdark Stylus)! - -[Stylus][17] is a userstyle manager, which means that by writing custom CSS rules and loading them with Stylus, you can change the appearance of any webpage. If you don't know CSS, there are a plethora of userstyles made by others on websites such as [userstyles.org][18]. - -Now, you may be asking, "Isn't that exactly what [Stylish][19] does?" You would be correct! 
You see, Stylus is based on Stylish and provides additional improvements: It respects your privacy by not containing any telemetry, all development is done in the open (although Stylish is still actively developed, I haven't been able to find the source code for recent versions), and it supports [UserCSS][20], among other things. - -UserCSS is an interesting format, especially for developers. I've written several userstyles for various websites (mainly dark themes and tweaks for better readability), and while the internal editor of Stylus is excellent, I still prefer editing code with Neovim. For that, all I need to do is load a local file with its name ending with ".user.css" in Stylus, enable the option "Live Reload", and any changes will be applied as soon as I modify and save that file in Neovim. Remote UserCSS files are also supported, so whenever I push changes to GitHub or any git-based development platforms, they'll automatically become available for users. (I provide a link to the raw version of the file so that they can access it easily.) - -### uMatrix - -![uMatrix Screenshot][22] - -The user interface of uMatrix, showing the current rules for the currently visited webpage. - -Jeremy Garcia mentioned uBlock Origin in [his article][23] here on Opensource.com as an excellent blocker. I'd like to draw attention to another extension made by [gorhill][24]: uMatrix. - -[uMatrix][25] allows you to set blocking rules for certain requests on a webpage, which can be toggled by clicking on the add-on's popup (seen in the screenshot above). These requests are distinguished by the categories of scripts, requests made by scripts, cookies, CSS rules, images, media content, frames, and anything else labeled as "other" by uMatrix. You can set up global rules to, for instance, allow all requests by default and add only particular ones to the blacklist (the more convenient approach), or block everything by default and whitelist certain requests manually (the safer approach). If you've been using NoScript or RequestPolicy, you can [import][26] your whitelist rules from them, too. - -In addition, uMatrix supports [hosts files][27], which can be used to block requests from certain domains. These are not to be confused with the filter lists used by uBlock Origin, which use the same syntax as the filters set by Adblock Plus. By default, uMatrix blocks domains of servers known to distribute ads, trackers, and malware with the help of a few hosts files, and you can add more external sources if you want to. - -So which one shall you choose—uBlock Origin or uMatrix? Personally, I use both on my desktop PC and only uMatrix on my Android phone. There's some overlap between the two, [according to gorhill][28], but they have a different target userbase and goals. If all you want is an easy way to block trackers and ads, uBlock Origin is a better choice. On the other hand, if you want granular control over what a webpage can or can't do inside your browser, even if it takes some time to configure and it can prevent sites from functioning as intended, uMatrix is the way to go. - -### Conclusion - -Currently, these are my favorite extensions for Firefox. 
Tridactyl is for speeding up browsing navigation by relying on the keyboard and interacting with external programs; Open With is there if I need to open something in another program with the mouse; Stylus is the definitive userstyle manager, appealing to both users and developers alike; and uMatrix is essentially a firewall within Firefox for filtering out requests on unknown territories. - -Even though I almost exclusively discussed the benefits of these add-ons, no software is ever perfect. If you like any of them and think they can be improved in any way, I recommend that you go to their GitHub page and look for their contribution guides. Usually, developers of free and open source software welcome bug reports and pull requests. Telling your friends about them or saying thanks are also excellent ways to help the developers, especially if they work on their projects in their spare time. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/6/firefox-open-source-extensions - -作者:[Zsolt Szakács][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/zsolt -[1]:https://www.gnu.org/licenses/gpl-3.0.en.html -[2]:https://www.mozilla.org/en-US/MPL/ -[3]:https://www.apache.org/licenses/LICENSE-2.0 -[4]:/file/398411 -[5]:https://opensource.com/sites/default/files/uploads/tridactyl.png (Tridactyl's new tab page, showcasing link hinting) -[6]:https://addons.mozilla.org/en-US/firefox/addon/tridactyl-vim/ -[7]:https://github.com/vimperator/vimperator-labs -[8]:https://addons.mozilla.org/en-US/firefox/addon/pentadactyl/ -[9]:https://www.vim.org/ -[10]:https://mpv.io/ -[11]:https://rg3.github.io/youtube-dl/index.html -[12]:/file/398416 -[13]:https://opensource.com/sites/default/files/uploads/openwith.png (A context menu provided by Open With. I can open the current page with one of the external programs listed here.) -[14]:https://addons.mozilla.org/en-US/firefox/addon/open-with/ -[15]:/file/398421 -[16]:https://opensource.com/sites/default/files/uploads/stylus.png (In this screenshot, I've just searched for and installed a dark theme for the site I'm currently on with Stylus. Even the popup has custom style (called Deepdark Stylus)!) -[17]:https://addons.mozilla.org/en-US/firefox/addon/styl-us/ -[18]:https://userstyles.org/ -[19]:https://addons.mozilla.org/en-US/firefox/addon/stylish/ -[20]:https://github.com/openstyles/stylus/wiki/Usercss -[21]:/file/398426 -[22]:https://opensource.com/sites/default/files/uploads/umatrix.png (The user interface of uMatrix, showing the current rules for the currently visited webpage.) 
-[23]:https://opensource.com/article/18/5/firefox-extensions -[24]:https://addons.mozilla.org/en-US/firefox/user/gorhill/ -[25]:https://addons.mozilla.org/en-US/firefox/addon/umatrix -[26]:https://github.com/gorhill/uMatrix/wiki/FAQ -[27]:https://en.wikipedia.org/wiki/Hosts_(file) -[28]:https://github.com/gorhill/uMatrix/issues/32#issuecomment-61372436 diff --git a/sources/tech/20180604 BootISO - A Simple Bash Script To Securely Create A Bootable USB Device From ISO File.md b/sources/tech/20180604 BootISO - A Simple Bash Script To Securely Create A Bootable USB Device From ISO File.md deleted file mode 100644 index f716a164a5..0000000000 --- a/sources/tech/20180604 BootISO - A Simple Bash Script To Securely Create A Bootable USB Device From ISO File.md +++ /dev/null @@ -1,172 +0,0 @@ -BootISO – A Simple Bash Script To Securely Create A Bootable USB Device From ISO File -====== -Most of us (including me) very often create a bootable USB device from ISO file for OS installation. - -There are many applications freely available in Linux for this purpose. Even we wrote few of the utility in the past. - -Every one uses different application and each application has their own features and functionality. - -In that few of applications are belongs to CLI and few of them associated with GUI. - -Today we are going to discuss about similar kind of utility called BootISO. It’s a simple bash script, which allow users to create a USB device from ISO file. - -Many of the Linux admin uses dd command to create bootable ISO, which is one of the native and famous method but the same time, it’s one of the very dangerous command. So, be careful, when you performing any action with dd command. - -**Suggested Read :** -**(#)** [Etcher – Easy way to Create a bootable USB drive & SD card from an ISO image][1] -**(#)** [Create a bootable USB drive from an ISO image using dd command on Linux][2] - -### What IS BootISO - -[BootIOS][3] is a simple bash script, which allow users to securely create a bootable USB device from one ISO file. It’s written in bash. - -It doesn’t offer any GUI but in the same time it has vast of options, which allow newbies to create a bootable USB device in Linux without any issues. Since it’s a intelligent tool that automatically choose if any USB device is connected on the system. - -It will print the list when the system has more than one USB device connected. When you choose manually another hard disk manually instead of USB, this will safely exit without writing anything on it. - -This script will also check for dependencies and prompt user for installation, it works with all package managers such as apt-get, yum, dnf, pacman and zypper. - -### BootISO Features - - * It checks whether the selected ISO has the correct mime-type or not. If no then it exit. - * BootISO will exit automatically, if you selected any other disks (local hard drive) except USB drives. - * BootISO allow users to select the desired USB drives when you have more than one. - * BootISO prompts the user for confirmation before erasing and paritioning USB device. - * BootISO will handle any failure from a command properly and exit. - * BootISO will call a cleanup routine on exit with trap. - - - -### How To Install BootISO In Linux - -There are few ways are available to install BootISO in Linux but i would advise users to install using the following method. 
-``` -$ curl -L https://git.io/bootiso -O -$ chmod +x bootiso -$ sudo mv bootiso /usr/local/bin/ - -``` - -Once BootISO installed, run the following command to list the available USB devices. -``` -$ bootiso -l - -Listing USB drives available in your system: -NAME HOTPLUG SIZE STATE TYPE -sdd 1 32G running disk - -``` - -If you have only one USB device, then simple run the following command to create a bootable USB device from ISO file. -``` -$ bootiso /path/to/iso file - -$ bootiso /opt/iso_images/archlinux-2018.05.01-x86_64.iso -Granting root privileges for bootiso. -Listing USB drives available in your system: -NAME HOTPLUG SIZE STATE TYPE -sdd 1 32G running disk -Autoselecting `sdd' (only USB device candidate) -The selected device `/dev/sdd' is connected through USB. -Created ISO mount point at `/tmp/iso.vXo' -`bootiso' is about to wipe out the content of device `/dev/sdd'. -Are you sure you want to proceed? (y/n)>y -Erasing contents of /dev/sdd... -Creating FAT32 partition on `/dev/sdd1'... -Created USB device mount point at `/tmp/usb.0j5' -Copying files from ISO to USB device with `rsync' -Synchronizing writes on device `/dev/sdd' -`bootiso' took 250 seconds to write ISO to USB device with `rsync' method. -ISO succesfully unmounted. -USB device succesfully unmounted. -USB device succesfully ejected. -You can safely remove it ! - -``` - -Mention your device name, when you have more than one USB device using `--device` option. -``` -$ bootiso -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso - -``` - -By default bootios uses `rsync` command to perform all the action and if you want to use `dd` command instead of, use the following format. -``` -$ bootiso --dd -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso - -``` - -If you want to skip `mime-type` check, include the following option with bootios utility. -``` -$ bootiso --no-mime-check -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso - -``` - -Add the below option with bootios to skip user for confirmation before erasing and partitioning USB device. -``` -$ bootiso -y -d /dev/sde /opt/iso_images/archlinux-2018.05.01-x86_64.iso - -``` - -Enable autoselecting USB devices in conjunction with -y option. -``` -$ bootiso -y -a /opt/iso_images/archlinux-2018.05.01-x86_64.iso - -``` - -To know more all the available option for bootiso, run the following command. -``` -$ bootiso -h -Create a bootable USB from any ISO securely. -Usage: bootiso [...] - -Options - --h, --help, help Display this help message and exit. --v, --version Display version and exit. --d, --device Select block file as USB device. - If is not connected through USB, `bootiso' will fail and exit. - Device block files are usually situated in /dev/sXX or /dev/hXX. - You will be prompted to select a device if you don't use this option. --b, --bootloader Install a bootloader with syslinux (safe mode) for non-hybrid ISOs. Does not work with `--dd' option. --y, --assume-yes `bootiso' won't prompt the user for confirmation before erasing and partitioning USB device. - Use at your own risks. --a, --autoselect Enable autoselecting USB devices in conjunction with -y option. - Autoselect will automatically select a USB drive device if there is exactly one connected to the system. - Enabled by default when neither -d nor --no-usb-check options are given. --J, --no-eject Do not eject device after unmounting. --l, --list-usb-drives List available USB drives. --M, --no-mime-check `bootiso' won't assert that selected ISO file has the right mime-type. 
-s, --strict-mime-check Disallow loose application/octet-stream mime type in ISO file.
-- POSIX end of options.
--dd Use `dd' utility instead of mounting + `rsync'.
   Does not allow bootloader installation with syslinux.
--no-usb-check `bootiso' won't assert that selected device is a USB (connected through USB bus).
   Use at your own risks.

Readme

  Bootiso v2.5.2.
  Author: Jules Samuel Randolph
  Bugs and new features: https://github.com/jsamr/bootiso/issues
  If you like bootiso, please help the community by making it visible:
  * star the project at https://github.com/jsamr/bootiso
  * upvote those SE post: https://goo.gl/BNRmvm https://goo.gl/YDBvFe

```

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/bootiso-a-simple-bash-script-to-securely-create-a-bootable-usb-device-in-linux-from-iso-file/

作者:[Prakash Subramanian][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.2daygeek.com/author/prakash/
[1]:https://www.2daygeek.com/etcher-easy-way-to-create-a-bootable-usb-drive-sd-card-from-an-iso-image-on-linux/
[2]:https://www.2daygeek.com/create-a-bootable-usb-drive-from-an-iso-image-using-dd-command-on-linux/
[3]:https://github.com/jsamr/bootiso
diff --git a/sources/tech/20180605 How to use autofs to mount NFS shares.md b/sources/tech/20180605 How to use autofs to mount NFS shares.md
deleted file mode 100644 index 815cb53708..0000000000
--- a/sources/tech/20180605 How to use autofs to mount NFS shares.md
+++ /dev/null
@@ -1,138 +0,0 @@
How to use autofs to mount NFS shares
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx)

Most Linux file systems are mounted at boot and remain mounted while the system is running. This is also true of any remote file systems that have been configured in the `fstab` file. However, there may be times when you prefer to have a remote file system mount only on demand—for example, to boost performance by reducing network bandwidth usage, or to hide or obfuscate certain directories for security reasons. The package [autofs][1] provides this feature. In this article, I'll describe how to get a basic automount configuration up and running.

First, a few assumptions: Assume the NFS server named `tree.mydatacenter.net` is up and running. Also assume a data directory named `ourfiles` and two user directories, for Carl and Sarah, are being shared by this server.

A few best practices will make things work a bit better: It is a good idea to use the same user ID for your users on the server and any client workstations where they have an account. Also, your workstations and server should have the same domain name. Checking the relevant configuration files should confirm.
-``` -alan@workstation1:~$ sudo getent passwd carl sarah - -[sudo] password for alan: - -carl:x:1020:1020:Carl,,,:/home/carl:/bin/bash - -sarah:x:1021:1021:Sarah,,,:/home/sarah:/bin/bash - - - -alan@workstation1:~$ sudo getent hosts - -127.0.0.1       localhost - -127.0.1.1       workstation1.mydatacenter.net workstation1 - -10.10.1.5       tree.mydatacenter.net tree - -``` - -As you can see, both the client workstation and the NFS server are configured in the `hosts` file. I’m assuming a basic home or even small office network that might lack proper internal domain name service (i.e., DNS). - -### Install the packages - -You need to install only two packages: `nfs-common` for NFS client functions, and `autofs` to provide the automount function. -``` -alan@workstation1:~$ sudo apt-get install nfs-common autofs - -``` - -You can verify that the autofs files have been placed in the `etc` directory: -``` -alan@workstation1:~$ cd /etc; ll auto* - --rw-r--r-- 1 root root 12596 Nov 19  2015 autofs.conf - --rw-r--r-- 1 root root   857 Mar 10  2017 auto.master - --rw-r--r-- 1 root root   708 Jul  6  2017 auto.misc - --rwxr-xr-x 1 root root  1039 Nov 19  2015 auto.net* - --rwxr-xr-x 1 root root  2191 Nov 19  2015 auto.smb* - -alan@workstation1:/etc$ - -``` - -### Configure autofs - -Now you need to edit several of these files and add the file `auto.home`. First, add the following two lines to the file `auto.master`: -``` -/mnt/tree  /etc/auto.misc - -/home/tree  /etc/auto.home - -``` - -Each line begins with the directory where the NFS shares will be mounted. Go ahead and create those directories: -``` -alan@workstation1:/etc$ sudo mkdir /mnt/tree /home/tree - -``` - -Second, add the following line to the file `auto.misc`: -``` -ourfiles        -fstype=nfs     tree:/share/ourfiles - -``` - -This line instructs autofs to mount the `ourfiles` share at the location matched in the `auto.master` file for `auto.misc`. As shown above, these files will be available in the directory `/mnt/tree/ourfiles`. - -Third, create the file `auto.home` with the following line: -``` -*               -fstype=nfs     tree:/home/& - -``` - -This line instructs autofs to mount the users share at the location matched in the `auto.master` file for `auto.home`. In this case, Carl and Sarah's files will be available in the directories `/home/tree/carl` or `/home/tree/sarah`, respectively. The asterisk (referred to as a wildcard) makes it possible for each user's share to be automatically mounted when they log in. The ampersand also works as a wildcard representing the user's directory on the server side. Their home directory should be mapped accordingly in the `passwd` file. This doesn’t have to be done if you prefer a local home directory; instead, the user could use this as simple remote storage for specific files. - -Finally, restart the `autofs` daemon so it will recognize and load these configuration file changes. -``` -alan@workstation1:/etc$ sudo service autofs restart - -``` - -### Testing autofs - -If you change to one of the directories listed in the file `auto.master` and run the `ls` command, you won’t see anything immediately. For example, change directory `(cd)` to `/mnt/tree`. At first, the output of `ls` won’t show anything, but after running `cd ourfiles`, the `ourfiles` share directory will be automatically mounted. The `cd` command will also be executed and you will be placed into the newly mounted directory. 
-``` -carl@workstation1:~$ cd /mnt/tree - -carl@workstation1:/mnt/tree$ ls - -carl@workstation1:/mnt/tree$ cd ourfiles - -carl@workstation1:/mnt/tree/ourfiles$ - -``` - -To further confirm that things are working, the `mount` command will display the details of the mounted share. -``` -carl@workstation1:~$ mount - -tree:/mnt/share/ourfiles on /mnt/tree/ourfiles type nfs4 (rw,relatime,vers=4.0,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.10.1.22,local_lock=none,addr=10.10.1.5) - -``` - -The `/home/tree` directory will work the same way for Carl and Sarah. - -I find it useful to bookmark these directories in my file manager for quicker access. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/6/using-autofs-mount-nfs-shares - -作者:[Alan Formy-Duval][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/alanfdoss -[1]:https://wiki.archlinux.org/index.php/autofs diff --git a/sources/tech/20180606 Working with modules in Fedora 28.md b/sources/tech/20180606 Working with modules in Fedora 28.md deleted file mode 100644 index 9a45d3367b..0000000000 --- a/sources/tech/20180606 Working with modules in Fedora 28.md +++ /dev/null @@ -1,139 +0,0 @@ -Working with modules in Fedora 28 -====== -![](https://fedoramagazine.org/wp-content/uploads/2018/05/modules-workingwith-816x345.jpg) -The recent Fedora Magazine article entitled [Modularity in Fedora 28 Server Edition][1] did a great job of explaining Modularity in Fedora 28. It also pointed out a few example modules and explained the problems they solve. This article puts one of those modules to practical use, covering installation and setup of Review Board 3.0 using modules. - -### Getting started - -To follow along with this article and use modules, you need a system running [Fedora 28 Server Edition][2] along with [sudo administrative privileges][3]. Also, run this command to make sure all the packages on the system are current: -``` -sudo dnf -y update - -``` - -While you can use modules on Fedora 28 non-server editions, be aware of the [caveats described in the comments of the previous article][4]. - -### Examining modules - -First, take a look at what modules are available for Fedora 28. Run the following command: -``` -dnf module list - -``` - -The output lists a collection of modules that shows the associated stream, version, and available installation profiles for each. A [d] next to a particular module stream indicates the default stream used if the named module is installed. - -The output also shows most modules have a profile named default. That’s not a coincidence, since default is the name used for the default profile. - -To see where all those modules are coming from, run: -``` -dnf repolist - -``` - -Along with the usual [fedora and updates package repositories][5], the output shows the fedora-modular and updates-modular repositories. - -The introduction stated you’d be setting up Review Board 3.0. Perhaps a module named reviewboard caught your attention in the earlier output. Next, to get some details about that module, run this command: -``` -dnf module info reviewboard - -``` - -The description confirms it is the Review Board module, but also says it’s the 2.5 stream. However, you want 3.0. 
Look at the available reviewboard modules: -``` -dnf module list reviewboard - -``` - -The [d] next to the 2.5 stream means it is configured as the default stream for reviewboard. Therefore, be explicit about the stream you want: -``` -dnf module info reviewboard:3.0 - -``` - -Now for even more details about the reviewboard:3.0 module, add the verbose option: -``` -dnf module info reviewboard:3.0 -v - -``` - -### Installing the Review Board 3.0 module - -Now that you’ve tracked down the module you want, install it with this command: -``` -sudo dnf -y module install reviewboard:3.0 - -``` - -The output shows the ReviewBoard package was installed, along with several other dependent packages, including several from the django:1.6 module. The installation also enabled the reviewboard:3.0 module and the dependent django:1.6 module. - -Next, to see enabled modules, use this command: -``` -dnf module list --enabled - -``` - -The output shows [e] for enabled streams, and [i] for installed profiles. In the case of the reviewboard:3.0 module, the default profile was installed. You could have specified a different profile when installing the module. In fact, you still can — and this time you don’t need to specify the 3.0 stream since it was already enabled: -``` -sudo dnf -y module install reviewboard/server - -``` - -However, installation of the reviewboard:3.0/server profile is rather uneventful. The reviewboard:3.0 module’s server profile is the same as the default profile — so there’s nothing more to install. - -### Spin up a Review Board site - -Now that the Review Board 3.0 module and its dependent packages are installed, [create a Review Board site][6] running on the local system. Without further ado or explanation, copy and paste the following commands to do that: -``` -sudo rb-site install --noinput \ - --domain-name=localhost --db-type=sqlite3 \ - --db-name=/var/www/rev.local/data/reviewboard.db \ - --admin-user=rbadmin --admin-password=secret \ - /var/www/rev.local -sudo chown -R apache /var/www/rev.local/htdocs/media/uploaded \ - /var/www/rev.local/data -sudo ln -s /var/www/rev.local/conf/apache-wsgi.conf \ - /etc/httpd/conf.d/reviewboard-localhost.conf -sudo setsebool -P httpd_can_sendmail=1 httpd_can_network_connect=1 \ - httpd_can_network_memcache=1 httpd_unified=1 -sudo systemctl enable --now httpd - -``` - -Now fire up a web browser on the system, point it at , and enjoy the shiny new Review Board site! To login as the Review Board admin, use the userid and password seen in the rb-site command above. - -### Module cleanup - -It’s good practice to clean up after yourself. To do that, remove the Review Board module and the site directory: -``` -sudo dnf -y module remove reviewboard:3.0 -sudo rm -rf /var/www/rev.local - -``` - -### Closing remarks - -Now that you’ve explored how to examine and administer the Review Board module, go experiment with the other modules available in Fedora 28. - -Learn more about using modules in Fedora 28 on the [Fedora Modularity][7] web site. The dnf manual page’s Module Command section also contains useful information. 
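As a quick reference, the same workflow shown above applies to any module. For example, exploring the nodejs module might look like this (the stream and profile names here are only illustrative, so check the `dnf module list` output on your system for what is actually available):

```
# See which streams and profiles exist
dnf module list nodejs
dnf module info nodejs:10

# Install a specific stream and profile, then remove it when finished
sudo dnf -y module install nodejs:10/development
sudo dnf -y module remove nodejs:10
```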
- - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/working-modules-fedora-28/ - -作者:[Merlin Mathesius][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://fedoramagazine.org/author/merlinm/ -[1]:https://fedoramagazine.org/modularity-fedora-28-server-edition/ -[2]:https://getfedora.org/server/ -[3]:https://fedoramagazine.org/howto-use-sudo/ -[4]:https://fedoramagazine.org/modularity-fedora-28-server-edition/#comment-476696 -[5]:https://fedoraproject.org/wiki/Repositories -[6]:https://www.reviewboard.org/docs/manual/dev/admin/installation/creating-sites/ -[7]:https://docs.pagure.org/modularity/ diff --git a/sources/talk/20180611 12 fiction books for Linux and open source types.md b/sources/tech/20180611 12 fiction books for Linux and open source types.md similarity index 100% rename from sources/talk/20180611 12 fiction books for Linux and open source types.md rename to sources/tech/20180611 12 fiction books for Linux and open source types.md diff --git a/sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md b/sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md deleted file mode 100644 index 664c054913..0000000000 --- a/sources/tech/20180611 3 open source alternatives to Adobe Lightroom.md +++ /dev/null @@ -1,84 +0,0 @@ -3 open source alternatives to Adobe Lightroom -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/camera-photography-film.jpg?itok=oe2ixyu6) - -You wouldn't be wrong to wonder whether the smartphone, that modern jack-of-all-trades, is taking over photography. While that might be valid in the point-and-shoot camera market, there are a sizeable number of photography professionals and hobbyists who recognize that a camera that fits in your pocket can never replace a high-end DSLR camera and the depth, clarity, and realism of its photos. - -All of that power comes with a small price in terms of convenience; like negatives from traditional film cameras, the [raw image][1] files produced by DSLRs must be processed before they can be edited or printed. For this, a digital image processing application is indispensable, and the go-to application has been Adobe Lightroom. But for many reasons—including its expensive, subscription-based pricing model and its proprietary license—there's a lot of interest in open source and other alternatives. - -Lightroom has two main functions: processing raw image files and digital asset management (DAM)—organizing images with tags, ratings, and other metadata to make it easier to keep track of them. - -In this article, we'll look at three open source image processing applications: Darktable, LightZone, and RawTherapee. All of them have DAM capabilities, but none has Lightroom's machine learning-based image categorization and tagging features. If you're looking for more information about open source DAM software, check out Terry Hancock's article "[Digital asset management for an open movie project][2]," where he shares his research on software to organize multimedia files for his [_Lunatics!_][3] open movie project. 
- -### Darktable - -![Darktable][4] - -Like the other applications on our list, [darktable][5] processes raw images into usable file formats—it exports into JPEG, PNG, TIFF, PPM, PFM, and EXR, and it also supports Google and Facebook web albums, Flickr uploads, email attachments, and web gallery creation. - -Its 61 image operation modules allow you to adjust contrast, tone, exposure, color, noise, etc.; add watermarks; crop and rotate; and much more. As with the other applications described in this article, those edits are "non-destructive"—that is, your original raw image is preserved no matter how many tweaks and modifications you make. - -Darktable imports raw images from more than 400 cameras plus JPEG, CR2, DNG, OpenEXR, and PFM; images are managed in a database so you can filter and search using metadata including tags, ratings, and color. It's also available in 21 languages and is supported on Linux, MacOS, BSD, Solaris 11/GNOME, and Windows. (The [Windows port][6] is new, and darktable warns it may have "rough edges or missing functionality" compared to other versions.) - -Darktable is licensed under [GPLv3][7]; you can learn more by perusing its [features][8], viewing the [user manual][9], or accessing its [source code][10] on GitHub. - -### LightZone - -![LightZone's tool stack][11] - -As a non-destructive raw image processing tool, [LightZone][12] is similar to the other two applications on this list: it's cross-platform, operating on Windows, MacOS, and Linux, and it supports JPG and TIFF images in addition to raw. But it's also unique in several ways. - -For one thing, it started out in 2005 as a proprietary image processing tool and later became an open source project under a BSD license. Also, before you can download the application, you must register for a free account; this is so the LightZone development community can track downloads and build the community. (Approval is quick and automated, so it's not a large barrier.) - -Another difference is that image modifications are done using stackable tools, rather than filters (like most image-editing applications); tool stacks can be rearranged or removed, as well as saved and copied to a batch of images. You can also edit certain parts of an image using a vector-based tool or by selecting pixels based on color or brightness. - -You can get more information on LightZone by searching its [forums][13] or accessing its [source code][14] on GitHub. - -### RawTherapee - -![RawTherapee][15] - -[RawTherapee][16] is another popular open source ([GPL][17]) raw image processor worth your attention. Like darktable and LightZone, it is cross-platform (Windows, MacOS, and Linux) and implements edits in a non-destructive fashion, so you maintain access to your original raw image file no matter what filters or changes you make. - -RawTherapee uses a panel-based interface, including a history panel to keep track of your changes and revert to a previous point; a snapshot panel that allows you to work with multiple versions of a photo; and scrollable tool panels to easily select a tool without worrying about accidentally using the wrong one. Its tools offer a wide variety of exposure, color, detail, transformation, and demosaicing features. - -The application imports raw files from most cameras and is localized to more than 25 languages, making it widely usable. Features like batch processing and [SSE][18] optimizations improve speed and CPU performance. 
- -RawTherapee offers many other [features][19]; check out its [documentation][20] and [source code][21] for details. - -Do you use another open source raw image processing tool in your photography? Do you have any related tips or suggestions for other photographers? If so, please share your recommendations in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/alternatives/adobe-lightroom - -作者:[Opensource.com][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com -[1]:https://en.wikipedia.org/wiki/Raw_image_format -[2]:https://opensource.com/article/18/3/movie-open-source-software -[3]:http://lunatics.tv/ -[4]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_darkroom1.jpg?itok=0fjk37tC (Darktable) -[5]:http://www.darktable.org/ -[6]:https://www.darktable.org/about/faq/#faq-windows -[7]:https://github.com/darktable-org/darktable/blob/master/LICENSE -[8]:https://www.darktable.org/about/features/ -[9]:https://www.darktable.org/resources/ -[10]:https://github.com/darktable-org/darktable -[11]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_lightzone1tookstack.jpg?itok=1e3s85CZ (LightZone's tool stack) -[12]:http://www.lightzoneproject.org/ -[13]:http://www.lightzoneproject.org/Forum -[14]:https://github.com/ktgw0316/LightZone -[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/uploads/raw-image-processors_rawtherapee.jpg?itok=meiuLxPw (RawTherapee) -[16]:http://rawtherapee.com/ -[17]:https://github.com/Beep6581/RawTherapee/blob/dev/LICENSE.txt -[18]:https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions -[19]:http://rawpedia.rawtherapee.com/Features -[20]:http://rawpedia.rawtherapee.com/Main_Page -[21]:https://github.com/Beep6581/RawTherapee diff --git a/sources/tech/20180612 7 open source tools to make literature reviews easy.md b/sources/tech/20180612 7 open source tools to make literature reviews easy.md index 96edb68eff..674f4ea44b 100644 --- a/sources/tech/20180612 7 open source tools to make literature reviews easy.md +++ b/sources/tech/20180612 7 open source tools to make literature reviews easy.md @@ -1,3 +1,4 @@ +translated by lixinyuxx 7 open source tools to make literature reviews easy ====== diff --git a/sources/tech/20180621 How to connect to a remote desktop from Linux.md b/sources/tech/20180621 How to connect to a remote desktop from Linux.md deleted file mode 100644 index ced3b233dc..0000000000 --- a/sources/tech/20180621 How to connect to a remote desktop from Linux.md +++ /dev/null @@ -1,136 +0,0 @@ -How to connect to a remote desktop from Linux -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO) - -A [remote desktop][1], according to Wikipedia, is "a software or operating system feature that allows a personal computer's desktop environment to be run remotely on one system (usually a PC, but the concept applies equally to a server), while being displayed on a separate client device." - -In other words, a remote desktop is used to access an environment running on another computer. 
For example, the [ManageIQ/Integration tests][2] repository's pull request (PR) testing system exposes a Virtual Network Computing (VNC) connection port so I can remotely view my PRs being tested in real time. Remote desktops are also used to help customers solve computer problems: with the customer's permission, you can establish a VNC or Remote Desktop Protocol (RDP) connection to see or interactively access the computer to troubleshoot or repair the problem. - -These connections are made using remote desktop connection software, and there are many options available. I use [Remmina][3] because I like its minimal, easy-to-use user interface (UI). It's written in GTK+ and is open source under the GNU GPL license. - -In this article, I'll explain how to use the Remmina client to connect remotely from a Linux computer to a Windows 10 system and a Red Hat Enterprise Linux 7 system. - -### Install Remmina on Linux - -First, you need to install Remmina on the computer you'll use to access the other computer(s) remotely. If you're using Fedora, you can run the following command to install Remmina: -``` -sudo dnf install -y remmina - -``` - -If you want to install Remmina on a different Linux platform, follow these [installation instructions][4]. You should then find Remmina with your other apps (Remmina is selected in this image). - -![](https://opensource.com/sites/default/files/uploads/remmina1-on-desktop.png) - -Launch Remmina by clicking on the icon. You should see a screen that resembles this: - -![](https://opensource.com/sites/default/files/uploads/remmina2_launched.png) - -Remmina offers several types of connections, including RDP, which is used to connect to Windows-based computers, and VNC, which is used to connect to Linux machines. As you can see in the top-left corner above, Remmina's default setting is RDP. - -### Connecting to Windows 10 - -Before you can connect to a Windows 10 computer through RDP, you must change some permissions to allow remote desktop sharing and connections through your firewall. - -[Note: Windows 10 Home has no RDP feature listed. ][5] - -To enable remote desktop sharing, in **File Explorer** right-click on **My Computer → Properties → Remote Settings** and, in the pop-up that opens, check **Allow remote connections to this computer** , then select **Apply**. - -![](https://opensource.com/sites/default/files/uploads/remmina3_connect_win10.png) - -Next, enable remote desktop connections through your firewall. First, search for **firewall settings** in the **Start** menu and select **Allow an app through Windows Firewall**. - -![](https://opensource.com/sites/default/files/uploads/remmina4_firewall.png) - -In the window that opens, look for **Remote Desktop** under **Allowed apps and features**. Check the box(es) in the **Private **and/or **Public** columns, depending on the type of network(s) you will use to access this desktop. Click **OK**. - -![](https://opensource.com/sites/default/files/uploads/remmina5_firewall_2.png) - -Go to the Linux computer you use to remotely access the Windows PC and launch Remmina. Enter the IP address of your Windows computer and hit the Enter key. (How do I locate my IP address [in Linux][6] and [Windows 10][7]?) When prompted, enter your username and password and click OK. - -![](https://opensource.com/sites/default/files/uploads/remmina6_login.png) - -If you're asked to accept the certificate, select OK. 
- -![](https://opensource.com/sites/default/files/uploads/remmina7_certificate.png) - -You should be able to see your Windows 10 computer's desktop. - -![](https://opensource.com/sites/default/files/uploads/remmina8_remote_desktop.png) - -### Connecting to Red Hat Enterprise Linux 7 - -To set permissions to enable remote access on your RHEL7 computer, open **All Settings** on the Linux desktop. - -![](https://opensource.com/sites/default/files/uploads/remmina9_settings.png) - -Click on the Sharing icon, and this window will open: - -![](https://opensource.com/sites/default/files/uploads/remmina10_sharing.png) - -If **Screen Sharing** is off, click on it. A window will open, where you can slide it into the **On** position. If you want to allow remote connections to control the desktop, set **Allow Remote Control** to **On**. You can also select between two access options: one that prompts the computer's primary user to accept or deny the connection request, and another that allows connection authentication with a password. At the bottom of the window, select the network interface where connections are allowed, then close the window. - -Next, open **Firewall Settings** from **Applications Menu → Sundry → Firewall**. - -![](https://opensource.com/sites/default/files/uploads/remmina11_firewall_settings.png) - -Check the box next to vnc-server (as shown above) and close the window. Then, head to Remmina on your remote computer, enter the IP address of the Linux desktop you want to connect with, select **VNC** as the protocol, and hit the **Enter** key. - -![](https://opensource.com/sites/default/files/uploads/remmina12_vncprotocol.png) - -If you previously chose the authentication option **New connections must ask for access** , the RHEL system's user will see a prompt like this: - -![](https://opensource.com/sites/default/files/uploads/remmina13_permission.png) - -Select **Accept** for the remote connection to succeed. - -If you chose the option to authenticate the connection with a password, Remmina will prompt you for the password. - -![](https://opensource.com/sites/default/files/uploads/remmina14_password-auth.png) - -Enter the password and hit **OK** , and you should be connected to the remote computer. - -![](https://opensource.com/sites/default/files/uploads/remmina15_connected.png) - -### Using Remmina - -Remmina offers a tabbed UI, as shown in above picture, much like a web browser. In the top-left corner, as shown in the screenshot above, you can see two tabs: one for the previously established Windows 10 connection and a new one for the RHEL connection. - -On the left-hand side of the window, there is a toolbar with options such as **Resize Window** , **Full-Screen Mode** , **Preferences** , **Screenshot** , **Disconnect** , and more. Explore them and see which ones work best for you. - -You can also create saved connections in Remmina by clicking on the **+** (plus) sign in the top-left corner. Fill in the form with details specific to your connection and click **Save**. Here is an example Windows 10 RDP connection: - -![](https://opensource.com/sites/default/files/uploads/remmina16_saved-connection.png) - -The next time you open Remmina, the connection will be available. - -![](https://opensource.com/sites/default/files/uploads/remmina17_connection-available.png) - -Just click on it, and your connection will be established without re-entering the details. 
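
One last tip before moving on: the firewall change made earlier through the graphical Firewall Settings tool can also be done from a terminal. The commands below are a hedged equivalent rather than an exact transcript of what the GUI does, but they use only standard firewalld tooling and the same predefined vnc-server service you checked in the screenshot above:
```
# Permanently allow incoming VNC connections
sudo firewall-cmd --permanent --add-service=vnc-server

# Reload firewalld so the permanent rule takes effect now
sudo firewall-cmd --reload

# Verify that the service is allowed
sudo firewall-cmd --list-services

```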
- -### Additional info - -When you use remote desktop software, all the operations you perform take place on the remote desktop and use its resources—Remmina (or similar software) is just a way to interact with that desktop. You can also access a computer remotely through SSH, but it usually limits you to a text-only terminal to that computer. - -You should also note that enabling remote connections with your computer could cause serious damage if an attacker uses this method to gain access to your computer. Therefore, it is wise to disallow remote desktop connections and block related services in your firewall when you are not actively using Remote Desktop. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/6/linux-remote-desktop - -作者:[Kedar Vijay Kulkarni][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/kkulkarn -[1]:https://en.wikipedia.org/wiki/Remote_desktop_software -[2]:https://github.com/ManageIQ/integration_tests -[3]:https://www.remmina.org/wp/ -[4]:https://www.tecmint.com/remmina-remote-desktop-sharing-and-ssh-client/ -[5]:https://superuser.com/questions/1019203/remote-desktop-settings-missing#1019212 -[6]:https://opensource.com/article/18/5/how-find-ip-address-linux -[7]:https://www.groovypost.com/howto/find-windows-10-device-ip-address/ diff --git a/sources/tech/20180625 8 reasons to use the Xfce Linux desktop environment.md b/sources/tech/20180625 8 reasons to use the Xfce Linux desktop environment.md deleted file mode 100644 index 254f725a36..0000000000 --- a/sources/tech/20180625 8 reasons to use the Xfce Linux desktop environment.md +++ /dev/null @@ -1,85 +0,0 @@ -8 reasons to use the Xfce Linux desktop environment -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22) - -For several reasons (including curiosity), a few weeks ago I started using [Xfce][1] as my Linux desktop. One reason was trouble with background daemons eating up all the CPU and I/O bandwidth on my very powerful main workstation. Of course, some of the instability may be due to my removal of some of the RPM packages that provide those background daemons. However, even before I removed the RPMs, the fact is KDE was unstable and causing performance and stability issues. I needed to use a different desktop to avoid these problems. - -I realized in looking back over my series of articles on Linux desktops that I had neglected Xfce. This article is intended to rectify that oversight. I like Xfce a lot and am enjoying the speed and lightness of it more than I thought I would. - -As part of my research, I googled to try to learn what Xfce means. There is a historical reference to XForms Common Environment, but Xfce no longer uses the XForms tools. Some years ago, I found a reference to "Xtra fine computing environment," and I like that a lot. I will use that (despite not being able to find the page reference again). - -### Eight reasons for recommending Xfce - -#### 1\. Lightweight construction - -Xfce has a very small memory footprint and CPU usage compared to some other desktops, such as KDE and GNOME. On my system, the programs that make up the Xfce desktop take a tiny amount of memory for such a powerful desktop. 
Very low CPU usage is also a hallmark of the Xfce desktop. With such a small memory footprint, I am not especially surprised that Xfce is also very sparing of CPU cycles.

#### 2\. Simplicity

The Xfce desktop is simple and uncluttered with fluff. The basic desktop has two panels and a vertical line of icons on the left side. Panel 0 is at the bottom and consists of some basic application launchers, as well as the Applications icon, which provides access to all the applications on the system. Panel 1 is at the top and has an Applications launcher as well as a Workspace Switcher that allows the user to switch between multiple workspaces. The panels can be modified with additional items, such as new launchers, or by altering their height and width.

The icons down the left side of the desktop consist of the Home directory and Trash icons. It can also display icons for the complete filesystem directory tree and any connected pluggable USB storage devices. These icons can be used to mount and unmount the device, as well as to open the default file manager. They can also be hidden if you prefer, and the Filesystem, Trash, and Home directory icons are separately controllable. The removable drives can be hidden or displayed as a group.

#### 3\. File management

Thunar, Xfce's default file manager, is simple, easy to use and configure, and very easy to learn. While not as fancy as file managers like Konqueror or Dolphin, it is quite capable and very fast. Thunar can't create multiple panes in its window, but it does provide tabs so multiple directories can be open at the same time. Thunar also has a very nice sidebar that, like the desktop, shows the same icons for the complete filesystem directory tree and any connected USB storage devices. Devices can be mounted and unmounted, and removable media such as CDs can be ejected. Thunar can also use helper applications such as Ark to open archive files when they are clicked. Archives, such as ZIP, TAR, and RPM files, can be viewed, and individual files can be copied out of them.

![Xfce desktop][3]

The Xfce desktop with Thunar and the Xfce terminal emulator.

Having used many different applications for my [series on file managers][4], I must say that I like Thunar for its simplicity and ease of use. It is easy to navigate the filesystem using the sidebar.

#### 4\. Stability

The Xfce desktop is very stable. New releases seem to be on a three-year cycle, although updates are provided as necessary. The current version is 4.12, which was released in February 2015. The rock-solid nature of the Xfce desktop is very reassuring after having issues with KDE. The Xfce desktop has never crashed for me, and it has never spawned daemons that gobbled up system resources. It just sits there and works—which is what I want.

#### 5\. Elegance

Xfce is simply elegant. In my new book, The Linux Philosophy for SysAdmins, which will be available this fall, I talk about the many advantages of simplicity, including the fact that simplicity is one of the hallmarks of elegance. Clearly, the programmers who write and maintain Xfce and its component applications are great fans of simplicity. This simplicity is very likely the reason that Xfce is so stable, but it also results in a clean look, a responsive interface, an easily navigable structure that feels natural, and an overall elegance that makes it a pleasure to use.

#### 6\. Terminal emulation

The Xfce4 terminal emulator is a powerful emulator that uses tabs to allow multiple terminals in a single window, like many other terminal emulators. This terminal emulator is simple compared to emulators like Tilix, Terminator, and Konsole, but it gets the job done. The tab names can be changed, and the tabs can be rearranged by drag and drop, using the arrow icons on the toolbar, or selecting the options on the menu bar. One thing I especially like about the tabs on the Xfce terminal emulator is that they display the name of the host to which they are connected regardless of how many other hosts are connected through to make that connection, e.g., `host1==>host2==>host3==>host4` properly shows `host4` in the tab. Other emulators show `host2` at best.

Other aspects of its function and appearance can be easily configured to suit your needs. Like other Xfce components, this terminal emulator uses very little in the way of system resources.

#### 7\. Configurability

Within its limits, Xfce is very configurable. While not offering as much configurability as a desktop like KDE, it is far more configurable (and more easily so) than GNOME, for example. I found that the Settings Manager is the doorway to everything needed to configure Xfce. The individual configuration apps are separately available, but the Settings Manager collects them all into one window for ease of access. All the important aspects of the desktop can be configured to meet my needs and preferences.

#### 8\. Modularity

Xfce has a number of individual projects that make up the whole, and not all parts of Xfce are necessarily installed by your distro. [Xfce's projects][5] page lists the main projects, so you can find additional parts you might want to install. The items that weren't installed on my Fedora 28 workstation when I installed the Xfce group were mostly the applications at the bottom of that page.

There is also a [documentation page][6], and a wiki called [Xfce Goodies Project][7] lists other Xfce-related projects that provide applications, artwork, and plugins for Thunar and the Xfce panels.

### Conclusions

The Xfce desktop is thin and fast with an overall elegance that makes it easy to figure out how to do things. Its lightweight construction conserves both memory and CPU cycles. This makes it ideal for older hosts with few resources to spare for a desktop. However, Xfce is flexible and powerful enough to satisfy my needs as a power user.

I've learned that changing to a new Linux desktop can take some work to configure it as I want—with all of my favorite application launchers on the panel, my preferred wallpaper, and much more. I have changed to new desktops or updates of old ones many times over the years. It takes some time and a bit of patience.

I think of it like when I've moved cubicles or offices at work. Someone carries my stuff from the old office to the new one, and I connect my computer, unpack the boxes, and place their contents in appropriate locations in my new office. Moving into the Xfce desktop was the easiest move I have ever made.
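
If this article tempts you to make the same move, the Xfce group mentioned above can be pulled in with dnf on Fedora. The group id below is an assumption (group names and ids vary between releases), so list the available groups first and substitute whatever your release reports:
```
# Find the Xfce group id on your release
dnf group list -v | grep -i xfce

# Install the group (id assumed; use the one reported by the command above)
sudo dnf install @xfce-desktop-environment

```

After the installation finishes, pick the Xfce session from the session menu on your login screen.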
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/6/xfce-desktop - -作者:[David Both][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/dboth -[1]:https://xfce.org/ -[2]:/file/401856 -[3]:https://opensource.com/sites/default/files/uploads/xfce-desktop-01.png (Xfce desktop) -[4]:https://opensource.com/sitewide-search?search_api_views_fulltext=David%20Both%20File%20managers -[5]:https://xfce.org/projects -[6]:https://docs.xfce.org/ -[7]:https://goodies.xfce.org/ diff --git a/sources/tech/20180625 Checking out the notebookbar and other improvements in LibreOffice 6.0 - FOSS adventures.md b/sources/tech/20180625 Checking out the notebookbar and other improvements in LibreOffice 6.0 - FOSS adventures.md deleted file mode 100644 index 0321080a4b..0000000000 --- a/sources/tech/20180625 Checking out the notebookbar and other improvements in LibreOffice 6.0 - FOSS adventures.md +++ /dev/null @@ -1,111 +0,0 @@ -Checking out the notebookbar and other improvements in LibreOffice 6.0 – FOSS adventures -====== - -With any new openSUSE release, I am interested in the improvements that the big applications have made. One of these big applications is LibreOffice. Ever since LibreOffice has forked from OpenOffice.org, there has been a constant delivery of new features and new fixes every 6 months. openSUSE Leap 15 brought us the upgrade from LibreOffice 5.3.3 to LibreOffice 6.0.4. In this post, I will highlight the improvements that I found most newsworthy. - -### Notebookbar - -One of the experimental features of LibreOffice 5.3 was the Notebookbar. In LibreOffice 6.0 this feature has matured a lot and has gained a new form: the groupedbar. Lets take a look at the 3 variants. You can enable the Notebookbar by clicking on View –> Toolbar Layout and then Notebookbar. - -![][1] - -Please be aware that switching back to the Default Toolbar Layout is a bit of a hassle. To list the tricks: - - * The contextual groups notebookbar shows the menubar by default. Make sure that you don’t hide it. Change the Layout via the View menu in the menubar. - * The tabbed notebookbar has a hamburger menu on the upper right side. Select menubar. Then change the Layout via the View menu in the menubar. - * The groupedbar notebookbar has a menu dropdown menu on the upper right side. Make sure to maximize the window. Otherwise it might be hidden. - - - -The most talked about version of the notebookbar is the tabbed version. This looks similar to the Microsoft Office 2007 ribbon. That fact alone is enough to ruffle some feathers in the open source community. In comparison to the ribbon, the tabs (other than Home) can feel rather empty. The reason for that is that the icons are not designed to be big and bold. Another reason is that there are no sub-sections in the tabs. In the Microsoft version of the ribbon, you have names of the sub-sections underneath the icons. This helps to fill the empty space. However, in terms of ease of use, this design does the job. It provides you with a lot of functions in an easy to understand interface. - -![][2] - -The most successful version of the notebookbar is in my opinion the groupedbar. It gives you all of the most needed functions in a single overview. 
And the dropdown menus (File / Edit / Styles / Format / Paragraph / Insert / Reference) all show useful functions that are not so often used. - -![][3] - -By the way, it also works great for Calc (Spreadsheets) and Impress (Presentations). - -![][4] - -![][5] - -Finally there is the contextual groups version. The “groups” version is not very helpful. It shows a very limited number of basic functions. And it takes up a lot of space. If you want to use more advanced functions, you need to use the traditional menubar. The traditional menubar works perfectly, but in that case I rather combine it with the Default toolbar layout. - -![][6] - -The contextual single version is the better version. If you compare it to the “normal” single toolbar, it contains more functions and the order in which the functions are arranged is easier to use. - -![][7] - -There is no real need to make the switch to the notebookbar. But it provides you with choice. One of these user interfaces might just suit your taste. - -### Microsoft Office compatibility - -Microsoft Office compatibility (especially .docx, .xlsx and .pptx) is one of the things that I find very important. As a former Business Consultant I have created a lot of documents in the past. I have created 200+ page reports. They need to work flawless, including getting the page brakes right, which is incredibly difficult as the margins are never the same. Also the index, headers, footers, grouped drawings and SmartArt drawings need to display as originally composed. I have created large PowerPoint presentations with branded slides with +30 layouts, grouped drawings and SmartArt drawings. I need these to render perfectly in the slideshow. Furthermore, I have created large multi-tabbed Excel sheets with filters, pivot tables, graphs and conditional formatting. All of these need to be conserved when I open these files in LibreOffice. - -And no, LibreOffice is still not perfect. But damn, it is close. This time I have seen no major problems when opening older documents. Which means that LibreOffice finally gets SmartArt drawings right. In Writer, the page breaks in different places compared to Microsoft Word. That has always been an issue. But I don’t see many other issues. In Calc, the rendering of the graphs is less beautiful. But it’s similar enough to Excel. In Impress, presentations can look strange, because sometimes you see bigger/smaller fonts in the same slide (and that is not on purpose). But I was very impressed to see branded slides with multiple sections render correctly. If I needed to score it, I would give LibreOffice a 7 out of 10 for Microsoft Office compatibility. A very solid score. Below some examples of compatibility done right. - -![][8] - -![][9] - -![][10] - -### Noteworthy features - -Finally, there are the noteworthy features. I will only highlight the ones that I find cool. The first one is the ability to rotate images in any degree. Below is an example of me rotating a Gecko. - -![][11] - -The second cool feature is that the old collection of autoformat table styles are now replaced with a new collection of table styles. You can access these styles via the menubar: Table –> AutoFormat Styles. In the screenshots below, I show how to change a table from the Box List Green to the Box List Red format. - -![][12] - -![][13] - -The third feature is the ability to copy-past unformatted text in Calc. This is something I will use a lot, making it a cool feature. - -![][14] - -The final feature is the new and improved LibreOffice Online help. 
This is not the same as the LibreOffice help (press F1 to see what I mean). That is still there (and as far as I know unchanged). But this is the online wiki that you will find on the LibreOffice.org website. Some contributors obviously put a lot of effort in this feature. It looks good, now also on a mobile device. Kudos! - -![][15] - -If you want to learn about all of the other introduced features, read the [release notes][16]. They are really well written. - -### And that’s not all folks - -I discussed LibreOffice on openSUSE Leap 15. However, LibreOffice is also available on Android and in the Cloud. You can get the Android version from the [Google Play Store][17]. And you can see the Cloud version in action if you go to the [Collabora website][18]. Check them out for yourselves. - --------------------------------------------------------------------------------- - -via: https://www.fossadventures.com/checking-out-the-notebookbar-and-other-improvements-in-libreoffice-6-0/ - -作者:[Martin De Boer][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.fossadventures.com/author/martin_de_boer/ -[1]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice06.jpeg -[2]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice09.jpeg -[3]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice11.jpeg -[4]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice10.jpeg -[5]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice08.jpeg -[6]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice07.jpeg -[7]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice12.jpeg -[8]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice14.jpeg -[9]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice15.jpeg -[10]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice16.jpeg -[11]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice01.jpeg -[12]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice02.jpeg -[13]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice03.jpeg -[14]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice04.jpeg -[15]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice05.jpeg -[16]:https://wiki.documentfoundation.org/ReleaseNotes/6.0 -[17]:https://play.google.com/store/apps/details?id=org.documentfoundation.libreoffice&hl=en -[18]:https://www.collaboraoffice.com/press-releases/collabora-office-6-0-released/ diff --git a/sources/tech/20180625 The life cycle of a software bug.md b/sources/tech/20180625 The life cycle of a software bug.md deleted file mode 100644 index cacceb7a15..0000000000 --- a/sources/tech/20180625 The life cycle of a software bug.md +++ /dev/null @@ -1,67 +0,0 @@ -The life cycle of a software bug -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bug_software_issue_tracking_computer_screen.jpg?itok=6qfIHR5y) - -In 1947, the first computer bug was found—a moth trapped in a computer relay. - -If only all bugs were as simple to uncover. As software has become more complex, so too has the process of testing and debugging. 
Today, the life cycle of a software bug can be lengthy—though the right technology and business processes can help. For open source software, developers use rigorous ticketing services and collaboration to find and mitigate bugs. - -### Confirming a computer bug - -During the process of testing, bugs are reported to the development team. Quality assurance testers describe the bug in as much detail as possible, reporting on their system state, the processes they were undertaking, and how the bug manifested itself. - -Despite this, some bugs are never confirmed; they may be reported in testing but can never be reproduced in a controlled environment. In such cases they may not be resolved but are instead closed. - -It can be difficult to confirm a computer bug due to the wide array of platforms in use and the many different types of user behavior. Some bugs only occur intermittently or under very specific situations, and others may occur seemingly at random. - -Many people use and interact with open source software, and many bugs and issues may be non-repeatable or may not be adequately described. Still, because every user and developer also plays the role of quality assurance tester, at least in part, there is a good chance that bugs will be revealed. - -When a bug is confirmed, work begins. - -### Assigning a bug to be fixed - -A confirmed bug is assigned to a developer or a development team to be addressed. At this stage, the bug needs to be reproduced, the issue uncovered, and the associated code fixed. Developers may categorize this bug as an issue to be fixed later if the bug is low-priority, or they may assign someone directly if it is high-priority. Either way, a ticket is opened during the process of development, and the bug becomes a known issue. - -In open source solutions, developers may select from the bugs that they want to tackle, either choosing the areas of the program with which they are most familiar or working from the top priorities. Consolidated solutions such as [GitHub][1] make it easy for multiple developers to work on solutions without interfering with each other's work. - -When assigning bugs to be fixed, reporters may also select a priority level for the bug. Major bugs may have a high priority level, whereas bugs related to appearance only, for example, may have a lower level. This priority level determines how and when the development team is assigned to resolve these issues. Either way, all bugs need to be resolved before a product can be considered complete. Using proper traceability back to prioritized requirements can also be helpful in this regard. - -### Resolving the bug - -Once a bug has been fixed, it is usually be sent back to Quality Assurance as a resolved bug. Quality Assurance then puts the product through its paces again to reproduce the bug. If the bug cannot be reproduced, Quality Assurance will assume that it has been properly resolved. - -In open source situations, any changes are distributed—often as a tentative release that is being tested. This test release is distributed to users, who again fulfill the role of Quality Assurance and test the product. - -If the bug occurs again, the issue is sent back to the development team. At this stage, the bug is reopened, and it is up to the development team to repeat the cycle of resolving the bug. This may occur multiple times, especially if the bug is unpredictable or intermittent. Intermittent bugs are notoriously difficult to resolve. 
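
As a practical aside, one low-tech way to get a handle on an intermittent bug is to re-run the reproduction steps many times and log how often they fail. The sketch below is only an illustration; `./run_test.sh` is a hypothetical stand-in for whatever command reproduces the reported problem:
```
#!/bin/bash
# Run a hypothetical reproduction script 100 times and record every failure
: > flaky.log
for i in $(seq 1 100); do
    if ! ./run_test.sh > /dev/null 2>&1; then
        echo "run $i failed" >> flaky.log
    fi
done
echo "failed $(wc -l < flaky.log) times out of 100"

```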

If the bug does not occur again, the issue will be marked as resolved. In some cases, the initial bug is resolved, but other bugs emerge as a result of the changes made. When this happens, new bug reports may need to be initiated, starting the process over again.

### Closing the bug

After a bug has been addressed, identified, and resolved, the bug is closed and developers can move on to other areas of software development and testing. A bug will also be closed if it was never found or if developers were never able to reproduce it—either way, the next stage of development and testing will begin.

Any changes made to the solution in the testing version will be rolled into the next release of the product. If the bug was a serious one, a patch or a hotfix may be provided for current users until the release of the next version. This is common for security issues.

Software bugs can be difficult to find, but by following set processes and procedures, developers can make the process faster, easier, and more consistent. Quality Assurance is an important part of this process, as QA testers must find and identify bugs and help developers reproduce them. Bugs cannot be closed and resolved until the error no longer occurs.

Open source solutions distribute the burden of quality assurance testing, development, and mitigation, which often leads to bugs being discovered and mitigated more quickly and comprehensively. However, because of the nature of open source technology, the speed and accuracy of this process often depends upon the popularity of the solution and the dedication of its maintenance and development team.

_Rich Butkevic is a PMP certified project manager, certified scrum master, and runs [Project Zendo][2], a website for project management professionals to discover strategies to simplify and improve their project results. Connect with Rich at [Richbutkevic.com][3] or on [LinkedIn][4]._

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/life-cycle-software-bug

作者:[Rich Butkevic][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/rich-butkevic
[1]:https://github.com/
[2]:https://projectzendo.com
[3]:https://richbutkevic.com
[4]:https://www.linkedin.com/in/richbutkevic
How To Search If A Package Is Available On Your Linux Distribution Or Not
======
You can directly install the required package if you already know its exact name.

In other cases, when you don't know the exact package name or simply want to see what is available, you can easily search for it with the help of your distribution's package manager.

Searches automatically include both installed and available packages.

The format of the results depends upon the option. If the query produces no information, there are no packages matching the criteria.
- -This can be done through distribution package managers with variety of options. - -I had added all the possible options in this article and you can select which is the best and suitable for you. - -Alternatively we can achieve this through **whohas** command. This will search the given package to all the major distributions (such as Debian, Ubuntu, Fedora, etc.,) not only your own system distribution. - -**Suggested Read :** -**(#)** [List of Command line Package Managers For Linux & Usage][1] -**(#)** [A Graphical frontend tool for Linux Package Manager][2] - -### How to Search a Package in Debian/Ubuntu - -We can use apt, apt-cache and aptitude package managers to find a given package on Debian based distributions. I had included vast of options with this package managers. - -We can done this on three ways in Debian based systems. - - * apt command - * apt-cache command - * aptitude command - - - -### How to search a package using apt command - -APT stands for Advanced Packaging Tool (APT) which is replacement for apt-get. It’s feature rich command-line tools with included all the futures in one command (APT) such as apt-cache, apt-search, dpkg, apt-cdrom, apt-config, apt-key, etc..,. and several other unique features. - -APT is a powerful command-line tool for installing, downloading, removing, searching and managing as well as querying information about packages as a low-level access to all features of the libapt-pkg library. It’s contains some less used command-line utilities related to package management. -``` -$ apt -q list nano vlc -Listing... -nano/artful,now 2.8.6-3 amd64 [installed] -vlc/artful 2.2.6-6 amd64 - -``` - -Alternatively we can search a given package using below format. -``` -$ apt search ^vlc -Sorting... Done -Full Text Search... Done -vlc/artful 2.2.6-6 amd64 - multimedia player and streamer - -vlc-bin/artful 2.2.6-6 amd64 - binaries from VLC - -vlc-data/artful,artful 2.2.6-6 all - Common data for VLC - -vlc-l10n/artful,artful 2.2.6-6 all - Translations for VLC - -vlc-plugin-access-extra/artful 2.2.6-6 amd64 - multimedia player and streamer (extra access plugins) - -vlc-plugin-base/artful 2.2.6-6 amd64 - multimedia player and streamer (base plugins) - -``` - -### How to search a package using apt-cache command - -apt-cache performs a variety of operations on APT’s package cache. Displays information about the given packages. apt-cache does not manipulate the state of the system but does provide operations to search and generate interesting output from the package metadata. -``` -$ apt-cache search nano | grep ^nano -nano - small, friendly text editor inspired by Pico -nano-tiny - small, friendly text editor inspired by Pico - tiny build -nanoblogger - Small weblog engine for the command line -nanoblogger-extra - Nanoblogger plugins -nanoc - static site generator written in Ruby -nanoc-doc - static site generator written in Ruby - documentation -nanomsg-utils - nanomsg utilities -nanopolish - consensus caller for nanopore sequencing data - -``` - -Alternatively we can search a given package using below format. -``` -$ apt-cache policy vlc -vlc: - Installed: (none) - Candidate: 2.2.6-6 - Version table: - 2.2.6-6 500 - 500 http://in.archive.ubuntu.com/ubuntu artful/universe amd64 Packages - -``` - -Alternatively we can search a given package using below format. 
-``` -$ apt-cache pkgnames vlc -vlc-bin -vlc-plugin-video-output -vlc-plugin-sdl -vlc-plugin-svg -vlc-plugin-samba -vlc-plugin-fluidsynth -vlc-plugin-qt -vlc-plugin-skins2 -vlc-plugin-visualization -vlc-l10n -vlc-plugin-notify -vlc-plugin-zvbi -vlc-plugin-vlsub -vlc-plugin-jack -vlc-plugin-access-extra -vlc -vlc-data -vlc-plugin-video-splitter -vlc-plugin-base - -``` - -### How to search a package using aptitude command - -aptitude is a text-based interface to the Debian GNU/Linux package system. It allows the user to view the list of packages and to perform package management tasks such as installing, upgrading, and removing packages. Actions may be performed from a visual interface or from the command-line. -``` -$ aptitude search ^vlc -p vlc - multimedia player and streamer -p vlc:i386 - multimedia player and streamer -p vlc-bin - binaries from VLC -p vlc-bin:i386 - binaries from VLC -p vlc-data - Common data for VLC -v vlc-data:i386 - -p vlc-l10n - Translations for VLC -v vlc-l10n:i386 - -p vlc-plugin-access-extra - multimedia player and streamer (extra access plugins) -p vlc-plugin-access-extra:i386 - multimedia player and streamer (extra access plugins) -p vlc-plugin-base - multimedia player and streamer (base plugins) -p vlc-plugin-base:i386 - multimedia player and streamer (base plugins) -p vlc-plugin-fluidsynth - FluidSynth plugin for VLC -p vlc-plugin-fluidsynth:i386 - FluidSynth plugin for VLC -p vlc-plugin-jack - Jack audio plugins for VLC -p vlc-plugin-jack:i386 - Jack audio plugins for VLC -p vlc-plugin-notify - LibNotify plugin for VLC -p vlc-plugin-notify:i386 - LibNotify plugin for VLC -p vlc-plugin-qt - multimedia player and streamer (Qt plugin) -p vlc-plugin-qt:i386 - multimedia player and streamer (Qt plugin) -p vlc-plugin-samba - Samba plugin for VLC -p vlc-plugin-samba:i386 - Samba plugin for VLC -p vlc-plugin-sdl - SDL video and audio output plugin for VLC -p vlc-plugin-sdl:i386 - SDL video and audio output plugin for VLC -p vlc-plugin-skins2 - multimedia player and streamer (Skins2 plugin) -p vlc-plugin-skins2:i386 - multimedia player and streamer (Skins2 plugin) -p vlc-plugin-svg - SVG plugin for VLC -p vlc-plugin-svg:i386 - SVG plugin for VLC -p vlc-plugin-video-output - multimedia player and streamer (video output plugins) -p vlc-plugin-video-output:i386 - multimedia player and streamer (video output plugins) -p vlc-plugin-video-splitter - multimedia player and streamer (video splitter plugins) -p vlc-plugin-video-splitter:i386 - multimedia player and streamer (video splitter plugins) -p vlc-plugin-visualization - multimedia player and streamer (visualization plugins) -p vlc-plugin-visualization:i386 - multimedia player and streamer (visualization plugins) -p vlc-plugin-vlsub - VLC extension to download subtitles from opensubtitles.org -p vlc-plugin-zvbi - VBI teletext plugin for VLC -p vlc-plugin-zvbi:i386 - -``` - -### How to Search a Package in RHEL/CentOS - -Yum (Yellowdog Updater Modified) is one of the package manager utility in Linux operating system. Yum command is used to install, update, search & remove packages on some Linux distributions based on RedHat. 
-``` -# yum search ftpd -Loaded plugins: fastestmirror, refresh-packagekit, security -Loading mirror speeds from cached hostfile - * base: centos.hyve.com - * epel: mirrors.coreix.net - * extras: centos.hyve.com - * rpmforge: www.mirrorservice.org - * updates: mirror.sov.uk.goscomb.net -============================================================== N/S Matched: ftpd =============================================================== -nordugrid-arc-gridftpd.x86_64 : ARC gridftp server -pure-ftpd.x86_64 : Lightweight, fast and secure FTP server -vsftpd.x86_64 : Very Secure Ftp Daemon - - Name and summary matches only, use "search all" for everything. - -``` - -Alternatively we can search the same using below command. -``` -# yum list ftpd - -``` - -### How to Search a Package in Fedora - -DNF stands for Dandified yum. We can tell DNF, the next generation of yum package manager (Fork of Yum) using hawkey/libsolv library for backend. Aleš Kozumplík started working on DNF since Fedora 18 and its implemented/launched in Fedora 22 finally. -``` -# dnf search ftpd -Last metadata expiration check performed 0:42:28 ago on Tue Jun 9 22:52:44 2018. -============================== N/S Matched: ftpd =============================== -proftpd-utils.x86_64 : ProFTPD - Additional utilities -pure-ftpd-selinux.x86_64 : SELinux support for Pure-FTPD -proftpd-devel.i686 : ProFTPD - Tools and header files for developers -proftpd-devel.x86_64 : ProFTPD - Tools and header files for developers -proftpd-ldap.x86_64 : Module to add LDAP support to the ProFTPD FTP server -proftpd-mysql.x86_64 : Module to add MySQL support to the ProFTPD FTP server -proftpd-postgresql.x86_64 : Module to add PostgreSQL support to the ProFTPD FTP - : server -vsftpd.x86_64 : Very Secure Ftp Daemon -proftpd.x86_64 : Flexible, stable and highly-configurable FTP server -owfs-ftpd.x86_64 : FTP daemon providing access to 1-Wire networks -perl-ftpd.noarch : Secure, extensible and configurable Perl FTP server -pure-ftpd.x86_64 : Lightweight, fast and secure FTP server -pyftpdlib.noarch : Python FTP server library -nordugrid-arc-gridftpd.x86_64 : ARC gridftp server - -``` - -Alternatively we can search the same using below command. -``` -# dnf list proftpd -Failed to synchronize cache for repo 'heikoada-terminix', disabling. -Last metadata expiration check: 0:08:02 ago on Tue 26 Jun 2018 04:30:05 PM IST. -Available Packages -proftpd.x86_64 - -``` - -### How to Search a Package in Arch Linux - -pacman stands for package manager utility (pacman). pacman is a command-line utility to install, build, remove and manage Arch Linux packages. pacman uses libalpm (Arch Linux Package Management (ALPM) library) as a back-end to perform all the actions. - -In my case, i’m going to search chromium package. -``` -# pacman -Ss chromium -extra/chromium 48.0.2564.116-1 - The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser -extra/qt5-webengine 5.5.1-9 (qt qt5) - Provides support for web applications using the Chromium browser project -community/chromium-bsu 0.9.15.1-2 - A fast paced top scrolling shooter -community/chromium-chromevox latest-1 - Causes the Chromium web browser to automatically install and update the ChromeVox screen reader extention. Note: This - package does not contain the extension code. 
-community/fcitx-mozc 2.17.2313.102-1 - Fcitx Module of A Japanese Input Method for Chromium OS, Windows, Mac and Linux (the Open Source Edition of Google Japanese - Input) - -``` - -By default `-s`‘s builtin ERE (Extended Regular Expressions) can cause a lot of unwanted results. Use the following format to match the package name only. -``` -# pacman -Ss '^chromium-' - -``` - -pkgfile is a tool for searching files from packages in the Arch Linux official repositories. -``` -# pkgfile chromium - -``` - -### How to Search a Package in openSUSE - -Zypper is a command line package manager for suse & openSUSE distributions. It’s used to install, update, search & remove packages & manage repositories, perform various queries, and more. Zypper command-line interface to ZYpp system management library (libzypp). -``` -# zypper search ftp -or -# zypper se ftp -Loading repository data... -Reading installed packages... -S | Name | Summary | Type ---+----------------+-----------------------------------------+-------- - | proftpd | Highly configurable GPL-licensed FTP -> | package - | proftpd-devel | Development files for ProFTPD | package - | proftpd-doc | Documentation for ProFTPD | package - | proftpd-lang | Languages for package proftpd | package - | proftpd-ldap | LDAP Module for ProFTPD | package - | proftpd-mysql | MySQL Module for ProFTPD | package - | proftpd-pgsql | PostgreSQL Module for ProFTPD | package - | proftpd-radius | Radius Module for ProFTPD | package - | proftpd-sqlite | SQLite Module for ProFTPD | package - | pure-ftpd | A Lightweight, Fast, and Secure FTP S-> | package - | vsftpd | Very Secure FTP Daemon - Written from-> | package - -``` - -### How to Search a Package using whohas command - -whohas command such a intelligent tools which search a given package to all the major distributions such as Debian, Ubuntu, Gentoo, Arch, AUR, Mandriva, Fedora, Fink, FreeBSD, NetBSD. -``` -$ whohas nano -Mandriva nano-debug 2.3.1-1mdv2010.2.x http://sophie.zarb.org/rpms/0b33dc73bca710749ad14bbc3a67e15a -Mandriva nano-debug 2.2.4-1mdv2010.1.i http://sophie.zarb.org/rpms/d9dfb2567681e09287b27e7ac6cdbc05 -Mandriva nano-debug 2.2.4-1mdv2010.1.x http://sophie.zarb.org/rpms/3299516dbc1538cd27a876895f45aee4 -Mandriva nano 2.3.1-1mdv2010.2.x http://sophie.zarb.org/rpms/98421c894ee30a27d9bd578264625220 -Mandriva nano 2.3.1-1mdv2010.2.i http://sophie.zarb.org/rpms/cea07b5ef9aa05bac262fc7844dbd223 -Mandriva nano 2.2.4-1mdv2010.1.s http://sophie.zarb.org/rpms/d61f9341b8981e80424c39c3951067fa -Mandriva spring-mod-nanoblobs 0.65-2mdv2010.0.sr http://sophie.zarb.org/rpms/74bb369d4cbb4c8cfe6f6028e8562460 -Mandriva nanoxml-lite 2.2.3-4.1.4mdv2010 http://sophie.zarb.org/rpms/287a4c37bc2a39c0f277b0020df47502 -Mandriva nanoxml-manual-lite 2.2.3-4.1.4mdv2010 http://sophie.zarb.org/rpms/17dc4f638e5e9964038d4d26c53cc9c6 -Mandriva nanoxml-manual 2.2.3-4.1.4mdv2010 http://sophie.zarb.org/rpms/a1b5092cd01fc8bb78a0f3ca9b90370b -Gentoo nano 9999 http://packages.gentoo.org/package/app-editors/nano -Gentoo nano 9999 http://packages.gentoo.org/package/app-editors/nano -Gentoo nano 2.9.8 http://packages.gentoo.org/package/app-editors/nano -Gentoo nano 2.9.7 http://packages.gentoo.org/package/app-editors/nano - -``` - -If you want to search a given package to only current distribution repository, use the below format. 
-``` -$ whohas -d Ubuntu vlc -Ubuntu vlc 2.1.6-0ubuntu14.04 1M all http://packages.ubuntu.com/trusty/vlc -Ubuntu vlc 2.1.6-0ubuntu14.04 1M all http://packages.ubuntu.com/trusty-updates/vlc -Ubuntu vlc 2.2.2-5ubuntu0.16. 1M all http://packages.ubuntu.com/xenial/vlc -Ubuntu vlc 2.2.2-5ubuntu0.16. 1M all http://packages.ubuntu.com/xenial-updates/vlc -Ubuntu vlc 2.2.6-6 40K all http://packages.ubuntu.com/artful/vlc -Ubuntu vlc 3.0.1-3build1 32K all http://packages.ubuntu.com/bionic/vlc -Ubuntu vlc 3.0.2-0ubuntu0.1 32K all http://packages.ubuntu.com/bionic-updates/vlc -Ubuntu vlc 3.0.3-1 33K all http://packages.ubuntu.com/cosmic/vlc -Ubuntu browser-plugin-vlc 2.0.6-2 55K all http://packages.ubuntu.com/trusty/browser-plugin-vlc -Ubuntu browser-plugin-vlc 2.0.6-4 47K all http://packages.ubuntu.com/xenial/browser-plugin-vlc -Ubuntu browser-plugin-vlc 2.0.6-4 47K all http://packages.ubuntu.com/artful/browser-plugin-vlc -Ubuntu browser-plugin-vlc 2.0.6-4 47K all http://packages.ubuntu.com/bionic/browser-plugin-vlc -Ubuntu browser-plugin-vlc 2.0.6-4 47K all http://packages.ubuntu.com/cosmic/browser-plugin-vlc -Ubuntu libvlc-bin 2.2.6-6 27K all http://packages.ubuntu.com/artful/libvlc-bin -Ubuntu libvlc-bin 3.0.1-3build1 17K all http://packages.ubuntu.com/bionic/libvlc-bin -Ubuntu libvlc-bin 3.0.2-0ubuntu0.1 17K all http://packages.ubuntu.com/bionic-updates/libvlc-bin - -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/how-to-search-if-a-package-is-available-on-your-linux-distribution-or-not/ - -作者:[Prakash Subramanian][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/prakash/ -[1]:https://www.2daygeek.com/list-of-command-line-package-manager-for-linux/ -[2]:https://www.2daygeek.com/list-of-graphical-frontend-tool-for-linux-package-manager/ diff --git a/sources/tech/20180626 Playing Badass Acorn Archimedes Games on a Raspberry Pi.md b/sources/tech/20180626 Playing Badass Acorn Archimedes Games on a Raspberry Pi.md new file mode 100644 index 0000000000..b1f8d97305 --- /dev/null +++ b/sources/tech/20180626 Playing Badass Acorn Archimedes Games on a Raspberry Pi.md @@ -0,0 +1,539 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Playing Badass Acorn Archimedes Games on a Raspberry Pi) +[#]: via: (https://blog.dxmtechsupport.com.au/playing-badass-acorn-archimedes-games-on-a-raspberry-pi/) +[#]: author: (James Mawson https://blog.dxmtechsupport.com.au/author/james-mawson/) + +Playing Badass Acorn Archimedes Games on a Raspberry Pi +====== + +![Cannon Fodder on the Raspberry Pi][1] + +The Acorn Archimedes was an excellent machine and years ahead of its time. + +Debuting in 1987, it featured a point and click graphic interface not so different to Windows 95, 32 bit processing, and enough 3D graphics power to portal you to a new decade. + +These days, it’s best remembered for launching the Acorn RISC Machines processor. ARM processors went on to rule the world. You almost certainly keep one in your pocket. + +What’s less well appreciated is that the Archimedes was rad for games. For a few years, it was the most powerful desktop in the world and developers were eager to show what they could do with it. + +But with such power came a great price tag. 
The Archimedes was never going to be in as many homes to make as many memories as Sega or Nintendo. + +But now, the Raspberry Pi’s ARM chip makes it cheap and easy to play these games on the same operating system and CPU architecture they were written for. + +Even better, the rights holders to much of this machine’s gaming catalogue have been generous enough to allow hobbyists to legally download their work for free. + +This is a cheap and easy project. In fact, if you already run a Raspberry Pi home theatre or retro gaming rig, all you really need is a spare SD card. + +### Introduction + +None of this will be on the exam, so if you already know the story of the Acorn Archimedes – or just want to get straight into gaming – feel free to skip ahead to the next section. + +But if you’re wondering what we’re even talking about, here it is: + +#### What on Earth is an Acorn Archimedes? + +Me and Acorn computers go way back. + +For the earliest part of my life that I can remember, Dad ran his business from home, writing timetabling software for schools. This was the early 80s, before the desktop computer market had been whittled down to Mac and PC. There were Amstrad CPCs, Sinclairs, Commodores, Ataris, TRSs, the list goes on. + +They all had their operating systems and ran their own software. If you wanted to port your software over to a new platform, you had to buy it. + +So, at a time when it was somewhat novel for a family to have even one computer, we had about a dozen, many of them already quite antique. There was a Microbee, an Apple IIc, an IBM XT, all sorts of stuff. + +The ones Dad liked most though were the BBC machines by [Acorn Computers][2]. He had several. There was a Model B, a Master 128 and a Master Compact. + +They were named that way because the British Broadcasting Corporation were developing a course to teach children how to program and they needed. Because of this educational focus, they’d found their way into a lot of schools – exactly the market he was selling to. + +At some point, I figured out you could play games on these things. I was straight away hooked like it was crack. All I cared about was games games games. It must have taken me several years to figure out that computers had a different use, because I can vividly recall how annoyed I was to be starting school while Dad got to stay home and play games all day. + +On my 7th birthday I got a second hand BBC Master Compact all of my own. This was probably as much to keep me away from his work computers as it was to introduce me to computing. I started learning to program in BASIC and Logo. I also played epic amounts of space shooters, 2D platformers and adventure games. + +Being obsessed with these things, I tagged along to the local BBC Users Group. This was a monthly get-together where enthusiasts would discuss what was new, bring their machines to show off what they’re doing and engage in some casual software piracy. Back before internet forums and torrents, people did this in person. + +This was where I first saw an Archimedes. I can’t really remember the exact year or the exact model – I just remember my jaw dropping to the floor at the 3D graphics and the 8 channel stereo sound. It would be about a decade before I saw anything similar on a gaming console. + + + +#### The Birth of a Legend + +Looking back, this has very good claim to be the first modern desktop computer. 
It was a 32-bit machine, an interface that looks more like what we use today than anything built in the 1980s, a palette of 4096 colours, and more horsepower than a lot of people knew what to do with. + +Now, don’t get me wrong: the 8-bit BBC machines were loads of fun and great for what they were – but what they were was fairly primitive. It was basically just a big box you typed commands on to make it beep and stuff. In theory it had 8 colours, but when you saw one in the wild it was usually hooked up to a monochrome screen and you didn’t feel like you were missing out on too much because of it. + +In 1984, Apple launched their Macintosh, featuring the first Graphical User Interface available on the mass market. Acorn knew they’d need a point and click graphic interface to stay in the game. And they knew the aging MOS 6502 they’d used in all their machines so far was just not going to be the CPU of the future. + +So, what to replace it with? + +The Acorn engineers looked at the available processors and found that none of them could quite do what they want. They decided to build their own – and it would be radically different. + +Up until that point, chip makers took a bit of a Swiss Army Knife approach to processor design – to compete, you added more and more useful stuff to the instruction set. + +There was a certain logic to this – hardware could be mass produced, while good software engineers were expensive. It made sense to handle as much as possible in the hardware. For device manufacturers with bills to pay, it was a real selling point. + +But this came at a cost – more and more complex instructions required more and more clock cycles to complete. Often there was a whole extra layer of processing to convert the complex machine code instructions into smaller instructions. As RAM became bigger and faster, CPUs were struggling to keep pace with the available memory bandwidth. + +Acorn turned this idea on its head, with a stripped-back approach in the great British tradition of the [de Havilland Mosquito][3]: Every instruction could be completed in a single cycle. + +While testing the prototype CPU, the engineers noticed something weird: they’d disconnected the power and yet the chip was running. What they’d built was so power-efficient that it kept running on residual power from nearby components. + +It was also 25 times faster than the 6502 CPU they used in the old BBC machines. Even better, it was several times more powerful than the Motorola 68000 found in the Apple Macintosh, Atari ST and Commodore Amiga – and several time more powerful than the 386 in the new Compaq too. + +With such radically new hardware, they needed a new operating system. What they come up with was Risc OS, and it was operated entirely through a graphic point-and-click desktop interface with a pinboard and an icon bar. This was pretty much Windows 95, 8 years before it happened. + +In a single step, Acorn had gone from producing some perfectly serviceable 8-bit box that kids could learn to code one, to having the most powerful desktop computer in the world. I mean, it was technically possible to get something more powerful – but it would have been some kind of server or mainframe. As far as something that could sit on your desk, this was top of the pile. + +It sold well in Acorn’s traditional education market in the UK. The sheer grunt also made it popular for certain power-hungry business tasks, like desktop publishing. 
#### The Crucifixion

It wasn’t too long before Dad got an Archimedes – I can’t remember exactly which model. By this time, he’d moved his business out of home to an office. When school holidays rolled around, I’d sometimes have to spend the day at his work, where I had all the time in the world to fiddle around on it.

The software it came with was enough to keep a child entertained for a while. It came with a demo game called Lander – this was more about showing off the machine’s 3D graphics power than providing any lasting value. There was a card game, and also some drawing programs.

I played with the demo disc until I got bored – which I think was the most use that this particular machine ever got. For all the power under the hood, all the applications Dad used to actually run his business ran on DOS and Windows.

He’d spent more than $4000 in today’s money on the most sophisticated and advanced piece of computing technology for a mile in any direction, and it just sat there.

He might have at least salvaged some beneficial utility out of it if he’d followed my advice of getting some games for it and letting me take it home.

He never got around to writing any software on it. The Archimedes was apparently a big hit with British schools, but never really got popular enough with his Australian customer base to be worth coding for.

Which I guess kind of sums up where it all ultimately went wrong for the Acorn desktop.

As the 80s wore on into the 90s, Compaq reverse engineered the BIOS on the IBM PC to release their own fully compatible PC, and big players like Amstrad left their proprietary platforms to produce their own compatible MS-DOS machines. It also became increasingly easy for just about anyone with a slight technical bent to build their own PC-compatible clone from off-the-shelf parts – and to upgrade old PCs with new hard drives, sound cards, and the latest 386 and 486 processors.

Small, independent computer shops and other local businesses started building their own PCs, and hardware manufacturers competed to sell parts to them. This was a computing platform that could serve all price points.

With so much of the user base now on MS-DOS, software developers followed. Which only reinforced the idea that this was the obvious system to buy, which in turn reinforced that it was the system to code for.

The days when just any computer maker could make a go of it with their own proprietary hardware and operating system had passed. Third-party support was everything. It didn’t actually matter how good your technology was if nothing would run on it. Even Apple nearly went to the wall.

Acorn hung on through the 90s, and there was even a successor to the Archimedes called the [RiscPC][4]. But while the technology itself was again very good, these machines were relatively marginal affairs in the marketplace. The era of the Acorn desktop had passed.

#### The Resurrection

It was definitely good for our family business when the market consolidated to Mac and PC. We didn’t need to maintain so many versions of the same software.

But the Acorn machines had so much sentimental value. We both liked them and were sad to see them go. I’ve never been that into sport, but watching them slowly disappear might have been a bit like watching your football team lose match after match before finally going broke.

We totally had no idea that they were, very quietly, on a path to total domination.

The ARM was originally only built to go in the Archimedes.
But it turned out that having a massively powerful processor with a simple instruction set and very little heat emission was useful for all sorts of stuff: DVD players, set top boxes, photocopiers, televisions, vending machines, home and small business routers, you name it. + +The ARM’s low power consumption made it especially useful for portable devices like PDAs, digital cameras, GPS navigators and – eventually – tablets and smartphones. Intel tried to compete in the smartphone market, but was [eventually forced to admit that this technology was just better for phones][5]. + +So in the end, Dad’s old BBC machines went on to conquer the world. + +### The Acorn Archimedes as a Gaming Platform + +While Microsoft operating systems were ultimately to become the only real choice for the serious desktop gamer, for a while the Archimedes was the most powerful desktop computer in the world. This attracted a lot of games developers, eager to show what they could do with it. + +This would have been about more than just selling to a well moneyed section of the desktop computer market that was clearly quite willing to throw cash at shiny things. It would have been a chance to make your reputation in the industry with a flagship product that just wasn’t possible on lesser hardware. + +So it is that you see Star Fighter 3000, Chocks Away and Zarch all charting new territory in what was possible on a desktop computer. + +But while the 3D graphics power was this system’s headline feature, the late 80s and early 90s were really the era of Sonic and Mario: the heyday of 2D platform games. Here, the Archimedes also excels, with offerings like Mad Professor Mariarti, Bug Hunter, F.R.E.D. and Hamsters, all of which are massively playable, have vibrant graphics and a boatload of personality. + +As you dig further into the library, you also find a few games that show that not every developer really knew what to do with this machine. Some games – like Repton 3 – are just old BBC micro games given the most meagre of facelifts. + +Many of the games in the Archimedes catalogue you’ll recognise from other platforms: Populous, Lemmings, James Pond, Battle Chess, the list goes on. + +Here, the massive hardware advantage of the Archimedes means that it usually had the best version of the game to play. You’re not getting a whole new game here: but it’s noticeably smoother graphics and gameplay, especially compared to the console releases. + +All in all, the Archimedes never had a catalogue as expansive as MS-DOS, the Commodore Amiga, or the Sega and Nintendo consoles. But there are enough truly excellent games to make it worth an SD card. + +### Configuring Your Raspberry Pi + +This is a bit different to other retro gaming options on the Raspberry Pi – we’re not running an emulator. The ARM chip in the Pi is a direct descendant of the one in the Archimedes, and there’s an [open source version of Risc OS][6] we can install on it. + +For the most hardcore retro gaming purist, nothing less than using the hardware will do. For everyone else, using the same operating system from back in the day to load up your games means that your retro gaming rig becomes just that little bit more of a time machine. + +But even with all these similarities, there’s still going to be a few things that change in 30 years of computing. + +The most visible difference is that our Raspberry Pi doesn’t come with an internal 3.5″ floppy disk drive. 
You might be able to hook up a USB one, but most of us don’t have this lying around and don’t really want one. So we’re going to need a different way to boot floppy images. + +The more major difference is how much RAM the operating system is written to handle. The earliest versions of Risc OS made efficient use of the ARM’s 32-bit register by using 26 bits for the memory address and the remaining 6 bits for status flags. A 26-bit scheme gives you enough addressing space for up to 64 megabytes of RAM. + +When this was first devised, the fact that an Archimedes came with a whole megabyte of RAM was considered incredibly luxurious by the standards of the day. By contrast, the first Commodore Amiga had 256kb of RAM. The Sega Mega Drive had 72kb. + +But as time wore on, and later versions of Risc OS moved to a 32-bit addressing scheme. This is what we have on our Raspberry Pi. A few games have been [recompiled to run on 32 bit addressing][7], but most have not. + +The Archimedes also used different display drivers for different screens. These days, our GPU can handle all of this for us. We just need to install a patch to get that working. + +There are free utilities you can download to handle all of these things. + +#### Hardware Requirements + +I’ve tested this to work with a Raspberry Pi Model 3 B, but I expect that any Pi from the Model A onwards should manage this. The ARM processor on the slowest Pi is a great many times more powerful than the on the fastest Archimedes. + +The lack of ports on a Raspberry Pi Zero means it’s probably not the most convenient choice, but if you can navigate around this, then it should be powerful enough. + +In addition to the board, you’ll need something to put it in, a micro SD card, a USB SD card adapter, a power supply, a screen with an HDMI input, a USB keyboard and a 3 button mouse – a clickable scroll wheel works fine for your middle button. + +If you already have a Raspberry Pi home theatre or retro gaming rig, then you’ve already got all this, so all you really need is a new micro SD card to swap in for Risc OS. + +#### Installing Risc OS Open + +When I first wrote this guide, Risc OS wasn’t an available option for the Raspberry Pi 3 on the NOOBS and PINN installers. That meant you had to download the image from the [Risc OS Open downloads page][8] and burn it to a new micro SD card. + +You can still do this if you like, and if you can’t connect your Raspberry Pi to the internet via Wi-Fi or Ethernet then that’s still your best option. If you’re not sure how to write an image to an SD card, here’s some good guides for [Windows][9] and for [Mac][10]. + +For everyone else, now that Risc OS is available in the [NOOBS installer][11] again, I recommend using that. What’s really cool about NOOBS is that it makes it super simple to dual boot with something like [Retropie][12] or [Recalbox][13] for the ultimate all-in-one retro gaming device. + +Risc OS is as an extremely good option for a dual boot machine because it only uses a few gigabytes – a small fraction of even the smallest SD cards around these days. This leaves most of it available for other operating systems and saves you having to swap cards, which can be a right pain if you have to unscrew the case. The games themselves vary from about 300kb to about 5 megabytes at the very largest, so don’t worry about that. + +This image requires a card with at least 2 gigabytes, which for what we’re doing is plenty. Don’t worry about tracking down the largest SD card you can find. 
The operating system is extremely lightweight and the games themselves vary from about 300kb to about 5 megabytes at the very largest. Even a very small card offers enough space for hundreds of games – more than you will ever play. + +If you’re unsure how to use the NOOBS installer, please [click here for instructions][14]. + +#### Navigating Risc OS + +Compared to your first Linux experience, using Risc OS for the first time is, in my opinion, far more gentle. This is in large part thanks to a graphical interface that’s both fairly intuitive and actually very useful for configuring things. + +The command line is there if you want it, but we won’t need it just to play some games. You can kind of tell that this was first built with a mass-market audience in mind. + +So let’s jump right in. + +Insert your card into your Pi, hook it up to your screen, keyboard, mouse and power supply. It shouldn’t take all that long before you’re at the desktop. + +###### The Risc OS Mouse + +Risc OS uses a three button mouse. + +You use the left button – or “Select” button – in much the same way as you’re already used to: one click to select icons and a double click to open folders and execute programs. + +The middle button – ie, your scroll wheel – is used to open menus in much the same way as the right mouse button is used in Windows. We’ll be using this a lot. + +The right button – or “Adjust” button – has behaviours that vary between applications. + +###### Browsing Directories + +Ok, so let’s start taking a look around. At the bottom of the screen you’ll see an icon bar. In the bottom left are icons for storage devices. + +Click once with the left mouse button on the SD card and a window will open to show you what’s on it. You can take a look inside the directories by double clicking. + +###### The Pling + +As you start to browse Risc OS, you’ll notice that a lot of what you see inside the directories has exclamation marks at the beginning. This is said out aloud as a “pling”, and it’s used to mark an application that you can execute. + +One of the quirks of Risc OS is that these executables aren’t really files – they’re directories. If you hold shift while double clicking, you can open them and start looking inside, same as any other directory – but be careful with this, it’s a good way to break stuff. + +Risc OS comes with some applications installed – including a web browser called !NetSurf. We’ll be using that soon to download our games and some small utilities we need to get them running. + +###### Creating Directories + +Risc OS comes with a basic directory structure, but you’ll probably want to create some new ones to put your floppy images and .zip files. + +To do this, click with the middle button inside the folder where you want to place your new directory. This will open up a menu. Move your mouse over the arrow next to “New Directory” and a prompt will open where you can name it and press OK. + +###### Copying Files and Directories + +To copy a file or directory somewhere else, just drag and drop it with the left mouse button to the new location. + +###### Forcing Applications to Quit + +Sometimes, if you haven’t configured something right, if you’ve downloaded something that just doesn’t work, or if you plain forgot to look up the controls in the manual, you might find yourself stuck inside an application that has a blank screen or isn’t responding. + +Here, you can press Ctrl-Shift-F12 to quit back to the desktop. 
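
If you’re curious about the command line I mentioned earlier, press F12 and the desktop drops away to a command prompt; pressing Return on an empty line brings the desktop back. You don’t need it for anything in this guide, but a few of the classic star commands map neatly onto the desktop actions above. Treat the sketch below as a rough guide rather than gospel – it assumes the standard star commands behave on Risc OS Open as they always have, and that your SD card shows up with the usual RISCOSpi disc name (yours may be different):

```
| Lines starting with | are comments, as in an Obey file
*Cat
| Lists the contents of the current directory - roughly what a filer window shows you

*CDir Games
| Creates a new directory called Games inside the current directory

*Dir SDFS::RISCOSpi.$.Games
| Makes that new directory the current one ($ is the root of the SD card)
```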
+ +###### Shutting Down and Rebooting + +If you want to power down or reboot your Pi, just click the middle button on the raspberry icon in the bottom right corner and select “Shutdown”. This will give you an option to reboot the Pi or you can just remove the power cable. + +#### Connecting to the Internet + +Okay, so I’ve got good news and bad news. I’ll get the bad news right out of the way first: + +Risc OS Open doesn’t yet support wireless networking through either the onboard wireless or a wireless dongle in the USB port. It’s on the [to-do list][15]. + +In the meantime, if you can a way to connect to the internet through the Ethernet port, it makes the rest of this project a lot easier. If you were going to use an Ethernet cable anyway, this will be no big deal. And if you have a wireless bridge handy, you can just use that. + +If you don’t have a wireless bridge, but do have a second Raspberry Pi board lying around (hey, who doesn’t these days :p), you can [set it up as a wireless bridge][16]. This is what I did and it’s actually pretty easy if you just follow the steps. + +Another option might be to set up a temporary tinkering area next to your router so that you can plug straight in to get everything in configured. + +Ok, so what’s the good news? + +It’s this: once you’ve got the internet in your front hole, the rest is rather easy. In fact, the only bit that’s not done for your is configuring name servers. + +So let’s get to it. + +Double-click on !Configure, click once on Network, click on Internet and then on Host Names. Then enter the IPs of your name servers in the name server fields. If you’re not sure what IP to put in here, just use Google’s publicly available name servers – 8.8.8.8 and 8.8.4.4. + +When you click Set, it will ask you if you want to reboot. Click yes. + +Now double-click on !NetSurf. You’ll see the logo is now added to the bottom right corner. Click on this to open a new browser window. + +Compared to Chrome, Firefox, et al, !NetSurf is a primitive web browser. I do not recommend it as a daily driver. But to download Risc OS software directly to the Pi, it’s actually pretty damn convenient. + +###### Short Link to This Guide + +As you go through the rest of this guide, it’s going to get annoying copying by hand all the URLs you’ll want to visit. + +To save you this trouble, type bit.do/riscpi into the browser bar to load this page. With this loaded, you can follow the links. + +###### If You’re Still Getting Host Name Error Messages + +One little quirk of Risc OS is that it seems to check for name servers as part of the boot process. If it doesn’t find them then, it assumes they’re not there for the rest of the session. + +This means that if you connect your Pi to the internet when it’s already booted, you will get an error message when you try to browse the internet with !NetSurf. + +To fix this, just double check that your wireless bridge is switched on or that your Pi is plugged into the router, reboot, and the OS should find the name servers. + +###### If You Can’t Connect to the Internet + +If this is all too hard and you absolutely can’t connect to the internet, there’s always sneakernet – downloading files to another machine and then transferring by USB stick. + +This is what I tried at first; It does work, but I found it terribly annoying. + +One frustration is that using a Windows 10 machine to download Risc OS software seems to strip out the filetype information – even when you aren’t unzipping the archives. 
It’s not that difficult to repair this, it’s just tedious when you have to do it all the time. + +The other problem is that running USB sticks from computer to computer all the time just got a bit old. + +Still, if you have to do it, it’s an option. + +#### Unzipping Compressed Files + +Most of the files we’ll be downloading will come in .zip format – this is a good thing, because it preserves the file type information. But we’ll need a way to uncompress these files. + +For this we’ll use a program called !SparkFS. This is proprietary software, but you can download a read-only version for free. This will let us extract files from .zip archives. + +To download and install !SparkFS, click [this link][17] and follow the instructions. You want the version of this software for machines with more than 2MB of RAM. + +#### Installing ADFFS and Anymode + +Now we need to install ADFFS, a floppy disk imaging program written by Jon Abbot of the [Archimedes Software Preservation Project][18]. + +This gives us a virtual floppy drive we can use to boot floppy images. It also takes care of the 26 bit memory addressing issues. + +To get your copy, browse to the [ADFFS subforum][19] and click the thread for the latest public release – at the time of writing that’s 2.64. + +Download the .zip file, open it and then drag and drop !ADFFS to somewhere on your SD card where it’s conveniently accessible – we’ll be using it a lot. + +###### Configuring Boot Scripts + +For ADFFS to work properly, we’re going to need to add a little boot script. + +Follow these instructions carefully – if you do the wrong thing here you can really mess up your OS, or even brick your Pi. + +###### Creating !Boot.Loader.CMDLINE/TXT + +Remember how I showed you that you could open up applications as directories by holding down shift when you double-click? We can also do this to get inside the Risc OS boot process. We’ll need to do this now to add our boot script. + +Start by left clicking once on the SD card icon on the icon bar, then hold down shift and double-click !Boot with your left mouse button. Then double click the folder labeled Loader to open it. This is where we’re going to put our script. + +To write our script, click Apps on the icon bar, then double-click !StrongEd. Now click on the fist icon that appeared on the bottom right of the icon bar to open a text editor window, and type: + +``` +disable_mode_changes +``` + +Computers are fussy so take a moment to double-check your spelling. + +To save this file, click the middle button on the text editor and hover your cursor over the arrow next to Save. Then delete the existing text in the Name field and replace it with: + +``` +CMDLINE/TXT +``` + +Now, see that button marked Save? It’s a trap! Instead, drag and drop the pen and paper icon to the Loader folder. + +We’re now finished with this folder, so you can close it and also close the text editor. + +###### Installing Anymode + +Now we need to install the Anymode module. This is to make the screen modes on our software play nice with our GPU and HDMI output. + +Download Anymode from [here,][20] copy the .zip file to somewhere temporary and open it. + +Now go back to the root directory on your SD card, double-click on !Boot again, then open the folders marked Choices, Boot and Predesk. + +Then use your left mouse button to drag and drop the Anymode module from your .zip file to the Predesk folder. 
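
One thing to note: anything in Predesk is only picked up during the boot sequence, so the module won’t actually be active until you reboot. If you want to reassure yourself afterwards that it loaded, one quick way – assuming the standard star commands on Risc OS Open – is to press F12 for the command line and list the loaded modules, then look for an AnyMode entry (the exact name may differ):

```
*Modules
```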
+ +#### Configuring USB Joysticks and Gamepads + +Over at the Archimedes Software Preservation Project, there’s a [USB joystick driver][21] in development. + +This module is still in alpha testing, and you’ll need to use the command line to configure it, but it’s there if you’d like to give it a try. + +If you can’t get this working, don’t worry too much. It was actually pretty normal back in the day for people either to not have a joystick, or not to be able to get it to work. So pretty much every game can be played with keyboard and mouse. + +I’ll be updating this section as this project develops. + +#### Setting File Types + +Risc OS uses an [ADFS][22] filesystem, different to anything used on Windows or Linux. + +Instead of using a filename extension, ADFS files have a “file type” associated with them to show what. When these files pass through a different operating system, this information can get stripped from the file. + +In theory, if we don’t open our .zip archives until it reaches our Risc OS machine, the file type should be preserved. Usually this works, but sometimes you unzip the archive and find files with no file type attached. You’ll be able to tell when this has happened because you’ll be faced with a green icon labeled “data”. + +Fortunately, this is pretty easy to fix. + +Just click on the file with your middle button. A menu will appear. + +The second item on this menu will include the name of the file and it will have an arrow next to it. Hover your cursor over the arrow and a second menu will appear. + +Near the bottom of this menu will be an option marked “Set Type”, and it will also have an arrow next to it. Hover your cursor over this arrow and a field will appear where you can enter the file type. + +To set the file type on a floppy image, type: + +``` +&FCE +``` + +[Click here for more file type codes.][23] + +### Finding, Loading and Playing Games + +The best start looking for floppy images is in the [Games subforum][24] at the Archimedes Software Preservation Project. + +There’s also a [Risc OS downloads section at Acorn Arcade][25]. + +There are definitely other websites that host Archimedes games, but I have no idea how legal these copies are. + +###### Booting and Switching Floppy Disc Images + +Some games might have specific instructions for how to boot the floppy. If so, then follow them. + +Generally, though, you drag and drop the image file to, then click on it with the middle button and press “boot floppy”. Your game should start straight away. + +Many of the games use more than one floppy disc. To play these, boot disc 1. When you’re asked to switch floppy discs, press control and shift and the function key corresponding to the disc you want to change to. + +### Which Games Should You Play? + +This is a matter of opinion really and everyone’s taste differs. + +Still, if you’re wondering what to try, here are my recommendations. + +This is still a work in progress. I’ll be adding more games as I find what I like. + +#### Cannon Fodder + + + +This is a top-down action/strategy game that’s extremely playable and wickedly funny. + +You control a team of soldiers indirectly by clicking on areas of the screen to tell them where to move and who to kill. Everyone dies with a single shot. + +At the start your enemies are all very easy to beat but the game progresses in difficulty. As you go, you’ll need to start dividing your team up into squads to command separately. 
+ +I used to play this on the Mega Drive back in the day, but it’s so much more playable with an actual mouse. + +[Click here to get Cannon Fodder.][26] + +#### Star Fighter 3000 + + + +This is a 3D space shooter that really laid down the gauntlet for what the Archimedes could do. + +You fly around and blast stuff with lasers and missiles. It’s pretty awesome. It’s kind of a forerunner to Terminal Velocity, if you ever played that. + +It was later ported to the 3D0, Sega Saturn and Playstation, but they could never render the 3D graphics to the same distance. + +[Click here to get Star Fighter 3000.][27] + +You want the download marked “Star Fighter 3000 version 3.20”. This one doesn’t use a floppy image, so don’t use ADFFS to run this file. Just double click the program and go. + +#### Aggressor + + + +This is a side-scrolling run-and-gun where you have unlimited ammo and a whole lot of aliens and robots to kill. Badass. + +#### Bug Hunter + + + +This is a really unique puzzle/platform game – you’re a robot with sticky legs who can walk up and down walls and across the ceiling, and your job is to squash bugs by dropping objects lying around. + +Which is harder than it sounds, because you can easily get yourself into situations where you dropped something in the wrong place, making it impossible to complete your objective, so your only way out is to initiate your self destruct sequence in futility and shame. Which I guess is kinda rather dark, if you dwell on it. + +It’s fun though. + +[Click here to get Bug Hunter.][28] + +#### Mad Professor Mariarti + + + +This is a platformer where you’re a mad scientist who shoots spanners and other weapons at bad guys. It has good music and gameplay and an immersive puzzle element as well. + +[Click here to get Mad Professor Mariarti.][29] + +#### Chuckie Egg + +Ok, now we’re getting really retro. + +Strictly speaking, this doesn’t really belong in this list, because it’s not even an Archimedes game – it’s an old BBC Micro game that I played the hell out of back in the day that some nice chap has ported to Risc OS. + +But there’s a version that runs and it’s awesome so you should play it. + +Basically you’re just this guy who goes around stealing eggs. That’s it. That’s all you do. + +It’s absolutely amazing. + +If you’ve never played it, you really should check it out. + +You can [get Chuckie Egg here][30]. + +This isn’t a floppy image, so you don’t need ADFFS to run it. Just double click on the program and go. + +### Over to You + +Got any favourite Acorn Archimedes games? + +Got any tips for getting them running on the Pi? 
+ +Please let me know in the comments section 🙂 + +-------------------------------------------------------------------------------- + +via: https://blog.dxmtechsupport.com.au/playing-badass-acorn-archimedes-games-on-a-raspberry-pi/ + +作者:[James Mawson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.dxmtechsupport.com.au/author/james-mawson/ +[b]: https://github.com/lujun9972 +[1]: https://blog.dxmtechsupport.com.au/wp-content/uploads/2018/06/cannonfodder-1024x768.jpg +[2]: http://www.computinghistory.org.uk/det/897/Acorn-Computers/ +[3]: http://davetrott.co.uk/2017/03/strategy-is-sacrifice/ +[4]: http://www.old-computers.com/museum/computer.asp?c=1015 +[5]: https://www.theverge.com/2016/8/16/12507568/intel-arm-mobile-chips-licensing-deal-idf-2016 +[6]: https://www.riscosopen.org/content/ +[7]: https://www.riscosopen.org/wiki/documentation/show/ARMv6%2Fv7%20software%20compatibility%20list#games +[8]: https://www.riscosopen.org/content/downloads/raspberry-pi +[9]: http://www.raspberry-projects.com/pi/pi-operating-systems/win32diskimager +[10]: http://osxdaily.com/2018/01/11/write-img-to-sd-card-mac-etcher/ +[11]: https://www.raspberrypi.org/downloads/noobs/ +[12]: https://retropie.org.uk/ +[13]: https://www.recalbox.com/ +[14]: https://www.raspberrypi.org/documentation/installation/noobs.md +[15]: https://www.riscosopen.org/wiki/documentation/show/RISC%20OS%20Roadmap +[16]: https://pimylifeup.com/raspberry-pi-wifi-bridge/ +[17]: http://www.riscos.com/ftp_space/generic/sparkfs/index.htm +[18]: https://forums.jaspp.org.uk/forum/index.php +[19]: https://forums.jaspp.org.uk/forum/viewforum.php?f=14&sid=d0f037e95c560144f3910503b776aef5 +[20]: http://www.pi-star.co.uk/anymode/ +[21]: https://forums.jaspp.org.uk/forum/viewtopic.php?f=8&t=396 +[22]: https://en.wikipedia.org/wiki/Advanced_Disc_Filing_System +[23]: https://www.riscosopen.org/wiki/documentation/show/File%20Types +[24]: https://forums.jaspp.org.uk/forum/viewforum.php?f=25 +[25]: http://www.acornarcade.com/downloads/ +[26]: https://forums.jaspp.org.uk/forum/viewtopic.php?f=25&t=188 +[27]: http://starfighter.acornarcade.com/ +[28]: https://forums.jaspp.org.uk/forum/viewtopic.php?f=25&t=330 +[29]: https://forums.jaspp.org.uk/forum/viewtopic.php?f=25&t=148 +[30]: http://homepages.paradise.net.nz/mjfoot/riscos.htm diff --git a/sources/tech/20180629 100 Best Ubuntu Apps.md b/sources/tech/20180629 100 Best Ubuntu Apps.md deleted file mode 100644 index 581d22b527..0000000000 --- a/sources/tech/20180629 100 Best Ubuntu Apps.md +++ /dev/null @@ -1,1184 +0,0 @@ -100 Best Ubuntu Apps -====== - -Earlier this year we have published the list of [20 Best Ubuntu Applications for 2018][1] which can be very useful to many users. Now we are almost in the second half of 2018, so today we are going to have a look at 100 best applications for Ubuntu which you will find very useful. - -![100 Best Ubuntu Apps][2] - -Many users who have recently switched to Ubuntu from Microsoft Windows or any other operating system face the dilemma of finding best alternative to application software they have been using for years on their previous OS. Ubuntu has thousands of free to use and open-source application software’s that perform way better than many paid software’s on Windows and other OS. 
- -Following list features many application software in various categories, so that you can find best application which best matched to your requirements. - -### **1\. Google Chrome Browser** - -Almost all the Linux distributions feature Mozilla Firefox web browser by default and it is a tough competitor to Google Chrome. But Chrome has its own advantages over Firefox like Chrome gives you direct access to your Google account from where you can sync bookmarks, browser history, extensions, etc. from Chrome browser on other operating systems and mobile phones. - -![Chrome][3] - -Google Chrome features up-to-date Flash player for Linux which is not the case with other web browsers on Linux including Mozilla Firefox and Opera web browser. If you continuously use Chrome on Windows then it is best choice to use it on Linux too. - -### 2\. **Steam** - -Gaming on Linux is a real deal now, which was a distant dream few years ago. In 2013, Valve announced Steam gaming client for Linux and everything has changed since then. Earlier users were hesitant to switch to Linux from Windows just because they would not be able to play their favourite games on Ubuntu but that is not the case now. - -![Steam][4] - -Some users might find installing Steam on Linux tricky but it worth all your efforts as thousands of Steam games are available for Linux. Some popular high-end games like Counter Strike: Global Offensive, Hitman, Dota 2 are available for Linux, you just need to make sure you have minimum hardware required to play these games. - -``` -$ sudo add-apt-repository multiverse - -$ sudo apt-get update - -$ sudo apt-get install steam -``` - -### **3\. WordPress Desktop Client** - -Yes you read it correct, WordPress has its dedicated desktop client for Ubuntu from where you can manage your WordPress sites. You can also write and design separately on desktop client without need for switching browser tabs. - -![][5] - -If you have websites backed by WordPress then this desktop client is must have application for you as you can also keep track of all the WordPress notifications in one single window. You can also check stats for performance of posts on website. Desktop client is available in Ubuntu Software Centre from where you can download and install it. - -### **4\. VLC Media Player** - -VLC is a very popular cross-platform and open-source media player which is also available for Ubuntu. What makes VLC a best media player is that it can play videos in all the Audio and Video formats available on planet without any issue. - -![][6] - -VLC has a slick user interface which is very easy to use and apart from that it offers lot of features such as online video streaming, audio and video customization, etc. - -``` -$ sudo add-apt-repository ppa:videolan/master-daily -$ sudo apt update -$ sudo apt-get install vlc qtwayland5 -``` - -### **5\. Atom Text Editor** - -Having developed by Github, Atom is a free and open-source text editor which can also be used as Integrated Development Environment (IDE) for coding and editing in major programming languages. Atom developers claim it to be a completely hackable text editor for 21st Century. - -![][7] - -Atom Text Editor has one of the best user interfaces and it is a feature rich text editor with offerings like auto-completion, syntax highlighting and support of extensions and plug-ins. - -``` -$ sudo add-apt-repository ppa:webupd8team/atom -$ sudo apt-get update -$ sudo apt-get install atom -``` - -### **6\. 
GIMP Photo Editor** - -GIMP (GNU Image Manipulation Programme) is free and open-source photo editor for Ubuntu. It is arguably a best alternative to Adobe Photoshop on Windows. If you have been continuously using Adobe Photoshop and finding it difficult to get used to GIMP, then you can customize GIMP to look very similar to Photoshop. - -![][8] - -GIMP is a feature rich Photo editor and you can always use additional features by installing extensions and plug-ins anytime. - -``` -$ sudo apt-get install gimp -``` - -### **7\. Google Play Music Desktop Player** - -Google Play Music Desktop Player is an open-source music player which is replica of Google Play Music or you can say it’s better than that. Google always lacked a desktop music client but this third-party app fills the void perfectly. - -![][9] - -Like you can see in above screenshot, its interface is second to none in terms of look and feel. You just need to sign in into Google account and then it will import all your music and favorites into this desktop client. You can download installation files from its official [website][10] and install it using Software Center. - -### **8\. Franz** - -Franz is an instant messaging client that combines chat and messaging services into one application. It is one of the modern instant messaging platforms and it supports Facebook Messenger, WhatsApp, Telegram, HipChat, WeChat, Google Hangouts, Skype integration under one single application. - -![][11] - -Franz is complete messaging platform which you can use for business as well to manage mass customer service. To install Franz you need to download installation package from its [website][12] and open it using Software Center. - -### **9\. Synaptic Package Manager** - -Synaptic Package Manager is one of the must have tools on Ubuntu because it works for graphical interface for ‘apt-get’ command which we usually use to install apps on Ubuntu using Terminal. It gives tough competition to default app stores on various Linux distros. - -![][13] - -Synaptic comes with very simple user interface which is very fast and easy to use as compared to other app stores. On left-hand side you can browse various apps in different categories from where you can easily install and uninstall apps. - -``` -$ sudo apt-get install synaptic -``` - -### **10\. Skype** - -Skype is a very popular cross-platform video calling application which is now also available for Linux as a Snap app. Skype is an instant messaging application which offers features like voice and video calls, desktop screen sharing, etc. - -![][14] - -Skype has an excellent user interface which very similar to desktop client on Windows and it is very easy to use. It could be very useful app for many switchers from Windows. - -``` -$ sudo snap install skype -``` - -### **13\. VirtualBox** - -VirtualBox is a cross-platform virtualization software application developed by Oracle Corporation. If you love trying out new operating systems then VirtualBox is the must have Ubuntu application for you. You can tryout Linux, Mac inside Windows Operating System or Windows and Mac inside Linux. - -![][15] - -What VB actually does is it lets you run guest operating system on host operating system virtually. It creates virtual hard drive and installs guest OS on it. You can download and install VB directly from Ubuntu Software Center. - -### **12\. Unity Tweak Tool** - -Unity Tweak Tool (Gnome Tweak Tool) is must have tool for every Linux user because it gives user ability to customize desktop according to your need. 
You can try new GTK themes, set up desktop hot corners, customize icon set, tweak unity launcher, etc. - -![][16] - -Unity Tweak Tool can be very useful to user as it has everything covered right from the basic to advanced configurations. - -``` -$ sudo apt-get install unity-tweak-tool -``` - -### **13\. Ubuntu Cleaner** - -Ubuntu Cleaner is a system maintenance tool especially designed to remove packages that are no longer useful, remove unnecessary apps and clean-up browser caches. Ubuntu Cleaner has very simple user interface which is very easy to use. - -![][17] - -Ubuntu Cleaner is one of the best alternatives to BleachBit which is also a decent cleaning tool available for Linux distros. - -``` -$ sudo add-apt-repository ppa:gerardpuig/ppa -$ sudo apt-get update -$ sudo apt-get install ubuntu-cleaner -``` - -### 14\. Visual Studio Code - -Visual Studio Code is code editor which you will find very similar to Atom Text Editor and Sublime Text if you have already used them. Visual Studio Code proves to be very good educational tool as it explains everything from HTML tags to syntax in programming. - -![][18] - -Visual Studio comes with Git integration out of the box and it has excellent user interface which you will find very similar to likes of Atom Text Editor and Sublime Text. You can download and install it from Ubuntu Software Center. - -### **15\. Corebird** - -If you are looking for desktop client where you can use your Twitter then Corebird Twitter Client is the app you are looking for. It is arguably best Twitter client available for Linux distros and it offers features very similar to Twitter app on your mobile phone. - -![][19] - -Corebird Twitter Client also gives notifications whenever someone likes and retweets your tweet or messages you. You can also add multiple Twitter accounts on this client. - -``` -$ sudo snap install corebird -``` - -### **16\. Pixbuf** - -Pixbuf is a desktop client from Pixbuf photo community hub which lets you upload, share and sale your photos. It supports photo sharing to social media networks like Facebook, Pinterest, Instagram, Twitter, etc. and photography services including Flickr, 500px and Youpic. - -![][20] - -Pixbuf offers features like analytics which gives you stats about clicks, retweets, repins on your photo, scheduled posts, dedicated iOS extension. It also has mobile app, so that you can always be connected with your Pixbuf account from anywhere. Pixbuf is available for download in Ubuntu Software Center as a snap package. - -### **17\. Clementine Music Player** - -Clementine is a cross-platform music player and a good competitor to Rhythmbox which is default music player on Ubuntu. It is fast and easy to use music player thanks to its user friendly interface. It supports audio playback in all the major audio file formats. - -![][21] - -Apart from playing music from local library you can also listen to online radio from Spotify, SKY.fm, Soundcloud, etc. It also offers other features like smart and dynamic playlists, syncing music from cloud storages like Dropbox, Google Drive, etc. - -``` -$ sudo add-apt-repository ppa:me-davidsansome/clementine -$ sudo apt-get update -$ sudo apt-get install clementine -``` - -### **18\. Blender** - -Blender is free and open-source 3D creation application software which you can use to create 3D printed models, animated films, video games, etc. It comes with integrated game engine out of the box which you can use to develop and test video games. 
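
Blender is in the standard Ubuntu repositories, so assuming the usual package name you can install it straight from the terminal instead of hunting for a download:

```
$ sudo apt-get install blender
```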
- -![blender][22] - -Blender has catchy user interface which is easy to use and it includes features like built-in render engine, digital sculpturing, simulation tool, animation tools and many more. It is one of the best applications you will ever find for Ubuntu considering it’s free and features it offers. - -### **19\. Audacity** - -Audacity is an open-source audio editing application which you can use to record, edit audio files. You can record audio from various inputs like microphone, electric guitar, etc. It also gives ability to edit and trim audio clips according to your need. - -![][23] - -Recently Audacity released with new features for Ubuntu which includes theme improvements, zoom toggle command, etc. Apart from these it offers features like various audio effects including noise reduction and many more. - -``` -$ sudo add-apt-repository ppa:ubuntuhandbook1/audacity -$ sudo apt-get update -$ sudo apt-get install audacity -``` - -### **20\. Vim** - -Vim is an Integrated Development Environment which you can use as standalone application or command line interface for programming in various major programming languages like Python. - -![][24] - -Most of the programmers prefer coding in Vim because it is fast and highly customizable IDE. Initially you might find it difficult to use but you will quickly get used to it. - -``` -$ sudo apt-get install vim -``` - -### **21\. Inkscape** - -Inkscape is an open-source and cross-platform vector graphics editor which you will find very much similar to Corel Draw and Adobe Illustrator. Using it you can create and edit vector graphics such as charts, logos, diagrams, illustrations, etc. - -![][25] - -Inkscape uses Scalable Vector Graphics (SVG) and an open XML-based W3C standard as a primary format. It supports various formats including JPEG, PNG, GIF, PDF, AI (Adobe Illustrator Format), VSD, etc. - -``` -$ sudo add-apt-repository ppa:inkscape.dev/stable -$ sudo apt-get update -$ sudo apt-get install inkscape -``` - -### **22\. Shotcut** - -Shotcut is a free, open-source and cross-platform video editing application developed by Meltytech, LLC on the MLT Multimedia Framework. It is one of the most powerful video editors you will ever find for Linux distros as it supports all the major audio, video and image formats. - -![][26] - -It gives ability to edit multiple tracks with various file formats using non-linear video editing. It also comes with support for 4K video resolutions and features like various audio and video filters, tone generator, audio mixing and many others. - -``` -snap install shotcut -- classic -``` - -### **23\. SimpleScreenRecorder** - -SimpleScreenRecorder is a free and lightweight screen video recorder for Ubuntu. This screen recorder can be very useful tool for you if you are a YouTube creator or application developer. - -![][27] - -It can capture a video/audio record of desktop screen or record video games directly. You can set video resolution, frame rate, etc. before starting the screen recording. It has simple user interface which you will find very easy to use. - -``` -$ sudo add-apt-repository ppa:marten-baert/simplescreenrecorder -$ sudo apt-get update -$ sudo apt-get install simplescreenrecorder -``` - -### **24\. Telegram** - -Telegram is a cloud-based instant messaging and VoIP platform which has got lot of popularity in recent years. It is an open-source and cross-platform messenger where user can send messages, share videos, photos, audio and other files. 
- -![][28] - -Some of the notable features in Telegram are secrete chats, voice messages, bots, telescope for video messages, live locations and social login. Privacy and security is at highest priority in Telegram, so all messages you send and receive are end-to-end encrypted. - -``` -``` -$ sudo snap install telegram-desktop - -### **25\. ClamTk** - -As we know viruses meant to harm Windows PC cannot do any harm to Ubuntu but it is always prone to get infected by some mails from Windows PC containing harmful files. So it is safe to have some antivirus applications on Linux too. - -![][29] - -ClamTk is a lightweight malware scanner which scans files and folders on your system and cleans if any harmful files are found. ClamTk is available as a Snap package and can be downloaded from Ubuntu Software Center. - -### **26\. MailSpring** - -MailSpring earlier known as Nylas Mail or Nylas N1 is an open-source email client. It saves all the mails locally on computer so that you can access them anytime you need. It features advanced search which uses AND and OR operations so that you can search for mails based on different parameters. - -![][30] - -MailSpring comes with excellent user interface which you will find only on handful of other mail clients. Privacy and security, scheduler, contact manager, calendar are some of the features MailSpring offers. - -### **27\. PyCharm** - -PyCharm is one of my favorite Python IDEs after Vim because it has slick user interface with lot of extensions and plug-in support. Basically it comes in two editions, one is community edition which is free and open-source and other is professional edition which is paid one. - -![][31] - -PyCharm is highly customizable IDE and sports features such as error highlighting, code analysis, integrated unit testing and Python debugger, etc. PyCharm is the preferred IDE by most of the Python programmers and developers. - -### **28\. Caffeine** - -Imagine you are watching something on YouTube or reading a news article and suddenly Ubuntu locks your screen, I know that is very annoying. It happens with many of us, so Caffeine is the tool that will help you block the Ubuntu lock screen or screensaver. - -![][32] - -Caffeine Inhibitor is lightweight tool, it adds icon on notification bar from where you can activate or deactivate it easily. No additional setting needs to be done in order to use this tool. - -``` -$ sudo add-apt-repository ppa:eugenesan/ppa -$ sudo apt-get update -$ sudo apt-get install caffeine -y -``` - -### **29\. Etcher USB Image Writer** - -Etcher is an open-source USB Image Writer developed by resin.io. It is a cross-platform application which helps you burn image files like ZIP, ISO, IMG to USB storage. If you always try out new OS then Etcher is the must have tool for you as it is easy to use and reliable. - -![][33] - -Etcher has clean user interface that guides you through process of burning image file to USB drive or SD card in three easy steps. Steps involve selecting Image file, selecting USB drive and finally flash (writes files to USB drive). You can download and install Etcher from its [website][34]. - -### **30\. Neofetch** - -Neofetch is a cool system information tool that gives you all the information about your system by running “neofetch” command in Terminal. It is cool tool to have because it gives you information about desktop environment, kernel version, bash version and GTK theme you are running. - -![][35] - -As compared to other system information tools Nefetch is highly customizable tool. 
To install Neofetch in the first place, add its PPA and install the package:

```
$ sudo add-apt-repository ppa:dawidd0811/neofetch
$ sudo apt-get update
$ sudo apt-get install neofetch
```

### 31\. Liferea

Liferea (Linux Feed Reader) is a free and open-source news aggregator for online news feeds. It is a fast and easy to use news aggregator that supports various formats such as RSS/RDF, Atom, etc.

![][36]

Liferea comes with sync support for TinyTinyRSS out of the box and it gives you the ability to read feeds in offline mode. It is one of the best feed readers you will find for Linux in terms of reliability and flexibility.

```
$ sudo add-apt-repository ppa:ubuntuhandbook1/apps
$ sudo apt-get update
$ sudo apt-get install liferea
```

### 32\. Shutter

It is easy to take screenshots in Ubuntu, but when it comes to editing screenshots Shutter is the must-have application for you. It helps you capture, edit and share screenshots easily. Using Shutter’s selector tool you can select a particular part of your screen to take a screenshot of.

![][37]

Shutter is a feature-rich screenshot tool which offers features like adding effects to screenshots, drawing lines, etc. It also gives you the option to upload your screenshots to various image hosting sites. You can directly download and install Shutter from the Software Center.

### 33\. Weather

Weather is a small application which gives you real-time weather information for your city or any other location in the world. It is a simple and lightweight tool which gives you a detailed forecast of up to 7 days and hourly details for the current and next day.

![][38]

It integrates with the GNOME shell to give you information about current weather conditions at recently searched locations. It has a minimalist user interface which works smoothly even on minimal hardware.

### 34\. Ramme

Ramme is a cool unofficial Instagram desktop client which gives you the feel of the Instagram mobile phone app. It is an Electron-based client, so it replicates the Instagram app and offers features like theme customization, etc.

![][39]

But due to Instagram’s API restrictions you can’t upload images using the Ramme client, though you can always go through your Instagram feed, like and comment on posts, and message friends. You can download the Ramme installation files from [Github.][40]

### **35\. Thunderbird**

Thunderbird is an open-source email client which is also the default email client in most Linux distributions. Despite parting ways with Mozilla in 2017, Thunderbird is still a very popular and excellent email client on the Linux platform. It comes with spam filtering, IMAP and POP email syncing, calendar support, address book integration and many other features out of the box.

![][41]

It is a cross-platform email client with full community support across all supported platforms. You can always change its look and feel thanks to its highly customizable nature.

### **36\. Pidgin**

Pidgin is an instant messaging client where you can log in to different instant messaging networks in a single window. You can log in to instant messaging networks like Google Talk, XMPP, AIM, Bonjour, etc.

![][42]

Pidgin has all the features you can expect in an instant messenger, and you can always enhance its performance by installing additional plug-ins.

```
$ sudo apt-get install pidgin
```

### **37\. Krita**

Krita is a free and open-source digital painting, editing and animation application developed by KDE.
It has excellent user interface with everything placed perfectly so that you can easily find the tool you need. - -![][43] - -It uses OpenGL canvas which boosts Krita’s performance and it offers many features like different painting tools, animation tools, vector tools, layers and masks and many more. Krita is available in Ubuntu Software Center, you can easily download it from there. - -### **38\. Dropbox** - -Dropbox is stand-out player in cloud storage and its Linux clients works really well on Ubuntu once installed properly. While Google Drive comes out of the box on Ubuntu 16.04 LTS and later, Dropbox is still a preferred cloud storage tool on Linux in terms of features it offers. - -![][44] -It always works in background and back up new files from your system to cloud storage, syncs files continuously between your computer and its cloud storage. - -``` -$ sudo apt-get install nautilus-dropbox -``` - -### 39\. Kodi - -Kodi formerly known as Xbox Media Center (XBMC) is an open-source media player. You can play music, videos, podcasts and video games both in online and offline mode. This software was first developed for first generation of Xbox gaming console and then slowly ported to personal computers. - -![][45] - -Kodi has very impressive video interface which is fast and powerful. It is highly customizable media player and by installing additional plug-ins you can access online streaming services like Pandora, Spotify, Amazon Prime Video, Netflix and YouTube. - -### **40\. Spotify** - -Spotify is one of the best online media streaming sites. It provides music, podcast and video streaming services both on free and paid subscription basis. Earlier Spotify was not supported on Linux but now it has its own fully functional desktop client for Ubuntu. - -![][46] - -Alongside Google Play Music Desktop Player, Spotify is must have media player for you. You just need to login to your Spotify account to access your favorite online content from anywhere. - -### 41\. Brackets - -Brackets is an open-source text editor developed by Adobe. It can be used for web development and design in web technologies such as HTML, CSS and JavaScript. It sports live preview mode which is great feature to have as it can get real-time view of whatever the changes you make in script. - -![][47] - -It is one of the modern text editors on Ubuntu and has slick user interface which takes web development task to new level. It also offers features like inline editor and supports for popular extensions like Emmet, Beautify, Git, File Icons, etc. - -### 42\. Bitwarden - -Account safety is serious concern now as we can see rise in security breaches in which users passwords are stolen and important data being compromised. So Bitwarden is recommended tool for you which stores all your account logins and passwords safe and secure at one place. - -![][48] - -Bitwarden uses AES-256 bit encryption technique to store all the login details and only user has access to his data. It also helps you to create strong passwords which are less likely to be hacked. - -### 43\. Terminator - -Terminator is an open-source terminal emulator programmed and developed in Java. It is a cross-platform emulator which lets you have privilege of multiple terminals in one single window which is not the case in Linux default terminal emulator. - -![][49] - -Other stand-out feature in Terminator includes automatic logging, drag and drop, intelligent vertical and horizontal scrolling, etc. - -``` -$ sudo apt-get install terminator -``` - -### 44\. 
Yak Yak - -Yak Yak is an open-source unofficial desktop client for Google Hangouts messenger. It could be good alternative to Microsoft’s Skype as it comes with bunch of some amazing features out of the box. You can enable desktop notifications, language preferences, and works on minimal memory and power requirements. - -![][50] - -Yak Yak comes with all the features you would expect in any instant messaging app such as typing indicator, drag and drop media files, and audio/video calling. - -### 45\. **Thonny** - -Thonny is a simple and lightweight IDE especially designed for beginners in programming. If you are new to programming then this is the must have IDE for you because it lets you learn while programming in Python. - -![][51] - -Thonny is also great tool for debugging as it supports live variables during debugging, apart from this it offers features like separate windows for executing function call, simple user interface, etc. - -``` -$ sudo apt-get install thonny -``` - -### **46\. Font Manager** - -Font Manager is a lightweight tool for managing, adding or removing fonts on your Ubuntu system. It is specially built for Gnome desktop environment, users don’t having idea about managing fonts using command line will find this tool very useful. - -![][52] - -Gtk+ Font Manager is not meant to be for professional users, it has simple user interface which you will find very easy to navigate. You just need to download font files from internet and add them using Font Manager. - -$ sudo add-apt-repository ppa:font-manager/staging -$ sudo apt-get update -$ sudo apt-get install font-manager - -### **47\. Atril Document Viewer** - -Atril is a simple document viewer which supports file formats like Portable Document Format (PDF), PostScript (PS), Encapsulated PostScript (EPS), DJVU and DVI. Atril comes bundled with MATE desktop environment and it is identical to Evince which is default document on the most of the Linux distros. - -![][53] - -Atril has simple and lightweight user interface which is highly customizable and offers features like search, bookmarks and UI includes thumbnails on the left-hand side. - -``` -$ sudo apt-get install atril -``` - -### **48\. Notepadqq** - -If you have ever used Notepad++ on Windows and looking for similar program on Linux then don’t worry developers have ported it to Linux as Notepadqq. It is a simple yet powerful text editor which you can use for daily tasks or programming in various languages. - -![][54] - -Despite being a simple text editor it has some amazing features like you can set theme between dark and light color scheme, multiple selection, regular expression search and real-time highlighting. - -``` -$ sudo add-apt-repository ppa:notpadqq-team/notepadqq -$ sudo apt-get update -$ sudo apt-get install notepadqq -``` - -### **49\. Amarok** - -Amarok is an open-source music player developed under KDE projetct. It has an intuitive interface which makes you feel home so you can discover your favorite music easily. Besides Clementine, Amarok is very good choice to have when you are looking for perfect music player for Ubuntu. - -![][55] - -Some of the top features in Amarok include intelligent playlist support, integration support for online services like MP3tunes, Last.fm, Magnatune, etc. - -### **50\. Cheese** - -Cheese is a Linux default webcam application which can be useful to you in some video chat or instant messaging applications. Apart from that you can also use this app to take photos and videos with fancy effects. 
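-
-Cheese normally ships with Ubuntu out of the box, but if it is missing from your install it can also be added from the command line; on Debian/Ubuntu-based systems the package is simply called `cheese`:
-
-```
-$ sudo apt-get install cheese
-```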
- -![][56] - -It also sports burst mode which lets you take multiple snaps in quick succession and option to share your photos with friends and family. Cheese come pre-installed with most of the Linux distros but you can also download and install it from Software Center. - -### **51\. MyPaint** - -MyPaint is a free and open-source raster graphics editor which focuses on digital painting rather than image manipulation and post processing. It is cross-platform application and more or less similar to Corel Painter. - -![][57] - -MyPaint could be good alternative to those who use Microsoft Paint application on Windows. It has simple user interface which is fast and powerful. MyPaint is available is Software Center for download. - -### **52\. PlayOnLinux** - -PlayOnLinux is a front-end for WINE emulator which allows you to run Windows applications on Linux. You just need to install Windows applications and game on WINE then you can easily launch applications and games using PlayOnLinux. - -![][58] - -### **53\. Akregator** - -Akregator is a default RSS reader for KDE Plasma Environment developed under KDE project. It has simple user interface and comes with KDE’s Konqueror browser so that you don’t need to switch between apps while reading news feeds. - -![][59] - -Akregator also offers features like desktop notifications, automatic feeds, etc. It is one of the best feed readers you will find for across most of the Linux distros. - -### **54\. Brave** - -Brave is an open-source web browser which blocks ads and trackers so that you can browse your content fast and safely. What it actually does is that it pays to websites and YouTubers on behalf of you. If you prefer contributing to websites and YouTubers rather than seeing advertisements then this browser is for you. - -![][60] - -This is a new concept and could be good browser for those who prefer safe browsing without compromising important data on internet. - -### **55\. Bitcoin Core** - -Bitcoin Core is an official Bitcoin client which is highly secure and reliable. It keeps track of all your transactions and makes sure all transactions are valid. It restricts Bitcoin miners and banks from taking full control of your Bitcoin wallet. - -![][61] - -Bitcoin Core also offers other important features like, private keys backup, cold storage, security notifications. - -``` -$ sudo add-apt-repository ppa:bitcoin/bitcoin -$ sudo apt-get update -$ sudo apt-get install bitcoin-qt -``` - -### **56\. Speedy Duplicate Finder** - -Speedy Duplicate Finder is a cross-platform file finder which helps you find duplicate files on your system and free-up disk space. It is a smart tool which searches for duplicate files on entire hard disk and also features smart filter which helps you find file by file type, extension or size. - -![][62] - -It has a simple and clean user interface which is very easy to handle. As soon as you download it from Software Center you are good to go with disk space clean-up. - -### **57\. Zulip** - -Zulip is a free and open-source group chat application which was acquired by Dropbox. It is written in Python and uses PostgreSQL database. It was designed and developed to be a good alternative to other chat applications like Slack and HipChat. - -![][63] - -Zulip is a feature-rich application with features such as drag and drop files, group chats, private messaging, image previews and many more. It also supports integration with likes of Github, JIRA, Sentry, and hundreds of other services. - -### **58\. 
Okular** - -Okular is a cross-platform document viewer developed by KDE for KDE desktop environment. It is a simple document viewer and supports file formats like Portable Document Format (PDF), PostScript, DjVu, Microsoft Compiled HTML help, and many other major file formats. - -![][64] - -Okular is one of the best document viewers you should try on Ubuntu as it offers features like commenting on PDF documents, drawing lines, highlighting and much more. You can also extract text from PDF document to text file. - -### **59\. FocusWriter** - -FocusWriter is a distraction-free word processor which hides your desktop screen so that you can only focus on writing. Like you can see in the screenshot below whole Ubuntu screen is hidden, it’s just you and your word processor. But you can always access Ubuntu screen whenever you need it by just moving your mouse cursor to the edges of the screen. - -![][65] - -It is a lightweight word processor with support for file formats like TXT, RTF and ODT files. It also offers fully customizable user interface and features like timers and alarms, daily goals, sound effects and support for translation into 20 languages. - -### **60\. Guake** - -Guake is a cool drop-down terminal for GNOME Desktop Environment. Guake comes in a flash whenever you need it and disappears as soon as your task is completed. You just need to click F12 button to launch or exit it so launching Guake is way faster than launching new Terminal window. - -![][66] - -Guake is a feature-rich terminal with features like support for multiple tabs, ability to save your terminal content to file in few clicks, and fully customizable user interface. - -``` -$ sudo apt-get install guake -``` - -### **61\. KDE Connect** - -KDE Connect is an awesome application on Ubuntu and I would have loved to list this application much higher in this marathon article but competition is intense. Anyways KDE Connect helps you get your Android smartphone notifications directly on Ubuntu desktop. - -![][67] - -With KDE Connect you can do whole lot of other things like check your phones battery level, exchange files between computer and Android phone, clipboard sync, send SMS, and you can also use your phone as wireless mouse or keyboard. - -``` -$ sudo add-apt-repository ppa:webupd8team/indicator-kedeconnect -$ sudo apt-get update -$ sudo apt-get install kdeconnect indicator-kdeconnect -``` - -### **62\. CopyQ** - -CopyQ is a simple but very useful clipboard manager which stores content of the system clipboard whenever any changes you make so that you can search and restore it back whenever you need. It is a great tool to have as it supports text, images, HTML and other formats. - -![][68] - -CopyQ comes pre-loaded with features like drag and drop, copy/paste, edit, remove, sort, create, etc. It also supports integration with text editors like Vim, so it could be very useful tool if you are a programmer. - -``` -$ sudo add-apt-repository ppa:hluk/copyq -$ sudo apt-get update -$ sudo apt-get install copyq -``` - -### **63\. Tilix** - -Tilix is feature-rich advanced GTK3 tiling terminal emulator. If you uses GNOME desktop environment then you’re going to love Tilix as it follows GNOME Human Interface Guidelines. What Tilix emulator offers different than default terminal emulators on most of the Linux distros is it gives you ability to split terminal window into multiple terminal panes. - -![][69] - -Tilix offers features like custom links, image support, multiple panes, drag and drop, persistent layout and many more. 
It also has support for keyboard shortcuts and you can customize shortcuts from the Preferences settings according to your need. - -``` -$ sudo add-apt-repository ppa:webupd8team/terminix -$ sudo apt-get update -$ sudo apt-get install tilix -``` - -### **64\. Anbox** - -Anbox is an Android emulator which lets you install and run Android apps on your Linux system. It is free and open-source Android emulator that executes Android runtime environment by using Linux Containers. It uses latest Linux technologies and Android releases so that you can run any Android app like any other native application. - -![][70] - -Anbox is one of the modern and feature-rich emulators and offers features like no limit for application use, powerful user interface, and seamless integration with host operating system. - -First you need to install kernel modules. - -``` -$ sudo add-apt-repository ppa:morphis/anbox-support -$ sudo apt-get update -$ sudo apt install anbox-modules-dkms Now install Anbox using snap -$ snap install --devmode -- beta anbox -``` - -### **65\. OpenShot** - -OpenShot is the best open-source video editor you will find for Linux distros. It is a cross-platform video editor which is very easy to use without any compromise with its performance. It supports all the major audio, video and image formats. - -![][71] - -OpenShot has clean user interface and offers features like drag and drop, clip resizing, scaling, trimming, snapping, real-time previews, audio mixing and editing and many other features. - -``` -$ sudo add-apt-repository ppa:openshot.developers/ppa -$ sudo apt-get update -$ sudo apt-get install openshot -qt -``` - -### **66\. Plank** - -If you are looking for cool and simple dock for your Ubuntu desktop then Plank should be #1 choice for you. It is perfect dock and you don’t need to make any tweaks after installation but if you want to, then it has built-in preferences panel where you can customize themes, dock size and position. - -![][72] - -Despite being a simple dock, Plank offers features like item rearrangement by simple drag and drop, pinned and running apps icon, transparent theme support. - -``` -$ sudo add-apt-repository ppa:ricotz/docky -$ sudo apt-get update -$ sudo apt-get install plank -``` - -### **67\. Filezilla** - -Filezilla is a free and cross-platform FTP application which sports FileZilla client and server. It lets you transfer files using FTP and encrypted FTP like FTPS and SFTP and supports IPv6 internet protocol. - -![][73] - -It is simple file transfer application with features like drag and drop, support for various languages used worldwide, powerful user interface for multitasking, control and configures transfer speed and many other features. - -### **68\. Stacer** - -Stacer is an open-source system diagnostic tool and optimizer developed using Electron development framework. It has an excellent user interface and you can clean cache memory, start-up applications, uninstall apps that are no longer needed, monitor background system processes. - -![][74] - -It also lets you check disk, memory and CPU usage and also gives real-time stats of downloads and uploads. It looks like a tough competitor to Ubuntu cleaner but both have unique features that separate them apart. - -``` -$ sudo add-apt-repository ppa:oguzhaninan/stacer -$ sudo apt-get update -$ sudo apt-get install stacer -``` - -### **69\. 
4K Video Downloader** - -4K Video Downloader is simple video downloading tool which you can use to download videos, playlists, channels from Vimeo, Facebook, YouTube and other online video streaming sites. It supports downloading YouTube playlists and channels in MP4, MKV, M4A, 3GP and many other video/audio file formats. - -![][75] - -4K Video Downloader is not as simple as you would think, apart from normal video downloads it supports 3D and 360 degree video downloading. It also offers features like in-app proxy setup and direct transfer to iTunes. You can download it from [here][76]. - -### 70\. **Qalculate** - -Qalculate is multi-purpose, cross-platform desktop calculator which is simple but very powerful calculator. It can be used for solving complicated maths problems and equations, currency conversions, and many other daily calculations. - -![][77] - -It has an excellent user interface and offers features such as customizable functions, unit calculations, symbolic calculations, arithmetic, plotting and many other functions you will find in any scientific calculator. - -### **71\. Hiri** - -Hiri is a cross-platform email client developed in Python programming language. It has slick user interface and it can be a good alternative to Microsoft Outlook in terms of features and services offered. This is great email client that can be used for sending and receiving emails, managing contacts, calendars and tasks. - -![][78] - -It is a feature-rich email client that offers features like integrated task manager, email synchronization, email rating, email filtering and many more. - -``` -$ sudo snap install hiri -``` - -### **72\. Sublime Text** - -Sublime Text is a cross-platform source code editor programmed in C++ and Python. It has Python Application Programming Interface (API) and supports all the major programming and markup languages. It is simple and lightweight text editor which can be used as IDE with features like auto-completion, syntax highlighting, split editing, etc. - -![][79] - -Some additional features in this text editor include Goto anything, Goto definition, multiple selections, command palette and fully customizable user interface. - -``` -$ sudo apt-get install sublime-text -``` - -### **73\. TeXstudio** - -TeXstudio is an integrated writing environment for creating and editing LaTex documents. It is an open-source editor which offers features like syntax highlighting, integrated viewer, interactive spelling checker, code folding, drag and drop, etc. - -![][80] - -It is a cross-platform editor and has very simple, lightweight user interface which is easy to use. It ships in with integration for BibTex and BibLatex bibliographies managers and also integrated PDF viewer. You can download TeXstudio installation package from its [website][81] or Ubuntu Software Center. - -### **74\. QtQR** - -QtQR is a Qt based application that lets you create and read QR codes in Ubuntu. It is developed in Python and Qt and has simple and lightweight user interface. You can encode website URL, email, text, SMS, etc. and you can also decode barcode using webcam camera. - -![][82] - -QtQR could prove to be useful tool to have if you generally deal with product sales and services because I don’t think we have any other similar tool to QtQR which requires minimal hardware to function smoothly. - -``` -$ sudo add-apt-repository ppa: qr-tools-developers/qr-tools-stable -$ sudo apt-get update -$ sudo apt-get install qtqr -``` - -### **75\. 
Kontact** - -Kontact is an integrated personal information manager (PIM) developed by KDE for KDE desktop environment. It is all-in-one software suite that integrates KMail, KOrganizer and KAddressBook into a single user interface from where you can manage all your mails, contacts, schedules, etc. - -![][83] - -It could be very good alternative to Microsoft Outlook as it is fast and highly configurable information manager. It has very good user interface which you will find very easy to use. - -``` -$ sudo apt-get install kontact -``` - -### **76\. NitroShare** - -NitroShare is a cross-platform, open-source network file sharing application. It lets you easily share files between multiple operating systems local network. It is a simple yet powerful application and it automatically detects other devices running NitroShare on local network. - -![][84] - -File transfer speed is what makes NitroShare a stand-out file sharing application as it achieves gigabit speed on capable hardware. There is no need for additional configuration required, you can start file transfer as soon as you install it. - -``` -$ sudo apt-add-repository ppa:george-edison55/nitroshare -$ sudo apt-get update -$ sudo apt-get install nitroshare -``` - -### **77\. Konversation** - -Konversation is an open-source Internet Relay Chat (IRC) client developed for KDE desktop environment. It gives speedy access to Freenode network’s channels where you can find support for most distributions. - -![][85] - -It is simple chat client with features like support for IPv6 connection, SSL server support, bookmarks, on-screen notifications, UTF-8 detection, and additional themes. It has easy to use GUI which is highly configurable. - -``` -$ sudo apt-get install konversation -``` - -### **78\. Discord** - -If you’re hardcore gamer and play online games frequently then I have a great application for you. Discord which is free Voice over Internet Protocol (VoIP) application especially designed for online gamers around the world. It is a cross-platform application and can be used for text and audio chats. - -![][86] - -Discord is very popular VoIP application gaming community and as it is completely free application, it is very good competitor to likes of Skype, Ventrilo and Teamspeak. It also offers features like crystal clear voice quality, modern text chat where you can share images, videos and links. - -### **79\. QuiteRSS** - -QuiteRSS is an open-source news aggregator for RSS and Atom news feeds. It is cross-platform feed reader written in Qt and C++. It has simple user interface which you can tweak in either classic or newspaper mode. It comes integrated with webkit browser out of the box so that you can perform all the tasks under single window. - -![][87] - -QuiteRSS comes with features such as content blocking, automatic scheduled feeds, import/export OPML, system tray integration and many other features you could expect in any feed reader. - -``` -$ sudo apt-get install quiterss -``` - -### **80\. MPV Media Player** - -MPV is a free and open-source media player based on MPlayer and MPlayer 2. It has simple user interface where user just needs to drag and drop audio/video files to play them as there is no option to add media files on GUI. - -![][88] - -One of the things about MPV is that it can play 4K videos effortlessly which is not the case with other media players on Linux distros. It also gives user ability to play videos from online video streaming sites including YouTube and Dailymotion. 
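-
-Playing an online stream is just a matter of passing the URL to the player on the command line; note that this typically relies on youtube-dl being available on the system, and the address below is only a placeholder:
-
-```
-$ mpv "https://www.youtube.com/watch?v=VIDEO_ID"
-```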
- -``` -$ sudo add-apt-repository ppa:mc3man/mpv-tests -$ sudo apt-get update -$ sudo apt-get install -y mpv -``` - -### **81\. Plume Creator** - -If you’re a writer then Plume Creator is must have application for you because you will not find other app for Ubuntu with privileges like Plume Creator. Writing and editing stories, chapters is tedious task and Plume Creator will ease this task for you with the help of some amazing tools it has to offer. - -![][89] - -It is an open-source application with minimal user interface which you could find confusing in the beginning but you will get used to it in some time. It offers features like edit notes, synopses, export in HTML and ODT formats support, and rich text editing. - -``` -$ sudo apt-get install plume-creator -``` - -### **82\. Chromium Web Browser** - -Chromium is an open-source web browser developed and distributed by Google. Chromium is very identical to Google Chrome web browser in terms of appearance and features. It is a lightweight and fast web browser with minimalist user interface. - -![][90] - -If you use Google Chrome regularly on Windows and looking for similar browser for Linux then Chromium is the best browser for you as you can login into your Google account to access all your Google services including Gmail. - -### **83\. Simple Weather Indicator** - -Simple Weather Indicator is an open-source weather indicator app developed in Python. It automatically detects your location and shows you weather information like temperature, possibility of rain, humidity, wind speed and visibility. - -![][91] - -Weather indicator comes with configurable options such as location detection, temperature SI unit, location visibility on/off, etc. It is cool app to have which adjusts with your desktop comfortably. - -### **84\. SpeedCrunch** - -SpeedCrunch is a fast and high-precision scientific calculator. It comes preloaded with math functions, user-defined functions, complex numbers and unit conversions support. It has simple user interface which is easy to use. - -![][92] - -This scientific calculator has some amazing features like result preview, syntax highlighting and auto-completion. It is a cross-platform calculator with multi-language support. - -### **85\. Scribus** - -Scribus is a free and open-source desktop publishing application that lets you create posters, magazines and books. It is a cross-platform application based on Qt toolkit and released under GNU general public license. It is a professional application with features like CMYK and ICC color management, Python based scripting engine, and PDF creation. - -![][93] - -Scribus comes with decent user interface which is easy to use and works effortlessly on systems with low hardware. It could be downloaded and installed from Software Center on all the latest Linux distros. - -### **86.** **Cura** - -Cura is an open-source 3D printing application developed by David Braam. Ultimaker Cura is the most popular software in 3D printing world and used by millions of users worldwide. It follows 3 step 3D printing model: Design, Prepare and Print. - -![][94] - -It is a feature-rich 3D printing application with seamless integration support for CAD plug-ins and add-ons. It is very simple and easy to use tool, novice artists can start right away. It supports major file formats like STL, 3MF and OBJ file formats. - -### **87\. Nomacs** - -Nomacs is an open-source, cross-platform image viewer which is supports all the major image formats including RAW and psd images. 
It has ability to browse images in zip or Microsoft Office files and extract them to a directory. - -![][95] - -Nomacs has very simple user interface with image thumbnails at the top and it offers some basic image manipulation features like crop, resize, rotate, color correction, etc. - -``` -$ sudo add-apt-repository ppa:nomacs/stable -$ sudo apt-get update -$ sudo apt-get install nomacs -``` - -### **88\. BitTicker** - -BitTicker is a live bitcoin-USDT Ticker for Ubuntu. It is a simple tool that connects to bittrex.com market and retrieves the latest price for BTC-USDT and display Ubuntu clock on system tray. - -![][96] - -It is simple but very useful too if you invest in Bitcoin regularly and have to study price fluctuations regulary. - -### **89\. Organize My Files** - -Organize My Files is cross-platform one click file organizer and it is available in Ubuntu as a snap package. It is a simple but powerful tool that will help you find unorganized files in a simple click and take them under control. - -![][97] - -It has intuitive user interface which is powerful and fast. It offers features such as auto organizing, recursive organizing, smart filters and multi folders organizing. - -### **90\. GnuCash** - -GnuCash is financial accounting software licensed under GNU general public license for free usage. It is an ideal software for personal and small business accounting. It has simple user interface which allows you to keep track of bank accounts, stocks, income and expenses. - -![][98] - -GnuCash implements double-entry bookkeeping system and based on professional accounting principle to ensure balanced books and accurate reports. - -### **91\. Calibre** - -Calibre is a cross-platform, open-source solution to all your e-book needs. It is a simple e-book organizer that offers features like displaying, creating, editing e-books, organizing existing e-books into virtual libraries, syncing and many more. - -![][99] - -Calibre also helps you convert e-books into whichever format you need and send them to your e-book reading device. Calibre is a very good tool to have if you regularly read and manage e-books. - -### **92\. MATE Dictionary** - -MATE Dictionary is a simple dictionary basically developed for MATE desktop environment. You just need to type the word and then this dictionary will display the meaning and references for it. - -![][100] - -It is simple and lightweight online dictionary with minimalist user interface. - -### **93\. Converseen** - -Converseen is a free cross-platform batch image processing application that lets you convert, edit, resize, rotate and flip large number of images with a single mouse click. It offers features like renaming bunch of images using prefix/suffix, extract image from a Windows icon file, etc. - -![][101] - -It has very good user interface which is easy to use even if you are a novice user. It can also convert entire PDF file into collection of images. - -### **94\. Tiled Map Editor** - -Tiled is a free software level map editor which you can use to edit the maps in various projections such as orthogonal, isometric and hexagonal. This tool can be very useful for game developers during game engine development cycle. - -![][102] - -Tiled is a versatile map editor which lets you create power boost positions, map layouts, collision areas, and enemy positions. It saves all the data in tmx format. - -### **95.** **Qmmp** - -Qmmp is a free and open-source audio player developed in Qt and C++. 
It is a cross-platform audio player with user interface very similar to Winamp. It has simple and intuitive user interface, Winamp skins can be used instead of default UI. - -![][103] - -It offers features such as automatic album cover fetching, multiple artists support, additional plug-ins and add-ons support and other features similar to Winamp. - -``` -$ sudo add-apt-repository ppa:forkotov02/ppa -$ sudo apt-get update -$ sudo apt-get install qmmp qmmp-q4 qmmp-plugin-pack-qt4 -``` - -### **96\. Arora** - -Arora is free and open-source web browser which offers features like dedicated download manager, bookmarks, privacy mode and tabbed browsing. - -![][104] - -Arora web browser is developed by Benjamin C. Meyer and it is popular among Linux users for its lightweight nature and flexibility. - -### **97\. XnSketch** - -XnSketch is a cool application for Ubuntu that will help you transform your photos into cartoon or sketch images in few clicks. It sports 18 different effects such as black strokes, white strokes, pencil sketch and others. - -![][105] - -It has an excellent user interface which you will find easy to use. Some additional features in XnSketch include opacity and edge strength adjustment, contrast, brightness and saturation adjustment. - -### **98\. Geany** - -Geany is a simple and lightweight text editor which works like an Integrated Development Environment (IDE). It is a cross-platform text editor and supports all the major programming languages including Python, C++, LaTex, Pascal, C#, etc. - -![][106] - -Geany has simple user interface which resembles to programming editors like Notepad++. It offers IDE like features such as code navigation, auto-completion, syntax highlighting, and extensions support. - -``` -$ sudo apt-get install geany -``` - -### **99\. Mumble** - -Mumble is another Voice over Internet Protocol application very similar to Discord. Mumble is also basically designed for online gamers and uses client-server architecture for end-to-end chat. Voice quality is very good on Mumble and it offers end-to-end encryption to ensure privacy. - -![][107] - -Mumble is an open-source application and features very simple user interface which is easy to use. Mumble is available for download in Ubuntu Software Center. - -``` -$ sudo apt-get install mumble-server -``` - -### **100\. Deluge** - -Deluge is a cross-platform and lightweight BitTorrent client that can be used for downloading files on Ubuntu. BitTorrent client come bundled with many Linux distros but Deluge is the best BitTorrent client which has simple user interface which is easy to use. - -![][108] - -Deluge comes with all the features you will find in BitTorrent clients but one feature that stands out is it gives user ability to access the client from other devices as well so that you can download files to your computer even if you are not at home. - -``` -$ sudo add-apt-repository ppa:deluge-team/ppa -$ sudo apt-get update -$ sudo apt-get install deluge -``` - -So these are my picks for best 100 applications for Ubuntu in 2018 which you should try. All the applications listed here are tested on Ubuntu 18.04 and will definitely work on older versions too. Feel free to share your view about the article at [@LinuxHint][109] and [@SwapTirthakar][110]. 
- --------------------------------------------------------------------------------- - -via: https://linuxhint.com/100_best_ubuntu_apps/ - -作者:[Swapnil Tirthakar][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://linuxhint.com/author/swapnil/ -[1]:https://linuxhint.com/applications-2018-ubuntu/ -[2]:https://linuxhint.com/wp-content/uploads/2018/06/100-Best-Ubuntu-Apps.png -[3]:https://linuxhint.com/wp-content/uploads/2018/06/Chrome.png -[4]:https://linuxhint.com/wp-content/uploads/2018/06/Steam.png -[5]:https://linuxhint.com/wp-content/uploads/2018/06/Wordpress.png -[6]:https://linuxhint.com/wp-content/uploads/2018/06/VLC.png -[7]:https://linuxhint.com/wp-content/uploads/2018/06/Atom-Text-Editor.png -[8]:https://linuxhint.com/wp-content/uploads/2018/06/GIMP.png -[9]:https://linuxhint.com/wp-content/uploads/2018/06/Google-Play.png -[10]:https://www.googleplaymusicdesktopplayer.com/ -[11]:https://linuxhint.com/wp-content/uploads/2018/06/Franz.png -[12]:https://meetfranz.com/#download -[13]:https://linuxhint.com/wp-content/uploads/2018/06/Synaptic.png -[14]:https://linuxhint.com/wp-content/uploads/2018/06/Skype.png -[15]:https://linuxhint.com/wp-content/uploads/2018/06/VirtualBox.png -[16]:https://linuxhint.com/wp-content/uploads/2018/06/Unity-Tweak-Tool.png -[17]:https://linuxhint.com/wp-content/uploads/2018/06/Ubuntu-Cleaner.png -[18]:https://linuxhint.com/wp-content/uploads/2018/06/Visual-Studio-Code.png -[19]:https://linuxhint.com/wp-content/uploads/2018/06/Corebird.png -[20]:https://linuxhint.com/wp-content/uploads/2018/06/Pixbuf.png -[21]:https://linuxhint.com/wp-content/uploads/2018/06/Clementine.png -[22]:https://linuxhint.com/wp-content/uploads/2016/06/blender.jpg -[23]:https://linuxhint.com/wp-content/uploads/2018/06/Audacity.png -[24]:https://linuxhint.com/wp-content/uploads/2018/06/Vim.png -[25]:https://linuxhint.com/wp-content/uploads/2018/06/Inkscape-1.png -[26]:https://linuxhint.com/wp-content/uploads/2018/06/ShotCut.png -[27]:https://linuxhint.com/wp-content/uploads/2018/06/Simple-Screen-Recorder.png -[28]:https://linuxhint.com/wp-content/uploads/2018/06/Telegram.png -[29]:https://linuxhint.com/wp-content/uploads/2018/06/ClamTk.png -[30]:https://linuxhint.com/wp-content/uploads/2018/06/Mailspring.png -[31]:https://linuxhint.com/wp-content/uploads/2018/06/PyCharm.png -[32]:https://linuxhint.com/wp-content/uploads/2018/06/Caffeine.png -[33]:https://linuxhint.com/wp-content/uploads/2018/06/Etcher.png -[34]:https://etcher.io/ -[35]:https://linuxhint.com/wp-content/uploads/2018/06/Neofetch.png -[36]:https://linuxhint.com/wp-content/uploads/2018/06/Liferea.png -[37]:https://linuxhint.com/wp-content/uploads/2018/06/Shutter.png -[38]:https://linuxhint.com/wp-content/uploads/2018/06/Weather.png -[39]:https://linuxhint.com/wp-content/uploads/2018/06/Ramme.png -[40]:https://github.com/terkelg/ramme/releases -[41]:https://linuxhint.com/wp-content/uploads/2018/06/Thunderbird.png -[42]:https://linuxhint.com/wp-content/uploads/2018/06/Pidgin.png -[43]:https://linuxhint.com/wp-content/uploads/2018/06/Krita.png -[44]:https://linuxhint.com/wp-content/uploads/2018/06/Dropbox.png -[45]:https://linuxhint.com/wp-content/uploads/2018/06/kodi.png -[46]:https://linuxhint.com/wp-content/uploads/2018/06/Spotify.png -[47]:https://linuxhint.com/wp-content/uploads/2018/06/Brackets.png 
-[48]:https://linuxhint.com/wp-content/uploads/2018/06/Bitwarden.png -[49]:https://linuxhint.com/wp-content/uploads/2018/06/Terminator.png -[50]:https://linuxhint.com/wp-content/uploads/2018/06/Yak-Yak.png -[51]:https://linuxhint.com/wp-content/uploads/2018/06/Thonny.png -[52]:https://linuxhint.com/wp-content/uploads/2018/06/Font-Manager.png -[53]:https://linuxhint.com/wp-content/uploads/2018/06/Atril.png -[54]:https://linuxhint.com/wp-content/uploads/2018/06/Notepadqq.png -[55]:https://linuxhint.com/wp-content/uploads/2018/06/Amarok.png -[56]:https://linuxhint.com/wp-content/uploads/2018/06/Cheese.png -[57]:https://linuxhint.com/wp-content/uploads/2018/06/MyPaint.png -[58]:https://linuxhint.com/wp-content/uploads/2018/06/PlayOnLinux.png -[59]:https://linuxhint.com/wp-content/uploads/2018/06/Akregator.png -[60]:https://linuxhint.com/wp-content/uploads/2018/06/Brave.png -[61]:https://linuxhint.com/wp-content/uploads/2018/06/Bitcoin-Core.png -[62]:https://linuxhint.com/wp-content/uploads/2018/06/Speedy-Duplicate-Finder.png -[63]:https://linuxhint.com/wp-content/uploads/2018/06/Zulip.png -[64]:https://linuxhint.com/wp-content/uploads/2018/06/Okular.png -[65]:https://linuxhint.com/wp-content/uploads/2018/06/Focus-Writer.png -[66]:https://linuxhint.com/wp-content/uploads/2018/06/Guake.png -[67]:https://linuxhint.com/wp-content/uploads/2018/06/KDE-Connect.png -[68]:https://linuxhint.com/wp-content/uploads/2018/06/CopyQ.png -[69]:https://linuxhint.com/wp-content/uploads/2018/06/Tilix.png -[70]:https://linuxhint.com/wp-content/uploads/2018/06/Anbox.png -[71]:https://linuxhint.com/wp-content/uploads/2018/06/OpenShot.png -[72]:https://linuxhint.com/wp-content/uploads/2018/06/Plank.png -[73]:https://linuxhint.com/wp-content/uploads/2018/06/FileZilla.png -[74]:https://linuxhint.com/wp-content/uploads/2018/06/Stacer.png -[75]:https://linuxhint.com/wp-content/uploads/2018/06/4K-Video-Downloader.png -[76]:https://www.4kdownload.com/download -[77]:https://linuxhint.com/wp-content/uploads/2018/06/Qalculate.png -[78]:https://linuxhint.com/wp-content/uploads/2018/06/Hiri.png -[79]:https://linuxhint.com/wp-content/uploads/2018/06/Sublime-text.png -[80]:https://linuxhint.com/wp-content/uploads/2018/06/TeXstudio.png -[81]:https://www.texstudio.org/ -[82]:https://linuxhint.com/wp-content/uploads/2018/06/QtQR.png -[83]:https://linuxhint.com/wp-content/uploads/2018/06/Kontact.png -[84]:https://linuxhint.com/wp-content/uploads/2018/06/Nitro-Share.png -[85]:https://linuxhint.com/wp-content/uploads/2018/06/Konversation.png -[86]:https://linuxhint.com/wp-content/uploads/2018/06/Discord.png -[87]:https://linuxhint.com/wp-content/uploads/2018/06/QuiteRSS.png -[88]:https://linuxhint.com/wp-content/uploads/2018/06/MPU-Media-Player.png -[89]:https://linuxhint.com/wp-content/uploads/2018/06/Plume-Creator.png -[90]:https://linuxhint.com/wp-content/uploads/2018/06/Chromium.png -[91]:https://linuxhint.com/wp-content/uploads/2018/06/Simple-Weather-Indicator.png -[92]:https://linuxhint.com/wp-content/uploads/2018/06/SpeedCrunch.png -[93]:https://linuxhint.com/wp-content/uploads/2018/06/Scribus.png -[94]:https://linuxhint.com/wp-content/uploads/2018/06/Cura.png -[95]:https://linuxhint.com/wp-content/uploads/2018/06/Nomacs.png -[96]:https://linuxhint.com/wp-content/uploads/2018/06/Bit-Ticker-1.png -[97]:https://linuxhint.com/wp-content/uploads/2018/06/Organize-My-Files.png -[98]:https://linuxhint.com/wp-content/uploads/2018/06/GNU-Cash.png -[99]:https://linuxhint.com/wp-content/uploads/2018/06/Calibre.png 
-[100]:https://linuxhint.com/wp-content/uploads/2018/06/MATE-Dictionary.png -[101]:https://linuxhint.com/wp-content/uploads/2018/06/Converseen.png -[102]:https://linuxhint.com/wp-content/uploads/2018/06/Tiled-Map-Editor.png -[103]:https://linuxhint.com/wp-content/uploads/2018/06/Qmmp.png -[104]:https://linuxhint.com/wp-content/uploads/2018/06/Arora.png -[105]:https://linuxhint.com/wp-content/uploads/2018/06/XnSketch.png -[106]:https://linuxhint.com/wp-content/uploads/2018/06/Geany.png -[107]:https://linuxhint.com/wp-content/uploads/2018/06/Mumble.png -[108]:https://linuxhint.com/wp-content/uploads/2018/06/Deluge.png -[109]:https://twitter.com/linuxhint -[110]:https://twitter.com/SwapTirthakar diff --git a/sources/tech/20180629 How To Get Flatpak Apps And Games Built With OpenGL To Work With Proprietary Nvidia Graphics Drivers.md b/sources/tech/20180629 How To Get Flatpak Apps And Games Built With OpenGL To Work With Proprietary Nvidia Graphics Drivers.md deleted file mode 100644 index a9d540adae..0000000000 --- a/sources/tech/20180629 How To Get Flatpak Apps And Games Built With OpenGL To Work With Proprietary Nvidia Graphics Drivers.md +++ /dev/null @@ -1,113 +0,0 @@ -How To Get Flatpak Apps And Games Built With OpenGL To Work With Proprietary Nvidia Graphics Drivers -====== -**Some applications and games built with OpenGL support and packaged as Flatpak fail to start with proprietary Nvidia drivers. This article explains how to get such Flatpak applications or games them to start, without installing the open source drivers (Nouveau).** - -Here's an example. I'm using the proprietary Nvidia drivers on my Ubuntu 18.04 desktop (`nvidia-driver-390`) and when I try to launch the latest -``` -$ /usr/bin/flatpak run --branch=stable --arch=x86_64 --command=krita --file-forwarding org.kde.krita -Gtk-Message: Failed to load module "canberra-gtk-module" -Gtk-Message: Failed to load module "canberra-gtk-module" -libGL error: No matching fbConfigs or visuals found -libGL error: failed to load driver: swrast -Could not initialize GLX - -``` - -To fix Flatpak games and applications not starting when using OpenGL with proprietary Nvidia graphics drivers, you'll need to install a runtime for your currently installed proprietary Nvidia drivers. Here's how to do this. - -**1\. Add the FlatHub repository if you haven't already. You can find exact instructions for your Linux distribution[here][1].** - -**2. Now you'll need to figure out the exact version of the proprietary Nvidia drivers installed on your system. ** - -_This step is dependant of the Linux distribution you're using and I can't cover all cases. The instructions below are Ubuntu-oriented (and Ubuntu flavors) but hopefully you can figure out for yourself the Nvidia drivers version installed on your system._ - -To do this in Ubuntu, open `Software & Updates` , switch to the `Additional Drivers` tab and note the name of the Nvidia driver package. - -As an example, this is `nvidia-driver-390` in my case, as you can see here: - -![](https://1.bp.blogspot.com/-FAfjtGNeUJc/WzYXMYTFBcI/AAAAAAAAAx0/xUhIO83IAjMuK4Hn0jFUYKJhSKw8y559QCLcBGAs/s1600/additional-drivers-nvidia-ubuntu.png) - -That's not all. We've only found out the Nvidia drivers major version but we'll also need to know the minor version. 
To get the exact Nvidia driver version, which we'll need for the next step, run this command (should work in any Debian-based Linux distribution, like Ubuntu, Linux Mint and so on): -``` -apt-cache policy NVIDIA-PACKAGE-NAME - -``` - -Where NVIDIA-PACKAGE-NAME is the Nvidia drivers package name listed in `Software & Updates` . For example, to see the exact installed version of the `nvidia-driver-390` package, run this command: -``` -$ apt-cache policy nvidia-driver-390 -nvidia-driver-390: - Installed: 390.48-0ubuntu3 - Candidate: 390.48-0ubuntu3 - Version table: - * 390.48-0ubuntu3 500 - 500 http://ro.archive.ubuntu.com/ubuntu bionic/restricted amd64 Packages - 100 /var/lib/dpkg/status - -``` - -In this command's output, look for the `Installed` section and note the version numbers (excluding `-0ubuntu3` and anything similar). Now we know the exact version of the installed Nvidia drivers (`390.48` in my example). Remember this because we'll need it for the next step. - -**3\. And finally, you can install the Nvidia runtime for your installed proprietary Nvidia graphics drivers, from FlatHub** - -To list all the available Nvidia runtime packages available on FlatHub, you can use this command: -``` -flatpak remote-ls flathub | grep nvidia - -``` - -Hopefully the runtime for your installed Nvidia drivers is available on FlatHub. You can now proceed to install the runtime by using this command: - - * For 64bit systems: - - -``` -flatpak install flathub org.freedesktop.Platform.GL.nvidia-MAJORVERSION-MINORVERSION - -``` - -Replace MAJORVERSION with the Nvidia driver major version installed on your computer (390 in my example above) and -MINORVERSION with the minor version (48 in my example from step 2). - -For example, to install the runtime for Nvidia graphics driver version 390.48, you'd have to use this command: -``` -flatpak install flathub org.freedesktop.Platform.GL.nvidia-390-48 - -``` - - * For 32bit systems (or to be able to run 32bit applications or games on 64bit), install the 32bit runtime using: - - -``` -flatpak install flathub org.freedesktop.Platform.GL32.nvidia-MAJORVERSION-MINORVERSION - -``` - -Once again, replace MAJORVERSION with the Nvidia driver major version installed on your computer (390 in my example above) and MINORVERSION with the minor version (48 in my example from step 2). - -For example, to install the 32bit runtime for Nvidia graphics driver version 390.48, you'd have to use this command: -``` -flatpak install flathub org.freedesktop.Platform.GL32.nvidia-390-48 - -``` - -That is all you need to do to get applications or games packaged as Flatpak that are built with OpenGL to run. 
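-
-As a quick sanity check, the same command that failed at the start of this article should now launch Krita normally instead of stopping at the `Could not initialize GLX` error:
-
-```
-$ /usr/bin/flatpak run --branch=stable --arch=x86_64 --command=krita --file-forwarding org.kde.krita
-```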
- - --------------------------------------------------------------------------------- - -via: https://www.linuxuprising.com/2018/06/how-to-get-flatpak-apps-and-games-built.html - -作者:[Logix][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/118280394805678839070 -[1]:https://flatpak.org/setup/ -[2]:https://www.linuxuprising.com/2018/06/free-painting-software-krita-410.html -[3]:https://www.linuxuprising.com/2018/06/winepak-is-flatpak-repository-for.html -[4]:https://github.com/winepak/applications/issues/23 -[5]:https://github.com/flatpak/flatpak/issues/138 diff --git a/sources/tech/20180703 10 killer tools for the admin in a hurry.md b/sources/tech/20180703 10 killer tools for the admin in a hurry.md deleted file mode 100644 index 363f401709..0000000000 --- a/sources/tech/20180703 10 killer tools for the admin in a hurry.md +++ /dev/null @@ -1,87 +0,0 @@ -10 killer tools for the admin in a hurry -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT) - -Administering networks and systems can get very stressful when the workload piles up. Nobody really appreciates how long anything takes, and everyone wants their specific thing done yesterday. - -So it's no wonder so many of us are drawn to the open source spirit of figuring out what works and sharing it with everyone. Because, when deadlines are looming, and there just aren't enough hours in the day, it really helps if you can just find free answers you can implement immediately. - -So, without further ado, here's my Swiss Army Knife of stuff to get you out of the office before dinner time. - -### Server configuration and scripting - -Let's jump right in. - -**[NixCraft][1]** -Use the site's internal search function. With more than a decade of regular updates, there's gold to be found here—useful scripts and handy hints that can solve your problem straight away. This is often the second place I look after Google. - -**[Webmin][2]** -This gives you a nice web interface to remotely edit your configuration files. It cuts down on a lot of time spent having to juggle directory paths and `sudo nano`, which is handy when you're handling several customers. - -**[Windows Subsystem for Linux][3]** -The reality of the modern workplace is that most employees are on Windows, while the grown-up gear in the server room is on Linux. So sometimes you find yourself trying to do admin tasks from (gasp) a Windows desktop. - -What do you do? Install a virtual machine? It's actually much faster and far less work to configure if you install the Windows Subsystem for Linux compatibility layer, now available at no cost on Windows 10. - -This gives you a Bash terminal in a window where you can run Bash scripts and Linux binaries on the local machine, have full access to both Windows and Linux filesystems, and mount network drives. It's available in Ubuntu, OpenSUSE, SLES, Debian, and Kali flavors. - -**[mRemoteNG][4]** -This is an excellent SSH and remote desktop client for when you have 100+ servers to manage. - -### Setting up a network so you don't have to do it again - -A poorly planned network is the sworn enemy of the admin who hates working overtime. 
- -**[IP Addressing Schemes that Scale][5]** -The diabolical thing about running out of IP addresses is that, when it happens, the network's grown large enough that a new addressing scheme is an expensive, time-consuming pain in the proverbial. - -Ain't nobody got time for that! - -At some point, IPv6 will finally arrive to save the day. Until then, these one-size-fits-most IP addressing schemes should keep you going, no matter how many network-connected wearables, tablets, smart locks, lights, security cameras, VoIP headsets, and espresso machines the world throws at us. - -**[Linux Chmod Permissions Cheat Sheet][6]** -A short but sweet cheat sheet of Bash commands to set permissions across the network. This is so when Bill from Customer Service falls for that ransomware scam, you're recovering just his files and not the entire company's. - -**[VLSM Subnet Calculator][7]** -Just put in the number of networks you want to create from an address space and the number of hosts you want per network, and it calculates what the subnet mask should be for everything. - -### Single-purpose Linux distributions - -Need a Linux box that does just one thing? It helps if someone else has already sweated the small stuff on an operating system you can install and have ready immediately. - -Each of these has, at one point, made my work day so much easier. - -**[Porteus Kiosk][8]** -This is for when you want a computer totally locked down to just a web browser. With a little tweaking, you can even lock the browser down to just one website. This is great for public access machines. It works with touchscreens or with a keyboard and mouse. - -**[Parted Magic][9]** -This is an operating system you can boot from a USB drive to partition hard drives, recover data, and run benchmarking tools. - -**[IPFire][10]** -Hahahaha, I still can't believe someone called a router/firewall/proxy combo "I pee fire." That's my second favorite thing about this Linux distribution. My favorite is that it's a seriously solid software suite. It's so easy to set up and configure, and there is a heap of plugins available to extend it. - -So, how about you? What tools, resources, and cheat sheets have you found to make the workday easier? I'd love to know. Please share in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/7/tools-admin - -作者:[Grant Hamono][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/grantdxm -[1]:https://www.cyberciti.biz/ -[2]:http://www.webmin.com/ -[3]:http://wsl-guide.org/en/latest/ -[4]:https://mremoteng.org/ -[5]:https://blog.dxmtechsupport.com.au/ip-addressing-for-a-small-business-that-might-grow/ -[6]:https://isabelcastillo.com/linux-chmod-permissions-cheat-sheet -[7]:http://www.vlsm-calc.net/ -[8]:http://porteus-kiosk.org/ -[9]:https://partedmagic.com/ -[10]:https://www.ipfire.org/ diff --git a/sources/tech/20180707 Version Control Before Git with CVS.md b/sources/tech/20180707 Version Control Before Git with CVS.md deleted file mode 100644 index 4c59a2cfc0..0000000000 --- a/sources/tech/20180707 Version Control Before Git with CVS.md +++ /dev/null @@ -1,313 +0,0 @@ -(translating by runningwater) -Version Control Before Git with CVS -====== -Github was launched in 2008. 
If your software engineering career, like mine, is no older than Github, then Git may be the only version control software you have ever used. While people sometimes grouse about its steep learning curve or unintuitive interface, Git has become everyone’s go-to for version control. In Stack Overflow’s 2015 developer survey, 69.3% of respondents used Git, almost twice as many as used the second-most-popular version control system, Subversion. After 2015, Stack Overflow stopped asking developers about the version control systems they use, perhaps because Git had become so popular that the question was uninteresting. - -Git itself is not much older than Github. Linus Torvalds released the first version of Git in 2005. Though today younger developers might have a hard time conceiving of a world where the term “version control software” didn’t more or less just mean Git, such a world existed not so long ago. There were lots of alternatives to choose from. Open source developers preferred Subversion, enterprises and video game companies used Perforce (some still do), while the Linux kernel project famously relied on a version control system called BitKeeper. - -Some of these systems, particularly BitKeeper, might feel familiar to a young Git user transported back in time. Most would not. BitKeeper aside, the version control systems that came before Git worked according to a fundamentally different paradigm. In a taxonomy offered by Eric Sink, author of Version Control by Example, Git is a third-generation version control system, while most of Git’s predecessors, the systems popular in the 1990s and early 2000s, are second-generation version control systems. Where third-generation version control systems are distributed, second-generation version control systems are centralized. You have almost certainly heard Git described as a “distributed” version control system before. I never quite understood the distributed/centralized distinction, at least not until I installed and experimented with a centralized second-generation version control system myself. - -The system I installed was CVS. CVS, short for Concurrent Versions System, was the very first second-generation version control system. It was also the most popular version control system for about a decade until it was replaced in 2000 by Subversion. Even then, Subversion was supposed to be “CVS but better,” which only underscores how dominant CVS had become throughout the 1990s. - -CVS was first developed in 1986 by a Dutch computer scientist named Dick Grune, who was looking for a way to collaborate with his students on a compiler project. CVS was initially little more than a collection of shell scripts wrapping RCS (Revision Control System), a first-generation version control system that Grune wanted to improve. RCS works according to a pessimistic locking model, meaning that no two programmers can work on a single file at once. In order to edit a file, you have to first ask RCS for an exclusive lock on the file, which you keep until you are finished editing. If someone else is already editing a file you need to edit, you have to wait. CVS improved on RCS and ushered in the second generation of version control systems by trading the pessimistic locking model for an optimistic one. Programmers could now edit the same file at the same time, merging their edits and resolving any conflicts later. (Brian Berliner, an engineer who later took over the CVS project, wrote a very readable [paper][1] about CVS’ innovations in 1990.) 
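-
-If it helps to see the difference concretely, here is a rough sketch of the two workflows on the command line; the file name is made up and the flags are abbreviated, so treat it as an illustration rather than a tutorial:
-
-```
-# RCS: pessimistic locking
-$ co -l foo.c     # check out foo.c and take the exclusive lock
-$ vi foo.c        # edit while anyone else who needs the file waits
-$ ci -u foo.c     # check the change in and give up the lock
-
-# CVS: optimistic concurrency
-$ cvs update      # sync your working copy with the repository
-$ vi foo.c        # edit, even while someone else edits the same file
-$ cvs commit      # fails until you update and merge if someone committed first
-```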
- -In that sense, CVS wasn’t all that different from Git, which also works according to an optimistic model. But that’s where the similarities end. In fact, when Linus Torvalds was developing Git, one of his guiding principles was WWCVSND, or “What Would CVS Not Do.” Whenever he was in doubt about a decision, he strove to choose the option that had not been chosen in the design of CVS. So even though CVS predates Git by over a decade, it influenced Git as a kind of negative template. - -I’ve really enjoyed playing around with CVS. I think there’s no better way to understand why Git’s distributed nature is such an improvement on what came before. So I invite you to come along with me on an exciting journey and spend the next ten minutes of your life learning about a piece of software nobody has used in the last decade. (See correction.) - -### Getting Started with CVS - -Instructions for installing CVS can be found on the [project’s homepage][2]. On MacOS, you can install CVS using Homebrew. - -Since CVS is centralized, it distinguishes between the client-side universe and the server-side universe in a way that something like Git does not. The distinction is not so pronounced that there are different executables. But in order to start using CVS, even on your own machine, you’ll have to set up the CVS backend. - -The CVS backend, the central store for all your code, is called the repository. Whereas in Git you would typically have a repository for every project, in CVS the repository holds all of your projects. There is one central repository for everything, though there are ways to work with only a project at a time. - -To create a local repository, you run the `init` command. You would do this somewhere global like your home directory. - -``` -$ cvs -d ~/sandbox init -``` - -CVS allows you to pass options to either the `cvs` command itself or to the `init` subcommand. Options that appear after the `cvs` command are global in nature, while options that appear after the subcommand are specific to the subcommand. In this case, the `-d` flag is global. Here it happens to tell CVS where we want to create our repository, but in general the `-d` flag points to the location of the repository we want to use for any given action. It can be tedious to supply the `-d` flag all the time, so the `CVSROOT` environment variable can be set instead. - -Since we’re working locally, we’ve just passed a path for our `-d` argument, but we could also have included a hostname. - -The command creates a directory called `sandbox` in your home directory. If you list the contents of `sandbox`, you’ll find that it contains another directory called `CVSROOT`. This directory, not to be confused with the environment variable, holds administrative files for the repository. - -Congratulations! You’ve just created your first CVS repository. - -### Checking In Code - -Let’s say that you’ve decided to keep a list of your favorite colors. You are an artistically inclined but extremely forgetful person. You type up your list of colors and save it as a file called `favorites.txt`: - -``` -blue -orange -green - -definitely not yellow -``` - -Let’s also assume that you’ve saved your file in a new directory called `colors`. Now you’d like to put your favorite color list under version control, because fifty years from now it will be interesting to look back and see how your tastes changed through time. - -In order to do that, you will have to import your directory as a new CVS project. 
You can do that using the `import` command: - -``` -$ cvs -d ~/sandbox import -m "" colors colors initial -N colors/favorites.txt - -No conflicts created by this import -``` - -Here we are specifying the location of our repository with the `-d` flag again. The remaining arguments are passed to the `import` subcommand. We have to provide a message, but here we don’t really need one, so we’ve left it blank. The next argument, `colors`, specifies the name of our new directory in the repository; here we’ve just used the same name as the directory we are in. The last two arguments specify the vendor tag and the release tag respectively. We’ll talk more about tags in a minute. - -You’ve just pulled your “colors” project into the CVS repository. There are a couple different ways to go about bringing code into CVS, but this is the method recommended by [Pragmatic Version Control Using CVS][3], the Pragmatic Programmer book about CVS. What makes this method a little awkward is that you then have to check out your work fresh, even though you’ve already got an existing `colors` directory. Instead of using that directory, you’re going to delete it and then check out the version that CVS already knows about: - -``` -$ cvs -d ~/sandbox co colors -cvs checkout: Updating colors -U colors/favorites.txt -``` - -This will create a new directory, also called `colors`. In this directory you will find your original `favorites.txt` file along with a directory called `CVS`. The `CVS` directory is basically CVS’ equivalent of the `.git` directory in every Git repository. - -### Making Changes - -Get ready for a trip. - -Just like Git, CVS has a `status` subcommand: - -``` -$ cvs status -cvs status: Examining . -=================================================================== -File: favorites.txt Status: Up-to-date - - Working revision: 1.1.1.1 2018-07-06 19:27:54 -0400 - Repository revision: 1.1.1.1 /Users/sinclairtarget/sandbox/colors/favorites.txt,v - Commit Identifier: fD7GYxt035GNg8JA - Sticky Tag: (none) - Sticky Date: (none) - Sticky Options: (none) -``` - -This is where things start to look alien. CVS doesn’t have commit objects. In the above, there is something called a “Commit Identifier,” but this might be only a relatively recent edition—no mention of a “Commit Identifier” appears in Pragmatic Version Control Using CVS, which was published in 2003. (The last update to CVS was released in 2008.) - -Whereas with Git you’d talk about the version of a file associated with commit `45de392`, in CVS files are versioned separately. The first version of your file is version 1.1, the next version is 1.2, and so on. When branches are involved, extra numbers are appended, so you might end up with something like the `1.1.1.1` above, which appears to be the default in our case even though we haven’t created any branches. - -If you were to run `cvs log` (equivalent to `git log`) in a project with lots of files and commits, you’d see an individual history for each file. You might have a file at version 1.2 and a file at version 1.14 in the same project. - -Let’s go ahead and make a change to version 1.1 of our `favorites.txt` file: - -``` - blue - orange - green -+cyan - - definitely not yellow -``` - -Once we’ve made the change, we can run `cvs diff` to see what CVS thinks we’ve done: - -``` -$ cvs diff -cvs diff: Diffing . 
-Index: favorites.txt -=================================================================== -RCS file: /Users/sinclairtarget/sandbox/colors/favorites.txt,v -retrieving revision 1.1.1.1 -diff -r1.1.1.1 favorites.txt -3a4 -> cyan -``` - -CVS recognizes that we added a new line containing the color “cyan” to the file. (Actually, it says we’ve made changes to the “RCS” file; you can see that CVS never fully escaped its original association with RCS.) The diff we are being shown is the diff between the copy of `favorites.txt` in our working directory and the 1.1.1.1 version stored in the repository. - -In order to update the version stored in the repository, we have to commit the change. In Git, this would be a multi-step process. We’d have to stage the change so that it appears in our index. Then we’d commit the change. Finally, to make the change visible to anyone else, we’d have to push the commit up to the origin repository. - -In CVS, all of these things happen when you run `cvs commit`. CVS just bundles up all the changes it can find and puts them in the repository: - -``` -$ cvs commit -m "Add cyan to favorites." -cvs commit: Examining . -/Users/sinclairtarget/sandbox/colors/favorites.txt,v <-- favorites.txt -new revision: 1.2; previous revision: 1.1 -``` - -I’m so used to Git that this strikes me as terrifying. Without an opportunity to stage changes, any old thing that you’ve touched in your working directory might end up as part of the public repository. Did you passive-aggressively rewrite a coworker’s poorly implemented function out of cathartic necessity, never intending for him to know? Too bad, he now thinks you’re a dick. You also can’t edit your commits before pushing them, since a commit is a push. Do you enjoy spending 40 minutes repeatedly running `git rebase -i` until your local commit history flows like the derivation of a mathematical proof? Sorry, you can’t do that here, and everyone is going to find out that you don’t actually write your tests first. - -But I also now understand why so many people find Git needlessly complicated. If `cvs commit` is what you were used to, then I’m sure staging and pushing changes would strike you as a pointless chore. - -When people talk about Git being a “distributed” system, this is primarily the difference they mean. In CVS, you can’t make commits locally. A commit is a submission of code to the central repository, so it’s not something you can do without a connection. All you’ve got locally is your working directory. In Git, you have a full-fledged local repository, so you can make commits all day long even while disconnected. And you can edit those commits, revert, branch, and cherry pick as much as you want, without anybody else having to know. - -Since commits were a bigger deal, CVS users often made them infrequently. Commits would contain as many changes as today we might expect to see in a ten-commit pull request. This was especially true if commits triggered a CI build and an automated test suite. - -If we now run `cvs status`, we can see that we have a new version of our file: - -``` -$ cvs status -cvs status: Examining . 
-=================================================================== -File: favorites.txt Status: Up-to-date - - Working revision: 1.2 2018-07-06 21:18:59 -0400 - Repository revision: 1.2 /Users/sinclairtarget/sandbox/colors/favorites.txt,v - Commit Identifier: pQx5ooyNk90wW8JA - Sticky Tag: (none) - Sticky Date: (none) - Sticky Options: (none) -``` - -### Merging - -As mentioned above, in CVS you can edit a file that someone else is already editing. That was CVS’ big improvement on RCS. What happens when you need to bring your changes back together? - -Let’s say that you have invited some friends to add their favorite colors to your list. While they are adding their colors, you decide that you no longer like the color green and remove it from the list. - -When you go to commit your changes, you might discover that CVS notices a problem: - -``` -$ cvs commit -m "Remove green" -cvs commit: Examining . -cvs commit: Up-to-date check failed for `favorites.txt' -cvs [commit aborted]: correct above errors first! -``` - -It looks like your friends committed their changes first. So your version of `favorites.txt` is not up-to-date with the version in the repository. If you run `cvs status`, you’ll see that your local copy of `favorites.txt` is version 1.2 with some local changes, but the repository version is 1.3: - -``` -$ cvs status -cvs status: Examining . -=================================================================== -File: favorites.txt Status: Needs Merge - - Working revision: 1.2 2018-07-07 10:42:43 -0400 - Repository revision: 1.3 /Users/sinclairtarget/sandbox/colors/favorites.txt,v - Commit Identifier: 2oZ6n0G13bDaldJA - Sticky Tag: (none) - Sticky Date: (none) - Sticky Options: (none) -``` - -You can run `cvs diff` to see exactly what the differences between 1.2 and 1.3 are: - -``` -$ cvs diff -r HEAD favorites.txt -Index: favorites.txt -=================================================================== -RCS file: /Users/sinclairtarget/sandbox/colors/favorites.txt,v -retrieving revision 1.3 -diff -r1.3 favorites.txt -3d2 -< green -7,10d5 -< -< pink -< hot pink -< bubblegum pink -``` - -It seems that our friends really like pink. In any case, they’ve edited a different part of the file than we have, so the changes are easy to merge. CVS can do that for us when we run `cvs update`, which is similar to `git pull`: - -``` -$ cvs update -cvs update: Updating . -RCS file: /Users/sinclairtarget/sandbox/colors/favorites.txt,v -retrieving revision 1.2 -retrieving revision 1.3 -Merging differences between 1.2 and 1.3 into favorites.txt -M favorites.txt -``` - -If you now take a look at `favorites.txt`, you’ll find that it has been modified to include the changes that your friends made to the file. Your changes are still there too. Now you are free to commit the file: - -``` -$ cvs commit -cvs commit: Examining . -/Users/sinclairtarget/sandbox/colors/favorites.txt,v <-- favorites.txt -new revision: 1.4; previous revision: 1.3 -``` - -The end result is what you’d get in Git by running `git pull --rebase`. Your changes have been added on top of your friends’ changes. There is no “merge commit.” - -Sometimes, changes to the same file might be incompatible. If your friends had changed “green” to “olive,” for example, that would have conflicted with your change removing “green” altogether. In the early days of CVS, this was exactly the kind of case that caused people to worry that CVS wasn’t safe; RCS’ pessimistic locking ensured that such a case could never arise. 
But CVS guarantees safety by making sure that nobody’s changes get overwritten automatically. You have to tell CVS which change you want to keep going forward, so when you run `cvs update`, CVS marks up the file with both changes in the same way that Git does when Git detects a merge conflict. You then have to manually edit the file and pick the change you want to keep. - -The interesting thing to note here is that merge conflicts have to be fixed before you can commit. This is another consequence of CVS’ centralized nature. In Git, you don’t have to worry about resolving merges until you push the commits you’ve got locally. - -Since CVS doesn’t have easily addressable commit objects, the only way to group a collection of changes is to mark a particular working directory state with a tag. - -Creating a tag is easy: - -``` -$ cvs tag VERSION_1_0 -cvs tag: Tagging . -T favorites.txt -``` - -You’ll later be able to return files to this state by running `cvs update` and passing the tag to the `-r` flag: - -``` -$ cvs update -r VERSION_1_0 -cvs update: Updating . -U favorites.txt -``` - -Because you need a tag to rewind to an earlier working directory state, CVS encourages a lot of preemptive tagging. Before major refactors, for example, you might create a `BEFORE_REFACTOR_01` tag that you could later use if the refactor went wrong. People also used tags if they wanted to generate project-wide diffs. Basically, all the things we routinely do today with commit hashes have to be anticipated and planned for with CVS, since you needed to have the tags available already. - -Branches can be created in CVS, sort of. Branches are just a special kind of tag: - -``` -$ cvs rtag -b TRY_EXPERIMENTAL_THING colors -cvs rtag: Tagging colors -``` - -That only creates the branch (in full view of everyone, by the way), so you still need to switch to it using `cvs update`: - -``` -$ cvs update -r TRY_EXPERIMENTAL_THING -``` - -The above commands switch onto the new branch in your current working directory, but Pragmatic Version Control Using CVS actually advises that you create a new directory to hold your new branch. Presumably its authors found switching directories easier than switching branches in CVS. - -Pragmatic Version Control Using CVS also advises against creating branches off of an existing branch. They recommend only creating branches off of the mainline branch, which in Git is known as `master`. In general, branching was considered an “advanced” CVS skill. In Git, you might start a new branch for almost any trivial reason, but in CVS branching was typically used only when really necessary, such as for releases. - -A branch could later be merged back into the mainline using `cvs update` and the `-j` flag: - -``` -$ cvs update -j TRY_EXPERIMENTAL_THING -``` - -### Thanks for the Commit Histories - -In 2007, Linus Torvalds gave [a talk][4] about Git at Google. Git was very new then, so the talk was basically an attempt to persuade a roomful of skeptical programmers that they should use Git, even though Git was so different from anything then available. If you haven’t already seen the talk, I highly encourage you to watch it. Linus is an entertaining speaker, even if he never fails to be his brash self. He does an excellent job of explaining why the distributed model of version control is better than the centralized one. A lot of his criticism is reserved for CVS in particular. - -Git is a [complex tool][5]. Learning it can be a frustrating experience. 
But I’m also continually amazed at the things that Git can do. In comparison, CVS is simple and straightforward, though often unable to do many of the operations we now take for granted. Going back and using CVS for a while is an excellent way to find yourself with a new appreciation for Git’s power and flexibility. It illustrates well why understanding the history of software development can be so beneficial—picking up and re-examining obsolete tools will teach you volumes about the why behind the tools we use today. - -If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][6] on Twitter or subscribe to the [RSS feed][7] to make sure you know when a new post is out. - -#### Correction - -I’ve been told that there are many organizations, particularly risk-adverse organizations that do things like make medical device software, that still use CVS. Programmers in these organizations have developed little tricks for working around CVS’ limitations, such as making a new branch for almost every change to avoid committing directly to `HEAD`. (Thanks to Michael Kohne for pointing this out.) - --------------------------------------------------------------------------------- - -via: https://twobithistory.org/2018/07/07/cvs.html - -作者:[Two-Bit History][a] -选题:[lujun9972][b] -译者:[runningwater](https://github.com/runningwater) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twobithistory.org -[b]: https://github.com/lujun9972 -[1]: https://docs.freebsd.org/44doc/psd/28.cvs/paper.pdf -[2]: https://www.nongnu.org/cvs/ -[3]: http://shop.oreilly.com/product/9780974514000.do -[4]: https://www.youtube.com/watch?v=4XpnKHJAok8 -[5]: https://xkcd.com/1597/ -[6]: https://twitter.com/TwoBitHistory -[7]: https://twobithistory.org/feed.xml diff --git a/sources/tech/20180709 5 Firefox extensions to protect your privacy.md b/sources/tech/20180709 5 Firefox extensions to protect your privacy.md deleted file mode 100644 index 848856fe07..0000000000 --- a/sources/tech/20180709 5 Firefox extensions to protect your privacy.md +++ /dev/null @@ -1,54 +0,0 @@ -5 Firefox extensions to protect your privacy -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/biz_cinderblock_cloud_yellowhat.jpg?itok=sJdlsYTF) - -In the wake of the Cambridge Analytica story, I took a hard look at how far I had let Facebook penetrate my online presence. As I'm generally concerned about single points of failure (or compromise), I am not one to use social logins. I use a password manager and create unique logins for every site (and you should, too). - -What I was most perturbed about was the pervasive intrusion Facebook was having on my digital life. I uninstalled the Facebook mobile app almost immediately after diving into the Cambridge Analytica story. I also [disconnected all apps, games, and websites][1] from Facebook. Yes, this will change your experience on Facebook, but it will also protect your privacy. As a veteran with friends spread out across the globe, maintaining the social connectivity of Facebook is important to me. - -I went about the task of scrutinizing other services as well. I checked Google, Twitter, GitHub, and more for any unused connected applications. But I know that's not enough. I need my browser to be proactive in preventing behavior that violates my privacy. I began the task of figuring out how best to do that. 
Sure, I can lock down a browser, but I need to make the sites and tools I use work while trying to keep them from leaking data. - -Following are five tools that will protect your privacy while using your browser. The first three extensions are available for Firefox and Chrome, while the latter two are only available for Firefox. - -### Privacy Badger - -[Privacy Badger][2] has been my go-to extension for quite some time. Do other content or ad blockers do a better job? Maybe. The problem with a lot of content blockers is that they are "pay for play." Meaning they have "partners" that get whitelisted for a fee. That is the antithesis of why content blockers exist. Privacy Badger is made by the Electronic Frontier Foundation (EFF), a nonprofit entity with a donation-based business model. Privacy Badger promises to learn from your browsing habits and requires minimal tuning. For example, I have only had to whitelist a handful of sites. Privacy Badger also allows granular controls of exactly which trackers are enabled on what sites. It's my #1, must-install extension, no matter the browser. - -### DuckDuckGo Privacy Essentials - -The search engine DuckDuckGo has typically been privacy-conscious. [DuckDuckGo Privacy Essentials][3] works across major mobile devices and browsers. It's unique in the sense that it grades sites based on the settings you give them. For example, Facebook gets a D, even with Privacy Protection enabled. Meanwhile, [chrisshort.net][4] gets a B with Privacy Protection enabled and a C with it disabled. If you're not keen on EFF or Privacy Badger for whatever reason, I would recommend DuckDuckGo Privacy Essentials (choose one, not both, as they essentially do the same thing). - -### HTTPS Everywhere - -[HTTPS Everywhere][5] is another extension from the EFF. According to HTTPS Everywhere, "Many sites on the web offer some limited support for encryption over HTTPS, but make it difficult to use. For instance, they may default to unencrypted HTTP or fill encrypted pages with links that go back to the unencrypted site. The HTTPS Everywhere extension fixes these problems by using clever technology to rewrite requests to these sites to HTTPS." While a lot of sites and browsers are getting better about implementing HTTPS, there are a lot of sites that still need help. HTTPS Everywhere will try its best to make sure your traffic is encrypted. - -### NoScript Security Suite - -[NoScript Security Suite][6] is not for the faint of heart. While the Firefox-only extension "allows JavaScript, Java, Flash, and other plugins to be executed only by trusted websites of your choice," it doesn't do a great job at figuring out what your choices are. But, make no mistake, a surefire way to prevent leaking data is not executing code that could leak it. NoScript enables that via its "whitelist-based preemptive script blocking." This means you will need to build the whitelist as you go for sites not already on it. Note that NoScript is only available for Firefox. - -### Facebook Container - -[Facebook Container][7] makes Firefox the only browser where I will use Facebook. "Facebook Container works by isolating your Facebook identity into a separate container that makes it harder for Facebook to track your visits to other websites with third-party cookies." This means Facebook cannot snoop on activity happening elsewhere in your browser. Suddenly those creepy ads will stop appearing so frequently (assuming you uninstalled the Facebook app from your mobile devices). 
Using Facebook in an isolated space will prevent any additional collection of data. Remember, you've given Facebook data already, and Facebook Container can't prevent that data from being shared. - -These are my go-to extensions for browser privacy. What are yours? Please share them in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/7/firefox-extensions-protect-privacy - -作者:[Chris Short][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/chrisshort -[1]:https://www.facebook.com/help/211829542181913 -[2]:https://www.eff.org/privacybadger -[3]:https://duckduckgo.com/app -[4]:https://chrisshort.net -[5]:https://www.eff.org/https-everywhere -[6]:https://noscript.net/ -[7]:https://addons.mozilla.org/en-US/firefox/addon/facebook-container/ diff --git a/sources/tech/20180709 Anbox- How To Install Google Play Store And Enable ARM (libhoudini) Support, The Easy Way.md b/sources/tech/20180709 Anbox- How To Install Google Play Store And Enable ARM (libhoudini) Support, The Easy Way.md deleted file mode 100644 index b537d61e2f..0000000000 --- a/sources/tech/20180709 Anbox- How To Install Google Play Store And Enable ARM (libhoudini) Support, The Easy Way.md +++ /dev/null @@ -1,103 +0,0 @@ -translating---geekpi - -Anbox: How To Install Google Play Store And Enable ARM (libhoudini) Support, The Easy Way -====== -**[Anbox][1], or Android in a Box, is a free and open source tool that allows running Android applications on Linux.** It works by running the Android runtime environment in an LXC container, recreating the directory structure of Android as a mountable loop image, while using the native Linux kernel to execute applications. - -Its key features are security, performance, integration and convergence (scales across different form factors), according to its website. - -**Using Anbox, each Android application or game is launched in a separate window, just like system applications** , and they behave more or less like regular windows, showing up in the launcher, can be tiled, etc. - -By default, Anbox doesn't ship with the Google Play Store or support for ARM applications. To install applications you must download each app APK and install it manually using adb. Also, installing ARM applications or games doesn't work by default with Anbox - trying to install ARM apps results in the following error being displayed: -``` -Failed to install PACKAGE.NAME.apk: Failure [INSTALL_FAILED_NO_MATCHING_ABIS: Failed to extract native libraries, res=-113] - -``` - -You can set up both Google Play Store and support for ARM applications (through libhoudini) manually for Android in a Box, but it's a quite complicated process. **To make it easier to install Google Play Store and Google Play Services on Anbox, and get it to support ARM applications and games (using libhoudini), the folks at[geeks-r-us.de][2] (linked article is in German) have created a [script][3] that automates these tasks.** - -Before using this, I'd like to make it clear that not all Android applications and games work in Anbox, even after integrating libhoudini for ARM support. Some Android applications and games may not show up in the Google Play Store at all, while others may be available for installation but will not work. 
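For reference, the manual adb route mentioned above looks roughly like this; the APK file name is only an example:

```
# With Anbox running, it shows up as an adb device
adb devices

# Sideload a downloaded APK into the Anbox container
adb install ~/Downloads/example-app.apk
```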
Also, some features may not be available in some applications. - -### Install Google Play Store and enable ARM applications / games support on Anbox (Android in a Box) - -These instructions will obviously not work if Anbox is not already installed on your Linux desktop. If you haven't already, install Anbox by following the installation instructions found - -`anbox.appmgr` - -at least once after installing Anbox and before using this script, to avoid running into issues. - -1\. Install the required dependencies (`wget` , `lzip` , `unzip` and `squashfs-tools`). - -In Debian, Ubuntu or Linux Mint, use this command to install the required dependencies: -``` -sudo apt install wget lzip unzip squashfs-tools - -``` - -2\. Download and run the script that automatically downloads and installs Google Play Store (and Google Play Services) and libhoudini (for ARM apps / games support) on your Android in a Box installation. - -**Warning: never run a script you didn't write without knowing what it does. Before running this script, check out its [code][4]. ** - -To download the script, make it executable and run it on your Linux desktop, use these commands in a terminal: -``` -wget https://raw.githubusercontent.com/geeks-r-us/anbox-playstore-installer/master/install-playstore.sh -chmod +x install-playstore.sh -sudo ./install-playstore.sh - -``` - -3\. To get Google Play Store to work in Anbox, you need to enable all the permissions for both Google Play Store and Google Play Services - -To do this, run Anbox: -``` -anbox.appmgr - -``` - -Then go to `Settings > Apps > Google Play Services > Permissions` and enable all available permissions. Do the same for Google Play Store! - -You should now be able to login using a Google account into Google Play Store. - -Without enabling all permissions for Google Play Store and Google Play Services, you may encounter an issue when trying to login to your Google account, with the following error message: " _Couldn't sign in. There was a problem communicating with Google servers. Try again later_ ", as you can see in this screenshot: - -After logging in, you can disable some of the Google Play Store / Google Play Services permissions. 
- -**If you're encountering some connectivity issues when logging in to your Google account on Anbox,** make sure the `anbox-bride.sh` is running: - - * to start it: - - -``` -sudo /snap/anbox/current/bin/anbox-bridge.sh start - -``` - - * to restart it: - - -``` -sudo /snap/anbox/current/bin/anbox-bridge.sh restart - -``` - -You may also need to install the dnsmasq package if you continue to have connectivity issues with Anbox, according to - - --------------------------------------------------------------------------------- - -via: https://www.linuxuprising.com/2018/07/anbox-how-to-install-google-play-store.html - -作者:[Logix][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/118280394805678839070 -[1]:https://anbox.io/ -[2]:https://geeks-r-us.de/2017/08/26/android-apps-auf-dem-linux-desktop/ -[3]:https://github.com/geeks-r-us/anbox-playstore-installer/ -[4]:https://github.com/geeks-r-us/anbox-playstore-installer/blob/master/install-playstore.sh -[5]:https://docs.anbox.io/userguide/install.html -[6]:https://github.com/anbox/anbox/issues/118#issuecomment-295270113 diff --git a/sources/tech/20180710 Users, Groups, and Other Linux Beasts.md b/sources/tech/20180710 Users, Groups, and Other Linux Beasts.md deleted file mode 100644 index 6083111a32..0000000000 --- a/sources/tech/20180710 Users, Groups, and Other Linux Beasts.md +++ /dev/null @@ -1,153 +0,0 @@ -Users, Groups, and Other Linux Beasts -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/flamingo-2458782_1920.jpg?itok=_gkzGGx5) - -Having reached this stage, [after seeing how to manipulate folders/directories][1], but before flinging ourselves headlong into fiddling with files, we have to brush up on the matter of _permissions_ , _users_ and _groups_. Luckily, [there is already an excellent and comprehensive tutorial on this site that covers permissions][2], so you should go and read that right now. In a nutshell: you use permissions to establish who can do stuff to files and directories and what they can do with each file and directory -- read from it, write to it, move it, erase it, etc. - -To try everything this tutorial covers, you'll need to create a new user on your system. Let's be practical and make a user for anybody who needs to borrow your computer, that is, what we call a _guest account_. - -**WARNING:** _Creating and especially deleting users, along with home directories, can seriously damage your system if, for example, you remove your own user and files by mistake. You may want to practice on another machine which is not your main work machine or on a virtual machine. Regardless of whether you want to play it safe, or not, it is always a good idea to back up your stuff frequently, check the backups have worked correctly, and save yourself a lot of gnashing of teeth later on._ - -### A New User - -You can create a new user with the `useradd` command. Run `useradd` with superuser/root privileges, that is using `sudo` or `su`, depending on your system, you can do: -``` -sudo useradd -m guest - -``` - -... and input your password. Or do: -``` -su -c "useradd -m guest" - -``` - -... and input the password of root/the superuser. - -( _For the sake of brevity, we'll assume from now on that you get superuser/root privileges by using`sudo`_ ). 
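Once the command has run, it only takes a moment to confirm that the account really exists:

```
# Show the new account's passwd entry (UID, GID, home directory, shell)
getent passwd guest

# Show its numeric IDs and group membership
id guest
```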
- -By including the `-m` argument, `useradd` will create a home directory for the new user. You can see its contents by listing _/home/guest_. - -Next you can set up a password for the new user with -``` -sudo passwd guest - -``` - -Or you could also use `adduser`, which is interactive and asks you a bunch of questions, including what shell you want to assign the user (yes, there are more than one), where you want their home directory to be, what groups you want them to belong to (more about that in a second) and so on. At the end of running `adduser`, you get to set the password. Note that `adduser` is not installed by default on many distributions, while `useradd` is. - -Incidentally, you can get rid of a user with `userdel`: -``` -sudo userdel -r guest - -``` - -With the `-r` option, `userdel` not only removes the _guest_ user, but also deletes their home directory and removes their entry in the mailing spool, if they had one. - -### Skeletons at Home - -Talking of users' home directories, depending on what distro you're on, you may have noticed that when you use the `-m` option, `useradd` populates a user's directory with subdirectories for music, documents, and whatnot as well as an assortment of hidden files. To see everything in you guest's home directory run `sudo ls -la /home/guest`. - -What goes into a new user's directory is determined by a skeleton directory which is usually _/etc/skel_. Sometimes it may be a different directory, though. To check which directory is being used, run: -``` -useradd -D -GROUP=100 -HOME=/home -INACTIVE=-1 -EXPIRE= -SHELL=/bin/bash -SKEL=/etc/skel -CREATE_MAIL_SPOOL=no - -``` - -This gives you some extra interesting information, but what you're interested in right now is the `SKEL=/etc/skel` line. In this case, and as is customary, it is pointing to _/etc/skel/_. - -As everything is customizable in Linux, you can, of course, change what gets put into a newly created user directory. Try this: Create a new directory in _/etc/skel/_ : -``` -sudo mkdir /etc/skel/Documents - -``` - -And create a file containing a welcome text and copy it over: -``` -sudo cp welcome.txt /etc/skel/Documents - -``` - -Now delete the guest account: -``` -sudo userdel -r guest - -``` - -And create it again: -``` -sudo useradd -m guest - -``` - -Hey presto! Your _Documents/_ directory and _welcome.txt_ file magically appear in the guest's home directory. - -You can also modify other things when you create a user by editing _/etc/default/useradd_. Mine looks like this: -``` -GROUP=users -HOME=/home -INACTIVE=-1 -EXPIRE= -SHELL=/bin/bash -SKEL=/etc/skel -CREATE_MAIL_SPOOL=no - -``` - -Most of these options are self-explanatory, but let's take a closer look at the `GROUP` option. - -### Herd Mentality - -Instead of assigning permissions and privileges to users one by one, Linux and other Unix-like operating systems rely on _groups_. A group is a what you imagine it to be: a bunch of users that are related in some way. On your system you may have a group of users that are allowed to use the printer. They would belong to the _lp_ (for " _line printer_ ") group. The members of the _wheel_ group were traditionally the only ones who could become superuser/root by using _su_. The _network_ group of users can bring up and power down the network. And so on and so forth. - -Different distributions have different groups and groups with the same or similar names have different privileges also depending on the distribution you are using. 
So don't be surprised if what you read in the prior paragraph doesn't match what is going on in your system. - -Either way, to see which groups are on your system you can use: -``` -getent group - -``` - -The `getent` command lists the contents of some of the system's databases. - -To find out which groups your current user belongs to, try: -``` -groups - -``` - -When you create a new user with `useradd`, unless you specify otherwise, the user will only belong to one group: their own. A _guest_ user will belong to a _guest_ group and the group gives the user the power to administer their own stuff and that is about it. - -You can create new groups and then add users to them at will with the `groupadd` command: -``` -sudo groupadd photos - -``` - -will create the _photos_ group, for example. Next time, we’ll use this to build a shared directory all members of the group can read from and write to, and we'll learn even more about permissions and privileges. Stay tuned! - -Learn more about Linux through the free ["Introduction to Linux" ][3]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts - -作者:[Paul Brown][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/bro66 -[1]:https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux -[2]:https://www.linux.com/learn/understanding-linux-file-permissions -[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180716 Users, Groups and Other Linux Beasts- Part 2.md b/sources/tech/20180716 Users, Groups and Other Linux Beasts- Part 2.md deleted file mode 100644 index b164bb6cf5..0000000000 --- a/sources/tech/20180716 Users, Groups and Other Linux Beasts- Part 2.md +++ /dev/null @@ -1,110 +0,0 @@ -Users, Groups and Other Linux Beasts: Part 2 -====== -![](https://www.linux.com/blog/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts-part-2) -In this ongoing tour of Linux, we’ve looked at [how to manipulate folders/directories][1], and now we’re continuing our discussion of _permissions_ , _users_ and _groups_ , which are necessary to establish who can manipulate which files and directories. [Last time,][2] we showed how to create new users, and now we’re going to dive right back in: - -You can create new groups and then add users to them at will with the `groupadd` command. For example, using: -``` -sudo groupadd photos - -``` - -will create the _photos_ group. - -You’ll need to [create a directory][1] hanging off the root directory: -``` -sudo mkdir /photos - -``` - -If you run `ls -l /`, one of the lines will be: -``` -drwxr-xr-x 1 root root 0 jun 26 21:14 photos - -``` - -The first _root_ in the output is the user owner and the second _root_ is the group owner. - -To transfer the ownership of the _/photos_ directory to the _photos_ group, use -``` -chgrp photos /photos - -``` - -The `chgrp` command typically takes two parameters, the first parameter is the group that will take ownership of the file or directory and the second is the file or directory you want to give over to the the group. 
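The command also accepts a `-R` flag when you want the group change applied to everything inside a directory, and `chown` can do the same job if you hand it only a group:

```
# Apply the group change recursively to the directory and its contents
sudo chgrp -R photos /photos

# Equivalent group-only change using chown
sudo chown :photos /photos
```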
- -Next, run `ls -l /` and you'll see the line has changed to: -``` -drwxr-xr-x 1 root photos 0 jun 26 21:14 photos - -``` - -You have successfully transferred the ownership of your new directory over to the _photos_ group. - -Then, add your own user and the _guest_ user to the _photos_ group: -``` -sudo usermod -a -G photos -sudo usermod guest -a -G photos - -``` - -You may have to log out and log back in to see the changes, but, when you do, running `groups` will show _photos_ as one of the groups you belong to. - -A couple of things to point out about the `usermod` command shown above. First: Be careful not to use the `-g` option instead of `-G`. The `-g` option changes your primary group and could lock you out of your stuff if you use it by accident. `-G`, on the other hand, _adds_ you to the groups listed and doesn't mess with the primary group. If you want to add your user to more groups than one, list them one after another, separated by commas, no spaces, after `-G`: -``` -sudo usermod -a -G photos,pizza,spaceforce - -``` - -Second: Be careful not to forget the `-a` parameter. The `-a` parameter stands for _append_ and attaches the list of groups you pass to `-G` to the ones you already belong to. This means that, if you don't include `-a`, the list of groups you already belong to, will be overwritten, again locking you out from stuff you need. - -Neither of these are catastrophic problems, but it will mean you will have to add your user back manually to all the groups you belonged to, which can be a pain, especially if you have lost access to the _sudo_ and _wheel_ group. - -### Permits, Please! - -There is still one more thing to do before you can copy images to the _/photos_ directory. Notice how, when you did `ls -l /` above, permissions for that folder came back as _drwxr-xr-x_. - -If you read [the article I recommended at the beginning of this post][3], you'll know that the first _d_ indicates that the entry in the file system is a directory, and then you have three sets of three characters ( _rwx_ , _r-x_ , _r-x_ ) that indicate the permissions for the user owner ( _rwx_ ) of the directory, then the group owner ( _r-x_ ), and finally the rest of the users ( _r-x_ ). This means that the only person who has write permissions so far, that is, the only person who can copy or create files in the _/photos_ directory, is the _root_ user. - -But [that article I mentioned also tells you how to change the permissions for a directory or file][3]: -``` -sudo chmod g+w /photos - -``` - -Running `ls -l /` after that will give you _/photos_ permissions as _drwxrwxr-x_ which is what you want: group members can now write into the directory. - -Now you can try and copy an image or, indeed, any other file to the directory and it should go through without a problem: -``` -cp image.jpg /photos - -``` - -The _guest_ user will also be able to read and write from the directory. They will also be able to read and write to it, and even move or delete files created by other users within the shared directory. - -### Conclusion - -The permissions and privileges system in Linux has been honed over decades. inherited as it is from the old Unix systems of yore. As such, it works very well and is well thought out. Becoming familiar with it is essential for any Linux sysadmin. In fact, you can't do much admining at all unless you understand it. But, it's not that hard. - -Next time, we'll be dive into files and see the different ways of creating, manipulating, and destroying them in creative ways. 
Always fun, that last one. - -See you then! - -Learn more about Linux through the free ["Introduction to Linux" ][4]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts-part-2 - -作者:[Paul Brown][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/bro66 -[1]:https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux -[2]:https://www.linux.com/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts -[3]:https://www.linux.com/learn/understanding-linux-file-permissions -[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180719 Building tiny container images.md b/sources/tech/20180719 Building tiny container images.md deleted file mode 100644 index bdaef5f08c..0000000000 --- a/sources/tech/20180719 Building tiny container images.md +++ /dev/null @@ -1,362 +0,0 @@ -Building tiny container images -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf) - -When [Docker][1] exploded onto the scene a few years ago, it brought containers and container images to the masses. Although Linux containers existed before then, Docker made it easy to get started with a user-friendly command-line interface and an easy-to-understand way to build images using the Dockerfile format. But while it may be easy to jump in, there are still some nuances and tricks to building container images that are usable, even powerful, but still small in size. - -### First pass: Clean up after yourself - -Some of these examples involve the same kind of cleanup you would use with a traditional server, but more rigorously followed. Smaller image sizes are critical for quickly moving images around, and storing multiple copies of unnecessary data on disk is a waste of resources. Consequently, these techniques should be used more regularly than on a server with lots of dedicated storage. - -An example of this kind of cleanup is removing cached files from an image to recover space. Consider the difference in size between a base image with [Nginx][2] installed by `dnf` with and without the metadata and yum cache cleaned up: -``` -# Dockerfile with cache - -FROM fedora:28 - -LABEL maintainer Chris Collins - - - -RUN dnf install -y nginx - - - ------ - - - -# Dockerfile w/o cache - -FROM fedora:28 - -LABEL maintainer Chris Collins - - - -RUN dnf install -y nginx \ - -        && dnf clean all \ - -        && rm -rf /var/cache/yum - - - ------ - - - -[chris@krang] $ docker build -t cache -f Dockerfile .   - -[chris@krang] $ docker images --format "{{.Repository}}: {{.Size}}" - -| head -n 1 - -cache: 464 MB - - - -[chris@krang] $ docker build -t no-cache -f Dockerfile-wo-cache . - -[chris@krang] $ docker images --format "{{.Repository}}: {{.Size}}"  | head -n 1 - -no-cache: 271 MB - -``` - -That is a significant difference in size. The version with the `dnf` cache is almost twice the size of the image without the metadata and cache. Package manager cache, Ruby gem temp files, `nodejs` cache, even downloaded source tarballs are all perfect candidates for cleaning up. 
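The same trick carries over to other package managers. A Debian-based equivalent might look something like this; the base image is only an example and is not one of the images measured above:

```
FROM debian:stretch-slim

# Install and clean up in a single RUN so the apt cache never lands in a layer
RUN apt-get update \
    && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*
```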
- -### Layers—a potential gotcha - -Unfortunately (or fortunately, as you’ll see later), based on the way layers work with containers, you cannot simply add a `RUN rm -rf /var/cache/yum` line to your Dockerfile and call it a day. Each instruction of a Dockerfile is stored in a layer, with changes between layers applied on top. So even if you were to do this: -``` -RUN dnf install -y nginx - -RUN dnf clean all - -RUN rm -rf /var/cache/yum - -``` - -...you’d still end up with three layers, one of which contains all the cache, and two intermediate layers that "remove" the cache from the image. But the cache is actually still there, just as when you mount a filesystem over the top of another one, the files are there—you just can’t see or access them. - -You’ll notice that the example in the previous section chains the cache cleanup in the same Dockerfile instruction where the cache is generated: -``` -RUN dnf install -y nginx \ - -        && dnf clean all \ - -        && rm -rf /var/cache/yum - -``` - -This is a single instruction and ends up being a single layer within the image. You’ll lose a bit of the Docker (*ahem*) cache this way, making a rebuild of the image slightly longer, but the cached data will not end up in your final image. As a nice compromise, just chaining related commands (e.g., `yum install` and `yum clean all`, or downloading, extracting and removing a source tarball, etc.) can save a lot on your final image size while still allowing you to take advantage of the Docker cache for quicker development. - -This layer "gotcha" is more subtle than it first appears, though. Because the image layers document the _changes_ to each layer, one upon another, it’s not just the existence of files that add up, but any change to the file. For example, _even changing the mode_ of the file creates a copy of that file in the new layer. - -For example, the output of `docker images` below shows information about two images. The first, `layer_test_1`, was created by adding a single 1GB file to a base CentOS image. The second image, `layer_test_2`, was created `FROM layer_test_1` and did nothing but change the mode of the 1GB file with `chmod u+x`. -``` -layer_test_2        latest       e11b5e58e2fc           7 seconds ago           2.35 GB - -layer_test_1        latest       6eca792a4ebe           2 minutes ago           1.27 GB - -``` - -As you can see, the new image is more than 1GB larger than the first. Despite the fact that `layer_test_1` is only the first two layers of `layer_test_2`, there’s still an extra 1GB file floating around hidden inside the second image. This is true anytime you remove, move, or change any file during the image build process. - -### Purpose-built images vs. flexible images - -An anecdote: As my office heavily invested in [Ruby on Rails][3] applications, we began to embrace the use of containers. One of the first things we did was to create an official Ruby base image for all of our teams to use. For simplicity’s sake (and suffering under “this is the way we did it on our servers”), we used [rbenv][4] to install the latest four versions of Ruby into the image, allowing our developers to migrate all of their applications into containers using a single image. This resulted in a very large but flexible (we thought) image that covered all the bases of the various teams we were working with. - -This turned out to be wasted work. 
The effort required to maintain separate, slightly modified versions of a particular image was easy to automate, and selecting a specific image with a specific version actually helped to identify applications approaching end-of-life before a breaking change was introduced, wreaking havoc downstream. It also wasted resources: When we started to split out the different versions of Ruby, we ended up with multiple images that shared a single base and took up very little extra space if they coexisted on a server, but were considerably smaller to ship around than a giant image with multiple versions installed. - -That is not to say building flexible images is not helpful, but in this case, creating purpose-build images from a common base ended up saving both storage space and maintenance time, and each team could modify their setup however they needed while maintaining the benefit of the common base image. - -### Start without the cruft: Add what you need to a blank image - -As friendly and easy-to-use as the _Dockerfile_ is, there are tools available that offer the flexibility to create very small Docker-compatible container images without the cruft of a full operating system—even those as small as the standard Docker base images. - -[I’ve written about Buildah before][5], and I’ll mention it again because it is flexible enough to create an image from scratch using tools from your host to install packaged software and manipulate the image. Those tools then never need to be included in the image itself. - -Buildah replaces the `docker build` command. With it, you can mount the filesystem of your container image to your host machine and interact with it using tools from the host. - -Let’s try Buildah with the Nginx example from above (ignoring caches for now): -``` -#!/usr/bin/env bash - -set -o errexit - - - -# Create a container - -container=$(buildah from scratch) - - - -# Mount the container filesystem - -mountpoint=$(buildah mount $container) - - - -# Install a basic filesystem and minimal set of packages, and nginx - -dnf install --installroot $mountpoint  --releasever 28 glibc-minimal-langpack nginx --setopt install_weak_deps=false -y - - - -# Save the container to an image - -buildah commit --format docker $container nginx - - - -# Cleanup - -buildah unmount $container - - - -# Push the image to the Docker daemon’s storage - -buildah push nginx:latest docker-daemon:nginx:latest - -``` - -You’ll notice we’re no longer using a Dockerfile to build the image, but a simple Bash script, and we’re building it from a scratch (or blank) image. The Bash script mounts the container’s root filesystem to a mount point on the host, and then uses the hosts’ command to install the packages. This way the package manager doesn’t even have to exist inside the container. - -Without extra cruft—all the extra stuff in the base image, like `dnf`, for example—the image weighs in at only 304 MB, more than 100 MB smaller than the Nginx image built with a Dockerfile above. -``` -[chris@krang] $ docker images |grep nginx - -docker.io/nginx      buildah      2505d3597457    4 minutes ago         304 MB - -``` - -_Note: The image name has`docker.io` appended to it due to the way the image is pushed into the Docker daemon’s namespace, but it is still the image built locally with the build script above._ - -That 100 MB is already a huge savings when you consider a base image is already around 300 MB on its own. Installing Nginx with a package manager brings in a ton of dependencies, too. 
For something compiled from source using tools from the host, the savings can be even greater because you can choose the exact dependencies and not pull in any extra files you don’t need. - -If you’d like to try this route, [Tom Sweeney][6] wrote a much more in-depth article, [Creating small containers with Buildah][7], which you should check out. - -Using Buildah to build images without a full operating system and included build tools can enable much smaller images than you would otherwise be able to create. For some types of images, we can take this approach even further and create images with _only_ the application itself included. - -### Create images with only statically linked binaries - -Following the same philosophy that leads us to ditch administrative and build tools inside images, we can go a step further. If we specialize enough and abandon the idea of troubleshooting inside of production containers, do we need Bash? Do we need the [GNU core utilities][8]? Do we _really_ need the basic Linux filesystem? You can do this with any compiled language that allows you to create binaries with [statically linked libraries][9]—where all the libraries and functions needed by the program are copied into and stored within the binary itself. - -This is a relatively popular way of doing things within the [Golang][10] community, so we’ll use a Go application to demonstrate. - -The Dockerfile below takes a small Go Hello-World application and compiles it in an image `FROM golang:1.8`: -``` -FROM golang:1.8 - - - -ENV GOOS=linux - -ENV appdir=/go/src/gohelloworld - - - -COPY ./ /go/src/goHelloWorld - -WORKDIR /go/src/goHelloWorld - - - -RUN go get - -RUN go build -o /goHelloWorld -a - - - -CMD ["/goHelloWorld"] - -``` - -The resulting image, containing the binary, the source code, and the base image layer comes in at 716 MB. The only thing we actually need for our application is the compiled binary, however. Everything else is unused cruft that gets shipped around with our image. - -If we disable `cgo` with `CGO_ENABLED=0` when we compile, we can create a binary that doesn’t wrap C libraries for some of its functions: -``` -GOOS=linux CGO_ENABLED=0 go build -a goHelloWorld.go - -``` - -The resulting binary can be added to an empty, or "scratch" image: -``` -FROM scratch - -COPY goHelloWorld / - -CMD ["/goHelloWorld"] - -``` - -Let’s compare the difference in image size between the two: -``` -[ chris@krang ] $ docker images - -REPOSITORY      TAG             IMAGE ID                CREATED                 SIZE - -goHello     scratch     a5881650d6e9            13 seconds ago          1.55 MB - -goHello     builder     980290a100db            14 seconds ago          716 MB - -``` - -That’s a huge difference. The image built from `golang:1.8` with the `goHelloWorld` binary in it (tagged "builder" above) is _460_ times larger than the scratch image with just the binary. The entirety of the scratch image with the binary is only 1.55 MB. That means we’d be shipping around 713 MB of unnecessary data if we used the builder image. - -As mentioned above, this method of creating small images is used often in the Golang community, and there is no shortage of blog posts on the subject. [Kelsey Hightower][11] wrote [an article on the subject][12] that goes into more detail, including dealing with dependencies other than just C libraries. - -### Consider squashing, if it works for you - -There’s an alternative to chaining all the commands into layers in an attempt to save space: Squashing your image. 
When you squash an image, you’re really exporting it, removing all the intermediate layers, and saving a single layer with the current state of the image. This has the advantage of reducing that image to a much smaller size. - -Squashing layers used to require some creative workarounds to flatten an image—exporting the contents of a container and re-importing it as a single layer image, or using tools like `docker-squash`. Starting in version 1.13, Docker introduced a handy flag, `--squash`, to accomplish the same thing during the build process: -``` -FROM fedora:28 - -LABEL maintainer Chris Collins - - - -RUN dnf install -y nginx - -RUN dnf clean all - -RUN rm -rf /var/cache/yum - - - -[chris@krang] $ docker build -t squash -f Dockerfile-squash --squash . - -[chris@krang] $ docker images --format "{{.Repository}}: {{.Size}}"  | head -n 1 - -squash: 271 MB - -``` - -Using `docker squash` with this multi-layer Dockerfile, we end up with another 271MB image, as we did with the chained instruction example. This works great for this use case, but there’s a potential gotcha. - -“What? ANOTHER gotcha?” - -Well, sort of—it’s the same issue as before, causing problems in another way. - -### Going too far: Too squashed, too small, too specialized - -Images can share layers. The base may be _x_ megabytes in size, but it only needs to be pulled/stored once and each image can use it. The effective size of all the images sharing layers is the base layers plus the diff of each specific change on top of that. In this way, thousands of images may take up only a small amount more than a single image. - -This is a drawback with squashing or specializing too much. When you squash an image into a single layer, you lose any opportunity to share layers with other images. Each image ends up being as large as the total size of its single layer. This might work well for you if you use only a few images and run many containers from them, but if you have many diverse images, it could end up costing you space in the long run. - -Revisiting the Nginx squash example, we can see it’s not a big deal for this case. We end up with Fedora, Nginx installed, no cache, and squashing that is fine. Nginx by itself is not incredibly useful, though. You generally need customizations to do anything interesting—e.g., configuration files, other software packages, maybe some application code. Each of these would end up being more instructions in the Dockerfile. - -With a traditional image build, you would have a single base image layer with Fedora, a second layer with Nginx installed (with or without cache), and then each customization would be another layer. Other images with Fedora and Nginx could share these layers. - -Need an image: -``` -[   App 1 Layer (  5 MB) ]          [   App 2 Layer (6 MB) ] - -[   Nginx Layer ( 21 MB) ] ------------------^ - -[ Fedora  Layer (249 MB) ]   - -``` - -But if you squash the image, then even the Fedora base layer is squashed. Any squashed image based on Fedora has to ship around its own Fedora content, adding another 249 MB for _each image!_ -``` -[ Fedora + Nginx + App 1 (275 MB)]      [ Fedora + Nginx + App 2 (276 MB) ]   - -``` - -This also becomes a problem if you build lots of highly specialized, super-tiny images. - -As with everything in life, moderation is key. Again, thanks to how layers work, you will find diminishing returns as your container images become smaller and more specialized and can no longer share base layers with other related images. 
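If you are not sure whether two of your images actually share layers, you can check by comparing their layer digests. A quick sketch, assuming two locally built images named `app1` and `app2` (the names are placeholders):
```
# List the layer digests of each image; identical digests are stored only once.
docker image inspect --format '{{range .RootFS.Layers}}{{println .}}{{end}}' app1 > app1-layers.txt
docker image inspect --format '{{range .RootFS.Layers}}{{println .}}{{end}}' app2 > app2-layers.txt

# Digests that appear in both lists are shared layers.
comm -12 <(sort app1-layers.txt) <(sort app2-layers.txt)
```

Shared digests mean shared storage; once every image carries its own squashed copy of the base, that overlap disappears.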
- -Images with small customizations can share base layers. As explained above, the base may be _x_ megabytes in size, but it only needs to be pulled/stored once and each image can use it. The effective size of all the images is the base layers plus the diff of each specific change on top of that. In this way, thousands of images may take up only a small amount more than a single image. -``` -[ specific app   ]      [ specific app 2 ] - -[ customizations ]--------------^ - -[ base layer     ] - -``` - -If you go too far with your image shrinking and you have too many variations or specializations, you can end up with many images, none of which share base layers and all of which take up their own space on disk. -``` - [ specific app 1 ]     [ specific app 2 ]      [ specific app 3 ] - -``` - -### Conclusion - -There are a variety of different ways to reduce the amount of storage space and bandwidth you spend working with container images, but the most effective way is to reduce the size of the images themselves. Whether you simply clean up your caches (avoiding leaving them orphaned in intermediate layers), squash all your layers into one, or add only static binaries in an empty image, it’s worth spending some time looking at where bloat might exist in your container images and slimming them down to an efficient size. - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/7/building-container-images - -作者:[Chris Collins][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/clcollins -[1]:https://www.docker.com/ -[2]:https://www.nginx.com/ -[3]:https://rubyonrails.org/ -[4]:https://github.com/rbenv/rbenv -[5]:https://opensource.com/article/18/6/getting-started-buildah -[6]:https://twitter.com/TSweeneyRedHat -[7]:https://opensource.com/article/18/5/containers-buildah -[8]:https://www.gnu.org/software/coreutils/coreutils.html -[9]:https://en.wikipedia.org/wiki/Static_library -[10]:https://golang.org/ -[11]:https://twitter.com/kelseyhightower -[12]:https://medium.com/@kelseyhightower/optimizing-docker-images-for-static-binaries-b5696e26eb07 diff --git a/sources/tech/20180724 How To Mount Google Drive Locally As Virtual File System In Linux.md b/sources/tech/20180724 How To Mount Google Drive Locally As Virtual File System In Linux.md deleted file mode 100644 index 3f804ffe9e..0000000000 --- a/sources/tech/20180724 How To Mount Google Drive Locally As Virtual File System In Linux.md +++ /dev/null @@ -1,265 +0,0 @@ -How To Mount Google Drive Locally As Virtual File System In Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/07/Google-Drive-720x340.png) - -[**Google Drive**][1] is the one of the popular cloud storage provider on the planet. As of 2017, over 800 million users are actively using this service worldwide. Even though the number of users have dramatically increased, Google haven’t released a Google drive client for Linux yet. But it didn’t stop the Linux community. Every now and then, some developers had brought few google drive clients for Linux operating system. In this guide, we will see three unofficial google drive clients for Linux. Using these clients, you can mount Google drive locally as a virtual file system and access your drive files in your Linux box. Read on. - -### 1. 
Google-drive-ocamlfuse - -The **google-drive-ocamlfuse** is a FUSE filesystem for Google Drive, written in OCaml. For those wondering, FUSE, stands for **F** ilesystem in **Use** rspace, is a project that allows the users to create virtual file systems in user level. **google-drive-ocamlfuse** allows you to mount your Google Drive on Linux system. It features read/write access to ordinary files and folders, read-only access to Google docks, sheets, and slides, support for multiple google drive accounts, duplicate file handling, access to your drive trash directory, and more. - -#### Installing google-drive-ocamlfuse - -google-drive-ocamlfuse is available in the [**AUR**][2], so you can install it using any AUR helper programs, for example [**Yay**][3]. -``` -$ yay -S google-drive-ocamlfuse - -``` - -On Ubuntu: -``` -$ sudo add-apt-repository ppa:alessandro-strada/ppa -$ sudo apt-get update -$ sudo apt-get install google-drive-ocamlfuse - -``` - -To install latest beta version, do: -``` -$ sudo add-apt-repository ppa:alessandro-strada/google-drive-ocamlfuse-beta -$ sudo apt-get update -$ sudo apt-get install google-drive-ocamlfuse - -``` - -#### Usage - -Once installed, run the following command to launch **google-drive-ocamlfuse** utility from your Terminal: -``` -$ google-drive-ocamlfuse - -``` - -When you run this first time, the utility will open your web browser and ask your permission to authorize your google drive files. Once you gave authorization, all necessary config files and folders it needs to mount your google drive will be automatically created. - -![][5] - -After successful authentication, you will see the following message in your Terminal. -``` -Access token retrieved correctly. - -``` - -You’re good to go now. Close the web browser and then create a mount point to mount your google drive files. -``` -$ mkdir ~/mygoogledrive - -``` - -Finally, mount your google drive using command: -``` -$ google-drive-ocamlfuse ~/mygoogledrive - -``` - -Congratulations! You can access access your files either from Terminal or file manager. - -From **Terminal** : -``` -$ ls ~/mygoogledrive - -``` - -From **File manager** : - -![][6] - -If you have more than one account, use **label** option to distinguish different accounts like below. -``` -$ google-drive-ocamlfuse -label label [mountpoint] - -``` - -Once you’re done, unmount the FUSE flesystem using command: -``` -$ fusermount -u ~/mygoogledrive - -``` - -For more details, refer man pages. -``` -$ google-drive-ocamlfuse --help - -``` - -Also, do check the [**official wiki**][7] and the [**project GitHub repository**][8] for more details. - -### 2. GCSF - -**GCSF** is a FUSE filesystem based on Google Drive, written using **Rust** programming language. The name GCSF has come from the Romanian word “ **G** oogle **C** onduce **S** istem de **F** ișiere”, which means “Google Drive Filesystem” in English. Using GCSF, you can mount your Google drive as a local virtual file system and access the contents from the Terminal or file manager. You might wonder how it differ from other Google Drive FUSE projects, for example **google-drive-ocamlfuse**. The developer of GCSF replied to a similar [comment on Reddit][9] “GCSF tends to be faster in several cases (listing files recursively, reading large files from Drive). The caching strategy it uses also leads to very fast reads (x4-7 improvement compared to google-drive-ocamlfuse) for files that have been cached, at the cost of using more RAM“. 
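Before moving on to GCSF's installation, one practical note on the google-drive-ocamlfuse workflow above: the mount and unmount commands are easy to wrap in a small script, so the drive is mounted only when it is not already. A sketch (the mount point path is just an example):
```
#!/usr/bin/env bash
# Mount Google Drive with google-drive-ocamlfuse unless it is already mounted.
MOUNTPOINT="$HOME/mygoogledrive"

mkdir -p "$MOUNTPOINT"

if mountpoint -q "$MOUNTPOINT"; then
    echo "Google Drive is already mounted at $MOUNTPOINT"
else
    google-drive-ocamlfuse "$MOUNTPOINT" && echo "Mounted at $MOUNTPOINT"
fi

# To unmount later:
#   fusermount -u "$MOUNTPOINT"
```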
- -#### Installing GCSF - -GCSF is available in the [**AUR**][10], so the Arch Linux users can install it using any AUR helper, for example [**Yay**][3]. -``` -$ yay -S gcsf-git - -``` - -For other distributions, do the following. - -Make sure you have installed Rust on your system. - -Make sure **pkg-config** and the **fuse** packages are installed. They are available in the default repositories of most Linux distributions. For example, on Ubuntu and derivatives, you can install them using command: -``` -$ sudo apt-get install -y libfuse-dev pkg-config - -``` - -Once all dependencies installed, run the following command to install GCSF: -``` -$ cargo install gcsf - -``` - -#### Usage - -First, we need to authorize our google drive. To do so, simply run: -``` -$ gcsf login ostechnix - -``` - -You must specify a session name. Replace **ostechnix** with your own session name. You will see an output something like below with an URL to authorize your google drive account. - -![][11] - -Just copy and navigate to the above URL from your browser and click **allow** to give permission to access your google drive contents. Once you gave the authentication you will see an output like below. -``` -Successfully logged in. Credentials saved to "/home/sk/.config/gcsf/ostechnix". - -``` - -GCSF will create a configuration file in **$XDG_CONFIG_HOME/gcsf/gcsf.toml** , which is usually defined as **$HOME/.config/gcsf/gcsf.toml**. Credentials are stored in the same directory. - -Next, create a directory to mount your google drive contents. -``` -$ mkdir ~/mygoogledrive - -``` - -Then, edit **/etc/fuse.conf** file: -``` -$ sudo vi /etc/fuse.conf - -``` - -Uncomment the following line to allow non-root users to specify the allow_other or allow_root mount options. -``` -user_allow_other - -``` - -Save and close the file. - -Finally, mount your google drive using command: -``` -$ gcsf mount ~/mygoogledrive -s ostechnix - -``` - -Sample output: -``` -INFO gcsf > Creating and populating file system... -INFO gcsf > File sytem created. -INFO gcsf > Mounting to /home/sk/mygoogledrive -INFO gcsf > Mounted to /home/sk/mygoogledrive -INFO gcsf::gcsf::file_manager > Checking for changes and possibly applying them. -INFO gcsf::gcsf::file_manager > Checking for changes and possibly applying them. - -``` - -Again, replace **ostechnix** with your session name. You can view the existing sessions using command: -``` -$ gcsf list -Sessions: -- ostechnix - -``` - -You can now access your google drive contents either from the Terminal or from File manager. - -From **Terminal** : -``` -$ ls ~/mygoogledrive - -``` - -From **File manager** : - -![][12] - -If you don’t know where your Google drive is mounted, use **df** or **mount** command as shown below. -``` -$ df -h -Filesystem Size Used Avail Use% Mounted on -udev 968M 0 968M 0% /dev -tmpfs 200M 1.6M 198M 1% /run -/dev/sda1 20G 7.5G 12G 41% / -tmpfs 997M 0 997M 0% /dev/shm -tmpfs 5.0M 4.0K 5.0M 1% /run/lock -tmpfs 997M 0 997M 0% /sys/fs/cgroup -tmpfs 200M 40K 200M 1% /run/user/1000 -GCSF 15G 857M 15G 6% /home/sk/mygoogledrive - -$ mount | grep GCSF -GCSF on /home/sk/mygoogledrive type fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000,allow_other) - -``` - -Once done, unmount the google drive using command: -``` -$ fusermount -u ~/mygoogledrive - -``` - -Check the [**GCSF GitHub repository**][13] for more details. - -### 3. Tuxdrive - -**Tuxdrive** is yet another unofficial google drive client for Linux. We have written a detailed guide about Tuxdrive a while ago. 
Please check the following link. - -Of course, there were few other unofficial google drive clients available in the past, such as Grive2, Syncdrive. But it seems that they are discontinued now. I will keep updating this list when I come across any active google drive clients. - -And, that’s all for now, folks. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.google.com/drive/ -[2]:https://aur.archlinux.org/packages/google-drive-ocamlfuse/ -[3]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[4]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[5]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive.png -[6]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-2.png -[7]:https://github.com/astrada/google-drive-ocamlfuse/wiki/Configuration -[8]:https://github.com/astrada/google-drive-ocamlfuse -[9]:https://www.reddit.com/r/DataHoarder/comments/8vlb2v/google_drive_as_a_file_system/e1oh9q9/ -[10]:https://aur.archlinux.org/packages/gcsf-git/ -[11]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-3.png -[12]:http://www.ostechnix.com/wp-content/uploads/2018/07/google-drive-4.png -[13]:https://github.com/harababurel/gcsf diff --git a/sources/tech/20180725 Build an interactive CLI with Node.js.md b/sources/tech/20180725 Build an interactive CLI with Node.js.md deleted file mode 100644 index f240e51efd..0000000000 --- a/sources/tech/20180725 Build an interactive CLI with Node.js.md +++ /dev/null @@ -1,533 +0,0 @@ -translating by Flowsnow - -Build an interactive CLI with Node.js -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keyboard_coding.png?itok=E0Vvam7A) - -Node.js can be very useful when it comes to building command-line interfaces (CLIs). In this post, I'll teach you how to use [Node.js][1] to build a CLI that asks some questions and creates a file based on the answers. - -### Get started - -Let's start by creating a brand new [npm][2] package. (Npm is the JavaScript package manager.) -``` -mkdir my-script - -cd my-script - -npm init - -``` - -Npm will ask some questions. After that, we need to install some packages. -``` -npm install --save chalk figlet inquirer shelljs - -``` - -Here's what these packages do: - - * **Chalk:** Terminal string styling done right - * **Figlet:** A program for making large letters out of ordinary text - * **Inquirer:** A collection of common interactive command-line user interfaces - * **ShellJS:** Portable Unix shell commands for Node.js - - - -### Make an index.js file - -Now we'll create an `index.js` file with the following content: -``` -#!/usr/bin/env node - - - -const inquirer = require("inquirer"); - -const chalk = require("chalk"); - -const figlet = require("figlet"); - -const shell = require("shelljs"); - -``` - -### Plan the CLI - -It's always good to plan what a CLI needs to do before writing any code. This CLI will do just one thing: **create a file**. 
- -The CLI will ask two questions—what is the filename and what is the extension?—then create the file, and show a success message with the created file path. -``` -// index.js - - - -const run = async () => { - -  // show script introduction - -  // ask questions - -  // create the file - -  // show success message - -}; - - - -run(); - -``` - -The first function is the script introduction. Let's use `chalk` and `figlet` to get the job done. -``` -const init = () => { - -  console.log( - -    chalk.green( - -      figlet.textSync("Node JS CLI", { - -        font: "Ghost", - -        horizontalLayout: "default", - -        verticalLayout: "default" - -      }) - -    ) - -  ); - -} - - - -const run = async () => { - -  // show script introduction - -  init(); - - - -  // ask questions - -  // create the file - -  // show success message - -}; - - - -run(); - -``` - -Second, we'll write a function that asks the questions. -``` -const askQuestions = () => { - -  const questions = [ - -    { - -      name: "FILENAME", - -      type: "input", - -      message: "What is the name of the file without extension?" - -    }, - -    { - -      type: "list", - -      name: "EXTENSION", - -      message: "What is the file extension?", - -      choices: [".rb", ".js", ".php", ".css"], - -      filter: function(val) { - -        return val.split(".")[1]; - -      } - -    } - -  ]; - -  return inquirer.prompt(questions); - -}; - - - -// ... - - - -const run = async () => { - -  // show script introduction - -  init(); - - - -  // ask questions - -  const answers = await askQuestions(); - -  const { FILENAME, EXTENSION } = answers; - - - -  // create the file - -  // show success message - -}; - -``` - -Notice the constants FILENAME and EXTENSIONS that came from `inquirer`. - -The next step will create the file. -``` -const createFile = (filename, extension) => { - -  const filePath = `${process.cwd()}/${filename}.${extension}` - -  shell.touch(filePath); - -  return filePath; - -}; - - - -// ... - - - -const run = async () => { - -  // show script introduction - -  init(); - - - -  // ask questions - -  const answers = await askQuestions(); - -  const { FILENAME, EXTENSION } = answers; - - - -  // create the file - -  const filePath = createFile(FILENAME, EXTENSION); - - - -  // show success message - -}; - -``` - -And last but not least, we'll show the success message along with the file path. -``` -const success = (filepath) => { - -  console.log( - -    chalk.white.bgGreen.bold(`Done! File created at ${filepath}`) - -  ); - -}; - - - -// ... - - - -const run = async () => { - -  // show script introduction - -  init(); - - - -  // ask questions - -  const answers = await askQuestions(); - -  const { FILENAME, EXTENSION } = answers; - - - -  // create the file - -  const filePath = createFile(FILENAME, EXTENSION); - - - -  // show success message - -  success(filePath); - -}; - -``` - -Let's test the script by running `node index.js`. 
Here's what we get: - -### The full code - -Here is the final code: -``` -#!/usr/bin/env node - - - -const inquirer = require("inquirer"); - -const chalk = require("chalk"); - -const figlet = require("figlet"); - -const shell = require("shelljs"); - - - -const init = () => { - -  console.log( - -    chalk.green( - -      figlet.textSync("Node JS CLI", { - -        font: "Ghost", - -        horizontalLayout: "default", - -        verticalLayout: "default" - -      }) - -    ) - -  ); - -}; - - - -const askQuestions = () => { - -  const questions = [ - -    { - -      name: "FILENAME", - -      type: "input", - -      message: "What is the name of the file without extension?" - -    }, - -    { - -      type: "list", - -      name: "EXTENSION", - -      message: "What is the file extension?", - -      choices: [".rb", ".js", ".php", ".css"], - -      filter: function(val) { - -        return val.split(".")[1]; - -      } - -    } - -  ]; - -  return inquirer.prompt(questions); - -}; - - - -const createFile = (filename, extension) => { - -  const filePath = `${process.cwd()}/${filename}.${extension}` - -  shell.touch(filePath); - -  return filePath; - -}; - - - -const success = filepath => { - -  console.log( - -    chalk.white.bgGreen.bold(`Done! File created at ${filepath}`) - -  ); - -}; - - - -const run = async () => { - -  // show script introduction - -  init(); - - - -  // ask questions - -  const answers = await askQuestions(); - -  const { FILENAME, EXTENSION } = answers; - - - -  // create the file - -  const filePath = createFile(FILENAME, EXTENSION); - - - -  // show success message - -  success(filePath); - -}; - - - -run(); - -``` - -### Use the script anywhere - -To execute this script anywhere, add a `bin` section in your `package.json` file and run `npm link`. -``` -{ - -  "name": "creator", - -  "version": "1.0.0", - -  "description": "", - -  "main": "index.js", - -  "scripts": { - -    "test": "echo \"Error: no test specified\" && exit 1", - -    "start": "node index.js" - -  }, - -  "author": "", - -  "license": "ISC", - -  "dependencies": { - -    "chalk": "^2.4.1", - -    "figlet": "^1.2.0", - -    "inquirer": "^6.0.0", - -    "shelljs": "^0.8.2" - -  }, - -  "bin": { - -    "creator": "./index.js" - -  } - -} - -``` - -Running `npm link` makes this script available anywhere. - -That's what happens when you run this command: -``` -/usr/bin/creator -> /usr/lib/node_modules/creator/index.js - -/usr/lib/node_modules/creator -> /home/hugo/code/creator - -``` - -It links the `index.js` file as an executable. This is only possible because of the first line of the CLI script: `#!/usr/bin/env node`. - -Now we can run this script by calling: -``` -$ creator - -``` - -### Wrapping up - -As you can see, Node.js makes it very easy to build nice command-line tools! If you want to go even further, check this other packages: - - * [meow][3] – a simple command-line helper - * [yargs][4] – a command-line opt-string parser - * [pkg][5] – package your Node.js project into an executable - - - -Tell us about your experience building a CLI in the comments. 
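If you want to keep experimenting, one small extension is to pre-fill the new file with some starter content right after it is created. A minimal sketch using Node's built-in `fs` module (the starter strings are placeholders, not part of the original script):
```
const fs = require("fs");

const createFileWithContent = (filename, extension) => {
  const filePath = `${process.cwd()}/${filename}.${extension}`;
  // Placeholder starter content keyed by extension; adjust to taste.
  const starters = { js: "'use strict';\n", rb: "# frozen_string_literal: true\n" };
  fs.writeFileSync(filePath, starters[extension] || "");
  return filePath;
};
```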
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/7/node-js-interactive-cli - -作者:[Hugo Dias][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/hugodias -[1]:https://nodejs.org/en/ -[2]:https://www.npmjs.com/ -[3]:https://github.com/sindresorhus/meow -[4]:https://github.com/yargs/yargs -[5]:https://github.com/zeit/pkg diff --git a/sources/tech/20180727 How to analyze your system with perf and Python.md b/sources/tech/20180727 How to analyze your system with perf and Python.md index 4573ed1ee0..c1be98cc0e 100644 --- a/sources/tech/20180727 How to analyze your system with perf and Python.md +++ b/sources/tech/20180727 How to analyze your system with perf and Python.md @@ -1,7 +1,3 @@ -**translating by [erlinux](https://github.com/erlinux)** -**PROJECT MANAGEMENT TOOL called [gn2.sh](https://github.com/lctt/lctt-cli)** - - How to analyze your system with perf and Python ====== diff --git a/sources/tech/20180806 GPaste Is A Great Clipboard Manager For Gnome Shell.md b/sources/tech/20180806 GPaste Is A Great Clipboard Manager For Gnome Shell.md deleted file mode 100644 index c3b2d2b77e..0000000000 --- a/sources/tech/20180806 GPaste Is A Great Clipboard Manager For Gnome Shell.md +++ /dev/null @@ -1,96 +0,0 @@ -GPaste Is A Great Clipboard Manager For Gnome Shell -====== -**[GPaste][1] is a clipboard management system that consists of a library, daemon, and interfaces for the command line and Gnome (using a native Gnome Shell extension).** - -A clipboard manager allows keeping track of what you're copying and pasting, providing access to previously copied items. GPaste, with its native Gnome Shell extension, makes the perfect addition for those looking for a Gnome clipboard manager. - -[![GPaste Gnome Shell extension Ubuntu 18.04][2]][3] -GPaste Gnome Shell extension -**Using GPaste in Gnome, you get a configurable, searchable clipboard history, available with a click on the top panel. GPaste remembers not only the text you copy, but also file paths and images** (the latter needs to be enabled from its settings as it's disabled by default). - -What's more, GPaste can detect growing lines, meaning it can detect when a new text copy is an extension of another and replaces it if it's true, useful for keeping your clipboard clean. - -From the extension menu you can pause GPaste from tracking the clipboard, and remove items from the clipboard history or the whole history. You'll also find a button that launches the GPaste user interface window. - -**If you prefer to use the keyboard, you can use a key shortcut to open the GPaste history from the top bar** (`Ctrl + Alt + H`), **or open the full GPaste GUI** (`Ctrl + Alt + G`). 
- -The tool also incorporates keyboard shortcuts to (can be changed): - - * delete the active item from history: `Ctrl + Alt + V` - - * **mark the active item as being a password (which obfuscates the clipboard entry in GPaste):** `Ctrl + Alt + S` - - * sync the clipboard to the primary selection: `Ctrl + Alt + O` - - * sync the primary selection to the clipboard: `Ctrl + Alt + P` - - * upload the active item to a pastebin service: `Ctrl + Alt + U` - -[![][4]][5] -GPaste GUI - -The GPaste interface window provides access to the clipboard history (with options to clear, edit or upload items), which can be searched, an option to pause GPaste from tracking the clipboard, restart the GPaste daemon, backup current clipboard history, as well as to its settings. - -[![][6]][7] -GPaste GUI - -From the GPaste UI you can change settings like: - - * Enable or disable the Gnome Shell extension - * Sync the daemon state with the extension's one - * Primary selection affects history - * Synchronize clipboard with primary selection - * Image support - * Trim items - * Detect growing lines - * Save history - * History settings like max history size, memory usage, max text item length, and more - * Keyboard shortcuts - - - -### Download GPaste - -[Download GPaste](https://github.com/Keruspe/GPaste) - -The Gpaste project page does not link to any GPaste binaries, and only source installation instructions. Users running Linux distributions other than Debian or Ubuntu (for which you'll find GPaste installation instructions below) can search their distro repositories for GPaste. - -Do not confuse GPaste with the GPaste Integration extension posted on the Gnome Shell extension website. That is a Gnome Shell extension that uses GPaste daemon, which is no longer maintained. The native Gnome Shell extension built into GPaste is still maintained. - -#### Install GPaste in Ubuntu (18.04, 16.04) or Debian (Jessie and newer) - -**For Debian, GPaste is available for Jessie and newer, while for Ubuntu, GPaste is in the repositories for 16.04 and newer (so it's available in the Ubuntu 18.04 Bionic Beaver).** - -**You can install GPaste (the daemon and the Gnome Shell extension) in Debian or Ubuntu using this command:** -``` -sudo apt install gnome-shell-extensions-gpaste gpaste - -``` - -After the installation completes, restart Gnome Shell by pressing `Alt + F2` and typing `r` , then pressing the `Enter` key. The GPaste Gnome Shell extension should now be enabled and its icon should show up on the top Gnome Shell panel. If it's not, use Gnome Tweaks (Gnome Tweak Tool) to enable the extension. 
- -**The GPaste 3.28.0 package from[Debian][8] and [Ubuntu][9] has a bug that makes it crash if the image support option is enabled, so do not enable this feature for now.** This was marked as - - --------------------------------------------------------------------------------- - -via: https://www.linuxuprising.com/2018/08/gpaste-is-great-clipboard-manager-for.html - -作者:[Logix][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://plus.google.com/118280394805678839070 -[1]:https://github.com/Keruspe/GPaste -[2]:https://2.bp.blogspot.com/-2ndArDBcrwY/W2gyhMc1kEI/AAAAAAAABS0/ZAe_onuGCacMblF733QGBX3XqyZd--WuACLcBGAs/s400/gpaste-gnome-shell-extension-ubuntu1804.png (Gpaste Gnome Shell) -[3]:https://2.bp.blogspot.com/-2ndArDBcrwY/W2gyhMc1kEI/AAAAAAAABS0/ZAe_onuGCacMblF733QGBX3XqyZd--WuACLcBGAs/s1600/gpaste-gnome-shell-extension-ubuntu1804.png -[4]:https://2.bp.blogspot.com/-7FBRsZJvYek/W2gyvzmeRxI/AAAAAAAABS4/LhokMFSn8_kZndrNB-BTP4W3e9IUuz9BgCLcBGAs/s640/gpaste-gui_1.png -[5]:https://2.bp.blogspot.com/-7FBRsZJvYek/W2gyvzmeRxI/AAAAAAAABS4/LhokMFSn8_kZndrNB-BTP4W3e9IUuz9BgCLcBGAs/s1600/gpaste-gui_1.png -[6]:https://4.bp.blogspot.com/-047ShYc6RrQ/W2gyz5FCf_I/AAAAAAAABTA/-o6jaWzwNpsSjG0QRwRJ5Xurq_A6dQ0sQCLcBGAs/s640/gpaste-gui_2.png -[7]:https://4.bp.blogspot.com/-047ShYc6RrQ/W2gyz5FCf_I/AAAAAAAABTA/-o6jaWzwNpsSjG0QRwRJ5Xurq_A6dQ0sQCLcBGAs/s1600/gpaste-gui_2.png -[8]:https://packages.debian.org/buster/gpaste -[9]:https://launchpad.net/ubuntu/+source/gpaste -[10]:https://www.imagination-land.org/posts/2018-04-13-gpaste-3.28.2-released.html diff --git a/sources/tech/20180807 5 reasons the i3 window manager makes Linux better.md b/sources/tech/20180807 5 reasons the i3 window manager makes Linux better.md deleted file mode 100644 index 8ad6a4ac7d..0000000000 --- a/sources/tech/20180807 5 reasons the i3 window manager makes Linux better.md +++ /dev/null @@ -1,111 +0,0 @@ -5 reasons the i3 window manager makes Linux better -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows.png?itok=jd5sBNQH) - -One of the nicest things about Linux (and open source software in general) is the freedom to choose among different alternatives to address our needs. - -I've been using Linux for a long time, but I was never entirely happy with the desktop environment options available. Until last year, [Xfce][1] was the closest to what I consider a good compromise between features and performance. Then I found [i3][2], an amazing piece of software that changed my life. - -I3 is a tiling window manager. The goal of a window manager is to control the appearance and placement of windows in a windowing system. Window managers are often used as part a full-featured desktop environment (such as GNOME or Xfce), but some can also be used as standalone applications. - -A tiling window manager automatically arranges the windows to occupy the whole screen in a non-overlapping way. Other popular tiling window managers include [wmii][3] and [xmonad][4]. - -![i3 tiled window manager screenshot][6] - -Screenshot of i3 with three tiled windows - -Following are the top five reasons I use the i3 window manager and recommend it for a better Linux desktop experience. - -### 1\. Minimalism - -I3 is fast. It is neither bloated nor fancy. It is designed to be simple and efficient. 
As a developer, I value these features, as I can use the extra capacity to power my favorite development tools or test stuff locally using containers or virtual machines. - -In addition, i3 is a window manager and, unlike full-featured desktop environments, it does not dictate the applications you should use. Do you want to use Thunar from Xfce as your file manager? GNOME's gedit to edit text? I3 does not care. Pick the tools that make the most sense for your workflow, and i3 will manage them all in the same way. - -### 2\. Screen real estate - -As a tiling window manager, i3 will automatically "tile" or position the windows in a non-overlapping way, similar to laying tiles on a wall. Since you don't need to worry about window positioning, i3 generally makes better use of your screen real estate. It also allows you to get to what you need faster. - -There are many useful cases for this. For example, system administrators can open several terminals to monitor or work on different remote systems simultaneously; and developers can use their favorite IDE or editor and a few terminals to test their programs. - -In addition, i3 is flexible. If you need more space for a particular window, enable full-screen mode or switch to a different layout, such as stacked or tabbed. - -### 3\. Keyboard-driven workflow - -I3 makes extensive use of keyboard shortcuts to control different aspects of your environment. These include opening the terminal and other programs, resizing and positioning windows, changing layouts, and even exiting i3. When you start using i3, you need to memorize a few of those shortcuts to get around and, with time, you'll use more of them. - -The main benefit is that you don't often need to switch contexts from the keyboard to the mouse. With practice, it means you'll improve the speed and efficiency of your workflow. - -For example, to open a new terminal, press `+`. Since the windows are automatically positioned, you can start typing your commands right away. Combine that with a nice terminal-driven text editor (e.g., Vim) and a keyboard-focused browser for a fully keyboard-driven workflow. - -In i3, you can define shortcuts for everything. Here are some examples: - - * Open terminal - * Open browser - * Change layouts - * Resize windows - * Control music player - * Switch workspaces - - - -Now that I am used to this workflow, I can't see myself going back to a regular desktop environment. - -### 4\. Flexibility - -I3 strives to be minimal and use few system resources, but that does not mean it can't be pretty. I3 is flexible and can be customized in several ways to improve the visual experience. Because i3 is a window manager, it doesn't provide tools to enable customizations; you need external tools for that. Some examples: - - * Use `feh` to define a background picture for your desktop. - * Use a compositor manager such as `compton` to enable effects like window fading and transparency. - * Use `dmenu` or `rofi` to enable customizable menus that can be launched from a keyboard shortcut. - * Use `dunst` for desktop notifications. - - - -I3 is fully configurable, and you can control every aspect of it by updating the default configuration file. From changing all keyboard shortcuts, to redefining the name of the workspaces, to modifying the status bar, you can make i3 behave in any way that makes the most sense for your needs. 
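To give a flavor of what that configuration looks like, here is a short, hypothetical excerpt of an `~/.config/i3/config` (the key bindings, workspace names, and file paths are only examples):
```
# Use the Super key as the modifier
set $mod Mod4

# Launchers
bindsym $mod+Return exec i3-sensible-terminal
bindsym $mod+d exec rofi -show run

# Named workspaces
set $ws1 "1: terminals"
set $ws2 "2: browser"
bindsym $mod+1 workspace $ws1
bindsym $mod+2 workspace $ws2

# Wallpaper and compositor, as mentioned above
exec_always --no-startup-id feh --bg-scale ~/Pictures/wallpaper.png
exec --no-startup-id compton
```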
- -![i3 with rofi menu and dunst desktop notifications][8] - -i3 with `rofi` menu and `dunst` desktop notifications - -Finally, for more advanced users, i3 provides a full interprocess communication ([IPC][9]) interface that allows you to use your favorite language to develop scripts or programs for even more customization options. - -### 5\. Workspaces - -In i3, a workspace is an easy way to group windows. You can group them in different ways according to your workflow. For example, you can put the browser on one workspace, the terminal on another, an email client on a third, etc. You can even change i3's configuration to always assign specific applications to their own workspaces. - -Switching workspaces is quick and easy. As usual in i3, do it with a keyboard shortcut. Press `+num` to switch to workspace `num`. If you get into the habit of always assigning applications/groups of windows to the same workspace, you can quickly switch between them, which makes workspaces a very useful feature. - -In addition, you can use workspaces to control multi-monitor setups, where each monitor gets an initial workspace. If you switch to that workspace, you switch to that monitor—without moving your hand off the keyboard. - -Finally, there is another, special type of workspace in i3: the scratchpad. It is an invisible workspace that shows up in the middle of the other workspaces by pressing a shortcut. This is a convenient way to access windows or programs that you frequently use, such as an email client or your music player. - -### Give it a try - -If you value simplicity and efficiency and are not afraid of working with the keyboard, i3 is the window manager for you. Some say it is for advanced users, but that is not necessarily the case. You need to learn a few basic shortcuts to get around at the beginning, but they'll soon feel natural and you'll start using them without thinking. - -This article just scratches the surface of what i3 can do. For more details, consult [i3's documentation][10]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/i3-tiling-window-manager - -作者:[Ricardo Gerardi][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/rgerardi -[1]:https://xfce.org/ -[2]:https://i3wm.org/ -[3]:https://code.google.com/archive/p/wmii/ -[4]:https://xmonad.org/ -[5]:/file/406476 -[6]:https://opensource.com/sites/default/files/uploads/i3_screenshot.png (i3 tiled window manager screenshot) -[7]:/file/405161 -[8]:https://opensource.com/sites/default/files/uploads/rofi_dunst.png (i3 with rofi menu and dunst desktop notifications) -[9]:https://i3wm.org/docs/ipc.html -[10]:https://i3wm.org/docs/userguide.html diff --git a/sources/tech/20180809 Perform robust unit tests with PyHamcrest.md b/sources/tech/20180809 Perform robust unit tests with PyHamcrest.md deleted file mode 100644 index 1c7d7e9226..0000000000 --- a/sources/tech/20180809 Perform robust unit tests with PyHamcrest.md +++ /dev/null @@ -1,176 +0,0 @@ -Perform robust unit tests with PyHamcrest -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh) - -At the base of the [testing pyramid][1] are unit tests. 
Unit tests test one unit of code at a time—usually one function or method. - -Often, a single unit test is designed to test one particular flow through a function, or a specific branch choice. This enables easy mapping of a unit test that fails and the bug that made it fail. - -Ideally, unit tests use few or no external resources, isolating them and making them faster. - -_Good_ tests increase developer productivity by catching bugs early and making testing faster. _Bad_ tests decrease developer productivity. - -Unit test suites help maintain high-quality products by signaling problems early in the development process. An effective unit test catches bugs before the code has left the developer machine, or at least in a continuous integration environment on a dedicated branch. This marks the difference between good and bad unit tests:tests increase developer productivity by catching bugs early and making testing faster.tests decrease developer productivity. - -Productivity usually decreases when testing _incidental features_. The test fails when the code changes, even if it is still correct. This happens because the output is different, but in a way that is not part of the function's contract. - -A good unit test, therefore, is one that helps enforce the contract to which the function is committed. - -If a unit test breaks, the contract is violated and should be either explicitly amended (by changing the documentation and tests), or fixed (by fixing the code and leaving the tests as is). - -While limiting tests to enforce only the public contract is a complicated skill to learn, there are tools that can help. - -One of these tools is [Hamcrest][2], a framework for writing assertions. Originally invented for Java-based unit tests, today the Hamcrest framework supports several languages, including [Python][3]. - -Hamcrest is designed to make test assertions easier to write and more precise. -``` -def add(a, b): - -    return a + b - - - -from hamcrest import assert_that, equal_to - - - -def test_add(): - -    assert_that(add(2, 2), equal_to(4))   - -``` - -This is a simple assertion, for simple functionality. What if we wanted to assert something more complicated? -``` -def test_set_removal(): - -    my_set = {1, 2, 3, 4} - -    my_set.remove(3) - -    assert_that(my_set, contains_inanyorder([1, 2, 4])) - -    assert_that(my_set, is_not(has_item(3))) - -``` - -Note that we can succinctly assert that the result has `1`, `2`, and `4` in any order since sets do not guarantee order. - -We also easily negate assertions with `is_not`. This helps us write _precise assertions_ , which allow us to limit ourselves to enforcing public contracts of functions. - -Sometimes, however, none of the built-in functionality is _precisely_ what we need. In those cases, Hamcrest allows us to write our own matchers. - -Imagine the following function: -``` -def scale_one(a, b): - -    scale = random.randint(0, 5) - -    pick = random.choice([a,b]) - -    return scale * pick - -``` - -We can confidently assert that the result divides into at least one of the inputs evenly. 
- -A matcher inherits from `hamcrest.core.base_matcher.BaseMatcher`, and overrides two methods: -``` -class DivisibleBy(hamcrest.core.base_matcher.BaseMatcher): - - - -    def __init__(self, factor): - -        self.factor = factor - - - -    def _matches(self, item): - -        return (item % self.factor) == 0 - - - -    def describe_to(self, description): - -        description.append_text('number divisible by') - -        description.append_text(repr(self.factor)) - -``` - -Writing high-quality `describe_to` methods is important, since this is part of the message that will show up if the test fails. -``` -def divisible_by(num): - -    return DivisibleBy(num) - -``` - -By convention, we wrap matchers in a function. Sometimes this gives us a chance to further process the inputs, but in this case, no further processing is needed. -``` -def test_scale(): - -    result = scale_one(3, 7) - -    assert_that(result, - -                any_of(divisible_by(3), - -                       divisible_by(7))) - -``` - -Note that we combined our `divisible_by` matcher with the built-in `any_of` matcher to ensure that we test only what the contract commits to. - -While editing this article, I heard a rumor that the name "Hamcrest" was chosen as an anagram for "matches". Hrm... -``` ->>> assert_that("matches", contains_inanyorder(*"hamcrest") - -Traceback (most recent call last): - -  File "", line 1, in - -  File "/home/moshez/src/devops-python/build/devops/lib/python3.6/site-packages/hamcrest/core/assert_that.py", line 43, in assert_that - -    _assert_match(actual=arg1, matcher=arg2, reason=arg3) - -  File "/home/moshez/src/devops-python/build/devops/lib/python3.6/site-packages/hamcrest/core/assert_that.py", line 57, in _assert_match - -    raise AssertionError(description) - -AssertionError: - -Expected: a sequence over ['h', 'a', 'm', 'c', 'r', 'e', 's', 't'] in any order - -      but: no item matches: 'r' in ['m', 'a', 't', 'c', 'h', 'e', 's'] - -``` - -Researching more, I found the source of the rumor: It is an anagram for "matchers". -``` ->>> assert_that("matchers", contains_inanyorder(*"hamcrest")) - ->>> - -``` - -If you are not yet writing unit tests for your Python code, now is a good time to start. If you are writing unit tests for your Python code, using Hamcrest will allow you to make your assertion _precise_ —neither more nor less than what you intend to test. This will lead to fewer false positives when modifying code and less time spent modifying tests for working code. 
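Because matchers compose, richer contracts can also be expressed in a single assertion. A small illustrative sketch (the `user` dictionary and its keys are made up for the example):
```
from hamcrest import assert_that, all_of, has_entries, instance_of, greater_than

def test_user_contract():
    user = {"name": "Tuxedo", "karma": 42}
    assert_that(user, has_entries(
        name=instance_of(str),
        karma=all_of(instance_of(int), greater_than(0))))
```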
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/robust-unit-tests-hamcrest - -作者:[Moshe Zadka][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/moshez -[1]:https://martinfowler.com/bliki/TestPyramid.html -[2]:http://hamcrest.org/ -[3]:https://www.python.org/ diff --git a/sources/tech/20180814 5 open source strategy and simulation games for Linux.md b/sources/tech/20180814 5 open source strategy and simulation games for Linux.md deleted file mode 100644 index 1f7e94c22f..0000000000 --- a/sources/tech/20180814 5 open source strategy and simulation games for Linux.md +++ /dev/null @@ -1,111 +0,0 @@ -5 open source strategy and simulation games for Linux -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/arcade_game_gaming.jpg?itok=84Rjk_32) - -Gaming has traditionally been one of Linux's weak points. That has changed somewhat in recent years thanks to Steam, GOG, and other efforts to bring commercial games to multiple operating systems, but those games are often not open source. Sure, the games can be played on an open source operating system, but that is not good enough for an open source purist. - -So, can someone who only uses free and open source software find games that are polished enough to present a solid gaming experience without compromising their open source ideals? Absolutely. While open source games are unlikely ever to rival some of the AAA commercial games developed with massive budgets, there are plenty of open source games, in many genres, that are fun to play and can be installed from the repositories of most major Linux distributions. Even if a particular game is not packaged for a particular distribution, it is usually easy to download the game from the project's website to install and play it. - -This article looks at strategy and simulation games. I have already written about [arcade-style games][1], [board & card games][2], [puzzle games][3], [racing & flying games][4], and [role-playing games][5]. - -### Freeciv - -![](https://opensource.com/sites/default/files/uploads/freeciv.png) - -[Freeciv][6] is an open source version of the [Civilization series][7] of computer games. Gameplay is most similar to the earlier games in the Civilization series, and Freeciv even has options to use Civilization 1 and Civilization 2 rule sets. Freeciv involves building cities, exploring the world map, developing technologies, and competing with other civilizations trying to do the same. Victory conditions include defeating all the other civilizations, developing a space colony, or hitting deadline if neither of the first two conditions are met. The game can be played against AI opponents or other human players. Different tile-sets are available to change the look of the game's map. - -To install Freeciv, run the following command: - - * On Fedora: `dnf install freeciv` - * On Debian/Ubuntu: `apt install freeciv` - - - -### MegaGlest - -![](https://opensource.com/sites/default/files/uploads/megaglest.png) - -[MegaGlest][8] is an open source real-time strategy game in the style of Blizzard Entertainment's [Warcraft][9] and [StarCraft][10] games. 
Players control one of several different factions, building structures and recruiting units to explore the map and battle their opponents. At the beginning of the match, a player can build only the most basic buildings and recruit the weakest units. To build and recruit better things, players must work their way up their factions technology tree by building structures and recruiting units that unlock more advanced options. Combat units will attack when enemy units come into range, but for optimal strategy, it is best to manage the battle directly by controlling the units. Simultaneously managing the construction of new structures, recruiting new units, and managing battles can be a challenge, but that is the point of a real-time strategy game. MegaGlest provides a nice variety of factions, so there are plenty of reasons to try new and different strategies. - -To install MegaGlest, run the following command: - - * On Fedora: `dnf install megaglest` - * On Debian/Ubuntu: `apt install megaglest` - - - -### OpenTTD - -![](https://opensource.com/sites/default/files/uploads/openttd.png) - -[OpenTTD][11] (see also [our review][12]) is an open source implementation of [Transport Tycoon Deluxe][13]. The object of the game is to create a transportation network and earn money, which allows the player to build an even bigger transportation network. The network can include boats, buses, trains, trucks, and planes. By default, gameplay takes place between 1950 and 2050, with players aiming to get the highest performance rating possible before time runs out. The performance rating is based on things like the amount of cargo delivered, the number of vehicles they have, and how much money they earned. - -To install OpenTTD, run the following command: - - * On Fedora: `dnf install openttd` - * On Debian/Ubuntu: `apt install openttd` - - - -### The Battle for Wesnoth - -![](https://opensource.com/sites/default/files/uploads/the_battle_for_wesnoth.png) - -[The Battle for Wesnoth][14] is one of the most polished open source games available. This turn-based strategy game has a fantasy setting. Play takes place on a hexagonal grid, where individual units battle each other for control. Each type of unit has unique strengths and weaknesses, which requires players to plan their attacks accordingly. There are many different campaigns available for The Battle for Wesnoth, each with different objectives and storylines. The Battle for Wesnoth also comes with a map editor for players interested in creating their own maps or campaigns. - -To install The Battle for Wesnoth, run the following command: - - * On Fedora: `dnf install wesnoth` - * On Debian/Ubuntu: `apt install wesnoth` - - - -### UFO: Alien Invasion - -![](https://opensource.com/sites/default/files/uploads/ufo_alien_invasion.png) - -[UFO: Alien Invasion][15] is an open source tactical strategy game inspired by the [X-COM series][20]. There are two distinct gameplay modes: geoscape and tactical. In geoscape mode, the player takes control of the big picture and deals with managing their bases, researching new technologies, and controlling overall strategy. In tactical mode, the player controls a squad of soldiers and directly confronts the alien invaders in a turn-based battle. Both modes provide different gameplay styles, but both require complex strategy and tactics. - -To install UFO: Alien Invasion, run the following command: - - * On Debian/Ubuntu: `apt install ufoai` - - - -Unfortunately, UFO: Alien Invasion is not packaged for Fedora. 
- -Did I miss one of your favorite open source strategy or simulation games? Share it in the comments below. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/strategy-simulation-games-linux - -作者:[Joshua Allen Holm][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/holmja -[1]:https://opensource.com/article/18/1/arcade-games-linux -[2]:https://opensource.com/article/18/3/card-board-games-linux -[3]:https://opensource.com/article/18/6/puzzle-games-linux -[4]:https://opensource.com/article/18/7/racing-flying-games-linux -[5]:https://opensource.com/article/18/8/role-playing-games-linux -[6]:http://www.freeciv.org/ -[7]:https://en.wikipedia.org/wiki/Civilization_(series) -[8]:https://megaglest.org/ -[9]:https://en.wikipedia.org/wiki/Warcraft -[10]:https://en.wikipedia.org/wiki/StarCraft -[11]:https://www.openttd.org/ -[12]:https://opensource.com/life/15/7/linux-game-review-openttd -[13]:https://en.wikipedia.org/wiki/Transport_Tycoon#Transport_Tycoon_Deluxe -[14]:https://www.wesnoth.org/ -[15]:https://ufoai.org/ -[16]:https://opensource.com/downloads/cheat-sheets?intcmp=7016000000127cYAAQ -[17]:https://opensource.com/alternatives?intcmp=7016000000127cYAAQ -[18]:https://opensource.com/tags/linux?intcmp=7016000000127cYAAQ -[19]:https://developers.redhat.com/cheat-sheets/advanced-linux-commands/?intcmp=7016000000127cYAAQ -[20]:https://en.wikipedia.org/wiki/X-COM diff --git a/sources/tech/20180814 HTTP request routing and validation with gorilla-mux.md b/sources/tech/20180814 HTTP request routing and validation with gorilla-mux.md deleted file mode 100644 index 410f692ad9..0000000000 --- a/sources/tech/20180814 HTTP request routing and validation with gorilla-mux.md +++ /dev/null @@ -1,674 +0,0 @@ -HTTP request routing and validation with gorilla/mux -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr) - -The Go networking library includes the `http.ServeMux` structure type, which supports HTTP request multiplexing (routing): A web server routes an HTTP request for a hosted resource, with a URI such as /sales4today, to a code handler; the handler performs the appropriate logic before sending an HTTP response, typically an HTML page. Here’s a sketch of the architecture: -``` -                 +------------+     +--------+     +---------+ -HTTP request---->| web server |---->| router |---->| handler | -                 +------------+     +--------+     +---------+ -``` - -In a call to the `ListenAndServe` method to start an HTTP server -``` -http.ListenAndServe(":8888", nil) // args: port & router -``` - -a second argument of `nil` means that the `DefaultServeMux` is used for request routing. - -The `gorilla/mux` package has a `mux.Router` type as an alternative to either the `DefaultServeMux` or a customized request multiplexer. In the `ListenAndServe` call, a `mux.Router` instance would replace `nil` as the second argument. What makes the `mux.Router` so appealing is best shown through a code example: - -### 1\. A sample crud web app - -The crud web application (see below) supports the four CRUD (Create Read Update Delete) operations, which match four HTTP request methods: POST, GET, PUT, and DELETE, respectively. 
In the crud app, the hosted resource is a list of cliche pairs, each a cliche and a conflicting cliche such as this pair: -``` -Out of sight, out of mind. Absence makes the heart grow fonder. - -``` - -New cliche pairs can be added, and existing ones can be edited or deleted. - -**The crud web app** -``` -package main - -import ( -   "gorilla/mux" -   "net/http" -   "fmt" -   "strconv" -) - -const GETALL string = "GETALL" -const GETONE string = "GETONE" -const POST string   = "POST" -const PUT string    = "PUT" -const DELETE string = "DELETE" - -type clichePair struct { -   Id      int -   Cliche  string -   Counter string -} - -// Message sent to goroutine that accesses the requested resource. -type crudRequest struct { -   verb     string -   cp       *clichePair -   id       int -   cliche   string -   counter  string -   confirm  chan string -} - -var clichesList = []*clichePair{} -var masterId = 1 -var crudRequests chan *crudRequest - -// GET / -// GET /cliches -func ClichesAll(res http.ResponseWriter, req *http.Request) { -   cr := &crudRequest{verb: GETALL, confirm: make(chan string)} -   completeRequest(cr, res, "read all") -} - -// GET /cliches/id -func ClichesOne(res http.ResponseWriter, req *http.Request) { -   id := getIdFromRequest(req) -   cr := &crudRequest{verb: GETONE, id: id, confirm: make(chan string)} -   completeRequest(cr, res, "read one") -} - -// POST /cliches - -func ClichesCreate(res http.ResponseWriter, req *http.Request) { - -   cliche, counter := getDataFromRequest(req) - -   cp := new(clichePair) - -   cp.Cliche = cliche - -   cp.Counter = counter - -   cr := &crudRequest{verb: POST, cp: cp, confirm: make(chan string)} - -   completeRequest(cr, res, "create") - -} - - - -// PUT /cliches/id - -func ClichesEdit(res http.ResponseWriter, req *http.Request) { - -   id := getIdFromRequest(req) - -   cliche, counter := getDataFromRequest(req) - -   cr := &crudRequest{verb: PUT, id: id, cliche: cliche, counter: counter, confirm: make(chan string)} - -   completeRequest(cr, res, "edit") - -} - - - -// DELETE /cliches/id - -func ClichesDelete(res http.ResponseWriter, req *http.Request) { - -   id := getIdFromRequest(req) - -   cr := &crudRequest{verb: DELETE, id: id, confirm: make(chan string)} - -   completeRequest(cr, res, "delete") - -} - - - -func completeRequest(cr *crudRequest, res http.ResponseWriter, logMsg string) { - -   crudRequests<-cr - -   msg := <-cr.confirm - -   res.Write([]byte(msg)) - -   logIt(logMsg) - -} - - - -func main() { - -   populateClichesList() - - - -   // From now on, this gorountine alone accesses the clichesList. - -   crudRequests = make(chan *crudRequest, 8) - -   go func() { // resource manager - -      for { - -         select { - -         case req := <-crudRequests: - -         if req.verb == GETALL { - -            req.confirm<-readAll() - -         } else if req.verb == GETONE { - -            req.confirm<-readOne(req.id) - -         } else if req.verb == POST { - -            req.confirm<-addPair(req.cp) - -         } else if req.verb == PUT { - -            req.confirm<-editPair(req.id, req.cliche, req.counter) - -         } else if req.verb == DELETE { - -            req.confirm<-deletePair(req.id) - -         } - -      } - -   }() - -   startServer() - -} - - - -func startServer() { - -   router := mux.NewRouter() - - - -   // Dispatch map for CRUD operations. 
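   // Each route pairs an HTTP method with a URI pattern; the {id:[0-9]+}
   // patterns below restrict the id path variable to digits, so requests
   // with a non-numeric id never reach the handlers.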
- -   router.HandleFunc("/", ClichesAll).Methods("GET") - -   router.HandleFunc("/cliches", ClichesAll).Methods("GET") - -   router.HandleFunc("/cliches/{id:[0-9]+}", ClichesOne).Methods("GET") - - - -   router.HandleFunc("/cliches", ClichesCreate).Methods("POST") - -   router.HandleFunc("/cliches/{id:[0-9]+}", ClichesEdit).Methods("PUT") - -   router.HandleFunc("/cliches/{id:[0-9]+}", ClichesDelete).Methods("DELETE") - - - -   http.Handle("/", router) // enable the router - - - -   // Start the server. - -   port := ":8888" - -   fmt.Println("\nListening on port " + port) - -   http.ListenAndServe(port, router); // mux.Router now in play - -} - - - -// Return entire list to requester. - -func readAll() string { - -   msg := "\n" - -   for _, cliche := range clichesList { - -      next := strconv.Itoa(cliche.Id) + ": " + cliche.Cliche + "  " + cliche.Counter + "\n" - -      msg += next - -   } - -   return msg - -} - - - -// Return specified clichePair to requester. - -func readOne(id int) string { - -   msg := "\n" + "Bad Id: " + strconv.Itoa(id) + "\n" - - - -   index := findCliche(id) - -   if index >= 0 { - -      cliche := clichesList[index] - -      msg = "\n" + strconv.Itoa(id) + ": " + cliche.Cliche + "  " + cliche.Counter + "\n" - -   } - -   return msg - -} - - - -// Create a new clichePair and add to list - -func addPair(cp *clichePair) string { - -   cp.Id = masterId - -   masterId++ - -   clichesList = append(clichesList, cp) - -   return "\nCreated: " + cp.Cliche + " " + cp.Counter + "\n" - -} - - - -// Edit an existing clichePair - -func editPair(id int, cliche string, counter string) string { - -   msg := "\n" + "Bad Id: " + strconv.Itoa(id) + "\n" - -   index := findCliche(id) - -   if index >= 0 { - -      clichesList[index].Cliche = cliche - -      clichesList[index].Counter = counter - -      msg = "\nCliche edited: " + cliche + " " + counter + "\n" - -   } - -   return msg - -} - - - -// Delete a clichePair - -func deletePair(id int) string { - -   idStr := strconv.Itoa(id) - -   msg := "\n" + "Bad Id: " + idStr + "\n" - -   index := findCliche(id) - -   if index >= 0 { - -      clichesList = append(clichesList[:index], clichesList[index + 1:]...) 
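      // The append above splices the element at index out of the slice by
      // rejoining the segments before and after it (Go's usual deletion idiom).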
- -      msg = "\nCliche " + idStr + " deleted\n" - -   } - -   return msg - -} - - - -//*** utility functions - -func findCliche(id int) int { - -   for i := 0; i < len(clichesList); i++ { - -      if id == clichesList[i].Id { - -         return i; - -      } - -   } - -   return -1 // not found - -} - - - -func getIdFromRequest(req *http.Request) int { - -   vars := mux.Vars(req) - -   id, _ := strconv.Atoi(vars["id"]) - -   return id - -} - - - -func getDataFromRequest(req *http.Request) (string, string) { - -   // Extract the user-provided data for the new clichePair - -   req.ParseForm() - -   form := req.Form - -   cliche := form["cliche"][0]    // 1st and only member of a list - -   counter := form["counter"][0]  // ditto - -   return cliche, counter - -} - - - -func logIt(msg string) { - -   fmt.Println(msg) - -} - - - -func populateClichesList() { - -   var cliches = []string { - -      "Out of sight, out of mind.", - -      "A penny saved is a penny earned.", - -      "He who hesitates is lost.", - -   } - -   var counterCliches = []string { - -      "Absence makes the heart grow fonder.", - -      "Penny-wise and dollar-foolish.", - -      "Look before you leap.", - -   } - - - -   for i := 0; i < len(cliches); i++ { - -      cp := new(clichePair) - -      cp.Id = masterId - -      masterId++ - -      cp.Cliche = cliches[i] - -      cp.Counter = counterCliches[i] - -      clichesList = append(clichesList, cp) - -   } - -} - -``` - -To focus on request routing and validation, the crud app does not use HTML pages as responses to requests. Instead, requests result in plaintext response messages: A list of the cliche pairs is the response to a GET request, confirmation that a new cliche pair has been added to the list is a response to a POST request, and so on. This simplification makes it easy to test the app, in particular, the `gorilla/mux` components, with a command-line utility such as [curl][1]. - -The `gorilla/mux` package can be installed from [GitHub][2]. The crud app runs indefinitely; hence, it should be terminated with a Control-C or equivalent. The code for the crud app, together with a README and sample curl tests, is available on [my website][3]. - -### 2\. Request routing - -The `mux.Router` extends REST-style routing, which gives equal weight to the HTTP method (e.g., GET) and the URI or path at the end of a URL (e.g., /cliches). The URI serves as the noun for the HTTP verb (method). For example, in an HTTP request a startline such as -``` -GET /cliches - -``` - -means get all of the cliche pairs, whereas a startline such as -``` -POST /cliches - -``` - -means create a cliche pair from data in the HTTP body. - -In the crud web app, there are five functions that act as request handlers for five variations of an HTTP request: -``` -ClichesAll(...)    # GET: get all of the cliche pairs - -ClichesOne(...)    # GET: get a specified cliche pair - -ClichesCreate(...) # POST: create a new cliche pair - -ClichesEdit(...)   # PUT: edit an existing cliche pair - -ClichesDelete(...) # DELETE: delete a specified cliche pair - -``` - -Each function takes two arguments: an `http.ResponseWriter` for sending a response back to the requester, and a pointer to an `http.Request`, which encapsulates information from the underlying HTTP request. The `gorilla/mux` package makes it easy to register these request handlers with the web server, and to perform regex-based validation. - -The `startServer` function in the crud app registers the request handlers. 
Consider this pair of registrations, with `router` as a `mux.Router` instance: -``` -router.HandleFunc("/", ClichesAll).Methods("GET") - -router.HandleFunc("/cliches", ClichesAll).Methods("GET") - -``` - -These statements mean that a GET request for either the single slash / or /cliches should be routed to the `ClichesAll` function, which then handles the request. For example, the curl request (with % as the command-line prompt) -``` -% curl --request GET localhost:8888/ - -``` - -produces this response: -``` -1: Out of sight, out of mind.  Absence makes the heart grow fonder. - -2: A penny saved is a penny earned.  Penny-wise and dollar-foolish. - -3: He who hesitates is lost.  Look before you leap. - -``` - -The three cliche pairs are the initial data in the crud app. - -In this pair of registration statements -``` -router.HandleFunc("/cliches", ClichesAll).Methods("GET") - -router.HandleFunc("/cliches", ClichesCreate).Methods("POST") - -``` - -the URI is the same (/cliches) but the verbs differ: GET in the first case, and POST in the second. This registration exemplifies REST-style routing because the difference in the verbs alone suffices to dispatch the requests to two different handlers. - -More than one HTTP method is allowed in a registration, although this strains the spirit of REST-style routing: -``` -router.HandleFunc("/cliches", DoItAll).Methods("POST", "GET") - -``` - -HTTP requests can be routed on features besides the verb and the URI. For example, the registration -``` -router.HandleFunc("/cliches", ClichesCreate).Schemes("https").Methods("POST") - -``` - -requires HTTPS access for a POST request to create a new cliche pair. In similar fashion, a registration might require a request to have a specified HTTP header element (e.g., an authentication credential). - -### 3\. Request validation - -The `gorilla/mux` package takes an easy, intuitive approach to request validation through regular expressions. Consider this request handler for a get one operation: -``` -router.HandleFunc("/cliches/{id:[0-9]+}", ClichesOne).Methods("GET") - -``` - -This registration rules out HTTP requests such as -``` -% curl --request GET localhost:8888/cliches/foo - -``` - -because foo is not a decimal numeral. The request results in the familiar 404 (Not Found) status code. Including the regex pattern in this handler registration ensures that the `ClichesOne` function is called to handle a request only if the request URI ends with a decimal integer value: -``` -% curl --request GET localhost:8888/cliches/3  # ok - -``` - -As a second example, consider the request -``` -% curl --request PUT --data "..." localhost:8888/cliches - -``` - -This request results in a status code of 405 (Bad Method) because the /cliches URI is registered, in the crud app, only for GET and POST requests. A PUT request, like a GET one request, must include a numeric id at the end of the URI: -``` -router.HandleFunc("/cliches/{id:[0-9]+}", ClichesEdit).Methods("PUT") - -``` - -### 4\. Concurrency issues - -The `gorilla/mux` router executes each call to a registered request handler as a separate goroutine, which means that concurrency is baked into the package. For example, if there are ten simultaneous requests such as -``` -% curl --request POST --data "..." localhost:8888/cliches - -``` - -then the `mux.Router` launches ten goroutines to execute the `ClichesCreate` handler. 
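You can watch this concurrency from the client side with a small, hypothetical test program (not part of the crud app) that fires ten POST requests at once. It assumes the server is running on port 8888 as configured above and reuses the `cliche` and `counter` form fields that `getDataFromRequest` expects:
```
package main

import (
   "fmt"
   "net/http"
   "net/url"
   "sync"
)

func main() {
   var wg sync.WaitGroup
   for i := 1; i <= 10; i++ {
      wg.Add(1)
      go func(n int) { // each POST runs in its own goroutine
         defer wg.Done()
         form := url.Values{
            "cliche":  {fmt.Sprintf("Test cliche %d.", n)},
            "counter": {fmt.Sprintf("Test counter %d.", n)},
         }
         resp, err := http.PostForm("http://localhost:8888/cliches", form)
         if err != nil {
            fmt.Println("request failed:", err)
            return
         }
         defer resp.Body.Close()
         fmt.Println("request", n, "->", resp.Status)
      }(i)
   }
   wg.Wait() // wait for all ten concurrent requests to finish
}
```

Each of those POSTs is dispatched to `ClichesCreate` in its own goroutine on the server, which is exactly why access to the shared data needs the coordination described next.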
- -Of the five request operations GET all, GET one, POST, PUT, and DELETE, the last three alter the requested resource, the shared `clichesList` that houses the cliche pairs. Accordingly, the crudapp needs to guarantee safe concurrency by coordinating access to the `clichesList`. In different but equivalent terms, the crud app must prevent a race condition on the `clichesList`. In a production environment, a database system might be used to store a resource such as the `clichesList`, and safe concurrency then could be managed through database transactions. - -The crud app takes the recommended Go approach to safe concurrency: - - * Only a single goroutine, the resource manager started in the crud app `startServer` function, has access to the `clichesList` once the web server starts listening for requests. - * The request handlers such as `ClichesCreate` and `ClichesAll` send a (pointer to) a `crudRequest` instance to a Go channel (thread-safe by default), and the resource manager alone reads from this channel. The resource manager then performs the requested operation on the `clichesList`. - - - -The safe-concurrency architecture can be sketched as follows: -``` -                 crudRequest                   read/write - -request handlers------------->resource manager------------>clichesList - -``` - -With this architecture, no explicit locking of the `clichesList` is needed because only one goroutine, the resource manager, accesses the `clichesList` once CRUD requests start coming in. - -To keep the crud app as concurrent as possible, it’s essential to have an efficient division of labor between the request handlers, on the one side, and the single resource manager, on the other side. Here, for review, is the `ClichesCreate` request handler: -``` -func ClichesCreate(res http.ResponseWriter, req *http.Request) { - -   cliche, counter := getDataFromRequest(req) - -   cp := new(clichePair) - -   cp.Cliche = cliche - -   cp.Counter = counter - -   cr := &crudRequest{verb: POST, cp: cp, confirm: make(chan string)} - -   completeRequest(cr, res, "create") - -}ClichesCreateres httpResponseWriterreqclichecountergetDataFromRequestreqcpclichePaircpClicheclichecpCountercountercr&crudRequestverbPOSTcpcpconfirmcompleteRequestcrres - -``` - -`ClichesCreate` calls the utility function `getDataFromRequest`, which extracts the new cliche and counter-cliche from the POST request. The `ClichesCreate` function then creates a new `ClichePair`, sets two fields, and creates a `crudRequest` to be sent to the single resource manager. This request includes a confirmation channel, which the resource manager uses to return information back to the request handler. All of the setup work can be done without involving the resource manager because the `clichesList` is not being accessed yet. - -The request handlercalls the utility function, which extracts the new cliche and counter-cliche from the POST request. Thefunction then creates a new, sets two fields, and creates ato be sent to the single resource manager. This request includes a confirmation channel, which the resource manager uses to return information back to the request handler. All of the setup work can be done without involving the resource manager because theis not being accessed yet. 
- -The `completeRequest` utility function called at the end of the `ClichesCreate` function and the other request handlers -``` -completeRequest(cr, res, "create") // shown above - -``` - -brings the resource manager into play by putting a `crudRequest` into the `crudRequests` channel: -``` -func completeRequest(cr *crudRequest, res http.ResponseWriter, logMsg string) { - -   crudRequests<-cr          // send request to resource manager - -   msg := <-cr.confirm       // await confirmation string - -   res.Write([]byte(msg))    // send confirmation back to requester - -   logIt(logMsg)             // print to the standard output - -} - -``` - -For a POST request, the resource manager calls the utility function `addPair`, which changes the `clichesList` resource: -``` -func addPair(cp *clichePair) string { - -   cp.Id = masterId  // assign a unique ID - -   masterId++        // update the ID counter - -   clichesList = append(clichesList, cp) // update the list - -   return "\nCreated: " + cp.Cliche + " " + cp.Counter + "\n" - -} - -``` - -The resource manager calls similar utility functions for the other CRUD operations. It’s worth repeating that the resource manager is the only goroutine to read or write the `clichesList` once the web server starts accepting requests. - -For web applications of any type, the `gorilla/mux` package provides request routing, request validation, and related services in a straightforward, intuitive API. The crud web app highlights the package’s main features. Give the package a test drive, and you’ll likely be a buyer. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/http-request-routing-validation-gorillamux - -作者:[Marty Kalin][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/mkalindepauledu -[1]:https://curl.haxx.se/ -[2]:https://github.com/gorilla/mux -[3]:http://condor.depaul.edu/mkalin diff --git a/sources/tech/20180823 Getting started with Sensu monitoring.md b/sources/tech/20180823 Getting started with Sensu monitoring.md deleted file mode 100644 index 7d0a65e306..0000000000 --- a/sources/tech/20180823 Getting started with Sensu monitoring.md +++ /dev/null @@ -1,290 +0,0 @@ -Getting started with Sensu monitoring -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e) - -Sensu is an open source infrastructure and application monitoring solution that monitors servers, services, and application health, and sends alerts and notifications with third-party integration. Written in Ruby, Sensu can use either [RabbitMQ][1] or [Redis][2] to handle messages. It uses Redis to store data. - -If you want to monitor your cloud infrastructure in a simple and efficient manner, Sensu is a good option. It can be integrated with many of the modern DevOps stacks your organization may already be using, such as [Slack][3], [HipChat][4], or [IRC][5], and it can even send mobile/pager alerts with [PagerDuty][6]. - -Sensu's [modular architecture][7] means every component can be installed on the same server or on completely separate machines. - -### Architecture - -Sensu's main communication mechanism is the Transport. 
Every Sensu component must connect to the Transport in order to send messages to each other. Transport can use either RabbitMQ (recommended in production) or Redis. - -Sensu Server processes event data and takes action. It registers clients and processes check results and monitoring events using filters, mutators, and handlers. The server publishes check definitions to the clients and the Sensu API provides a RESTful API, providing access to monitoring data and core functionality. - -[Sensu Client][8] executes checks either scheduled by Sensu Server or local checks definitions. Sensu uses a data store (Redis) to keep all the persistent data. Finally, [Uchiwa][9] is the web interface to communicate with Sensu API. - -![sensu_system.png][11] - -### Installing Sensu - -#### Prerequisites - - * One Linux installation to act as the server node (I used CentOS 7 for this article) - - * One or more Linux machines to monitor (clients) - - - - -#### Server side - -Sensu requires Redis to be installed. To install Redis, enable the EPEL repository: -``` -$ sudo yum install epel-release -y - -``` - -Then install Redis: -``` -$ sudo yum install redis -y - -``` - -Modify `/etc/redis.conf` to disable protected mode, listen on every interface, and set a password: -``` -$ sudo sed -i 's/^protected-mode yes/protected-mode no/g' /etc/redis.conf - -$ sudo sed -i 's/^bind 127.0.0.1/bind 0.0.0.0/g' /etc/redis.conf - -$ sudo sed -i 's/^# requirepass foobared/requirepass password123/g' /etc/redis.conf - -``` - -Enable and start Redis service: -``` -$ sudo systemctl enable redis -$ sudo systemctl start redis -``` - -Redis is now installed and ready to be used by Sensu. - -Now let’s install Sensu. - -First, configure the Sensu repository and install the packages: -``` -$ sudo tee /etc/yum.repos.d/sensu.repo << EOF -[sensu] -name=sensu -baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/ -gpgcheck=0 -enabled=1 -EOF - -$ sudo yum install sensu uchiwa -y -``` - -Let’s create the bare minimum configuration files for Sensu: -``` -$ sudo tee /etc/sensu/conf.d/api.json << EOF -{ -  "api": { -        "host": "127.0.0.1", -        "port": 4567 -  } -} -EOF -``` - -Next, configure `sensu-api` to listen on localhost, with Port 4567: -``` -$ sudo tee /etc/sensu/conf.d/redis.json << EOF -{ -  "redis": { -        "host": "", -        "port": 6379, -        "password": "password123" -  } -} -EOF - - -$ sudo tee /etc/sensu/conf.d/transport.json << EOF -{ -  "transport": { -        "name": "redis" -  } -} -EOF -``` - -In these two files, we configure Sensu to use Redis as the transport mechanism and the address where Redis will listen. Clients need to connect directly to the transport mechanism. These two files will be required on each client machine. -``` -$ sudo tee /etc/sensu/uchiwa.json << EOF -{ -   "sensu": [ -        { -        "name": "sensu", -        "host": "127.0.0.1", -        "port": 4567 -        } -   ], -   "uchiwa": { -        "host": "0.0.0.0", -        "port": 3000 -   } -} -EOF -``` - -In this file, we configure Uchiwa to listen on every interface (0.0.0.0) on Port 3000. We also configure Uchiwa to use `sensu-api` (already configured). 
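A stray comma in any of these handcrafted JSON files can keep Sensu from loading its configuration cleanly, so it can be worth validating them before going further. One quick way, assuming Python is available (it normally is on CentOS 7):
```
$ python -m json.tool /etc/sensu/conf.d/redis.json
```

If the file parses, the formatted JSON is echoed back; if not, you get the line and column of the error.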
- -For security reasons, change the owner of the configuration files you just created: -``` -$ sudo chown -R sensu:sensu /etc/sensu -``` - -Enable and start the Sensu services: -``` -$ sudo systemctl enable sensu-server sensu-api sensu-client -$ sudo systemctl start sensu-server sensu-api sensu-client -$ sudo systemctl enable uchiwa -$ sudo systemctl start uchiwa -``` - -Try accessing the Uchiwa website: http://:3000 - -For production environments, it’s recommended to run a cluster of RabbitMQ as the Transport instead of Redis (a Redis cluster can be used in production too), and to run more than one instance of Sensu Server and API for load balancing and high availability. - -Sensu is now installed. Now let’s configure the clients. - -#### Client side - -To add a new client, you will need to enable Sensu repository on the client machines by creating the file `/etc/yum.repos.d/sensu.repo`. -``` -$ sudo tee /etc/yum.repos.d/sensu.repo << EOF -[sensu] -name=sensu -baseurl=https://sensu.global.ssl.fastly.net/yum/\$releasever/\$basearch/ -gpgcheck=0 -enabled=1 -EOF -``` - -With the repository enabled, install the package Sensu: -``` -$ sudo yum install sensu -y -``` - -To configure `sensu-client`, create the same `redis.json` and `transport.json` created in the server machine, as well as the `client.json` configuration file: -``` -$ sudo tee /etc/sensu/conf.d/client.json << EOF -{ -  "client": { -        "name": "rhel-client", -        "environment": "development", -        "subscriptions": [ -        "frontend" -        ] -  } -} -EOF -``` - -In the name field, specify a name to identify this client (typically the hostname). The environment field can help you filter, and subscription defines which monitoring checks will be executed by the client. - -Finally, enable and start the services and check in Uchiwa, as the new client will register automatically: -``` -$ sudo systemctl enable sensu-client -$ sudo systemctl start sensu-client -``` - -### Sensu checks - -Sensu checks have two components: a plugin and a definition. - -Sensu is compatible with the [Nagios check plugin specification][12], so any check for Nagios can be used without modification. Checks are executable files and are run by the Sensu client. - -Check definitions let Sensu know how, where, and when to run the plugin. - -#### Client side - -Let’s install one check plugin on the client machine. Remember, this plugin will be executed on the clients. - -Enable EPEL and install `nagios-plugins-http` : -``` -$ sudo yum install -y epel-release -$ sudo yum install -y nagios-plugins-http -``` - -Now let’s explore the plugin by executing it manually. Try checking the status of a web server running on the client machine. It should fail as we don’t have a web server running: -``` -$ /usr/lib64/nagios/plugins/check_http -I 127.0.0.1 -connect to address 127.0.0.1 and port 80: Connection refused -HTTP CRITICAL - Unable to open TCP socket -``` - -It failed, as expected. Check the return code of the execution: -``` -$ echo $? -2 - -``` - -The Nagios check plugin specification defines four return codes for the plugin execution: - -| **Plugin return code** | **State** | -|------------------------|-----------| -| 0 | OK | -| 1 | WARNING | -| 2 | CRITICAL | -| 3 | UNKNOWN | - -With this information, we can now create the check definition on the server. 
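Before doing that, note that nothing limits you to the prepackaged Nagios plugins: any executable that prints a status line and exits with one of the codes above can serve as a check. As a purely hypothetical example (not part of this article's setup), a minimal disk-usage check in Python might look like this, with arbitrary thresholds:
```
#!/usr/bin/env python
# Hypothetical check: alert on root filesystem usage
import os
import sys

WARN = 80   # percent used that triggers WARNING
CRIT = 90   # percent used that triggers CRITICAL

stat = os.statvfs("/")
used = 100.0 * (1.0 - float(stat.f_bavail) / float(stat.f_blocks))

if used >= CRIT:
    print("DISK CRITICAL - root filesystem %.0f%% used" % used)
    sys.exit(2)
elif used >= WARN:
    print("DISK WARNING - root filesystem %.0f%% used" % used)
    sys.exit(1)

print("DISK OK - root filesystem %.0f%% used" % used)
sys.exit(0)
```

Make it executable with `chmod +x` and it can be scheduled exactly like the packaged `check_http` plugin.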
- -#### Server side - -On the server machine, create the file `/etc/sensu/conf.d/check_http.json`: -``` -{ -  "checks": { -    "check_http": { -      "command": "/usr/lib64/nagios/plugins/check_http -I 127.0.0.1", -      "interval": 10, -      "subscribers": [ -        "frontend" -      ] -    } -  } -} -``` - -In the command field, use the command we tested before. `Interval` will tell Sensu how frequently, in seconds, this check should be executed. Finally, `subscribers` will define the clients where the check will be executed. - -Restart both sensu-api and sensu-server and confirm that the new check is available in Uchiwa. -``` -$ sudo systemctl restart sensu-api sensu-server -``` - -### What’s next? - -Sensu is a powerful tool, and this article covers just a glimpse of what it can do. See the [documentation][13] to learn more, and visit the Sensu site to learn more about the [Sensu community][14]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/getting-started-sensu-monitoring-solution - -作者:[Michael Zamot][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/mzamot -[1]:https://www.rabbitmq.com/ -[2]:https://redis.io/topics/config -[3]:https://slack.com/ -[4]:https://en.wikipedia.org/wiki/HipChat -[5]:http://www.irc.org/ -[6]:https://www.pagerduty.com/ -[7]:https://docs.sensu.io/sensu-core/1.4/overview/architecture/ -[8]:https://docs.sensu.io/sensu-core/1.4/installation/install-sensu-client/ -[9]:https://uchiwa.io/#/ -[10]:/file/406576 -[11]:https://opensource.com/sites/default/files/uploads/sensu_system.png (sensu_system.png) -[12]:https://assets.nagios.com/downloads/nagioscore/docs/nagioscore/4/en/pluginapi.html -[13]:https://docs.sensu.io/ -[14]:https://sensu.io/community diff --git a/sources/tech/20180828 An Introduction to Quantum Computing with Open Source Cirq Framework.md b/sources/tech/20180828 An Introduction to Quantum Computing with Open Source Cirq Framework.md deleted file mode 100644 index 8cec20916d..0000000000 --- a/sources/tech/20180828 An Introduction to Quantum Computing with Open Source Cirq Framework.md +++ /dev/null @@ -1,228 +0,0 @@ -An Introduction to Quantum Computing with Open Source Cirq Framework -====== -As the title suggests what we are about to begin discussing, this article is an effort to understand how far we have come in Quantum Computing and where we are headed in the field in order to accelerate scientific and technological research, through an Open Source perspective with Cirq. - -First, we will introduce you to the world of Quantum Computing. We will try our best to explain the basic idea behind the same before we look into how Cirq would be playing a significant role in the future of Quantum Computing. Cirq, as you might have heard of recently, has been breaking news in the field and in this Open Science article, we will try to find out why. - - - -Before we start with what Quantum Computing is, it is essential to get to know about the term Quantum, that is, a [subatomic particle][1] referring to the smallest known entity. The word [Quantum][2] is based on the Latin word Quantus, meaning, “how little”, as described in this short video: - - - -It will be easier for us to understand Quantum Computing by comparing it first to Classical Computing. 
Classical Computing refers to how today’s conventional computers are designed to work. The device with which you are reading this article right now, can also be referred to as a Classical Computing Device. - -### Classical Computing - -Classical Computing is just another way to describe how a conventional computer works. They work via a binary system, i.e, information is stored using either 1 or 0. Our Classical computers cannot understand any other form. - -In literal terms inside the computer, a transistor can be either on (1) or off (0). Whatever information we provide input to, is translated into 0s and 1s, so that the computer can understand and store that information. Everything is represented only with the help of a combination of 0s and 1s. - - - -### Quantum Computing - -Quantum Computing, on the other hand, does not follow an “on or off” model like Classical Computing. Instead, it can simultaneously handle multiple states of information with help of two phenomena called [superimposition and entanglement][3], thus accelerating computing at a much faster rate and also facilitating greater productivity in information storage. - -Please note that superposition and entanglement are [not the same phenomena][4]. - - - -![][5] - -So, if we have bits in Classical Computing, then in the case of Quantum Computing, we would have qubits (or Quantum bits) instead. To know more about the vast difference between the two, check this [page][6] from where the above pic was obtained for explanation. - -Quantum Computers are not going to replace our Classical Computers. But, there are certain humongous tasks that our Classical Computers will never be able to accomplish and that is when Quantum Computers would prove extremely resourceful. The following video describes the same in detail while also describing how Quantum Computers work: - - - -A comprehensive video on the progress in Quantum Computing so far: - - - -### Noisy Intermediate Scale Quantum - -According to the very recently updated research paper (31st July 2018), the term “Noisy” refers to inaccuracy because of producing an incorrect value caused by imperfect control over qubits. This inaccuracy is why there will be serious limitations on what Quantum devices can achieve in the near term. - -“Intermediate Scale” refers to the size of Quantum Computers which will be available in the next few years, where the number of qubits can range from 50 to a few hundred. 50 qubits is a significant milestone because that’s beyond what can be simulated by [brute force][7] using the most powerful existing digital [supercomputers][8]. Read more in the paper [here][9]. - -With the advent of Cirq, a lot is about to change. - -### What is Cirq? - -Cirq is a python framework for creating, editing, and invoking Noisy Intermediate Scale Quantum (NISQ) circuits that we just talked about. In other words, Cirq can address challenges to improve accuracy and reduce noise in Quantum Computing. - -Cirq does not necessarily require an actual Quantum Computer for execution. Cirq can also use a simulator-like interface to perform Quantum circuit simulations. - -Cirq is gradually grabbing a lot of pace, with one of its first users being [Zapata][10], formed last year by a [group of scientists][11] from Harvard University focused on Quantum Computing. - -### Getting started with Cirq on Linux - -The developers of the Open Source [Cirq library][12] recommend the installation in a [virtual python environment][13] like [virtualenv][14]. 
The developers’ installation guide for Linux can be found [here][15]. - -However, we successfully installed and tested Cirq directly for Python3 on an Ubuntu 16.04 system via the following steps: - -#### Installing Cirq on Ubuntu - -![Cirq Framework for Quantum Computing in Linux][16] - -First, we would require pip or pip3 to install Cirq. [Pip][17] is a tool recommended for installing and managing Python packages. - -For Python 3.x versions, Pip can be installed with: -``` -sudo apt-get install python3-pip - -``` - -Python3 packages can be installed via: -``` -pip3 install - -``` - -We went ahead and installed the Cirq library with Pip3 for Python3: -``` -pip3 install cirq - -``` - -#### Enabling Plot and PDF generation (optional) - -Optional system dependencies not install-able with pip can be installed with: -``` -sudo apt-get install python3-tk texlive-latex-base latexmk - -``` - - * python3-tk is Python’s own graphic library which enables plotting functionality. - * texlive-latex-base and latexmk enable PDF writing functionality. - - - -Later, we successfully tested Cirq with the following command and code: -``` -python3 -c 'import cirq; print(cirq.google.Foxtail)' - -``` - -We got the resulting output as: - -![][18] - -#### Configuring Pycharm IDE for Cirq - -We also configured a Python IDE [PyCharm on Ubuntu][19] to test the same results: - -Since we installed Cirq for Python3 on our Linux system, we set the path to the project interpreter in the IDE settings to be: -``` -/usr/bin/python3 - -``` - -![][20] - -In the output above, you can note that the path to the project interpreter that we just set, is shown along with the path to the test program file (test.py). An exit code of 0 shows that the program has finished executing successfully without errors. - -So, that’s a ready-to-use IDE environment where you can import the Cirq library to start programming with Python and simulate Quantum circuits. - -#### Get started with Cirq - -A good place to start are the [examples][21] that have been made available on Cirq’s Github page. - -The developers have included this [tutorial][22] on GitHub to get started with learning Cirq. If you are serious about learning Quantum Computing, they recommend an excellent book called [“Quantum Computation and Quantum Information” by Nielsen and Chuang][23]. - -#### OpenFermion-Cirq - -[OpenFermion][24] is an open source library for obtaining and manipulating representations of fermionic systems (including Quantum Chemistry) for simulation on Quantum Computers. Fermionic systems are related to the generation of [fermions][25], which according to [particle physics][26], follow [Fermi-Dirac statistics][27]. - -OpenFermion has been hailed as [a great practice tool][28] for chemists and researchers involved with [Quantum Chemistry][29]. The main focus of Quantum Chemistry is the application of [Quantum Mechanics][30] in physical models and experiments of chemical systems. Quantum Chemistry is also referred to as [Molecular Quantum Mechanics][31]. - -The advent of Cirq has now made it possible for OpenFermion to extend its functionality by providing routines and tools for using Cirq to compile and compose circuits for Quantum simulation algorithms. - -#### Google Bristlecone - -On March 5, 2018, Google presented [Bristlecone][32], their new Quantum processor, at the annual [American Physical Society meeting][33] in Los Angeles. 
The [gate-based superconducting system][34] provides a test platform for research into [system error rates][35] and [scalability][36] of Google’s [qubit technology][37], along-with applications in Quantum [simulation][38], [optimization][39], and [machine learning.][40] - -In the near future, Google wants to make its 72 qubit Bristlecone Quantum processor [cloud accessible][41]. Bristlecone will gradually become quite capable to perform a task that a Classical Supercomputer would not be able to complete in a reasonable amount of time. - -Cirq would make it easier for researchers to directly write programs for Bristlecone on the cloud, serving as a very convenient interface for real-time Quantum programming and testing. - -Cirq will allow us to: - - * Fine tune control over Quantum circuits, - * Specify [gate][42] behavior using native gates, - * Place gates appropriately on the device & - * Schedule the timing of these gates. - - - -### The Open Science Perspective on Cirq - -As we all know Cirq is Open Source on GitHub, its addition to the Open Source Scientific Communities, especially those which are focused on Quantum Research, can now efficiently collaborate to solve the current challenges in Quantum Computing today by developing new ways to reduce error rates and improve accuracy in the existing Quantum models. - -Had Cirq not followed an Open Source model, things would have definitely been a lot more challenging. A great initiative would have been missed out and we would not have been one step closer in the field of Quantum Computing. - -### Summary - -To summarize in the end, we first introduced you to the concept of Quantum Computing by comparing it to existing Classical Computing techniques followed by a very important video on recent developmental updates in Quantum Computing since last year. We then briefly discussed Noisy Intermediate Scale Quantum, which is what Cirq is specifically built for. - -We saw how we can install and test Cirq on an Ubuntu system. We also tested the installation for usability on an IDE environment with some resources to get started to learn the concept. - -Finally, we also saw two examples of how Cirq would be an essential advantage in the development of research in Quantum Computing, namely OpenFermion and Bristlecone. We concluded the discussion by highlighting some thoughts on Cirq with an Open Science Perspective. - -We hope we were able to introduce you to Quantum Computing with Cirq in an easy to understand manner. If you have any feedback related to the same, please let us know in the comments section. Thank you for reading and we look forward to see you in our next Open Science article. 
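As a postscript for readers who want to type something in right away: a first Cirq program can be as small as the sketch below, which builds a two-qubit Bell-state circuit and runs it on the bundled simulator. It is written in the spirit of the official tutorial, so treat the exact API names as approximate, since Cirq's interfaces have evolved between releases:
```
import cirq

# Two qubits on a line
q0, q1 = cirq.LineQubit.range(2)

# Bell-state circuit: Hadamard on q0, then CNOT, then measure both qubits
circuit = cirq.Circuit()
circuit.append([cirq.H(q0), cirq.CNOT(q0, q1), cirq.measure(q0, q1, key='m')])
print(circuit)

# Run the circuit on Cirq's built-in simulator
result = cirq.Simulator().run(circuit, repetitions=20)
print(result.histogram(key='m'))
```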
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/qunatum-computing-cirq-framework/ - -作者:[Avimanyu Bandyopadhyay][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/avimanyu/ -[1]:https://en.wikipedia.org/wiki/Subatomic_particle -[2]:https://en.wikipedia.org/wiki/Quantum -[3]:https://www.clerro.com/guide/491/quantum-superposition-and-entanglement-explained -[4]:https://physics.stackexchange.com/questions/148131/can-quantum-entanglement-and-quantum-superposition-be-considered-the-same-phenom -[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/bit-vs-qubit.jpg -[6]:http://www.rfwireless-world.com/Terminology/Difference-between-Bit-and-Qubit.html -[7]:https://en.wikipedia.org/wiki/Proof_by_exhaustion -[8]:https://www.explainthatstuff.com/how-supercomputers-work.html -[9]:https://arxiv.org/abs/1801.00862 -[10]:https://www.xconomy.com/san-francisco/2018/07/19/google-partners-with-zapata-on-open-source-quantum-computing-effort/ -[11]:https://www.zapatacomputing.com/about/ -[12]:https://github.com/quantumlib/Cirq -[13]:https://itsfoss.com/python-setup-linux/ -[14]:https://virtualenv.pypa.io -[15]:https://cirq.readthedocs.io/en/latest/install.html#installing-on-linux -[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cirq-framework-linux.jpeg -[17]:https://pypi.org/project/pip/ -[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cirq-test-output.jpg -[19]:https://itsfoss.com/install-pycharm-ubuntu/ -[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cirq-tested-on-pycharm.jpg -[21]:https://github.com/quantumlib/Cirq/tree/master/examples -[22]:https://github.com/quantumlib/Cirq/blob/master/docs/tutorial.md -[23]:http://mmrc.amss.cas.cn/tlb/201702/W020170224608149940643.pdf -[24]:http://openfermion.org -[25]:https://en.wikipedia.org/wiki/Fermion -[26]:https://en.wikipedia.org/wiki/Particle_physics -[27]:https://en.wikipedia.org/wiki/Fermi-Dirac_statistics -[28]:https://phys.org/news/2018-03-openfermion-tool-quantum-coding.html -[29]:https://en.wikipedia.org/wiki/Quantum_chemistry -[30]:https://en.wikipedia.org/wiki/Quantum_mechanics -[31]:https://ocw.mit.edu/courses/chemical-engineering/10-675j-computational-quantum-mechanics-of-molecular-and-extended-systems-fall-2004/lecture-notes/ -[32]:https://techcrunch.com/2018/03/05/googles-new-bristlecone-processor-brings-it-one-step-closer-to-quantum-supremacy/ -[33]:http://meetings.aps.org/Meeting/MAR18/Content/3475 -[34]:https://en.wikipedia.org/wiki/Superconducting_quantum_computing -[35]:https://en.wikipedia.org/wiki/Quantum_error_correction -[36]:https://en.wikipedia.org/wiki/Scalability -[37]:https://research.googleblog.com/2015/03/a-step-closer-to-quantum-computation.html -[38]:https://research.googleblog.com/2017/10/announcing-openfermion-open-source.html -[39]:https://research.googleblog.com/2016/06/quantum-annealing-with-digital-twist.html -[40]:https://arxiv.org/abs/1802.06002 -[41]:https://www.computerworld.com.au/article/644051/google-launches-quantum-framework-cirq-plans-bristlecone-cloud-move/ -[42]:https://en.wikipedia.org/wiki/Logic_gate diff --git a/sources/tech/20180828 Linux for Beginners- Moving Things Around.md b/sources/tech/20180828 Linux for Beginners- Moving Things Around.md 
deleted file mode 100644 index abefc7c6f5..0000000000 --- a/sources/tech/20180828 Linux for Beginners- Moving Things Around.md +++ /dev/null @@ -1,201 +0,0 @@ -Linux for Beginners: Moving Things Around -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/filesystem-linux.jpg?itok=NQCoYl1f) - -In previous installments of this series, [you learned about directories][1] and how [permissions to access directories work][2]. Most of what you learned in those articles can be applied to files, except how to make a file executable. - -So let's deal with that before moving on. - -### No _.exe_ Needed - -In other operating systems, the nature of a file is often determined by its extension. If a file has a _.jpg_ extension, the OS guesses it is an image; if it ends in _.wav_ , it is an audio file; and if it has an _.exe_ tacked onto the end of the file name, it is a program you can execute. - -This leads to serious problems, like trojans posing as documents. Fortunately, that is not how things work in Linux. Sure, you may see occasional executable file endings in _.sh_ that indicate they are runnable shell scripts, but this is mostly for the benefit of humans eyeballing files, the same way when you use `ls --color`, the names of executable files show up in bright green. - -The fact is most applications have no extension at all. What determines whether a file is really program is the _x_ (for _executable_ ) bit. You can make any file executable by running -``` -chmod a+x some_program - -``` - -regardless of its extension or lack thereof. The `x` in the command above sets the _x_ bit and the `a` says you are setting it for _all_ users. You could also set it only for the group of users that own the file (`g+x`), or for only one user, the owner (`u+x`). - -Although we will be covering creating and running scripts from the command line later in this series, know that you can run a program by writing the path to it and then tacking on the name of the program on the end: -``` -path/to/directory/some_program - -``` - -Or, if you are currently in the same directory, you can use: -``` -./some_program - -``` - -There are other ways of making your program available from anywhere in the directory tree (hint: look up the `$PATH` environment variable), but you will be reading about those when we talk about shell scripting. - -### Copying, Moving, Linking - -Obviously, there are more ways of modifying and handling files from the command line than just playing around with their permissions. Most applications will create a new file if you still try to open a file that doesn't exist. Both -``` -nano test.txt - -``` - -and -``` -vim test.txt - -``` - -([nano][3] and [vim][4] being to popular command line text editors) will create an empty _test.txt_ file for you to edit if _test.txt_ didn't exist beforehand. - -You can also create an empty file by _touching_ it: -``` -touch test.txt - -``` - -Will create a file, but not open it in any application. - -You can use `cp` to make a copy of a file in another location or under a new name: -``` -cp test.txt copy_of_test.txt - -``` - -You can also copy a whole bunch of files: -``` -cp *.png /home/images - -``` - -The instruction above copies all the PNG files in the current directory into an _images/_ directory hanging off of your home directory. The _images/_ directory has to exist before you try this, or `cp` will show an error. 
Also, be warned that, if you copy a file to a directory that contains another file with the same name, `cp` will silently overwrite the old file with the new one. - -You can use -``` -cp -i *.png /home/images - -``` - -If you want `cp` to warn you of any dangers (the `-i` options stands for _interactive_ ). - -You can also copy whole directories, but you need the `-r` option for that: -``` -cp -rv directory_a/ directory_b - -``` - -The `-r` option stands for _recursive_ , meaning that `cp` will drill down into _directory_a_ , copying over all the files and subdirectories contained within. I personally like to include the `-v` option, as it makes `cp` _verbose_ , meaning that it will show you what it is doing instead of just copying silently and then exiting. - -The `mv` command moves stuff. That is, it changes files from one location to another. In its simplest form, `mv` looks a lot like `cp`: -``` -mv test.txt new_test.txt - -``` - -The command above makes _new_test.txt_ appear and _test.txt_ disappear. -``` -mv *.png /home/images - -``` - -Moves all the PNG files in the current directory to a directory called _images/_ hanging of your home directory. Again you have to be careful you do not overwrite existing files by accident. Use -``` -mv -i *.png /home/images - -``` - -the same way you would with `cp` if you want to be on the safe side. - -Apart from moving versus copying, another difference between `mv` and `cp`is when you move a directory: -``` -mv directory_a/ directory_b - -``` - -No need for a recursive flag here. This is because what you are really doing is renaming the directory, the same way in the first example, you were renaming the file*. In fact, even when you "move" a file from one directory to another, as long as both directories are on the same storage device and partition, you are renaming the file. - -You can do an experiment to prove it. `time` is a tool that lets you measure how long a command takes to execute. Look for a hefty file, something that weighs several hundred MBs or even some GBs (say, something like a long video) and try copying it from one directory to another like this: -``` -$ time cp hefty_file.mkv another_directory/ -real 0m3,868s -user 0m0,016s -sys 0m0,887s - -``` - -In bold is what you have to type into the terminal and below what `time` outputs. The number to focus on is the one on the first line, _real_ time. It takes nearly 4 seconds to copy the 355 MBs of _hefty_file.mkv_ to _another_directory/_. - -Now let's try moving it: -``` -$ time mv hefty_file.mkv another_directory/ -real 0m0,004s -user 0m0,000s -sys 0m0,003s - -``` - -Moving is nearly instantaneous! This is counterintuitive, since it would seem that `mv` would have to copy the file and then delete the original. That is two things `mv` has to do versus `cp`'s one. But, somehow, `mv` is 1000 times faster. - -That is because the file system's structure, with all its tree of directories, only exists for the users convenience. At the beginning of each partition there is something called a _partition table_ that tells the operating system where to find each file on the actual physical disk. On the disk, data is not split up into directories or even files. [There are tracks, sectors and clusters instead][5]. When you "move" a file within the same partition, what the operating system does is just change the entry for that file in the partition table, but it still points to the same cluster of information on the disk. - -Yes! Moving is a lie! At least within the same partition that is. 
If you try and move a file to a different partition or a different device, `mv` is still fast, but is noticeably slower than moving stuff around within the same partition. That is because this time there is actually copying and erasing of data going on. - -### Renaming - -There are several distinct command line `rename` utilities around. None are fixtures like `cp` or `mv` and they can work in slightly different ways. What they all have in common is that they are used to change _parts_ of the names of files. - -In Debian and Ubuntu, the default `rename` utility uses [regular expressions][6] (patterns of strings of characters) to mass change files in a directory. The instruction: -``` -rename 's/\.JPEG$/.jpg/' * - -``` - -will change all the extensions of files with the extension _JPEG_ to _jpg_. The file _IMG001.JPEG_ becomes _IMG001.jpg_ , _my_pic.JPEG_ becomes _my_pic.jpg_ , and so on. - -Another version of `rename` available by default in Manjaro, a derivative of Arch, is much simpler, but arguably less powerful: -``` -rename .JPEG .jpg * - -``` - -This does the same renaming as you saw above. In this version, `.JPEG` is the string of characters you want to change, `.jpg` is what you want to change it to, and `*` represents all the files in the current directory. - -The bottom line is that you are better off using `mv` if all you want to do is rename one file or directory, and that's because `mv` is realiably the same in all distributions everywhere. - -### Learning more - -Check out the both `mv` and `cp`'s _man_ pages to learn more. Run -``` -man cp - -``` - -or -``` -man mv - -``` - -to read about all the options these commands come with and which make them more powerful and safer to use. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/2018/8/linux-beginners-moving-things-around - -作者:[Paul Brown][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/bro66 -[1]: https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux -[2]: https://www.linux.com/blog/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts-part-2 -[3]: https://www.nano-editor.org/ -[4]: https://www.vim.org/ -[5]: https://en.wikipedia.org/wiki/Disk_sector -[6]: https://en.wikipedia.org/wiki/Regular_expression diff --git a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md deleted file mode 100644 index 9b3a4f45ce..0000000000 --- a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md +++ /dev/null @@ -1,75 +0,0 @@ -translating---geekpi - -Publishing Markdown to HTML with MDwiki -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o) - -There are plenty of reasons to like Markdown, a simple language with an easy-to-learn syntax that can be used with any text editor. Using tools like [Pandoc][1], you can convert Markdown text to [a variety of popular formats][2], including HTML. You can also automate that conversion process in a web server. An HTML5 and JavaScript application called [MDwiki][3], created by Timo Dörr, can take a stack of Markdown files and turn them into a website when requested from a browser. 
The MDwiki site includes a how-to guide and other information to help you get started: - -![MDwiki site getting started][5] - -What an Mdwiki site looks like. - -Inside the web server, a basic MDwiki site looks like this: - -![MDwiki site inside web server][7] - -What the webserver folder for that site looks like. - -I renamed the MDwiki HTML file `START.HTML` for this project. There is also one Markdown file that deals with navigation and a JSON file to hold a few configuration settings. Everything else is site content. - -While the overall website design is pretty much fixed by MDwiki, the content, styling, and number of pages are not. You can view a selection of different sites generated by MDwiki at [the MDwiki site][8]. It is fair to say that MDwiki sites lack the visual appeal that a web designer could achieve—but they are functional, and users should balance their simple appearance against the speed and ease of creating and editing them. - -Markdown comes in various flavors that extend a stable core functionality for different specific purposes. MDwiki uses GitHub flavor [Markdown][9], which adds features such as formatted code blocks and syntax highlighting for popular programming languages, making it well-suited for producing program documentation and tutorials. - -MDwiki also supports what it calls "gimmicks," which add extra functionality such as embedding YouTube video content and displaying mathematical formulas. These are worth exploring if you need them for specific projects. I find MDwiki an ideal tool for creating technical documentation and educational resources. I have also discovered some tricks and hacks that might not be immediately apparent. - -MDwiki works with any modern web browser when deployed in a web server; however, you do not need a web server if you access MDwiki with Mozilla Firefox. Most MDwiki users will opt to deploy completed projects on a web server to avoid excluding potential users, but development and testing can be done with just a text editor and Firefox. Completed MDwiki projects that are loaded into a Moodle Virtual Learning Environment (VLE) can be read by any modern browser, which could be useful in educational contexts. (This is probably also true for other VLE software, but you should test that.) - -MDwiki's default color scheme is not ideal for all projects, but you can replace it with another theme downloaded from [Bootswatch.com][10]. To do this, simply open the MDwiki HTML file in an editor, take out the `extlib/css/bootstrap-3.0.0.min.css` code, and insert the downloaded Bootswatch theme. There is also an MDwiki gimmick that lets users choose a Bootswatch theme to replace the default after MDwiki loads in their browser. I often work with users who have visual impairments, and they tend to prefer high-contrast themes, with white text on a dark background. - -![MDwiki screen with Bootswatch Superhero theme][12] - -MDwiki screen using the Bootswatch Superhero theme - -MDwiki, Markdown files, and static images are fine for many purposes. However, you might sometimes want to include, say, a JavaScript slideshow or a feedback form. Markdown files can include HTML code, but mixing Markdown with HTML can get confusing. One solution is to create the feature you want in a separate HTML file and display it inside a Markdown file with an iframe tag. I took this idea from the [Twine Cookbook][13], a support site for the Twine interactive fiction engine. 
The Twine Cookbook doesn’t actually use MDwiki, but combining Markdown and iframe tags opens up a wide range of creative possibilities. - -Here is an example: - -This HTML will display an HTML page created by the Twine interactive fiction engine inside a Markdown file. -``` - -``` - -The result in an MDwiki-generated site looks like this: - -![](https://opensource.com/sites/default/files/uploads/4_-_mdwiki_site_summary.png) - -In short, MDwiki is an excellent small application that achieves its purpose extremely well. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/markdown-html-publishing - -作者:[Peter Cheer][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/petercheer -[1]: https://pandoc.org/ -[2]: https://opensource.com/downloads/pandoc-cheat-sheet -[3]: http://dynalon.github.io/mdwiki/#!index.md -[4]: https://opensource.com/file/407306 -[5]: https://opensource.com/sites/default/files/uploads/1_-_mdwiki_screenshot.png (MDwiki site getting started) -[6]: https://opensource.com/file/407311 -[7]: https://opensource.com/sites/default/files/uploads/2_-_mdwiki_inside_web_server.png (MDwiki site inside web server) -[8]: http://dynalon.github.io/mdwiki/#!examples.md -[9]: https://guides.github.com/features/mastering-markdown/ -[10]: https://bootswatch.com/ -[11]: https://opensource.com/file/407316 -[12]: https://opensource.com/sites/default/files/uploads/3_-_mdwiki_bootswatch_superhero.png (MDwiki screen with Bootswatch Superhero theme) -[13]: https://github.com/iftechfoundation/twine-cookbook diff --git a/sources/tech/20180902 Learning BASIC Like It-s 1983.md b/sources/tech/20180902 Learning BASIC Like It-s 1983.md deleted file mode 100644 index 5790ef6e88..0000000000 --- a/sources/tech/20180902 Learning BASIC Like It-s 1983.md +++ /dev/null @@ -1,184 +0,0 @@ -Learning BASIC Like It's 1983 -====== -I was not yet alive in 1983. This is something that I occasionally regret. I am especially sorry that I did not experience the 8-bit computer era as it was happening, because I think the people that first encountered computers when they were relatively simple and constrained have a huge advantage over the rest of us. - -Today, (almost) everyone knows how to use a computer, but very few people, even in the computing industry, grasp all of what is going on inside of any single machine. There are now [so many layers of software][1] doing so many different things that one struggles to identify the parts that are essential. In 1983, though, home computers were unsophisticated enough that a diligent person could learn how a particular computer worked through and through. That person is today probably less mystified than I am by all the abstractions that modern operating systems pile on top of the hardware. I expect that these layers of abstractions were easy to understand one by one as they were introduced; today, new programmers have to try to understand them all by working top to bottom and backward in time. - -Many famous programmers, particularly in the video game industry, started programming games in childhood on 8-bit computers like the Apple II and the Commodore 64. John Romero, Richard Garriott, and Chris Roberts are all examples. It’s easy to see how this happened. 
In the 8-bit computer era, many games were available only as printed BASIC listings in computer magazines and [books][2]. If you wanted to play one of those games, you had to type in the whole program by hand. Inevitably, you would get something wrong, so you would have to debug your program. By the time you got it working, you knew enough about how the program functioned to start modifying it yourself. If you were an avid gamer, you became a good programmer almost by necessity. - -I also played computer games throughout my childhood. But the games I played came on CD-ROMs. I sometimes found myself having to google how to fix a crashing installer, which would involve editing the Windows Registry or something like that. This kind of minor troubleshooting may have made me comfortable enough with computers to consider studying computer science in college. But it never taught me anything crucial about how computers worked or how to control them. - -Now, of course, I tell computers what to do for a living. All the same, I can’t help feeling that I missed out on some fundamental insight afforded only to those that grew up programming simpler computers. What would it have been like to encounter computers for the first time in the early 1980s? How would that have been different from the experience of using a computer today? - -This post is going to be a little different from the usual Two-Bit History post because I’m going to try to imagine an answer to these questions. - -### 1983 - -It was just last week that you saw [the Commodore 64 ad][3] on TV. Now that M*A*S*H was over, you were in the market for something new to do on Monday nights. This Commodore 64 thing looked even better than the Apple II that Rudy’s family had in their basement. Plus, the ad promised that the new computer would soon bring friends “knocking down” your door. You knew several people at school that would rather be hanging out at your house than Rudy’s anyway, if only they could play Zork there. - -So you persuaded your parents to buy one. Your mother said that they would consider it only if having a home computer meant that you stayed away from the arcade. You reluctantly agreed. Your father thought he would start tracking the family’s finances in MultiPlan, the spreadsheet program he had heard about, which is why the computer got put in the living room. A year later, though, you would be the only one still using it. You were finally allowed to put it on the desk in your bedroom, right under your Police poster. - -(Your sister protested this decision, but it was 1983 and computers [weren’t for her][4].) - -Dad picked it up from [ComputerLand][5] on the way home from work. The two of you laid the box down next to the TV and opened it. “WELCOME TO THE WORLD OF FRIENDLY COMPUTING,” said the packaging. Twenty minutes later, you weren’t convinced—the two of you were still trying to connect the Commodore to the TV set and wondering whether the TV’s antenna cable was the 75-ohm or 300-ohm coax type. But eventually you were able to turn your TV to channel 3 and see a grainy, purple image. - -![Commodore 64 startup screen][6] - -`READY`, the computer reported. Your father pushed the computer toward you, indicating that you should be the first to give it a try. `HELLO`, you typed, carefully hunting for each letter. The computer’s response was baffling. - -![Commodore 64 syntax error][7] - -You tried typing in a few different words, but the response was always the same. 
Your father said that you had better read through the rest of the manual. That would be no mean feat—[the manual that came with the Commodore 64][8] was a small book. But that didn’t bother you, because the introduction to the manual foreshadowed wonders. - -The Commodore 64, it claimed, had “the most advanced picture maker in the microcomputer industry,” which would allow you “to design your own pictures in four different colors, just like the ones you see on arcade type video games.” The Commodore 64 also had “built-in music and sound effects that rival many well known music synthesizers.” All of these tools would be put in your hands, because the manual would walk you through it all: - -> Just as important as all the available hardware is the fact that this USER’S GUIDE will help you develop your understanding of computers. It won’t tell you everything there is to know about computers, but it will refer you to a wide variety of publications for more detailed information about the topics presented. Commodore wants you to really enjoy your new COMMODORE 64. And to have fun, remember: programming is not the kind of thing you can learn in a day. Be patient with yourself as you go through the USER’S GUIDE. - -That night, in bed, you read through the entire first three chapters—”Setup,” “Getting Started,” and “Beginning BASIC Programming”—before finally succumbing to sleep with the manual splayed across your chest. - -### Commodore BASIC - -Now, it’s Saturday morning and you’re eager to try out what you’ve learned. One of the first things the manual teaches you how to do is change the colors on the display. You follow the instructions, pressing `CTRL-9` to enter reverse type mode and then holding down the space bar to create long lines. You swap between colors using `CTRL-1` through `CTRL-8`, reveling in your sudden new power over the TV screen. - -![Commodore 64 color bands][9] - -As cool as this is, you realize it doesn’t count as programming. In order to program the computer, you learned last night, you have to speak to it in a language called BASIC. To you, BASIC seems like something out of Star Wars, but BASIC is, by 1983, almost two decades old. It was invented by two Dartmouth professors, John Kemeny and Tom Kurtz, who wanted to make computing accessible to undergraduates in the social sciences and humanities. It was widely available on minicomputers and popular in college math classes. It then became standard on microcomputers after Bill Gates and Paul Allen wrote the MicroSoft BASIC interpreter for the Altair. But the manual doesn’t explain any of this and you won’t learn it for many years. - -One of the first BASIC commands the manual suggests you try is the `PRINT` command. You type in `PRINT "COMMODORE 64"`, slowly, since it takes you a while to find the quotation mark symbol above the `2` key. You hit `RETURN` and this time, instead of complaining, the computer does exactly what you told it to do and displays “COMMODORE 64” on the next line. - -Now you try using the `PRINT` command on all sorts of different things: two numbers added together, two numbers multiplied together, even several decimal numbers. You stop typing out `PRINT` and instead use `?`, since the manual has advised you that `?` is an abbreviation for `PRINT` often used by expert programmers. You feel like an expert already, but then you remember that you haven’t even made it to chapter three, “Beginning BASIC Programming.” - -You get there soon enough. 
The chapter begins by prompting you to write your first real BASIC program. You type in `NEW` and hit `RETURN`, which gives you a clean slate. You then type your program in: - -``` -10 ?"COMMODORE 64" -20 GOTO 10 -``` - -The 10 and the 20, the manual explains, are line numbers. They order the statements for the computer. They also allow the programmer to refer to other lines of the program in certain commands, just like you’ve done here with the `GOTO` command, which directs the program back to line 10. “It is good programming practice,” the manual opines, “to number lines in increments of 10—in case you need to insert some statements later on.” - -You type `RUN` and stare as the screen clogs with “COMMODORE 64,” repeated over and over. - -![Commodore 64 showing result of printing "Commodore 64" repeatedly][10] - -You’re not certain that this isn’t going to blow up your computer. It takes you a second to remember that you are supposed to hit the `RUN/STOP` key to break the loop. - -The next few sections of the manual teach you about variables, which the manual tells you are like “a number of boxes within the computer that can each hold a number or a string of text characters.” Variables that end in a `%` symbol are whole numbers, while variables ending in a `$` symbol are strings of characters. All other variables are something called “floating point” variables. The manual warns you to be careful with variable names because only the first two letters of the name are actually recognized by the computer, even though nothing stops you from making a name as long as you want it to be. (This doesn’t particularly bother you, but you could see how 30 years from now this might strike someone as completely insane.) - -You then learn about the `IF... THEN...` and `FOR... NEXT...` constructs. With all these new tools, you feel equipped to tackle the next big challenge the manual throws at you. “If you’re the ambitious type,” it goads, “type in the following program and see what happens.” The program is longer and more complicated than any you have seen so far, but you’re dying to know what it does: - -``` -10 REM BOUNCING BALL -20 PRINT "{CLR/HOME}" -25 FOR X = 1 TO 10 : PRINT "{CRSR/DOWN}" : NEXT -30 FOR BL = 1 TO 40 -40 PRINT " ●{CRSR LEFT}";:REM (● is a Shift-Q) -50 FOR TM = 1 TO 5 -60 NEXT TM -70 NEXT BL -75 REM MOVE BALL RIGHT TO LEFT -80 FOR BL = 40 TO 1 STEP -1 -90 PRINT " {CRSR LEFT}{CRSR LEFT}●{CRSR LEFT}"; -100 FOR TM = 1 TO 5 -110 NEXT TM -120 NEXT BL -130 GOTO 20 -``` - -The program above takes advantage of one of the Commodore 64’s coolest features. Non-printable command characters, when passed to the `PRINT` command as part of a string, just do the action they usually perform instead of printing to the screen. This allows you to replay arbitrary chains of commands by printing strings from within your programs. - -It takes you a long time to type in the above program. You make several mistakes and have to re-enter some of the lines. But eventually you are able to type `RUN` and behold a masterpiece: - -![Commodore 64 bouncing ball][11] - -You think that this is a major contender for the coolest thing you have ever seen. 
You forget about it almost immediately though, because once you’ve learned about BASIC’s built-in functions like `RND` (which returns a random number) and `CHR$` (which returns the character matching a given number code), the manual shows you a program that many years from now will still be famous enough to be made the title of an [essay anthology][12]: - -``` -10 PRINT "{CLR/HOME}" -20 PRINT CHR$(205.5 + RND(1)); -40 GOTO 20 -``` - -When run, the above program produces a random maze: - -![Commodore 64 maze program][13] - -This is definitely the coolest thing you have ever seen. - -### PEEK and POKE - -You’ve now made it through the first four chapters of the Commodore 64 manual, including the chapter titled “Advanced BASIC,” so you’re feeling pretty proud of yourself. You’ve learned a lot this Saturday morning. But this afternoon (after a quick lunch break), you’re going to learn something that will make this magical machine in your living room much less mysterious. - -The next chapter in the manual is titled “Advanced Color and Graphic Commands.” It starts off by revisiting the colored bars that you were able to type out first thing this morning and shows you how you can do the same thing from a program. It then teaches you how to change the background colors of the screen. - -In order to do this, you need to use the BASIC `PEEK` and `POKE` commands. Those commands allow you to, respectively, examine and write to a memory address. The Commodore 64 has a main background color and a border color. Each is controlled by a specially designated memory address. You can write any color value you would like to those addresses to make the background or border that color. - -The manual explains: - -> Just as variables can be thought of as a representation of “boxes” within the machine where you placed your information, you can also think of some specially defined “boxes” within the computer that represent specific memory locations. -> -> The Commodore 64 looks at these memory locations to see what the screen’s background and border color should be, what characters are to be displayed on the screen—and where—and a host of other tasks. - -You write a program to cycle through all the available combinations of background and border color: - -``` -10 FOR BA = 0 TO 15 -20 FOR BO = 0 TO 15 -30 POKE 53280, BA -40 POKE 53281, BO -50 FOR X = 1 TO 500 : NEXT X -60 NEXT BO : NEXT BA -``` - -While the `POKE` commands, with their big operands, looked intimidating at first, now you see that the actual value of the number doesn’t matter that much. Obviously, you have to get the number right, but all the number represents is a “box” that Commodore just happened to store at address 53280. This box has a special purpose: Commodore uses it to determine what color the screen’s background should be. - -![Commodore 64 changing background colors][14] - -You think this is pretty neat. Just by writing to a special-purpose box in memory, you can control a fundamental property of the computer. You aren’t sure how the Commodore 64’s circuitry takes the value you write in memory and changes the color of the screen, but you’re okay not knowing that. At least you understand everything up to that point. - -### Special Boxes - -You don’t get through the entire manual that Saturday, since you are now starting to run out of steam. But you do eventually read all of it. In the process, you learn about many more of the Commodore 64’s special-purpose boxes. 
There are boxes you can write to control what is on screen—one box, in fact, for every place a character might appear. In chapter six, “Sprite Graphics,” you learn about the special-purpose boxes that allow you to define images that can be moved around and even scaled up and down. In chapter seven, “Creating Sound,” you learn about the boxes you can write to in order to make your Commodore 64 sing “Michael Row the Boat Ashore.” The Commodore 64, it turns out, has very little in the way of what you would later learn is called an API. Controlling the Commodore 64 mostly involves writing to memory addresses that have been given special meaning by the circuitry. - -The many years you ultimately spend writing to those special boxes stick with you. Even many decades later, when you find yourself programming a machine with an extensive graphics or sound API, you know that, behind the curtain, the API is ultimately writing to those boxes or something like them. You will sometimes wonder about younger programmers that have only ever used APIs, and wonder what they must think the API is doing for them. Maybe they think that the API is calling some other, hidden API. But then what do think that hidden API is calling? You will pity those younger programmers, because they must be very confused indeed. - -If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][15] on Twitter or subscribe to the [RSS feed][16] to make sure you know when a new post is out. - -Previously on TwoBitHistory… - -> Have you ever wondered what a 19th-century computer program would look like translated into C? -> -> This week's post: A detailed look at how Ada Lovelace's famous program worked and what it was trying to do. -> -> — TwoBitHistory (@TwoBitHistory) [August 19, 2018][17] - --------------------------------------------------------------------------------- - -via: https://twobithistory.org/2018/09/02/learning-basic.html - -作者:[Two-Bit History][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twobithistory.org -[b]: https://github.com/lujun9972 -[1]: https://www.youtube.com/watch?v=kZRE7HIO3vk -[2]: https://en.wikipedia.org/wiki/BASIC_Computer_Games -[3]: https://www.youtube.com/watch?v=ZekAbt2o6Ms -[4]: https://www.npr.org/sections/money/2014/10/21/357629765/when-women-stopped-coding -[5]: https://www.youtube.com/watch?v=MA_XtT3VAVM -[6]: https://twobithistory.org/images/c64_startup.png -[7]: https://twobithistory.org/images/c64_error.png -[8]: ftp://www.zimmers.net/pub/cbm/c64/manuals/C64_Users_Guide.pdf -[9]: https://twobithistory.org/images/c64_colors.png -[10]: https://twobithistory.org/images/c64_print_loop.png -[11]: https://twobithistory.org/images/c64_ball.gif -[12]: http://10print.org/ -[13]: https://twobithistory.org/images/c64_maze.gif -[14]: https://twobithistory.org/images/c64_background.gif -[15]: https://twitter.com/TwoBitHistory -[16]: https://twobithistory.org/feed.xml -[17]: https://twitter.com/TwoBitHistory/status/1030974776821665793?ref_src=twsrc%5Etfw diff --git a/sources/tech/20180911 Know Your Storage- Block, File - Object.md b/sources/tech/20180911 Know Your Storage- Block, File - Object.md deleted file mode 100644 index 24f179d9d5..0000000000 --- a/sources/tech/20180911 Know Your Storage- Block, File - Object.md +++ /dev/null @@ -1,62 +0,0 @@ -Know Your Storage: Block, File & Object -====== - 
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/block2_1920.jpg?itok=s1y6RLhT) - -Dealing with the tremendous amount of data generated today presents a big challenge for companies who create or consume such data. It’s a challenge for tech companies that are dealing with related storage issues. - -“Data is growing exponentially each year, and we find that the majority of data growth is due to increased consumption and industries adopting transformational projects to expand value. Certainly, the Internet of Things (IoT) has contributed greatly to data growth, but the key challenge for software-defined storage is how to address the use cases associated with data growth,” said Michael St. Jean, principal product marketing manager, Red Hat Storage. - -Every challenge is an opportunity. “The deluge of data being generated by old and new sources today is certainly presenting us with opportunities to meet our customers escalating needs in the areas of scale, performance, resiliency, and governance,” said Tad Brockway, General Manager for Azure Storage, Media and Edge. - -### Trinity of modern software-defined storage - -There are three different kinds of storage solutions -- block, file, and object -- each serving a different purpose while working with the others. - -Block storage is the oldest form of data storage, where data is stored in fixed-length blocks or chunks of data. Block storage is used in enterprise storage environments and usually is accessed using Fibre Channel or iSCSI interface. “Block storage requires an application to map where the data is stored on the storage device,” according to SUSE’s Larry Morris, Sr. Product Manager, Software Defined Storage. - -Block storage is virtualized in storage area network and software defined storage systems, which are abstracted logical devices that reside on a shared hardware infrastructure and are created and presented to the host operating system of a server, virtual server, or hypervisor via protocols like SCSI, SATA, SAS, FCP, FCoE, or iSCSI. - -“Block storage splits a single storage volume (like a virtual or cloud storage node, or a good old fashioned hard disk) into individual instances known as blocks,” said St. Jean. - -Each block exists independently and can be formatted with its own data transfer protocol and operating system — giving users complete configuration autonomy. Because block storage systems aren’t burdened with the same investigative file-finding duties as the file storage systems, block storage is a faster storage system. Pairing that speed with configuration flexibility makes block storage ideal for raw server storage or rich media databases. - -Block storage can be used to host operating systems, applications, databases, entire virtual machines and containers. Traditionally, block storage can only be accessed by individual machine, or machines in a cluster, to which it has been presented. - -### File-based storage - -File-based storage uses a filesystem to map where the data is stored on the storage device. It’s a dominant technology used on direct- and networked-attached storage system, and it takes care of two things: organizing data and representing it to users. “With file storage, data is arranged on the server side in the exact same format as the clients see it. This allows the user to request a file by some unique identifier — like a name, location, or URL — which is communicated to the storage system using specific data transfer protocols,” said St. Jean. 
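In practice, that unique identifier is usually just a path. Here is a small sketch of what file storage looks like from the client side; the server name, export, and file below are hypothetical and only for illustration. The client asks for data by name, and the filesystem decides which blocks on which device actually hold it.

```
# Mount a (hypothetical) NFS export and read a file by its path.
# The client works purely with names and directories, never with raw blocks.
$ sudo mount -t nfs fileserver.example.com:/exports/projects /mnt/projects
$ cat /mnt/projects/reports/q3-summary.txt
```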
- -The result is a type of hierarchical file structure that can be navigated from top to bottom. File storage is layered on top of block storage, allowing users to see and access data as files and folders, but restricting access to the blocks that stand up those files and folders. - -“File storage is typically represented by shared filesystems like NFS and CIFS/SMB that can be accessed by many servers over an IP network. Access can be controlled at a file, directory, and export level via user and group permissions. File storage can be used to store files needed by multiple users and machines, application binaries, databases, virtual machines, and can be used by containers,” explained Brockway. - -### Object storage - -Object storage is the newest form of data storage, and it provides a repository for unstructured data which separates the content from the indexing and allows the concatenation of multiple files into an object. An object is a piece of data paired with any associated metadata that provides context about the bytes contained within the object (things like how old or big the data is). Those two things together — the data and metadata — make an object. - -One advantage of object storage is the unique identifier associated with each piece of data. Accessing the data involves using the unique identifier and does not require the application or user to know where the data is actually stored. Object data is accessed through APIs. - -“The data stored in objects is uncompressed and unencrypted, and the objects themselves are arranged in object stores (a central repository filled with many other objects) or containers (a package that contains all of the files an application needs to run). Objects, object stores, and containers are very flat in nature — compared to the hierarchical structure of file storage systems — which allow them to be accessed very quickly at huge scale,” explained St. Jean. - -Object stores can scale to many petabytes to accommodate the largest datasets and are a great choice for images, audio, video, logs, backups, and data used by analytics services. - -### Conclusion - -Now you know about the various types of storage and how they are used. Stay tuned to learn more about software-defined storage as we examine the topic in the future. - -Join us at [Open Source Summit + Embedded Linux Conference Europe][1] in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/2018/9/know-your-storage-block-file-object - -作者:[Swapnil Bhartiya][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/arnieswap -[1]: https://events.linuxfoundation.org/events/elc-openiot-europe-2018/ diff --git a/sources/tech/20180912 How to turn on an LED with Fedora IoT.md b/sources/tech/20180912 How to turn on an LED with Fedora IoT.md deleted file mode 100644 index 007cfc27ab..0000000000 --- a/sources/tech/20180912 How to turn on an LED with Fedora IoT.md +++ /dev/null @@ -1,201 +0,0 @@ -How to turn on an LED with Fedora IoT -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/08/LED-IoT-816x345.jpg) - -Do you enjoy running Fedora, containers, and have a Raspberry Pi? 
What about using all three together to play with LEDs? This article introduces Fedora IoT and shows you how to install a preview image on a Raspberry Pi. You’ll also learn how to interact with GPIO in order to light up an LED. - -### What is Fedora IoT? - -Fedora IoT is one of the current Fedora Project objectives, with a plan to become a full Fedora Edition. The result will be a system that runs on ARM (aarch64 only at the moment) devices such as the Raspberry Pi, as well as on the x86_64 architecture. - -![][1] - -Fedora IoT is based on OSTree, like [Fedora Silverblue][2] and the former [Atomic Host][3]. - -### Download and install Fedora IoT - -The official Fedora IoT images are coming with the Fedora 29 release. However, in the meantime you can download a [Fedora 28-based image][4] for this experiment. - -You have two options to install the system: either flash the SD card using a dd command, or use a fedora-arm-installer tool. The Fedora Wiki offers more information about [setting up a physical device][5] for IoT. Also, remember that you might need to resize the third partition. - -Once you insert the SD card into the device, you’ll need to complete the installation by creating a user. This step requires either a serial connection, or a HDMI display with a keyboard to interact with the device. - -When the system is installed and ready, the next step is to configure a network connection. Log in to the system with the user you have just created choose one of the following options: - - * If you need to configure your network manually, run a command similar to the following. Remember to use the right addresses for your network: -``` - $ nmcli connection add con-name cable ipv4.addresses \ - 192.168.0.10/24 ipv4.gateway 192.168.0.1 \ - connection.autoconnect true ipv4.dns "8.8.8.8,1.1.1.1" \ - type ethernet ifname eth0 ipv4.method manual - -``` - - * If there’s a DHCP service on your network, run a command like this: - -``` - $ nmcli con add type ethernet con-name cable ifname eth0 -``` - - - - -### **The GPIO interface in Fedora** - -Many tutorials about GPIO on Linux focus on a legacy GPIO sysfis interface. This interface is deprecated, and the upstream Linux kernel community plan to remove it completely, due to security and other issues. - -The Fedora kernel is already compiled without this legacy interface, so there’s no /sys/class/gpio on the system. This tutorial uses a new character device /dev/gpiochipN provided by the upstream kernel. This is the current way of interacting with GPIO. - -To interact with this new device, you need to use a library and a set of command line interface tools. The common command line tools such as echo or cat won’t work with this device. - -You can install the CLI tools by installing the libgpiod-utils package. A corresponding Python library is provided by the python3-libgpiod package. - -### **Creating a container with Podman** - -[Podman][6] is a container runtime with a command line interface similar to Docker. The big advantage of Podman is it doesn’t run any daemon in the background. That’s especially useful for devices with limited resources. Podman also allows you to start containerized services with systemd unit files. Plus, it has many additional features. - -We’ll create a container in these two steps: - - 1. Create a layered image containing the required packages. - 2. Create a new container starting from our image. - - - -First, create a file Dockerfile with the content below. 
This tells podman to build an image based on the latest Fedora image available in the registry. Then it updates the system inside and installs some packages: - -``` -FROM fedora:latest -RUN  dnf -y update -RUN  dnf -y install libgpiod-utils python3-libgpiod - -``` - -You have created a build recipe of a container image based on the latest Fedora with updates, plus packages to interact with GPIO. - -Now, run the following command to build your base image: - -``` -$ sudo podman build --tag fedora:gpiobase -f ./Dockerfile - -``` - -You have just created your custom image with all the bits in place. You can play with this base container images as many times as you want without installing the packages every time you run it. - -### Working with Podman - -To verify the image is present, run the following command: - -``` -$ sudo podman images -REPOSITORY                 TAG        IMAGE ID       CREATED          SIZE -localhost/fedora           gpiobase   67a2b2b93b4b   10 minutes ago  488MB -docker.io/library/fedora   latest     c18042d7fac6   2 days ago     300MB - -``` - -Now, start the container and do some actual experiments. Containers are normally isolated and don’t have an access to the host system, including the GPIO interface. Therefore, you need to mount it inside while starting the container. To do this, use the –device option in the following command: - -``` -$ sudo podman run -it --name gpioexperiment --device=/dev/gpiochip0 localhost/fedora:gpiobase /bin/bash - -``` - -You are now inside the running container. Before you move on, here are some more container commands. For now, exit the container by typing exit or pressing **Ctrl+D**. - -To list the the existing containers, including those not currently running, such as the one you just created, run: - -``` -$ sudo podman container ls -a -CONTAINER ID   IMAGE             COMMAND     CREATED          STATUS                              PORTS   NAMES -64e661d5d4e8   localhost/fedora:gpiobase   /bin/bash 37 seconds ago Exited (0) Less than a second ago           gpioexperiment - -``` - -To create a new container, run this command: - -``` -$ sudo podman run -it --name newexperiment --device=/dev/gpiochip0 localhost/fedora:gpiobase /bin/bash - -``` - -Delete it with the following command: - -``` -$ sudo podman rm newexperiment - -``` - -### **Turn on an LED** - -Now you can use the container you already created. If you exited from the container, start it again with this command: - -``` -$ sudo podman start -ia gpioexperiment - -``` - -As already discussed, you can use the CLI tools provided by the libgpiod-utils package in Fedora. To list the available GPIO chips, run: - -``` -$ gpiodetect -gpiochip0 [pinctrl-bcm2835] (54 lines) - -``` - -To get the list of the lines exposed by a specific chip, run: - -``` -$ gpioinfo gpiochip0 - -``` - -Notice there’s no correlation between the number of physical pins and the number of lines printed by the previous command. What’s important is the BCM number, as shown on [pinout.xyz][7]. It is not advised to play with the lines that don’t have a corresponding BCM number. - -Now, connect an LED to the physical pin 40, that is BCM 21. Remember: the shorter leg of the LED (the negative leg, called the cathode) must be connected to a GND pin of the Raspberry Pi with a 330 ohm resistor, and the long leg (the anode) to the physical pin 40. - -To turn the LED on, run the following command. 
It will stay on until you press **Ctrl+C** : - -``` -$ gpioset --mode=wait gpiochip0 21=1 - -``` - -To light it up for a certain period of time, add the -b (run in the background) and -s NUM (how many seconds) parameters, as shown below. For example, to light the LED for 5 seconds, run: - -``` -$ gpioset -b -s 5 --mode=time gpiochip0 21=1 - -``` - -Another useful command is gpioget. It gets the status of a pin (high or low), and can be useful to detect buttons and switches. - -![Closeup of LED connection with GPIO][8] - -### **Conclusion** - -You can also play with LEDs using Python — [there are some examples here][9]. And you can also use the i2c devices inside the container as well. In addition, Podman is not strictly related to this Fedora edition. You can install it on any existing Fedora Edition, or try it on the two new OSTree-based systems in Fedora: [Fedora Silverblue][2] and [Fedora CoreOS][10]. - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/turnon-led-fedora-iot/ - -作者:[Alessio Ciregia][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://alciregi.id.fedoraproject.org/ -[1]: https://fedoramagazine.org/wp-content/uploads/2018/08/oled-1024x768.png -[2]: https://teamsilverblue.org/ -[3]: https://www.projectatomic.io/ -[4]: https://kojipkgs.fedoraproject.org/compose/iot/latest-Fedora-IoT-28/compose/IoT/ -[5]: https://fedoraproject.org/wiki/InternetOfThings/GettingStarted#Setting_up_a_Physical_Device -[6]: https://github.com/containers/libpod -[7]: https://pinout.xyz/ -[8]: https://fedoramagazine.org/wp-content/uploads/2018/08/breadboard-1024x768.png -[9]: https://github.com/brgl/libgpiod/tree/master/bindings/python/examples -[10]: https://coreos.fedoraproject.org/ diff --git a/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md b/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md deleted file mode 100644 index 365b5eb5f8..0000000000 --- a/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md +++ /dev/null @@ -1,616 +0,0 @@ -Lab 1: PC Bootstrap and GCC Calling Conventions -====== -### Lab 1: Booting a PC - -#### Introduction - -This lab is split into three parts. The first part concentrates on getting familiarized with x86 assembly language, the QEMU x86 emulator, and the PC's power-on bootstrap procedure. The second part examines the boot loader for our 6.828 kernel, which resides in the `boot` directory of the `lab` tree. Finally, the third part delves into the initial template for our 6.828 kernel itself, named JOS, which resides in the `kernel` directory. - -##### Software Setup - -The files you will need for this and subsequent lab assignments in this course are distributed using the [Git][1] version control system. To learn more about Git, take a look at the [Git user's manual][2], or, if you are already familiar with other version control systems, you may find this [CS-oriented overview of Git][3] useful. - -The URL for the course Git repository is . To install the files in your Athena account, you need to _clone_ the course repository, by running the commands below. You must use an x86 Athena machine; that is, `uname -a` should mention `i386 GNU/Linux` or `i686 GNU/Linux` or `x86_64 GNU/Linux`. 
You can log into a public Athena host with `ssh -X athena.dialup.mit.edu`. - -``` -athena% mkdir ~/6.828 -athena% cd ~/6.828 -athena% add git -athena% git clone https://pdos.csail.mit.edu/6.828/2018/jos.git lab -Cloning into lab... -athena% cd lab -athena% - -``` - -Git allows you to keep track of the changes you make to the code. For example, if you are finished with one of the exercises, and want to checkpoint your progress, you can _commit_ your changes by running: - -``` -athena% git commit -am 'my solution for lab1 exercise 9' -Created commit 60d2135: my solution for lab1 exercise 9 - 1 files changed, 1 insertions(+), 0 deletions(-) -athena% - -``` - -You can keep track of your changes by using the git diff command. Running git diff will display the changes to your code since your last commit, and git diff origin/lab1 will display the changes relative to the initial code supplied for this lab. Here, `origin/lab1` is the name of the git branch with the initial code you downloaded from our server for this assignment. - -We have set up the appropriate compilers and simulators for you on Athena. To use them, run add -f 6.828. You must run this command every time you log in (or add it to your `~/.environment` file). If you get obscure errors while compiling or running `qemu`, double check that you added the course locker. - -If you are working on a non-Athena machine, you'll need to install `qemu` and possibly `gcc` following the directions on the [tools page][4]. We've made several useful debugging changes to `qemu` and some of the later labs depend on these patches, so you must build your own. If your machine uses a native ELF toolchain (such as Linux and most BSD's, but notably _not_ OS X), you can simply install `gcc` from your package manager. Otherwise, follow the directions on the tools page. - -##### Hand-In Procedure - -You will turn in your assignments using the [submission website][5]. You need to request an API key from the submission website before you can turn in any assignments or labs. - -The lab code comes with GNU Make rules to make submission easier. After committing your final changes to the lab, type make handin to submit your lab. - -``` -athena% git commit -am "ready to submit my lab" -[lab1 c2e3c8b] ready to submit my lab - 2 files changed, 18 insertions(+), 2 deletions(-) - -athena% make handin -git archive --prefix=lab1/ --format=tar HEAD | gzip > lab1-handin.tar.gz -Get an API key for yourself by visiting https://6828.scripts.mit.edu/2018/handin.py/ -Please enter your API key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX - % Total % Received % Xferd Average Speed Time Time Time Current - Dload Upload Total Spent Left Speed -100 50199 100 241 100 49958 414 85824 --:--:-- --:--:-- --:--:-- 85986 -athena% - -``` - -make handin will store your API key in _myapi.key_. If you need to change your API key, just remove this file and let make handin generate it again ( _myapi.key_ must not include newline characters). - -If use make handin and you have either uncomitted changes or untracked files, you will see output similar to the following: - -``` - M hello.c -?? bar.c -?? foo.pyc -Untracked files will not be handed in. Continue? [y/N] - -``` - -Inspect the above lines and make sure all files that your lab solution needs are tracked i.e. not listed in a line that begins with ??. - -In the case that make handin does not work properly, try fixing the problem with the curl or Git commands. Or you can run make tarball. 
This will make a tar file for you, which you can then upload via our [web interface][5]. - -You can run make grade to test your solutions with the grading program. The [web interface][5] uses the same grading program to assign your lab submission a grade. You should check the output of the grader (it may take a few minutes since the grader runs periodically) and ensure that you received the grade which you expected. If the grades don't match, your lab submission probably has a bug -- check the output of the grader (resp-lab*.txt) to see which particular test failed. - -For Lab 1, you do not need to turn in answers to any of the questions below. (Do answer them for yourself though! They will help with the rest of the lab.) - -#### Part 1: PC Bootstrap - -The purpose of the first exercise is to introduce you to x86 assembly language and the PC bootstrap process, and to get you started with QEMU and QEMU/GDB debugging. You will not have to write any code for this part of the lab, but you should go through it anyway for your own understanding and be prepared to answer the questions posed below. - -##### Getting Started with x86 assembly - -If you are not already familiar with x86 assembly language, you will quickly become familiar with it during this course! The [PC Assembly Language Book][6] is an excellent place to start. Hopefully, the book contains mixture of new and old material for you. - -_Warning:_ Unfortunately the examples in the book are written for the NASM assembler, whereas we will be using the GNU assembler. NASM uses the so-called _Intel_ syntax while GNU uses the _AT &T_ syntax. While semantically equivalent, an assembly file will differ quite a lot, at least superficially, depending on which syntax is used. Luckily the conversion between the two is pretty simple, and is covered in [Brennan's Guide to Inline Assembly][7]. - -Exercise 1. Familiarize yourself with the assembly language materials available on [the 6.828 reference page][8]. You don't have to read them now, but you'll almost certainly want to refer to some of this material when reading and writing x86 assembly. - -We do recommend reading the section "The Syntax" in [Brennan's Guide to Inline Assembly][7]. It gives a good (and quite brief) description of the AT&T assembly syntax we'll be using with the GNU assembler in JOS. - -Certainly the definitive reference for x86 assembly language programming is Intel's instruction set architecture reference, which you can find on [the 6.828 reference page][8] in two flavors: an HTML edition of the old [80386 Programmer's Reference Manual][9], which is much shorter and easier to navigate than more recent manuals but describes all of the x86 processor features that we will make use of in 6.828; and the full, latest and greatest [IA-32 Intel Architecture Software Developer's Manuals][10] from Intel, covering all the features of the most recent processors that we won't need in class but you may be interested in learning about. An equivalent (and often friendlier) set of manuals is [available from AMD][11]. Save the Intel/AMD architecture manuals for later or use them for reference when you want to look up the definitive explanation of a particular processor feature or instruction. - -##### Simulating the x86 - -Instead of developing the operating system on a real, physical personal computer (PC), we use a program that faithfully emulates a complete PC: the code you write for the emulator will boot on a real PC too. 
Using an emulator simplifies debugging; you can, for example, set break points inside of the emulated x86, which is difficult to do with the silicon version of an x86. - -In 6.828 we will use the [QEMU Emulator][12], a modern and relatively fast emulator. While QEMU's built-in monitor provides only limited debugging support, QEMU can act as a remote debugging target for the [GNU debugger][13] (GDB), which we'll use in this lab to step through the early boot process. - -To get started, extract the Lab 1 files into your own directory on Athena as described above in "Software Setup", then type make (or gmake on BSD systems) in the `lab` directory to build the minimal 6.828 boot loader and kernel you will start with. (It's a little generous to call the code we're running here a "kernel," but we'll flesh it out throughout the semester.) - -``` -athena% cd lab -athena% make -+ as kern/entry.S -+ cc kern/entrypgdir.c -+ cc kern/init.c -+ cc kern/console.c -+ cc kern/monitor.c -+ cc kern/printf.c -+ cc kern/kdebug.c -+ cc lib/printfmt.c -+ cc lib/readline.c -+ cc lib/string.c -+ ld obj/kern/kernel -+ as boot/boot.S -+ cc -Os boot/main.c -+ ld boot/boot -boot block is 380 bytes (max 510) -+ mk obj/kern/kernel.img - -``` - -(If you get errors like "undefined reference to `__udivdi3'", you probably don't have the 32-bit gcc multilib. If you're running Debian or Ubuntu, try installing the gcc-multilib package.) - -Now you're ready to run QEMU, supplying the file `obj/kern/kernel.img`, created above, as the contents of the emulated PC's "virtual hard disk." This hard disk image contains both our boot loader (`obj/boot/boot`) and our kernel (`obj/kernel`). - -``` -athena% make qemu - -``` - -or - -``` -athena% make qemu-nox - -``` - -This executes QEMU with the options required to set the hard disk and direct serial port output to the terminal. Some text should appear in the QEMU window: - -``` -Booting from Hard Disk... -6828 decimal is XXX octal! -entering test_backtrace 5 -entering test_backtrace 4 -entering test_backtrace 3 -entering test_backtrace 2 -entering test_backtrace 1 -entering test_backtrace 0 -leaving test_backtrace 0 -leaving test_backtrace 1 -leaving test_backtrace 2 -leaving test_backtrace 3 -leaving test_backtrace 4 -leaving test_backtrace 5 -Welcome to the JOS kernel monitor! -Type 'help' for a list of commands. -K> - -``` - -Everything after '`Booting from Hard Disk...`' was printed by our skeletal JOS kernel; the `K>` is the prompt printed by the small _monitor_ , or interactive control program, that we've included in the kernel. If you used make qemu, these lines printed by the kernel will appear in both the regular shell window from which you ran QEMU and the QEMU display window. This is because for testing and lab grading purposes we have set up the JOS kernel to write its console output not only to the virtual VGA display (as seen in the QEMU window), but also to the simulated PC's virtual serial port, which QEMU in turn outputs to its own standard output. Likewise, the JOS kernel will take input from both the keyboard and the serial port, so you can give it commands in either the VGA display window or the terminal running QEMU. Alternatively, you can use the serial console without the virtual VGA by running make qemu-nox. This may be convenient if you are SSH'd into an Athena dialup. To quit qemu, type Ctrl+a x. - -There are only two commands you can give to the kernel monitor, `help` and `kerninfo`. 
- -``` -K> help -help - display this list of commands -kerninfo - display information about the kernel -K> kerninfo -Special kernel symbols: - entry f010000c (virt) 0010000c (phys) - etext f0101a75 (virt) 00101a75 (phys) - edata f0112300 (virt) 00112300 (phys) - end f0112960 (virt) 00112960 (phys) -Kernel executable memory footprint: 75KB -K> - -``` - -The `help` command is obvious, and we will shortly discuss the meaning of what the `kerninfo` command prints. Although simple, it's important to note that this kernel monitor is running "directly" on the "raw (virtual) hardware" of the simulated PC. This means that you should be able to copy the contents of `obj/kern/kernel.img` onto the first few sectors of a _real_ hard disk, insert that hard disk into a real PC, turn it on, and see exactly the same thing on the PC's real screen as you did above in the QEMU window. (We don't recommend you do this on a real machine with useful information on its hard disk, though, because copying `kernel.img` onto the beginning of its hard disk will trash the master boot record and the beginning of the first partition, effectively causing everything previously on the hard disk to be lost!) - -##### The PC's Physical Address Space - -We will now dive into a bit more detail about how a PC starts up. A PC's physical address space is hard-wired to have the following general layout: - -``` -+------------------+ <- 0xFFFFFFFF (4GB) -| 32-bit | -| memory mapped | -| devices | -| | -/\/\/\/\/\/\/\/\/\/\ - -/\/\/\/\/\/\/\/\/\/\ -| | -| Unused | -| | -+------------------+ <- depends on amount of RAM -| | -| | -| Extended Memory | -| | -| | -+------------------+ <- 0x00100000 (1MB) -| BIOS ROM | -+------------------+ <- 0x000F0000 (960KB) -| 16-bit devices, | -| expansion ROMs | -+------------------+ <- 0x000C0000 (768KB) -| VGA Display | -+------------------+ <- 0x000A0000 (640KB) -| | -| Low Memory | -| | -+------------------+ <- 0x00000000 - -``` - -The first PCs, which were based on the 16-bit Intel 8088 processor, were only capable of addressing 1MB of physical memory. The physical address space of an early PC would therefore start at 0x00000000 but end at 0x000FFFFF instead of 0xFFFFFFFF. The 640KB area marked "Low Memory" was the _only_ random-access memory (RAM) that an early PC could use; in fact the very earliest PCs only could be configured with 16KB, 32KB, or 64KB of RAM! - -The 384KB area from 0x000A0000 through 0x000FFFFF was reserved by the hardware for special uses such as video display buffers and firmware held in non-volatile memory. The most important part of this reserved area is the Basic Input/Output System (BIOS), which occupies the 64KB region from 0x000F0000 through 0x000FFFFF. In early PCs the BIOS was held in true read-only memory (ROM), but current PCs store the BIOS in updateable flash memory. The BIOS is responsible for performing basic system initialization such as activating the video card and checking the amount of memory installed. After performing this initialization, the BIOS loads the operating system from some appropriate location such as floppy disk, hard disk, CD-ROM, or the network, and passes control of the machine to the operating system. - -When Intel finally "broke the one megabyte barrier" with the 80286 and 80386 processors, which supported 16MB and 4GB physical address spaces respectively, the PC architects nevertheless preserved the original layout for the low 1MB of physical address space in order to ensure backward compatibility with existing software. 
Modern PCs therefore have a "hole" in physical memory from 0x000A0000 to 0x00100000, dividing RAM into "low" or "conventional memory" (the first 640KB) and "extended memory" (everything else). In addition, some space at the very top of the PC's 32-bit physical address space, above all physical RAM, is now commonly reserved by the BIOS for use by 32-bit PCI devices. - -Recent x86 processors can support _more_ than 4GB of physical RAM, so RAM can extend further above 0xFFFFFFFF. In this case the BIOS must arrange to leave a _second_ hole in the system's RAM at the top of the 32-bit addressable region, to leave room for these 32-bit devices to be mapped. Because of design limitations JOS will use only the first 256MB of a PC's physical memory anyway, so for now we will pretend that all PCs have "only" a 32-bit physical address space. But dealing with complicated physical address spaces and other aspects of hardware organization that evolved over many years is one of the important practical challenges of OS development. - -##### The ROM BIOS - -In this portion of the lab, you'll use QEMU's debugging facilities to investigate how an IA-32 compatible computer boots. - -Open two terminal windows and cd both shells into your lab directory. In one, enter make qemu-gdb (or make qemu-nox-gdb). This starts up QEMU, but QEMU stops just before the processor executes the first instruction and waits for a debugging connection from GDB. In the second terminal, from the same directory you ran `make`, run make gdb. You should see something like this, - -``` -athena% make gdb -GNU gdb (GDB) 6.8-debian -Copyright (C) 2008 Free Software Foundation, Inc. -License GPLv3+: GNU GPL version 3 or later -This is free software: you are free to change and redistribute it. -There is NO WARRANTY, to the extent permitted by law. Type "show copying" -and "show warranty" for details. -This GDB was configured as "i486-linux-gnu". -+ target remote localhost:26000 -The target architecture is assumed to be i8086 -[f000:fff0] 0xffff0: ljmp $0xf000,$0xe05b -0x0000fff0 in ?? () -+ symbol-file obj/kern/kernel -(gdb) - -``` - -We provided a `.gdbinit` file that set up GDB to debug the 16-bit code used during early boot and directed it to attach to the listening QEMU. (If it doesn't work, you may have to add an `add-auto-load-safe-path` in your `.gdbinit` in your home directory to convince `gdb` to process the `.gdbinit` we provided. `gdb` will tell you if you have to do this.) - -The following line: - -``` -[f000:fff0] 0xffff0: ljmp $0xf000,$0xe05b - -``` - -is GDB's disassembly of the first instruction to be executed. From this output you can conclude a few things: - - * The IBM PC starts executing at physical address 0x000ffff0, which is at the very top of the 64KB area reserved for the ROM BIOS. - * The PC starts executing with `CS = 0xf000` and `IP = 0xfff0`. - * The first instruction to be executed is a `jmp` instruction, which jumps to the segmented address `CS = 0xf000` and `IP = 0xe05b`. - - - -Why does QEMU start like this? This is how Intel designed the 8088 processor, which IBM used in their original PC. Because the BIOS in a PC is "hard-wired" to the physical address range 0x000f0000-0x000fffff, this design ensures that the BIOS always gets control of the machine first after power-up or any system restart - which is crucial because on power-up there _is_ no other software anywhere in the machine's RAM that the processor could execute. 
The QEMU emulator comes with its own BIOS, which it places at this location in the processor's simulated physical address space. On processor reset, the (simulated) processor enters real mode and sets CS to 0xf000 and the IP to 0xfff0, so that execution begins at that (CS:IP) segment address. How does the segmented address 0xf000:fff0 turn into a physical address?

To answer that we need to know a bit about real mode addressing. In real mode (the mode that PC starts off in), address translation works according to the formula: _physical address_ = 16 * _segment_ \+ _offset_. So, when the PC sets CS to 0xf000 and IP to 0xfff0, the physical address referenced is:

```
 16 * 0xf000 + 0xfff0 # in hex multiplication by 16 is
 = 0xf0000 + 0xfff0 # easy--just append a 0.
 = 0xffff0

```

`0xffff0` is 16 bytes before the end of the BIOS (`0x100000`). Therefore we shouldn't be surprised that the first thing that the BIOS does is `jmp` backwards to an earlier location in the BIOS; after all how much could it accomplish in just 16 bytes?

Exercise 2. Use GDB's si (Step Instruction) command to trace into the ROM BIOS for a few more instructions, and try to guess what it might be doing. You might want to look at [Phil Storrs I/O Ports Description][14], as well as other materials on the [6.828 reference materials page][8]. No need to figure out all the details - just the general idea of what the BIOS is doing first.

When the BIOS runs, it sets up an interrupt descriptor table and initializes various devices such as the VGA display. This is where the "`Starting SeaBIOS`" message you see in the QEMU window comes from.

After initializing the PCI bus and all the important devices the BIOS knows about, it searches for a bootable device such as a floppy, hard drive, or CD-ROM. Eventually, when it finds a bootable disk, the BIOS reads the _boot loader_ from the disk and transfers control to it.

#### Part 2: The Boot Loader

Floppy and hard disks for PCs are divided into 512 byte regions called _sectors_. A sector is the disk's minimum transfer granularity: each read or write operation must be one or more sectors in size and aligned on a sector boundary. If the disk is bootable, the first sector is called the _boot sector_ , since this is where the boot loader code resides. When the BIOS finds a bootable floppy or hard disk, it loads the 512-byte boot sector into memory at physical addresses 0x7c00 through 0x7dff, and then uses a `jmp` instruction to set the CS:IP to `0000:7c00`, passing control to the boot loader. Like the BIOS load address, these addresses are fairly arbitrary - but they are fixed and standardized for PCs.

The ability to boot from a CD-ROM came much later during the evolution of the PC, and as a result the PC architects took the opportunity to rethink the boot process slightly. As a result, the way a modern BIOS boots from a CD-ROM is a bit more complicated (and more powerful). CD-ROMs use a sector size of 2048 bytes instead of 512, and the BIOS can load a much larger boot image from the disk into memory (not just one sector) before transferring control to it. For more information, see the ["El Torito" Bootable CD-ROM Format Specification][15].

For 6.828, however, we will use the conventional hard drive boot mechanism, which means that our boot loader must fit into a measly 512 bytes.
The boot loader consists of one assembly language source file, `boot/boot.S`, and one C source file, `boot/main.c` Look through these source files carefully and make sure you understand what's going on. The boot loader must perform two main functions: - - 1. First, the boot loader switches the processor from real mode to _32-bit protected mode_ , because it is only in this mode that software can access all the memory above 1MB in the processor's physical address space. Protected mode is described briefly in sections 1.2.7 and 1.2.8 of [PC Assembly Language][6], and in great detail in the Intel architecture manuals. At this point you only have to understand that translation of segmented addresses (segment:offset pairs) into physical addresses happens differently in protected mode, and that after the transition offsets are 32 bits instead of 16. - 2. Second, the boot loader reads the kernel from the hard disk by directly accessing the IDE disk device registers via the x86's special I/O instructions. If you would like to understand better what the particular I/O instructions here mean, check out the "IDE hard drive controller" section on [the 6.828 reference page][8]. You will not need to learn much about programming specific devices in this class: writing device drivers is in practice a very important part of OS development, but from a conceptual or architectural viewpoint it is also one of the least interesting. - - - -After you understand the boot loader source code, look at the file `obj/boot/boot.asm`. This file is a disassembly of the boot loader that our GNUmakefile creates _after_ compiling the boot loader. This disassembly file makes it easy to see exactly where in physical memory all of the boot loader's code resides, and makes it easier to track what's happening while stepping through the boot loader in GDB. Likewise, `obj/kern/kernel.asm` contains a disassembly of the JOS kernel, which can often be useful for debugging. - -You can set address breakpoints in GDB with the `b` command. For example, b *0x7c00 sets a breakpoint at address 0x7C00. Once at a breakpoint, you can continue execution using the c and si commands: c causes QEMU to continue execution until the next breakpoint (or until you press Ctrl-C in GDB), and si _N_ steps through the instructions _`N`_ at a time. - -To examine instructions in memory (besides the immediate next one to be executed, which GDB prints automatically), you use the x/i command. This command has the syntax x/ _N_ i _ADDR_ , where _N_ is the number of consecutive instructions to disassemble and _ADDR_ is the memory address at which to start disassembling. - -Exercise 3. Take a look at the [lab tools guide][16], especially the section on GDB commands. Even if you're familiar with GDB, this includes some esoteric GDB commands that are useful for OS work. - -Set a breakpoint at address 0x7c00, which is where the boot sector will be loaded. Continue execution until that breakpoint. Trace through the code in `boot/boot.S`, using the source code and the disassembly file `obj/boot/boot.asm` to keep track of where you are. Also use the `x/i` command in GDB to disassemble sequences of instructions in the boot loader, and compare the original boot loader source code with both the disassembly in `obj/boot/boot.asm` and GDB. - -Trace into `bootmain()` in `boot/main.c`, and then into `readsect()`. Identify the exact assembly instructions that correspond to each of the statements in `readsect()`. 
Trace through the rest of `readsect()` and back out into `bootmain()`, and identify the begin and end of the `for` loop that reads the remaining sectors of the kernel from the disk. Find out what code will run when the loop is finished, set a breakpoint there, and continue to that breakpoint. Then step through the remainder of the boot loader. - -Be able to answer the following questions: - - * At what point does the processor start executing 32-bit code? What exactly causes the switch from 16- to 32-bit mode? - * What is the _last_ instruction of the boot loader executed, and what is the _first_ instruction of the kernel it just loaded? - * _Where_ is the first instruction of the kernel? - * How does the boot loader decide how many sectors it must read in order to fetch the entire kernel from disk? Where does it find this information? - - - -##### Loading the Kernel - -We will now look in further detail at the C language portion of the boot loader, in `boot/main.c`. But before doing so, this is a good time to stop and review some of the basics of C programming. - -Exercise 4. Read about programming with pointers in C. The best reference for the C language is _The C Programming Language_ by Brian Kernighan and Dennis Ritchie (known as 'K &R'). We recommend that students purchase this book (here is an [Amazon Link][17]) or find one of [MIT's 7 copies][18]. - -Read 5.1 (Pointers and Addresses) through 5.5 (Character Pointers and Functions) in K&R. Then download the code for [pointers.c][19], run it, and make sure you understand where all of the printed values come from. In particular, make sure you understand where the pointer addresses in printed lines 1 and 6 come from, how all the values in printed lines 2 through 4 get there, and why the values printed in line 5 are seemingly corrupted. - -There are other references on pointers in C (e.g., [A tutorial by Ted Jensen][20] that cites K&R heavily), though not as strongly recommended. - -_Warning:_ Unless you are already thoroughly versed in C, do not skip or even skim this reading exercise. If you do not really understand pointers in C, you will suffer untold pain and misery in subsequent labs, and then eventually come to understand them the hard way. Trust us; you don't want to find out what "the hard way" is. - -To make sense out of `boot/main.c` you'll need to know what an ELF binary is. When you compile and link a C program such as the JOS kernel, the compiler transforms each C source ('`.c`') file into an _object_ ('`.o`') file containing assembly language instructions encoded in the binary format expected by the hardware. The linker then combines all of the compiled object files into a single _binary image_ such as `obj/kern/kernel`, which in this case is a binary in the ELF format, which stands for "Executable and Linkable Format". - -Full information about this format is available in [the ELF specification][21] on [our reference page][8], but you will not need to delve very deeply into the details of this format in this class. Although as a whole the format is quite powerful and complex, most of the complex parts are for supporting dynamic loading of shared libraries, which we will not do in this class. The [Wikipedia page][22] has a short description. - -For purposes of 6.828, you can consider an ELF executable to be a header with loading information, followed by several _program sections_ , each of which is a contiguous chunk of code or data intended to be loaded into memory at a specified address. 
The boot loader does not modify the code or data; it loads it into memory and starts executing it. - -An ELF binary starts with a fixed-length _ELF header_ , followed by a variable-length _program header_ listing each of the program sections to be loaded. The C definitions for these ELF headers are in `inc/elf.h`. The program sections we're interested in are: - - * `.text`: The program's executable instructions. - * `.rodata`: Read-only data, such as ASCII string constants produced by the C compiler. (We will not bother setting up the hardware to prohibit writing, however.) - * `.data`: The data section holds the program's initialized data, such as global variables declared with initializers like `int x = 5;`. - - - -When the linker computes the memory layout of a program, it reserves space for _uninitialized_ global variables, such as `int x;`, in a section called `.bss` that immediately follows `.data` in memory. C requires that "uninitialized" global variables start with a value of zero. Thus there is no need to store contents for `.bss` in the ELF binary; instead, the linker records just the address and size of the `.bss` section. The loader or the program itself must arrange to zero the `.bss` section. - -Examine the full list of the names, sizes, and link addresses of all the sections in the kernel executable by typing: - -``` -athena% objdump -h obj/kern/kernel - -(If you compiled your own toolchain, you may need to use i386-jos-elf-objdump) - -``` - -You will see many more sections than the ones we listed above, but the others are not important for our purposes. Most of the others are to hold debugging information, which is typically included in the program's executable file but not loaded into memory by the program loader. - -Take particular note of the "VMA" (or _link address_ ) and the "LMA" (or _load address_ ) of the `.text` section. The load address of a section is the memory address at which that section should be loaded into memory. - -The link address of a section is the memory address from which the section expects to execute. The linker encodes the link address in the binary in various ways, such as when the code needs the address of a global variable, with the result that a binary usually won't work if it is executing from an address that it is not linked for. (It is possible to generate _position-independent_ code that does not contain any such absolute addresses. This is used extensively by modern shared libraries, but it has performance and complexity costs, so we won't be using it in 6.828.) - -Typically, the link and load addresses are the same. For example, look at the `.text` section of the boot loader: - -``` -athena% objdump -h obj/boot/boot.out - -``` - -The boot loader uses the ELF _program headers_ to decide how to load the sections. The program headers specify which parts of the ELF object to load into memory and the destination address each should occupy. You can inspect the program headers by typing: - -``` -athena% objdump -x obj/kern/kernel - -``` - -The program headers are then listed under "Program Headers" in the output of objdump. The areas of the ELF object that need to be loaded into memory are those that are marked as "LOAD". Other information for each program header is given, such as the virtual address ("vaddr"), the physical address ("paddr"), and the size of the loaded area ("memsz" and "filesz"). 
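
If you would like to poke at the same information programmatically rather than through objdump, the small host-side program below reads a 32-bit ELF file and prints its entry point and program headers. It is not part of the lab tree (the file name `ehdump.c` is made up here for illustration), and it assumes a Linux-style `<elf.h>`; it is only a convenience for inspecting binaries such as `obj/kern/kernel` from your development machine.

```
/* ehdump.c - hypothetical helper, not part of the lab tree.  Prints the
 * entry point and program headers of a 32-bit ELF file, i.e. roughly the
 * information objdump -x lists under "Program Headers". */
#include <elf.h>
#include <stdio.h>

int main(int argc, char **argv)
{
        if (argc != 2) {
                fprintf(stderr, "usage: %s elf-file\n", argv[0]);
                return 1;
        }

        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror(argv[1]); return 1; }

        Elf32_Ehdr eh;
        if (fread(&eh, sizeof eh, 1, f) != 1) {
                fprintf(stderr, "short read\n");
                return 1;
        }
        printf("entry point: 0x%x\n", (unsigned) eh.e_entry);

        /* The program header table lives e_phoff bytes into the file. */
        fseek(f, eh.e_phoff, SEEK_SET);
        for (int i = 0; i < eh.e_phnum; i++) {
                Elf32_Phdr ph;
                if (fread(&ph, sizeof ph, 1, f) != 1)
                        break;
                printf("phdr %d: type %u offset 0x%06x vaddr 0x%08x paddr 0x%08x"
                       " filesz 0x%05x memsz 0x%05x\n",
                       i, (unsigned) ph.p_type, (unsigned) ph.p_offset,
                       (unsigned) ph.p_vaddr, (unsigned) ph.p_paddr,
                       (unsigned) ph.p_filesz, (unsigned) ph.p_memsz);
        }
        fclose(f);
        return 0;
}
```

Compiled with gcc and pointed at `obj/kern/kernel`, it should show a vaddr near 0xf0100000 next to a paddr near 0x00100000 for the `.text` segment, matching the VMA/LMA distinction above.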
- -Back in boot/main.c, the `ph->p_pa` field of each program header contains the segment's destination physical address (in this case, it really is a physical address, though the ELF specification is vague on the actual meaning of this field). - -The BIOS loads the boot sector into memory starting at address 0x7c00, so this is the boot sector's load address. This is also where the boot sector executes from, so this is also its link address. We set the link address by passing `-Ttext 0x7C00` to the linker in `boot/Makefrag`, so the linker will produce the correct memory addresses in the generated code. - -Exercise 5. Trace through the first few instructions of the boot loader again and identify the first instruction that would "break" or otherwise do the wrong thing if you were to get the boot loader's link address wrong. Then change the link address in `boot/Makefrag` to something wrong, run make clean, recompile the lab with make, and trace into the boot loader again to see what happens. Don't forget to change the link address back and make clean again afterward! - -Look back at the load and link addresses for the kernel. Unlike the boot loader, these two addresses aren't the same: the kernel is telling the boot loader to load it into memory at a low address (1 megabyte), but it expects to execute from a high address. We'll dig in to how we make this work in the next section. - -Besides the section information, there is one more field in the ELF header that is important to us, named `e_entry`. This field holds the link address of the _entry point_ in the program: the memory address in the program's text section at which the program should begin executing. You can see the entry point: - -``` -athena% objdump -f obj/kern/kernel - -``` - -You should now be able to understand the minimal ELF loader in `boot/main.c`. It reads each section of the kernel from disk into memory at the section's load address and then jumps to the kernel's entry point. - -Exercise 6. We can examine memory using GDB's x command. The [GDB manual][23] has full details, but for now, it is enough to know that the command x/ _N_ x _ADDR_ prints _`N`_ words of memory at _`ADDR`_. (Note that both '`x`'s in the command are lowercase.) _Warning_ : The size of a word is not a universal standard. In GNU assembly, a word is two bytes (the 'w' in xorw, which stands for word, means 2 bytes). - -Reset the machine (exit QEMU/GDB and start them again). Examine the 8 words of memory at 0x00100000 at the point the BIOS enters the boot loader, and then again at the point the boot loader enters the kernel. Why are they different? What is there at the second breakpoint? (You do not really need to use QEMU to answer this question. Just think.) - -#### Part 3: The Kernel - -We will now start to examine the minimal JOS kernel in a bit more detail. (And you will finally get to write some code!). Like the boot loader, the kernel begins with some assembly language code that sets things up so that C language code can execute properly. - -##### Using virtual memory to work around position dependence - -When you inspected the boot loader's link and load addresses above, they matched perfectly, but there was a (rather large) disparity between the _kernel's_ link address (as printed by objdump) and its load address. Go back and check both and make sure you can see what we're talking about. (Linking the kernel is more complicated than the boot loader, so the link and load addresses are at the top of `kern/kernel.ld`.) 
- -Operating system kernels often like to be linked and run at very high _virtual address_ , such as 0xf0100000, in order to leave the lower part of the processor's virtual address space for user programs to use. The reason for this arrangement will become clearer in the next lab. - -Many machines don't have any physical memory at address 0xf0100000, so we can't count on being able to store the kernel there. Instead, we will use the processor's memory management hardware to map virtual address 0xf0100000 (the link address at which the kernel code _expects_ to run) to physical address 0x00100000 (where the boot loader loaded the kernel into physical memory). This way, although the kernel's virtual address is high enough to leave plenty of address space for user processes, it will be loaded in physical memory at the 1MB point in the PC's RAM, just above the BIOS ROM. This approach requires that the PC have at least a few megabytes of physical memory (so that physical address 0x00100000 works), but this is likely to be true of any PC built after about 1990. - -In fact, in the next lab, we will map the _entire_ bottom 256MB of the PC's physical address space, from physical addresses 0x00000000 through 0x0fffffff, to virtual addresses 0xf0000000 through 0xffffffff respectively. You should now see why JOS can only use the first 256MB of physical memory. - -For now, we'll just map the first 4MB of physical memory, which will be enough to get us up and running. We do this using the hand-written, statically-initialized page directory and page table in `kern/entrypgdir.c`. For now, you don't have to understand the details of how this works, just the effect that it accomplishes. Up until `kern/entry.S` sets the `CR0_PG` flag, memory references are treated as physical addresses (strictly speaking, they're linear addresses, but boot/boot.S set up an identity mapping from linear addresses to physical addresses and we're never going to change that). Once `CR0_PG` is set, memory references are virtual addresses that get translated by the virtual memory hardware to physical addresses. `entry_pgdir` translates virtual addresses in the range 0xf0000000 through 0xf0400000 to physical addresses 0x00000000 through 0x00400000, as well as virtual addresses 0x00000000 through 0x00400000 to physical addresses 0x00000000 through 0x00400000. Any virtual address that is not in one of these two ranges will cause a hardware exception which, since we haven't set up interrupt handling yet, will cause QEMU to dump the machine state and exit (or endlessly reboot if you aren't using the 6.828-patched version of QEMU). - -Exercise 7. Use QEMU and GDB to trace into the JOS kernel and stop at the `movl %eax, %cr0`. Examine memory at 0x00100000 and at 0xf0100000. Now, single step over that instruction using the stepi GDB command. Again, examine memory at 0x00100000 and at 0xf0100000. Make sure you understand what just happened. - -What is the first instruction _after_ the new mapping is established that would fail to work properly if the mapping weren't in place? Comment out the `movl %eax, %cr0` in `kern/entry.S`, trace into it, and see if you were right. - -##### Formatted Printing to the Console - -Most people take functions like `printf()` for granted, sometimes even thinking of them as "primitives" of the C language. But in an OS kernel, we have to implement all I/O ourselves. - -Read through `kern/printf.c`, `lib/printfmt.c`, and `kern/console.c`, and make sure you understand their relationship. 
It will become clear in later labs why `printfmt.c` is located in the separate `lib` directory.

Exercise 8. We have omitted a small fragment of code - the code necessary to print octal numbers using patterns of the form "%o". Find and fill in this code fragment.

Be able to answer the following questions:

  1. Explain the interface between `printf.c` and `console.c`. Specifically, what function does `console.c` export? How is this function used by `printf.c`?

  2. Explain the following from `console.c`:
```
	if (crt_pos >= CRT_SIZE) {
		int i;
		memmove(crt_buf, crt_buf + CRT_COLS, (CRT_SIZE - CRT_COLS) * sizeof(uint16_t));
		for (i = CRT_SIZE - CRT_COLS; i < CRT_SIZE; i++)
			crt_buf[i] = 0x0700 | ' ';
		crt_pos -= CRT_COLS;
	}

```

  3. For the following questions you might wish to consult the notes for Lecture 2. These notes cover GCC's calling convention on the x86.

Trace the execution of the following code step-by-step:
```
	int x = 1, y = 3, z = 4;
	cprintf("x %d, y %x, z %d\n", x, y, z);

```

  * In the call to `cprintf()`, to what does `fmt` point? To what does `ap` point?
  * List (in order of execution) each call to `cons_putc`, `va_arg`, and `vcprintf`. For `cons_putc`, list its argument as well. For `va_arg`, list what `ap` points to before and after the call. For `vcprintf` list the values of its two arguments.

  4. Run the following code.
```
	unsigned int i = 0x00646c72;
	cprintf("H%x Wo%s", 57616, &i);

```

What is the output? Explain how this output is arrived at in the step-by-step manner of the previous exercise. [Here's an ASCII table][24] that maps bytes to characters.

The output depends on the fact that the x86 is little-endian. If the x86 were instead big-endian, what would you set `i` to in order to yield the same output? Would you need to change `57616` to a different value?

[Here's a description of little- and big-endian][25] and [a more whimsical description][26].

  5. In the following code, what is going to be printed after `'y='`? (note: the answer is not a specific value.) Why does this happen?
```
	cprintf("x=%d y=%d", 3);

```

  6. Let's say that GCC changed its calling convention so that it pushed arguments on the stack in declaration order, so that the last argument is pushed last. How would you have to change `cprintf` or its interface so that it would still be possible to pass it a variable number of arguments?


Challenge! Enhance the console to allow text to be printed in different colors. The traditional way to do this is to make it interpret [ANSI escape sequences][27] embedded in the text strings printed to the console, but you may use any mechanism you like. There is plenty of information on [the 6.828 reference page][8] and elsewhere on the web on programming the VGA display hardware. If you're feeling really adventurous, you could try switching the VGA hardware into a graphics mode and making the console draw text onto the graphical frame buffer.

##### The Stack

In the final exercise of this lab, we will explore in more detail the way the C language uses the stack on the x86, and in the process write a useful new kernel monitor function that prints a _backtrace_ of the stack: a list of the saved Instruction Pointer (IP) values from the nested `call` instructions that led to the current point of execution.

Exercise 9.
Determine where the kernel initializes its stack, and exactly where in memory its stack is located. How does the kernel reserve space for its stack? And at which "end" of this reserved area is the stack pointer initialized to point to? - -The x86 stack pointer (`esp` register) points to the lowest location on the stack that is currently in use. Everything _below_ that location in the region reserved for the stack is free. Pushing a value onto the stack involves decreasing the stack pointer and then writing the value to the place the stack pointer points to. Popping a value from the stack involves reading the value the stack pointer points to and then increasing the stack pointer. In 32-bit mode, the stack can only hold 32-bit values, and esp is always divisible by four. Various x86 instructions, such as `call`, are "hard-wired" to use the stack pointer register. - -The `ebp` (base pointer) register, in contrast, is associated with the stack primarily by software convention. On entry to a C function, the function's _prologue_ code normally saves the previous function's base pointer by pushing it onto the stack, and then copies the current `esp` value into `ebp` for the duration of the function. If all the functions in a program obey this convention, then at any given point during the program's execution, it is possible to trace back through the stack by following the chain of saved `ebp` pointers and determining exactly what nested sequence of function calls caused this particular point in the program to be reached. This capability can be particularly useful, for example, when a particular function causes an `assert` failure or `panic` because bad arguments were passed to it, but you aren't sure _who_ passed the bad arguments. A stack backtrace lets you find the offending function. - -Exercise 10. To become familiar with the C calling conventions on the x86, find the address of the `test_backtrace` function in `obj/kern/kernel.asm`, set a breakpoint there, and examine what happens each time it gets called after the kernel starts. How many 32-bit words does each recursive nesting level of `test_backtrace` push on the stack, and what are those words? - -Note that, for this exercise to work properly, you should be using the patched version of QEMU available on the [tools][4] page or on Athena. Otherwise, you'll have to manually translate all breakpoint and memory addresses to linear addresses. - -The above exercise should give you the information you need to implement a stack backtrace function, which you should call `mon_backtrace()`. A prototype for this function is already waiting for you in `kern/monitor.c`. You can do it entirely in C, but you may find the `read_ebp()` function in `inc/x86.h` useful. You'll also have to hook this new function into the kernel monitor's command list so that it can be invoked interactively by the user. - -The backtrace function should display a listing of function call frames in the following format: - -``` -Stack backtrace: - ebp f0109e58 eip f0100a62 args 00000001 f0109e80 f0109e98 f0100ed2 00000031 - ebp f0109ed8 eip f01000d6 args 00000000 00000000 f0100058 f0109f28 00000061 - ... - -``` - -Each line contains an `ebp`, `eip`, and `args`. The `ebp` value indicates the base pointer into the stack used by that function: i.e., the position of the stack pointer just after the function was entered and the function prologue code set up the base pointer. 
The listed `eip` value is the function's _return instruction pointer_ : the instruction address to which control will return when the function returns. The return instruction pointer typically points to the instruction after the `call` instruction (why?). Finally, the five hex values listed after `args` are the first five arguments to the function in question, which would have been pushed on the stack just before the function was called. If the function was called with fewer than five arguments, of course, then not all five of these values will be useful. (Why can't the backtrace code detect how many arguments there actually are? How could this limitation be fixed?) - -The first line printed reflects the _currently executing_ function, namely `mon_backtrace` itself, the second line reflects the function that called `mon_backtrace`, the third line reflects the function that called that one, and so on. You should print _all_ the outstanding stack frames. By studying `kern/entry.S` you'll find that there is an easy way to tell when to stop. - -Here are a few specific points you read about in K&R Chapter 5 that are worth remembering for the following exercise and for future labs. - - * If `int *p = (int*)100`, then `(int)p + 1` and `(int)(p + 1)` are different numbers: the first is `101` but the second is `104`. When adding an integer to a pointer, as in the second case, the integer is implicitly multiplied by the size of the object the pointer points to. - * `p[i]` is defined to be the same as `*(p+i)`, referring to the i'th object in the memory pointed to by p. The above rule for addition helps this definition work when the objects are larger than one byte. - * `&p[i]` is the same as `(p+i)`, yielding the address of the i'th object in the memory pointed to by p. - - - -Although most C programs never need to cast between pointers and integers, operating systems frequently do. Whenever you see an addition involving a memory address, ask yourself whether it is an integer addition or pointer addition and make sure the value being added is appropriately multiplied or not. - -Exercise 11. Implement the backtrace function as specified above. Use the same format as in the example, since otherwise the grading script will be confused. When you think you have it working right, run make grade to see if its output conforms to what our grading script expects, and fix it if it doesn't. _After_ you have handed in your Lab 1 code, you are welcome to change the output format of the backtrace function any way you like. - -If you use `read_ebp()`, note that GCC may generate "optimized" code that calls `read_ebp()` _before_ `mon_backtrace()`'s function prologue, which results in an incomplete stack trace (the stack frame of the most recent function call is missing). While we have tried to disable optimizations that cause this reordering, you may want to examine the assembly of `mon_backtrace()` and make sure the call to `read_ebp()` is happening after the function prologue. - -At this point, your backtrace function should give you the addresses of the function callers on the stack that lead to `mon_backtrace()` being executed. However, in practice you often want to know the function names corresponding to those addresses. For instance, you may want to know which functions could contain a bug that's causing your kernel to crash. 
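
Before moving on to names, it may help to see one possible shape of the address-walking part. The sketch below is not the official solution and glosses over corner cases; it assumes `read_ebp()` from `inc/x86.h`, the frame layout described above, and that the oldest frame's saved `%ebp` is 0 (see `kern/entry.S`), which is what gives the loop a stopping condition. It is meant to live in `kern/monitor.c`, where the needed headers are already included. Mapping each `eip` to a function name and source line is the separate piece described next.

```
/* A minimal sketch, not the official solution: walk the chain of saved
 * ebp values.  Each frame looks like [saved ebp][return eip][arg0..arg4]
 * going upward in memory from the frame's ebp. */
int
mon_backtrace(int argc, char **argv, struct Trapframe *tf)
{
	uint32_t *ebp = (uint32_t *) read_ebp();

	cprintf("Stack backtrace:\n");
	while (ebp != 0) {                 /* the very first frame has saved ebp == 0 */
		uint32_t eip = ebp[1];     /* return address of the current frame */
		cprintf("  ebp %08x  eip %08x  args %08x %08x %08x %08x %08x\n",
			(uint32_t) ebp, eip,
			ebp[2], ebp[3], ebp[4], ebp[5], ebp[6]);
		ebp = (uint32_t *) ebp[0]; /* saved ebp points at the caller's frame */
	}
	return 0;
}
```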
- -To help you implement this functionality, we have provided the function `debuginfo_eip()`, which looks up `eip` in the symbol table and returns the debugging information for that address. This function is defined in `kern/kdebug.c`. - -Exercise 12. Modify your stack backtrace function to display, for each `eip`, the function name, source file name, and line number corresponding to that `eip`. - -In `debuginfo_eip`, where do `__STAB_*` come from? This question has a long answer; to help you to discover the answer, here are some things you might want to do: - - * look in the file `kern/kernel.ld` for `__STAB_*` - * run objdump -h obj/kern/kernel - * run objdump -G obj/kern/kernel - * run gcc -pipe -nostdinc -O2 -fno-builtin -I. -MD -Wall -Wno-format -DJOS_KERNEL -gstabs -c -S kern/init.c, and look at init.s. - * see if the bootloader loads the symbol table in memory as part of loading the kernel binary - - - -Complete the implementation of `debuginfo_eip` by inserting the call to `stab_binsearch` to find the line number for an address. - -Add a `backtrace` command to the kernel monitor, and extend your implementation of `mon_backtrace` to call `debuginfo_eip` and print a line for each stack frame of the form: - -``` -K> backtrace -Stack backtrace: - ebp f010ff78 eip f01008ae args 00000001 f010ff8c 00000000 f0110580 00000000 - kern/monitor.c:143: monitor+106 - ebp f010ffd8 eip f0100193 args 00000000 00001aac 00000660 00000000 00000000 - kern/init.c:49: i386_init+59 - ebp f010fff8 eip f010003d args 00000000 00000000 0000ffff 10cf9a00 0000ffff - kern/entry.S:70: +0 -K> - -``` - -Each line gives the file name and line within that file of the stack frame's `eip`, followed by the name of the function and the offset of the `eip` from the first instruction of the function (e.g., `monitor+106` means the return `eip` is 106 bytes past the beginning of `monitor`). - -Be sure to print the file and function names on a separate line, to avoid confusing the grading script. - -Tip: printf format strings provide an easy, albeit obscure, way to print non-null-terminated strings like those in STABS tables. `printf("%.*s", length, string)` prints at most `length` characters of `string`. Take a look at the printf man page to find out why this works. - -You may find that some functions are missing from the backtrace. For example, you will probably see a call to `monitor()` but not to `runcmd()`. This is because the compiler in-lines some function calls. Other optimizations may cause you to see unexpected line numbers. If you get rid of the `-O2` from `GNUMakefile`, the backtraces may make more sense (but your kernel will run more slowly). - -**This completes the lab.** In the `lab` directory, commit your changes with git commit and type make handin to submit your code. 
- --------------------------------------------------------------------------------- - -via: https://pdos.csail.mit.edu/6.828/2018/labs/lab1/ - -作者:[csail.mit][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[b]: https://github.com/lujun9972 -[1]: http://www.git-scm.com/ -[2]: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html -[3]: http://eagain.net/articles/git-for-computer-scientists/ -[4]: https://pdos.csail.mit.edu/6.828/2018/tools.html -[5]: https://6828.scripts.mit.edu/2018/handin.py/ -[6]: https://pdos.csail.mit.edu/6.828/2018/readings/pcasm-book.pdf -[7]: http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html -[8]: https://pdos.csail.mit.edu/6.828/2018/reference.html -[9]: https://pdos.csail.mit.edu/6.828/2018/readings/i386/toc.htm -[10]: http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html -[11]: http://developer.amd.com/resources/developer-guides-manuals/ -[12]: http://www.qemu.org/ -[13]: http://www.gnu.org/software/gdb/ -[14]: http://web.archive.org/web/20040404164813/members.iweb.net.au/~pstorr/pcbook/book2/book2.htm -[15]: https://pdos.csail.mit.edu/6.828/2018/readings/boot-cdrom.pdf -[16]: https://pdos.csail.mit.edu/6.828/2018/labguide.html -[17]: http://www.amazon.com/C-Programming-Language-2nd/dp/0131103628/sr=8-1/qid=1157812738/ref=pd_bbs_1/104-1502762-1803102?ie=UTF8&s=books -[18]: http://library.mit.edu/F/AI9Y4SJ2L5ELEE2TAQUAAR44XV5RTTQHE47P9MKP5GQDLR9A8X-10422?func=item-global&doc_library=MIT01&doc_number=000355242&year=&volume=&sub_library= -[19]: https://pdos.csail.mit.edu/6.828/2018/labs/lab1/pointers.c -[20]: https://pdos.csail.mit.edu/6.828/2018/readings/pointers.pdf -[21]: https://pdos.csail.mit.edu/6.828/2018/readings/elf.pdf -[22]: http://en.wikipedia.org/wiki/Executable_and_Linkable_Format -[23]: https://sourceware.org/gdb/current/onlinedocs/gdb/Memory.html -[24]: http://web.cs.mun.ca/~michael/c/ascii-table.html -[25]: http://www.webopedia.com/TERM/b/big_endian.html -[26]: http://www.networksorcery.com/enp/ien/ien137.txt -[27]: http://rrbrandt.dee.ufcg.edu.br/en/docs/ansi/ diff --git a/sources/tech/20180914 A day in the life of a log message.md b/sources/tech/20180914 A day in the life of a log message.md deleted file mode 100644 index 8d60ec9fe6..0000000000 --- a/sources/tech/20180914 A day in the life of a log message.md +++ /dev/null @@ -1,57 +0,0 @@ -A day in the life of a log message -====== - -Navigating a modern distributed system from the perspective of a log message. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/plane_travel_world_international.png?itok=jG3sYPty) - -Chaotic systems tend to be unpredictable. This is especially evident when architecting something as complex as a distributed system. Left unchecked, this unpredictability can waste boundless amounts of time. This is why every single component of a distributed system, no matter how small, must be designed to fit together in a streamlined way. - -[Kubernetes][1] provides a promising model for abstracting compute resources—but even it must be reconciled with other distributed platforms such as [Apache Kafka][2] to ensure reliable data delivery. If someone were to integrate these two platforms, how would it work? 
Furthermore, if you were to trace something as simple as a log message through such a system, what would it look like? This article will focus on how a log message from an application running inside [OKD][3], the Origin Community Distribution of Kubernetes that powers Red Hat OpenShift, gets to a data warehouse through Kafka.

### OKD-defined environment

Such a journey begins in OKD, since the container platform completely overlays the hardware it abstracts. This means that the log message waits to be written to **stdout** or **stderr** streams by an application residing in a container. From there, the log message is redirected onto the node's filesystem by a container engine such as [CRI-O][4].

![](https://opensource.com/sites/default/files/uploads/logmessagepathway.png)

Within OpenShift, one or more containers are encapsulated within virtual compute nodes known as pods. In fact, all applications running within OKD are abstracted as pods. This allows the applications to be manipulated in a uniform way. This also greatly simplifies communication between distributed components, since pods are systematically addressable through IP addresses and [load-balanced services][5]. So when the log message is taken from the node's filesystem by a log-collector application, it can easily be delivered to another pod running within OpenShift.

### Two peas in a pod

To ensure ubiquitous dispersal of the log message throughout the distributed system, the log collector needs to deliver the log message into a Kafka cluster data hub running within OpenShift. Through Kafka, the log message can be delivered to the consuming applications in a reliable and fault-tolerant way with low latency. However, in order to reap the benefits of Kafka within an OKD-defined environment, Kafka needs to be fully integrated into OKD.

Running a [Strimzi operator][6] will instantiate all Kafka components as pods and integrate them to run within an OKD environment. This includes Kafka brokers for queuing log messages, Kafka connectors for reading and writing from Kafka brokers, and Zookeeper nodes for managing the Kafka cluster state. Strimzi can also instantiate the log collector to double as a Kafka connector, allowing the log collector to feed the log messages directly into a Kafka broker pod running within OKD.

### Kafka inside OKD

When the log-collector pod delivers the log message to a Kafka broker, the collector writes to a single broker partition, appending the message to the end of the partition. One of the advantages of using Kafka is that it decouples the log collector from the log's final destination. Thanks to the decoupling, the log collector doesn't care whether the logs end up in [Elasticsearch][7], Hadoop, Amazon S3, or all of them at the same time. Kafka is well-connected to all infrastructure, so the Kafka connectors can take the log message wherever it needs to go.

Once written to a Kafka broker's partition, the log message is replicated across brokers within the Kafka cluster. This is a very powerful concept on its own; combined with the self-healing features of the platform, it creates a very resilient distributed system. For example, when a node becomes unavailable, the applications running on the node are almost instantaneously spawned on healthy node(s). So even if a node with the Kafka broker is lost or damaged, the log message is guaranteed to survive as many deaths as it was replicated and a new Kafka broker will quickly take the original's place.
- -### Off to storage - -After it is committed to a Kafka topic, the log message waits to be consumed by a Kafka connector sink, which relays the log message to either an analytics engine or logging warehouse. Upon delivery to its final destination, the log message could be studied for anomaly detection, queried for immediate root-cause analysis, or used for other purposes. Either way, the log message is delivered by Kafka to its destination in a safe and reliable manner. - -OKD and Kafka are powerful distributed platforms that are evolving rapidly. It is vital to create systems that can abstract the complicated nature of distributed computing without compromising performance. After all, how can we boast of systemwide efficiency if we cannot simplify the journey of a single log message? - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/life-log-message - -作者:[Josef Karásek][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/jkarasek -[1]: https://kubernetes.io/ -[2]: https://kafka.apache.org/ -[3]: https://www.okd.io/ -[4]: http://cri-o.io/ -[5]: https://kubernetes.io/docs/concepts/services-networking/service/ -[6]: http://strimzi.io/ -[7]: https://www.elastic.co/ diff --git a/sources/tech/20180919 Host your own cloud with Raspberry Pi NAS.md b/sources/tech/20180919 Host your own cloud with Raspberry Pi NAS.md new file mode 100644 index 0000000000..5d34623e8c --- /dev/null +++ b/sources/tech/20180919 Host your own cloud with Raspberry Pi NAS.md @@ -0,0 +1,128 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Host your own cloud with Raspberry Pi NAS) +[#]: via: (https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi?extIdCarryOver=true) +[#]: author: (Manuel Dewald https://opensource.com/users/ntlx) + +Host your own cloud with Raspberry Pi NAS +====== + +Protect and secure your data with a self-hosted cloud powered by your Raspberry Pi. + +![Tree clouds][1] + +In the first two parts of this series, we discussed the [hardware and software fundamentals][2] for building network-attached storage (NAS) on a Raspberry Pi. We also put a proper [backup strategy][3] in place to secure the data on the NAS. In this third part, we will talk about a convenient way to store, access, and share your data with [Nextcloud][4]. + +![Raspberry Pi NAS infrastructure with Nextcloud][6] + +### Prerequisites + +To use Nextcloud conveniently, you have to meet a few prerequisites. First, you should have a domain you can use for the Nextcloud instance. For the sake of simplicity in this how-to, we'll use **nextcloud.pi-nas.com**. This domain should be directed to your Raspberry Pi. If you want to run it on your home network, you probably need to set up dynamic DNS for this domain and enable port forwarding of ports 80 and 443 (if you go for an SSL setup, which is highly recommended; otherwise port 80 should be sufficient) from your router to the Raspberry Pi. + +You can automate dynamic DNS updates from the Raspberry Pi using [ddclient][7]. + +### Install Nextcloud + +To run Nextcloud on your Raspberry Pi (using the setup described in the [first part][2] of this series), install the following packages as dependencies to Nextcloud using **apt**. 
+ +``` +sudo apt install unzip wget php apache2 mysql-server php-zip php-mysql php-dom php-mbstring php-gd php-curl +``` + +The next step is to download Nextcloud. [Get the latest release's URL][8] and copy it to download via **wget** on the Raspberry Pi. In the first article in this series, we attached two disk drives to the Raspberry Pi, one for current data and one for backups. Install Nextcloud on the data drive to make sure data is backed up automatically every night. +``` +sudo mkdir -p /nas/data/nextcloud +sudo chown pi /nas/data/nextcloud +cd /nas/data/ +wget -O /nas/data/nextcloud.zip +unzip nextcloud.zip +sudo ln -s /nas/data/nextcloud /var/www/nextcloud +sudo chown -R www-data:www-data /nas/data/nextcloud +``` + +When I wrote this, the latest release (as you see in the code above) was 14. Nextcloud is under heavy development, so you may find a newer version when installing your copy of Nextcloud onto your Raspberry Pi. + +### Database setup + +When we installed Nextcloud above, we also installed MySQL as a dependency to use it for all the metadata Nextcloud generates (for example, the users you create to access Nextcloud). If you would rather use a Postgres database, you'll need to adjust some of the modules installed above. + +To access the MySQL database as root, start the MySQL client as root: + +``` +sudo mysql +``` + +This will open a SQL prompt where you can insert the following commands—substituting the placeholder with the password you want to use for the database connection—to create a database for Nextcloud. +``` +CREATE USER nextcloud IDENTIFIED BY ''; +CREATE DATABASE nextcloud; +GRANT ALL ON nextcloud.* TO nextcloud; +``` + + +You can exit the SQL prompt by pressing **Ctrl+D** or entering **quit**. + +### Web server configuration + +Nextcloud can be configured to run using Nginx or other web servers, but for this how-to, I decided to go with the Apache web server on my Raspberry Pi NAS. (Feel free to try out another alternative and let me know if you think it performs better.) + +To set it up, configure a virtual host for the domain you created for your Nextcloud instance **nextcloud.pi-nas.com**. To create a virtual host, create the file **/etc/apache2/sites-available/001-nextcloud.conf** with content similar to the following. Make sure to adjust the ServerName to your domain and paths, if you didn't use the ones suggested earlier in this series. +``` + +ServerName nextcloud.pi-nas.com +ServerAdmin [admin@pi-nas.com][9] +DocumentRoot /var/www/nextcloud/ + + +AllowOverride None + + +``` + + +To enable this virtual host, run the following two commands. +``` +a2ensite 001-nextcloud +sudo systemctl reload apache2 +``` + + +With this configuration, you should now be able to reach the web server with your domain via the web browser. To secure your data, I recommend using HTTPS instead of HTTP to access Nextcloud. A very easy (and free) way is to obtain a [Let's Encrypt][10] certificate with [Certbot][11] and have a cron job automatically refresh it. That way you don't have to mess around with self-signed or expiring certificates. Follow Certbot's simple how-to [instructions to install it on your Raspberry Pi][12]. During Certbot configuration, you can even decide to automatically forward HTTP to HTTPS, so visitors to **** will be redirected to ****. Please note, if your Raspberry Pi is running behind your home router, you must have port forwarding enabled for ports 443 and 80 to obtain Let's Encrypt certificates. 
+ +### Configure Nextcloud + +The final step is to visit your fresh Nextcloud instance in a web browser to finish the configuration. To do so, open your domain in a browser and insert the database details from above. You can also set up your first Nextcloud user here, the one you can use for admin tasks. By default, the data directory should be inside the Nextcloud folder, so you don't need to change anything for the backup mechanisms from the [second part of this series][3] to pick up the data stored by users in Nextcloud. + +Afterward, you will be directed to your Nextcloud and can log in with the admin user you created previously. To see a list of recommended steps to ensure a performant and secure Nextcloud installation, visit the Basic Settings tab in the Settings page (in our example: settings/admin) and see the Security & Setup Warnings section. + +Congratulations! You've set up your own Nextcloud powered by a Raspberry Pi. Go ahead and [download a Nextcloud client][13] from the Nextcloud page to sync data with your client devices and access it offline. Mobile clients even provide features like instant upload of pictures you take, so they'll automatically sync to your desktop PC without wondering how to get them there. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/host-cloud-nas-raspberry-pi?extIdCarryOver=true + +作者:[Manuel Dewald][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ntlx +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_tree_clouds.png?itok=b_ftihhP (Tree clouds) +[2]: https://opensource.com/article/18/7/network-attached-storage-Raspberry-Pi +[3]: https://opensource.com/article/18/8/automate-backups-raspberry-pi +[4]: https://nextcloud.com/ +[5]: /file/409336 +[6]: https://opensource.com/sites/default/files/uploads/nas_part3.png (Raspberry Pi NAS infrastructure with Nextcloud) +[7]: https://sourceforge.net/p/ddclient/wiki/Home/ +[8]: https://nextcloud.com/install/#instructions-server +[9]: mailto:admin@pi-nas.com +[10]: https://letsencrypt.org/ +[11]: https://certbot.eff.org/ +[12]: https://certbot.eff.org/lets-encrypt/debianother-apache +[13]: https://nextcloud.com/install/#install-clients diff --git a/sources/tech/20180927 Lab 2- Memory Management.md b/sources/tech/20180927 Lab 2- Memory Management.md deleted file mode 100644 index 386bf6ceaf..0000000000 --- a/sources/tech/20180927 Lab 2- Memory Management.md +++ /dev/null @@ -1,272 +0,0 @@ -Lab 2: Memory Management -====== -### Lab 2: Memory Management - -#### Introduction - -In this lab, you will write the memory management code for your operating system. Memory management has two components. - -The first component is a physical memory allocator for the kernel, so that the kernel can allocate memory and later free it. Your allocator will operate in units of 4096 bytes, called _pages_. Your task will be to maintain data structures that record which physical pages are free and which are allocated, and how many processes are sharing each allocated page. You will also write the routines to allocate and free pages of memory. 
- -The second component of memory management is _virtual memory_ , which maps the virtual addresses used by kernel and user software to addresses in physical memory. The x86 hardware's memory management unit (MMU) performs the mapping when instructions use memory, consulting a set of page tables. You will modify JOS to set up the MMU's page tables according to a specification we provide. - -##### Getting started - -In this and future labs you will progressively build up your kernel. We will also provide you with some additional source. To fetch that source, use Git to commit changes you've made since handing in lab 1 (if any), fetch the latest version of the course repository, and then create a local branch called `lab2` based on our lab2 branch, `origin/lab2`: - -``` - athena% cd ~/6.828/lab - athena% add git - athena% git pull - Already up-to-date. - athena% git checkout -b lab2 origin/lab2 - Branch lab2 set up to track remote branch refs/remotes/origin/lab2. - Switched to a new branch "lab2" - athena% -``` - -The git checkout -b command shown above actually does two things: it first creates a local branch `lab2` that is based on the `origin/lab2` branch provided by the course staff, and second, it changes the contents of your `lab` directory to reflect the files stored on the `lab2` branch. Git allows switching between existing branches using git checkout _branch-name_ , though you should commit any outstanding changes on one branch before switching to a different one. - -You will now need to merge the changes you made in your `lab1` branch into the `lab2` branch, as follows: - -``` - athena% git merge lab1 - Merge made by recursive. - kern/kdebug.c | 11 +++++++++-- - kern/monitor.c | 19 +++++++++++++++++++ - lib/printfmt.c | 7 +++---- - 3 files changed, 31 insertions(+), 6 deletions(-) - athena% -``` - -In some cases, Git may not be able to figure out how to merge your changes with the new lab assignment (e.g. if you modified some of the code that is changed in the second lab assignment). In that case, the git merge command will tell you which files are _conflicted_ , and you should first resolve the conflict (by editing the relevant files) and then commit the resulting files with git commit -a. - -Lab 2 contains the following new source files, which you should browse through: - - * `inc/memlayout.h` - * `kern/pmap.c` - * `kern/pmap.h` - * `kern/kclock.h` - * `kern/kclock.c` - - - -`memlayout.h` describes the layout of the virtual address space that you must implement by modifying `pmap.c`. `memlayout.h` and `pmap.h` define the `PageInfo` structure that you'll use to keep track of which pages of physical memory are free. `kclock.c` and `kclock.h` manipulate the PC's battery-backed clock and CMOS RAM hardware, in which the BIOS records the amount of physical memory the PC contains, among other things. The code in `pmap.c` needs to read this device hardware in order to figure out how much physical memory there is, but that part of the code is done for you: you do not need to know the details of how the CMOS hardware works. - -Pay particular attention to `memlayout.h` and `pmap.h`, since this lab requires you to use and understand many of the definitions they contain. You may want to review `inc/mmu.h`, too, as it also contains a number of definitions that will be useful for this lab. - -Before beginning the lab, don't forget to add -f 6.828 to get the 6.828 version of QEMU. 
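
Before the exercises, it may help to have a concrete picture of the `PageInfo` bookkeeping mentioned above. The fragment below is only a sketch: the field names follow the real definitions in `inc/memlayout.h` and `kern/pmap.c`, but the surrounding declarations and the small helper are illustrative, not code to paste in.

```
/* Rough sketch of the physical-page bookkeeping (see inc/memlayout.h and
 * kern/pmap.c for the authoritative definitions). */
#include <stdint.h>   /* JOS uses its own inc/types.h instead */

struct PageInfo {
	struct PageInfo *pp_link;   /* next page on the free list, or NULL */
	uint16_t pp_ref;            /* number of references to this physical page */
};

struct PageInfo *pages;                    /* one entry per physical page */
static struct PageInfo *page_free_list;    /* head of the list of free pages */

/* Freeing a page is essentially a push onto the free list;
 * allocating one is a pop. */
static void
free_list_push(struct PageInfo *pp)
{
	pp->pp_link = page_free_list;
	page_free_list = pp;
}
```

The point is simply that the free list is threaded through the `PageInfo` entries themselves, so allocating and freeing a page are constant-time list operations.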
- -##### Lab Requirements - -In this lab and subsequent labs, do all of the regular exercises described in the lab and _at least one_ challenge problem. (Some challenge problems are more challenging than others, of course!) Additionally, write up brief answers to the questions posed in the lab and a short (e.g., one or two paragraph) description of what you did to solve your chosen challenge problem. If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab2.txt` in the top level of your `lab` directory before handing in your work. - -##### Hand-In Procedure - -When you are ready to hand in your lab code and write-up, add your `answers-lab2.txt` to the Git repository, commit your changes, and then run make handin. - -``` - athena% git add answers-lab2.txt - athena% git commit -am "my answer to lab2" - [lab2 a823de9] my answer to lab2 - 4 files changed, 87 insertions(+), 10 deletions(-) - athena% make handin -``` - -As before, we will be grading your solutions with a grading program. You can run make grade in the `lab` directory to test your kernel with the grading program. You may change any of the kernel source and header files you need to in order to complete the lab, but needless to say you must not change or otherwise subvert the grading code. - -#### Part 1: Physical Page Management - -The operating system must keep track of which parts of physical RAM are free and which are currently in use. JOS manages the PC's physical memory with _page granularity_ so that it can use the MMU to map and protect each piece of allocated memory. - -You'll now write the physical page allocator. It keeps track of which pages are free with a linked list of `struct PageInfo` objects (which, unlike xv6, are not embedded in the free pages themselves), each corresponding to a physical page. You need to write the physical page allocator before you can write the rest of the virtual memory implementation, because your page table management code will need to allocate physical memory in which to store page tables. - -Exercise 1. In the file `kern/pmap.c`, you must implement code for the following functions (probably in the order given). - -`boot_alloc()` -`mem_init()` (only up to the call to `check_page_free_list(1)`) -`page_init()` -`page_alloc()` -`page_free()` - -`check_page_free_list()` and `check_page_alloc()` test your physical page allocator. You should boot JOS and see whether `check_page_alloc()` reports success. Fix your code so that it passes. You may find it helpful to add your own `assert()`s to verify that your assumptions are correct. - -This lab, and all the 6.828 labs, will require you to do a bit of detective work to figure out exactly what you need to do. This assignment does not describe all the details of the code you'll have to add to JOS. Look for comments in the parts of the JOS source that you have to modify; those comments often contain specifications and hints. You will also need to look at related parts of JOS, at the Intel manuals, and perhaps at your 6.004 or 6.033 notes. - -#### Part 2: Virtual Memory - -Before doing anything else, familiarize yourself with the x86's protected-mode memory management architecture: namely _segmentation_ and _page translation_. - -Exercise 2. Look at chapters 5 and 6 of the [Intel 80386 Reference Manual][1], if you haven't done so already. 
Read the sections about page translation and page-based protection closely (5.2 and 6.4). We recommend that you also skim the sections about segmentation; while JOS uses the paging hardware for virtual memory and protection, segment translation and segment-based protection cannot be disabled on the x86, so you will need a basic understanding of it. - -##### Virtual, Linear, and Physical Addresses - -In x86 terminology, a _virtual address_ consists of a segment selector and an offset within the segment. A _linear address_ is what you get after segment translation but before page translation. A _physical address_ is what you finally get after both segment and page translation and what ultimately goes out on the hardware bus to your RAM. - -``` - Selector +--------------+ +-----------+ - ---------->| | | | - | Segmentation | | Paging | -Software | |-------->| |----------> RAM - Offset | Mechanism | | Mechanism | - ---------->| | | | - +--------------+ +-----------+ - Virtual Linear Physical - -``` - -A C pointer is the "offset" component of the virtual address. In `boot/boot.S`, we installed a Global Descriptor Table (GDT) that effectively disabled segment translation by setting all segment base addresses to 0 and limits to `0xffffffff`. Hence the "selector" has no effect and the linear address always equals the offset of the virtual address. In lab 3, we'll have to interact a little more with segmentation to set up privilege levels, but as for memory translation, we can ignore segmentation throughout the JOS labs and focus solely on page translation. - -Recall that in part 3 of lab 1, we installed a simple page table so that the kernel could run at its link address of 0xf0100000, even though it is actually loaded in physical memory just above the ROM BIOS at 0x00100000. This page table mapped only 4MB of memory. In the virtual address space layout you are going to set up for JOS in this lab, we'll expand this to map the first 256MB of physical memory starting at virtual address 0xf0000000 and to map a number of other regions of the virtual address space. - -Exercise 3. While GDB can only access QEMU's memory by virtual address, it's often useful to be able to inspect physical memory while setting up virtual memory. Review the QEMU [monitor commands][2] from the lab tools guide, especially the `xp` command, which lets you inspect physical memory. To access the QEMU monitor, press Ctrl-a c in the terminal (the same binding returns to the serial console). - -Use the xp command in the QEMU monitor and the x command in GDB to inspect memory at corresponding physical and virtual addresses and make sure you see the same data. - -Our patched version of QEMU provides an info pg command that may also prove useful: it shows a compact but detailed representation of the current page tables, including all mapped memory ranges, permissions, and flags. Stock QEMU also provides an info mem command that shows an overview of which ranges of virtual addresses are mapped and with what permissions. - -From code executing on the CPU, once we're in protected mode (which we entered first thing in `boot/boot.S`), there's no way to directly use a linear or physical address. _All_ memory references are interpreted as virtual addresses and translated by the MMU, which means all pointers in C are virtual addresses. - -The JOS kernel often needs to manipulate addresses as opaque values or as integers, without dereferencing them, for example in the physical memory allocator. 
Sometimes these are virtual addresses, and sometimes they are physical addresses. To help document the code, the JOS source distinguishes the two cases: the type `uintptr_t` represents opaque virtual addresses, and `physaddr_t` represents physical addresses. Both these types are really just synonyms for 32-bit integers (`uint32_t`), so the compiler won't stop you from assigning one type to another! Since they are integer types (not pointers), the compiler _will_ complain if you try to dereference them.

The JOS kernel can dereference a `uintptr_t` by first casting it to a pointer type. In contrast, the kernel can't sensibly dereference a physical address, since the MMU translates all memory references. If you cast a `physaddr_t` to a pointer and dereference it, you may be able to load and store to the resulting address (the hardware will interpret it as a virtual address), but you probably won't get the memory location you intended.

To summarize:

| C type | Address type |
|---------------|--------------|
| `T*` | Virtual |
| `uintptr_t` | Virtual |
| `physaddr_t` | Physical |

Question

 1. Assuming that the following JOS kernel code is correct, what type should variable `x` have, `uintptr_t` or `physaddr_t`?

```
	mystery_t x;
	char* value = return_a_pointer();
	*value = 10;
	x = (mystery_t) value;

```

The JOS kernel sometimes needs to read or modify memory for which it knows only the physical address. For example, adding a mapping to a page table may require allocating physical memory to store a page directory and then initializing that memory. However, the kernel cannot bypass virtual address translation and thus cannot directly load and store to physical addresses. One reason JOS remaps all of physical memory starting from physical address 0 at virtual address 0xf0000000 is to help the kernel read and write memory for which it knows just the physical address. In order to translate a physical address into a virtual address that the kernel can actually read and write, the kernel must add 0xf0000000 to the physical address to find its corresponding virtual address in the remapped region. You should use `KADDR(pa)` to do that addition.

The JOS kernel also sometimes needs to be able to find a physical address given the virtual address of the memory in which a kernel data structure is stored. Kernel global variables and memory allocated by `boot_alloc()` are in the region where the kernel was loaded, starting at 0xf0000000, the very region where we mapped all of physical memory. Thus, to turn a virtual address in this region into a physical address, the kernel can simply subtract 0xf0000000. You should use `PADDR(va)` to do that subtraction.

##### Reference counting

In future labs you will often have the same physical page mapped at multiple virtual addresses simultaneously (or in the address spaces of multiple environments). You will keep a count of the number of references to each physical page in the `pp_ref` field of the `struct PageInfo` corresponding to the physical page. When this count goes to zero for a physical page, that page can be freed because it is no longer used. In general, this count should be equal to the number of times the physical page appears below `UTOP` in all page tables (the mappings above `UTOP` are mostly set up at boot time by the kernel and should never be freed, so there's no need to reference count them).
We'll also use it to keep track of the number of pointers we keep to the page directory pages and, in turn, of the number of references the page directories have to page table pages. - -Be careful when using `page_alloc`. The page it returns will always have a reference count of 0, so `pp_ref` should be incremented as soon as you've done something with the returned page (like inserting it into a page table). Sometimes this is handled by other functions (for example, `page_insert`) and sometimes the function calling `page_alloc` must do it directly. - -##### Page Table Management - -Now you'll write a set of routines to manage page tables: to insert and remove linear-to-physical mappings, and to create page table pages when needed. - -Exercise 4. In the file `kern/pmap.c`, you must implement code for the following functions. - -``` - - pgdir_walk() - boot_map_region() - page_lookup() - page_remove() - page_insert() - - -``` - -`check_page()`, called from `mem_init()`, tests your page table management routines. You should make sure it reports success before proceeding. - -#### Part 3: Kernel Address Space - -JOS divides the processor's 32-bit linear address space into two parts. User environments (processes), which we will begin loading and running in lab 3, will have control over the layout and contents of the lower part, while the kernel always maintains complete control over the upper part. The dividing line is defined somewhat arbitrarily by the symbol `ULIM` in `inc/memlayout.h`, reserving approximately 256MB of virtual address space for the kernel. This explains why we needed to give the kernel such a high link address in lab 1: otherwise there would not be enough room in the kernel's virtual address space to map in a user environment below it at the same time. - -You'll find it helpful to refer to the JOS memory layout diagram in `inc/memlayout.h` both for this part and for later labs. - -##### Permissions and Fault Isolation - -Since kernel and user memory are both present in each environment's address space, we will have to use permission bits in our x86 page tables to allow user code access only to the user part of the address space. Otherwise bugs in user code might overwrite kernel data, causing a crash or more subtle malfunction; user code might also be able to steal other environments' private data. Note that the writable permission bit (`PTE_W`) affects both user and kernel code! - -The user environment will have no permission to any of the memory above `ULIM`, while the kernel will be able to read and write this memory. For the address range `[UTOP,ULIM)`, both the kernel and the user environment have the same permission: they can read but not write this address range. This range of address is used to expose certain kernel data structures read-only to the user environment. Lastly, the address space below `UTOP` is for the user environment to use; the user environment will set permissions for accessing this memory. - -##### Initializing the Kernel Address Space - -Now you'll set up the address space above `UTOP`: the kernel part of the address space. `inc/memlayout.h` shows the layout you should use. You'll use the functions you just wrote to set up the appropriate linear to physical mappings. - -Exercise 5. Fill in the missing code in `mem_init()` after the call to `check_page()`. - -Your code should now pass the `check_kern_pgdir()` and `check_page_installed_pgdir()` checks. - -Question - - 2. What entries (rows) in the page directory have been filled in at this point? 
What addresses do they map and where do they point? In other words, fill out this table as much as possible: - | Entry | Base Virtual Address | Points to (logically): | - |-------|----------------------|---------------------------------------| - | 1023 | ? | Page table for top 4MB of phys memory | - | 1022 | ? | ? | - | . | ? | ? | - | . | ? | ? | - | . | ? | ? | - | 2 | 0x00800000 | ? | - | 1 | 0x00400000 | ? | - | 0 | 0x00000000 | [see next question] | - 3. We have placed the kernel and user environment in the same address space. Why will user programs not be able to read or write the kernel's memory? What specific mechanisms protect the kernel memory? - 4. What is the maximum amount of physical memory that this operating system can support? Why? - 5. How much space overhead is there for managing memory, if we actually had the maximum amount of physical memory? How is this overhead broken down? - 6. Revisit the page table setup in `kern/entry.S` and `kern/entrypgdir.c`. Immediately after we turn on paging, EIP is still a low number (a little over 1MB). At what point do we transition to running at an EIP above KERNBASE? What makes it possible for us to continue executing at a low EIP between when we enable paging and when we begin running at an EIP above KERNBASE? Why is this transition necessary? - - -``` -Challenge! We consumed many physical pages to hold the page tables for the KERNBASE mapping. Do a more space-efficient job using the PTE_PS ("Page Size") bit in the page directory entries. This bit was _not_ supported in the original 80386, but is supported on more recent x86 processors. You will therefore have to refer to [Volume 3 of the current Intel manuals][3]. Make sure you design the kernel to use this optimization only on processors that support it! -``` - -``` -Challenge! Extend the JOS kernel monitor with commands to: - - * Display in a useful and easy-to-read format all of the physical page mappings (or lack thereof) that apply to a particular range of virtual/linear addresses in the currently active address space. For example, you might enter `'showmappings 0x3000 0x5000'` to display the physical page mappings and corresponding permission bits that apply to the pages at virtual addresses 0x3000, 0x4000, and 0x5000. - * Explicitly set, clear, or change the permissions of any mapping in the current address space. - * Dump the contents of a range of memory given either a virtual or physical address range. Be sure the dump code behaves correctly when the range extends across page boundaries! - * Do anything else that you think might be useful later for debugging the kernel. (There's a good chance it will be!) -``` - - -##### Address Space Layout Alternatives - -The address space layout we use in JOS is not the only one possible. An operating system might map the kernel at low linear addresses while leaving the _upper_ part of the linear address space for user processes. x86 kernels generally do not take this approach, however, because one of the x86's backward-compatibility modes, known as _virtual 8086 mode_ , is "hard-wired" in the processor to use the bottom part of the linear address space, and thus cannot be used at all if the kernel is mapped there. 
- -It is even possible, though much more difficult, to design the kernel so as not to have to reserve _any_ fixed portion of the processor's linear or virtual address space for itself, but instead effectively to allow user-level processes unrestricted use of the _entire_ 4GB of virtual address space - while still fully protecting the kernel from these processes and protecting different processes from each other! - -``` -Challenge! Each user-level environment maps the kernel. Change JOS so that the kernel has its own page table and so that a user-level environment runs with a minimal number of kernel pages mapped. That is, each user-level environment maps just enough pages mapped so that the user-level environment can enter and leave the kernel correctly. You also have to come up with a plan for the kernel to read/write arguments to system calls. -``` - -``` -Challenge! Write up an outline of how a kernel could be designed to allow user environments unrestricted use of the full 4GB virtual and linear address space. Hint: do the previous challenge exercise first, which reduces the kernel to a few mappings in a user environment. Hint: the technique is sometimes known as " _follow the bouncing kernel_. " In your design, be sure to address exactly what has to happen when the processor transitions between kernel and user modes, and how the kernel would accomplish such transitions. Also describe how the kernel would access physical memory and I/O devices in this scheme, and how the kernel would access a user environment's virtual address space during system calls and the like. Finally, think about and describe the advantages and disadvantages of such a scheme in terms of flexibility, performance, kernel complexity, and other factors you can think of. -``` - -``` -Challenge! Since our JOS kernel's memory management system only allocates and frees memory on page granularity, we do not have anything comparable to a general-purpose `malloc`/`free` facility that we can use within the kernel. This could be a problem if we want to support certain types of I/O devices that require _physically contiguous_ buffers larger than 4KB in size, or if we want user-level environments, and not just the kernel, to be able to allocate and map 4MB _superpages_ for maximum processor efficiency. (See the earlier challenge problem about PTE_PS.) - -Generalize the kernel's memory allocation system to support pages of a variety of power-of-two allocation unit sizes from 4KB up to some reasonable maximum of your choice. Be sure you have some way to divide larger allocation units into smaller ones on demand, and to coalesce multiple small allocation units back into larger units when possible. Think about the issues that might arise in such a system. -``` - -**This completes the lab.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab2.txt`. Commit your changes (including adding `answers-lab2.txt`) and type make handin in the `lab` directory to hand in your lab. 
- --------------------------------------------------------------------------------- - -via: https://pdos.csail.mit.edu/6.828/2018/labs/lab2/ - -作者:[csail.mit][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://pdos.csail.mit.edu -[b]: https://github.com/lujun9972 -[1]: https://pdos.csail.mit.edu/6.828/2018/readings/i386/toc.htm -[2]: https://pdos.csail.mit.edu/6.828/2018/labguide.html#qemu -[3]: https://pdos.csail.mit.edu/6.828/2018/readings/ia32/IA32-3A.pdf diff --git a/sources/tech/20180928 Quiet log noise with Python and machine learning.md b/sources/tech/20180928 Quiet log noise with Python and machine learning.md index 79894775ed..f1fe2f1b7f 100644 --- a/sources/tech/20180928 Quiet log noise with Python and machine learning.md +++ b/sources/tech/20180928 Quiet log noise with Python and machine learning.md @@ -1,5 +1,3 @@ -translating by Flowsnow - Quiet log noise with Python and machine learning ====== diff --git a/sources/tech/20180930 A Short History of Chaosnet.md b/sources/tech/20180930 A Short History of Chaosnet.md deleted file mode 100644 index acbb10d53d..0000000000 --- a/sources/tech/20180930 A Short History of Chaosnet.md +++ /dev/null @@ -1,118 +0,0 @@ -A Short History of Chaosnet -====== -If you fire up `dig` and run a DNS query for `google.com`, you will get a response somewhat like the following: - -``` -$ dig google.com - -; <<>> DiG 9.10.6 <<>> google.com -;; global options: +cmd -;; Got answer: -;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 27120 -;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 - -;; OPT PSEUDOSECTION: -; EDNS: version: 0, flags:; udp: 512 -;; QUESTION SECTION: -;google.com. IN A - -;; ANSWER SECTION: -google.com. 194 IN A 216.58.192.206 - -;; Query time: 23 msec -;; SERVER: 8.8.8.8#53(8.8.8.8) -;; WHEN: Fri Sep 21 16:14:48 CDT 2018 -;; MSG SIZE rcvd: 55 -``` - -The output contains both a section describing the “question” you asked (“What is the IP address of `google.com`?”) and a section describing the answer you received. In the answer section, we see that `dig` found a single record with what looks to be five fields. The record’s type is indicated by the `A` in the fourth field from the left—this is an “address” record. To the right of the `A`, in the fifth field, we can see that the IP address for `google.com` is `216.58.192.206`. The `194` value in the second field specifies how long in seconds this particular record can be cached. - -What does the `IN` field tell us? For an embarrassingly long time, I thought `IN` functioned as a preposition, so that every DNS record was saying something like “`google.com` is in `A` and has IP address `216.58.192.206`.” It turns out that `IN` actually stands for “internet.” The `IN` part of a DNS record tells us the record’s class. - -Why might a DNS record have a class other than “internet”? What would that even mean? How do you search for a host that isn’t on the internet? It would seem that `IN` is the only value that could possibly make sense here. Indeed, when you try to ask for the address of `google.com` while specifying that you expect a record with a class other than `IN`, the DNS server you are asking will probably complain. 
In the below, when we try to ask for the IP address of `google.com` using the `HS` class, the name server at `8.8.8.8` (Google Public DNS) returns a status of `SERVFAIL`: - -``` -$ dig -c HS google.com - -; <<>> DiG 9.10.6 <<>> -c HS google.com -;; global options: +cmd -;; Got answer: -;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 31517 -;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1 - -;; OPT PSEUDOSECTION: -; EDNS: version: 0, flags:; udp: 512 -;; QUESTION SECTION: -;google.com. HS A - -;; Query time: 34 msec -;; SERVER: 8.8.8.8#53(8.8.8.8) -;; WHEN: Tue Sep 25 14:48:10 CDT 2018 -;; MSG SIZE rcvd: 39 -``` - -So classes other than `IN` aren’t widely supported. But they do exist. In addition to `IN`, DNS records can have the `HS` class (as we’ve just seen) or the `CH` class. The `HS` class is reserved for use by a system called [Hesiod][1] that stores and distributes simple textual data using the Domain Name System. It is typically used in local environments as a stand-in for [LDAP][2]. The `CH` class is reserved for something called Chaosnet. - -Today, the world belongs to TCP/IP. Those two protocols (together with UDP) govern most of the remote communication that happens between computers. But I think it’s wonderful that you can still find, hidden in the plumbing of the internet, traces of this other, long-extinct, evocatively named system. What was Chaosnet? And why did it go the way of the dinosaurs? - -### A Machine Room at MIT - -Chaosnet was developed in the 1970s by researchers at the MIT Artificial Intelligence Lab. It was created as a part of a larger effort to design and build a machine that could run the Lisp programming language more efficiently than a general-purpose computer. - -Lisp was the brainchild of MIT professor John McCarthy, who pioneered the field of artificial intelligence. He first described Lisp to the world in [a paper][3] published in 1960. By 1962, an interpreter and a compiler had been written. Lisp introduced an astounding number of features that today we consider standard for many programming languages. It was the first language to have a garbage collector. It was the first to have a REPL. And it was the first to support dynamic typing. It found favor among programmers working in artificial intelligence and—to name just one example—was used to develop the famous [SHRDLU][4] demonstration, which allowed a human to dictate simple actions involving toy blocks to a computer in natural language. - -The problem with Lisp was that it could be slow. Simple operations could take twice as long to execute as was typical with other languages because Lisp variables were type-checked at runtime and not just during compilation. Lisp’s garbage collector was known to take up to an entire second to run on the IBM 7090 at MIT. These performance issues were especially unwelcome because the AI researchers using Lisp were trying to build applications like SHRDLU that interacted with users in real time. In the late 1970s, a group of MIT Artificial Intelligence Lab researchers decided to address these problems by building machines specifically designed to run Lisp programs. These “Lisp machines” had more memory and a compact instruction set better-suited to Lisp. Type-checking would be done by dedicated circuitry, speeding it up by orders of magnitude. And unlike most computer systems at the time, Lisp machines would not be time-shared, since ambitious Lisp programs needed all the resources a computer had available. 
Each user would be assigned his or her own CPU. In a memo, the Lisp Machine Group at MIT described how this would make Lisp programming significantly easier: - -> The Lisp Machine is a personal computer. Personal computing means that the processor and main memory are not time-division multiplexed, instead each person gets his own. The personal computation system consists of a pool of processors, each with its own main memory, and its own disk for swapping. When a user logs in, he is assigned a processor, and he has exclusive use of it for the duration of the session. When he logs out, the processor is returned to the pool, for the next person to use. This way, there is no competition from other users for memory; the pages the user is frequently referring to remain in core, and so swapping overhead is considerably reduced. Thus the Lisp Machine solves a basic problem of the time-sharing Lisp system. - -The Lisp machine would be a personal computer in a different sense than the one we think of today. As the Lisp Machine Group originally envisioned it, users would sit down in their offices not in front of their own Lisp machines but in front of terminals. The terminals would be connected to the actual Lisp machine, which would be elsewhere. Even though each user would be assigned his or her own processor, the processors would still be “kept off in a machine room,” since they would make noise and take up space and thus be “unwelcome office companions.” The processors would share access to a file system and to devices like printers via a high-speed local network “with completely distributed control.” That network was Chaosnet. - -Chaosnet is both a hardware standard and a software protocol. The hardware standard resembles Ethernet, and in fact the Chaosnet software protocol was eventually run over Ethernet. The software protocol, which specifies both network-layer and transport-layer interactions, was, unlike TCP/IP, always meant to govern a local network. In another memo released by the MIT Artificial Intelligence Lab, David Moon, a member of the Lisp Machine Group, explained that Chaosnet “contains no special provisions for things such as low-speed links, noisy links, multiple paths, and long-distance links with significant transit time.” The focus was instead on designing a protocol that could outperform other protocols on a small network. - -Speed was important because Chaosnet sat between each Lisp processor and the file system. Network delays would significantly slow rudimentary operations like viewing the contents of a text document. To be fast enough, Chaosnet incorporated several improvements over the Network Control Program then in use on Arpanet. According to Moon, “it was important to design out bottlenecks such as are found in Arpanet, for instance the control-link which is shared between multiple connections and the need to acknowledge each message before the next message is sent.” The Chaosnet protocol batches packet acknowledgments in much the same way that TCP does today and so reduced the number of packets that needed to be transmitted by a half to a third. - -Chaosnet could also get away with a relatively simple routing algorithm, since most hosts on the Lisp machine network were probably connected by a single, short wire. Moon wrote that the Chaosnet routing scheme “is predicated on the assumption that the network geometry is simple, there are few multiple paths, and the length of any path is quite short. 
This makes more sophisticated schemes unnecessary.” The simplicity of the algorithm meant that implementing the Chaosnet protocol was easy. The implementation program was supposedly half the size of the Arpanet Network Control Program. - -The Chaosnet protocol has other idiosyncrasies. A Chaosnet address is only 16 bits, half the size of an IPv4 address, which makes sense given that Chaosnet was only ever meant to work on a local network. Chaosnet also doesn’t use port numbers; instead, a process that wants to connect to another process on a different machine first makes a connection request that specifies a target “contact name.” That contact name is often just the name of a particular service. For example, one host may try to connect to another host using the contact name `TELNET`. In practice, I assume this works more or less just like TCP, since something well-known like port 80 might as well have the contact name `HTTP`. - -The Chaosnet DNS class was added to the Domain Name System by [RFC 973][5] in 1986. It replaced another class that had been available early on, the `CSNET` class, which was there to support a network called the Computer Science Network. I haven’t been able to figure out why Chaosnet was picked out for special treatment by the Domain Name System. There were other protocol families that could have been added but never were. For example, Paul Mockapetris, one of the principal architects of the Domain Name System, has written that he originally imagined that DNS would include a class for Xerox’s network protocol. That never happened. Chaosnet may have been added just because so much of the early work on Arpanet and the internet happened at Bolt, Beranek and Newman in Cambridge, Massachusetts, whose employees were often connected in some way with MIT. Chaosnet was probably well-known among the then relatively small group of people working on computer networks. - -Usage of Chaosnet presumably waned as Lisp machines became less and less popular. Though Lisp machines were for a short time commercially viable products—sold by companies such as Symbolics and Lisp Machines Inc. during the 1980s—they were soon displaced by cheaper microcomputers that could run Lisp just as quickly without special-purpose circuitry. TCP/IP also fixed many of the issues with the original Arpanet protocols that Chaosnet had been created to circumvent. - -### Ghost in the Shell - -There unfortunately isn’t a huge amount of information still around about Chaosnet. RFC 675, which was essentially the first draft of TCP/IP, was published in 1974. Chaosnet was first developed in 1975. TCP/IP eventually conquered the world, but Chaosnet seems to have been a technological dead end. Though it’s possible that Chaosnet influenced subsequent work on TCP/IP, I haven’t found any specific examples of that happening. - -The only really visible remnant of Chaosnet is the `CH` DNS class. There’s something about that fact that I find strangely fascinating. The `CH` class is a vestigial ghost of an alternative network protocol in a world that has long since settled on TCP/IP. It’s exciting, at least to me, to know that the last traces of Chaosnet still lurk out there in the infrastructure of our networked society. The `CH` DNS class is a fun artifact of digital archaeology. 
But it’s also a living reminder that the internet was not born fully formed, that TCP/IP is not the only way to connect computers to each other, and that “the internet” is far from the coolest name we could have had for our global communication system. - -If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][6] on Twitter or subscribe to the [RSS feed][7] to make sure you know when a new post is out. - -Previously on TwoBitHistory… - -> Where did RSS come from? Why are there so many competing formats? Why don't people seem to use it that much anymore? -> -> Answers to these questions and many more in this week's post about RSS: -> -> — TwoBitHistory (@TwoBitHistory) [September 17, 2018][8] - --------------------------------------------------------------------------------- - -via: https://twobithistory.org/2018/09/30/chaosnet.html - -作者:[Two-Bit History][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twobithistory.org -[b]: https://github.com/lujun9972 -[1]: https://en.wikipedia.org/wiki/Hesiod_(name_service) -[2]: https://en.wikipedia.org/wiki/Lightweight_Directory_Access_Protocol -[3]: http://www-formal.stanford.edu/jmc/recursive.pdf -[4]: https://en.wikipedia.org/wiki/SHRDLU -[5]: https://tools.ietf.org/html/rfc973 -[6]: https://twitter.com/TwoBitHistory -[7]: https://twobithistory.org/feed.xml -[8]: https://twitter.com/TwoBitHistory/status/1041485204802756608?ref_src=twsrc%5Etfw diff --git a/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md b/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md deleted file mode 100644 index efd2448e4d..0000000000 --- a/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md +++ /dev/null @@ -1,265 +0,0 @@ -Translating by jlztan - -Turn your book into a website and an ePub using Pandoc -====== -Write once, publish twice using Markdown and Pandoc. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) - -Pandoc is a command-line tool for converting files from one markup language to another. In my [introduction to Pandoc][1], I explained how to convert text written in Markdown into a website, a slideshow, and a PDF. - -In this follow-up article, I'll dive deeper into [Pandoc][2], showing how to produce a website and an ePub book from the same Markdown source file. I'll use my upcoming e-book, [GRASP Principles for the Object-Oriented Mind][3], which I created using this process, as an example. - -First I will explain the file structure used for the book, then how to use Pandoc to generate a website and deploy it in GitHub. Finally, I demonstrate how to generate its companion ePub book. - -You can find the code in my [Programming Fight Club][4] GitHub repository. - -### Setting up the writing structure - -I do all of my writing in Markdown syntax. You can also use HTML, but the more HTML you introduce the highest risk that problems arise when Pandoc converts Markdown to an ePub document. My books follow the one-chapter-per-file pattern. Declare chapters using the Markdown heading H1 ( **#** ). You can put more than one chapter in each file, but putting them in separate files makes it easier to find content and do updates later. 
- -The meta-information follows a similar pattern: each output format has its own meta-information file. Meta-information files define information about your documents, such as text to add to your HTML or the license of your ePub. I store all of my Markdown documents in a folder named parts (this is important for the Makefile that generates the website and ePub). As an example, let's take the table of contents, the preface, and the about chapters (divided into the files toc.md, preface.md, and about.md) and, for clarity, we will leave out the remaining chapters. - -My about file might begin like: - -``` -# About this book {-} - -## Who should read this book {-} - -Before creating a complex software system one needs to create a solid foundation. -General Responsibility Assignment Software Principles (GRASP) are guidelines to assign -responsibilities to software classes in object-oriented programming. -``` - -Once the chapters are finished, the next step is to add meta-information to setup the format for the website and the ePub. - -### Generating the website - -#### Create the HTML meta-information file - -The meta-information file (web-metadata.yaml) for my website is a simple YAML file that contains information about the author, title, rights, content for the **< head>** tag, and content for the beginning and end of the HTML file. - -I recommend (at minimum) including the following fields in the web-metadata.yaml file: - -``` ---- -title: GRASP principles for the Object-oriented mind -author: Kiko Fernandez-Reyes -rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International -header-includes: -- | -  \```{=html} -  -  \``` -include-before: -- | -  \```{=html} - 

                  If you like this book, please consider -      spreading the word or -      -        buying me a coffee -     

                  -  \``` -include-after: -- | -  ```{=html} - 
                  -   
                  -   
                  -        -   
                  -  \``` ---- -``` - -Some variables to note: - - * The **header-includes** variable contains HTML that will be embedded inside the **< head>** tag. - * The line after calling a variable must be **\- |**. The next line must begin with triple backquotes that are aligned with the **|** or Pandoc will reject it. **{=html}** tells Pandoc that this is raw text and should not be processed as Markdown. (For this to work, you need to check that the **raw_attribute** extension in Pandoc is enabled. To check, type **pandoc --list-extensions | grep raw** and make sure the returned list contains an item named **+raw_html** ; the plus sign indicates it is enabled.) - * The variable **include-before** adds some HTML at the beginning of your website, and I ask readers to consider spreading the word or buying me a coffee. - * The **include-after** variable appends raw HTML at the end of the website and shows my book's license. - - - -These are only some of the fields available; take a look at the template variables in HTML (my article [introduction to Pandoc][1] covered this for LaTeX but the process is the same for HTML) to learn about others. - -#### Split the website into chapters - -The website can be generated as a whole, resulting in a long page with all the content, or split into chapters, which I think is easier to read. I'll explain how to divide the website into chapters so the reader doesn't get intimidated by a long website. - -To make the website easy to deploy on GitHub Pages, we need to create a root folder called docs (which is the root folder that GitHub Pages uses by default to render a website). Then we need to create folders for each chapter under docs, place the HTML chapters in their own folders, and the file content in a file named index.html. - -For example, the about.md file is converted to a file named index.html that is placed in a folder named about (about/index.html). This way, when users type **http:// /about/**, the index.html file from the folder about will be displayed in their browser. - -The following Makefile does all of this: - -``` -# Your book files -DEPENDENCIES= toc preface about - -# Placement of your HTML files -DOCS=docs - -all: web - -web: setup $(DEPENDENCIES) -        @cp $(DOCS)/toc/index.html $(DOCS) - - -# Creation and copy of stylesheet and images into -# the assets folder. This is important to deploy the -# website to Github Pages. -setup: -        @mkdir -p $(DOCS) -        @cp -r assets $(DOCS) - - -# Creation of folder and index.html file on a -# per-chapter basis - -$(DEPENDENCIES): -        @mkdir -p $(DOCS)/$@ -        @pandoc -s --toc web-metadata.yaml parts/$@.md \ -        -c /assets/pandoc.css -o $(DOCS)/$@/index.html - -clean: -        @rm -rf $(DOCS) - -.PHONY: all clean web setup -``` - -The option **-c /assets/pandoc.css** declares which CSS stylesheet to use; it will be fetched from **/assets/pandoc.css**. In other words, inside the **< head>** HTML tag, Pandoc adds the following line: - -``` - -``` - -To generate the website, type: - -``` -make -``` - -The root folder should contain now the following structure and files: - -``` -.---parts -|    |--- toc.md -|    |--- preface.md -|    |--- about.md -| -|---docs -    |--- assets/ -    |--- index.html -    |--- toc -    |     |--- index.html -    | -    |--- preface -    |     |--- index.html -    | -    |--- about -          |--- index.html -    -``` - -#### Deploy the website - -To deploy the website on GitHub, follow these steps: - - 1. Create a new repository - 2. 
Push your content to the repository - 3. Go to the GitHub Pages section in the repository's Settings and select the option for GitHub to use the content from the Master branch - - - -You can get more details on the [GitHub Pages][5] site. - -Check out [my book's website][6], generated using this process, to see the result. - -### Generating the ePub book - -#### Create the ePub meta-information file - -The ePub meta-information file, epub-meta.yaml, is similar to the HTML meta-information file. The main difference is that ePub offers other template variables, such as **publisher** and **cover-image**. Your ePub book's stylesheet will probably differ from your website's; mine uses one named epub.css. - -``` ---- -title: 'GRASP principles for the Object-oriented Mind' -publisher: 'Programming Language Fight Club' -author: Kiko Fernandez-Reyes -rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International -cover-image: assets/cover.png -stylesheet: assets/epub.css -... -``` - -Add the following content to the previous Makefile: - -``` -epub: -        @pandoc -s --toc epub-meta.yaml \ -        $(addprefix parts/, $(DEPENDENCIES:=.md)) -o $(DOCS)/assets/book.epub -``` - -The command for the ePub target takes all the dependencies from the HTML version (your chapter names), appends to them the Markdown extension, and prepends them with the path to the folder chapters' so Pandoc knows how to process them. For example, if **$(DEPENDENCIES)** was only **preface about** , then the Makefile would call: - -``` -@pandoc -s --toc epub-meta.yaml \ -parts/preface.md parts/about.md -o $(DOCS)/assets/book.epub -``` - -Pandoc would take these two chapters, combine them, generate an ePub, and place the book under the Assets folder. - -Here's an [example][7] of an ePub created using this process. - -### Summarizing the process - -The process to create a website and an ePub from a Markdown file isn't difficult, but there are a lot of details. The following outline may make it easier for you to follow. 
- - * HTML book: - * Write chapters in Markdown - * Add metadata - * Create a Makefile to glue pieces together - * Set up GitHub Pages - * Deploy - * ePub book: - * Reuse chapters from previous work - * Add new metadata file - * Create a Makefile to glue pieces together - * Set up GitHub Pages - * Deploy - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc - -作者:[Kiko Fernandez-Reyes][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/kikofernandez -[1]: https://opensource.com/article/18/9/intro-pandoc -[2]: https://pandoc.org/ -[3]: https://www.programmingfightclub.com/ -[4]: https://github.com/kikofernandez/programmingfightclub -[5]: https://pages.github.com/ -[6]: https://www.programmingfightclub.com/grasp-principles/ -[7]: https://github.com/kikofernandez/programmingfightclub/raw/master/docs/web_assets/demo.epub diff --git a/sources/tech/20181003 Manage NTP with Chrony.md b/sources/tech/20181003 Manage NTP with Chrony.md new file mode 100644 index 0000000000..aaec88da26 --- /dev/null +++ b/sources/tech/20181003 Manage NTP with Chrony.md @@ -0,0 +1,291 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Manage NTP with Chrony) +[#]: via: (https://opensource.com/article/18/12/manage-ntp-chrony) +[#]: author: (David Both https://opensource.com/users/dboth) + +Manage NTP with Chrony +====== +Chronyd is a better choice for most networks than ntpd for keeping computers synchronized with the Network Time Protocol. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk) + +> "Does anybody really know what time it is? Does anybody really care?" +> – [Chicago][1], 1969 + +Perhaps that rock group didn't care what time it was, but our computers do need to know the exact time. Timekeeping is very important to computer networks. In banking, stock markets, and other financial businesses, transactions must be maintained in the proper order, and exact time sequences are critical for that. For sysadmins and DevOps professionals, it's easier to follow the trail of email through a series of servers or to determine the exact sequence of events using log files on geographically dispersed hosts when exact times are kept on the computers in question. + +I used to work at an organization that received over 20 million emails per day and had four servers just to accept and do a basic filter on the incoming flood of email. From there, emails were sent to one of four other servers to perform more complex anti-spam assessments, then they were delivered to one of several additional servers where the emails were placed in the correct inboxes. At each layer, the emails would be sent to one of the next-level servers, selected only by the randomness of round-robin DNS. Sometimes we had to trace a new message through the system until we could determine where it "got lost," according to the pointy-haired bosses. We had to do this with frightening regularity. + +Most of that email turned out to be spam. Some people actually complained that their [joke, cat pic, recipe, inspirational saying, or other-strange-email]-of-the-day was missing and asked us to find it. 
We did reject those opportunities. + +Our email and other transactional searches were aided by log entries with timestamps that—today—can resolve down to the nanosecond in even the slowest of modern Linux computers. In very high-volume transaction environments, even a few microseconds of difference in the system clocks can mean sorting thousands of transactions to find the correct one(s). + +### The NTP server hierarchy + +Computers worldwide use the [Network Time Protocol][2] (NTP) to synchronize their times with internet standard reference clocks via a hierarchy of NTP servers. The primary servers are at stratum 1, and they are connected directly to various national time services at stratum 0 via satellite, radio, or even modems over phone lines. The time service at stratum 0 may be an atomic clock, a radio receiver tuned to the signals broadcast by an atomic clock, or a GPS receiver using the highly accurate clock signals broadcast by GPS satellites. + +To prevent time requests from time servers lower in the hierarchy (i.e., with a higher stratum number) from overwhelming the primary reference servers, there are several thousand public NTP stratum 2 servers that are open and available for anyone to use. Many organizations with large numbers of hosts that need an NTP server will set up their own time servers so that only one local host accesses the stratum 2 time servers, then they configure the remaining network hosts to use the local time server which, in my case, is a stratum 3 server. + +### NTP choices + +The original NTP daemon, **ntpd** , has been joined by a newer one, **chronyd**. Both keep the local host's time synchronized with the time server. Both services are available, and I have seen nothing to indicate that this will change anytime soon. + +Chrony has features that make it the better choice for most environments for the following reasons: + + * Chrony can synchronize to the time server much faster than NTP. This is good for laptops or desktops that don't run constantly. + + * It can compensate for fluctuating clock frequencies, such as when a host hibernates or enters sleep mode, or when the clock speed varies due to frequency stepping that slows clock speeds when loads are low. + + * It handles intermittent network connections and bandwidth saturation. + + * It adjusts for network delays and latency. + + * After the initial time sync, Chrony never steps the clock. This ensures stable and consistent time intervals for system services and applications. + + * Chrony can work even without a network connection. In this case, the local host or server can be updated manually. + + + + +The NTP and Chrony RPM packages are available from standard Fedora repositories. You can install both and switch between them, but modern Fedora, CentOS, and RHEL releases have moved from NTP to Chrony as their default time-keeping implementation. I have found that Chrony works well, provides a better interface for the sysadmin, presents much more information, and increases control. + +Just to make it clear, NTP is a protocol that is implemented with either NTP or Chrony. If you'd like to know more, read this [comparison between NTP and Chrony][3] as implementations of the NTP protocol. + +This article explains how to configure Chrony clients and servers on a Fedora host, but the configuration for CentOS and RHEL current releases works the same. 
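If you want to follow along and are not sure whether Chrony is already installed and running on your machine, a quick check like the one below is enough. This is only a sketch for Fedora-family systems; it assumes the stock `chrony` package and `chronyd` service names:

```
# Minimal install-and-verify sketch (run as root; package and service names are the defaults)
dnf install chrony
systemctl enable --now chronyd
systemctl status chronyd
```

If ntpd was active before, stop and disable it first so the two services do not compete for control of the system clock.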
+ +### Chrony structure + +The Chrony daemon, **chronyd** , runs in the background and monitors the time and status of the time server specified in the **chrony.conf** file. If the local time needs to be adjusted, **chronyd** does it smoothly without the programmatic trauma that would occur if the clock were instantly reset to a new time. + +Chrony's **chronyc** tool allows someone to monitor the current status of Chrony and make changes if necessary. The **chronyc** utility can be used as a command that accepts subcommands, or it can be used as an interactive text-mode program. This article will explain both uses. + +### Client configuration + +The NTP client configuration is simple and requires little or no intervention. The NTP server can be defined during the Linux installation or provided by the DHCP server at boot time. The default **/etc/chrony.conf** file (shown below in its entirety) requires no intervention to work properly as a client. For Fedora, Chrony uses the Fedora NTP pool, and CentOS and RHEL have their own NTP server pools. Like many Red Hat-based distributions, the configuration file is well commented. + +``` +# Use public servers from the pool.ntp.org project. +# Please consider joining the pool (http://www.pool.ntp.org/join.html). +pool 2.fedora.pool.ntp.org iburst + +# Record the rate at which the system clock gains/losses time. +driftfile /var/lib/chrony/drift + +# Allow the system clock to be stepped in the first three updates +# if its offset is larger than 1 second. +makestep 1.0 3 + +# Enable kernel synchronization of the real-time clock (RTC). + + +# Enable hardware timestamping on all interfaces that support it. +#hwtimestamp * + +# Increase the minimum number of selectable sources required to adjust +# the system clock. +#minsources 2 + +# Allow NTP client access from local network. +#allow 192.168.0.0/16 + +# Serve time even if not synchronized to a time source. +#local stratum 10 + +# Specify file containing keys for NTP authentication. +keyfile /etc/chrony.keys + +# Get TAI-UTC offset and leap seconds from the system tz database. +leapsectz right/UTC + +# Specify directory for log files. +logdir /var/log/chrony + +# Select which information is logged. +#log measurements statistics tracking +``` + +Let's look at the current status of NTP on a virtual machine I use for testing. The **chronyc** command, when used with the **tracking** subcommand, provides statistics that report how far off the local system is from the reference server. + +``` +[root@studentvm1 ~]# chronyc tracking +Reference ID    : 23ABED4D (ec2-35-171-237-77.compute-1.amazonaws.com) +Stratum         : 3 +Ref time (UTC)  : Fri Nov 16 16:21:30 2018 +System time     : 0.000645622 seconds slow of NTP time +Last offset     : -0.000308577 seconds +RMS offset      : 0.000786140 seconds +Frequency       : 0.147 ppm slow +Residual freq   : -0.073 ppm +Skew            : 0.062 ppm +Root delay      : 0.041452706 seconds +Root dispersion : 0.022665167 seconds +Update interval : 1044.2 seconds +Leap status     : Normal +[root@studentvm1 ~]# +``` + +The Reference ID in the first line of the result is the server the host is synchronized to—in this case, a stratum 3 reference server that was last contacted by the host at 16:21:30 2018. The other lines are described in the [chronyc(1) man page][4]. + +The **sources** subcommand is also useful because it provides information about the time source configured in **chrony.conf**. 
+ +``` +[root@studentvm1 ~]# chronyc sources +210 Number of sources = 5 +MS Name/IP address         Stratum Poll Reach LastRx Last sample               +=============================================================================== +^+ 192.168.0.51                  3   6   377     0  -2613us[-2613us] +/-   63ms +^+ dev.smatwebdesign.com         3  10   377   28m  -2961us[-3534us] +/-  113ms +^+ propjet.latt.net              2  10   377   465  -1097us[-1085us] +/-   77ms +^* ec2-35-171-237-77.comput>     2  10   377    83  +2388us[+2395us] +/-   95ms +^+ PBX.cytranet.net              3  10   377   507  -1602us[-1589us] +/-   96ms +[root@studentvm1 ~]# +``` + +The first source in the list is the time server I set up for my personal network. The others were provided by the pool. Even though my NTP server doesn't appear in the Chrony configuration file above, my DHCP server provides its IP address for the NTP server. The "S" column—Source State—indicates with an asterisk ( ***** ) the server our host is synced to. This is consistent with the data from the **tracking** subcommand. + +The **-v** option provides a nice description of the fields in this output. + +``` +[root@studentvm1 ~]# chronyc sources -v +210 Number of sources = 5 + +  .-- Source mode  '^' = server, '=' = peer, '#' = local clock. + / .- Source state '*' = current synced, '+' = combined , '-' = not combined, +| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable. +||                                                 .- xxxx [ yyyy ] +/- zzzz +||      Reachability register (octal) -.           |  xxxx = adjusted offset, +||      Log2(Polling interval) --.      |          |  yyyy = measured offset, +||                                \     |          |  zzzz = estimated error. +||                                 |    |           \ +MS Name/IP address         Stratum Poll Reach LastRx Last sample               +=============================================================================== +^+ 192.168.0.51                  3   7   377    28  -2156us[-2156us] +/-   63ms +^+ triton.ellipse.net            2  10   377    24  +5716us[+5716us] +/-   62ms +^+ lithium.constant.com          2  10   377   351   -820us[ -820us] +/-   64ms +^* t2.time.bf1.yahoo.com         2  10   377   453   -992us[ -965us] +/-   46ms +^- ntp.idealab.com               2  10   377   799  +3653us[+3674us] +/-   87ms +[root@studentvm1 ~]# +``` + +If I wanted my server to be the preferred reference time source for this host, I would add the line below to the **/etc/chrony.conf** file. + +``` +server 192.168.0.51 iburst prefer +``` + +I usually place this line just above the first pool server statement near the top of the file. There is no special reason for this, except I like to keep the server statements together. It would work just as well at the bottom of the file, and I have done that on several hosts. This configuration file is not sequence-sensitive. + +The **prefer** option marks this as the preferred reference source. As such, this host will always be synchronized with this reference source (as long as it is available). We can also use the fully qualified hostname for a remote reference server or the hostname only (without the domain name) for a local reference time source as long as the search statement is set in the **/etc/resolv.conf** file. I prefer the IP address to ensure that the time source is accessible even if DNS is not working. 
In most environments, the server name is probably the better option, because NTP will continue to work even if the server's IP address changes. + +If you don't have a specific reference source you want to synchronize to, it is fine to use the defaults. + +### Configuring an NTP server with Chrony + +The nice thing about the Chrony configuration file is that this single file configures the host as both a client and a server. To add a server function to our host—it will always be a client, obtaining its time from a reference server—we just need to make a couple of changes to the Chrony configuration, then configure the host's firewall to accept NTP requests. + +Open the **/etc/ ** **chrony****.conf** file in your favorite text editor and uncomment the **local stratum 10** line. This enables the Chrony NTP server to continue to act as if it were connected to a remote reference server if the internet connection fails; this enables the host to continue to be an NTP server to other hosts on the local network. + +Let's restart **chronyd** and track how the service is working for a few minutes. Before we enable our host as an NTP server, we want to test a bit. + +``` +[root@studentvm1 ~]# systemctl restart chronyd ; watch chronyc tracking +``` + +The results should look like this. The **watch** command runs the **chronyc tracking** command every two seconds so we can watch changes occur over time. + +``` +Every 2.0s: chronyc tracking                                           studentvm1: Fri Nov 16 20:59:31 2018 + +Reference ID    : C0A80033 (192.168.0.51) +Stratum         : 4 +Ref time (UTC)  : Sat Nov 17 01:58:51 2018 +System time     : 0.001598277 seconds fast of NTP time +Last offset     : +0.001791533 seconds +RMS offset      : 0.001791533 seconds +Frequency       : 0.546 ppm slow +Residual freq   : -0.175 ppm +Skew            : 0.168 ppm +Root delay      : 0.094823152 seconds +Root dispersion : 0.021242738 seconds +Update interval : 65.0 seconds +Leap status     : Normal +``` + +Notice that my NTP server, the **studentvm1** host, synchronizes to the host at 192.168.0.51, which is my internal network NTP server, at stratum 4. Synchronizing directly to the Fedora pool machines would result in synchronization at stratum 3. Notice also that the amount of error decreases over time. Eventually, it should stabilize with a tiny variation around a fairly small range of error. The size of the error depends upon the stratum and other network factors. After a few minutes, use Ctrl+C to break out of the watch loop. + +To turn our host into an NTP server, we need to allow it to listen on the local network. Uncomment the following line to allow hosts on the local network to access our NTP server. + +``` +# Allow NTP client access from local network. +allow 192.168.0.0/16 +``` + +Note that the server can listen for requests on any local network it's attached to. The IP address in the "allow" line is just intended for illustrative purposes. Be sure to change the IP network and subnet mask in that line to match your local network's. + +Restart **chronyd**. + +``` +[root@studentvm1 ~]# systemctl restart chronyd +``` + +To allow other hosts on your network to access this server, configure the firewall to allow inbound UDP packets on port 123. Check your firewall's documentation to find out how to do that. + +### Testing + +Your host is now an NTP server. You can test it with another host or a VM that has access to the network on which the NTP server is listening. 
Configure the client to use the new NTP server as the preferred server in the **/etc/chrony.conf** file, then monitor that client using the **chronyc** tools we used above. + +### Chronyc as an interactive tool + +As I mentioned earlier, **chronyc** can be used as an interactive command tool. Simply run the command without a subcommand and you get a **chronyc** command prompt. + +``` +[root@studentvm1 ~]# chronyc +chrony version 3.4 +Copyright (C) 1997-2003, 2007, 2009-2018 Richard P. Curnow and others +chrony comes with ABSOLUTELY NO WARRANTY.  This is free software, and +you are welcome to redistribute it under certain conditions.  See the +GNU General Public License version 2 for details. + +chronyc> +``` + +You can enter just the subcommands at this prompt. Try using the **tracking** , **ntpdata** , and **sources** commands. The **chronyc** command line allows command recall and editing for **chronyc** subcommands. You can use the **help** subcommand to get a list of possible commands and their syntax. + +### Conclusion + +Chrony is a powerful tool for synchronizing the times of client hosts, whether they are all on the local network or scattered around the globe. It's easy to configure because, despite the large number of options available, only a few configurations are required for most circumstances. + +After my client computers have synchronized with the NTP server, I like to set the system hardware clock from the system (OS) time by using the following command: + +``` +/sbin/hwclock --systohc +``` + +This command can be added as a cron job or a script in **cron.daily** to keep the hardware clock synced with the system time. + +Chrony and NTP (the service) both use the same configuration, and the files' contents are interchangeable. The man pages for [chronyd][5], [chronyc][4], and [chrony.conf][6] contain an amazing amount of information that can help you get started or learn about esoteric configuration options. + +Do you run your own NTP server? Let us know in the comments and be sure to tell us which implementation you are using, NTP or Chrony. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/manage-ntp-chrony + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Does_Anybody_Really_Know_What_Time_It_Is%3F +[2]: https://en.wikipedia.org/wiki/Network_Time_Protocol +[3]: https://chrony.tuxfamily.org/comparison.html +[4]: https://linux.die.net/man/1/chronyc +[5]: https://linux.die.net/man/8/chronyd +[6]: https://linux.die.net/man/5/chrony.conf diff --git a/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md b/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md deleted file mode 100644 index e45d96470f..0000000000 --- a/sources/tech/20181003 Oomox - Customize And Create Your Own GTK2, GTK3 Themes.md +++ /dev/null @@ -1,128 +0,0 @@ -Oomox – Customize And Create Your Own GTK2, GTK3 Themes -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-720x340.png) - -Theming and Visual customization is one of the main advantages of Linux. 
Since all the code is open, you can change how your Linux system looks and behaves to a greater degree than you ever could with Windows/Mac OS. GTK theming is perhaps the most popular way in which people customize their Linux desktops. The GTK toolkit is used by a wide variety of desktop environments like Gnome, Cinnamon, Unity, XFCE, and budgie. This means that a single theme made for GTK can be applied to any of these Desktop Environments with little changes. - -There are a lot of very high quality popular GTK themes out there, such as **Arc** , **Numix** , and **Adapta**. But if you want to customize these themes and create your own visual design, you can use **Oomox**. - -The Oomox is a graphical app for customizing and creating your own GTK theme complete with your own color, icon and terminal style. It comes with several presets, which you can apply on a Numix, Arc, or Materia style theme to create your own GTK theme. - -### Installing Oomox - -On Arch Linux and its variants: - -Oomox is available on [**AUR**][1], so you can install it using any AUR helper programs like [**Yay**][2]. - -``` -$ yay -S oomox - -``` - -On Debian/Ubuntu/Linux Mint, download `oomox.deb`package from [**here**][3] and install it as shown below. As of writing this guide, the latest version was **oomox_1.7.0.5.deb**. - -``` -$ sudo dpkg -i oomox_1.7.0.5.deb -$ sudo apt install -f - -``` - -On Fedora, Oomox is available in third-party **COPR** repository. - -``` -$ sudo dnf copr enable tcg/themes -$ sudo dnf install oomox - -``` - -Oomox is also available as a [**Flatpak app**][4]. Make sure you have installed Flatpak as described in [**this guide**][5]. And then, install and run Oomox using the following commands: - -``` -$ flatpak install flathub com.github.themix_project.Oomox - -$ flatpak run com.github.themix_project.Oomox - -``` - -For other Linux distributions, go to the Oomox project page (Link is given at the end of this guide) on Github and compile and install it manually from source. - -### Customize And Create Your Own GTK2, GTK3 Themes - -**Theme Customization** - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-1-1.png) - -You can change the colour of practically every UI element, like: - - 1. Headers - 2. Buttons - 3. Buttons inside Headers - 4. Menus - 5. Selected Text - - - -To the left, there are a number of presets, like the Cars theme, modern themes like Materia, and Numix, and retro themes. Then, at the top of the main window, there’s an option called **Theme Style** , that lets you set the overall visual style of the theme. You can choose from between Numix, Arc, and Materia. - -With certain styles like Numix, you can even change things like the Header Gradient, Outline Width and Panel Opacity. You can also add a Dark Mode for your theme that will be automatically created from the default theme. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-2.png) - -**Iconset Customization** - -You can customize the iconset that will be used for the theme icons. There are 2 options – Gnome Colors and Archdroid. You an change the base, and stroke colours of the iconset. - -**Terminal Customization** - -You can also customize the terminal colours. The app has several presets for this, but you can customize the exact colour code for each colour value like red, green,black, and so on. You can also auto swap the foreground and background colours. - -**Spotify Theme** - -A unique feature this app has is that you can theme the spotify app to your liking. 
You can change the foreground, background, and accent color of the spotify app to match the overall GTK theme. - -Then, just press the **Apply Spotify Theme** button, and you’ll get this window: - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-3.png) - -Just hit apply, and you’re done. - -**Exporting your Theme** - -Once you’re done customizing the theme to your liking, you can rename it by clicking the rename button at the top left: - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-4.png) - -And then, just hit **Export Theme** to export the theme to your system. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Oomox-5.png) - -You can also just export just the Iconset or the terminal theme. - -After this, you can open any Visual Customization app for your Desktop Environment, like Tweaks for Gnome based DEs, or the **XFCE Appearance Settings** , and select your exported GTK and Shell theme. - -### Verdict - -If you are a Linux theme junkie, and you know exactly how each button, how each header in your system should look like, Oomox is worth a look. For extreme customizers, it lets you change virtually everything about how your system looks. For people who just want to tweak an existing theme a little bit, it has many, many presets so you can get what you want without a lot of effort. - -Have you tried it? What are your thoughts on Oomox? Put them in the comments below! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/oomox-customize-and-create-your-own-gtk2-gtk3-themes/ - -作者:[EDITOR][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/editor/ -[1]: https://aur.archlinux.org/packages/oomox/ -[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[3]: https://github.com/themix-project/oomox/releases -[4]: https://flathub.org/apps/details/com.github.themix_project.Oomox -[5]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/ diff --git a/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md b/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md index 987809aa0d..beb3bab797 100644 --- a/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md +++ b/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md @@ -1,5 +1,3 @@ -### translating by way-ww - 4 Must-Have Tools for Monitoring Linux ====== diff --git a/sources/tech/20181004 Archiving web sites.md b/sources/tech/20181004 Archiving web sites.md deleted file mode 100644 index 5b7f41b689..0000000000 --- a/sources/tech/20181004 Archiving web sites.md +++ /dev/null @@ -1,121 +0,0 @@ -fuowang 翻译中 - -Archiving web sites -====== - -I recently took a deep dive into web site archival for friends who were worried about losing control over the hosting of their work online in the face of poor system administration or hostile removal. This makes web site archival an essential instrument in the toolbox of any system administrator. As it turns out, some sites are much harder to archive than others. This article goes through the process of archiving traditional web sites and shows how it falls short when confronted with the latest fashions in the single-page applications that are bloating the modern web. 
- -### Converting simple sites - -The days of handcrafted HTML web sites are long gone. Now web sites are dynamic and built on the fly using the latest JavaScript, PHP, or Python framework. As a result, the sites are more fragile: a database crash, spurious upgrade, or unpatched vulnerability might lose data. In my previous life as web developer, I had to come to terms with the idea that customers expect web sites to basically work forever. This expectation matches poorly with "move fast and break things" attitude of web development. Working with the [Drupal][2] content-management system (CMS) was particularly challenging in that regard as major upgrades deliberately break compatibility with third-party modules, which implies a costly upgrade process that clients could seldom afford. The solution was to archive those sites: take a living, dynamic web site and turn it into plain HTML files that any web server can serve forever. This process is useful for your own dynamic sites but also for third-party sites that are outside of your control and you might want to safeguard. - -For simple or static sites, the venerable [Wget][3] program works well. The incantation to mirror a full web site, however, is byzantine: - -``` - $ nice wget --mirror --execute robots=off --no-verbose --convert-links \ - --backup-converted --page-requisites --adjust-extension \ - --base=./ --directory-prefix=./ --span-hosts \ - --domains=www.example.com,example.com http://www.example.com/ - -``` - -The above downloads the content of the web page, but also crawls everything within the specified domains. Before you run this against your favorite site, consider the impact such a crawl might have on the site. The above command line deliberately ignores [`robots.txt`][] rules, as is now [common practice for archivists][4], and hammer the website as fast as it can. Most crawlers have options to pause between hits and limit bandwidth usage to avoid overwhelming the target site. - -The above command will also fetch "page requisites" like style sheets (CSS), images, and scripts. The downloaded page contents are modified so that links point to the local copy as well. Any web server can host the resulting file set, which results in a static copy of the original web site. - -That is, when things go well. Anyone who has ever worked with a computer knows that things seldom go according to plan; all sorts of things can make the procedure derail in interesting ways. For example, it was trendy for a while to have calendar blocks in web sites. A CMS would generate those on the fly and make crawlers go into an infinite loop trying to retrieve all of the pages. Crafty archivers can resort to regular expressions (e.g. Wget has a `--reject-regex` option) to ignore problematic resources. Another option, if the administration interface for the web site is accessible, is to disable calendars, login forms, comment forms, and other dynamic areas. Once the site becomes static, those will stop working anyway, so it makes sense to remove such clutter from the original site as well. - -### JavaScript doom - -Unfortunately, some web sites are built with much more than pure HTML. In single-page sites, for example, the web browser builds the content itself by executing a small JavaScript program. A simple user agent like Wget will struggle to reconstruct a meaningful static copy of those sites as it does not support JavaScript at all. 
In theory, web sites should be using [progressive enhancement][5] to have content and functionality available without JavaScript but those directives are rarely followed, as anyone using plugins like [NoScript][6] or [uMatrix][7] will confirm. - -Traditional archival methods sometimes fail in the dumbest way. When trying to build an offsite backup of a local newspaper ([pamplemousse.ca][8]), I found that WordPress adds query strings (e.g. `?ver=1.12.4`) at the end of JavaScript includes. This confuses content-type detection in the web servers that serve the archive, which rely on the file extension to send the right `Content-Type` header. When such an archive is loaded in a web browser, it fails to load scripts, which breaks dynamic websites. - -As the web moves toward using the browser as a virtual machine to run arbitrary code, archival methods relying on pure HTML parsing need to adapt. The solution for such problems is to record (and replay) the HTTP headers delivered by the server during the crawl and indeed professional archivists use just such an approach. - -### Creating and displaying WARC files - -At the [Internet Archive][9], Brewster Kahle and Mike Burner designed the [ARC][10] (for "ARChive") file format in 1996 to provide a way to aggregate the millions of small files produced by their archival efforts. The format was eventually standardized as the WARC ("Web ARChive") [specification][11] that was released as an ISO standard in 2009 and revised in 2017. The standardization effort was led by the [International Internet Preservation Consortium][12] (IIPC), which is an "international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future", according to Wikipedia; it includes members such as the US Library of Congress and the Internet Archive. The latter uses the WARC format internally in its Java-based [Heritrix crawler][13]. - -A WARC file aggregates multiple resources like HTTP headers, file contents, and other metadata in a single compressed archive. Conveniently, Wget actually supports the file format with the `--warc` parameter. Unfortunately, web browsers cannot render WARC files directly, so a viewer or some conversion is necessary to access the archive. The simplest such viewer I have found is [pywb][14], a Python package that runs a simple webserver to offer a Wayback-Machine-like interface to browse the contents of WARC files. The following set of commands will render a WARC file on `http://localhost:8080/`: - -``` - $ pip install pywb - $ wb-manager init example - $ wb-manager add example crawl.warc.gz - $ wayback - -``` - -This tool was, incidentally, built by the folks behind the [Webrecorder][15] service, which can use a web browser to save dynamic page contents. - -Unfortunately, pywb has trouble loading WARC files generated by Wget because it [followed][16] an [inconsistency in the 1.0 specification][17], which was [fixed in the 1.1 specification][18]. Until Wget or pywb fix those problems, WARC files produced by Wget are not reliable enough for my uses, so I have looked at other alternatives. A crawler that got my attention is simply called [crawl][19]. Here is how it is invoked: - -``` - $ crawl https://example.com/ - -``` - -(It does say "very simple" in the README.) The program does support some command-line options, but most of its defaults are sane: it will fetch page requirements from other domains (unless the `-exclude-related` flag is used), but does not recurse out of the domain. 
By default, it fires up ten parallel connections to the remote site, a setting that can be changed with the `-c` flag. But, best of all, the resulting WARC files load perfectly in pywb. - -### Future work and alternatives - -There are plenty more [resources][20] for using WARC files. In particular, there's a Wget drop-in replacement called [Wpull][21] that is specifically designed for archiving web sites. It has experimental support for [PhantomJS][22] and [youtube-dl][23] integration that should allow downloading more complex JavaScript sites and streaming multimedia, respectively. The software is the basis for an elaborate archival tool called [ArchiveBot][24], which is used by the "loose collective of rogue archivists, programmers, writers and loudmouths" at [ArchiveTeam][25] in its struggle to "save the history before it's lost forever". It seems that PhantomJS integration does not work as well as the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools to mirror more complex sites. For example, [snscrape][26] will crawl a social media profile to generate a list of pages to send into ArchiveBot. Another tool the team employs is [crocoite][27], which uses the Chrome browser in headless mode to archive JavaScript-heavy sites. - -This article would also not be complete without a nod to the [HTTrack][28] project, the "website copier". Working similarly to Wget, HTTrack creates local copies of remote web sites but unfortunately does not support WARC output. Its interactive aspects might be of more interest to novice users unfamiliar with the command line. - -In the same vein, during my research I found a full rewrite of Wget called [Wget2][29] that has support for multi-threaded operation, which might make it faster than its predecessor. It is [missing some features][30] from Wget, however, most notably reject patterns, WARC output, and FTP support but adds RSS, DNS caching, and improved TLS support. - -Finally, my personal dream for these kinds of tools would be to have them integrated with my existing bookmark system. I currently keep interesting links in [Wallabag][31], a self-hosted "read it later" service designed as a free-software alternative to [Pocket][32] (now owned by Mozilla). But Wallabag, by design, creates only a "readable" version of the article instead of a full copy. In some cases, the "readable version" is actually [unreadable][33] and Wallabag sometimes [fails to parse the article][34]. Instead, other tools like [bookmark-archiver][35] or [reminiscence][36] save a screenshot of the page along with full HTML but, unfortunately, no WARC file that would allow an even more faithful replay. - -The sad truth of my experiences with mirrors and archival is that data dies. Fortunately, amateur archivists have tools at their disposal to keep interesting content alive online. For those who do not want to go through that trouble, the Internet Archive seems to be here to stay and Archive Team is obviously [working on a backup of the Internet Archive itself][37]. 
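
Finally, if you want to experiment with the WARC workflow described above using nothing but Wget, note that a mirroring invocation like the one at the beginning of this article can also write a WARC while it crawls. The command below is only a sketch for reference: the host names are placeholders, and in current Wget releases the relevant option is spelled `--warc-file` (it takes the output name without the `.warc.gz` extension).

```
$ wget --recursive --level=inf --page-requisites --adjust-extension \
      --span-hosts --warc-file=example.com \
      --domains=www.example.com,example.com https://www.example.com/
```

Keep in mind the caveat above, though: pywb may still have trouble replaying Wget-produced WARCs until the specification mismatch between the two projects is resolved.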
- --------------------------------------------------------------------------------- - -via: https://anarc.at/blog/2018-10-04-archiving-web-sites/ - -作者:[Anarcat][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://anarc.at -[1]: https://anarc.at/blog -[2]: https://drupal.org -[3]: https://www.gnu.org/software/wget/ -[4]: https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/ -[5]: https://en.wikipedia.org/wiki/Progressive_enhancement -[6]: https://noscript.net/ -[7]: https://github.com/gorhill/uMatrix -[8]: https://pamplemousse.ca/ -[9]: https://archive.org -[10]: http://www.archive.org/web/researcher/ArcFileFormat.php -[11]: https://iipc.github.io/warc-specifications/ -[12]: https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium -[13]: https://github.com/internetarchive/heritrix3/wiki -[14]: https://github.com/webrecorder/pywb -[15]: https://webrecorder.io/ -[16]: https://github.com/webrecorder/pywb/issues/294 -[17]: https://github.com/iipc/warc-specifications/issues/23 -[18]: https://github.com/iipc/warc-specifications/pull/24 -[19]: https://git.autistici.org/ale/crawl/ -[20]: https://archiveteam.org/index.php?title=The_WARC_Ecosystem -[21]: https://github.com/chfoo/wpull -[22]: http://phantomjs.org/ -[23]: http://rg3.github.io/youtube-dl/ -[24]: https://www.archiveteam.org/index.php?title=ArchiveBot -[25]: https://archiveteam.org/ -[26]: https://github.com/JustAnotherArchivist/snscrape -[27]: https://github.com/PromyLOPh/crocoite -[28]: http://www.httrack.com/ -[29]: https://gitlab.com/gnuwget/wget2 -[30]: https://gitlab.com/gnuwget/wget2/wikis/home -[31]: https://wallabag.org/ -[32]: https://getpocket.com/ -[33]: https://github.com/wallabag/wallabag/issues/2825 -[34]: https://github.com/wallabag/wallabag/issues/2914 -[35]: https://pirate.github.io/bookmark-archiver/ -[36]: https://github.com/kanishka-linux/reminiscence -[37]: http://iabak.archiveteam.org diff --git a/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md b/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md deleted file mode 100644 index 691600a4cc..0000000000 --- a/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md +++ /dev/null @@ -1,133 +0,0 @@ -Dbxfs – Mount Dropbox Folder Locally As Virtual File System In Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/dbxfs-720x340.png) - -A while ago, we summarized all the possible ways to **[mount Google drive locally][1]** as a virtual file system and access the files stored in the google drive from your Linux operating system. Today, we are going to learn to mount Dropbox folder in your local file system using **dbxfs** utility. The dbxfs is used to mount your Dropbox folder locally as a virtual filesystem in Unix-like operating systems. While it is easy to [**install Dropbox client**][2] in Linux, this approach slightly differs from the official method. It is a command line dropbox client and requires no disk space for access. The dbxfs application is free, open source and written for Python 3.5+. - -### Installing dbxfs - -The dbxfs officially supports Linux and Mac OS. 
However, it should work on any POSIX system that provides a **FUSE-compatible library** or has the ability to mount **SMB** shares. Since it is written for Python 3.5, it can installed using **pip3** package manager. Refer the following guide if you haven’t installed PIP yet. - -And, install FUSE library as well. - -On Debian-based systems, run the following command to install FUSE: - -``` -$ sudo apt install libfuse2 - -``` - -On Fedora: - -``` -$ sudo dnf install fuse - -``` - -Once you installed all required dependencies, run the following command to install dbxfs utility: - -``` -$ pip3 install dbxfs - -``` - -### Mount Dropbox folder locally - -Create a mount point to mount your dropbox folder in your local file system. - -``` -$ mkdir ~/mydropbox - -``` - -Then, mount the dropbox folder locally using dbxfs utility as shown below: - -``` -$ dbxfs ~/mydropbox - -``` - -You will be asked to generate an access token: - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-1.png) - -To generate an access token, just navigate to the URL given in the above output from your web browser and click **Allow** to authenticate Dropbox access. You need to log in to your dropbox account to complete authorization process. - -A new authorization code will be generated in the next screen. Copy the code and head back to your Terminal and paste it into cli-dbxfs prompt to finish the process. - -You will be then asked to save the credentials for future access. Type **Y** or **N** whether you want to save or decline. And then, you need to enter a passphrase twice for the new access token. - -Finally, click **Y** to accept **“/home/username/mydropbox”** as the default mount point. If you want to set different path, type **N** and enter the location of your choice. - -[![Generate access token 2][3]][4] - -All done! From now on, you can see your Dropbox folder is locally mounted in your filesystem. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Dropbox-in-file-manager.png) - -### Change Access Token Storage Path - -By default, the dbxfs application will store your Dropbox access token in the system keyring or an encrypted file. However, you might want to store it in a **gpg** encrypted file or something else. If so, get an access token by creating a personal app on the [Dropbox developers app console][5]. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/access-token.png) - -Once the app is created, click **Generate** button in the next button. This access token can be used to access your Dropbox account via the API. Don’t share your access token with anyone. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-a-new-app.png) - -Once you created an access token, encrypt it using any encryption tools of your choice, such as [**Cryptomater**][6], [**Cryptkeeper**][7], [**CryptGo**][8], [**Cryptr**][9], [**Tomb**][10], [**Toplip**][11] and [**GnuPG**][12] etc., and store it in your preferred location. - -Next edit the dbxfs configuration file and add the following line in it: - -``` -"access_token_command": ["gpg", "--decrypt", "/path/to/access/token/file.gpg"] - -``` - -You can find the dbxfs configuration file by running the following command: - -``` -$ dbxfs --print-default-config-file - -``` - -For more details, refer dbxfs help section: - -``` -$ dbxfs -h - -``` - -As you can see, mounting Dropfox folder locally in your file system using Dbxfs utility is no big deal. As far tested, dbxfs just works fine as expected. 
Give it a try if you’re interested to see how it works and let us know about your experience in the comment section below. - -And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/dbxfs-mount-dropbox-folder-locally-as-virtual-file-system-in-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/ -[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ -[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-2.png -[5]: https://dropbox.com/developers/apps -[6]: https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/ -[7]: https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/ -[8]: https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/ -[9]: https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/ -[10]: https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/ -[11]: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/ -[12]: https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/ diff --git a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md index 71adf0112b..27616a9f6e 100644 --- a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md +++ b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md @@ -1,5 +1,3 @@ -translating by cyleft - Taking notes with Laverna, a web-based information organizer ====== diff --git a/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md b/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md index f2885b177c..2fd085eda0 100644 --- a/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md +++ b/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md @@ -1,4 +1,5 @@ -translating by leemeans +Translating by Jamskr + Exploring the Linux kernel: The secrets of Kconfig/kbuild ====== Dive into understanding how the Linux config/build system works. diff --git a/sources/tech/20181022 Improve login security with challenge-response authentication.md b/sources/tech/20181022 Improve login security with challenge-response authentication.md deleted file mode 100644 index 66ed2534b0..0000000000 --- a/sources/tech/20181022 Improve login security with challenge-response authentication.md +++ /dev/null @@ -1,183 +0,0 @@ -Improve login security with challenge-response authentication -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/10/challenge-response-816x345.png) - -### Introduction - -Today, Fedora offers multiple ways to improve the secure authentication of our user accounts. Of course it has the familiar user name and password to login. 
It also offers additional authentication options such as biometric, fingerprint, smart card, one-time password, and even challenge-response authentication. - -Each authentication method has clear pros and cons. That, in itself, could be a topic for a rather lengthy article. Fedora Magazine has covered a few of these options previously: - - -+ [Using the YubiKey4 with Fedora][1] -+ [Fedora 28: Better smart card support in OpenSSH][2] - - -One of the most secure methods in modern Fedora releases is offline hardware challenge-response. It’s also one of the easiest to deploy. Here’s how. - -### Challenge-response authentication - -Technically, when you provide a password, you’re responding to a user name challenge. The offline challenge response covered here requires your user name first. Next, Fedora challenges you to provide an encrypted physical hardware token. The token responds to the challenge with another encrypted key it stores via the Pluggable Authentication Modules (PAM) framework. Finally, Fedora prompts you for the password. This prevents someone from just using a found hardware token, or just using a user name and password without the correct encrypted key. - -This means that in addition to your user name and password, you must have previously registered one or more encrypted hardware tokens with the OS. And you have to provide that physical hardware token to be able to authenticate with your user name. - -Some challenge-response methods, like one time passwords (OTP), take an encrypted code key on the hardware token, and pass that key across the network to a remote authentication server. The server then tells Fedora’s PAM framework if it’s is a valid token for that user name. This is great if the authentication server(s) are on the local network. The downside is if the network connection is down or you’re working remote without a network connection, you can’t use this remote authentication method. You could be locked out of the system until you can connect through the network to the server. - -Sometimes a workplace requires use of Yubikey One Time Passwords (OTP) configuration. However, on home or personal systems you may prefer a local challenge-response configuration. Everything is local, and the method requires no remote network calls. The following process works on Fedora 27, 28, and 29. - -### Preparation - -#### Hardware token keys - -First you need a secure hardware token key. Specifically, this process requires a Yubikey 4, Yubikey NEO, or a recently released Yubikey 5 series device which also supports FIDO2. You should purchase two of them to provide a backup in case one becomes lost or damaged. You can use these keys on numerous workstations. The simpler FIDO or FIDO U2F only versions don’t work for this process, but are great for online services that use FIDO. - -#### Backup, backup, and backup - -Next, make a backup of all your important data. You may want to test the configuration in a Fedora 27/28/29 cloned VM to make sure you understand the process before setting up your personal workstation. - -#### Updating and installing - -Now make sure Fedora is up to date. Then install the required Fedora Yubikey packages via these dnf commands: - -``` -$ sudo dnf upgrade -$ sudo dnf install ykclient* ykpers* pam_yubico* -$ cd -``` - -If you’re in a VM environment, such as Virtual Box, make sure the Yubikey device is inserted in a USB port, and enable USB access to the Yubikey in the VM control. 
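
Before configuring anything, it is worth confirming that the operating system (or the virtual machine guest) can actually see the key on the USB bus; a missed USB passthrough setting is a common source of confusion. The standard lsusb tool is a quick check. The exact ID and description string vary by YubiKey model, so the output shown here is only an example:

```
$ lsusb | grep -i yubico
Bus 001 Device 004: ID 1050:0407 Yubico.com Yubikey 4 OTP+U2F+CCID
```

If nothing is listed, fix the USB connection or the VM's USB filter before continuing.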
- -### Configuring Yubikey - -Verify that your user account has access to the USB Yubikey: - -``` -$ ykinfo -v -version: 3.5.0 -``` - -If the YubiKey is not detected, the following error message appears: - -``` -Yubikey core error: no yubikey present -``` - -Next, initialize each of your new Yubikeys with the following ykpersonalize command. This sets up the Yubikey configuration slot 2 with a Challenge Response using the HMAC-SHA1 algorithm, even with less than 64 characters. If you have already setup your Yubikeys for challenge-response, you don’t need to run ykpersonalize again. - -``` -ykpersonalize -2 -ochal-resp -ochal-hmac -ohmac-lt64 -oserial-api-visible -``` - -Some users leave the YubiKey in their workstation while using it, and even use challenge-response for virtual machines. However, for more security you may prefer to manually trigger the Yubikey to respond to challenge. - -To add that manual challenge button trigger, add the -ochal-btn-trig flag. This flag causes the Yubikey to flash the yubikey LED on a request. It waits for you to press the button on the hardware key area within 15 seconds to produce the response key. - -``` -$ ykpersonalize -2 -ochal-resp -ochal-hmac -ohmac-lt64 -ochal-btn-trig -oserial-api-visible -``` - -Do this for each of your new hardware keys, only once per key. Once you have programmed your keys, store the Yubikey configuration to ~/.yubico with the following command: - -``` -$ ykpamcfg -2 -v -debug: util.c:222 (check_firmware_version): YubiKey Firmware version: 4.3.4 - -Sending 63 bytes HMAC challenge to slot 2 -Sending 63 bytes HMAC challenge to slot 2 -Stored initial challenge and expected response in '/home/chuckfinley/.yubico/challenge-9992567'. -``` - -If you are setting up multiple keys for backup purposes, configure all the keys the same, and store each key’s challenge-response using the ykpamcfg utility. If you run the command ykpersonalize on an existing registered key, you must store the configuration again. - -### Configuring /etc/pam.d/sudo - -Now to verify this configuration worked, **in the same terminal window** you’ll setup sudo to require the use of the Yubikey challenge-response. Insert the following line into the /etc/pam.d/sudo file: - -``` -auth required pam_yubico.so mode=challenge-response -``` - -Insert the above auth line into the file above the auth include system-auth line. Then save the file and exit the editor. In a default Fedora 29 setup, /etc/pam.d/sudo should now look like this: - -``` -#%PAM-1.0 -auth required pam_yubico.so mode=challenge-response -auth include system-auth -account include system-auth -password include system-auth -session optional pam_keyinit.so revoke -session required pam_limits.so -session include system-auth -``` - -**Keep this original terminal window open** , and test by opening another new terminal window. In the new terminal window type: - -``` -$ sudo echo testing -``` - -You should notice the LED blinking on the key. Tap the Yubikey button and you should see a prompt for your sudo password. After you enter your password, you should see “testing” echoed in the terminal screen. - -Now test to ensure a correct failure. Start another terminal window and remove the Yubikey from the USB port. Verify that sudo no longer works without the Yubikey with this command: - -``` -$ sudo echo testing fail -``` - -You should immediately be prompted for the sudo password. Even if you enter the password, it should fail. 
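
If the sudo test does not behave as expected, it can help to rule out the key itself before digging further into PAM. The ykpers tools installed earlier include ykchalresp, which sends a challenge directly to slot 2 and prints the HMAC-SHA1 response as a hex string. The response value below is made up; yours will differ, and if you enabled the button trigger you must touch the key before the response is printed:

```
$ ykchalresp -2 'sanity check'
bf1ca87942c0352cbd8322e1ad2cbfc7e0a03845
```

If this fails, revisit the ykpersonalize step above before changing any more PAM configuration files.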
- -### Configuring Gnome Desktop Manager - -Once your testing is complete, now you can add challenge-response support for the graphical login. Re-insert your Yubikey into the USB port. Next you’ll add the following line to the /etc/pam.d/gdm-password file: - -``` -auth required pam_yubico.so mode=challenge-response -``` - -Open a terminal window, and issue the following command. You can use another editor if desired: - -``` -$ sudo vi /etc/pam.d/gdm-password -``` - -You should see the yubikey LED blinking. Press the yubikey button, then enter the password at the prompt. - -Modify the /etc/pam.d/gdm-password file to add the new auth line above the existing line auth substack password-auth. The top of the file should now look like this: - -``` -auth [success=done ignore=ignore default=bad] pam_selinux_permit.so -auth required pam_yubico.so mode=challenge-response -auth substack password-auth -auth optional pam_gnome_keyring.so -auth include postlogin - -account required pam_nologin.so -``` - -Save the changes and exit the editor. If you use vi, the key sequence is to hit the **Esc** key, then type wq! at the prompt to save and exit. - -### Conclusion - -Now log out of GNOME. With the Yubikey inserted into the USB port, click on your user name in the graphical login. The Yubikey LED begins to flash. Touch the button, and you will be prompted for your password. - -If you lose the Yubikey, you can still use the secondary backup Yubikey in addition to your set password. You can also add additional Yubikey configurations to your user account. - -If someone gains access to your password, they still can’t login without your physical hardware Yubikey. Congratulations! You’ve now dramatically increased the security of your workstation login. - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/login-challenge-response-authentication/ - -作者:[nabooengineer][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/nabooengineer/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/using-the-yubikey4-with-fedora/ -[2]: https://fedoramagazine.org/fedora-28-better-smart-card-support-openssh/ - diff --git a/sources/tech/20181025 How to write your favorite R functions in Python.md b/sources/tech/20181025 How to write your favorite R functions in Python.md deleted file mode 100644 index a06d3557b9..0000000000 --- a/sources/tech/20181025 How to write your favorite R functions in Python.md +++ /dev/null @@ -1,153 +0,0 @@ -How to write your favorite R functions in Python -====== -R or Python? This Python script mimics convenient R-style functions for doing statistics nice and easy. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0) - -One of the great modern battles of data science and machine learning is "Python vs. R." There is no doubt that both have gained enormous ground in recent years to become top programming languages for data science, predictive analytics, and machine learning. In fact, according to a recent IEEE article, Python overtook C++ as the [top programming language][1] and R firmly secured its spot in the top 10. - -However, there are some fundamental differences between these two. 
[R was developed primarily][2] as a tool for statistical analysis and quick prototyping of a data analysis problem. Python, on the other hand, was developed as a general purpose, modern object-oriented language in the same vein as C++ or Java but with a simpler learning curve and more flexible demeanor. Consequently, R continues to be extremely popular among statisticians, quantitative biologists, physicists, and economists, whereas Python has slowly emerged as the top language for day-to-day scripting, automation, backend web development, analytics, and general machine learning frameworks and has an extensive support base and open source development community work. - -### Mimicking functional programming in a Python environment - -[R's nature as a functional programming language][3] provides users with an extremely simple and compact interface for quick calculations of probabilities and essential descriptive/inferential statistics for a data analysis problem. For example, wouldn't it be great to be able to solve the following problems with just a single, compact function call? - - * How to calculate the mean/median/mode of a data vector. - * How to calculate the cumulative probability of some event following a normal distribution. What if the distribution is Poisson? - * How to calculate the inter-quartile range of a series of data points. - * How to generate a few random numbers following a Student's t-distribution. - - - -The R programming environment can do all of these. - -On the other hand, Python's scripting ability allows analysts to use those statistics in a wide variety of analytics pipelines with limitless sophistication and creativity. - -To combine the advantages of both worlds, you just need a simple Python-based wrapper library that contains the most commonly used functions pertaining to probability distributions and descriptive statistics defined in R-style. This enables you to call those functions really fast without having to go to the proper Python statistical libraries and figure out the whole list of methods and arguments. - -### Python wrapper script for most convenient R-functions - -[I wrote a Python script][4] to define the most convenient and widely used R-functions in simple, statistical analysis—in Python. After importing this script, you will be able to use those R-functions naturally, just like in an R programming environment. - -The goal of this script is to provide simple Python subroutines mimicking R-style statistical functions for quickly calculating density/point estimates, cumulative distributions, and quantiles and generating random variates for important probability distributions. - -To maintain the spirit of R styling, the script uses no class hierarchy and only raw functions are defined in the file. Therefore, a user can import this one Python script and use all the functions whenever they're needed with a single name call. - -Note that I use the word mimic. Under no circumstance am I claiming to emulate R's true functional programming paradigm, which consists of a deep environmental setup and complex relationships between those environments and objects. This script allows me (and I hope countless other Python users) to quickly fire up a Python program or Jupyter notebook, import the script, and start doing simple descriptive statistics in no time. That's the goal, nothing more, nothing less. 
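
To make the idea concrete, here is a minimal sketch of the kind of R-style wrapper such a script can define, built on SciPy. The function names and argument conventions follow R, as described above, but the actual definitions in the linked repository may differ in detail:

```
from scipy import stats

def dnorm(x, mean=0, sd=1):
    """Normal density at x, like R's dnorm()."""
    return stats.norm.pdf(x, loc=mean, scale=sd)

def pnorm(q, mean=0, sd=1):
    """Cumulative probability up to q, like R's pnorm()."""
    return stats.norm.cdf(q, loc=mean, scale=sd)

def qnorm(p, mean=0, sd=1):
    """Quantile (inverse CDF) at probability p, like R's qnorm()."""
    return stats.norm.ppf(p, loc=mean, scale=sd)

def rnorm(n, mean=0, sd=1):
    """n random draws from the normal distribution, like R's rnorm()."""
    return stats.norm.rvs(loc=mean, scale=sd, size=n)
```

Each distribution gets the same four-function treatment, which is what keeps the interface so compact.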
- -If you've coded in R (maybe in grad school) and are just starting to learn and use Python for data analysis, you will be happy to see and use some of the same well-known functions in your Jupyter notebook in a manner similar to how you use them in your R environment. - -Whatever your reason, using this script is fun. - -### Simple examples - -To start, just import the script and start working with lists of numbers as if they were data vectors in R. - -``` -from R_functions import * -lst=[20,12,16,32,27,65,44,45,22,18] - -``` - -Say you want to calculate the [Tuckey five-number][5] summary from a vector of data points. You just call one simple function, **fivenum** , and pass on the vector. It will return the five-number summary in a NumPy array. - -``` -lst=[20,12,16,32,27,65,44,45,22,18] -fivenum(lst) -> array([12. , 18.5, 24.5, 41. , 65. ]) -``` - -Maybe you want to know the answer to the following question: - -Suppose a machine outputs 10 finished goods per hour on average with a standard deviation of 2. The output pattern follows a near normal distribution. What is the probability that the machine will output at least 7 but no more than 12 units in the next hour? - -The answer is essentially this: - -![](https://opensource.com/sites/default/files/uploads/r-functions-in-python_1.png) - -You can obtain the answer with just one line of code using **pnorm** : - -``` -pnorm(12,10,2)-pnorm(7,10,2) -> 0.7745375447996848 -``` - -Or maybe you need to answer the following: - -Suppose you have a loaded coin with the probability of turning heads up 60% every time you toss it. You are playing a game of 10 tosses. How do you plot and map out the chances of all the possible number of wins (from 0 to 10) with this coin? - -You can obtain a nice bar chart with just a few lines of code using just one function, **dbinom** : - -``` -probs=[] -import matplotlib.pyplot as plt -for i in range(11): -    probs.append(dbinom(i,10,0.6)) -plt.bar(range(11),height=probs) -plt.grid(True) -plt.show() -``` - -![](https://opensource.com/sites/default/files/uploads/r-functions-in-python_2.png) - -### Simple interface for probability calculations - -R offers an extremely simple and intuitive interface for quick calculations from essential probability distributions. The interface goes like this: - - * **d** {distribution} gives the density function value at a point **x** - * **p** {distribution} gives the cumulative value at a point **x** - * **q** {distribution} gives the quantile function value at a probability **p** - * **r** {distribution} generates one or multiple random variates - - - -In our implementation, we stick to this interface and its associated argument list so you can execute these functions exactly like you would in an R environment. - -### Currently implemented functions - -The following R-style functions are implemented in the script for fast calling. - - * Mean, median, variance, standard deviation - * Tuckey five-number summary, IQR - * Covariance of a matrix or between two vectors - * Density, cumulative probability, quantile function, and random variate generation for the following distributions: normal, uniform, binomial, Poisson, F, Student's t, Chi-square, beta, and gamma. - - - -### Work in progress - -Obviously, this is a work in progress, and I plan to add some other convenient R-functions to this script. 
For example, in R, a single line of command **lm** can get you an ordinary least-square fitted model to a numerical dataset with all the necessary inferential statistics (P-values, standard error, etc.). This is powerfully brief and compact! On the other hand, standard linear regression problems in Python are often tackled using [Scikit-learn][6], which needs a bit more scripting for this use, so I plan to incorporate this single function linear model fitting feature using Python's [statsmodels][7] backend. - -If you like and use this script in your work, please help others find it by starring or forking its [GitHub repository][8]. Also, you can check my other [GitHub repos][9] for fun code snippets in Python, R, or MATLAB and some machine learning resources. - -If you have any questions or ideas to share, please contact me at [tirthajyoti[AT]gmail.com][10]. If you are, like me, passionate about machine learning and data science, please [add me on LinkedIn][11] or [follow me on Twitter. ][12] - -Originally published on [Towards Data Science][13]. Reposted under [CC BY-SA 4.0][14]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/write-favorite-r-functions-python - -作者:[Tirthajyoti Sarkar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/tirthajyoti -[b]: https://github.com/lujun9972 -[1]: https://spectrum.ieee.org/at-work/innovation/the-2018-top-programming-languages -[2]: https://www.coursera.org/lecture/r-programming/overview-and-history-of-r-pAbaE -[3]: http://adv-r.had.co.nz/Functional-programming.html -[4]: https://github.com/tirthajyoti/StatsUsingPython/blob/master/R_Functions.py -[5]: https://en.wikipedia.org/wiki/Five-number_summary -[6]: http://scikit-learn.org/stable/ -[7]: https://www.statsmodels.org/stable/index.html -[8]: https://github.com/tirthajyoti/StatsUsingPython -[9]: https://github.com/tirthajyoti?tab=repositories -[10]: mailto:tirthajyoti@gmail.com -[11]: https://www.linkedin.com/in/tirthajyoti-sarkar-2127aa7/ -[12]: https://twitter.com/tirthajyotiS -[13]: https://towardsdatascience.com/how-to-write-your-favorite-r-functions-in-python-11e1e9c29089 -[14]: https://creativecommons.org/licenses/by-sa/4.0/ diff --git a/sources/tech/20181029 How I organize my knowledge as a Software Engineer.md b/sources/tech/20181029 How I organize my knowledge as a Software Engineer.md deleted file mode 100644 index c11e1c9c38..0000000000 --- a/sources/tech/20181029 How I organize my knowledge as a Software Engineer.md +++ /dev/null @@ -1,119 +0,0 @@ -@flowsnow is translating - - -How I organize my knowledge as a Software Engineer -============================================================ - - -Software Development and Technology in general are areas that evolve at a very fast pace and continuous learning is essential. -Some minutes navigating in the internet, in places like Twitter, Medium, RSS feeds, Hacker News and other specialized sites and communities, are enough to find lots of great pieces of information from articles, case studies, tutorials, code snippets, new applications and much more. - -Saving and organizing all that information can be a daunting task. In this post I will present some tools tools that I use to do it. 
One of the points I consider very important regarding knowledge management is to avoid lock-in to a particular platform. All the tools I use allow exporting your data in standard formats like Markdown and HTML.

Note that my workflow is not perfect, and I am constantly searching for new tools and ways to optimize it. Also, everyone is different, so what works for me might not work well for you.

### Knowledge base with NotionHQ

For me, the fundamental piece of knowledge management is to have some kind of personal knowledge base / wiki: a place where you can save links, bookmarks, notes, etc. in an organized manner.

I use [NotionHQ][7] for that. I use it to keep notes on various topics, to maintain lists of resources such as great libraries or tutorials grouped by programming language, to bookmark interesting blog posts and tutorials, and much more, not only related to software development but also to my personal life.

What I really like about Notion is how simple it is to create new content. You write it using Markdown, and it is organized as a tree.

Here are the top-level pages of my "Development" workspace:

 [![Image](https://res.cloudinary.com/practicaldev/image/fetch/s--uMbaRUtu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://i.imgur.com/kRnuvMV.png)][8]

Notion has some other nice features like integrated spreadsheets / databases and task boards.

You will need to subscribe to the paid Personal Plan if you want to use Notion seriously, as the free plan is somewhat limited. I think it's worth the price. Notion allows you to export your entire workspace to Markdown files. The export has some important problems, like losing the page hierarchy, but I hope the Notion team can improve that.

As a free alternative, I would probably use [VuePress][9] or [GitBook][10] to host my own.

### Save interesting articles with Pocket

[Pocket][11] is one of my favorite applications ever! With Pocket you can create a reading list of articles from the Internet.
Every time I see an article that looks interesting, I save it to Pocket using its Chrome extension. Later on, I will read it, and if I find it useful enough, I will use Pocket's "Archive" function to permanently save that article and clean up my Pocket inbox.

I try to keep the reading list small and keep archiving information that I have dealt with. Pocket allows you to tag articles, which makes it simpler to search for articles on a particular topic later.

You can also save a copy of the article on Pocket's servers in case the original site disappears, but you will need Pocket Premium for that.

Pocket also has a "Discover" feature, which suggests similar articles based on the ones you have saved. This is a great way to find new content to read.

### Snippet Management with SnippetStore

From GitHub to Stack Overflow answers to blog posts, it's common to find nice code snippets that you want to save for later. It could be a nice algorithm implementation, a useful script, or an example of how to do X in language Y.

I tried many apps, from simple GitHub Gists to [Boostnote][12], until I discovered [SnippetStore][13].

SnippetStore is an open source snippet management app. What distinguishes SnippetStore from others is its simplicity. You can organize snippets by language or tags, and you can have multi-file snippets. It's not perfect, but it gets the job done. Boostnote, for example, has more features, but I prefer SnippetStore's simpler way of organizing content.

For abbreviations and snippets that I use on a daily basis, I prefer to use my editor / IDE's snippets feature, as it is more convenient. I use SnippetStore more as a reference of coding examples.

[Cacher][14] is also an interesting alternative, since it has integrations with many editors, has a CLI tool, and uses GitHub Gists as a backend, but $6/month for its pro plan is too much, in my opinion.

### Managing cheat sheets with DevHints

[Devhints][15] is a collection of cheat sheets created by Rico Sta. Cruz. It's open source and powered by Jekyll, one of the most popular static site generators.

The cheat sheets are written in Markdown with some extra formatting goodies like support for columns.

I really like the look of the interface, and being Markdown makes it incredibly easy to add new content and keep it updated and under version control, unlike the cheat sheets in PDF or image format that you can find on sites like [Cheatography][16].

As it is open source, I have created my own fork, removed some cheat sheets that I don't need, and added some more.

I use cheat sheets as a reference for how to use some library or programming language, or to remember some commands. It's very handy to have a single page with, for example, all the basic syntax of a specific programming language.

I am still experimenting with this, but it's working great so far.

### Diigo

[Diigo][17] allows you to annotate and highlight parts of websites. I use it to annotate important information when studying new topics, or to save particular paragraphs from articles, Stack Overflow answers, or inspirational quotes from Twitter! ;)

* * *

And that's it. There might be some overlap in functionality between some of these tools, but as I said in the beginning, this is an always-evolving workflow, as I am constantly experimenting and searching for ways to improve and be more productive.

What about you? How do you organize your knowledge? Please feel free to comment below.

Thank you for reading.

------------------------------------------------------------------------

About the author:

Bruno Paz
Web Engineer. Expert in #PHP and @Symfony Framework. Enthusiast about new technologies. Sports and @FCPorto fan!
- --------------------------------------------------------------------------------- - -via: https://dev.to/brpaz/how-do-i-organize-my-knowledge-as-a-software-engineer-4387 - -作者:[ Bruno Paz][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[oska874](https://github.com/oska874) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://brunopaz.net/ -[1]:https://dev.to/brpaz -[2]:http://twitter.com/brunopaz88 -[3]:http://github.com/brpaz -[4]:https://dev.to/t/knowledge -[5]:https://dev.to/t/learning -[6]:https://dev.to/t/development -[7]:https://www.notion.so/ -[8]:https://res.cloudinary.com/practicaldev/image/fetch/s--uMbaRUtu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://i.imgur.com/kRnuvMV.png -[9]:https://vuepress.vuejs.org/ -[10]:https://www.gitbook.com/?t=1 -[11]:https://getpocket.com/ -[12]:https://boostnote.io/ -[13]:https://github.com/ZeroX-DG/SnippetStore -[14]:https://www.cacher.io/ -[15]:https://devhints.io/ -[16]:https://cheatography.com/ -[17]:https://www.diigo.com/index diff --git a/sources/tech/20181105 5 Easy Tips for Linux Web Browser Security.md b/sources/tech/20181105 5 Easy Tips for Linux Web Browser Security.md deleted file mode 100644 index 17d06c72d6..0000000000 --- a/sources/tech/20181105 5 Easy Tips for Linux Web Browser Security.md +++ /dev/null @@ -1,157 +0,0 @@ -5 Easy Tips for Linux Web Browser Security -====== -![](https://www.linux.com/learn/intro-to-linux/2018/11/5-easy-tips-linux-web-browser-security) - -If you use your Linux desktop and never open a web browser, you are a special kind of user. For most of us, however, a web browser has become one of the most-used digital tools on the planet. We work, we play, we get news, we interact, we bank… the number of things we do via a web browser far exceeds what we do in local applications. Because of that, we need to be cognizant of how we work with web browsers, and do so with a nod to security. Why? Because there will always be nefarious sites and people, attempting to steal information. Considering the sensitive nature of the information we send through our web browsers, it should be obvious why security is of utmost importance. - -So, what is a user to do? In this article, I’ll offer a few basic tips, for users of all sorts, to help decrease the chances that your data will end up in the hands of the wrong people. I will be demonstrating on the Firefox web browser, but many of these tips cross the application threshold and can be applied to any flavor of web browser. - -### 1. Choose Your Browser Wisely - -Although most of these tips apply to most browsers, it is imperative that you select your web browser wisely. One of the more important aspects of browser security is the frequency of updates. New issues are discovered quite frequently and you need to have a web browser that is as up to date as possible. Of major browsers, here is how they rank with updates released in 2017: - - 1. Chrome released 8 updates (with Chromium following up with numerous security patches throughout the year). - - 2. Firefox released 7 updates. - - 3. Edge released 2 updates. - - 4. Safari released 1 update (although Apple does release 5-6 security patches yearly). - - - - -But even if your browser of choice releases an update every month, if you (as a user) don’t upgrade, that update does you no good. This can be problematic with certain Linux distributions. 
Although many of the more popular flavors of Linux do a good job of keeping web browsers up to date, others do not. So, it’s crucial that you manually keep on top of browser updates. This might mean your distribution of choice doesn’t include the latest version of your web browser of choice in its standard repository. If that’s the case, you can always manually download the latest version of the browser from the developer’s download page and install from there. - -If you like to live on the edge, you can always use a beta or daily build version of your browser. Do note, that using a daily build or beta version does come with it the possibility of unstable software. Say, however, you’re okay with using a daily build of Firefox on a Ubuntu-based distribution. To do that, add the necessary repository with the command: - -``` -sudo apt-add-repository ppa:ubuntu-mozilla-daily/ppa -``` - -Update apt and install the daily Firefox with the commands: - -``` -sudo apt-get update -sudo apt-get install firefox -``` - -What’s most important here is to never allow your browser to get far out of date. You want to have the most updated version possible on your desktop. Period. If you fail this one thing, you could be using a browser that is vulnerable to numerous issues. - -### 2. Use A Private Window - -Now that you have your browser updated, how do you best make use of it? If you happen to be of the really concerned type, you should consider always using a private window. Why? Private browser windows don’t retain your data: No passwords, no cookies, no cache, no history… nothing. The one caveat to browsing through a private window is that (as you probably expect), every time you go back to a web site, or use a service, you’ll have to re-type any credentials to log in. If you’re serious about browser security, never saving credentials should be your default behavior. - -This leads me to a reminder that everyone needs: Make your passwords strong! In fact, at this point in the game, everyone should be using a password manager to store very strong passwords. My password manager of choice is [Universal Password Manager][1]. - -### 3\. Protect Your Passwords - -For some, having to retype those passwords every single time might be too much. So what do you do if you want to protect those passwords, while not having to type them constantly? If you use Firefox, there’s a built-in tool, called Master Password. With this enabled, none of your browser’s saved passwords are accessible, until you correctly type the master password. To set this up, do the following: - - 1. Open Firefox. - - 2. Click the menu button. - - 3. Click Preferences. - - 4. In the Preferences window, click Privacy & Security. - - 5. In the resulting window, click the checkbox for Use a master password (Figure 1). - - 6. When prompted, type and verify your new master password (Figure 2). - - 7. Close and reopen Firefox. - - - - -![Master Password][3] - -Figure 1: The Master Password option in Firefox Preferences. - -[Used with permission][4] - -![Setting password][6] - -Figure 2: Setting the Master Password in Firefox. - -[Used with permission][4] - -### 4\. Know your Extensions - -There are plenty of privacy-focused extensions available for most browsers. What extensions you use will depend upon what you want to focus on. For myself, I choose the following extensions for Firefox: - - * [Firefox Multi-Account Containers][7] \- Allows you to configure certain sites to open in a containerized tab. 
- - * [Facebook Container][8] \- Always opens Facebook in a containerized tab (Firefox Multi-Account Containers is required for this). - - * [Avast Online Security][9] \- Identifies and blocks known phishing sites and displays a website’s security rating (curated by the Avast community of over 400 million users). - - * [Mining Blocker][10] \- Blocks all CPU-Crypto Miners before they are loaded. - - * [PassFF][11] \- Integrates with pass (A UNIX password manager) to store credentials safely. - - * [Privacy Badger][12] \- Automatically learns to block trackers. - - * [uBlock Origin][13] \- Blocks trackers based on known lists. - - -Of course, you’ll find plenty more security-focused extensions for: - - - -+ [Firefox][2] - -+ [Chrome, Chromium, & Vivaldi][5] - -+ [Opera][14] - - -Not every web browser offers extensions. Some, such as Midoria, offer a limited about of built-in plugins, that can be enabled/disabled (Figure 3). However, you won’t find third-party plugins available for the majority of these lightweight browsers. - -![Midori Browser][15] - -Figure 3: The Midori Browser plugins window. - -[Used with permission][4] - -### 5\. Virtualize - -For those that are concerned about releasing locally stored data to prying eyes, one option would be to only use a browser on a virtual machine. To do this, install the likes of [VirtualBox][16], install a Linux guest, and then run whatever browser you like in the virtual environment. If you then apply the above tips, you can be sure your browsing experience will be safe. - -### The Truth of the Matter - -The truth is, if the machine you are working from is on a network, you’re never going to be 100% safe. However, if you use that web browser intelligently you’ll get more bang out of your security buck and be less prone to having data stolen. The silver lining with Linux is that the chances of getting malicious software installed on your machine is exponentially less than if you were using another platform. Just remember to always use the latest release of your browser, keep your operating system updated, and use caution with the sites you visit. - -Learn more about Linux through the free ["Introduction to Linux" ][17] course from The Linux Foundation and edX. 
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/11/5-easy-tips-linux-web-browser-security - -作者:[Jack Wallen][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/jlwallen -[b]: https://github.com/lujun9972 -[1]: http://upm.sourceforge.net/ -[2]: https://addons.mozilla.org/en-US/firefox/search/?q=security -[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/browsersecurity_1.jpg?itok=gHMPKEvr (Master Password) -[4]: https://www.linux.com/licenses/category/used-permission -[5]: https://chrome.google.com/webstore/search/security -[6]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/browsersecurity_2.jpg?itok=4L7DR2Ik (Setting password) -[7]: https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/?src=search -[8]: https://addons.mozilla.org/en-US/firefox/addon/facebook-container/?src=search -[9]: https://addons.mozilla.org/en-US/firefox/addon/avast-online-security/?src=search -[10]: https://addons.mozilla.org/en-US/firefox/addon/miningblocker/?src=search -[11]: https://addons.mozilla.org/en-US/firefox/addon/passff/?src=search -[12]: https://addons.mozilla.org/en-US/firefox/addon/privacy-badger17/ -[13]: https://addons.mozilla.org/en-US/firefox/addon/ublock-origin/?src=search -[14]: https://addons.opera.com/en/search/?query=security -[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/browsersecurity_3.jpg?itok=hdNor0gw (Midori Browser) -[16]: https://www.virtualbox.org/ -[17]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20181105 5 Minimal Web Browsers for Linux.md b/sources/tech/20181105 5 Minimal Web Browsers for Linux.md new file mode 100644 index 0000000000..1fbc18aeae --- /dev/null +++ b/sources/tech/20181105 5 Minimal Web Browsers for Linux.md @@ -0,0 +1,163 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (5 Minimal Web Browsers for Linux) +[#]: via: (https://www.linux.com/blog/intro-to-linux/2018/11/5-minimal-web-browsers-linux) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) +[#]: url: ( ) + +5 Minimal Web Browsers for Linux +====== +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimal.jpg?itok=ifA0Y3pV) + +There are so many reasons to enjoy the Linux desktop. One reason I often state up front is the almost unlimited number of choices to be found at almost every conceivable level. From how you interact with the operating system (via a desktop interface), to how daemons run, to what tools you use, you have a multitude of options. + +The same thing goes for web browsers. You can use anything from open source favorites, such as [Firefox][1] and [Chromium][2], or closed sourced industry darlings like [Vivaldi][3] and [Chrome][4]. Those options are full-fledged browsers with every possible bell and whistle you’ll ever need. For some, these feature-rich browsers are perfect for everyday needs. + +There are those, however, who prefer using a web browser without all the frills. In fact, there are many reasons why you might prefer a minimal browser over a standard browser. 
For some, it’s about browser security, while others look at a web browser as a single-function tool (as opposed to a one-stop shop application). Still others might be running low-powered machines that cannot handle the requirements of, say, Firefox or Chrome. Regardless of the reason, Linux has you covered.

Let’s take a look at five of the minimal browsers that can be installed on Linux. I’ll be demonstrating these browsers on the Elementary OS platform, but each of these browsers is available for nearly every distribution in the known Linuxverse. Let’s dive in.

### GNOME Web

GNOME Web (codename Epiphany, which means [“a usually sudden manifestation or perception of the essential nature or meaning of something”][5]) is the default web browser for Elementary OS, but it can be installed from the standard repositories. (Note, however, that the recommended installation of Epiphany is via Flatpak or Snap). If you choose to install via the standard package manager, issue a command such as `sudo apt-get install epiphany-browser -y` for successful installation.

Epiphany uses the WebKit rendering engine, which is the same engine used in Apple’s Safari browser. Couple that rendering engine with the fact that Epiphany has very little bloat to get in the way, and you will enjoy very fast page-rendering speeds. Epiphany development follows strict adherence to the following guidelines:

  * Simplicity - Feature bloat and user interface clutter are considered evil.

  * Standards compliance - No non-standard features will ever be introduced to the codebase.

  * Software freedom - Epiphany will always be released under a license that respects freedom.

  * Human interface - Epiphany follows the [GNOME Human Interface Guidelines][6].

  * Minimal preferences - Preferences are only added when they make sense and after careful consideration.

  * Target audience - Non-technical users are the primary target audience (which helps to define the types of features that are included).




GNOME Web is as clean and simple a web browser as you’ll find (Figure 1).

![GNOME Web][8]

Figure 1: The GNOME Web browser displaying a minimal amount of preferences for the user.

[Used with permission][9]

The GNOME Web manifesto reads:

A web browser is more than an application: it is a way of thinking, a way of seeing the world. Epiphany's principles are simplicity, standards compliance, and software freedom.

### Netsurf

The [Netsurf][10] minimal web browser opens almost faster than you can release the mouse button. Netsurf uses its own layout and rendering engine (designed completely from scratch), which is rather hit and miss in its rendering (Figure 2).

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimalbrowsers_2.jpg?itok=KhGhIKlj)

Although you might find Netsurf to suffer from rendering issues on certain sites, understand that the Hubbub HTML parser is following the work-in-progress HTML5 specification, so issues will pop up now and then. To ease those rendering headaches, Netsurf does include HTTPS support, web page thumbnailing, URL completion, scale view, bookmarks, full-screen mode, keyboard shortcuts, and no particular GUI toolkit requirements. That last bit is important, especially when you switch from one desktop to another.

For those curious as to the requirements for Netsurf, the browser can run on a machine as slow as a 30MHz ARM 6 computer with 16MB of RAM. That’s impressive, by today’s standards. 
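If you want to give Netsurf a spin, it is packaged in most distribution repositories. On Debian- and Ubuntu-based systems the GTK build is usually shipped under the package name `netsurf-gtk` (that name is an assumption for your particular distribution, so check your package manager), making the install something like:

```
sudo apt-get install netsurf-gtk
```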
+ +### QupZilla + +If you’re looking for a minimal browser that uses the Qt Framework and the QtWebKit rendering engine, [QupZilla][11] might be exactly what you’re looking for. QupZilla does include all the standard features and functions you’d expect from a web browser, such as bookmarks, history, sidebar, tabs, RSS feeds, ad blocking, flash blocking, and CA Certificates management. Even with those features, QupZilla still manages to remain a very fast lightweight web browser. Other features include: Fast startup, speed dial homepage, built-in screenshot tool, browser themes, and more. +One feature that should appeal to average users is that QupZilla has a more standard preferences tools than found in many lightweight browsers (Figure 3). So, if going too far outside the lines isn’t your style, but you still want something lighter weight, QupZilla is the browser for you. + +![QupZilla][13] + +Figure 3: The QupZilla preferences tool. + +[Used with permission][9] + +### Otter Browser + +Otter Browser is a free, open source attempt to recreate the closed-source offerings found in the Opera Browser. Otter Browser uses the WebKit rendering engine and has an interface that should be immediately familiar with any user. Although lightweight, Otter Browser does include full-blown features such as: + + * Passwords manager + + * Add-on manager + + * Content blocking + + * Spell checking + + * Customizable GUI + + * URL completion + + * Speed dial (Figure 4) + + * Bookmarks and various related features + + * Mouse gestures + + * User style sheets + + * Built-in Note tool + + +![Otter][15] + +Figure 4: The Otter Browser Speed Dial tab. + +[Used with permission][9] + +Otter Browser can be run on nearly any Linux distribution from an [AppImage][16], so there’s no installation required. Just download the AppImage file, give the file executable permissions (with the command chmod u+x otter-browser-*.AppImage), and then launch the app with the command ./otter-browser*.AppImage. + +Otter Browser does an outstanding job of rendering websites and could function as your go-to minimal browser with ease. + +### Lynx + +Let’s get really minimal. When I first started using Linux, back in ‘97, one of the web browsers I often turned to was a text-only take on the app called [Lynx][17]. It should come as no surprise that Lynx is still around and available for installation from the standard repositories. As you might expect, Lynx works from the terminal window and doesn’t display pretty pictures or render much in the way of advanced features (Figure 5). In fact, Lynx is as bare-bones a browser as you will find available. Because of how bare-bones this web browser is, it’s not recommended for everyone. But if you happen to have a gui-less web server and you have a need to be able to read the occasional website, Lynx can be a real lifesaver. + +![Lynx][19] + +Figure 5: The Lynx browser rendering the Linux.com page. + +[Used with permission][9] + +I have also found Lynx an invaluable tool when troubleshooting certain aspects of a website (or if some feature on a website is preventing me from viewing the content in a regular browser). Another good reason to use Lynx is when you only want to view the content (and not the extraneous elements). + +### Plenty More Where This Came From + +There are plenty more minimal browsers than this. But the list presented here should get you started down the path of minimalism. 
One (or more) of these browsers are sure to fill that need, whether you’re running it on a low-powered machine or not. + +Learn more about Linux through the free ["Introduction to Linux" ][20]course from The Linux Foundation and edX. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/intro-to-linux/2018/11/5-minimal-web-browsers-linux + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: https://www.mozilla.org/en-US/firefox/new/ +[2]: https://www.chromium.org/ +[3]: https://vivaldi.com/ +[4]: https://www.google.com/chrome/ +[5]: https://www.merriam-webster.com/dictionary/epiphany +[6]: https://developer.gnome.org/hig/stable/ +[7]: /files/images/minimalbrowsers1jpg +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimalbrowsers_1.jpg?itok=Q7wZLF8B (GNOME Web) +[9]: /licenses/category/used-permission +[10]: https://www.netsurf-browser.org/ +[11]: https://qupzilla.com/ +[12]: /files/images/minimalbrowsers3jpg +[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimalbrowsers_3.jpg?itok=O8iMALWO (QupZilla) +[14]: /files/images/minimalbrowsers4jpg +[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimalbrowsers_4.jpg?itok=5bCa0z-e (Otter) +[16]: https://sourceforge.net/projects/otter-browser/files/ +[17]: https://lynx.browser.org/ +[18]: /files/images/minimalbrowsers5jpg +[19]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimalbrowsers_5.jpg?itok=p_Lmiuxh (Lynx) +[20]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20181106 How To Check The List Of Packages Installed From Particular Repository.md b/sources/tech/20181106 How To Check The List Of Packages Installed From Particular Repository.md new file mode 100644 index 0000000000..81111b465c --- /dev/null +++ b/sources/tech/20181106 How To Check The List Of Packages Installed From Particular Repository.md @@ -0,0 +1,342 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (How To Check The List Of Packages Installed From Particular Repository?) +[#]: via: (https://www.2daygeek.com/how-to-check-the-list-of-packages-installed-from-particular-repository/) +[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/) +[#]: url: ( ) + +How To Check The List Of Packages Installed From Particular Repository? +====== + +If you would like to check the list of package installed from particular repository then you are in the right place to get it done. + +Why we need this detail? It may helps you to isolate the installed packages list based on the repository. + +Like, it’s coming from distribution official repository or these are coming from PPA or these are coming from other resources, etc., + +You may want to know what are the packages came from third party repositories to keep eye on those to avoid any damages on your system. + +So many third party repositories and PPAs are available for Linux. These repositories are included set of packages which is not available in distribution repository due to some limitation. 
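As a quick illustration before we get to the querying methods below, enabling one of these community-approved repositories is usually just a single package install. On CentOS, for example, EPEL can typically be added like this (a hedged example; the exact package or command may differ on your release):

```
# yum install epel-release
```

Once a third-party repository like this is enabled, the methods described below tell you exactly which of your installed packages came from it.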
+ +It helps administrator to easily install some of the important packages which is not available in the distribution official repository. Installing third party repository on production system is not advisable as this may not properly maintained by the repository maintainer due to many reasons. + +So, you have to decide whether you want to install or not. I can say, we can believe some of the third party repositories which is well maintained and suggested by Linux distributions like [EPEL repository][1], Copr (Cool Other Package Repo), etc,. + +If you would like to see the list of package was installed from the corresponding repo, use the following commands based on your distributions. + +[List of Major repositories][2] and it’s details are below. + + * **`CentOS:`** [EPEL][1], [ELRepo][3], etc is [CentOS Community Approved Repositories][4]. + * **`Fedora:`** [RPMfusion repo][5] is commonly used by most of the [Fedora][6] users. + * **`ArchLinux:`** ArchLinux community repository contains packages that have been adopted by Trusted Users from the Arch User Repository. + * **`openSUSE:`** [Packman repo][7] offers various additional packages for openSUSE, especially but not limited to multimedia related applications and libraries that are on the openSUSE Build Service application blacklist. It’s the largest external repository of openSUSE packages. + * **`Ubuntu:`** Personal Package Archives (PPAs) are a kind of repository. Developers create them in order to distribute their software. You can find this information on the PPA’s Launchpad page. Also, you can enable Cananical partners repositories. + + + +### What Is Repository? + +A software repository is a central place which stores the software packages for the particular application. + +All the Linux distributions are maintaining their own repositories and they allow users to retrieve and install packages on their machine. + +Each vendor offered a unique package management tool to manage their repositories such as search, install, update, upgrade, remove, etc. + +Most of the Linux distributions comes as freeware except RHEL and SUSE. To access their repositories you need to buy a subscriptions. + +### How To Check The List Of Packages Installed From Particular Repository on RHEL/CentOS Systems? + +This can be done in multiple ways. Here we will be giving you all the possible options and you can choose which one is best for you. + +### Method-1: Using Yum Command + +RHEL & CentOS systems are using RPM packages hence we can use the [Yum Package Manager][8] to get this information. + +YUM stands for Yellowdog Updater, Modified is an open-source command-line front-end package-management utility for RPM based systems such as Red Hat Enterprise Linux (RHEL) and CentOS. + +Yum is the primary tool for getting, installing, deleting, querying, and managing RPM packages from distribution repositories, as well as other third-party repositories. + +``` +[[email protected] ~]# yum list installed | grep @epel +apachetop.x86_64 0.15.6-1.el7 @epel +aria2.x86_64 1.18.10-2.el7.1 @epel +atop.x86_64 2.3.0-8.el7 @epel +axel.x86_64 2.4-9.el7 @epel +epel-release.noarch 7-11 @epel +lighttpd.x86_64 1.4.50-1.el7 @epel +``` + +Alternatively, you can use the yum command with other option to get the same details like above. 
+ +``` +# yum repo-pkgs epel list installed +Loaded plugins: fastestmirror +Loading mirror speeds from cached hostfile + * epel: epel.mirror.constant.com +Installed Packages +apachetop.x86_64 0.15.6-1.el7 @epel +aria2.x86_64 1.18.10-2.el7.1 @epel +atop.x86_64 2.3.0-8.el7 @epel +axel.x86_64 2.4-9.el7 @epel +epel-release.noarch 7-11 @epel +lighttpd.x86_64 1.4.50-1.el7 @epel +``` + +### Method-2: Using Yumdb Command + +Yumdb info provides information similar to yum info but additionally it provides package checksum data, type, user info (who installed the package). Since yum 3.2.26 yum has started storing additional information outside of the rpmdatabase (where user indicates it was installed by the user, and dep means it was brought in as a dependency). + +``` +# yumdb search from_repo epel* |egrep -v '(from_repo|^$)' +Loaded plugins: fastestmirror +apachetop-0.15.6-1.el7.x86_64 +aria2-1.18.10-2.el7.1.x86_64 +atop-2.3.0-8.el7.x86_64 +axel-2.4-9.el7.x86_64 +epel-release-7-11.noarch +lighttpd-1.4.50-1.el7.x86_64 +``` + +### Method-3: Using Repoquery Command + +repoquery is a program for querying information from YUM repositories similarly to rpm queries. + +``` +# repoquery -a --installed --qf "%{ui_from_repo} %{name}" | grep '^@epel' +@epel apachetop +@epel aria2 +@epel atop +@epel axel +@epel epel-release +@epel lighttpd +``` + +### How To Check The List Of Packages Installed From Particular Repository on Fedora System? + +DNF stands for Dandified yum. We can tell DNF, the next generation of yum package manager (Fork of Yum) using hawkey/libsolv library for back-end. Aleš Kozumplík started working on DNF since Fedora 18 and its implemented/launched in Fedora 22 finally. + +[Dnf command][9] is used to install, update, search & remove packages on Fedora 22 and later system. It automatically resolve dependencies and make it smooth package installation without any trouble. + +``` +# dnf list installed | grep @updates +NetworkManager.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-adsl.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-bluetooth.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-libnm.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-libreswan.x86_64 1.2.10-1.fc29 @updates +NetworkManager-libreswan-gnome.x86_64 1.2.10-1.fc29 @updates +NetworkManager-openvpn.x86_64 1:1.8.8-1.fc29 @updates +NetworkManager-openvpn-gnome.x86_64 1:1.8.8-1.fc29 @updates +NetworkManager-ovs.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-ppp.x86_64 1:1.12.4-2.fc29 @updates +. +. +``` + +Alternatively, you can use the dnf command with other option to get the same details like above. + +``` +# dnf repo-pkgs updates list installed +Installed Packages +NetworkManager.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-adsl.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-bluetooth.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-libnm.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-libreswan.x86_64 1.2.10-1.fc29 @updates +NetworkManager-libreswan-gnome.x86_64 1.2.10-1.fc29 @updates +NetworkManager-openvpn.x86_64 1:1.8.8-1.fc29 @updates +NetworkManager-openvpn-gnome.x86_64 1:1.8.8-1.fc29 @updates +NetworkManager-ovs.x86_64 1:1.12.4-2.fc29 @updates +. +. +``` + +### How To Check The List Of Packages Installed From Particular Repository on openSUSE System? + +Zypper is a command line package manager which makes use of libzypp. [Zypper command][10] provides functions like repository access, dependency solving, package installation, etc. + +``` +zypper search -ir "Update Repository (Non-Oss)" +Loading repository data... 
+Reading installed packages... + +S | Name | Summary | Type +---+----------------------------+---------------------------------------------------+-------- +i | gstreamer-0_10-fluendo-mp3 | GStreamer plug-in from Fluendo for MP3 support | package +i+ | openSUSE-2016-615 | Test-update for openSUSE Leap 42.2 Non Free | patch +i+ | openSUSE-2017-724 | Security update for unrar | patch +i | unrar | A program to extract, test, and view RAR archives | package +``` + +Alternatively, we can use repo id instead of repo name. + +``` +zypper search -ir 2 +Loading repository data... +Reading installed packages... + +S | Name | Summary | Type +---+----------------------------+---------------------------------------------------+-------- +i | gstreamer-0_10-fluendo-mp3 | GStreamer plug-in from Fluendo for MP3 support | package +i+ | openSUSE-2016-615 | Test-update for openSUSE Leap 42.2 Non Free | patch +i+ | openSUSE-2017-724 | Security update for unrar | patch +i | unrar | A program to extract, test, and view RAR archives | package +``` + +### How To Check The List Of Packages Installed From Particular Repository on ArchLinux System? + +[Pacman command][11] stands for package manager utility. pacman is a simple command-line utility to install, build, remove and manage Arch Linux packages. Pacman uses libalpm (Arch Linux Package Management (ALPM) library) as a back-end to perform all the actions. + +``` +$ paclist community +acpi 1.7-2 +acpid 2.0.30-1 +adapta-maia-theme 3.94.0.149-1 +android-tools 9.0.0_r3-1 +blueman 2.0.6-1 +brotli 1.0.7-1 +. +. +ufw 0.35-5 +unace 2.5-10 +usb_modeswitch 2.5.2-1 +viewnior 1.7-1 +wallpapers-2018 1.0-1 +xcursor-breeze 5.11.5-1 +xcursor-simpleandsoft 0.2-8 +xcursor-vanilla-dmz-aa 0.4.5-1 +xfce4-whiskermenu-plugin-gtk3 2.3.0-1 +zeromq 4.2.5-1 +``` + +### How To Check The List Of Packages Installed From Particular Repository on Debian Based Systems? + +For Debian based systems, it can be done using grep command. + +If you want to know the list of installed repositories on your system, use the following command. + +``` +$ ls -lh /var/lib/apt/lists/ | uniq +total 370M +-rw-r--r-- 1 root root 10K Oct 26 10:53 archive.canonical.com_ubuntu_dists_bionic_InRelease +-rw-r--r-- 1 root root 6.4K Oct 26 10:53 archive.canonical.com_ubuntu_dists_bionic_partner_binary-amd64_Packages +-rw-r--r-- 1 root root 6.4K Oct 26 10:53 archive.canonical.com_ubuntu_dists_bionic_partner_binary-i386_Packages +-rw-r--r-- 1 root root 3.2K Jun 12 21:19 archive.canonical.com_ubuntu_dists_bionic_partner_i18n_Translation-en +drwxr-xr-x 2 _apt root 4.0K Jul 25 08:44 auxfiles +-rw-r--r-- 1 root root 3.7K Oct 16 15:13 download.virtualbox.org_virtualbox_debian_dists_bionic_contrib_binary-amd64_Packages +-rw-r--r-- 1 root root 7.2K Oct 16 15:13 download.virtualbox.org_virtualbox_debian_dists_bionic_contrib_Contents-amd64.lz4 +-rw-r--r-- 1 root root 4.4K Oct 16 15:13 download.virtualbox.org_virtualbox_debian_dists_bionic_InRelease +-rw-r--r-- 1 root root 34 Mar 19 2018 download.virtualbox.org_virtualbox_debian_dists_bionic_non-free_Contents-amd64.lz4 +-rw-r--r-- 1 root root 6.4K Sep 21 09:42 in.archive.ubuntu.com_ubuntu_dists_bionic-backports_Contents-amd64.lz4 +-rw-r--r-- 1 root root 6.4K Sep 21 09:42 in.archive.ubuntu.com_ubuntu_dists_bionic-backports_Contents-i386.lz4 +-rw-r--r-- 1 root root 73K Nov 6 11:16 in.archive.ubuntu.com_ubuntu_dists_bionic-backports_InRelease +. +. 
+-rw-r--r-- 1 root root 29 May 11 06:39 security.ubuntu.com_ubuntu_dists_bionic-security_main_dep11_icons-64x64.tar.gz +-rw-r--r-- 1 root root 747K Nov 5 23:57 security.ubuntu.com_ubuntu_dists_bionic-security_main_i18n_Translation-en +-rw-r--r-- 1 root root 2.8K Oct 9 22:37 security.ubuntu.com_ubuntu_dists_bionic-security_multiverse_binary-amd64_Packages +-rw-r--r-- 1 root root 3.7K Oct 9 22:37 security.ubuntu.com_ubuntu_dists_bionic-security_multiverse_binary-i386_Packages +-rw-r--r-- 1 root root 1.8K Jul 24 23:06 security.ubuntu.com_ubuntu_dists_bionic-security_multiverse_i18n_Translation-en +-rw-r--r-- 1 root root 519K Nov 5 20:12 security.ubuntu.com_ubuntu_dists_bionic-security_universe_binary-amd64_Packages +-rw-r--r-- 1 root root 517K Nov 5 20:12 security.ubuntu.com_ubuntu_dists_bionic-security_universe_binary-i386_Packages +-rw-r--r-- 1 root root 11K Nov 6 05:36 security.ubuntu.com_ubuntu_dists_bionic-security_universe_dep11_Components-amd64.yml.gz +-rw-r--r-- 1 root root 8.9K Nov 6 05:36 security.ubuntu.com_ubuntu_dists_bionic-security_universe_dep11_icons-48x48.tar.gz +-rw-r--r-- 1 root root 16K Nov 6 05:36 security.ubuntu.com_ubuntu_dists_bionic-security_universe_dep11_icons-64x64.tar.gz +-rw-r--r-- 1 root root 315K Nov 5 20:12 security.ubuntu.com_ubuntu_dists_bionic-security_universe_i18n_Translation-en +``` + +To get the list of installed packages from the `security.ubuntu.com` repository. + +``` +$ grep Package /var/lib/apt/lists/security.ubuntu.com_*_Packages | awk '{print $2;}' +amd64-microcode +apache2 +apache2-bin +apache2-data +apache2-dbg +apache2-dev +. +. +znc +znc-dev +znc-perl +znc-python +znc-tcl +zsh-static +zziplib-bin +``` + +The security repository containing multiple branches (main, multiverse and universe) and if you would like to list out the installed packages from the particular repository `universe` then use the following format. + +``` +$ grep Package /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_bionic-security_universe*_Packages | awk '{print $2;}' +ant +ant-doc +ant-optional +apache2-suexec-custom +apache2-suexec-pristine +apparmor-easyprof +apport-kde +apport-noui +apport-valgrind +apt-transport-https +. +. +xul-ext-gdata-provider +xul-ext-lightning +xvfb +znc +znc-dev +znc-perl +znc-python +znc-tcl +zsh-static +zziplib-bin +``` + +one more example for `ppa.launchpad.net` repository. 
+ +``` +$ grep Package /var/lib/apt/lists/ppa.launchpad.net_*_Packages | awk '{print $2;}' +notepadqq +notepadqq-gtk +notepadqq-common +notepadqq +notepadqq-gtk +notepadqq-common +numix-gtk-theme +numix-icon-theme +numix-icon-theme-circle +numix-icon-theme-square +numix-gtk-theme +numix-icon-theme +numix-icon-theme-circle +numix-icon-theme-square +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-check-the-list-of-packages-installed-from-particular-repository/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/ +[2]: https://www.2daygeek.com/category/repository/ +[3]: https://www.2daygeek.com/install-enable-elrepo-on-rhel-centos-scientific-linux/ +[4]: https://www.2daygeek.com/additional-yum-repositories-for-centos-rhel-fedora-systems/ +[5]: https://www.2daygeek.com/install-enable-rpm-fusion-repository-on-centos-fedora-rhel/ +[6]: https://fedoraproject.org/wiki/Third_party_repositories +[7]: https://www.2daygeek.com/install-enable-packman-repository-on-opensuse-leap/ +[8]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[9]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[11]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ diff --git a/sources/tech/20181107 Top 30 OpenStack Interview Questions and Answers.md b/sources/tech/20181107 Top 30 OpenStack Interview Questions and Answers.md deleted file mode 100644 index e00fc5452b..0000000000 --- a/sources/tech/20181107 Top 30 OpenStack Interview Questions and Answers.md +++ /dev/null @@ -1,324 +0,0 @@ -Top 30 OpenStack Interview Questions and Answers -====== -Now a days most of the firms are trying to migrate their IT infrastructure and Telco Infra into private cloud i.e OpenStack. If you planning to give interviews on Openstack admin profile, then below list of interview questions might help you to crack the interview. - -![](https://www.linuxtechi.com/wp-content/uploads/2018/11/OpenStack-Interview-Questions.jpg) - -### Q:1 Define OpenStack and its key components? - -Ans: It is a bundle of opensource software, which all in combine forms a provide cloud software known as OpenStack.OpenStack is known as Stack of Open source Software or Projects. - -Following are the key components of OpenStack - - * **Nova** – It handles the Virtual machines at compute level and performs other computing task at compute or hypervisor level. - * **Neutron** – It provides the networking functionality to VMs, Compute and Controller Nodes. - * **Keystone** – It provides the identity service for all cloud users and openstack services. In other words, we can say Keystone a method to provide access to cloud users and services. - * **Horizon** – It provides a GUI (Graphical User Interface), using the GUI Admin can all day to day operations task at ease. - * **Cinder** – It provides the block storage functionality, generally in OpenStack Cinder is integrated with Chef and ScaleIO to service block storage to Compute & Controller nodes. 
- * **Swift** – It provides the object storage functionality. Generally, Glance images are on object storage. External storage like ScaleIO can work as Object storage too and can easily be integrated with Glance Service. - * **Glance** – It provides Cloud image services, using glance admin used to upload and download cloud images. - * **Heat** – It provides an orchestration service or functionality. Using Heat admin can easily VMs as stack and based on requirements VMs in the stack can be scale-in and Scale-out - * **Ceilometer** – It provides the telemetry and billing services. - - - -### Q:2 What are services generally run on a controller node? - -Ans: Following services run on a controller node: - - * Identity Service ( KeyStone) - * Image Service ( Glance) - * Nova Services like Nova API, Nova Scheduler & Nova DB - * Block & Object Service - * Ceilometer Service - * MariaDB / MySQL and RabbitMQ Service - * Management services of Networking (Neutron) and Networking agents - * Orchestration Service (Heat) - - - -### Q:3 What are the services generally run on a Compute Node? - -Ans: Following services run on a compute node, - - * Nova-Compute - * Networking Services like OVS - - - -### Q:4 What is the default location of VMs on the Compute Nodes? - -Ans: VMs in the Compute node are stored at “ **/var/lib/nova/instances** ” - -### Q:5 What is default location of glance images? - -Ans: As the Glance service runs on a controller node, all the glance images are store under the folder “ **/var/lib/glance/images** ” on a controller node. - -Read More : [**How to Create and Delete Virtual Machine(VM) from Command line in OpenStack**][1] - -### Q:6 Tell me the command how to spin a VM from Command Line? - -Ans: We can easily spin a new VM using the following openstack command, - -``` -# openstack server create --flavor {flavor-name} --image {Image-Name-Or-Image-ID}  --nic net-id={Network-ID} --security-group {Security_Group_ID} –key-name {Keypair-Name} -``` - -### Q:7 How to list the network namespace of a tenant in OpenStack? - -Ans: Network namespace of a tenant can be listed using “ip net ns” command - -``` -~# ip netns list -qdhcp-a51635b1-d023-419a-93b5-39de47755d2d -haproxy -vrouter -``` - -### Q:8 How to execute command inside network namespace in openstack? - -Ans: Let’s assume we want to execute “ifconfig” command inside the network namespace “qdhcp-a51635b1-d023-419a-93b5-39de47755d2d”, then run the beneath command, - -Syntax : ip netns exec {network-space} - -``` -~# ip netns exec qdhcp-a51635b1-d023-419a-93b5-39de47755d2d "ifconfig" -``` - -### Q:9 How to upload and download a cloud image in Glance from command line? - -Ans: A Cloud image can be uploaded in glance from command using beneath openstack command, - -``` -~# openstack image create --disk-format qcow2 --container-format bare   --public --file {Name-Cloud-Image}.qcow2     -``` - -Use below openstack command to download a cloud image from command line, - -``` -~# glance image-download --file --progress  -``` - -### Q:10 How to reset error state of a VM into active in OpenStack env? - -Ans: There are some scenarios where some VMs went to error state and this error state can be changed into active state using below commands, - -``` -~# nova reset-state --active {Instance_id} -``` - -### Q:11 How to get list of available Floating IPs from command line? 
- -Ans: Available floating ips can be listed using the below command, - -``` -~]# openstack ip floating list | grep None | head -10 -``` - -### Q:12 How to provision a virtual machine in specific availability zone and compute Host? - -Ans: Let’s assume we want to provision a VM on the availability zone NonProduction in compute-02, use the beneath command to accomplish this, - -``` -~]# openstack server create --flavor m1.tiny --image cirros --nic net-id=e0be93b8-728b-4d4d-a272-7d672b2560a6 --security-group NonProd_SG  --key-name linuxtec --availability-zone NonProduction:compute-02  nonprod_testvm -``` - -### Q:13 How to get list of VMs which are provisioned on a specific Compute node? - -Ans: Let’s assume we want to list the vms which are provisioned on compute-0-19, use below - -Syntax: openstack server list –all-projects –long -c Name -c Host | grep -i {Compute-Node-Name} - -``` -~# openstack server list --all-projects --long -c Name -c Host | grep -i  compute-0-19 -``` - -### Q:14 How to view the console log of an openstack instance from command line? - -Ans: Console logs of an instance can be viewed from the command line using the following commands, - -First get the ID of an instance and then use the below command, - -``` -~# openstack console log show {Instance-id} -``` - -### Q:15 How to get console URL of an openstack instance? - -Ans: Console URL of an instance can be retrieved from command line using the below openstack command, - -``` -~# openstack console url show {Instance-id} -``` - -### Q:16 How to create a bootable cinder / block storage volume from command line? - -Ans: To Create a bootable cinder or block storage volume (assume 8 GB) , refer the below steps: - - * Get Image list using below - - - -``` -~# openstack image list | grep -i cirros -| 89254d46-a54b-4bc8-8e4d-658287c7ee92 | cirros  | active | -``` - - * Create bootable volume of size 8 GB using cirros image - - - -``` -~# cinder create --image-id 89254d46-a54b-4bc8-8e4d-658287c7ee92 --display-name cirros-bootable-vol  8 -``` - -### Q:17 How to list all projects or tenants that has been created in your opentstack? - -Ans: Projects or tenants list can be retrieved from the command using the below openstack command, - -``` -~# openstack project list --long -``` - -### Q:18 How to list the endpoints of openstack services? - -Ans: Openstack service endpoints are classified into three categories, - - * Public Endpoint - * Internal Endpoint - * Admin Endpoint - - - -Use below openstack command to view endpoints of each openstack service, - -``` -~# openstack catalog list -``` - -To list the endpoint of a specific service like keystone use below, - -``` -~# openstack catalog show keystone -``` - -Read More : [**Step by Step Instance Creation Flow in OpenStack**][2] - -### Q:19 In which order we should restart nova services on a controller node? - -Ans: Following order should be followed to restart the nova services on openstack controller node, - - * service nova-api restart - * service nova-cert restart - * service nova-conductor restart - * service nova-consoleauth restart - * service nova-scheduler restart - - - -### Q:20 Let’s assume DPDK ports are configured on compute node for data traffic, now how you will check the status of dpdk ports? - -Ans: As DPDK ports are configured via openvSwitch (OVS), use below commands to check the status, - -### Q:21 How to add new rules to the existing SG(Security Group) from command line in openstack? 
- -Ans: New rules to the existing SG in openstack can be added using the neutron command, - -``` -~# neutron security-group-rule-create --protocol   --port-range-min --port-range-max --direction  --remote-ip-prefix Security-Group-Name -``` - -### Q:22 How to view the OVS bridges configured on Controller and Compute Nodes? - -Ans: OVS bridges on Controller and Compute nodes can be viewed using below command, - -``` -~]# ovs-vsctl show -``` - -### Q:23 What is the role of Integration Bridge(br-int) on the Compute Node ? - -Ans: The integration bridge (br-int) performs VLAN tagging and untagging for the traffic coming from and to the instance running on the compute node. - -Packets leaving the n/w interface of an instance goes through the linux bridge (qbr) using the virtual interface qvo. The interface qvb is connected to the Linux Bridge & interface qvo is connected to integration bridge (br-int). The qvo port on integration bridge has an internal VLAN tag that gets appended to packet header when a packet reaches to the integration bridge. - -### Q:24 What is the role of Tunnel Bridge (br-tun) on the compute node? - -Ans: The tunnel bridge (br-tun) translates the VLAN tagged traffic from integration bridge to the tunnel ids using OpenFlow rules. - -br-tun (tunnel bridge) allows the communication between the instances on different networks. Tunneling helps to encapsulate the traffic travelling over insecure networks, br-tun supports two overlay networks i.e GRE and VXLAN - -### Q:25 What is the role of external OVS bridge (br-ex)? - -Ans: As the name suggests, this bridge forwards the traffic coming to and from the network to allow external access to instances. br-ex connects to the physical interface like eth2, so that floating IP traffic for tenants networks is received from the physical network and routed to the tenant network ports. - -### Q:26 What is function of OpenFlow rules in OpenStack Networking? - -Ans: OpenFlow rules is a mechanism that define how a packet will reach to destination starting from its source. OpenFlow rules resides in flow tables. The flow tables are part of OpenFlow switch. - -When a packet arrives to a switch, it is processed by the first flow table, if it doesn’t match any flow entries in the table then packet is dropped or forwarded to another table. - -### Q:27 How to display the information about a OpenFlow switch (like ports, no. of tables, no of buffer)? - -Ans: Let’s assume we want to display the information about OpenFlow switch (br-int), run the following command, - -``` -root@compute-0-15# ovs-ofctl show br-int -OFPT_FEATURES_REPLY (xid=0x2): dpid:0000fe981785c443 -n_tables:254, n_buffers:256 -capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP -actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst - 1(patch-tun): addr:3a:c6:4f:bd:3e:3b -     config:     0 -     state:      0 -     speed: 0 Mbps now, 0 Mbps max - 2(qvob35d2d65-f3): addr:b2:83:c4:0b:42:3a -     config:     0 -     state:      0 -     current:    10GB-FD COPPER -     speed: 10000 Mbps now, 0 Mbps max - ……………………………………… -``` - -### Q:28 How to display the entries for all the flows in a switch? - -Ans: Flows entries of a switch can be displayed using the command ‘ **ovs-ofctl dump-flows** ‘ - -Let’s assume we want to display flow entries of OVS integration bridge (br-int), - -### Q:29 What are Neutron Agents and how to list all neutron agents? 
- -Ans: OpenStack neutron server acts as the centralized controller, the actual network configurations are executed either on compute and network nodes. Neutron agents are software entities that carry out configuration changes on compute or network nodes. Neutron agents communicate with the main neutron service via Neuron API and message queue. - -Neutron agents can be listed using the following command, - -``` -~# openstack network agent list -c ‘Agent type’ -c Host -c Alive -c State -``` - -### Q:30 What is CPU pinning? - -Ans: CPU pinning refers to reserving the physical cores for specific virtual machine. It is also known as CPU isolation or processor affinity. The configuration is in two parts: - - * it ensures that virtual machine can only run on dedicated cores - * it also ensures that common host processes don’t run on those cores - - - -In other words we can say pinning is one to one mapping of a physical core to a guest vCPU. - --------------------------------------------------------------------------------- - -via: https://www.linuxtechi.com/openstack-interview-questions-answers/ - -作者:[Pradeep Kumar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.linuxtechi.com/author/pradeep/ -[b]: https://github.com/lujun9972 -[1]: https://www.linuxtechi.com/create-delete-virtual-machine-command-line-openstack/ -[2]: https://www.linuxtechi.com/step-by-step-instance-creation-flow-in-openstack/ diff --git a/sources/tech/20181108 Choosing a printer for Linux.md b/sources/tech/20181108 Choosing a printer for Linux.md deleted file mode 100644 index 88afdff398..0000000000 --- a/sources/tech/20181108 Choosing a printer for Linux.md +++ /dev/null @@ -1,76 +0,0 @@ -translating---geekpi - -Choosing a printer for Linux -====== -Linux offers widespread support for printers. Learn how to take advantage of it. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) - -We've made significant strides toward the long-rumored paperless society, but we still need to print hard copies of documents from time to time. If you're a Linux user and have a printer without a Linux installation disk or you're in the market for a new device, you're in luck. That's because most Linux distributions (as well as MacOS) use the Common Unix Printing System ([CUPS][1]), which contains drivers for most printers available today. This means Linux offers much wider support than Windows for printers. - -### Selecting a printer - -If you're buying a new printer, the best way to find out if it supports Linux is to check the documentation on the box or the manufacturer's website. You can also search the [Open Printing][2] database. It's a great resource for checking various printers' compatibility with Linux. - -Here are some Open Printing results for Linux-compatible Canon printers. -![](https://opensource.com/sites/default/files/uploads/linux-printer_2-openprinting.png) - -The screenshot below is Open Printing's results for a Hewlett-Packard LaserJet 4050—according to the database, it should work "perfectly." The recommended driver is listed along with generic instructions letting me know it works with CUPS, Line Printing Daemon (LPD), LPRng, and more. 
-![](https://opensource.com/sites/default/files/uploads/linux-printer_3-hplaserjet.png) - -In all cases, it's best to check the manufacturer's website and ask other Linux users before buying a printer. - -### Checking your connection - -There are several ways to connect a printer to a computer. If your printer is connected through USB, it's easy to check the connection by issuing **lsusb** at the Bash prompt. - -``` -$ lsusb -``` - -The command returns **Bus 002 Device 004: ID 03f0:ad2a Hewlett-Packard** —it's not much information, but I can tell the printer is connected. I can get more information about the printer by entering the following command: - -``` -$ dmesg | grep -i usb -``` - -The results are much more verbose. -![](https://opensource.com/sites/default/files/uploads/linux-printer_1-dmesg.png) - -If you're trying to connect your printer to a parallel port (assuming your computer has a parallel port—they're rare these days), you can check the connection with this command: - -``` -$ dmesg | grep -i parport -``` - -The information returned can help me select the right driver for my printer. I have found that if I stick to popular, name-brand printers, most of the time I get good results. - -### Setting up your printer software - -Both Fedora Linux and Ubuntu Linux contain easy printer setup tools. [Fedora][3] maintains an excellent wiki for answers to printing issues. The tools are easily launched from Settings in the GUI or by invoking **system-config-printer** on the command line. - -![](https://opensource.com/sites/default/files/uploads/linux-printer_4-printersetup.png) - -Hewlett-Packard's [HP Linux Imaging and Printing][4] (HPLIP) software, which supports Linux printing, is probably already installed on your Linux system; if not, you can [download][5] the latest version for your distribution. Printer manufacturers [Epson][6] and [Brother][7] also have web pages with Linux printer drivers and information. - -What's your favorite Linux printer? Please share your opinion in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/11/choosing-printer-linux - -作者:[Don Watkins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/don-watkins -[b]: https://github.com/lujun9972 -[1]: https://www.cups.org/ -[2]: http://www.openprinting.org/printers -[3]: https://fedoraproject.org/wiki/Printing -[4]: https://developers.hp.com/hp-linux-imaging-and-printing -[5]: https://developers.hp.com/hp-linux-imaging-and-printing/gethplip -[6]: https://epson.com/Support/wa00821 -[7]: https://support.brother.com/g/s/id/linux/en/index.html?c=us_ot&lang=en&comple=on&redirect=on diff --git a/sources/tech/20181112 A Free, Secure And Cross-platform Password Manager.md b/sources/tech/20181112 A Free, Secure And Cross-platform Password Manager.md deleted file mode 100644 index 66d34769c4..0000000000 --- a/sources/tech/20181112 A Free, Secure And Cross-platform Password Manager.md +++ /dev/null @@ -1,137 +0,0 @@ -A Free, Secure And Cross-platform Password Manager -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-Password-Manager-720x340.png) - -In this modern Internet era, you will surely have multiple accounts on lot of websites. 
It could be a personal or work email account, a social or professional network account, a GitHub account, an ecommerce account, and so on. So you should have several different passwords for different accounts. I am sure that you are already aware that using the same password for multiple accounts is a crazy and dangerous practice. If an attacker manages to breach one of your accounts, it’s highly likely he/she will try to access other accounts you have with the same password. So, it is **highly recommended to set different passwords** for different accounts.

However, remembering several passwords might be difficult. You could write them down on paper, but that is not an efficient method either, and you might lose the notes over time. This is where password managers come in. A password manager is like a repository where you can store all your passwords for different accounts and lock them down with a master password. That way, all you need to remember is the master password. We have already reviewed an open source password manager named [**KeeWeb**][1]. Today, we are going to see yet another password manager called **Buttercup**.

### About Buttercup

Buttercup is a free, open source, secure and cross-platform password manager written using **NodeJS**. It helps you store the login credentials of all your accounts in an encrypted archive, which can be kept on your local system or on remote services like Dropbox, ownCloud, NextCloud and WebDAV-based services. It uses strong **256-bit AES encryption** to protect your sensitive data with a master password, so no one can access your login details except those who have the master password. Buttercup currently supports Linux, Mac OS and Windows. It is also available as a browser extension and a mobile app, so you can access the same archive you use in the desktop application and browser extension on your Android or iOS devices as well.

### Installing Buttercup Password Manager

Buttercup is currently available as **.deb** and **.rpm** packages, a portable AppImage and tar archives for the Linux platform. Head over to the [**releases page**][2] and download and install the version you want to use.

The Buttercup desktop application is also available in [**AUR**][3], so you can install it on Arch-based systems using an AUR helper program, such as [**Yay**][4], as shown below:

```
$ yay -S buttercup-desktop
```

If you have downloaded the portable AppImage file, make it executable using the command:

```
$ chmod +x buttercup-desktop-1.11.0-x86_64.AppImage
```

Then, launch it using the command:

```
$ ./buttercup-desktop-1.11.0-x86_64.AppImage
```

Once you run this command, it will ask whether you would like to integrate the Buttercup AppImage with your system. If you choose ‘Yes’, this will add it to your applications menu and install icons. If you don’t do this, you can still launch the application by double-clicking on the AppImage or using the above command from the Terminal.

### Add archives

When you launch it for the first time, you will see the following welcome screen:
![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-1.png)

We haven’t added any archives yet, so let us add one. To do so, click on the “New Archive File” button, type the name of the archive file and choose the location to save it.
![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-2.png)

You can name it as you wish. I named mine “mypass”. 
The archive will have the extension **.bcup** and will be saved in the location of your choice.

If you have already created one, simply choose it by clicking on “Open Archive File”.

Next, Buttercup will prompt you to enter a master password for the newly created archive. It is recommended to provide a strong password to protect the archive from unauthorized access.

![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-3.png)

We have now created an archive and secured it with a master password. Similarly, you can create any number of archives and protect them with a password.

Let us go ahead and add account details to the archive.

### Adding entries (login credentials) to the archives

Once you have created or opened the archive, you will see the following screen.

![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-4.png)

It is like a vault where we are going to save the login credentials of our different online accounts. As you can see, we haven’t added any entries yet. Let us add some.

To add a new entry, click the “ADD ENTRY” button in the lower right corner and enter the account information you want to save.

![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-5-1.png)

If you want to add any extra details, there is an “ADD NEW FIELD” option right under each entry. Just click on it and add as many fields as you want to include in the entry.

Once you have added all the entries, you will see them in the right pane of the Buttercup interface.

![][6]

### Creating new groups

You can also group login details under different names for easy recognition. For example, you can group all your mail accounts under a distinct name such as “my_mails”. By default, your login details will be saved under the “General” group. To create a new group, click the “NEW GROUP” button and provide a name for the group. When creating new entries inside a new group, just click on the group name and start adding the entries as shown above.

### Manage and access login details

The data stored in the archives can be edited, moved to different groups, or deleted entirely at any time. For instance, if you want to copy a username or password to the clipboard, right-click on the entry and choose the “Copy to Clipboard” option.

![][7]

To edit/modify the data in the future, just click the “Edit” button under the selected entry.

### Save archives on remote location

By default, Buttercup will save your data on the local system. However, you can save it on remote services, such as Dropbox, ownCloud/NextCloud and WebDAV-based services.

To connect to these services, go to **File -> Connect Cloud Sources**.

![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-8.png)

Then choose the service you want to connect to and authorize it to save your data.

![][8]

You can also connect those services from the Buttercup welcome screen while adding archives.

### Import/Export

Buttercup allows you to import or export data to or from other password managers, such as 1Password, Lastpass and KeePass. You can also export your data and access it from another system or device, for example your Android phone. You can export Buttercup vaults to CSV format as well.

![][9]

Buttercup is a simple, yet mature and fully functional password manager. It has been actively developed for years. If you are ever in need of a password manager, Buttercup might be a good choice. For more details, refer to the project website and GitHub page. 
- -And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/buttercup-a-free-secure-and-cross-platform-password-manager/ - -作者:[SK][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/ -[2]: https://github.com/buttercup/buttercup-desktop/releases/latest -[3]: https://aur.archlinux.org/packages/buttercup-desktop/ -[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[5]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[6]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-6.png -[7]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-7.png -[8]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-9.png -[9]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-10.png diff --git a/sources/tech/20181112 The Source History of Cat.md b/sources/tech/20181112 The Source History of Cat.md deleted file mode 100644 index 1cb1139033..0000000000 --- a/sources/tech/20181112 The Source History of Cat.md +++ /dev/null @@ -1,94 +0,0 @@ -The Source History of Cat -====== -I once had a debate with members of my extended family about whether a computer science degree is a degree worth pursuing. I was in college at the time and trying to decide whether I should major in computer science. My aunt and a cousin of mine believed that I shouldn’t. They conceded that knowing how to program is of course a useful and lucrative thing, but they argued that the field of computer science advances so quickly that everything I learned would almost immediately be outdated. Better to pick up programming on the side and instead major in a field like economics or physics where the basic principles would be applicable throughout my lifetime. - -I knew that my aunt and cousin were wrong and decided to major in computer science. (Sorry, aunt and cousin!) It is easy to see why the average person might believe that a field like computer science, or a profession like software engineering, completely reinvents itself every few years. We had personal computers, then the web, then phones, then machine learning… technology is always changing, so surely all the underlying principles and techniques change too. Of course, the amazing thing is how little actually changes. Most people, I’m sure, would be stunned to know just how old some of the important software on their computer really is. I’m not talking about flashy application software, admittedly—my copy of Firefox, the program I probably use the most on my computer, is not even two weeks old. But, if you pull up the manual page for something like `grep`, you will see that it has not been updated since 2010 (at least on MacOS). And the original version of `grep` was written in 1974, which in the computing world was back when dinosaurs roamed Silicon Valley. People (and programs) still depend on `grep` every day. - -My aunt and cousin thought of computer technology as a series of increasingly elaborate sand castles supplanting one another after each high tide clears the beach. 
The reality, at least in many areas, is that we steadily accumulate programs that have solved problems. We might have to occasionally modify these programs to avoid software rot, but otherwise they can be left alone. `grep` is a simple program that solves a still-relevant problem, so it survives. Most application programming is done at a very high level, atop a pyramid of much older code solving much older problems. The ideas and concepts of 30 or 40 years ago, far from being obsolete today, have in many cases been embodied in software that you can still find installed on your laptop. - -I thought it would be interesting to take a look at one such old program and see how much it had changed since it was first written. `cat` is maybe the simplest of all the Unix utilities, so I’m going to use it as my example. Ken Thompson wrote the original implementation of `cat` in 1969. If I were to tell somebody that I have a program on my computer from 1969, would that be accurate? How much has `cat` really evolved over the decades? How old is the software on our computers? - -Thanks to repositories like [this one][1], we can see exactly how `cat` has evolved since 1969. I’m going to focus on implementations of `cat` that are ancestors of the implementation I have on my Macbook. You will see, as we trace `cat` from the first versions of Unix down to the `cat` in MacOS today, that the program has been rewritten more times than you might expect—but it ultimately works more or less the same way it did fifty years ago. - -### Research Unix - -Ken Thompson and Dennis Ritchie began writing Unix on a PDP 7. This was in 1969, before C, so all of the early Unix software was written in PDP 7 assembly. The exact flavor of assembly they used was unique to Unix, since Ken Thompson wrote his own assembler that added some features on top of the assembler provided by DEC, the PDP 7’s manufacturer. Thompson’s changes are all documented in [the original Unix Programmer’s Manual][2] under the entry for `as`, the assembler. - -[The first implementation][3] of `cat` is thus in PDP 7 assembly. I’ve added comments that try to explain what each instruction is doing, but the program is still difficult to follow unless you understand some of the extensions Thompson made while writing his assembler. There are two important ones. First, the `;` character can be used to separate multiple statements on the same line. It appears that this was used most often to put system call arguments on the same line as the `sys` instruction. Second, Thompson added support for “temporary labels” using the digits 0 through 9. These are labels that can be reused throughout a program, thus being, according to the Unix Programmer’s Manual, “less taxing both on the imagination of the programmer and on the symbol space of the assembler.” From any given instruction, you can refer to the next or most recent temporary label `n` using `nf` and `nb` respectively. For example, if you have some code in a block labeled `1:`, you can jump back to that block from further down by using the instruction `jmp 1b`. (But you cannot jump forward to that block from above without using `jmp 1f` instead.) - -The most interesting thing about this first version of `cat` is that it contains two names we should recognize. There is a block of instructions labeled `getc` and a block of instructions labeled `putc`, demonstrating that these names are older than the C standard library. The first version of `cat` actually contained implementations of both functions. 
The implementations buffered input so that reads and writes were not done a character at a time. - -The first version of `cat` did not last long. Ken Thompson and Dennis Ritchie were able to persuade Bell Labs to buy them a PDP 11 so that they could continue to expand and improve Unix. The PDP 11 had a different instruction set, so `cat` had to be rewritten. I’ve marked up [this second version][4] of `cat` with comments as well. It uses new assembler mnemonics for the new instruction set and takes advantage of the PDP 11’s various [addressing modes][5]. (If you are confused by the parentheses and dollar signs in the source code, those are used to indicate different addressing modes.) But it also leverages the `;` character and temporary labels just like the first version of `cat`, meaning that these features must have been retained when `as` was adapted for the PDP 11. - -The second version of `cat` is significantly simpler than the first. It is also more “Unix-y” in that it doesn’t just expect a list of filename arguments—it will, when given no arguments, read from `stdin`, which is what `cat` still does today. You can also give this version of `cat` an argument of `-` to indicate that it should read from `stdin`. - -In 1973, in preparation for the release of the Fourth Edition of Unix, much of Unix was rewritten in C. But `cat` does not seem to have been rewritten in C until a while after that. [The first C implementation][6] of `cat` only shows up in the Seventh Edition of Unix. This implementation is really fun to look through because it is so simple. Of all the implementations to follow, this one most resembles the idealized `cat` used as a pedagogic demonstration in K&R C. The heart of the program is the classic two-liner: - -``` -while ((c = getc(fi)) != EOF) - putchar(c); -``` - -There is of course quite a bit more code than that, but the extra code is mostly there to ensure that you aren’t reading and writing to the same file. The other interesting thing to note is that this implementation of `cat` only recognized one flag, `-u`. The `-u` flag could be used to avoid buffering input and output, which `cat` would otherwise do in blocks of 512 bytes. - -### BSD - -After the Seventh Edition, Unix spawned all sorts of derivatives and offshoots. MacOS is built on top of Darwin, which in turn is derived from the Berkeley Software Distribution (BSD), so BSD is the Unix offshoot we are most interested in. BSD was originally just a collection of useful programs and add-ons for Unix, but it eventually became a complete operating system. BSD seems to have relied on the original `cat` implementation up until the fourth BSD release, known as 4BSD, when support was added for a whole slew of new flags. [The 4BSD implementation][7] of `cat` is clearly derived from the original implementation, though it adds a new function to implement the behavior triggered by the new flags. The naming conventions already used in the file were adhered to—the `fflg` variable, used to mark whether input was being read from `stdin` or a file, was joined by `nflg`, `bflg`, `vflg`, `sflg`, `eflg`, and `tflg`, all there to record whether or not each new flag was supplied in the invocation of the program. These were the last command-line flags added to `cat`; the man page for `cat` today lists these flags and no others, at least on Mac OS. 4BSD was released in 1980, so this set of flags is 38 years old. 
- -`cat` would be entirely rewritten a final time for BSD Net/2, which was, among other things, an attempt to avoid licensing issues by replacing all AT&T Unix-derived code with new code. BSD Net/2 was released in 1991. This final rewrite of `cat` was done by Kevin Fall, who graduated from Berkeley in 1988 and spent the next year working as a staff member at the Computer Systems Research Group (CSRG). Fall told me that a list of Unix utilities still implemented using AT&T code was put up on a wall at CSRG and staff were told to pick the utilities they wanted to reimplement. Fall picked `cat` and `mknod`. The `cat` implementation bundled with MacOS today is built from a source file that still bears his name at the very top. His version of `cat`, even though it is a relatively trivial program, is today used by millions. - -[Fall’s original implementation][8] of `cat` is much longer than anything we have seen so far. Other than support for a `-?` help flag, it adds nothing in the way of new functionality. Conceptually, it is very similar to the 4BSD implementation. It is only longer because Fall separates the implementation into a “raw” mode and a “cooked” mode. The “raw” mode is `cat` classic; it prints a file character for character. The “cooked” mode is `cat` with all the 4BSD command-line options. The distinction makes sense but it also pads out the implementation so that it seems more complex at first glance than it actually is. There is also a fancy error handling function at the end of the file that further adds to its length. - -### MacOS - -In 2001, Apple launched Mac OS X. The launch was an important one for Apple, because Apple had spent many years trying and failing to replace its existing operating system (classic Mac OS), which had long been showing its age. There were two previous attempts to create a new operating system internally, but both went nowhere; in the end, Apple bought NeXT, Steve Jobs’ company, which had developed an operating system and object-oriented programming framework called NeXTSTEP. Apple took NeXTSTEP and used it as a basis for Mac OS X. NeXTSTEP was in part built on BSD, so using NeXTSTEP as a starting point for Mac OS X brought BSD-derived code right into the center of the Apple universe. - -The very first release of Mac OS X thus includes [an implementation][9] of `cat` pulled from the NetBSD project. NetBSD, which remains in development today, began as a fork of 386BSD, which in turn was based directly on BSD Net/2. So the first Mac OS X implementation of `cat` is Kevin Fall’s `cat`. The only thing that had changed over the intervening decade was that Fall’s error-handling function `err()` was removed and the `err()` function made available by `err.h` was used in its place. `err.h` is a BSD extension to the C standard library. - -The NetBSD implementation of `cat` was later swapped out for FreeBSD’s implementation of `cat`. [According to Wikipedia][10], Apple began using FreeBSD instead of NetBSD in Mac OS X 10.3 (Panther). But the Mac OS X implementation of `cat`, according to Apple’s own open source releases, was not replaced until Mac OS X 10.5 (Leopard) was released in 2007. The [FreeBSD implementation][11] that Apple swapped in for the Leopard release is the same implementation on Apple computers today. As of 2018, the implementation has not been updated or changed at all since 2007. - -So the Mac OS `cat` is old. As it happens, it is actually two years older than its 2007 appearance in MacOS X would suggest. 
[This 2005 change][12], which is visible in FreeBSD’s Github mirror, was the last change made to FreeBSD’s `cat` before Apple pulled it into Mac OS X. So the Mac OS X `cat` implementation, which has not been kept in sync with FreeBSD’s `cat` implementation, is officially 13 years old. There’s a larger debate to be had about how much software can change before it really counts as the same software; in this case, the source file has not changed at all since 2005. - -The `cat` implementation used by Mac OS today is not that different from the implementation that Fall wrote for the 1991 BSD Net/2 release. The biggest difference is that a whole new function was added to provide Unix domain socket support. At some point, a FreeBSD developer also seems to have decided that Fall’s `raw_args()` function and `cook_args()` should be combined into a single function called `scanfiles()`. Otherwise, the heart of the program is still Fall’s code. - -I asked Fall how he felt about having written the `cat` implementation now used by millions of Apple users, either directly or indirectly through some program that relies on `cat` being present. Fall, who is now a consultant and a co-author of the most recent editions of TCP/IP Illustrated, says that he is surprised when people get such a thrill out of learning about his work on `cat`. Fall has had a long career in computing and has worked on many high-profile projects, but it seems that many people still get most excited about the six months of work he put into rewriting `cat` in 1989. - -### The Hundred-Year-Old Program - -In the grand scheme of things, computers are not an old invention. We’re used to hundred-year-old photographs or even hundred-year-old camera footage. But computer programs are in a different category—they’re high-tech and new. At least, they are now. As the computing industry matures, will we someday find ourselves using programs that approach the hundred-year-old mark? - -Computer hardware will presumably change enough that we won’t be able to take an executable compiled today and run it on hardware a century from now. Perhaps advances in programming language design will also mean that nobody will understand C in the future and `cat` will have long since been rewritten in another language. (Though C has already been around for fifty years, and it doesn’t look like it is about to be replaced any time soon.) But barring all that, why not just keep using the `cat` we have forever? - -I think the history of `cat` shows that some ideas in computer science are very durable indeed. Indeed, with `cat`, both the idea and the program itself are old. It may not be accurate to say that the `cat` on my computer is from 1969. But I could make a case for saying that the `cat` on my computer is from 1989, when Fall wrote his implementation of `cat`. Lots of other software is just as ancient. So maybe we shouldn’t think of computer science and software development primarily as fields that disrupt the status quo and invent new things. Our computer systems are built out of historical artifacts. At some point, we may all spend more time trying to understand and maintain those historical artifacts than we spend writing new code. - -If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][13] on Twitter or subscribe to the [RSS feed][14] to make sure you know when a new post is out. 
- - --------------------------------------------------------------------------------- - -via: https://twobithistory.org/2018/11/12/cat.html - -作者:[Two-Bit History][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twobithistory.org -[b]: https://github.com/lujun9972 -[1]: https://github.com/dspinellis/unix-history-repo -[2]: https://www.bell-labs.com/usr/dmr/www/man11.pdf -[3]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-1-cat-pdp7-s -[4]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-2-cat-pdp11-s -[5]: https://en.wikipedia.org/wiki/PDP-11_architecture#Addressing_modes -[6]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-3-cat-v7-c -[7]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-4-cat-bsd4-c -[8]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-5-cat-net2-c -[9]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-6-cat-macosx-c -[10]: https://en.wikipedia.org/wiki/Darwin_(operating_system) -[11]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-7-cat-macos-10-13-c -[12]: https://github.com/freebsd/freebsd/commit/a76898b84970888a6fd015e15721f65815ea119a#diff-6e405d5ab5b47ca2a131ac7955e5a16b -[13]: https://twitter.com/TwoBitHistory -[14]: https://twobithistory.org/feed.xml -[15]: https://twitter.com/TwoBitHistory/status/1051826516844322821?ref_src=twsrc%5Etfw diff --git a/sources/tech/20181113 An introduction to Udev- The Linux subsystem for managing device events.md b/sources/tech/20181113 An introduction to Udev- The Linux subsystem for managing device events.md deleted file mode 100644 index c406c491b0..0000000000 --- a/sources/tech/20181113 An introduction to Udev- The Linux subsystem for managing device events.md +++ /dev/null @@ -1,228 +0,0 @@ -An introduction to Udev: The Linux subsystem for managing device events -====== -Create a script that triggers your computer to do a specific action when a specific device is plugged in. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourceprescription.png?itok=gFrc_GTH) - -Udev is the Linux subsystem that supplies your computer with device events. In plain English, that means it's the code that detects when you have things plugged into your computer, like a network card, external hard drives (including USB thumb drives), mouses, keyboards, joysticks and gamepads, DVD-ROM drives, and so on. That makes it a potentially useful utility, and it's well-enough exposed that a standard user can manually script it to do things like performing certain tasks when a certain hard drive is plugged in. - -This article teaches you how to create a [udev][1] script triggered by some udev event, such as plugging in a specific thumb drive. Once you understand the process for working with udev, you can use it to do all manner of things, like loading a specific driver when a gamepad is attached, or performing an automatic backup when you attach your backup drive. - -### A basic script - -The best way to work with udev is in small chunks. Don't write the entire script upfront, but instead start with something that simply confirms that udev triggers some custom event. 
- -Depending on your goal for your script, you can't guarantee you will ever see the results of a script with your own eyes, so make sure your script logs that it was successfully triggered. The usual place for log files is in the **/var** directory, but that's mostly the root user's domain. For testing, use **/tmp** , which is accessible by normal users and usually gets cleaned out with a reboot. - -Open your favorite text editor and enter this simple script: - -``` -#!/usr/bin/bash - -echo $date > /tmp/udev.log -``` - -Place this in **/usr/local/bin** or some such place in the default executable path. Call it **trigger.sh** and, of course, make it executable with **chmod +x**. - -``` -$ sudo mv trigger.sh /usr/local/bin -$ sudo chmod +x /usr/local/bin/trigger.sh -``` - -This script has nothing to do with udev. When it executes, the script places a timestamp in the file **/tmp/udev.log**. Test the script yourself: - -``` -$ /usr/local/bin/trigger.sh -$ cat /tmp/udev.log -Tue Oct 31 01:05:28 NZDT 2035 -``` - -The next step is to make udev trigger the script. - -### Unique device identification - -In order for your script to be triggered by a device event, udev must know under what conditions it should call the script. In real life, you can identify a thumb drive by its color, the manufacturer, and the fact that you just plugged it into your computer. Your computer, however, needs a different set of criteria. - -Udev identifies devices by serial numbers, manufacturers, and even vendor ID and product ID numbers. Since this is early in your udev script's lifespan, be as broad, non-specific, and all-inclusive as possible. In other words, you want first to catch nearly any valid udev event to trigger your script. - -With the **udevadm monitor** command, you can tap into udev in real time and see what it sees when you plug in different devices. Become root and try it. - -``` -$ su -# udevadm monitor -``` - -The monitor function prints received events for: - - * UDEV: the event udev sends out after rule processing - * KERNEL: the kernel uevent - - - -With **udevadm monitor** running, plug in a thumb drive and watch as all kinds of information is spewed out onto your screen. Notice that the type of event is an **ADD** event. That's a good way to identify what type of event you want. - -The **udevadm monitor** command provides a lot of good info, but you can see it with prettier formatting with the command **udevadm info** , assuming you know where your thumb drive is currently located in your **/dev** tree. If not, unplug and plug your thumb drive back in, then immediately issue this command: - -``` -$ su -c 'dmesg | tail | fgrep -i sd*' -``` - -If that command returned **sdb: sdb1** , for instance, you know the kernel has assigned your thumb drive the **sdb** label. - -Alternately, you can use the **lsblk** command to see all drives attached to your system, including their sizes and partitions. - -Now that you have established where your drive is located in your filesystem, you can view udev information about that device with this command: - -``` -# udevadm info -a -n /dev/sdb | less -``` - -This returns a lot of information. Focus on the first block of info for now. - -Your job is to pick out parts of udev's report about a device that are most unique to that device, then tell udev to trigger your script when those unique attributes are detected. - -The **udevadm info** process reports on a device (specified by the device path), then "walks" up the chain of parent devices. 
For every device found, it prints all possible attributes using a key-value format. You can compose a rule to match according to the attributes of a device plus attributes from one single parent device. - -``` -looking at device '/devices/000:000/blah/blah//block/sdb': -  KERNEL=="sdb" -  SUBSYSTEM=="block" -  DRIVER=="" -  ATTR{ro}=="0" -  ATTR{size}=="125722368" -  ATTR{stat}==" 2765 1537 5393" -  ATTR{range}=="16" -  ATTR{discard\_alignment}=="0" -  ATTR{removable}=="1" -  ATTR{blah}=="blah" -``` - -A udev rule must contain one attribute from one single parent device. - -Parent attributes are things that describe a device from the most basic level, such as it's something that has been plugged into a physical port or it is something with a size or this is a removable device. - -Since the KERNEL label of **sdb** can change depending upon how many other drives were plugged in before you plugged that thumb drive in, that's not the optimal parent attribute for a udev rule. However, it works for a proof of concept, so you could use it. An even better candidate is the SUBSYSTEM attribute, which identifies that this is a "block" system device (which is why the **lsblk** command lists the device). - -Open a file called **80-local.rules** in **/etc/udev/rules.d** and enter this code: - -``` -SUBSYSTEM=="block", ACTION=="add", RUN+="/usr/local/bin/trigger.sh" -``` - -Save the file, unplug your test thumb drive, and reboot. - -Wait, reboot on a Linux machine? - -Theoretically, you can just issue **udevadm control --reload** , which should load all rules, but at this stage in the game, it's best to eliminate all variables. Udev is complex enough, and you don't want to be lying in bed all night wondering if that rule didn't work because of a syntax error or if you just should have rebooted. So reboot regardless of what your POSIX pride tells you. - -When your system is back online, switch to a text console (with Ctl+Alt+F3 or similar) and plug in your thumb drive. If you are running a recent kernel, you will probably see a bunch of output in your console when you plug in the drive. If you see an error message such as Could not execute /usr/local/bin/trigger.sh, you probably forgot to make the script executable. Otherwise, hopefully all you see is a device was plugged in, it got some kind of kernel device assignment, and so on. - -Now, the moment of truth: - -``` -$ cat /tmp/udev.log -Tue Oct 31 01:35:28 NZDT 2035 -``` - -If you see a very recent date and time returned from **/tmp/udev.log** , udev has successfully triggered your script. - -### Refining the rule into something useful - -The problem with this rule is that it's very generic. Plugging in a mouse, a thumb drive, or someone else's thumb drive will indiscriminately trigger your script. Now is the time to start focusing on the exact thumb drive you want to trigger your script. - -One way to do this is with the vendor ID and product ID. To get these numbers, you can use the **lsusb** command. - -``` -$ lsusb -Bus 001 Device 002: ID 8087:0024 Slacker Corp. Hub -Bus 002 Device 002: ID 8087:0024 Slacker Corp. Hub -Bus 003 Device 005: ID 03f0:3307 TyCoon Corp. -Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 hub -Bus 001 Device 003: ID 13d3:5165 SBo Networks -``` - -In this example, the **03f0:3307** before **TyCoon Corp.** denotes the idVendor and idProduct attributes. You can also see these numbers in the output of **udevadm info -a -n /dev/sdb | grep vendor** , but I find the output of **lsusb** a little easier on the eyes. 
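If you would rather stay inside udevadm, you can pull both identifiers from the same report it produced earlier. This is just a convenience sketch reusing the example thumb drive from above:

```
# udevadm info -a -n /dev/sdb | grep -iE 'idVendor|idProduct'
```

The matches closest to the top of the output come from the nearest parent device that carries them, which is usually the USB device you care about.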
- -You can now include these attributes in your rule. - -``` -SUBSYSTEM=="block", ATTRS{idVendor}=="03f0", ACTION=="add", RUN+="/usr/local/bin/thumb.sh" -``` - -Test this (yes, you should still reboot, just to make sure you're getting fresh reactions from udev), and it should work the same as before, only now if you plug in, say, a thumb drive manufactured by a different company (therefore with a different idVendor) or a mouse or a printer, the script won't be triggered. - -Keep adding new attributes to further focus in on that one unique thumb drive you want to trigger your script. Using **udevadm info -a -n /dev/sdb** , you can find out things like the vendor name, sometimes a serial number, or the product name, and so on. - -For your own sanity, be sure to add only one new attribute at a time. Most mistakes I have made (and have seen other people online make) is to throw a bunch of attributes into their udev rule and wonder why the thing no longer works. Testing attributes one by one is the safest way to ensure udev can identify your device successfully. - -### Security - -This brings up the security concerns of writing udev rules to automatically do something when a drive is plugged in. On my machines, I don't even have auto-mount turned on, and yet this article proposes scripts and rules that execute commands just by having something plugged in. - -Two things to bear in mind here. - - 1. Focus your udev rules once you have them working so they trigger scripts only when you really want them to. Executing a script that blindly copies data to or from your computer is a bad idea in case anyone who happens to be carrying the same brand of thumb drive plugs it into your box. - 2. Do not write your udev rule and scripts and forget about them. I know which computers have my udev rules on them, and those boxes are most often my personal computers, not the ones I take around to conferences or have in my office at work. The more "social" a computer is, the less likely it is to get a udev rule on it that could potentially result in my data ending up on someone else's device or someone else's data or malware on my device. - - - -In other words, as with so much of the power provided by a GNU system, it is your job to be mindful of how you are wielding that power. If you abuse it or fail to treat it with respect, it very well could go horribly wrong. - -### Udev in the real world - -Now that you can confirm that your script is triggered by udev, you can turn your attention to the function of the script. Right now, it is useless, doing nothing more than logging the fact that it has been executed. - -I use udev to trigger [automated backups][2] of my thumb drives. The idea is that the master copies of my active documents are on my thumb drive (since it goes everywhere I go and could be worked on at any moment), and those master documents get backed up to my computer each time I plug the drive into that machine. In other words, my computer is the backup drive and my production data is mobile. The source code is available, so feel free to look at the code of attachup for further examples of constraining your udev tests. - -Since that's what I use udev for the most, it's the example I'll use here, but udev can grab lots of other things, like gamepads (this is useful on systems that aren't set to load the xboxdrv module when a gamepad is attached) and cameras and microphones (useful to set inputs when a specific mic is attached), so realize that it's good for a lot more than this one example. 
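For instance, a rule that loads the kernel's generic joystick module whenever a gamepad from some particular vendor shows up might look roughly like the sketch below. It is purely illustrative; the vendor ID and the module name are placeholders you would replace with values from your own hardware:

```
# /etc/udev/rules.d/81-gamepad.rules (illustrative only)
SUBSYSTEM=="input", ATTRS{idVendor}=="045e", ACTION=="add", RUN+="/sbin/modprobe joydev"
```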
- -A simple version of my backup system is a two-command process: - -``` -SUBSYSTEM=="block", ATTRS{idVendor}=="03f0", ACTION=="add", SYMLINK+="safety%n" -SUBSYSTEM=="block", ATTRS{idVendor}=="03f0", ACTION=="add", RUN+="/usr/local/bin/trigger.sh" -``` - -The first line detects my thumb drive with the attributes already discussed, then assigns the thumb drive a symlink within the device tree. The symlink it assigns is **safety%n**. The **%n** is a udev macro that resolves to whatever number the kernel gives to the device, such as sdb1, sdb2, sdb3, and so on. So **%n** would be the 1 or the 2 or the 3. - -This creates a symlink in the dev tree, so it does not interfere with the normal process of plugging in a device. This means that if you use a desktop environment that likes to auto-mount devices, you won't be causing problems for it. - -The second line runs the script. - -My backup script looks like this: - -``` -#!/usr/bin/bash - -mount /dev/safety1 /mnt/hd -sleep 2 -rsync -az /mnt/hd/ /home/seth/backups/ && umount /dev/safety1 -``` - -The script uses the symlink, which avoids the possibility of udev naming the drive something unexpected (for instance, if I have a thumb drive called DISK plugged into my computer already, and I plug in my other thumb drive also called DISK, the second one will be labeled DISK_, which would foil my script). It mounts **safety1** (the first partition of the drive) at my preferred mount point of **/mnt/hd**. - -Once safely mounted, it uses [rsync][3] to back up the drive to my backup folder (my actual script uses rdiff-backup, and yours can use whatever automated backup solution you prefer). - -### Udev is your dev - -Udev is a very flexible system and enables you to define rules and functions in ways that few other systems dare provide users. Learn it and use it, and enjoy the power of POSIX. - -This article builds on content from the [Slackermedia Handbook][4], which is licensed under the [GNU Free Documentation License 1.3][5]. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/11/udev - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://linux.die.net/man/8/udev -[2]: https://gitlab.com/slackermedia/attachup -[3]: https://opensource.com/article/17/1/rsync-backup-linux -[4]: http://slackermedia.info/handbook/doku.php?id=backup -[5]: http://www.gnu.org/licenses/fdl-1.3.html diff --git a/sources/tech/20181114 Meet TiDB- An open source NewSQL database.md b/sources/tech/20181114 Meet TiDB- An open source NewSQL database.md deleted file mode 100644 index 5c993186a3..0000000000 --- a/sources/tech/20181114 Meet TiDB- An open source NewSQL database.md +++ /dev/null @@ -1,101 +0,0 @@ -Meet TiDB: An open source NewSQL database -====== -5 key differences between MySQL and TiDB for scaling in the cloud - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows-building-containers.png?itok=0XvZLZ8k) - -As businesses adopt cloud-native architectures, conversations will naturally lead to what we can do to make the database horizontally scalable. The answer will likely be to take a closer look at [TiDB][1]. - -TiDB is an open source [NewSQL][2] database released under the Apache 2.0 License. 
Because it speaks the [MySQL][3] protocol, your existing applications will be able to connect to it using any MySQL connector, and [most SQL functionality][4] remains identical (joins, subqueries, transactions, etc.). - -Step under the covers, however, and there are differences. If your architecture is based on MySQL with Read Replicas, you'll see things work a little bit differently with TiDB. In this post, I'll go through the top five key differences I've found between TiDB and MySQL. - -### 1. TiDB natively distributes query execution and storage - -With MySQL, it is common to scale-out via replication. Typically you will have one MySQL master with many slaves, each with a complete copy of the data. Using either application logic or technology like [ProxySQL][5], queries are routed to the appropriate server (offloading queries from the master to slaves whenever it is safe to do so). - -Scale-out replication works very well for read-heavy workloads, as the query execution can be divided between replication slaves. However, it becomes a bottleneck for write-heavy workloads, since each replica must have a full copy of the data. Another way to look at this is that MySQL Replication scales out SQL processing, but it does not scale out the storage. (By the way, this is true for traditional replication as well as newer solutions such as Galera Cluster and Group Replication.) - -TiDB works a little bit differently: - - * Query execution is handled via a layer of TiDB servers. Scaling out SQL processing is possible by adding new TiDB servers, which is very easy to do using Kubernetes [ReplicaSets][6]. This is because TiDB servers are [stateless][7]; its [TiKV][8] storage layer is responsible for all of the data persistence. - - * The data for tables is automatically sharded into small chunks and distributed among TiKV servers. Three copies of each data region (the TiKV name for a shard) are kept in the TiKV cluster, but no TiKV server requires a full copy of the data. To use MySQL terminology: Each TiKV server is both a master and a slave at the same time, since for some data regions it will contain the primary copy, and for others, it will be secondary. - - * TiDB supports queries across data regions or, in MySQL terminology, cross-shard queries. The metadata about where the different regions are located is maintained by the Placement Driver, the management server component of any TiDB Cluster. All operations are fully [ACID][9] compliant, and an operation that modifies data across two regions uses a [two-phase commit][10]. - - - - -For MySQL users learning TiDB, a simpler explanation is the TiDB servers are like an intelligent proxy that translates SQL into batched key-value requests to be sent to TiKV. TiKV servers store your tables with range-based partitioning. The ranges automatically balance to keep each partition at 96MB (by default, but configurable), and each range can be stored on a different TiKV server. The Placement Driver server keeps track of which ranges are located where and automatically rebalances a range if it becomes too large or too hot. - -This design has several advantages of scale-out replication: - - * It independently scales the SQL Processing and Data Storage tiers. For many workloads, you will hit one bottleneck before the other. - - * It incrementally scales by adding nodes (for both SQL and Data Storage). - - * It utilizes hardware better. To scale out MySQL to one master and four replicas, you would have five copies of the data. 
TiDB would use only three replicas, with hotspots automatically rebalanced via the Placement Driver. - - - - -### 2. TiDB's storage engine is RocksDB - -MySQL's default storage engine has been InnoDB since 2010. Internally, InnoDB uses a [B+tree][11] data structure, which is similar to what traditional commercial databases use. - -By contrast, TiDB uses RocksDB as the storage engine with TiKV. RocksDB has advantages for large datasets because it can compress data more effectively and insert performance does not degrade when indexes can no longer fit in memory. - -Note that both MySQL and TiDB support an API that allows new storage engines to be made available. For example, Percona Server and MariaDB both support RocksDB as an option. - -### 3. TiDB gathers metrics in Prometheus/Grafana - -Tracking key metrics is an important part of maintaining database health. MySQL centralizes these fast-changing metrics in Performance Schema. Performance Schema is a set of in-memory tables that can be queried via regular SQL queries. - -With TiDB, rather than retaining the metrics inside the server, a strategic choice was made to ship the information to a best-of-breed service. Prometheus+Grafana is a common technology stack among operations teams today, and the included graphs make it easy to create your own or configure thresholds for alarms. - -![](https://opensource.com/sites/default/files/uploads/tidb_metrics.png) - -### 4. TiDB handles DDL significantly better - -If we ignore for a second that not all data definition language (DDL) changes in MySQL are online, a larger challenge when running a distributed MySQL system is externalizing schema changes on all nodes at the same time. Think about a scenario where you have 10 shards and add a column, but each shard takes a different length of time to complete the modification. This challenge still exists without sharding, since replicas will process DDL after a master. - -TiDB implements online DDL using the [protocol introduced by the Google F1 paper][12]. In short, DDL changes are broken up into smaller transition stages so they can prevent data corruption scenarios, and the system tolerates an individual node being behind up to one DDL version at a time. - -### 5. TiDB is designed for HTAP workloads - -The MySQL team has traditionally focused its attention on optimizing performance for online transaction processing ([OLTP][13]) queries. That is, the MySQL team spends more time making simpler queries perform better instead of making all or complex queries perform better. There is nothing wrong with this approach since many applications only use simple queries. - -TiDB is designed to perform well across hybrid transaction/analytical processing ([HTAP][14]) queries. This is a major selling point for those who want real-time analytics on their data because it eliminates the need for batch loads between their MySQL database and an analytics database. - -### Conclusion - -These are my top five observations based on 15 years in the MySQL world and coming to TiDB. While many of them refer to internal differences, I recommend checking out the TiDB documentation on [MySQL Compatibility][4]. It describes some of the finer points about any differences that may affect your applications. 
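If you want to see that compatibility for yourself, an ordinary MySQL client is all it takes. A minimal sketch, assuming a local cluster listening on TiDB's default port of 4000 with no root password set:

```
mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT tidb_version()\G"
```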
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/11/key-differences-between-mysql-and-tidb - -作者:[Morgan Tocker][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/morgo -[b]: https://github.com/lujun9972 -[1]: https://www.pingcap.com/docs/ -[2]: https://en.wikipedia.org/wiki/NewSQL -[3]: https://en.wikipedia.org/wiki/MySQL -[4]: https://www.pingcap.com/docs/sql/mysql-compatibility/ -[5]: https://proxysql.com/ -[6]: https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/ -[7]: https://en.wikipedia.org/wiki/State_(computer_science) -[8]: https://github.com/tikv/tikv/wiki -[9]: https://en.wikipedia.org/wiki/ACID_(computer_science) -[10]: https://en.wikipedia.org/wiki/Two-phase_commit_protocol -[11]: https://en.wikipedia.org/wiki/B%2B_tree -[12]: https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41344.pdf -[13]: https://en.wikipedia.org/wiki/Online_transaction_processing -[14]: https://en.wikipedia.org/wiki/Hybrid_transactional/analytical_processing_(HTAP) diff --git a/sources/tech/20181115 3 best practices for continuous integration and deployment.md b/sources/tech/20181115 3 best practices for continuous integration and deployment.md deleted file mode 100644 index 09e10f187b..0000000000 --- a/sources/tech/20181115 3 best practices for continuous integration and deployment.md +++ /dev/null @@ -1,138 +0,0 @@ -[Translating by ChiZelin] -3 best practices for continuous integration and deployment -====== -Learn about automating, using a Git repository, and parameterizing Jenkins pipelines. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M) - -The article covers three key topics: automating CI/CD configuration, using a Git repository for common CI/CD artifacts, and parameterizing Jenkins pipelines. - -### Terminology - -First things first; let's define a few terms. **CI/CD** is a practice that allows teams to quickly and automatically test, package, and deploy their applications. It is often achieved by leveraging a server called **[Jenkins][1]** , which serves as the CI/CD orchestrator. Jenkins listens to specific inputs (often a Git hook following a code check-in) and, when triggered, kicks off a pipeline. - -A **pipeline** consists of code written by development and/or operations teams that instructs Jenkins which actions to take during the CI/CD process. This pipeline is often something like "build my code, then test my code, and if those tests pass, deploy my application to the next highest environment (usually a development, test, or production environment)." Organizations often have more complex pipelines, incorporating tools such as artifact repositories and code analyzers, but this provides a high-level example. - -Now that we understand the key terminology, let's dive into some best practices. - -### 1\. Automation is key - -To run CI/CD on a PaaS, you need the proper infrastructure to be configured on the cluster. In this example, I will use [OpenShift][2]. - -"Hello, World" implementations of this are quite simple to achieve. Simply run **oc new-app jenkins- ** and voilà, you have a running Jenkins server ready to go. Uses in the enterprise, however, are much more complex. 
In addition to the Jenkins server, admins will often need to deploy a code analysis tool such as SonarQube and an artifact repository such as Nexus. They will then have to create pipelines to perform CI/CD and Jenkins slaves to reduce the load on the master. Most of these entities are backed by OpenShift resources that need to be created to deploy the desired CI/CD infrastructure. - -Eventually, the manual steps required to deploy your CI/CD components may need to be replicated, and you might not be the person to perform those steps. To ensure the outcome is produced quickly, error-free, and exactly as it was before, an automation method should be incorporated in the way your infrastructure is created. This can be an Ansible playbook, a Bash script, or any other way you would like to automate the deployment of CI/CD infrastructure. I have used [Ansible][3] and the [OpenShift-Applier][4] role to automate my implementations. You may find these tools valuable, or you may find something else that works better for you and your organization. Either way, you'll find that automation significantly reduces the workload required to recreate CI/CD components. - -#### Configuring the Jenkins master - -Outside of general "automation," I'd like to single out the Jenkins master and talk about a few ways admins can take advantage of OpenShift to automate Jenkins configuration. The Jenkins image from the [Red Hat Container Catalog][5] comes packaged with the [OpenShift-Sync plugin][6] installed. In the [video][7], we discuss how this plugin can be used to create Jenkins pipelines and slaves. - -To create a Jenkins pipeline, create an OpenShift BuildConfig similar to this: - -``` -apiVersion: v1 -kind: BuildConfig -... -spec:   -  source:       -    git:   -      ref: master       -      uri:   -  ...   -  strategy:     -    jenkinsPipelineStrategy:     -      jenkinsfilePath: Jenkinsfile       -    type: JenkinsPipeline -``` - -The OpenShift-Sync plugin will notice that a BuildConfig with the strategy **jenkinsPipelineStrategy** has been created and will convert it into a Jenkins pipeline, pulling from the Jenkinsfile specified by the Git source. An inline Jenkinsfile can also be used instead of pulling from one from a Git repository. See the [documentation][8] for more information. - -To create a Jenkins slave, create an OpenShift ImageStream that starts with the following definition: - -``` -apiVersion: v1 -kind: ImageStream -metadata: -  annotations: -    slave-label: jenkins-slave -    labels: -      role: jenkins-slave -… -``` - -Notice the metadata defined in this ImageStream. The OpenShift-Sync plugin will convert any ImageStream with the label **role: jenkins-slave** into a Jenkins slave. The Jenkins slave will be named after the value from the **slave-label** annotation. - -ImageStreams work just fine for simple Jenkins slave configurations, but some teams will find it necessary to configure nitty-gritty details such as resource limits, readiness and liveness probes, and instance caps. This is where ConfigMaps come into play: - -``` -apiVersion: v1 -kind: ConfigMap -metadata: -  labels: -  role: jenkins-slave -... -data: -  template1: |- -    -``` - -Notice that the **role: jenkins-slave** label is still required to convert the ConfigMap into a Jenkins slave. The **Kubernetes pod template** consists of a lengthy bit of XML that will configure every detail to your organization's liking. 
To view this XML, as well as more information on converting ImageStreams and ConfigMaps into Jenkins slaves, see the [documentation][9]. - -Notice with the three examples shown above that none of the operations required an administrator to make manual changes to the Jenkins console. By using OpenShift resources, Jenkins can be configured in a way that is easily automated. - -### 2\. Sharing is caring - -The second best practice is maintaining a Git repository of common CI/CD artifacts. The main idea is to prevent teams from reinventing the wheel. Imagine your team needs to perform a blue/green deployment to an OpenShift environment as part of the pipeline's CD phase. The members of your team responsible for writing the pipeline may not be OpenShift experts, nor may they have the bandwidth to write this functionality from scratch. Luckily, somebody has already written a function that incorporates that functionality in a common CI/CD repository, so your team can use that function instead of spending time writing one. - -To take this a step further, your organization may decide to maintain entire pipelines. You may find that teams are writing pipelines with similar functionality. It would be more efficient for those teams to use a parameterized pipeline from a common repository as opposed to writing their own from scratch. - -### 3\. Less is more - -As I hinted in the previous section, the third and final best practice is to parameterize your CI/CD pipelines. Parameterization will prevent an over-abundance of pipelines, making your CI/CD system easier to maintain. Imagine I have multiple regions where I can deploy my application. Without parameterization, I would need a separate pipeline for each region. - -To parameterize a pipeline written as an OpenShift build config, add the **env** stanza to the configuration: - -``` -... -spec: -  ... -  strategy: -    jenkinsPipelineStrategy: -      env: -      - name: REGION -        value: US-West           -      jenkinsfilePath: Jenkinsfile       -    type: JenkinsPipeline -``` - -With this configuration, I can pass the **REGION** parameter the pipeline to deploy my application to the specified region. - -The [video][7] provides a more substantial case where parameterization is a must. Some organizations decide to split up their CI/CD pipelines into separate CI and CD pipelines, usually, because there is some sort of approval process that happens before deployment. Imagine I have four images and three different environments to deploy to. Without parameterization, I would need 12 CD pipelines to allow all deployment possibilities. This can get out of hand very quickly. To make maintenance of the CD pipeline easier, organizations would find it better to parameterize the image and environment to allow one pipeline to perform the work of many. - -### Summary - -CI/CD at the enterprise level tends to become more complex than many organizations anticipate. Luckily, with Jenkins, there are many ways to seamlessly provide automation of your setup. Maintaining a Git repository of common CI/CD artifacts will also ease the effort, as teams can pull from maintained dependencies instead of writing their own from scratch. Finally, parameterization of your CI/CD pipelines will reduce the number of pipelines that will have to be maintained. - -If you've found other practices you can't do without, please share them in the comments. 
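One last concrete note on the parameterization example above: once the **env** stanza is in place, running the same pipeline for a different region is a single command. The BuildConfig name here is hypothetical:

```
oc start-build my-app-pipeline -e REGION=US-East
```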
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/11/best-practices-cicd - -作者:[Austin Dewey][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/adewey -[b]: https://github.com/lujun9972 -[1]: https://jenkins.io/ -[2]: https://www.openshift.com/ -[3]: https://docs.ansible.com/ -[4]: https://github.com/redhat-cop/openshift-applier -[5]: https://access.redhat.com/containers/?tab=overview#/registry.access.redhat.com/openshift3/jenkins-2-rhel7 -[6]: https://github.com/openshift/jenkins-sync-plugin -[7]: https://www.youtube.com/watch?v=zlL7AFWqzfw -[8]: https://docs.openshift.com/container-platform/3.11/dev_guide/dev_tutorials/openshift_pipeline.html#the-pipeline-build-config -[9]: https://docs.openshift.com/container-platform/3.11/using_images/other_images/jenkins.html#configuring-the-jenkins-kubernetes-plug-in diff --git a/sources/tech/20181115 How to install a device driver on Linux.md b/sources/tech/20181115 How to install a device driver on Linux.md deleted file mode 100644 index 517d984409..0000000000 --- a/sources/tech/20181115 How to install a device driver on Linux.md +++ /dev/null @@ -1,147 +0,0 @@ -Translating by Jamskr - -How to install a device driver on Linux -====== -Learn how Linux drivers work and how to use them. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc) - -One of the most daunting challenges for people switching from a familiar Windows or MacOS system to Linux is installing and configuring a driver. This is understandable, as Windows and MacOS have mechanisms that make this process user-friendly. For example, when you plug in a new piece of hardware, Windows automatically detects it and shows a pop-up window asking if you want to continue with the driver's installation. You can also download a driver from the internet, then just double-click it to run a wizard or import the driver through Device Manager. - -This process isn't as easy on a Linux operating system. For one reason, Linux is an open source operating system, so there are [hundreds of Linux distribution variations][1] . This means it's impossible to create one how-to guide that works for all Linux distros. Each Linux operating system handles the driver installation process a different way. - -Second, most default Linux drivers are open source and integrated into the system, which makes installing any drivers that are not included quite complicated, even though most hardware devices can be automatically detected. Third, license policies vary among the different Linux distributions. For example, [Fedora prohibits][2] including drivers that are proprietary, legally encumbered, or that violate US laws. And Ubuntu asks users to [avoid using proprietary or closed hardware][3]. - -To learn more about how Linux drivers work, I recommend reading [An Introduction to Device Drivers][4] in the book Linux Device Drivers. - -### Two approaches to finding drivers - -#### 1\. User interfaces - -If you are new to Linux and coming from the Windows or MacOS world, you'll be glad to know that Linux offers ways to see whether a driver is available through wizard-like programs. Ubuntu offers the [Additional Drivers][5] option. 
Other Linux distributions provide helper programs, like [Package Manager for GNOME][6], that you can check for available drivers. - -#### 2\. Command line - -What if you can't find a driver through your nice user interface application? Or you only have access through the shell with no graphic interface whatsoever? Maybe you've even decided to expand your skills by using a console. You have two options: - - A. **Use a repository** -This is similar to the [**homebrew**][7] command in MacOS.** ** By using **yum** , **dnf** , **apt-get** , etc., you're basically adding a repository and updating the package cache. - - - B. **Download, compile, and build it yourself** -This usually involves downloading a package directly from a website or using the **wget** command and running the configuration file and Makefile to install it. This is beyond the scope of this article, but you should be able to find online guides if you choose to go this route. - - - -### Check if a driver is already installed - -Before jumping further into installing a driver in Linux, let's look at some commands that will determine whether the driver is already available on your system. - -The [**lspci**][8] command shows detailed information about all PCI buses and devices on the system: - -``` -$ lscpci -``` - -Or with **grep** : - -``` -$ lscpci | grep SOME_DRIVER_KEYWORD -``` - -For example, you can type **lspci | grep SAMSUNG** if you want to know if a Samsung driver is installed. - -The [**dmesg**][9] command shows all device drivers recognized by the kernel: - -``` -$ dmesg -``` - -Or with **grep** : - -``` -$ dmesg | grep SOME_DRIVER_KEYWORD -``` - -Any driver that's recognized will show in the results. - -If nothing is recognized by the **dmesg** or **lscpi** commands, try these two commands to see if the driver is at least loaded on the disk: - -``` -$ /sbin/lsmod -``` - -and - -``` -$ find /lib/modules -``` - -Tip: As with **lspci** or **dmesg** , append **| grep** to either command above to filter the results. - -If a driver is recognized by those commands but not by **lscpi** or **dmesg** , it means the driver is on the disk but not in the kernel. In this case, load the module with the **modprobe** command: - -``` -$ sudo modprobe MODULE_NAME -``` - -Run as this command as **sudo** since this module must be installed as a root user. - -### Add the repository and install - -There are different ways to add the repository through **yum** , **dnf** , and **apt-get** ; describing them all is beyond the scope of this article. To make it simple, this example will use **apt-get** , but the idea is similar for the other options. - -**1\. Delete the existing repository, if it exists.** - -``` -$ sudo apt-get purge NAME_OF_DRIVER* -``` - -where **NAME_OF_DRIVER** is the probable name of your driver. You can also add pattern match to your regular expression to filter further. - -**2\. Add the repository to the repolist, which should be specified in the driver guide.** - -``` -$ sudo add-apt-repository REPOLIST_OF_DRIVER -``` - -where **REPOLIST_OF_DRIVER** should be specified from the driver documentation (e.g., **epel-list** ). - -**3\. Update the repository list.** - -``` -$ sudo apt-get update -``` - -**4\. Install the package.** - -``` -$ sudo apt-get install NAME_OF_DRIVER -``` - -**5\. Check the installation.** - -Run the **lscpi** command (as above) to check that the driver was installed successfully. 
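Putting the five steps together, a hedged end-to-end example looks like this. The package and repository names are placeholders; take the real ones from your driver's documentation:

```
$ sudo apt-get purge vendor-driver*
$ sudo add-apt-repository ppa:vendor/driver-repo
$ sudo apt-get update
$ sudo apt-get install vendor-driver
$ lspci -k | grep -iA3 SOME_DRIVER_KEYWORD
```

The **-k** option asks **lspci** to print the kernel driver actually in use for each device, which is a quick way to confirm the new driver is active.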
- - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/11/how-install-device-driver-linux - -作者:[Bryant Son][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/brson -[b]: https://github.com/lujun9972 -[1]: https://en.wikipedia.org/wiki/List_of_Linux_distributions -[2]: https://fedoraproject.org/wiki/Forbidden_items?rd=ForbiddenItems -[3]: https://www.ubuntu.com/licensing -[4]: https://www.xml.com/ldd/chapter/book/ch01.html -[5]: https://askubuntu.com/questions/47506/how-do-i-install-additional-drivers -[6]: https://help.gnome.org/users/gnome-packagekit/stable/add-remove.html.en -[7]: https://brew.sh/ -[8]: https://en.wikipedia.org/wiki/Lspci -[9]: https://en.wikipedia.org/wiki/Dmesg diff --git a/sources/tech/20181117 How to enter single user mode in SUSE 12 Linux.md b/sources/tech/20181117 How to enter single user mode in SUSE 12 Linux.md deleted file mode 100644 index 10377fc2eb..0000000000 --- a/sources/tech/20181117 How to enter single user mode in SUSE 12 Linux.md +++ /dev/null @@ -1,54 +0,0 @@ -How to enter single user mode in SUSE 12 Linux? -====== -Short article to learn how to enter single user mode in SUSE 12 Linux server. - -![How to enter single user mode in SUSE 12 Linux][1] - -In this short article we will walk you through steps which demonstrate how to enter single user mode in SUSE 12 Linux. Single user mode is always preferred when you are troubleshooting major issues with your system. Single user mode disables networking and no other users are logged in, you rule out many situations of multi user system and it helps you in troubleshooting fast. One of the most popular use of single user mode is to [reset forgotten root password][2]. - -### 1\. Halt boot process - -First of all you need have console of your machine to get into single user mode. If its VM then VM console, if its physical machine then you need its iLO/serial console connected. Reboot system and halt automatic booting of kernel at grub boot menu by pressing any key. - -![Kernel selection menu at boot in SUSE 12][3] - -### 2\. Edit boot option of kernel - -Once you are on above screen, press `e` on selected kernel (which is normally your preferred latest kernel) to update its boot options. You will see be below screen. - -![grub2 edits in SUSE 12][4] - -Now, scroll down to your booting kernel line and add `init=/bin/bash` at the end of the line as shown below. - -![Edit to boot in single user shell][5] - -### 3\. Boot kernel with edited entry - -Now press `Ctrl-x` or `F10` to boot this edited kernel. Kernel will be booted in single user mode and you will be presented with hash prompt i.e. root access to server. At this point of time, your root file system is mounted in read only mode. So any changes you are doing to system wont be saved. - -Run below command to remount root filesystem as re-writable. - -``` -kerneltalks:/ # mount -o remount,rw / -``` - -And you are good to go! Go ahead and do your necessary actions in single user mode. Dont forget to reboot server to boot into normal multiuser mode once you are done. 
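As a concrete example of the most common use mentioned earlier, resetting a forgotten root password, a single user mode session might look like the sketch below. The prompt matches the article's example host:

```
kerneltalks:/ # mount -o remount,rw /     # make root filesystem writable (as shown above)
kerneltalks:/ # passwd root               # set a new root password
kerneltalks:/ # sync                      # flush the change to disk
kerneltalks:/ # reboot -f                 # force reboot back into normal multiuser mode
```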
- --------------------------------------------------------------------------------- - -via: https://kerneltalks.com/howto/how-to-enter-single-user-mode-in-suse-12-linux/ - -作者:[kerneltalks][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://kerneltalks.com -[b]: https://github.com/lujun9972 -[1]: https://a4.kerneltalks.com/wp-content/uploads/2018/11/How-to-enter-single-user-mode-in-SUSE-12-Linux.png -[2]: https://kerneltalks.com/linux/recover-forgotten-root-password-rhel/ -[3]: https://a1.kerneltalks.com/wp-content/uploads/2018/11/Grub-menu-in-SUSE-12.png -[4]: https://a3.kerneltalks.com/wp-content/uploads/2018/11/grub2-editor.png -[5]: https://a4.kerneltalks.com/wp-content/uploads/2018/11/Edit-to-boot-in-single-user-shell.png diff --git a/sources/tech/20181122 Getting started with Jenkins X.md b/sources/tech/20181122 Getting started with Jenkins X.md new file mode 100644 index 0000000000..1c2aab6903 --- /dev/null +++ b/sources/tech/20181122 Getting started with Jenkins X.md @@ -0,0 +1,148 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (Getting started with Jenkins X) +[#]: via: (https://opensource.com/article/18/11/getting-started-jenkins-x) +[#]: author: (Dave Johnson https://opensource.com/users/snoopdave) +[#]: url: ( ) + +Getting started with Jenkins X +====== +Jenkins X provides continuous integration, automated testing, and continuous delivery to Kubernetes. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv) + +[Jenkins X][1] is an open source system that offers software developers continuous integration, automated testing, and continuous delivery, known as CI/CD, in Kubernetes. Jenkins X-managed projects get a complete CI/CD process with a Jenkins pipeline that builds and packages project code for deployment to Kubernetes and access to pipelines for promoting projects to staging and production environments. + +Developers are already benefiting from running "classic" open source Jenkins and CloudBees Jenkins on Kubernetes, thanks in part to the Jenkins Kubernetes plugin, which allows you to dynamically spin-up Kubernetes pods to run Jenkins build agents. Jenkins X adds what's missing from Jenkins: comprehensive support for continuous delivery and managing the promotion of projects to preview, staging, and production environments running in Kubernetes. + +This article is a high-level explanation of how Jenkins X works; it assumes you have some knowledge of Kubernetes and classic Jenkins. + +### What you get with Jenkins X + +If you're running on one of the major cloud providers (Amazon Elastic Container Service for Kubernetes, Google Kubernetes Engine, or Microsoft Azure Kubernetes Service), installing and deploying Jenkins X is easy. Download the Jenkins X command-line interface and run the **jx create cluster** command. You'll be prompted for the necessary information and, if you take the defaults, Jenkins X will create a starter-size Kubernetes cluster and install Jenkins X. + +When you deploy Jenkins X, a number of services are put in motion to watch your Git repositories and respond by building, testing, and promoting your applications to staging, production, and other environments you define. 
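For reference, the whole bootstrap can be reduced to a couple of commands. The provider and cluster name here are assumptions, and jx prompts for anything you leave out:

```
jx create cluster gke --cluster-name=jx-demo   # create a cluster and install Jenkins X on it
jx get environments                            # list the environments Jenkins X manages
```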
Jenkins X also deploys a set of supporting services, including [Jenkins][2], [Docker Registry][3], [Chart Museum][4], and [Monocular][5] to manage [Helm][6] charts, and [Nexus][7], which serves as a Maven and npm repository.

The Jenkins X deployment also creates two Git repositories, one for your staging environment and one for production. These are in addition to the Git repositories you use to manage your project source code. Jenkins X uses these repositories to manage what is deployed to each environment, and promotions are done via Git pull requests (PRs)—this approach is known as [GitOps][8]. Each repository contains a Helm chart that specifies the applications to be deployed to the corresponding environment. Each repository also has a Jenkins pipeline to handle promotions.

### Creating a new project with Jenkins X

To create a new project with Jenkins X, use the **jx create quickstart** command. If you don't specify any options, jx will prompt you to select a project name and a platform—which can be just about anything. SpringBoot, Go, Python, Node, ASP.NET, Rust, Angular, and React are all supported, and the list keeps growing. Once you have chosen your project name and platform, Jenkins X will:

  * Create a new project that includes a "hello-world"-style web project
  * Add the appropriate type of makefile or build script for the chosen platform
  * Add a Jenkinsfile to manage promotions to staging and production environments
  * Add a Dockerfile and Helm charts, created via [Draft][9]
  * Add a [Skaffold][10] configuration for deploying the application to Kubernetes
  * Create a Git repository and push the new project code there

Next, a webhook from Git will notify Jenkins X that a project changed, and it will run your project's Jenkins pipeline to build and push your Docker image and Helm charts.

Finally, the pipeline will submit a PR to the staging environment's Git repository with the changes needed to promote the application.

Once the PR is merged, the staging pipeline will run to apply those changes and do the promotion. A couple of minutes after creating your project, you'll have end-to-end CI/CD, and your project will be running in staging and available for use.

![Developer commits changes, project deployed to staging][12]

Developer commits changes, project deployed to the staging environment.

The figure above illustrates the repositories, registries, and pipelines and how they interact in a Jenkins X promotion to staging. Here are the steps:

  1. The developer commits and pushes the change to the project's Git repository
  2. Jenkins X is notified and runs the project's Jenkins pipeline in a Docker image that includes the project's language and supporting frameworks
  3. The project pipeline builds, tests, and pushes the project's Helm chart to Chart Museum and its Docker image to the registry
  4. The project pipeline creates a PR with changes needed to add the project to the staging environment
  5. Jenkins X automatically merges the PR to Master
  6. Jenkins X is notified and runs the staging pipeline
  7. The staging pipeline runs Helm, which deploys the environment, pulling Helm charts from Chart Museum and Docker images from the Docker registry. Kubernetes creates the project's resources, typically a pod, service, and ingress.

### Importing your existing projects into Jenkins X

When you import a project via **jx import**, Jenkins X adds the things needed for your project to be deployed to Kubernetes and participate in CI/CD.
It will add a Jenkins pipeline, Helm charts, and a Skaffold configuration for deploying the application to Kubernetes. Jenkins X will create a Git repository and push the changes there. Next, a webhook from Git will notify Jenkins X that a project changed, and promotion to staging will happen as described above for new projects.

### Promoting your project to production

To promote a version of your project to the production environment, use the **jx promote** command. This command will prepare a Git PR that contains the Helm chart changes needed to deploy into the production environment and submit this request to the production environment's Git repository. Once the request is manually approved, Jenkins X will run the production pipeline to deploy your project via Helm.

![Promoting project to production][14]

Developer promotes the project to production.

This figure illustrates the repositories, registries, and pipelines and how they interact in a Jenkins X promotion to production. Here are the steps:

  1. The developer runs the **jx promote** command to promote a project to production
  2. Jenkins X creates a PR with changes needed to add the project to the production environment
  3. The developer manually approves the PR, and it is merged to Master
  4. Jenkins X is notified and runs the production pipeline
  5. The production pipeline runs Helm, which deploys the environment, pulling Helm charts from Chart Museum and Docker images from the Docker registry. Kubernetes creates the project's resources, typically a pod, service, and ingress.

### Other features of Jenkins X

Other interesting and appealing features of Jenkins X include:

#### Preview environments

When you create a PR to add a new feature to your project, you can ask Jenkins X to create a preview environment so you can make your new feature available for preview and testing before the PR is merged.

#### Extensions

It is possible to create extensions to Jenkins X. An extension is code that runs at specific times in the CI/CD process. An extension can provide code that runs when the extension is installed, uninstalled, as well as before and after each pipeline.

#### Serverless Jenkins

Instead of running the Jenkins web application, which continually consumes CPU and memory resources, you can run Jenkins only when you need it. During the past year, the Jenkins community created a version of Jenkins that can run classic Jenkins pipelines via the command line with the configuration defined by code instead of HTML forms.

This capability is now available in Jenkins X. When you create a Jenkins X cluster, you can choose to use Serverless Jenkins. If you do, Jenkins X will deploy [Prow][15] to handle webhooks from GitHub and [Knative][16] to run Jenkins pipelines.
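
Before moving on to the limitations, it may help to see the commands discussed above in one place. This is only an illustrative sketch of a typical session; the version number is made up, and exact flags can differ between jx releases:

```
# create a starter Kubernetes cluster and install Jenkins X (interactive prompts follow)
jx create cluster

# scaffold a brand new project with CI/CD already wired up
jx create quickstart

# or bring an existing project under Jenkins X management
jx import

# later, promote a tested version from staging to production
jx promote --version 1.0.1 --env production
```
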
+ +### Jenkins X limitations + +Jenkins X also has some limitations that should be considered: + + * **Jenkins X is currently limited to projects that use Git:** Jenkins X is opinionated about CI/CD and assumes everybody wants to run and deploy software to Kubernetes and everybody is happy to use Git for source code and defining environments. Also, the Serverless Jenkins feature currently works only with GitHub. + * **Jenkins X is limited to Kubernetes:** It is true that Jenkins X can run automated builds, testing, and continuous integration for any type of software, but the continuous delivery part targets a Kubernetes namespace managed by Jenkins X. + * **Jenkins X requires cluster-admin level Kubernetes access:** Jenkins X needs cluster-admin access so it can define and manage a Kubernetes custom resource definition. Hopefully, this is a temporary limitation, because it could be a show-stopper for some. + + + +### Conclusions + +Jenkins X looks to be a good way to implement CI/CD for Kubernetes, and I'm looking forward to putting it to the test in production. Using Jenkins X is also a good way to learn about some useful open source tools for deploying to Kubernetes, including Helm, Draft, Skaffold, Prow, and more. These are things you might want to use even if you decide Jenkins X is not for you. If you're deploying to Kubernetes, take Jenkins X for a spin. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/getting-started-jenkins-x + +作者:[Dave Johnson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/snoopdave +[b]: https://github.com/lujun9972 +[1]: https://jenkins-x.io/ +[2]: https://jenkins.io/ +[3]: https://docs.docker.com/registry/ +[4]: https://github.com/helm/chartmuseum +[5]: https://github.com/helm/monocular +[6]: https://helm.sh +[7]: https://www.sonatype.com/nexus-repository-oss +[8]: https://www.weave.works/blog/gitops-operations-by-pull-request +[9]: https://draft.sh/ +[10]: https://github.com/GoogleContainerTools/skaffold +[11]: /file/414941 +[12]: https://opensource.com/sites/default/files/uploads/jenkinsx_fig1.png (Developer commits changes, project deployed to staging) +[13]: /file/414946 +[14]: https://opensource.com/sites/default/files/uploads/jenkinsx_fig2.png (Promoting project to production) +[15]: https://github.com/kubernetes/test-infra/tree/master/prow +[16]: https://cloud.google.com/knative/ diff --git a/sources/tech/20181127 Bio-Linux- A stable, portable scientific research Linux distribution.md b/sources/tech/20181127 Bio-Linux- A stable, portable scientific research Linux distribution.md new file mode 100644 index 0000000000..a38acec9da --- /dev/null +++ b/sources/tech/20181127 Bio-Linux- A stable, portable scientific research Linux distribution.md @@ -0,0 +1,79 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (Bio-Linux: A stable, portable scientific research Linux distribution) +[#]: via: (https://opensource.com/article/18/11/bio-linux) +[#]: author: (Matt Calverley https://opensource.com/users/mattcalverley) +[#]: url: ( ) + +Bio-Linux: A stable, portable scientific research Linux distribution +====== +Linux distro's integrated software approach offers powerful bioinformatic data analysis with a familiar look and feel. 
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_science.png?itok=WDKARWGV)

Bio-Linux was introduced and detailed in a [Nature Biotechnology paper in July 2006][1]. The distribution was a group effort by the Natural Environment Research Council in the UK. As the creators and authors point out, the analysis demands of high-throughput “-omic” (genomic, proteomic, metabolomic) science have necessitated the development of integrated computing solutions to analyze the resultant mountains of experimental data.

From this need, Bio-Linux was born. The distribution, [according to its creators][2], serves as a “free bioinformatics workstation platform that can be installed on anything from a laptop to a large server.” The current distro version, Bio-Linux 8, is built on an Ubuntu 14.04 LTS base. Thus, the general look and feel of Bio-Linux is similar to that of Ubuntu.

In my own work as a research immunologist, I can attest to both the need for and success of the integrated software approach in Bio-Linux's design and development. Bio-Linux functions as a true turnkey solution to the data pipeline requirements of modern science. As the website mentions, Bio-Linux includes [more than 250 pre-installed software packages][3], many of which are specific to the requirements of bioinformatic data analysis.

The power of this approach becomes immediately evident when you try to duplicate the software installation process under another operating system. Integrating all software components and installing all required dependencies is immensely time-consuming, and in some instances is not even possible outside of the Linux operating system. The Bio-Linux distro provides a portable, stable, integrated environment with pre-installed software sufficient to begin a vast array of bioinformatic analysis tasks.

By now you’re probably saying, “I’m sold—how do I get this amazing distro?”

I’m glad you asked. I'll start by saying that there is excellent documentation on the Bio-Linux website. This [documentation][4] covers both installation instructions and a very thorough overview of using the distro.

The distro can be installed and run locally, run off a CD/DVD or USB, installed on a server, or run out of a virtual machine environment. To begin the installation process for local installation, [download the disk image or ISO][5] for the Bio-Linux distro. The disk image is a 3.3GB file, and depending on your internet download speed, this may be a good time to get a cup of coffee or take a nice nap.

Once the ISO has been downloaded, the Bio-Linux developers recommend using [UNetBootin][6], a freely available cross-platform software package used to make bootable USBs. There is a link provided for UNetBootin on the Bio-Linux website. I can attest to the effectiveness of UNetBootin in both Mac and Linux operating systems.

On Unix family operating systems (Mac OS and Linux), it is also possible to make a bootable USB from the command line using the `dd` command (replace the placeholders with your actual ISO and USB device paths):

```
sudo umount "USB location"

sudo dd bs=4M if="ISO location" of="USB location" conv=fdatasync
```

Regardless of the method you use, this might be another good time for a coffee break.

At this point in my installation, UNetBootin appeared to freeze at the `squashfs` file transfer during bootable USB creation. However, a quick check of the Ubuntu disks application confirmed that the file was still being written to the USB.
In other words, be patient—it takes quite some time to make the bootable USB. + +Once you’ve had your coffee and you have a finished USB in hand, you are ready to use Bio-Linux. As the Bio-Linux website points out, if you are trying to use a bootable USB with a Mac computer (particularly newer hardware versions), you may not be able to boot from the USB. There are workarounds, but they involve configuring the system for dual boot. Likewise, on Windows-based machines, it may be necessary to make changes to the boot order and possibly the secure boot settings for the machine from within BIOS. + +From this point, how you use the distro is up to you. You can run the distro from the USB to test it. You can install the distro to your computer. You can even follow the instructions on the Bio-Linux website to make a VM instance of the distro or run it on a server. Regardless of how you use it, you have a high-powered bioinformatic data analysis workstation at your disposal. + +Maybe you have a professional need for such a workstation, but even if you never use Bio-Linux as a professional researcher, it could provide a great resource for biology teaching professionals at all levels to introduce students to modern bioinformatics principles. For the price of a laptop and a USB, every school can have an in silico teaching resource to complement classroom lessons in the “-omics” age. Your only limitations are your creativity and the performance of your hardware. + +### More on Linux + +As an open source operating system with strong community support, the Linux kernel shares many of the strengths common to other successful open source software endeavors. Linux tends to be both stable and amenable to customization. It is also fairly hardware-agnostic, capable of running alongside other operating systems on a wide array of hardware configurations. In fact, installing Linux is a common method of regaining usability from dated hardware that is incapable of running other modern operating systems. Linux is also highly portable and can be run from any bootable external storage device, such as a USB drive, without the need to permanently install the operating system. + +It is this combination of stability, customizability, and portability that initially drew me to Linux. Each Linux operating system variant is referred to as a distribution (or distro), and it seems as though there is a Linux distribution for every imaginable computing scenario or desire. The options can actually be rather intimidating, and I suspect they may often discourage people from trying Linux. + +“How many different distributions can there possibly be?” you might wonder. If you have a few minutes, or even a few hours, have a look at [DistroWatch.com][7]. As its name implies, this site is devoted to the cataloging of all things Linux distribution-related. For visual learners, there is an amazing [Linux family tree][8] that really puts it into perspective. + +While [entire books][9] are devoted to the topic of Linux distributions, the differences often depend on what software is included in the base installation, how the software is managed, and graphical differences affecting the “look and feel” of the distribution. Certainly, there are also subtleties of hardware compatibility, speed, and stability. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/bio-linux + +作者:[Matt Calverley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mattcalverley +[b]: https://github.com/lujun9972 +[1]: https://www.nature.com/articles/nbt0706-801 +[2]: http://environmentalomics.org/bio-linux/ +[3]: http://environmentalomics.org/bio-linux-software-list/ +[4]: http://nebc.nerc.ac.uk/downloads/courses/Bio-Linux/bl8_latest.pdf +[5]: http://environmentalomics.org/bio-linux-download/ +[6]: https://unetbootin.github.io/ +[7]: https://distrowatch.com/ +[8]: https://distrowatch.com/images/other/distro-family-tree.png +[9]: https://www.amazon.com/Introducing-Linux-Distros-Dieguez-Castro/dp/1484213939 diff --git a/sources/tech/20181128 Building custom documentation workflows with Sphinx.md b/sources/tech/20181128 Building custom documentation workflows with Sphinx.md new file mode 100644 index 0000000000..7d9137fa40 --- /dev/null +++ b/sources/tech/20181128 Building custom documentation workflows with Sphinx.md @@ -0,0 +1,126 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (Building custom documentation workflows with Sphinx) +[#]: via: (https://opensource.com/article/18/11/building-custom-workflows-sphinx) +[#]: author: ([Mark Meyer](https://opensource.com/users/ofosos)) +[#]: url: ( ) + +Building custom documentation workflows with Sphinx +====== +Create documentation the way that works best for you. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S) + +[Sphinx][1] is a popular application for creating documentation, similar to JavaDoc or Jekyll. However, Sphinx's reStructured Text input allows for a higher degree of customization than those other tools. + +This tutorial will explain how to customize Sphinx to suit your workflow. You can follow along using sample code on [GitHub][2]. + +### Some definitions + +Sphinx goes far beyond just enabling you to style text with predefined tags. It allows you to shape and automate your documentation by defining new roles and directives. A role is a single word element that usually is rendered inline in your documentation, while a directive can contain more complex content. These can be contained in a domain. + +A Sphinx domain is a collection of directives and roles as well as a few other things, such as an index definition. Your next Sphinx domain could be a specific programming language (Sphinx was developed to create Python's documentation). Or you might have a command line tool that implements the same command pattern (e.g., **tool \--args**) over and over. You can document it with a custom domain, adding directives and indexes along the way. + +Here's an example from our **recipe** domain: + +``` +The recipe contains `tomato` and `cilantro`. + +.. rcp:recipe:: TomatoSoup +  :contains: tomato cilantro salt pepper   + +  This recipe is a tasty tomato soup, combine all ingredients +  and cook. +``` + +Now that we've defined the recipe **TomatoSoup** , we can reference it anywhere in our documentation using the custom role **refef**. For example: + +``` +You can use the :rcp:reref:`TomatoSoup` recipe to feed your family. 
+``` + +This enables our recipes to show up in two indices: the first lists all recipes, and the second lists all recipes by ingredient. + +### What's in a domain? + +A Sphinx domain is a specialized container that ties together roles, directives, and indices, among other things. The domain has a name ( **rcp** ) to address its components in the documentation source. It announces its existence to Sphinx in the **setup()** method of the package. From there, Sphinx can find roles and directives, since these are part of the domain. + +This domain also serves as the central catalog of objects in this sample. Using initial data, it defines two variables, **objects** and **obj2ingredient**. These contain a list of all objects defined (all recipes) and a hash that maps a canonical ingredient name to the list of objects. + +``` +initial_data = { +    'objects': [],  # object list +    'obj2ingredient': {},  # ingredient -> [objects] +} +``` + +The way we name objects is common across our extension. For each object created, the canonical name is **rcp. .**, where **< typename>** is the Python type of the object, and **< objectname>** is the name the documentation writer gives the object. This enables the extension to use different object types that share the same name. + +Having a canonical name and central place for our objects is a huge advantage. Both our indices and our cross-referencing code use this feature. + +### Custom roles and directives + +In our example, **.. rcp:recipe::** indicates a custom directive. You might think it's overly specific to create custom syntax for these items, but it illustrates the degree of customization you can get in Sphinx. This provides rich markup that structures documents and leads to better docs. Specialization allows us to extract information from our docs. + +Our definition for this directive will provide minimal formatting, but it will be functional. + +``` +class RecipeNode(ObjectDescription): +  """A custom node that describes a recipe.""" + +  required_arguments = 1 + +  option_spec = { +    'contains': rst.directives.unchanged_required +  } +``` + +For this directive, **required_arguments** tells Sphinx to expect one parameter, the recipe name. **option_spec** lists the optional arguments, including their names. Finally, **has_content** specifies that there will be more reStructured Text as a child to this node. + +We also implement multiple methods: + + * **handle_signature()** implements parsing the signature of the directive and passes on the object's name and type to its superclass + * **add_taget_and_index()** adds a target (to link to) and an entry to the index for this node + + + +### Creating indices + +Both **IngredientIndex** and **RecipeIndex** are derived from Sphinx's **Index** class. They implement custom logic to generate a tuple of values that define the index. Note that **RecipeIndex** is a degenerate index that has only one entry. Extending it to cover more object types—and moving from a **RecipeDomain** to a **CookbookDomain** —is not yet part of the code. + +Both indices use the method **generate()** to do their work. This method combines the information from our domain, sorts it, and returns it in a list structure that will be accepted by Sphinx. See the [Sphinx Domain API][3] page for more information. + +The first time you visit the Domain API page, you may be a little overwhelmed by the structure. But our ingredient index is just a list of tuples, like **('tomato', 'TomatoSoup', 'test', 'rec-TomatoSoup',...)**. 
+ +### Referencing recipes + +Adding cross-references is not difficult (but it's also not a given). Add an **XRefRole** to the domain and implement the method **resolve_xref()**. Having a custom role to reference a type allows us to unambiguously reference any object, even if two objects have the same name. If you look at the parameters of **resolve_xref()** in **Domain** , you'll see **typ** and **target**. These define the cross-reference type and its target name. We'll use **target** to resolve our destination from our domain's **objects** because we currently have only one type of node. + +We can add the cross-reference role to **RecipeDomain** in the following way: + +``` +roles = { +    'reref': XRefRole() +} +``` + +There's nothing for us to implement. Defining a working **resolve_xref()** and attaching an **XRefRole** to the domain is all you need to do. + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/building-custom-workflows-sphinx + +作者:[Mark Meyer][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ofosos +[b]: https://github.com/lujun9972 +[1]: http://www.sphinx-doc.org/en/master/ +[2]: https://github.com/ofosos/sphinxrecipes +[3]: https://www.sphinx-doc.org/en/master/extdev/domainapi.html#sphinx.domains.Index.generate diff --git a/sources/tech/20181128 How to test your network with PerfSONAR.md b/sources/tech/20181128 How to test your network with PerfSONAR.md new file mode 100644 index 0000000000..9e9e66ef62 --- /dev/null +++ b/sources/tech/20181128 How to test your network with PerfSONAR.md @@ -0,0 +1,148 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (How to test your network with PerfSONAR) +[#]: via: (https://opensource.com/article/18/11/how-test-your-network-perfsonar) +[#]: author: (Jessica Repka https://opensource.com/users/jrepka) +[#]: url: ( ) + +How to test your network with PerfSONAR +====== +Set up a single-node configuration to measure your network performance. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg) + +[PerfSONAR][1] is a network measurement toolkit collection for testing and sharing data on end-to-end network perfomance. + +The overall benefit of using network measurement tools like PerfSONAR is they can find issues before they become a large elephant in the room that nobody wants to talk about. Specifically, with the right answers from the right tools, patching can become more stringent, network traffic can be shaped to speed connections across the board, and the network infrastructure design can be improved. + +PerfSONAR is licensed under the open source Apache 2.0 license, which makes it more affordable than most tools that do this type of analysis, a key advantage given constrained network infrastructure budgets. + +### PerfSONAR versions + +Several versions of PerfSONAR are available: + + * **Perfsonar-tools:** The command line client version meant for on-demand testing. + * **Perfsonar-testpoint:** Adds automated testing and central management testing to PerfSONAR-tools. It has an archiving feature, but the archive must be set to an external node. 
  * **Perfsonar-core:** Includes everything in the testpoint software, but with local rather than external archiving.
  * **Perfsonar-toolkit:** The core software; it includes a web UI with systemwide security settings.
  * **Perfsonar-centralmanagement:** A completely separate version of PerfSONAR that uses mass grids of nodes to display results. It also has a feature to push out task templates to every node that is sending measurements back to the central host.

This tutorial will use **PerfSonar-toolkit**; the tools used in this software include [iPerf, iPerf3][2], and [OWAMP][3].

### Requirements

  * **Recommended operating system:** CentOS/RHEL7
  * **ISO:** [Downloading][4] the full installation ISO is the fastest way to get the software up and running. While there is a [Debian version][5], it is much harder and more complicated to use.
  * **Minimum hardware requirements:** 2 cores and 4GB RAM
  * **Recommended hardware:** 200GB HDD, 4 cores, 6GB of RAM

### Installing and configuring PerfSONAR

The installation is a quick CentOS install where you pick your timezone and configuration for the hard drive and user. I suggest using hard drive autoconfiguration, as you only need to choose "Install Toolkit" and follow the prompts from there.
![](https://opensource.com/sites/default/files/uploads/perfsonar_image1_welcome.png)
Select your language.
![](https://opensource.com/sites/default/files/uploads/perfsonar_image4_language.png)
Select a destination.
![](https://opensource.com/sites/default/files/uploads/perfsonar_image3_destination.png)
After base installation, you see the Linux login screen.
![](https://opensource.com/sites/default/files/uploads/perfsonar_image5a_linuxlogin.png)
After you log in, you are prompted to create a user ID and password to log into PerfSONAR's web frontend—make sure to remember your login information.
![](https://opensource.com/sites/default/files/uploads/perfsonar_image5_createuser.png)
You're also asked to disable SSH access for root and create a new user for sudo; just follow the steps to create the new user.
![](https://opensource.com/sites/default/files/uploads/perfsonar_image17_sudouser.png)
You can use a provisioning service to automatically provide an IP address and hostname. Otherwise, you will have to set the hostname (optional) and configure the IP address.

### Log into the web frontend

Once the base configuration is complete, you can log into the web frontend via `https://<hostname>` or `https://<IP address>` of the node you just set up. The web frontend will appear with the name or IP address of the device you just set up, the list of tools used, a test result area, host information, global node directory, and on-demand testing.

These options appear on the right-hand side of the web page.
![](https://opensource.com/sites/default/files/uploads/perfsonar_image13_ondemandtesting.png)
![](https://opensource.com/sites/default/files/uploads/perfsonar_image1_frontend.png)

For a single configuration mode, you will need another node to test with. To get one, click on the global node [Lookup Service Directory][6] link, which will bring you to a list of available nodes.
![](https://opensource.com/sites/default/files/uploads/perfsonar_image20_lookupservicemap1.png)

Pick an external node from the pScheduler Server list on the left. (I picked ESnet's Atlanta testing server.)
+![](https://opensource.com/sites/default/files/uploads/perfsonar_image10_selectnode.png) + +Configure the node by clicking the Log In button and entering the user ID and password you created during base configuration. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image8_login.png) + +Next, choose Configuration. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image14_chooseconfig.png) + +This takes you to the configuration page, where you can add tests to other nodes by clicking Test, then clicking +Test. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image6_config.png) + +After you click +Test, you'll see a pop-up window with some drop-down options. For this tutorial, I used One-Way Active Measurement Protocol (OWAMP) testing for one-way latency against the ESnet Atlanta node that is IPv4. + +#### Side bar + + * The OWAMP measures unidirectional characteristics such as one-way delay and one-way loss. High-precision measurement of these one-way IP performance metrics became possible with wider availability of good time sources (such as GPS and CDMA). OWAMP enables the interoperability of these measurements. + * IPv4 is a fourth version of the Internet Protocol, which today is the main protocol to most of the internet. IPv4 protocol defines the rules for the operation of computer networks on the packet-exchange principle. This is a low-level protocol that is responsible for the connection between the nodes of the network on the basis of IP Addresses. + * The IPv4 node is a perfsonar testing node that only does network testing using the IPv4 protocols. The perfsonar testing node you connect to is the same application that is built in this documentation. + + + +The drop-down should use the server's main interface. Confirm that the test is enabled (the Test Status switch will be green) and click the OK button at the bottom of the window. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image9_addtest.png) + +Once you have added the test information, click the Save button at the bottom of the page. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image18_savetestinfo.png) + +You will see information about all of the scheduled tests and the hosts they are testing. You can add more hosts to the test by clicking the Settings icon in the Actions column. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image16_scheduledtests.png) + +The testing intervals are automatically set according to the recommended settings. If the test frequency increases, the tests will still run OK, but your hard drive may fill up with data more quickly. + +Once the test finishes, click View Public Dashboard to see the data that's returned. Note that it may take anywhere from five minutes to several hours to access the first sets of data. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image19_viewpublicdash.png) + +The public dashboard shows a high-level summary dataset. If you want more information, click Details. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image2_details.png) + +You'll see a larger graph and have the option to expand the graph over a year as data is collected. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image7_expandedgraph.png) + +PerfSONAR is now up, running, and testing the network. You can also test with two nodes inside your network (or one internal network node and one external node). + +### What can you learn about your network? 
+ +In the time I've been using PerfSONAR, I've already uncovered the following issues: + + * Asymmetrical throughput + * Fiber outages + * Speed on circuit not meeting contractual agreement + * Internal network slowdowns due to misconfigurations + * Incorrect routes + + + +Have you used PerfSONAR or a similar tool? What benefits have you seen? + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/how-test-your-network-perfsonar + +作者:[Jessica Repka][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jrepka +[b]: https://github.com/lujun9972 +[1]: https://www.perfsonar.net/ +[2]: https://iperf.fr/ +[3]: http://software.internet2.edu/owamp/ +[4]: http://downloads.perfsonar.net/toolkit/pS-Toolkit-4.1.3-CentOS7-FullInstall-x86_64-2018Oct24.iso +[5]: http://docs.perfsonar.net/install_options.html# +[6]: http://stats.es.net/ServicesDirectory/ diff --git a/sources/tech/20181129 The Top Command Tutorial With Examples For Beginners.md b/sources/tech/20181129 The Top Command Tutorial With Examples For Beginners.md new file mode 100644 index 0000000000..df932ebb83 --- /dev/null +++ b/sources/tech/20181129 The Top Command Tutorial With Examples For Beginners.md @@ -0,0 +1,192 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (The Top Command Tutorial With Examples For Beginners) +[#]: via: (https://www.ostechnix.com/the-top-command-tutorial-with-examples-for-beginners/) +[#]: author: ([SK](https://www.ostechnix.com/author/sk/)) +[#]: url: ( ) + +The Top Command Tutorial With Examples For Beginners +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/top-command-720x340.png) + +As a Linux administrator, you may need to need to know some basic details of your Linux system, such as the currently running processes, average system load, cpu and memory usage etc., at some point. Thankfully, we have a command line utility called **“top”** to get such details. The top command is a well-known and most widely used utility to display dynamic real-time information about running processes in Unix-like operating systems. In this brief tutorial, we are going to see some common use cases of top command. + +### Top Command Examples + +**Monitor all processes** + +To start monitoring the running processes, simply run the top command without any options: + +``` +$ top +``` + +Sample output: + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/top-command-1.png) + +As you see in the above screenshot, top command displays the list of processes in multiple columns. Each column displays details such as pid, user, cpu usage, memory usage. Apart from the list of processes, you will also see the brief stats about average system load, number of tasks, cpu usage, memory usage and swap usage on the top. + +Here is the explanation of the parameters mentioned above. + + * **PID** – Process id of the task. + * **USER** – Username of the the task’s owner. + * **PR** – Priority of the task. + * **NI** – Nice value of the task. If the nice value is negative, the process gets higher priority. If the nice value is positive, the priority is low. Refer [**this guide**][1] to know more about nice. + * **VIRT** – Total amount of virtual memory used by the task. 
+ * **RES** – Resident Memory Size, the non-swapped physical memory a task is currently using. + * **SHR** – Shared Memory Size. The amount of shared memory used by a task. + * **S** – The status of the process (S=sleep R=running Z=zombie). + * **%CPU** – CPU usage. The task’s share of the elapsed CPU time since the last screen update, expressed as a percentage of total CPU time. + * **%MEM** – Memory Usage. A task’s currently resident share of available physical memory. + * **TIME+** – Total CPU time used by the task since it has started, precise to the hundredths of a second. + * **COMMAND** – Name of the running program. + + + +**Display path of processes** + +If you want to see the absolute path of the running processes, just press **‘c’**. Now you will see the actual path of the programs under the COMMAND column in the below screenshot. + +![][3] + +**Monitor processes owned by a specific user** + +If you run top command without any options, it will list all running processes owned by all users. How about displaying processes owned by a specific user? It is easy! To show the processes owned by a given user, for example **sk** , simply run: + +``` +$ top -u sk +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/top-command-3.png) + +**Do not show idle/zombie processes** + +Instead of viewing all processes, you can simply ignore the idle or zombie processes. The following command will not show any idle or zombie processes: + +``` +$ top -i +``` + +**Monitor processes with PID** + +If you know the PID of any processes, for example 21180, you can monitor that process using **-p** flag. + +``` +$ top -p 21180 +``` + +You can specify multiple PIDs with comma-separated values. + +**Monitor processes with process name** + +I don’t know PID, but know only the process name. How to monitor it? Simple! + +``` +$ top -p $(pgrep -d ',' firefox) +``` + +Here, **firefox** is the process name and **‘pgrep -d’** picks the respective PID from the process name. + +**Display processes by CPU usage** + +Sometimes, you might want to display processes sorted by CPU usage. If so, use the following command: + +``` +$ top -o %CPU +``` + +![][4] + +The processes with higher CPU usage will be displayed on the top. Alternatively, you sort the processes by CPU usage by pressing **SHIFT+p**. + +**Display processes by Memory usage** + +Similarly, to order processes by memory usage, the command would be: + +``` +$ top -o %MEM +``` + +**Renice processes** + +You can change the priority of a process at any time using the option **‘r’**. Run the top command and press **r** and type the PID of a process to change its priority. + +![][5] + +Here, **‘r’** refers renice. + +**Set update interval** + +Top program has an option to specify the delay between screen updates. If want to change the delay-time, say 5 seconds, run: + +``` +$ top -d 5 +``` + +The default value is **3.0** seconds. + +If you already started the top command, just press **‘d’** and type delay-time and hit ENTER key. + +![][6] + +**Set number of iterations (repetition)** + +By default, top command will keep running until you press **q** to exit. However, you can set the number of iterations after which top will end. For instance, to exit top command automatically after 5 iterations, run: + +``` +$ top -n 5 +``` + +**Kill running processes** + +To kill a running process, simply press **‘k’** and type its PID and hit ENTER key. + +![][7] + +Top command supports few other options as well. 
For example, press **‘z’** to switch between mono and color output. It will help you to easily highlight running processes. + +![][8] + +Press **‘h’** to view all available keyboard shortcuts and help section. + +To quit top, just press **q**. + +At this stage, you will have a basic understanding of top command. For more details, refer man pages. + +``` +$ man top +``` + +As you can see, using Top command to monitor the running processes isn’t that hard. Top command is easy to learn and use! + +And, that’s all for now. More good stuffs to come. Stay tuned! + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/the-top-command-tutorial-with-examples-for-beginners/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/change-priority-process-linux/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-2.png +[4]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-4.png +[5]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-8.png +[6]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-7.png +[7]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-5.png +[8]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-6.png diff --git a/sources/tech/20181202 How To Customize The GNOME 3 Desktop.md b/sources/tech/20181202 How To Customize The GNOME 3 Desktop.md new file mode 100644 index 0000000000..91c16e4e99 --- /dev/null +++ b/sources/tech/20181202 How To Customize The GNOME 3 Desktop.md @@ -0,0 +1,266 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (How To Customize The GNOME 3 Desktop?) +[#]: via: (https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) +[#]: url: ( ) + +How To Customize The GNOME 3 Desktop? +====== + +We have got many emails from user to write an article about GNOME 3 desktop customization but we don’t get a time to write this topic. + +I was using Ubuntu operating system since long time in my primary laptop and i got bored so, i would like to test some other distro which is related to Arch Linux. + +I prefer to go with Majaro so, i have installed Manjaro 18.0 with GNOME 3 desktop in my laptop. + +I’m customizing my desktop, how i want it. So, i would like to take this opportunity to write up this article in detailed way to help others. + +This article helps others to customize their desktop without headache. + +I’m not going to include all my customization and i will be adding a necessary things which will be mandatory and useful for Linux desktop users. + +If you feel some tweak is missing in this article, i would request you to mention that in comment sections. It will be very helpful for other users. + +### 1) How to Launch Activities Overview in GNOME 3 Desktop? + +The Activities Overview will display all the running applications or launched/opened windows by clicking `Super Key` or by clicking `Activities` button in the topmost left corner. 
+ +It allows you to launch a new applications, switch windows, and move windows between workspaces. + +You can simply exit the Activities Overview by choosing the following any of the one actions like selecting a window, application or workspace, or by pressing the `Super Key` or `Esc Key`. + +Activities Overview Screenshot. +![][2] + +### 2) How to Resize Windows in GNOME 3 Desktop? + +The Launched windows can be maximized, unmaximized and snapped to one side of the screen (Left or Right) by using the following key combinations. + + * `Super Key+Down Arrow:` To unmaximize the window. + * `Super Key+Up Arrow:` To maximize the window. + * `Super Key+Right Arrow:` To fill a window in the right side of the half screen. + * `Super Key+Left Arrow:` To fill a window in the left side of the half screen + + + +Use `Super Key+Down Arrow` to unmaximize the window. +![][3] + +Use `Super Key+Up Arrow` to maximize the window. +![][4] + +Use `Super Key+Right Arrow` to fill a window in the right side of the half screen. +![][5] + +Use `Super Key+Left Arrow` to fill a window in the left side of the half screen. +![][6] + +This feature will help you to view two applications at a time a.k.a splitting screen. +![][7] + +### 3) How to Display Applications in GNOME 3 Desktop? + +Click on the `Show Application Grid` button in the Dash to display all the installed applications on your system. +![][8] + +### 4) How to Add Applications on Dash in GNOME 3 Desktop? + +To speed up your day to day activity you may want to add frequently used application into Dash or Drag the application launcher to the Dash. + +It will allow you to directly launch your favorite applications without searching them. To do so, simply right click on it and use the option `Add to Favorites`. +![][9] + +To remove a application launcher a.k.a favorite from Dash, either drag it from the Dash to the grid button or simply right click on it and use the option `Remove from Favorites`. +![][10] + +### 5) How to Switch Between Workspaces in GNOME 3 Desktop? + +Workspaces allow you to group windows together. It will helps you to segregate your work properly. If you are working on Multiple things and you want to group each work and related things separately then it will be very handy and perfect option for you. + +You can switch workspaces in two ways, Open the Activities Overview and select a workspace from the right-hand side or use the following key combinations. + + * Use `Ctrl+Alt+Up` Switch to the workspace above. + * Use `Ctrl+Alt+Down` Switch to the workspace below. + + + +![][11] + +### 6) How to Switch Between Applications (Application Switcher) in GNOME 3 Desktop? + +Use either `Alt+Tab` or `Super+Tab` to switch between applications. To launch Application Switcher, use either `Alt+Tab` or `Super+Tab`. + +Once launched, just keep holding the Alt or Super key and hit the tab key to move to the next application from left to right order. + +### 7) How to Add UserName to Top Panel in GNOME 3 Desktop? + +If you would like to add your UserName to Top Panel then install the following [Add Username to Top Panel][12] GNOME Extension. +![][13] + +### 8) How to Add Microsoft Bing’s wallpaper in GNOME 3 Desktop? + +Install the following [Bing Wallpaper Changer][14] GNOME shell extension to change your wallpaper every day to Microsoft Bing’s wallpaper. +![][15] + +### 9) How to Enable Night Light in GNOME 3 Desktop? 
+ +Night light app is one of the famous app which reduces strain on the eyes by turning your screen a dim yellow from blue light after sunset. + +It is available in smartphones. The other known apps for the same purpose are flux and **[redshift][16]**. + +To enable this, navigate to **System Settings** >> **Devices** >> **Displays** and turn Nigh Light on. +![][17] + +Once it’s enabled and status icon will be placed on the top panel. +![][18] + +### 10) How to Show the Battery Percentage in GNOME 3 Desktop? + +Battery percentage will show you the exact battery usage. To enable this follow the below steps. + +Start GNOME Tweaks >> **Top Bar** >> **Battery Percentage** and switch it on. +![][19] + +After modification you can able to see the battery percentage icon on the top panel. +![][20] + +### 11) How to Enable Mouse Right Click in GNOME 3 Desktop? + +By default right click is disabled on GNOME 3 desktop environment. To enable this follow the below steps. + +Start GNOME Tweaks >> **Keyboard & Mouse** >> Mouse Click Emulation and select “Area” option. +![][21] + +### 12) How to Enable Minimize On Click in GNOME 3 Desktop? + +Enable one-click minimize feature which will help us to minimize opened window without using minimize option. + +``` +$ gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize' +``` + +### 13) How to Customize Dock in GNOME 3 Desktop? + +If you would like to change your Dock similar to Deepin desktop or Mac then use the following set of commands. + +``` +$ gsettings set org.gnome.shell.extensions.dash-to-dock dock-position BOTTOM +$ gsettings set org.gnome.shell.extensions.dash-to-dock extend-height false +$ gsettings set org.gnome.shell.extensions.dash-to-dock transparency-mode FIXED +$ gsettings set org.gnome.shell.extensions.dash-to-dock dash-max-icon-size 50 +``` + +![][22] + +### 14) How to Show Desktop in GNOME 3 Desktop? + +By default `Super Key+D` shortcut doesn’t show your desktop. To configure this follow the below steps. + +Settings >> **Devices** >> **Keyboard** >> Click **Hide all normal windows** under Navigation then Press `Super Key+D` finally hit `Set` button to enable it. +![][23] + +### 15) How to Customize Date and Time Format? + +By default GNOME 3 shows date and time with `Sun 04:48`. It’s not clear and if you want to get the output with following format `Sun Dec 2 4:49 AM` follow the below steps. + +**For Date Modification:** Start GNOME Tweaks >> **Top Bar** and enable `Weekday` option under Clock. +![][24] + +**For Time Modification:** Settings >> **Details** >> **Date & Time** then choose `AM/PM` option in the time format. +![][25] + +After modification you can able to see the date and time format same as below. +![][26] + +### 16) How to Permanently Disable Unused Services in Boot? + +In my case, i’m not going to use **Bluetooth** & **cpus a.k.a Printer service**. Hence, disabling these services on my laptop. To disable services on Arch based systems use **[Pacman Package Manager][27]**. +For Bluetooth + +``` +$ sudo systemctl stop bluetooth.service +$ sudo systemctl disable bluetooth.service +$ sudo systemctl mask bluetooth.service +$ systemctl status bluetooth.service +``` + +For cups + +``` +$ sudo systemctl stop org.cups.cupsd.service +$ sudo systemctl disable org.cups.cupsd.service +$ sudo systemctl mask org.cups.cupsd.service +$ systemctl status org.cups.cupsd.service +``` + +Finally verify whether these services are disabled or not in the boot using the following command. 
If you want to double confirm this, you can reboot once and check the same. Navigate to the following link to know more about **[systemctl][28]** usage, + +``` +$ systemctl list-unit-files --type=service | grep enabled +[email protected] enabled +dbus-org.freedesktop.ModemManager1.service enabled +dbus-org.freedesktop.NetworkManager.service enabled +dbus-org.freedesktop.nm-dispatcher.service enabled +display-manager.service enabled +gdm.service enabled +[email protected] enabled +linux-module-cleanup.service enabled +ModemManager.service enabled +NetworkManager-dispatcher.service enabled +NetworkManager-wait-online.service enabled +NetworkManager.service enabled +systemd-fsck-root.service enabled-runtime +tlp-sleep.service enabled +tlp.service enabled +``` + +### 17) Install Icons & Themes in GNOME 3 Desktop? + +Bunch of Icons and Themes are available for GNOME Desktop so, choose the desired **[GTK Themes][29]** and **[Icons Themes][30]** for you. To configure this further, navigate to the below links which makes your Desktop more elegant. + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-overview-screenshot.jpg +[3]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-unmaximize-the-window.jpg +[4]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-maximize-the-window.jpg +[5]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-right-side.jpg +[6]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-left-side.jpg +[7]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-split-screen.jpg +[8]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-applications.jpg +[9]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-applications-on-dash.jpg +[10]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-remove-applications-from-dash.jpg +[11]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-workspaces-screenshot.jpg +[12]: https://extensions.gnome.org/extension/1108/add-username-to-top-panel/ +[13]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-username-to-top-panel.jpg +[14]: https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/ +[15]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-microsoft-bings-wallpaper.jpg +[16]: https://www.2daygeek.com/install-redshift-reduce-prevent-protect-eye-strain-night-linux/ +[17]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light.jpg +[18]: 
https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light-1.jpg +[19]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage.jpg +[20]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage-1.jpg +[21]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-mouse-right-click.jpg +[22]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-dock-customization.jpg +[23]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-show-desktop.jpg +[24]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date.jpg +[25]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-time.jpg +[26]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date-time.jpg +[27]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[28]: https://www.2daygeek.com/sysvinit-vs-systemd-cheatsheet-systemctl-command-usage/ +[29]: https://www.2daygeek.com/category/gtk-theme/ +[30]: https://www.2daygeek.com/category/icon-theme/ diff --git a/sources/tech/20181203 ANGRYsearch - Quick Search GUI Tool for Linux.md b/sources/tech/20181203 ANGRYsearch - Quick Search GUI Tool for Linux.md new file mode 100644 index 0000000000..7c8952549f --- /dev/null +++ b/sources/tech/20181203 ANGRYsearch - Quick Search GUI Tool for Linux.md @@ -0,0 +1,108 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (ANGRYsearch – Quick Search GUI Tool for Linux) +[#]: via: (https://itsfoss.com/angrysearch/) +[#]: author: (John Paul https://itsfoss.com/author/john/) +[#]: url: ( ) + +ANGRYsearch – Quick Search GUI Tool for Linux +====== + +A search application is one of the most important tools you can have on your computer. Most are slow to indexes your system and find results. However, today we will be looking at an application that can display results as you type. Today, we will be looking at ANGRYsearch. + +### What is ANGRYsearch? + +![][1] +Newly installed ANGRYsearch + +[ANGRYsearch][2] is a Python-based application that delivers results as you type your search query. The overall idea and design of the application are both inspired by [Everything][3] a search tool for Windows. (I discovered Everything ad couple of years ago and install it wherever I use Windows.) + +ANGRYsearch is able to display the search results so quickly because it only indexes filenames. After you install ANGRYsearch, you create a database of filenames by indexing your system. ANGRYsearch then quickly filters filenames as you type your query. + +Even though there is not much to ANGRYsearch, there are several things you can do to customize the experience. First, ANGRYsearch has two different display modes: lite and full. Lite mode only shows the filename and path. Full mode displays filename, path, size, and date of the last modification. Full mode, obviously, takes longer to display. The default is lite mode. In order to switch to full mode, you need to edit the config file at `~/.config/angrysearch/angrysearch.conf`. In that file change the `angrysearch_lite` value to false. + +ANGRYsearch also has three different search modes: fast, slow, and regex. 
Fast mode displays filenames that start with your search term. For example, if you had a folder full of the latest releases of a bunch of Linux distros and you searched “Ubuntu”, ANGRYsearch would display Ubuntu, Ubuntu Mate, Ubuntu Budgie, but not Kubuntu, Xubuntu, or Lubuntu. Fast mode is on by default and can be turned off by unchecking the checkbox next to the “update” button. Slow mode is slightly slower (obviously), but it will display files that have your search term anywhere in their name. In the previous example, ANGRYsearch would show all Ubuntu distros. Regex mode is the slowest and most precise. It uses [regular expressions][4] and is case insensitive. Regex mode is activated by pressing F8.
+
+You can also tell ANGRYsearch to ignore certain folders when it indexes your system. Just click the “update” button and enter the names of the folders you want to be ignored in the space provided. You can also choose from several icon themes, though it doesn’t make that much difference.
+
+![][5]Fast mode results
+
+### Installing ANGRYsearch on Linux
+
+ANGRYsearch is available in the [Arch User Repository][6]. It has also been packaged for [Fedora and openSUSE][7].
+
+To install on other distros, follow these instructions. The instructions are written for a Debian- or Ubuntu-based system.
+
+ANGRYsearch depends on `python3-pyqt5` and `xdg-utils`, so you will need to install them first. Most distros have `xdg-utils` already installed.
+
+`sudo apt install python3-pyqt5`
+
+Next, download the latest version (1.0.1).
+
+`wget https://github.com/DoTheEvo/ANGRYsearch/archive/v1.0.1.zip`
+
+Now, unzip the archive file.
+
+`unzip v1.0.1.zip`
+
+Next, we will navigate to the new folder (ANGRYsearch-1.0.1) and run the installer.
+
+`cd ANGRYsearch-1.0.1`
+
+`chmod +x install.sh`
+
+`sudo ./install.sh`
+
+The installation process is very quick, so don’t be surprised when a new command prompt is displayed as soon as you hit `Enter`.
+
+The first time that you start ANGRYsearch, you will need to index your system. ANGRYsearch does not automatically keep its database updated. You can use `crontab` to schedule a system scan.
+
+To open a text editor to create a new cronjob, use `crontab -e`. To make sure that the ANGRYsearch database is updated every 6 hours, add this line to your crontab: `0 */6 * * * /usr/share/angrysearch/angrysearch_update_database.py`. `crontab` does not run the job if the computer is powered off when the timer goes off. In some cases, you may need to manually update the database, but it should not take long.
+
+![][8]ANGRYsearch update/options menu
+
+### Experience
+
+In the past, I was always frustrated by how painfully slow it was to search my computer. I knew that Windows had the Everything app, but I thought Linux was out of luck. It didn’t even occur to me to look for something similar on Linux. I’m glad I accidentally stumbled upon ANGRYsearch.
+
+I know there will be quite a few people complaining that ANGRYsearch only searches filenames, but most of the time that is all I need. Thankfully, most of the time I only need to remember part of the name to find what I am looking for.
+ +The only thing that annoys me about ANGRYsearch is that fact that it does not automatically update its database. You’d think there would be a way for the installer to create a cron job when you install it. + +![][9]Slow mode results + +### Final Thoughts + +Since ANGRYsearch is basically a Linux port of one of my favorite Windows apps, I’m pretty happy with it. I plan to install it on all my systems going forward. + +I know that I have ragged on other Linux apps for not being packaged for easy install, but I can’t do the same for ANGRYsearch. The installation process is pretty easy. I would definitely recommend it for Linux noobs. + +Have you ever used [ANGRYsearch][2]? If not, what is your favorite Linux search application? Let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][10]. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/angrysearch/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/angrysearch3.jpg?resize=800%2C627&ssl=1 +[2]: https://github.com/dotheevo/angrysearch/ +[3]: https://www.voidtools.com/ +[4]: http://www.aivosto.com/articles/regex.html +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/angrysearch1.jpg?resize=800%2C627&ssl=1 +[6]: https://aur.archlinux.org/packages/angrysearch/ +[7]: https://software.opensuse.org/package/angrysearch +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/angrysearch2.jpg?resize=800%2C626&ssl=1 +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/angrysearch4.jpg?resize=800%2C627&ssl=1 +[10]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20181206 How to view XML files in a web browser.md b/sources/tech/20181206 How to view XML files in a web browser.md new file mode 100644 index 0000000000..6060c792e2 --- /dev/null +++ b/sources/tech/20181206 How to view XML files in a web browser.md @@ -0,0 +1,109 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to view XML files in a web browser) +[#]: via: (https://opensource.com/article/18/12/xml-browser) +[#]: author: (Greg Pittman https://opensource.com/users/greg-p) + +How to view XML files in a web browser +====== +Turn XML files into something more useful. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh) + +Once you learn that HTML is a form of XML, you might wonder what would happen if you tried to view an XML file in a browser. The results are quite disappointing—Firefox shows you a banner at the top of the page that says, "This XML file does not appear to have any style information associated with it. The document tree is shown below." The document tree looks like the file would look in an editor: +![](https://opensource.com/sites/default/files/uploads/xml_menu.png) +This is the beginning of the **menu.xml** file for the online manual that comes with [Scribus][1], to which I'm a contributor. Although you see blue text, they are not clickable links. 
I wanted to be able to view this in a regular browser, since sometimes I need to go back and forth from the canvas in Scribus to the manual to figure out how to do something (maybe to see if I need to edit the manual to straighten out some misinformation or to add some missing information).
+
+The way to help a browser know what to do with these XML tags is by using XSLT—Extensible Stylesheet Language Transformations. In a broad sense, you could use XSLT to transform XML to a variety of outputs, or even HTML to XML. Here I want to use it to present the XML tags to a browser as suitable HTML.
+
+One slight modification needs to happen to the XML file:
+
+![](https://opensource.com/sites/default/files/uploads/xml_modified-menu.png)
+
+Adding this second line, an `xml-stylesheet` processing instruction along the lines of `<?xml-stylesheet type="text/xsl" href="scribus-manual.xsl"?>`, tells the browser to look for a file named **scribus-manual.xsl** for the style information. The more important part is to create this XSL file. Here is the structure of **scribus-manual.xsl** for the Scribus manual (abridged; the full file ships in the Scribus documentation folder mentioned below):
+
+```
+<?xml version="1.0" encoding="UTF-8"?>
+<xsl:stylesheet version="1.0"
+    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
+
+<xsl:template match="/">
+  <html>
+  <head>
+    <title>Scribus Online Manual</title>
+    <style type="text/css">
+      /* particulars for displaying the H2, H3, and H4 tags go here */
+    </style>
+  </head>
+  <body>
+    <!-- a table that draws the graphical heading at the top of the page,
+         using images already present in the documentation files -->
+    <table width="100%">
+      <tr><td><!-- heading images --></td></tr>
+    </table>
+
+    <!-- dissect the submenuitem tags into nested bulleted lists; the
+         parent element and attribute names shown here are illustrative -->
+    <ul>
+      <xsl:for-each select="menu/submenuitem">
+        <li>• <a href="{@file}"><xsl:value-of select="@text"/></a>
+          <ul>
+            <xsl:for-each select="submenuitem">
+              <li>• <a href="{@file}"><xsl:value-of select="@text"/></a></li>
+              <!-- deeper submenuitem levels repeat the same pattern -->
+            </xsl:for-each>
+          </ul>
+        </li>
+      </xsl:for-each>
+    </ul>
+  </body>
+  </html>
+</xsl:template>
+
+</xsl:stylesheet>
+```
+
+This looks a lot more like HTML, and you can see it contains a number of HTML tags. After some preliminary tags and some particulars about displaying H2, H3, and H4 tags, you see a Table tag. This adds a graphical heading at the top of the page and uses some images already in the documentation files.
+
+After this, you get into the process of dissecting the various **submenuitem** tags, trying to create the nested listing structure as it appears in Scribus when you view the manual. One feature I did not try to duplicate is the ability to collapse and expand **submenuitem** areas. As you can imagine, it takes some time to sort through the number of nested lists you need to create, but when I finished, here is how it looked:
+
+![](https://opensource.com/sites/default/files/uploads/xml_scribusmenuinbrowser.png)
+
+This minimal editing to **menu.xml** does not interfere with Scribus' ability to show the manual in its own browser. I put this modified **menu.xml** file and the **scribus-manual.xsl** in the English documentation folder for 1.5.x versions of Scribus, so anyone using these versions can simply point their browser to the **menu.xml** file and it should show up just like you see above.
+
+A much bigger chore I took on a few years ago was to create a version of the ICD10 (International Classification of Diseases, version 10) when it came out. Many changes were made from the previous version (ICD9) to 10. These are important since these codes must be used for diagnostic purposes in medical practice. You can easily download XML files from the US [Centers for Medicare and Medicaid][2] website since it is public information, but—just as with the Scribus manual—these files are hard to use.
+
+Here is the beginning of the tabular listing of diseases:
+
+![](https://opensource.com/sites/default/files/uploads/xml_tabular_begin.png)
+
+One of the features I created was the color coding used in the listing shown here:
+
+![](https://opensource.com/sites/default/files/uploads/xml_tabular_body.png)
+
+As with **menu.xml**, the only editing I did in this **Tabular.xml** file was to add an `xml-stylesheet` line (this time pointing at **tabular.xsl**) as the second line of the file. I started this project with the 2014 version, and I was quite pleased to find that the original **tabular.xsl** stylesheet worked perfectly when the 2016 version came out, which is the last one I worked on. The **Tabular.xml** file is 8.4MB, quite large for a plaintext file. It takes a few seconds to load into a browser, but once it's loaded, navigation is fast.
+
+While you may not often have to deal with an XML file in this way, if you do, I hope this article shows that your file can easily be turned into something much more usable.
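+If you want to experiment with the technique yourself before tackling anything that large, a minimal pair of files is enough to see the whole mechanism work. The file, element, and attribute names below are invented purely for illustration; only the `xml-stylesheet` line and the XSLT constructs mirror what was used above.
+
+A tiny **books.xml**:
+
+```
+<?xml version="1.0"?>
+<?xml-stylesheet type="text/xsl" href="books.xsl"?>
+<library>
+  <book title="First Book" href="first.html"/>
+  <book title="Second Book" href="second.html"/>
+</library>
+```
+
+And a matching **books.xsl**:
+
+```
+<?xml version="1.0"?>
+<xsl:stylesheet version="1.0"
+    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
+  <xsl:template match="/">
+    <html>
+      <head><title>A Tiny Library</title></head>
+      <body>
+        <ul>
+          <!-- turn each book element into a list item with a clickable link -->
+          <xsl:for-each select="library/book">
+            <li><a href="{@href}"><xsl:value-of select="@title"/></a></li>
+          </xsl:for-each>
+        </ul>
+      </body>
+    </html>
+  </xsl:template>
+</xsl:stylesheet>
+```
+
+Put both files in the same directory and point a browser that applies XSLT client-side (Firefox, for example) at **books.xml**; instead of the bare document tree, it renders a clickable HTML list.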
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/xml-browser + +作者:[Greg Pittman][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/greg-p +[b]: https://github.com/lujun9972 +[1]: https://www.scribus.net/ +[2]: https://www.cms.gov/ diff --git a/sources/tech/20181207 5 Screen Recorders for the Linux Desktop.md b/sources/tech/20181207 5 Screen Recorders for the Linux Desktop.md new file mode 100644 index 0000000000..4dd47e948a --- /dev/null +++ b/sources/tech/20181207 5 Screen Recorders for the Linux Desktop.md @@ -0,0 +1,177 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 Screen Recorders for the Linux Desktop) +[#]: via: (https://www.linux.com/blog/intro-to-linux/2018/12/5-screen-recorders-linux-desktop) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +5 Screen Recorders for the Linux Desktop +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screen-record.png?itok=tKWx29k8) + +There are so many reasons why you might need to record your Linux desktop. The two most important are for training and for support. If you are training users, a video recording of the desktop can go a long way to help them understand what you are trying to impart. Conversely, if you’re having trouble with one aspect of your Linux desktop, recording a video of the shenanigans could mean the difference between solving the problem and not. But what tools are available for the task? Fortunately, for every Linux user (regardless of desktop), there are options available. I want to highlight five of my favorite screen recorders for the Linux desktop. Among these five, you are certain to find one that perfectly meets your needs. I will only be focusing on those screen recorders that save as video. What video format you prefer may or may not dictate which tool you select. + +And, without further ado, let’s get on with the list. + +### Simple Screen Recorder + +I’m starting out with my go-to screen recorder. I use [Simple Screen Recorder][1] on a daily basis, and it never lets me down. This particular take on the screen recorder is available for nearly every flavor of Linux and is, as the name implies, very simple to use. With Simple Screen Recorder you can select a single window, a portion of the screen, or the entire screen to record. One of the best features of Simple Screen Recorder is the ability to save profiles (Figure 1), which allows you to configure the input for a recording (including scaling, frame rate, width, height, left edge and top edge spacing, and more). By saving profiles, you can easily use a specific profile to meet a unique need, without having to go through the customization every time. This is handy for those who do a lot of screen recording, with different input variables for specific jobs. + +![Simple Screen Recorder ][3] + +Figure 1: Simple Screen Recorder input profile window. 
+ +[Used with permission][4] + +Simple screen recorder also: + + * Records audio input + + * Allows you to pause and resume recording + + * Offers a preview during recording + + * Allows for the selection of video containers and codecs + + * Adds timestamp to file name (optional) + + * Includes hotkey recording and sound notifications + + * Works well on slower machines + + * And much more + + + + +Simple Screen Recorder is one of the most reliable screen recording tools I have found for the Linux desktop. Simple Screen Recorder can be installed from the standard repositories on many desktops, or via easy to follow instructions on the [application download page][5]. + +### Gtk-recordmydesktop + +The next entry, [gtk-recordmydesktop][6], doesn’t give you nearly the options found in Simple Screen Recorder, but it does offer a command line component (for those who prefer not working with a GUI). The simplicity that comes along with this tool also means you are limited to a specific video output format (.ogv). That doesn’t mean gtk-recordmydesktop isn’t without appeal. In fact, there are a few features that make this option in the genre fairly appealing. First and foremost, it’s very simple to use. Second, the record window automatically gets out of your way while you record (as opposed to Simple Screen Recorder, where you need to minimize the recording window when recording full screen). Another feature found in gtk-recordmydesktop is the ability to have the recording follow the mouse (Figure 2). + +![gtk-recordmydesktop][8] + +Figure 2: Some of the options for gtk-recordmydesktop. + +[Used with permission][4] + +Unfortunately, the follow the mouse feature doesn’t always work as expected, so chances are you’ll be using the tool without this interesting option. In fact, if you opt to go the gtk-recordmydesktop route, you should understand the GUI frontend isn’t nearly as reliable as is the command line version of the tool. From the command line, you could record a specific position of the screen like so: + +``` +recordmydesktop -x X_POS -y Y_POS --width WIDTH --height HEIGHT -o FILENAME.ogv +``` + +where: + + * X_POS is the offset on the X axis + + * Y_POS is the offset on the Y axis + + * WIDTH is the width of the screen to be recorded + + * HEIGHT is the height of the screen to be recorded + + * FILENAME is the name of the file to be saved + + + + +To find out more about the command line options, issue the command man recordmydesktop and read through the manual page. + +### Kazam + +If you’re looking for a bit more than just a recorded screencast, you might want to give Kazam a go. Not only can you record a standard screen video (with the usual—albeit limited amount of—bells and whistles), you can also take screenshots and even broadcast video to YouTube Live (Figure 3). + +![Kazam][10] + +Figure 3: Setting up YouTube Live broadcasting in Kazam. + +[Used with permission][4] + +Kazam falls in line with gtk-recordmydesktop, when it comes to features. In other words, it’s slightly limited in what it can do. However, that doesn’t mean you shouldn’t give Kazam a go. In fact, Kazam might be one of the best screen recorders out there for new Linux users, as this app is pretty much point and click all the way. But if you’re looking for serious bells and whistles, look away. 
+ +The version of Kazam, with broadcast goodness, can be found in the following repository: + +``` +ppa:sylvain-pineau/kazam +``` + +For Ubuntu (and Ubuntu-based distributions), install with the following commands: + +``` +sudo apt-add-repository ppa:sylvain-pineau/kazam + +sudo apt-get update + +sudo apt-get install kazam -y +``` + +### Vokoscreen + +The [Vokoscreen][11] recording app is for new-ish users who need more options. Not only can you configure the output format and the video/audio codecs, you can also configure it to work with a webcam (Figure 4). + +![Vokoscreen][13] + +Figure 4: Configuring a web cam for a Vokoscreen screen recording. + +[Used with permission][4] + +As with most every screen recording tool, Vokoscreen allows you to specify what on your screen to record. You can record the full screen (even selecting which display on multi-display setups), window, or area. Vokoscreen also allows you to select a magnification level (200x200, 400x200, or 600x200). The magnification level makes for a great tool to highlight a specific section of the screen (the magnification window follows your mouse). + +Like all the other tools, Vokoscreen can be installed from the standard repositories or cloned from its [GitHub repository][14]. + +### OBS Studio + +For many, [OBS Studio][15] will be considered the mack daddy of all screen recording tools. Why? Because OBS Studio is as much a broadcasting tool as it is a desktop recording tool. With OBS Studio, you can broadcast to YouTube, Smashcast, Mixer.com, DailyMotion, Facebook Live, Restream.io, LiveEdu.tv, Twitter, and more. In fact, OBS Studio should seriously be considered the de facto standard for live broadcasting the Linux desktop. + +Upon installation (the software is only officially supported for Ubuntu Linux 14.04 and newer), you will be asked to walk through an auto-configuration wizard, where you setup your streaming service (Figure 5). This is, of course, optional; however, if you’re using OBS Studio, chances are this is exactly why, so you won’t want to skip out on configuring your default stream. + +![OBS Studio][17] + +Figure 5: Configuring your streaming service for OBS Studio. + +[Used with permission][4] + +I will warn you: OBS Studio isn’t exactly for the faint of heart. Plan on spending a good amount of time getting the streaming service up and running and getting up to speed with the tool. But for anyone needing such a solution for the Linux desktop, OBS Studio is what you want. Oh … it can also record your desktop screencast and save it locally. + +### There’s More Where That Came From + +This is a short list of screen recording solutions for Linux. Although there are plenty more where this came from, you should be able to fill all your desktop recording needs with one of these five apps. + +Learn more about Linux through the free ["Introduction to Linux" ][18]course from The Linux Foundation and edX. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/intro-to-linux/2018/12/5-screen-recorders-linux-desktop + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: http://www.maartenbaert.be/simplescreenrecorder/ +[2]: /files/images/screenrecorder1jpg +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_1.jpg?itok=hZJ5xugI (Simple Screen Recorder ) +[4]: /licenses/category/used-permission +[5]: http://www.maartenbaert.be/simplescreenrecorder/#download +[6]: http://recordmydesktop.sourceforge.net/about.php +[7]: /files/images/screenrecorder2jpg +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_2.jpg?itok=TEGXaVYI (gtk-recordmydesktop) +[9]: /files/images/screenrecorder3jpg +[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_3.jpg?itok=cvtFjxen (Kazam) +[11]: https://github.com/vkohaupt/vokoscreen +[12]: /files/images/screenrecorder4jpg +[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_4.jpg?itok=c3KVS954 (Vokoscreen) +[14]: https://github.com/vkohaupt/vokoscreen.git +[15]: https://obsproject.com/ +[16]: /files/images/desktoprecorder5jpg +[17]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/desktoprecorder_5.jpg?itok=xyM-dCa7 (OBS Studio) +[18]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20181207 Automatic continuous development and delivery of a hybrid mobile app.md b/sources/tech/20181207 Automatic continuous development and delivery of a hybrid mobile app.md new file mode 100644 index 0000000000..c513f36017 --- /dev/null +++ b/sources/tech/20181207 Automatic continuous development and delivery of a hybrid mobile app.md @@ -0,0 +1,102 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Automatic continuous development and delivery of a hybrid mobile app) +[#]: via: (https://opensource.com/article/18/12/hybrid-mobile-app-development) +[#]: author: (Angelo Manganiello https://opensource.com/users/amanganiello90) + +Automatic continuous development and delivery of a hybrid mobile app +====== +Hybrid apps are a good middle ground between native and web apps. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd) + +Offering a mobile app is essentially a business requirement for organizations today. One of the first steps in developing an app is to understand the different types—native, hybrid (or cross-platform), and web—so you can decide which one will best meet your needs. + +### Native is better, right? + +**Native apps** represent the vast majority of applications that people download every day. Native applications are developed specifically for an operating system. Thus, a native iOS application will not work on an Android system and vice versa. To develop a native app, you need to know two things: + + 1. How to develop in a specific programming language (e.g., Swift for Apple devices; Java for Android) + 2. 
The app will not work for other platforms
+
+
+
+Even though native apps will work only on the platform they're developed for, they have several notable advantages over hybrid and web apps:
+
+  * Increased speed, reliability, and responsiveness, and higher resolution, all of which provide a better user experience
+  * May work offline/without internet service
+  * Easier access to all phone features (e.g., accelerometer, camera, microphone)
+
+
+
+### But my business is still linked to the web…
+
+Most companies have focused their resources on web development and now want to enter the mobile market. But many don't have the right technical resources to develop a native app for each platform. For these companies, **hybrid** development is the right choice. In this model, developers can use their existing frontend skills to develop a single, cross-platform mobile app.
+
+![Hybrid mobile apps][2]
+
+Hybrid apps are a good middle ground: they're faster and less expensive to develop than native apps, and they offer more possibilities than web apps. The tradeoffs are they don't perform as well as native apps and developers can't maintain their existing tight focus on web development (as they could with web apps).
+
+If you are already a fan of the [Angular][3] cross-platform development framework, I recommend trying the [Ionic][4] framework, which "lets web developers build, test, and deploy cross-platform hybrid mobile apps." I see Ionic as an extension of the [Apache Cordova][5] framework, which enables a normal web app (JS, HTML, or CSS) to run as a mobile app in a container. Ionic uses the base Cordova features and supports Angular development for its user interface.
+
+The advantage of this approach is simple: the Angular paradigm is maintained, so developers can continue writing [TypeScript][6] files but target a build for Android, iOS, and Windows by properly configuring the development environment. It also provides two important tools:
+
+  * An appealing design and widgets that are very similar to a native app's, so your hybrid app will look less "web"
+  * Cordova plugins that allow the app to communicate with all phone features
+
+
+
+### What about the Node.js backend?
+
+The programming world likes to standardize, which is why hybrid apps are so popular. Frontend developers' common skills are useful in the mobile world. But if we have a technology stack for the user interface, why not focus on a single backend with the same programming paradigm?
+
+This makes [Node.js][7] an appealing option. Node.js is a JavaScript runtime built on the Chrome V8 JavaScript engine. It can make backend API development very fast and easy, and it integrates fully with web technologies. You can develop a Cordova plugin, using your Node.js backend, internally in your hybrid app, as I did with the [nodejs-cordova-plugin][8]. This plugin, following the Cordova guidelines, integrates a mobile-compatible version of the Node.js platform to provide a full-stack mobile app.
+
+If you need a simple CRUD Node.js backend, you can use my [API node generator][9] that generates an app using a [MongoDB][10] embedded database.
+
+![Cordova Full Stack application][12]
+
+### Deploying your app
+
+Open source offers everything you need to deploy your app in the best way. You just need a GitHub repository and a good continuous integration tool. I recommend [Travis-ci][13], an excellent tool that allows you to build and deploy your product for every commit.
+
+Travis-ci is an alternative to the better-known [Jenkins][14].
Like with Jenkins, you have to configure your pipeline through a configuration file (in this case a **.travis.yml** file) in your GitHub repo. See the [.travis.yml file][15] in my repository as an example. + +![](https://opensource.com/sites/default/files/uploads/3-travis-ci-process.png) + +In addition, this pipeline automatically delivers and installs your app on [Appetize.io][16], a web-based iOS simulator and Android emulator, for testing. + +You can learn more in the [Cordova Android][17] section of my GitHub repository. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/hybrid-mobile-app-development + +作者:[Angelo Manganiello][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/amanganiello90 +[b]: https://github.com/lujun9972 +[1]: /file/416441 +[2]: https://opensource.com/sites/default/files/uploads/1-title.png (Hybrid mobile apps) +[3]: https://angular.io/ +[4]: https://ionicframework.com/ +[5]: https://cordova.apache.org/ +[6]: https://www.typescriptlang.org/ +[7]: https://nodejs.org/ +[8]: https://github.com/fullStackApp/nodejs-cordova-plugin +[9]: https://github.com/fullStackApp/generator-full-stack-api +[10]: https://www.mongodb.com/ +[11]: /file/416351 +[12]: https://opensource.com/sites/default/files/uploads/2-cordova-full-stack-app.png (Cordova Full Stack application) +[13]: https://travis-ci.org/ +[14]: https://jenkins.io/ +[15]: https://github.com/amanganiello90/java-angular-web-app/blob/master/.travis.yml +[16]: https://appetize.io/ +[17]: https://github.com/amanganiello90/java-angular-web-app#cordova-android diff --git a/sources/tech/20181211 How To Benchmark Linux Commands And Programs From Commandline.md b/sources/tech/20181211 How To Benchmark Linux Commands And Programs From Commandline.md new file mode 100644 index 0000000000..3962e361f3 --- /dev/null +++ b/sources/tech/20181211 How To Benchmark Linux Commands And Programs From Commandline.md @@ -0,0 +1,265 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Benchmark Linux Commands And Programs From Commandline) +[#]: via: (https://www.ostechnix.com/how-to-benchmark-linux-commands-and-programs-from-commandline/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +How To Benchmark Linux Commands And Programs From Commandline +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/benchmark-720x340.png) + +A while ago, I have written a guide about the [**alternatives to ‘top’, the command line utility**][1]. Some of the users asked me which one among those tools is best and on what basis (like features, contributors, years active, page requests etc.) I compared those tools. They also asked me to share the bench-marking results If I have any. Unfortunately, I didn’t even know how to benchmark programs at that time. While searching for some simple and easy to use bench-marking tools to compare the Linux programs, I stumbled upon two utilities named **‘Bench’** and **‘Hyperfine’**. These are simple and easy-to-use command line tools to benchmark Linux commands and programs on Unix-like systems. + +### 1\. 
Bench Tool + +The **‘Bench’** utility benchmarks one or more given commands/programs using **Haskell’s criterion** library and displays the output statistics in an easy-to-understandable format. This tool can be helpful where you need to compare similar programs based on the bench-marking result. We can also export the results to HTML format or CSV or templated output. + +#### Installing Bench Utility + +The bench utility can be installed in three methods. + +**1\. Using Linuxbrew** + +We can install Bench utility using Linuxbrew package manager. If you haven’t installed Linuxbrew yet, refer the following link. + +After installing Linuxbrew, run the following command to install Bench: + +``` +$ brew install bench +``` + +**2\. Using Haskell’s stack tool** + +First, install Haskell as described in the following link. + +And then, run the following commands to install Bench. + +``` +$ stack setup + +$ stack install bench +``` + +The ‘stack’ will install bench to **~/.local/bin** or something similar. Make sure that the installation directory is on your executable search path before using bench tool. You will be reminded to do this even if you forgot. + +**3\. Using Nix package manager** + +Another way to install Bench is using **Nix** package manager. Install Nix as shown in the below link. + +After installing Nix, install Bench tool using command: + +``` +$ nix-env -i bench +``` + +#### Benchmark Linux Commands And Programs Using Bench + +It is time to start benchmarking the programs. + +For instance, let me show you the benchmark result of ‘ls -al’ command. + +``` +$ bench 'ls -al' +``` + +**Sample output:** + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/Benchmark-commands-1.png) + +You must quote the commands when you use flags/options with them. + +Similarly, you can benchmark any programs installed in your system. The following commands shows the benchmarking result of ‘htop’ and ‘ptop’ programs. + +``` +$ bench htop + +$ bench ptop +``` +![](https://www.ostechnix.com/wp-content/uploads/2018/12/Benchmark-commands-2-1.png) +Bench tool can benchmark multiple programs at once as well. Here is the benchmarking result of ls, htop, ptop programs. + +``` +$ bench ls htop ptop +``` + +Sample output: +![](https://www.ostechnix.com/wp-content/uploads/2018/12/Benchmark-commands-3.png) + +We can also export the benchmark result to a HTML like below. + +``` +$ bench htop --output example.html +``` + +To export the result to CSV, just run: + +``` +$ bench htop --csv FILE +``` + +View help section: + +``` +$ bench --help +``` + +### **2. Hyperfine Benchmark Tool + +** + +**Hyperfine** is yet another command line benchmarking tool inspired by the ‘Bench’ tool which we just discussed above. It is free, open source, cross-platform benchmarking program and written in **Rust** programming language. It has few additional features compared to the Bench tool as listed below. + + * Statistical analysis across multiple runs. + * Support for arbitrary shell commands. + * Constant feedback about the benchmark progress and current estimates. + * Perform warmup runs before the actual benchmark. + * Cache-clearing commands can be set up before each timing run. + * Statistical outlier detection. + * Export benchmark results to various formats, such as CSV, JSON, Markdown. + * Parameterized benchmarks. + + + +#### Installing Hyperfine + +We can install Hyperfine using any one of the following methods. + +**1\. Using Linuxbrew** + +``` +$ brew install hyperfine +``` + +**2\. 
Using Cargo** + +Make sure you have installed Rust as described in the following link. + +After installing Rust, run the following command to install Hyperfine via Cargo: + +``` +$ cargo install hyperfine +``` + +**3\. Using AUR helper programs** + +Hyperfine is available in [**AUR**][2]. So, you can install it on Arch-based systems using any helper programs, such as [**YaY**][3], like below. + +``` +$ yay -S hyperfine +``` + +**4\. Download and install the binaries** + +Hyperfine is available in binaries for Debian-based systems. Download the latest .deb binary file from the [**releases page**][4] and install it using ‘dpkg’ package manager. As of writing this guide, the latest version was **1.4.0**. + +``` +$ wget https://github.com/sharkdp/hyperfine/releases/download/v1.4.0/hyperfine_1.4.0_amd64.deb + +$ sudo dpkg -i hyperfine_1.4.0_amd64.deb + +$ sudo apt install -f +``` + +#### Benchmark Linux Commands And Programs Using Hyperfine + +To run a benchmark using Hyperfine, simply run it along with the program/command as shown below. + +``` +$ hyperfine 'ls -al' +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/hyperfine-1.png) + +Benchmark multiple commands/programs: + +``` +$ hyperfine htop ptop +``` + +Sample output: + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/hyperfine-2.png) + +As you can see at the end of the output, Hyperfine mentiones – **‘htop ran 1.96 times faster than ptop’** , so we can immediately conclude htop performs better than Ptop. This will help you to quickly find which program performs better when benchmarking multiple programs. We don’t get this detailed output in Bench utility though. + +Hyperfine will automatically determine the number of runs to perform for each command. By default, it will perform at least **10 benchmarking runs**. If you want to set the **minimum number of runs** (E.g 5 runs), use the `-m` **/`--min-runs`** option like below: + +``` +$ hyperfine --min-runs 5 htop ptop +``` + +Or, + +``` +$ hyperfine -m 5 htop ptop +``` + +Similarly, to perform **maximum number of runs** for each command, the command would be: + +``` +$ hyperfine --max-runs 5 htop ptop +``` + +Or, + +``` +$ hyperfine -M 5 htop ptop +``` + +We can even perform **exact number of runs** for each command using the following command: + +``` +$ hyperfine -r 5 htop ptop +``` + +As you may know, if the program execution time is limited by disk I/O, the benchmarking results can be heavily influenced by disk caches and whether they are cold or warm. Luckily, Hyperfine has the options to perform a certain number of program executions before performing the actual benchmark. + +To perform NUM warmup runs (E.g 3) before the actual benchmark, use the **`-w`/**`--warmup` option like below: + +``` +$ hyperfine --warmup 3 htop +``` + +Just like Bench utility, Hyperfine also allows us to export the benchmark results to a given file. We can export the results to CSV, JSON, and Markdown formats. + +For instance, to export the results in Markdown format, use the following command: + +``` +$ hyperfine htop ptop --export-markdown +``` + +For more options and usage details, refer the help secion: + +``` +$ hyperfine --help +``` + +And, that’s all for now. If you ever be in a situation where you need to benchmark similar and alternative programs, these tools might help to compare how they performs and share the details with your peers and colleagues. + +More good stuffs to come. Stay tuned! + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-benchmark-linux-commands-and-programs-from-commandline/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/some-alternatives-to-top-command-line-utility-you-might-want-to-know/ +[2]: https://aur.archlinux.org/packages/hyperfine +[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[4]: https://github.com/sharkdp/hyperfine/releases diff --git a/sources/tech/20181213 Podman and user namespaces- A marriage made in heaven.md b/sources/tech/20181213 Podman and user namespaces- A marriage made in heaven.md new file mode 100644 index 0000000000..adc14c6111 --- /dev/null +++ b/sources/tech/20181213 Podman and user namespaces- A marriage made in heaven.md @@ -0,0 +1,145 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Podman and user namespaces: A marriage made in heaven) +[#]: via: (https://opensource.com/article/18/12/podman-and-user-namespaces) +[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan) + +Podman and user namespaces: A marriage made in heaven +====== +Learn how to use Podman to run containers in separate user namespaces. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/architecture_structure_planning_design_.png?itok=KL7dIDct) + +[Podman][1], part of the [libpod][2] library, enables users to manage pods, containers, and container images. In my last article, I wrote about [Podman as a more secure way to run containers][3]. Here, I'll explain how to use Podman to run containers in separate user namespaces. + +I have always thought of [user namespace][4], primarily developed by Red Hat's Eric Biederman, as a great feature for separating containers. User namespace allows you to specify a user identifier (UID) and group identifier (GID) mapping to run your containers. This means you can run as UID 0 inside the container and UID 100000 outside the container. If your container processes escape the container, the kernel will treat them as UID 100000. Not only that, but any file object owned by a UID that isn't mapped into the user namespace will be treated as owned by "nobody" (65534, kernel.overflowuid), and the container process will not be allowed access unless the object is accessible by "other" (world readable/writable). + +If you have a file owned by "real" root with permissions [660][5], and the container processes in the user namespace attempt to read it, they will be prevented from accessing it and will see the file as owned by nobody. + +### An example + +Here's how that might work. First, I create a file in my system owned by root. + +``` +$ sudo bash -c "echo Test > /tmp/test" +$ sudo chmod 600 /tmp/test +$ sudo ls -l /tmp/test +-rw-------. 1 root root 5 Dec 17 16:40 /tmp/test +``` + +Next, I volume-mount the file into a container running with a user namespace map 0:100000:5000. + +``` +$ sudo podman run -ti -v /tmp/test:/tmp/test:Z --uidmap 0:100000:5000 fedora sh +# id +uid=0(root) gid=0(root) groups=0(root) +# ls -l /tmp/test +-rw-rw----. 
1 nobody nobody 8 Nov 30 12:40 /tmp/test +# cat /tmp/test +cat: /tmp/test: Permission denied +``` + +The **\--uidmap** setting above tells Podman to map a range of 5000 UIDs inside the container, starting with UID 100000 outside the container (so the range is 100000-104999) to a range starting at UID 0 inside the container (so the range is 0-4999). Inside the container, if my process is running as UID 1, it is 100001 on the host + +Since the real UID=0 is not mapped into the container, any file owned by root will be treated as owned by nobody. Even if the process inside the container has **CAP_DAC_OVERRIDE** , it can't override this protection. **DAC_OVERRIDE** enables root processes to read/write any file on the system, even if the process was not owned by root or world readable or writable. + +User namespace capabilities are not the same as capabilities on the host. They are namespaced capabilities. This means my container root has capabilities only within the container—really only across the range of UIDs that were mapped into the user namespace. If a container process escaped the container, it wouldn't have any capabilities over UIDs not mapped into the user namespace, including UID=0. Even if the processes could somehow enter another container, they would not have those capabilities if the container uses a different range of UIDs. + +Note that SELinux and other technologies also limit what would happen if a container process broke out of the container. + +### Using `podman top` to show user namespaces + +We have added features to **podman top** to allow you to examine the usernames of processes running inside a container and identify their real UIDs on the host. + +Let's start by running a sleep container using our UID mapping. + +``` +$ sudo podman run --uidmap 0:100000:5000 -d fedora sleep 1000 +``` + +Now run **podman top** : + +``` +$ sudo podman top --latest user huser +USER   HUSER +root   100000 + +$ ps -ef | grep sleep +100000   21821 21809  0 08:04 ?         00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 +``` + +Notice **podman top** reports that the user process is running as root inside the container but as UID 100000 on the host (HUSER). Also the **ps** command confirms that the sleep process is running as UID 100000. + +Now let's run a second container, but this time we will choose a separate UID map starting at 200000. + +``` +$ sudo podman run --uidmap 0:200000:5000 -d fedora sleep 1000 +$ sudo podman top --latest user huser +USER   HUSER +root   200000 + +$ ps -ef | grep sleep +100000   21821 21809  0 08:04 ?         00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 +200000   23644 23632  1 08:08 ?         00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 +``` + +Notice that **podman top** reports the second container is running as root inside the container but as UID=200000 on the host. + +Also look at the **ps** command—it shows both sleep processes running: one as 100000 and the other as 200000. + +This means running the containers inside separate user namespaces gives you traditional UID separation between processes, which has been the standard security tool of Linux/Unix from the beginning. + +### Problems with user namespaces + +For several years, I've advocated user namespace as the security tool everyone wants but hardly anyone has used. The reason is there hasn't been any filesystem support or a shifting file system. 
+ +In containers, you want to share the **base** image between lots of containers. The examples above use the Fedora base image in each example. Most of the files in the Fedora image are owned by real UID=0. If I run a container on this image with the user namespace 0:100000:5000, by default it sees all of these files as owned by nobody, so we need to shift all of these UIDs to match the user namespace. For years, I've wanted a mount option to tell the kernel to remap these file UIDs to match the user namespace. Upstream kernel storage developers continue to investigate and make progress on this feature, but it is a difficult problem. + + +Podman can use different user namespaces on the same image because of automatic [chowning][6] built into [containers/storage][7] by a team led by Nalin Dahyabhai. Podman uses containers/storage, and the first time Podman uses a container image in a new user namespace, container/storage "chowns" (i.e., changes ownership for) all files in the image to the UIDs mapped in the user namespace and creates a new image. Think of this as the **fedora:0:100000:5000** image. + +When Podman runs another container on the image with the same UID mappings, it uses the "pre-chowned" image. When I run the second container on 0:200000:5000, containers/storage creates a second image, let's call it **fedora:0:200000:5000**. + +Note if you are doing a **podman build** or **podman commit** and push the newly created image to a container registry, Podman will use container/storage to reverse the shift and push the image with all files chowned back to real UID=0. + +This can cause a real slowdown in creating containers in new UID mappings since the **chown** can be slow depending on the number of files in the image. Also, on a normal [OverlayFS][8], every file in the image gets copied up. The normal Fedora image can take up to 30 seconds to finish the chown and start the container. + +Luckily, the Red Hat kernel storage team, primarily Vivek Goyal and Miklos Szeredi, added a new feature to OverlayFS in kernel 4.19. The feature is called **metadata only copy-up**. If you mount an overlay filesystem with **metacopy=on** as a mount option, it will not copy up the contents of the lower layers when you change file attributes; the kernel creates new inodes that include the attributes with references pointing at the lower-level data. It will still copy up the contents if the content changes. This functionality is available in the Red Hat Enterprise Linux 8 Beta, if you want to try it out. + +This means container chowning can happen in a couple of seconds, and you won't double the storage space for each container. + +This makes running containers with tools like Podman in separate user namespaces viable, greatly increasing the security of the system. + +### Going forward + +I want to add a new flag, like **\--userns=auto** , to Podman that will tell it to automatically pick a unique user namespace for each container you run. This is similar to the way SELinux works with separate multi-category security (MCS) labels. If you set the environment variable **PODMAN_USERNS=auto** , you won't even need to set the flag. + +Podman is finally allowing users to run containers in separate user namespaces. Tools like [Buildah][9] and [CRI-O][10] will also be able to take advantage of user namespaces. For CRI-O, however, Kubernetes needs to understand which user namespace will run the container engine, and the upstream is working on that. 
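+Before wrapping up, here is a quick way to see the metadata-only copy-up behavior described above for yourself, outside of any container engine. On a 4.19 or newer kernel you can mount an overlay filesystem by hand with the **metacopy=on** option; the directories below are just placeholders you create yourself:
+
+```
+# any empty directories will do for this experiment
+$ sudo mkdir -p /tmp/overlay/{lower,upper,work,merged}
+
+# mount the overlay with metadata-only copy-up enabled
+$ sudo mount -t overlay overlay \
+      -o lowerdir=/tmp/overlay/lower,upperdir=/tmp/overlay/upper,workdir=/tmp/overlay/work,metacopy=on \
+      /tmp/overlay/merged
+```
+
+With that mount in place, changing only a file's attributes (for example, with `chown`) creates a new inode in the upper layer that references the lower layer's data, rather than copying the file's contents up.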
+ +In my next article, I will explain how to run Podman as non-root in a user namespace. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/podman-and-user-namespaces + +作者:[Daniel J Walsh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rhatdan +[b]: https://github.com/lujun9972 +[1]: https://podman.io/ +[2]: https://github.com/containers/libpod +[3]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers +[4]: http://man7.org/linux/man-pages/man7/user_namespaces.7.html +[5]: https://chmodcommand.com/chmod-660/ +[6]: https://en.wikipedia.org/wiki/Chown +[7]: https://github.com/containers/storage +[8]: https://en.wikipedia.org/wiki/OverlayFS +[9]: https://buildah.io/ +[10]: http://cri-o.io/ diff --git a/sources/tech/20181214 Tips for using Flood Element for performance testing.md b/sources/tech/20181214 Tips for using Flood Element for performance testing.md new file mode 100644 index 0000000000..90994b0724 --- /dev/null +++ b/sources/tech/20181214 Tips for using Flood Element for performance testing.md @@ -0,0 +1,180 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Tips for using Flood Element for performance testing) +[#]: via: (https://opensource.com/article/18/12/tips-flood-element-testing) +[#]: author: (Nicole van der Hoeven https://opensource.com/users/nicolevanderhoeven) + +Tips for using Flood Element for performance testing +====== +Get started with this powerful, intuitive load testing tool. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn) + +In case you missed it, there’s a new performance test tool on the block: [Flood Element][1]. It’s a scalable, browser-based tool that allows you to write scripts in JavaScript that interact with web pages like a real user would. + +Browser Level Users is a [newer approach to load testing][2] that overcomes many of the common challenges we hear about traditional methods of testing. It offers: + + * Scripting that is akin to common functional tools like Selenium and easier to learn + * More realistic results that are based on true browser performance rather than API response + * The ability to test against all components of your web app, including things like JavaScript that are rendered via the browser + + + +Given the above benefits, it’s a no-brainer to check out Flood Element for your web load testing, especially if you have struggled with existing tools like JMeter or HP LoadRunner. + +Pairing Element with [Flood][3] turns it into a pretty powerful load test tool. We have a [great guide here][4] that you can follow if you’d like to get started. I’ve been using and testing Element for several months now, and I’d like to share some tips I’ve learned along the way. + +### Initializing your script + +You can always start from scratch, but the quickest way to get started is to type `element init myfirstelementtest` from your terminal, filling in your preferred project name. + +You’ll then be asked to type the title of your test as well as the URL you’d like to script against. 
After a minute, you’ll see that a new directory has been created: + +![](https://opensource.com/sites/default/files/uploads/image_1_-_new_directory.png) + +Element will automatically create a file called **test.ts**. This file contains the skeleton of a script, along with some sample code to help you find a button and then click on it. But before you open it, let’s move on to… + +### Choosing the right text editor + +Scripting in Element is already pretty simple, but two things that help are syntax highlighting and code completion. Syntax highlighting will greatly improve the experience of learning a new test tool like Element, and code completion will make your scripting lightning-fast as you become more experienced. My text editor of choice is [Visual Studio Code][5], which has both of those features. It’s slick and clean, and it does the job. + +Syntax highlighting is when the text editor intelligently changes the font color of your code according to its role in the programming language you’re using. Here’s a screenshot of the **test.ts** file we generated earlier in VS Code to show you what I mean: + +![](https://opensource.com/sites/default/files/uploads/image_2_test.ts_.png) + +This makes it easier to make sense of the code at a glance: Comments are in green, values and labels are in orange, etc. + +Code completion is when you start to type something, and VS Code helpfully opens a context menu with suggestions for methods you can use. + +![][6] + +I love this because it means I don’t need to remember the exact name of the method. It also suggests names of variables you’ve already defined and highlights code that doesn’t make sense. This will help to make your tests more maintainable and readable for others, which is a great benefit as you look to scale your testing out in the future. + +![](https://opensource.com/sites/default/files/image-4-element-visible-copy.gif) + +### Taking screenshots + +One of the most powerful features of Element is its ability to take screenshots. I find it immensely useful when debugging because sometimes it’s just easier to see what’s going on visually. With protocol-based tools, debugging can be a much more involved and technical process. + +There are two ways to take screenshots in Element: + + 1. Add a setting to automatically take a screenshot when an error is encountered. You can do this by setting `screenshotOnFailure` to "true" in `TestSettings`: + + + +``` +export const settings: TestSettings = { +        device: Device.iPadLandscape, +        userAgent: 'flood-chrome-test', +        clearCache: true, +        disableCache: true, +        screenshotOnFailure: true, +} +``` + + 2. Explicitly take a screenshot at a particular point in the script. You can do this by adding + + + +``` +await browser.takeScreenshot() +``` + +to your code. + +### Viewing screenshots + +Once you’ve taken screenshots within your tests, you will probably want to view them and know that they will be stored for future safekeeping. Whether you are running your test locally on have uploaded it to Flood to run with increased concurrency, Flood Element has you covered. + +**Locally run tests** + +Screenshots will be saved as .jpg files in a timestamped folder corresponding to your run. It should look something like this: **…myfirstelementtest/tmp/element-results/test/2018-11-20T135700.595Z/flood/screenshots/**. The screenshots will be uniquely named so that new screenshots, even for the same step, don’t overwrite older ones. 
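+If you find yourself digging through those timestamped folders often, a small shell one-liner (a sketch that assumes the directory layout shown above and is run from the project directory) will jump straight to the most recent run's screenshots:
+
+```
+# cd into the screenshots folder of the newest element-results run
+cd "$(ls -1dt tmp/element-results/test/*/ | head -n 1)flood/screenshots"
+```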
+ +However, I rarely need to look up the screenshots in that folder because I prefer to see them in iTerm2 for MacOS. iTerm is an alternative to the terminal that works particularly well with Element. When you take a screenshot, iTerm actually shows it in-line: + +![](https://opensource.com/sites/default/files/uploads/image_5_iterm_inline.png) + +**Tests run in Flood** + +Running an Element script on Flood is ideal when you need larger concurrency. Rather than accessing your screenshot locally, Flood will centralize the images into your account, so the images remain even after the cloud load injectors are destroyed. You can get to the screenshot files by downloading Archived Results: + +![](https://opensource.com/sites/default/files/image_6_archived_results.png) + +You can also click on a step on the dashboard to see a filmstrip of your test: + +![](https://opensource.com/sites/default/files/uploads/image_7_filmstrip_view.png) + +### Using logs + +You may need to check out the logs for more technical debugging, especially when the screenshots don’t tell the whole story. Again, whether you are running your test locally or have uploaded it to Flood to run with increased concurrency, Flood Element has you covered. + +**Locally run tests** + +You can print to the console by typing, for example: `console.log('orderValues = ’ + orderValues)` + +This will print the value of the variable `orderValues` at that point in the script. You would see this in your terminal if you’re running Element locally. + +**Tests run in Flood** + +If you’re running the script on Flood, you can either download the log (in the same Archived Results zipped file mentioned earlier) or click on the Logs tab: + +![](https://opensource.com/sites/default/files/uploads/image_8_logs_tab.png) + +### Fun with flags + +Element comes with a few flags that give you more control over how the script is run locally. Here are a few of my favorites: + +**Headless flag** + +When in doubt, run Element in non-headless mode to see the script actually opening the web app on Chrome and interacting with the page. This is only possible locally, but there’s nothing like actually seeing for yourself what’s happening in real time instead of relying on screenshots and logs after the fact. To enable this mode, add the flag when running your test: + +``` +element run myfirstelementtest.ts --no-headless +``` + +**Watch flag** + +Element will automatically close the browser window when it encounters an error or finishes the iteration. Adding `--watch` will leave the browser window open and then monitor the script. As soon as the script is saved, it will automatically run it in the same window from the beginning. Simply add this flag like the above example: + +``` +--watch +``` + +**Dev tools flag** + +This opens a browser instance and runs the script with the Chrome Dev Tools open, allowing you to find locators for the next action you want to script. Simply add this flag as in the first example: + +``` +--dev-tools +``` + +For more flags, use `element run --help`. + +### Try Element + +You’ve just gotten a crash course on Flood Element and are ready to get started. [Download Element][1] to start writing functional test scripts and reusing them as load test scripts on Flood. If you don’t have a Flood account, you can easily sign up for a free trial [on the Flood website][7]. + +We’re proud to contribute to the open source community and can’t wait to have you try this new addition to the Flood line. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/tips-flood-element-testing + +作者:[Nicole van der Hoeven][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nicolevanderhoeven +[b]: https://github.com/lujun9972 +[1]: https://element.flood.io/ +[2]: https://flood.io/blog/why-you-should-load-test-with-browsers/ +[3]: https://flood.io/ +[4]: https://help.flood.io/getting-started-with-load-testing/step-by-step-guide-flood-element +[5]: https://code.visualstudio.com/ +[6]: https://flood.io/wp-content/uploads/2018/11/vscode-codecompletion2.gif +[7]: https://flood.io/load-performance-testing-tool/free-load-testing-trial/ diff --git a/sources/tech/20181217 6 tips and tricks for using KeePassX to secure your passwords.md b/sources/tech/20181217 6 tips and tricks for using KeePassX to secure your passwords.md new file mode 100644 index 0000000000..ad688a7820 --- /dev/null +++ b/sources/tech/20181217 6 tips and tricks for using KeePassX to secure your passwords.md @@ -0,0 +1,78 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (6 tips and tricks for using KeePassX to secure your passwords) +[#]: via: (https://opensource.com/article/18/12/keepassx-security-best-practices) +[#]: author: (Michael McCune https://opensource.com/users/elmiko) + +6 tips and tricks for using KeePassX to secure your passwords +====== +Get more out of your password manager by following these best practices. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum) + +Our increasingly interconnected digital world makes security an essential and common discussion topic. We hear about [data breaches][1] with alarming regularity and are often on our own to make informed decisions about how to use technology securely. Although security is a deep and nuanced topic, there are some easy daily habits you can keep to reduce your attack surface. + +Securing passwords and account information is something that affects anyone today. Technologies like [OAuth][2] help make our lives simpler by reducing the number of accounts we need to create, but we are still left with a staggering number of places where we need new, unique information to keep our records secure. An easy way to deal with the increased mental load of organizing all this sensitive information is to use a password manager like [KeePassX][3]. + +In this article, I will explain the importance of keeping your password information secure and offer suggestions for getting the most out of KeePassX. For an introduction to KeePassX and its features, I highly recommend Ricardo Frydman's article "[Managing passwords in Linux with KeePassX][4]." + +### Why are unique passwords important? + +Using a different password for each account is the first step in ensuring that your accounts are not vulnerable to shared information leaks. Generating new credentials for every account is time-consuming, and it is extremely common for people to fall into the trap of using the same password on several accounts. The main problem with reusing passwords is that you increase the number of accounts an attacker could access if one of them experiences a credential breach. 
+ +It may seem like a burden to create new credentials for each account, but the few minutes you spend creating and recording this information will pay for itself many times over in the event of a data breach. This is where password management tools like KeePassX are invaluable for providing convenience and reliability in securing your logins. + +### 3 tips for getting the most out of KeePassX + +I have been using KeePassX to manage my password information for many years, and it has become a primary resource in my digital toolbox. Overall, it's fairly simple to use, but there are a few best practices I've learned that I think are worth highlighting. + + 1. Add the direct login URL for each account entry. KeePassX has a very convenient shortcut to open the URL listed with an entry. (It's Control+Shift+U on Linux.) When creating a new account entry for a website, I spend some time to locate the site's direct login URL. Although most websites have a login widget in their navigation toolbars, they also usually have direct pages for login forms. By putting this URL into the URL field on the account entry setup form, I can use the shortcut to directly open the login page in my browser. + +![](https://opensource.com/sites/default/files/uploads/keepassx-tip1.png) + + 2. Use the Notes field to record extra security information. In addition to passwords, most websites will ask several questions to create additional authentication factors for an account. I use the Notes sections in my account entries to record these additional factors. + +![](https://opensource.com/sites/default/files/uploads/keepassx-tip2.png) + + 3. Turn on automatic database locking. In the **Application Settings** under the **Tools** menu, there is an option to lock the database after a period of inactivity. Enabling this option is a good common-sense measure, similar to enabling a password-protected screen lock, that will help ensure your password database is not left open and unprotected if someone else gains access to your computer. + +![](https://opensource.com/sites/default/files/uploads/keepassx_application-settings.png) + +### Food for thought + +Protecting your accounts with better password practices and daily habits is just the beginning. Once you start using a password manager, you need to consider issues like protecting the password database file and ensuring you don't forget or lose the master credentials. + +The cloud-native world of disconnected devices and edge computing makes having a central password store essential. The practices and methodologies you adopt will help minimize your risk while you explore and work in the digital world. + + 1. Be aware of retention policies when storing your database in the cloud. KeePassX's database has an open format used by several tools on multiple platforms. Sooner or later, you will want to transfer your database to another device. As you do this, consider the medium you will use to transfer the file. The best option is to use some sort of direct transfer between devices, but this is not always convenient. Always think about where the database file might be stored as it winds its way through the information superhighway; an email may get cached on a server, an object store may move old files to a trash folder. Learn about these interactions for the platforms you are using before deciding where and how you will share your database file. + + 2. Consider the source of truth for your database while you're making edits. 
After you share your database file between devices, you might need to create accounts for new services or change information for existing services while using a device. To ensure your information is always correct across all your devices, you need to make sure any edits you make on one device end up in all copies of the database file. There is no easy solution to this problem, but you might think about making all edits from a single device or storing the master copy in a location where all your devices can make edits. + + 3. Do you really need to know your passwords? This is more of a philosophical question that touches on the nature of memorable passwords, convenience, and secrecy. I hardly look at passwords as I create them for new accounts; in most cases, I don't even click the "Show Password" checkbox. There is an idea that you can be more secure by not knowing your passwords, as it would be impossible to compel you to provide them. This may seem like a worrisome idea at first, but consider that you can recover or reset passwords for most accounts through alternate verification methods. When you consider that you might want to change your passwords on a semi-regular basis, it almost makes more sense to treat them as ephemeral information that can be regenerated or replaced. + + + + +Here are a few more ideas to consider as you develop your best practices. + +I hope these tips and tricks have helped expand your knowledge of password management and KeePassX. You can find tools that support the KeePass database format on nearly every platform. If you are not currently using a password manager or have never tried KeePassX, I highly recommend doing so now! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/keepassx-security-best-practices + +作者:[Michael McCune][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/elmiko +[b]: https://github.com/lujun9972 +[1]: https://vigilante.pw/ +[2]: https://en.wikipedia.org/wiki/OAuth +[3]: https://www.keepassx.org/ +[4]: https://opensource.com/business/16/5/keepassx diff --git a/sources/tech/20181218 Insync- The Hassleless Way of Using Google Drive on Linux.md b/sources/tech/20181218 Insync- The Hassleless Way of Using Google Drive on Linux.md new file mode 100644 index 0000000000..c10e7ae4ed --- /dev/null +++ b/sources/tech/20181218 Insync- The Hassleless Way of Using Google Drive on Linux.md @@ -0,0 +1,137 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Insync: The Hassleless Way of Using Google Drive on Linux) +[#]: via: (https://itsfoss.com/insync-linux-review/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Insync: The Hassleless Way of Using Google Drive on Linux +====== + +Using Google Drive on Linux is a pain and you probably already know that. There is no official desktop client of Google Drive for Linux. It’s been [more than six years since Google promised Google Drive on Linux][1] but it doesn’t seem to be happening. + +In the absence of the official Google Drive client on Linux, you have no option other than trying the alternatives. I have already discussed a number of [tools that allow you to use Google Drive on Linux][2]. 
One of those to[ols is][3] Insync, and in my opinion, this is your best bet for a native Google Drive experience on desktop Linux. + +Note that Insync is not an open source software. Heck, it is not even free to use. + +But it has so many features that it becomes an essential tool for those Linux users who rely heavily on Google Drive. + +I briefly discussed Insync in the old article about [Google Drive and Linux][2]. In this article, I’ll discuss Insync features in detail. + +### Insync brings native Google Drive experience to Linux desktop + +![Use insync to access Google Drive in Linux][4] + +The core competency of Insync is syncing your Google Drive, but the app is much more than that. It has features to help you maximize and control your productivity, your Google Drive and your files such as: + + * Cross-platform access (supports Linux, Windows and macOS) + * Easy multiple Google Drive accounts access + * Choose your syncing location. Sync files to your hard drive, external drives and NAS! + * Support for features like file matching, symlink and ignore list + + + +Let me show you some of the main features in action: + +#### Cross-platform in true sense + +Insync claims to run the same app across all operating systems i.e., Linux, Windows, and macOS. That means that you can access the same UI across different OSes, making it easy for you to manage your files across multiple machines. + +![The UI of Insync and the default location of the Insync folder.][5]The UI of Insync and the default location of the Insync folder. + +#### Multiple Google account management + +Insync interface allows you to manage multiple Google Drive accounts seamlessly. You can easily switch between several accounts just by clicking your Google account. + +![Switching between multiple Google accounts in Insync][6]Switching between multiple Google accounts + +#### Custom sync folders + +Customize the way you sync your files and folders. You can easily set your syncing destination anywhere on your machine including external drive and network drives. + +![Customize sync location in Insync][7]Customize sync location + +The selective syncing mode also allows you to easily select a number of files and folders you’d want to sync (or unsync) in your local machine. This includes selectively syncing files within folders. + +![Selective synchronization in Insync][8]Selective synchronization + +It has features like file matching and ‘ignore list’ to help you filter files you don’t want to sync or files that you already have on your machine. + +![File matching feature in Insync][9]Avoids duplication of files + +The ‘ignore list’ allows you to set rules to exclude certain type of files from synchronization. + +![Selective syncing based on rules in Insync][10]Selective syncing based on rules + +If you prefer to work out of the desktop, you have an “Add to Insync” feature that will allow you to add any local file to your Drive. + +![Sync files right from your desktop][11]Sync files right from your desktop + +Insync also supports symlinks for those with workflows that use symbolic links. To learn more about Insync and symlinks, you can refer to [this article.][12] + +#### Exclusive features for Linux + +Insync supports the most commonly used 64-bit Linux distributions like **Ubuntu, Debian and Fedora**. You can check out the full list of distribution support [here][13]. + +Insync also has [headless][14] support for those looking to sync through the command line interface. 
This is perfect if you use a distro that is not fully supported by the GUI app or if you are working with servers or if you simply prefer the CLI. + +![Insync CLI][15]Command Line Interface + +You can learn more about installing and running Insync headless [here][16]. + +### Insync pricing and special discount + +Insync is a premium tool and it comes with a [price tag][17]. You have 2 licenses to choose from: + + * **Prime** is priced at $29.99 per Google account. You’ll get access to: cross-platform syncing, multiple accounts access and **support**. + * **Teams** is priced at $49.99 per Google account. You’ll be able to access all the Prime features + Team Drives syncing + + + +It’s a one-time fee which means once you buy it, you don’t have to pay it again. In a world where everything is paid monthly, it’s refreshing to pay for software that is still one-time! + +Each Google account has a 15-day free trial that will allow you to test the full suite of features, including [Team Drives][18] syncing. + +If you think it’s a bit expensive for your budget, I have good news for you. As an It’s FOSS reader, you get Insync at 25% discount. + +Just use the code ITSFOSS25 at checkout time and you will get 25% immediate discount on any license. Isn’t it cool? + +If you are not certain yet, you can try Insync free for 15 days. And if you think it’s worth the money, purchase the license with **ITSFOSS25** coupon code. + +You can download Insync from their website. + +I have used Insync from the time when it was available for free and I have always liked it. They have added more features over the time and improved its UI and performance. Overall, it’s a nice-to-have application if you use Google Drive a lot and do not mind paying for the efforts of the developers. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/insync-linux-review/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://abevoelker.github.io/how-long-since-google-said-a-google-drive-linux-client-is-coming/ +[2]: https://itsfoss.com/use-google-drive-linux/ +[3]: https://www.insynchq.com +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/12/google-drive-linux-insync.jpeg?resize=800%2C450&ssl=1 +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_interface.jpeg?fit=800%2C501&ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_multiple_google_account.jpeg?ssl=1 +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_folder_settings.png?ssl=1 +[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_selective_sync.png?ssl=1 +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_file_matching.jpeg?ssl=1 +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_ignore_list_1.png?ssl=1 +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/add-to-insync-shortcut.jpeg?ssl=1 +[12]: https://help.insynchq.com/key-features-and-syncing-explained/syncing-superpowers/using-symlinks-on-google-drive-with-insync +[13]: https://www.insynchq.com/downloads +[14]: https://en.wikipedia.org/wiki/Headless_software +[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_cli.jpeg?fit=800%2C478&ssl=1 +[16]: https://help.insynchq.com/installation-on-windows-linux-and-macos/advanced/linux-controlling-insync-via-command-line-cli +[17]: https://www.insynchq.com/pricing +[18]: https://gsuite.google.com/learning-center/products/drive/get-started-team-drive/#!/ diff --git a/sources/tech/20181220 Getting started with Prometheus.md b/sources/tech/20181220 Getting started with Prometheus.md new file mode 100644 index 0000000000..79704addb7 --- /dev/null +++ b/sources/tech/20181220 Getting started with Prometheus.md @@ -0,0 +1,166 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Getting started with Prometheus) +[#]: via: (https://opensource.com/article/18/12/introduction-prometheus) +[#]: author: (Michael Zamot https://opensource.com/users/mzamot) + +Getting started with Prometheus +====== +Learn to install and write queries for the Prometheus monitoring and alerting system. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn) + +[Prometheus][1] is an open source monitoring and alerting system that directly scrapes metrics from agents running on the target hosts and stores the collected samples centrally on its server. Metrics can also be pushed using plugins like **collectd_exporter** —although this is not Promethius' default behavior, it may be useful in some environments where hosts are behind a firewall or prohibited from opening ports by security policy. + +Prometheus, a project of the [Cloud Native Computing Foundation][2], scales up using a federation model, which enables one Prometheus server to scrape another Prometheus server. 
This allows creation of a hierarchical topology, where a central system or higher-level Prometheus server can scrape aggregated data already collected from subordinate instances. + +Besides the Prometheus server, its most common components are its [Alertmanager][3] and its exporters. + +Alerting rules can be created within Prometheus and configured to send custom alerts to Alertmanager. Alertmanager then processes and handles these alerts, including sending notifications through different mechanisms like email or third-party services like [PagerDuty][4]. + +Prometheus' exporters can be libraries, processes, devices, or anything else that exposes the metrics that will be scraped by Prometheus. The metrics are available at the endpoint **/metrics** , which allows Prometheus to scrape them directly without needing an agent. The tutorial in this article uses **node_exporter** to expose the target hosts' hardware and operating system metrics. Exporters' outputs are plaintext and highly readable, which is one of Prometheus' strengths. + +In addition, you can configure [Grafana][5] to use Prometheus as a backend to provide data visualization and dashboarding functions. + +### Making sense of Prometheus' configuration file + +The number of seconds between when **/metrics** is scraped controls the granularity of the time-series database. This is defined in the configuration file as the **scrape_interval** parameter, which by default is set to 60 seconds. + +Targets are set for each scrape job in the **scrape_configs** section. Each job has its own name and a set of labels that can help filter, categorize, and make it easier to identify the target. One job can have many targets. + +### Installing Prometheus + +In this tutorial, for simplicity, we will install a Prometheus server and **node_exporter** with docker. Docker should already be installed and configured properly on your system. For a more in-depth, automated method, I recommend Steve Ovens' article [How to use Ansible to set up system monitoring with Prometheus][6]. + +Before starting, create the Prometheus configuration file **prometheus.yml** in your work directory as follows: + +``` +global: +  scrape_interval:      15s +  evaluation_interval: 15s + +scrape_configs: +  - job_name: 'prometheus' + +        static_configs: +        - targets: ['localhost:9090'] + +  - job_name: 'webservers' + +        static_configs: +        - targets: [':9100'] +``` + +Start Prometheus with Docker by running the following command: + +``` +$ sudo docker run -d -p 9090:9090 -v +/path/to/prometheus.yml:/etc/prometheus/prometheus.yml +prom/prometheus +``` + +By default, the Prometheus server will use port 9090. If this port is already in use, you can change it by adding the parameter **\--web.listen-address=" :"** at the end of the previous command. + +In the machine you want to monitor, download and run the **node_exporter** container by using the following command: + +``` +$ sudo docker run -d -v "/proc:/host/proc" -v "/sys:/host/sys" -v +"/:/rootfs" --net="host" prom/node-exporter --path.procfs +/host/proc --path.sysfs /host/sys --collector.filesystem.ignored- +mount-points "^/(sys|proc|dev|host|etc)($|/)" +``` + +For the purposes of this learning exercise, you can install **node_exporter** and Prometheus on the same machine. Please note that it's not wise to run **node_exporter** under Docker in production—this is for testing purposes only. + +To verify that **node_exporter** is running, open your browser and navigate to **http:// :9100/metrics**. 
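If the target machine is headless or you simply prefer the terminal, the same check can be done with curl — a quick sketch, assuming node_exporter is listening on its default port 9100 on localhost:

```
# show the first few metrics exposed by node_exporter
curl -s http://localhost:9100/metrics | head -n 20

# count the exposed metric samples, excluding comment lines
curl -s http://localhost:9100/metrics | grep -vc '^#'
```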
All the metrics collected will be displayed; these are the same metrics Prometheus will scrape.

![](https://opensource.com/sites/default/files/uploads/check-node_exporter.png)

To verify the Prometheus server installation, open your browser and navigate to **http://localhost:9090**.

You should see the Prometheus interface. Click on **Status** and then **Targets**. Under State, you should see your machines listed as **UP**.

![](https://opensource.com/sites/default/files/uploads/targets-up.png)

### Using Prometheus queries

It’s time to get familiar with [PromQL][7], Prometheus’ query syntax, and its graphing web interface. Go to **http://localhost:9090/graph** on your Prometheus server. You will see a query editor and two tabs: Graph and Console.

Prometheus stores all data as time series, identifying each one with a metric name. For example, the metric **node_filesystem_avail_bytes** shows the available filesystem space. The metric’s name can be used in the expression box to select all of the time series with this name and produce an instant vector. If desired, these time series can be filtered using selectors and labels—a set of key-value pairs—for example:

```
node_filesystem_avail_bytes{fstype="ext4"}
```

When filtering, you can match “exactly equal” ( **=** ), “not equal” ( **!=** ), “regex-match” ( **=~** ), and “do not regex-match” ( **!~** ). The following examples illustrate this:

To filter **node_filesystem_avail_bytes** to show both ext4 and XFS filesystems:

```
node_filesystem_avail_bytes{fstype=~"ext4|xfs"}
```

To exclude a match:

```
node_filesystem_avail_bytes{fstype!="xfs"}
```

You can also get a range of samples back from the current time by using square brackets. You can use **s** to represent seconds, **m** for minutes, **h** for hours, **d** for days, **w** for weeks, and **y** for years. When using time ranges, the vector returned will be a range vector.

For example, the following command produces the samples from five minutes to the present:

```
node_memory_MemAvailable_bytes[5m]
```

Prometheus also includes functions to allow advanced queries, such as this:

```
100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))
```

Notice how the labels are used to filter the job and the mode. The metric **node_cpu_seconds_total** returns a counter, and the **irate()** function calculates the per-second rate of change based on the last two data points of the range interval (meaning the range can be smaller than five minutes). To calculate the overall CPU usage, you can use the idle mode of the **node_cpu_seconds_total** metric. The idle percent of a processor is the opposite of a busy processor, so the **irate** value is subtracted from 1. To make it a percentage, multiply it by 100.

![](https://opensource.com/sites/default/files/uploads/cpu-usage.png)

### Learn more

Prometheus is a powerful, scalable, lightweight, and easy to use and deploy monitoring tool that is indispensable for every system administrator and developer. For these and other reasons, many companies are implementing Prometheus as part of their infrastructure.
+ +To learn more about Prometheus and its functions, I recommend the following resources: + ++ About [PromQL][8] ++ What [node_exporters collects][9] ++ [Prometheus functions][10] ++ [4 open source monitoring tools][11] ++ [Now available: The open source guide to DevOps monitoring tools][12] + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/introduction-prometheus + +作者:[Michael Zamot][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mzamot +[b]: https://github.com/lujun9972 +[1]: https://prometheus.io/ +[2]: https://www.cncf.io/ +[3]: https://prometheus.io/docs/alerting/alertmanager/ +[4]: https://en.wikipedia.org/wiki/PagerDuty +[5]: https://grafana.com/ +[6]: https://opensource.com/article/18/3/how-use-ansible-set-system-monitoring-prometheus +[7]: https://prometheus.io/docs/prometheus/latest/querying/basics/ +[8]: https://prometheus.io/docs/prometheus/latest/querying/basics/ +[9]: https://github.com/prometheus/node_exporter#collectors +[10]: https://prometheus.io/docs/prometheus/latest/querying/functions/ +[11]: https://opensource.com/article/18/8/open-source-monitoring-tools +[12]: https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools diff --git a/sources/tech/20181221 Large files with Git- LFS and git-annex.md b/sources/tech/20181221 Large files with Git- LFS and git-annex.md new file mode 100644 index 0000000000..29a76f810f --- /dev/null +++ b/sources/tech/20181221 Large files with Git- LFS and git-annex.md @@ -0,0 +1,145 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Large files with Git: LFS and git-annex) +[#]: via: (https://anarc.at/blog/2018-12-21-large-files-with-git/) +[#]: author: (Anarc.at https://anarc.at/) + +Large files with Git: LFS and git-annex +====== + +Git does not handle large files very well. While there is work underway to handle large repositories through the [commit graph work][2], Git's internal design has remained surprisingly constant throughout its history, which means that storing large files into Git comes with a significant and, ultimately, prohibitive performance cost. Thankfully, other projects are helping Git address this challenge. This article compares how Git LFS and git-annex address this problem and should help readers pick the right solution for their needs. + +### The problem with large files + +As readers probably know, Linus Torvalds wrote Git to manage the history of the kernel source code, which is a large collection of small files. Every file is a "blob" in Git's object store, addressed by its cryptographic hash. A new version of that file will store a new blob in Git's history, with no deduplication between the two versions. The pack file format can store binary deltas between similar objects, but if many objects of similar size change in a repository, that algorithm might fail to properly deduplicate. In practice, large binary files (say JPEG images) have an irritating tendency of changing completely when even the smallest change is made, which makes delta compression useless. + +There have been different attempts at fixing this in the past. 
In 2006, Torvalds worked on [improving the pack-file format][3] to reduce object duplication between the index and the pack files. Those changes were eventually reverted because, as Nicolas Pitre [put it][4]: "that extra loose object format doesn't appear to be worth it anymore". + +Then in 2009, [Caca Labs][5] worked on improving the `fast-import` and `pack-objects` Git commands to do special handling for big files, in an effort called [git-bigfiles][6]. Some of those changes eventually made it into Git: for example, since [1.7.6][7], Git will stream large files directly to a pack file instead of holding them all in memory. But files are still kept forever in the history. + +An example of trouble I had to deal with is for the Debian security tracker, which follows all security issues in the entire Debian history in a single file. That file is around 360,000 lines for a whopping 18MB. The resulting repository takes 1.6GB of disk space and a local clone takes 21 minutes to perform, mostly taken up by Git resolving deltas. Commit, push, and pull are noticeably slower than a regular repository, taking anywhere from a few seconds to a minute depending one how old the local copy is. And running annotate on that large file can take up to ten minutes. So even though that is a simple text file, it's grown large enough to cause significant problems for Git, which is otherwise known for stellar performance. + +Intuitively, the problem is that Git needs to copy files into its object store to track them. Third-party projects therefore typically solve the large-files problem by taking files out of Git. In 2009, Git evangelist Scott Chacon released [GitMedia][8], which is a Git filter that simply takes large files out of Git. Unfortunately, there hasn't been an official release since then and it's [unclear][9] if the project is still maintained. The next effort to come up was [git-fat][10], first released in 2012 and still maintained. But neither tool has seen massive adoption yet. If I would have to venture a guess, it might be because both require manual configuration. Both also require a custom server (rsync for git-fat; S3, SCP, Atmos, or WebDAV for GitMedia) which limits collaboration since users need access to another service. + +### Git LFS + +That was before GitHub [released][11] Git Large File Storage (LFS) in August 2015. Like all software taking files out of Git, LFS tracks file hashes instead of file contents. So instead of adding large files into Git directly, LFS adds a pointer file to the Git repository, which looks like this: + +``` +version https://git-lfs.github.com/spec/v1 +oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393 +size 12345 +``` + +LFS then uses Git's smudge and clean filters to show the real file on checkout. Git only stores that small text file and does so efficiently. The downside, of course, is that large files are not version controlled: only the latest version of a file is kept in the repository. + +Git LFS can be used in any repository by installing the right hooks with `git lfs install` then asking LFS to track any given file with `git lfs track`. This will add the file to the `.gitattributes` file which will make Git run the proper LFS filters. It's also possible to add patterns to the `.gitattributes` file, of course. 
For example, this will make sure Git LFS will track MP3 and ZIP files: + +``` +$ cat .gitattributes +*.mp3 filter=lfs -text +*.zip filter=lfs -text +``` + +After this configuration, we use Git normally: `git add`, `git commit`, and so on will talk to Git LFS transparently. + +The actual files tracked by LFS are copied to a path like `.git/lfs/objects/{OID-PATH}`, where `{OID-PATH}` is a sharded file path of the form `OID[0:2]/OID[2:4]/OID` and where `OID` is the content's hash (currently SHA-256) of the file. This brings the extra feature that multiple copies of the same file in the same repository are automatically deduplicated, although in practice this rarely occurs. + +Git LFS will copy large files to that internal storage on `git add`. When a file is modified in the repository, Git notices, the new version is copied to the internal storage, and the pointer file is updated. The old version is left dangling until the repository is pruned. + +This process only works for new files you are importing into Git, however. If a Git repository already has large files in its history, LFS can fortunately "fix" repositories by retroactively rewriting history with [git lfs migrate][12]. This has all the normal downsides of rewriting history, however --- existing clones will have to be reset to benefit from the cleanup. + +LFS also supports [file locking][13], which allows users to claim a lock on a file, making it read-only everywhere except in the locking repository. This allows users to signal others that they are working on an LFS file. Those locks are purely advisory, however, as users can remove other user's locks by using the `--force` flag. LFS can also [prune][14] old or unreferenced files. + +The main [limitation][15] of LFS is that it's bound to a single upstream: large files are usually stored in the same location as the central Git repository. If it is hosted on GitHub, this means a default quota of 1GB storage and bandwidth, but you can purchase additional "packs" to expand both of those quotas. GitHub also limits the size of individual files to 2GB. This [upset][16] some users surprised by the bandwidth fees, which were previously hidden in GitHub's cost structure. + +While the actual server-side implementation used by GitHub is closed source, there is a [test server][17] provided as an example implementation. Other Git hosting platforms have also [implemented][18] support for the LFS [API][19], including GitLab, Gitea, and BitBucket; that level of adoption is something that git-fat and GitMedia never achieved. LFS does support hosting large files on a server other than the central one --- a project could run its own LFS server, for example --- but this will involve a different set of credentials, bringing back the difficult user onboarding that affected git-fat and GitMedia. + +Another limitation is that LFS only supports pushing and pulling files over HTTP(S) --- no SSH transfers. LFS uses some [tricks][20] to bypass HTTP basic authentication, fortunately. This also might change in the future as there are proposals to add [SSH support][21], resumable uploads through the [tus.io protocol][22], and other [custom transfer protocols][23]. + +Finally, LFS can be slow. Every file added to LFS takes up double the space on the local filesystem as it is copied to the `.git/lfs/objects` storage. The smudge/clean interface is also slow: it works as a pipe, but buffers the file contents in memory each time, which can be prohibitive with files larger than available memory. 
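To tie the pieces above together, here is what a minimal first session with Git LFS might look like; the file names and the remote are only placeholders:

```
# install the Git LFS hooks once per machine
git lfs install

# track a pattern; the rule is recorded in .gitattributes
git lfs track "*.psd"

# commit the attributes file together with the large file itself
git add .gitattributes design.psd
git commit -m "Add large design file via LFS"

# pushing uploads the LFS objects, then the regular Git objects
git push origin master

# show which files in the checkout are managed by LFS
git lfs ls-files
```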
+ +### git-annex + +The other main player in large file support for Git is git-annex. We [covered the project][24] back in 2010, shortly after its first release, but it's certainly worth discussing what has changed in the eight years since Joey Hess launched the project. + +Like Git LFS, git-annex takes large files out of Git's history. The way it handles this is by storing a symbolic link to the file in `.git/annex`. We should probably credit Hess for this innovation, since the Git LFS storage layout is obviously inspired by git-annex. The original design of git-annex introduced all sorts of problems however, especially on filesystems lacking symbolic-link support. So Hess has implemented different solutions to this problem. Originally, when git-annex detected such a "crippled" filesystem, it switched to [direct mode][25], which kept files directly in the work tree, while internally committing the symbolic links into the Git repository. This design turned out to be a little confusing to users, including myself; I have managed to shoot myself in the foot more than once using this system. + +Since then, git-annex has adopted a different v7 mode that is also based on smudge/clean filters, which it called "[unlocked files][26]". Like Git LFS, unlocked files will double disk space usage by default. However it is possible to reduce disk space usage by using "thin mode" which uses hard links between the internal git-annex disk storage and the work tree. The downside is, of course, that changes are immediately performed on files, which means previous file versions are automatically discarded. This can lead to data loss if users are not careful. + +Furthermore, git-annex in v7 mode suffers from some of the performance problems affecting Git LFS, because both use the smudge/clean filters. Hess actually has [ideas][27] on how the smudge/clean interface could be improved. He proposes changing Git so that it stops buffering entire files into memory, allows filters to access the work tree directly, and adds the hooks he found missing (for `stash`, `reset`, and `cherry-pick`). Git-annex already implements some tricks to work around those problems itself but it would be better for those to be implemented in Git natively. + +Being more distributed by design, git-annex does not have the same "locking" semantics as LFS. Locking a file in git-annex means protecting it from changes, so files need to actually be in the "unlocked" state to be editable, which might be counter-intuitive to new users. In general, git-annex has some of those unusual quirks and interfaces that often come with more powerful software. + +And git-annex is much more powerful: it not only addresses the "large-files problem" but goes much further. For example, it supports "partial checkouts" --- downloading only some of the large files. I find that especially useful to manage my video, music, and photo collections, as those are too large to fit on my mobile devices. Git-annex also has support for location tracking, where it knows how many copies of a file exist and where, which is useful for archival purposes. And while Git LFS is only starting to look at transfer protocols other than HTTP, git-annex already supports a [large number][28] through a [special remote protocol][29] that is fairly easy to implement. 
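For comparison with the LFS session above, a typical git-annex session might look roughly like this (file and repository names are placeholders):

```
# turn an existing Git repository into an annex
git annex init "work laptop"

# add a large file; the content is moved into .git/annex and a symlink (or pointer) is committed
git annex add video.mp4
git commit -m "Add video via git-annex"

# in another clone: fetch the actual content on demand
git annex get video.mp4

# see which repositories currently hold a copy
git annex whereis video.mp4

# remove the local copy; git-annex first checks that enough other copies exist
git annex drop video.mp4
```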
+ +"Large files" is therefore only scratching the surface of what git-annex can do: I have used it to build an [archival system for remote native communities in northern Québec][30], while others have built a [similar system in Brazil][31]. It's also used by the scientific community in projects like [GIN][32] and [DataLad][33], which manage terabytes of data. Another example is the [Japanese American Legacy Project][34] which manages "upwards of 100 terabytes of collections, transporting them from small cultural heritage sites on USB drives". + +Unfortunately, git-annex is not well supported by hosting providers. GitLab [used to support it][35], but since it implemented Git LFS, it [dropped support for git-annex][36], saying it was a "burden to support". Fortunately, thanks to git-annex's flexibility, it may eventually be possible to treat [LFS servers as just another remote][37] which would make git-annex capable of storing files on those servers again. + +### Conclusion + +Git LFS and git-annex are both mature and well maintained programs that deal efficiently with large files in Git. LFS is easier to use and is well supported by major Git hosting providers, but it's less flexible than git-annex. + +Git-annex, in comparison, allows you to store your content anywhere and espouses Git's distributed nature more faithfully. It also uses all sorts of tricks to save disk space and improve performance, so it should generally be faster than Git LFS. Learning git-annex, however, feels like learning Git: you always feel you are not quite there and you can always learn more. It's a double-edged sword and can feel empowering for some users and terrifyingly hard for others. Where you stand on the "power-user" scale, along with project-specific requirements will ultimately determine which solution is the right one for you. + +Ironically, after thorough evaluation of large-file solutions for the Debian security tracker, I ended up proposing to rewrite history and [split the file by year][38] which improved all performance markers by at least an order of magnitude. As it turns out, keeping history is critical for the security team so any solution that moves large files outside of the Git repository is not acceptable to them. Therefore, before adding large files into Git, you might want to think about organizing your content correctly first. But if large files are unavoidable, the Git LFS and git-annex projects allow users to keep using most of their current workflow. + +> This article [first appeared][39] in the [Linux Weekly News][40]. 
+ +-------------------------------------------------------------------------------- + +via: https://anarc.at/blog/2018-12-21-large-files-with-git/ + +作者:[Anarc.at][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://anarc.at/ +[b]: https://github.com/lujun9972 +[1]: https://anarc.at/blog/ +[2]: https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt +[3]: https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/ +[4]: https://public-inbox.org/git/alpine.LFD.0.99.0705091422130.24220@xanadu.home/ +[5]: http://caca.zoy.org/ +[6]: http://caca.zoy.org/wiki/git-bigfiles +[7]: https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/ +[8]: https://github.com/alebedev/git-media +[9]: https://github.com/alebedev/git-media/issues/15 +[10]: https://github.com/jedbrown/git-fat +[11]: https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/ +[12]: https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-migrate.1.ronn +[13]: https://github.com/git-lfs/git-lfs/wiki/File-Locking +[14]: https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-prune.1.ronn +[15]: https://github.com/git-lfs/git-lfs/wiki/Limitations +[16]: https://medium.com/@megastep/github-s-large-file-storage-is-no-panacea-for-open-source-quite-the-opposite-12c0e16a9a91 +[17]: https://github.com/git-lfs/lfs-test-server +[18]: https://github.com/git-lfs/git-lfs/wiki/Implementations%0A +[19]: https://github.com/git-lfs/git-lfs/tree/master/docs/api +[20]: https://github.com/git-lfs/git-lfs/blob/master/docs/api/authentication.md +[21]: https://github.com/git-lfs/git-lfs/blob/master/docs/proposals/ssh_adapter.md +[22]: https://tus.io/ +[23]: https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md +[24]: https://lwn.net/Articles/419241/ +[25]: http://git-annex.branchable.com/direct_mode/ +[26]: https://git-annex.branchable.com/tips/unlocked_files/ +[27]: http://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/ +[28]: http://git-annex.branchable.com/special_remotes/ +[29]: http://git-annex.branchable.com/special_remotes/external/ +[30]: http://isuma-media-players.readthedocs.org/en/latest/index.html +[31]: https://github.com/RedeMocambos/baobaxia +[32]: https://web.gin.g-node.org/ +[33]: https://www.datalad.org/ +[34]: http://www.densho.org/ +[35]: https://docs.gitlab.com/ee/workflow/git_annex.html +[36]: https://gitlab.com/gitlab-org/gitlab-ee/issues/1648 +[37]: https://git-annex.branchable.com/todo/LFS_API_support/ +[38]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=908678#52 +[39]: https://lwn.net/Articles/774125/ +[40]: http://lwn.net/ diff --git a/sources/tech/20181222 How to detect automatically generated emails.md b/sources/tech/20181222 How to detect automatically generated emails.md new file mode 100644 index 0000000000..23b509a77b --- /dev/null +++ b/sources/tech/20181222 How to detect automatically generated emails.md @@ -0,0 +1,144 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to detect automatically generated emails) +[#]: via: (https://arp242.net/weblog/autoreply.html) +[#]: author: (Martin Tournoij https://arp242.net/) + +How to detect automatically generated emails +====== + +### How to detect automatically generated emails + + +When you send out an auto-reply from an 
email system you want to take care to not send replies to automatically generated emails. At best, you will get a useless delivery failure. At words, you will get an infinite email loop and a world of chaos. + +Turns out that reliably detecting automatically generated emails is not always easy. Here are my observations based on writing a detector for this and scanning about 100,000 emails with it (extensive personal archive and company archive). + +### Auto-submitted header + +Defined in [RFC 3834][1]. + +This is the ‘official’ standard way to indicate your message is an auto-reply. You should **not** send a reply if `Auto-Submitted` is present and has a value other than `no`. + +### X-Auto-Response-Suppress header + +Defined [by Microsoft][2] + +This header is used by Microsoft Exchange, Outlook, and perhaps some other products. Many newsletters and such also set this. You should **not** send a reply if `X-Auto-Response-Suppress` contains `DR` (“Suppress delivery reports”), `AutoReply` (“Suppress auto-reply messages other than OOF notifications”), or `All`. + +### List-Id and List-Unsubscribe headers + +Defined in [RFC 2919][3] + +You usually don’t want to send auto-replies to mailing lists or news letters. Pretty much all mail lists and most newsletters set at least one of these headers. You should **not** send a reply if either of these headers is present. The value is unimportant. + +### Feedback-ID header + +Defined [by Google][4]. + +Gmail uses this header to identify mail newsletters, and uses it to generate statistics/reports for owners of those newsletters. You should **not** send a reply if this headers is present; the value is unimportant. + +### Non-standard ways + +The above methods are well-defined and clear (even though some are non-standard). Unfortunately some email systems do not use any of them :-( Here are some additional measures. + +#### Precedence header + +Not really defined anywhere, mentioned in [RFC 2076][5] where its use is discouraged (but this header is commonly encountered). + +Note that checking for the existence of this field is not recommended, as some ails use `normal` and some other (obscure) values (this is not very common though). + +My recommendation is to **not** send a reply if the value case-insensitively matches `bulk`, `auto_reply`, or `list`. + +#### Other obscure headers + +A collection of other (somewhat obscure) headers I’ve encountered. I would recommend **not** sending an auto-reply if one of these is set. Most mails also set one of the above headers, but some don’t (but it’s not very common). + + * `X-MSFBL`; can’t really find a definition (Microsoft header?), but I only have auto-generated mails with this header. + + * `X-Loop`; not really defined anywhere, and somewhat rare, but sometimes it’s set. It’s most often set to the address that should not get emails, but `X-Loop: yes` is also encountered. + + * `X-Autoreply`; fairly rare, and always seems to have a value of `yes`. + + + + +#### Email address + +Check if the `From` or `Reply-To` headers contains `noreply`, `no-reply`, or `no_reply` (regex: `^no.?reply@`). + +#### HTML only + +If an email only has a HTML part, but no text part it’s a good indication this is an auto-generated mail or newsletter. Pretty much all mail clients also set a text part. + +#### Delivery failures + +Many delivery failure messages don’t really indicate that they’re failures. 
Some ways to check this: + + * `From` contains `mailer-daemon` or `Mail Delivery Subsystem` + + + +Many mail libraries leave some sort of footprint, and most regular mail clients override this with their own data. Checking for this seems to work fairly well. + + * `X-Mailer: Microsoft CDO for Windows 2000` – Set by some MS software; I can only find it on autogenerated mails. Yes, it’s still used in 2015. + + * `Message-ID` header contains `.JavaMail.` – I’ve found a few (5 on 50k) regular messages with this, but not many; the vast majority (thousends) of messages are news-letters, order confirmations, etc. + + * `^X-Mailer` starts with `PHP`. This should catch both `X-Mailer: PHP/5.5.0` and `X-Mailer: PHPmailer blah blah`. The same as `JavaMail` applies. + + * `X-Library` presence; only [Indy][6] seems to set this. + + * `X-Mailer` starts with `wdcollect`. Set by some Plesk mails. + + * `X-Mailer` starts with `MIME-tools`. + + + + +### Final precaution: limit the number of replies + +Even when following all of the above advice, you may still encounter an email program that will slip through. This can very dangerous, as email systems that simply `IF email THEN send_email` have the potential to cause infinite email loops. + +For this reason, I recommend keeping track of which emails you’ve sent an autoreply to and rate limiting this to at most n emails in n minutes. This will break the back-and-forth chain. + +We use one email per five minutes, but something less strict will probably also work well. + +### What you need to set on your auto-response + +The specifics for this will vary depending on what sort of mails you’re sending. This is what we use for auto-reply mails: + +``` +Auto-Submitted: auto-replied +X-Auto-Response-Suppress: All +Precedence: auto_reply +``` + +### Feedback + +You can mail me at [martin@arp242.net][7] or [create a GitHub issue][8] for feedback, questions, etc. + +-------------------------------------------------------------------------------- + +via: https://arp242.net/weblog/autoreply.html + +作者:[Martin Tournoij][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://arp242.net/ +[b]: https://github.com/lujun9972 +[1]: http://tools.ietf.org/html/rfc3834 +[2]: https://msdn.microsoft.com/en-us/library/ee219609(v=EXCHG.80).aspx +[3]: https://tools.ietf.org/html/rfc2919) +[4]: https://support.google.com/mail/answer/6254652?hl=en +[5]: http://www.faqs.org/rfcs/rfc2076.html +[6]: http://www.indyproject.org/index.en.aspx +[7]: mailto:martin@arp242.net +[8]: https://github.com/Carpetsmoker/arp242.net/issues/new diff --git a/sources/tech/20181224 Turn GNOME to Heaven With These 23 GNOME Extensions.md b/sources/tech/20181224 Turn GNOME to Heaven With These 23 GNOME Extensions.md new file mode 100644 index 0000000000..e49778eab7 --- /dev/null +++ b/sources/tech/20181224 Turn GNOME to Heaven With These 23 GNOME Extensions.md @@ -0,0 +1,288 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Turn GNOME to Heaven With These 23 GNOME Extensions) +[#]: via: (https://fosspost.org/tutorials/turn-gnome-to-heaven-with-these-23-gnome-extensions) +[#]: author: (M.Hanny Sabbagh https://fosspost.org/author/mhsabbagh) + +Turn GNOME to Heaven With These 23 GNOME Extensions +====== + +GNOME Shell is one of the most used desktop interfaces on the Linux desktop. 
It’s part of the GNOME project and is considered to be the next generation of the old classic GNOME 2.x interface. GNOME Shell was first released in 2011 carrying a lot of features, including GNOME Shell extensions feature. + +GNOME Extensions are simply extra functionality that you can add to your interface, they can be panel extensions, performance extensions, quick access extensions, productivity extensions or for any other type of usage. They are all free and open source of course; you can install them with a single click **from your web browser** actually. + +### How To Install GNOME Extensions? + +You main way to install GNOME extensions will be via the extensions.gnome.org website. It’s an official platform belonging to GNOME where developers publish their extensions easily so that users can install them in a single click. + +In order to for this to work, you’ll need two things: + + 1. Browser Add-on: You’ll need to install a browser add-on that allows the website to communicate with your local GNOME desktop. You install it from [here for Firefox][1], or [here for Chrome][2] or [here for Opera][3]. + + 2. Native Connector: You still need another part to allow your system to accept installing files locally from your web browser. To install this component, you must install the `chrome-gnome-shell` package. Do not be deceived! Although the package name is containing “chrome”, it also works on Firefox too. To install it on Debian/Ubuntu/Mint run the following command in terminal: + +``` +sudo apt install chrome-gnome-shell +``` + +For Fedora: + +``` +sudo dnf install chrome-gnome-shell +``` + +For Arch: + +``` +sudo pacman -S chrome-gnome-shell +``` + +After you have installed the two components above, you can easily install extensions from the GNOME extensions website. + +### How to Configure GNOME Extensions Settings? + +Many of these extensions do have a settings window that you can access to adjust the preferences of that extension. You must make sure that you have seen its options at least once so that you know what you can possibly do using that extension. + +To do this, you can head to the [installed extensions page on the GNOME website][4], and you’ll see a small options button near every extension that offers one: + +![Screenshot 2018 12 24 20 50 55 41][5] + +Clicking it will display a window for you, from which you can see the possible settings: + +![Screenshot 2018 12 24 20 51 29 43][6] + +Read our article below for our list of recommended extension! + +### General Extensions + +#### 1\. User Themes + +![Screenshot from 2018 12 23 12 30 20 45][7] + +This is the first must-install extension on the GNOME Shell interface, it simply allows you to change the desktop theme to another one using the tweak tool. After installation run gnome-tweak-tool, and you’ll be able to change your desktop theme. + +Installation link: + +#### 2\. Dash to Panel + +![Screenshot from 2018 12 24 21 16 11 47][8] + +Converts the GNOME top bar into a taskbar with many added features, such as favorite icons, moving the clock to right, adding currently opened windows to the panel and many other features. (Make sure not to install this one with some other extensions below which do provide the same functionality). + +Installation link: + +#### 3\. Desktop Icons + +![gnome shell screenshot SSP3UZ 49][9] + +Restores desktop icons back again to GNOME. Still in continues development. + +Installation link: + +#### 4\. 
Dash to Dock + +![Screenshot from 2018 12 24 21 50 07 51][10] + +If you are a fan of the Unity interface, then this extension may help you. It simply adds a dock to the left/right side of the screen, which is very similar to Unity. You can customize that dock however you like. + +Installation link: + +### Productivity Extensions + +#### 5\. Todo.txt + +![screenshot_570_5X5YkZb][11] + +For users who like to maintain productivity, you can use this extension to add a simple To-Do list functionality to your desktop, it will use the [syntax][12] from todotxt.com, you can add unlimited to-dos, mark them as complete or remove them, change their position beside modifying or taking a backup of the todo.txt file manually. + +Installation link: + +#### 6\. Screenshot Tool + +![Screenshot from 2018 12 24 21 04 14 54][13] + +Easily take a screenshot of your desktop or a specific area, with the possibility of also auto-uploading it to imgur.com and auto-saving the link into the clipboard! Very useful extension. + +Installation link: + +#### 7\. OpenWeather + +![screenshot_750][14] + +If you would like to know the weather forecast everyday then this extension will be the right one for you, this extension will simply add an applet to the top panel allowing you to fetch the weather data from openweathermap.org or forecast.io, it supports all the countries and cities around the world. It also shows the wind and humidity. + +Installation link: + +#### 8 & 9\. Search Providers Extensions + +![Screenshot from 2018 12 24 21 29 41 57][15] + +In GNOME, you can add what’s known as “search providers” to the shell, meaning that when you type something in the search box, you’ll be able to automatically search these websites (search providers) using the same text you entered, and see the results directly from your shell! + +YouTube Search Provider: + +Wikipedia Search Provider: + +### Workflow Extensions + +#### 10\. No Title Bar + +![Screenshot 20181224210737 59][16] + +This extension simply removes the title bar from all the maximized windows, and moves it into the top GNOME Panel. In this way, you’ll be able to save a complete horizontal line on your screen, more space for your work! + +Installation Link: + +#### 11\. Applications Menu + +![Screenshot 2018 12 23 13 58 07 61][17] + +This extension simply adds a classic menu to the “activities” menu on the corner. By using it, you will be able to browse the installed applications and categories without the need to use the dash or the search feature, which saves you time. (Check the “No hot corner” extension below to get a better usage). + +Installation link: + +#### 12\. Places Status Indicator + +![screenshot_8_1][18] + +This indicator will put itself near the left corner of the activities button, it allows you to access your home folder and sub-folders easily using a menu, you can also browse the available devices and networks using it. + +Installation link: + +#### 13\. Window List + +![Screenshot from 2016-08-12 08-05-48][19] + +Officially supported by GNOME team, this extension adds a bottom panel to the desktop which allows you to navigate between the open windows easily, it also include a workspace indicator to switch between them. + +Installation link: + +#### 14\. 
Frippery Panel Favorites + +![screenshot_4][20] + +This extensions adds your favorite applications and programs to the panel near the activities button, allowing you to access to it more quickly with just 1 click, you can add or remove applications from it just by modifying your applications in your favorites (the same applications in the left panel when you click the activities button will appear here). + +Installation link: + +#### 15\. TopIcons + +![Screenshot 20181224211009 66][21] + +Those extensions restore the system tray back into the top GNOME panel. Very needed in cases of where applications are very much dependent on the tray icon. + +For GNOME 3.28, installation link: + +For GNOME 3.30, installation link: + +#### 16\. Clipboard Indicator + +![Screenshot 20181224214626 68][22] + +A clipboard manager is simply an applications that manages all the copy & paste operations you do on your system and saves them into a history, so that you can access them later whenever you want. + +This extension does exactly this, plus many other cool features that you can check. + +Installation link: + +### Other Extensions + +#### 17\. Frippery Move Clock + +![screenshot_2][23] + +If you are from those people who like alignment a lot, and dividing the panels into 2 parts only, then you may like this extension, what it simply does is moving the clock from the middle of the GNOME Shell panel to the right near the other applets on the panel, which makes it more organized. + +Installation link: + +#### 18\. No Topleft Hot Corner + +If you don’t like opening the dash whenever you move the mouse to the left corner, you can disable it easily using this extension. You can for sure click the activities button if you want to open the dash view (or via the Super key on the keyboard), but the hot corner will be disabled only. + +Installation link: + +#### 19\. No Annoyance + +Simply removes the “window is ready” notification each time a new window a opened. + +Installation link: + +#### 20\. EasyScreenCast + +![Screenshot 20181224214219 71][24] + +If you would like to quickly take a screencast for your desktop, then this extension may help you. By simply just choosing the type of recording you want, you’ll be able to take screencasts any time. You can also configure advanced options for the extension, such as the pipeline and many other things. + +Installation link: + +#### 21\. Removable drive Menu + +![Screenshot 20181224214131 73][25] + +Adds an icon to the top bar which shows you a list of your currently removable drives. + +Installation link: + +#### 22\. BottomPanel + +![Screenshot 20181224214419 75][26] + +As its title says.. It simply moves the top GNOME bar into the bottom of the screen. + +Installation link: + +#### 23\. Unite + +If you would like one extension only to do most of the above tasks, then Unite extension can help you. It adds panel favorites, removes title bar, moves the clock, allows you to change the location of the panel.. And many other features. All using this extension alone! + +Installation link: + +### Conclusion + +This was our list for some great GNOME Shell extensions to try out. Of course, you don’t (and shouldn’t!) install all of these, but just what you need for your own usage. As you can see, you can convert GNOME into any form you would like, but be careful for RAM usage (because if you use more extensions, the shell will consume very much resources). + +What other GNOME Shell extensions do you use? What do you think of this list? 
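One last practical tip: if an extension ever makes the shell feel sluggish or unstable, you do not have to hunt it down through the browser. GNOME keeps the list of enabled extensions in GSettings, so you can inspect and temporarily disable them from a terminal. The commands below are only a quick sketch and assume a standard GNOME 3.x setup where the `org.gnome.shell` schema is available:

```
# List the UUIDs of the currently enabled extensions
gsettings get org.gnome.shell enabled-extensions

# Temporarily switch off all user extensions (useful when debugging shell slowdowns)
gsettings set org.gnome.shell disable-user-extensions true

# Switch them back on once you have found the culprit
gsettings set org.gnome.shell disable-user-extensions false
```

This is usually the fastest way to confirm whether a particular extension is responsible for high RAM usage.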
+ + +-------------------------------------------------------------------------------- + +via: https://fosspost.org/tutorials/turn-gnome-to-heaven-with-these-23-gnome-extensions + +作者:[M.Hanny Sabbagh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fosspost.org/author/mhsabbagh +[b]: https://github.com/lujun9972 +[1]: https://addons.mozilla.org/en/firefox/addon/gnome-shell-integration/ +[2]: https://chrome.google.com/webstore/detail/gnome-shell-integration/gphhapmejobijbbhgpjhcjognlahblep +[3]: https://addons.opera.com/en/extensions/details/gnome-shell-integration/ +[4]: https://extensions.gnome.org/local/ +[5]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot_2018-12-24_20-50-55.png?resize=850%2C359&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 42) +[6]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot_2018-12-24_20-51-29.png?resize=850%2C462&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 44) +[7]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-23-12-30-20.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 46) +[8]: https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-16-11.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 48) +[9]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/gnome-shell-screenshot-SSP3UZ.png?resize=850%2C492&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 50) +[10]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-50-07.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 52) +[11]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_570_5X5YkZb.png?resize=478%2C474&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 53) +[12]: https://github.com/ginatrapani/todo.txt-cli/wiki/The-Todo.txt-Format +[13]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-04-14.png?resize=715%2C245&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 55) +[14]: https://i2.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_750.jpg?resize=648%2C276&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 56) +[15]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-29-41.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 58) +[16]: https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224210737-380x95.png?resize=380%2C95&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 60) +[17]: https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot_2018-12-23_13-58-07.png?resize=524%2C443&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 62) +[18]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_8_1.png?resize=247%2C620&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 63) +[19]: https://i1.wp.com/fosspost.org/wp-content/uploads/2016/08/Screenshot-from-2016-08-12-08-05-48.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 64) +[20]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_4.png?resize=414%2C39&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 65) +[21]: 
https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224211009-631x133.png?resize=631%2C133&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 67) +[22]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214626-520x443.png?resize=520%2C443&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 69) +[23]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_2.png?resize=388%2C26&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 70) +[24]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214219-327x328.png?resize=327%2C328&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 72) +[25]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214131-366x199.png?resize=366%2C199&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 74) +[26]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214419-830x143.png?resize=830%2C143&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 76) diff --git a/sources/tech/20181226 -Review- Polo File Manager in Linux.md b/sources/tech/20181226 -Review- Polo File Manager in Linux.md new file mode 100644 index 0000000000..cf763850cf --- /dev/null +++ b/sources/tech/20181226 -Review- Polo File Manager in Linux.md @@ -0,0 +1,139 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: ([Review] Polo File Manager in Linux) +[#]: via: (https://itsfoss.com/polo-file-manager/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +[Review] Polo File Manager in Linux +====== + +We are all familiar with file managers. It’s that piece of software that allows you to access your directories, files in a GUI. + +Most of us use the default file manager included with our desktop of choice. The creator of [Polo][1] hopes to get you to use his file manager by adding extra features but hides the good ones behind a paywall. + +![][2]Polo file manager + +### What is Polo file manager? + +According to its [website][1], Polo is an “advanced file manager for Linux written in [Vala][3])”. Further down the page, Polo is referred to as a “modern, light-weight file manager for Linux with support for multiple panes and tabs; support for archives, and much more.” + +It is from the same developer (Tony George) that has given us some of the most popular applications for desktop Linux. [Timeshift backup][4] tool, [Conky Manager][5], [Aptik backup tool][6]s for applications etc. Polo is the latest offering from Tony. + +Note that Polo is still in the beta stage of development which means the first stable version of the software is not out yet. + +### Features of Polo file manager + +![Polo File Manager in Ubuntu Linux][7]Polo File Manager in Ubuntu Linux + +It’s true that Polo has a bunch of neat features that most file managers don’t have. However, the really neat features are only available if you donate more than $10 to the project or sign up for the creator’s Patreon. I will be separating the free features from the features that require the “donation plugin”. + +![Cloud storage support in Polo file manager][8]Support cloud storage + +#### Free Features + + * Multiple Panes – Single-pane, dual-pane (vertical or horizontal split) and quad-pane layouts. 
+ * Multiple Views – List view, Icon view, Tiled view, and Media view + * Device Manager – Devices popup displays the list of connected devices with options to mount and unmount + * Archive Support – Support for browsing archives as normal folders. Supports creation of archives in multiple formats with advanced compression settings. + * Checksum & Hashing – Generate and compare MD5, SHA1, SHA2-256 ad SHA2-512 checksums + * Built-in [Fish shell][9] + * Support for [cloud storage][10], such as Dropbox, Google Drive, Amazon Drive, Amazon S3, Backblaze B2, Hubi, Microsoft OneDrive, OpenStack Swift, and Yandex Disk + * Compare files + * Analyses disk usage + * KVM support + * Connect to FTP, SFTP, SSH and Samba servers + + + +![Dual pane view of Polo file manager][11]Polo in dual pane view + +#### Donation/Paywall Features + + * Write ISO to USB Device + * Image optimization and adjustment tools + * Optimize PNG + * Reduce JPEG Quality + * Remove Color + * Reduce Color + * Boost Color + * Set as Wallpaper + * Rotate + * Resize + * Convert to PNG, JPEG, TIFF, BMP, ICO and more + * PDF tools + * Split + * Merge + * Add and Remove Password + * Reduce File Size + * Uncompress + * Remove Colors + * Rotate + * Optimize + * Video Download via [youtube-dl][12] + + + +### Installing Polo + +Let’s see how to install Polo file manager on various Linux distributions. + +#### 1\. Ubuntu based distributions + +For all Ubuntu based systems (Ubuntu, Linux Mint, Elementary OS, etc), you can install Polo via the [official PPA][13]. Not sure what a PPA is? [Read about PPA here][14]. + +`sudo apt-add-repository -y ppa:teejee2008/ppa` +`sudo apt-get update` +`sudo apt-get install polo-file-manager` + +#### 2\. Arch based distributions + +For all Arch-based systems (Arch, Manjaro, ArchLabs, etc), you can install Polo from the [Arch User Repository][15]. + +#### 3\. Other Distros + +For all other distros, you can download and use the [.RUN installer][16] to setup Polo. + +### Thoughts on Polo + +I’ve installed tons of different distros and never had a problem with the default file manager. (I’ve probably used Thunar and Caja the most.) The free version of Polo doesn’t contain any features that would make me switch. As for the paid features, I already use a number of applications that accomplish the same things. + +One final note: the paid version of Polo is supposed to help fund development of the project. However, [according to GitHub][17], the last commit on Polo was three months ago. That’s quite a big interval of inactivity for a software that is still in the beta stages of development. + +Have you ever used [Polo][1]? If not, what is your favorite Linux file manager? Let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][18]. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/polo-file-manager/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://teejee2008.github.io/polo/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo.jpg?fit=800%2C500&ssl=1 +[3]: https://en.wikipedia.org/wiki/Vala_(programming_language +[4]: https://itsfoss.com/backup-restore-linux-timeshift/ +[5]: https://itsfoss.com/conky-gui-ubuntu-1304/ +[6]: https://github.com/teejee2008/aptik +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo-file-manager-in-ubuntu.jpeg?resize=800%2C450&ssl=1 +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo-coud-options.jpg?fit=800%2C795&ssl=1 +[9]: https://fishshell.com/ +[10]: https://itsfoss.com/cloud-services-linux/ +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo-dual-pane.jpg?fit=800%2C520&ssl=1 +[12]: https://itsfoss.com/download-youtube-linux/ +[13]: https://launchpad.net/~teejee2008/+archive/ubuntu/ppa +[14]: https://itsfoss.com/ppa-guide/ +[15]: https://aur.archlinux.org/packages/polo +[16]: https://github.com/teejee2008/polo/releases +[17]: https://github.com/teejee2008/polo +[18]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20181227 Linux commands for measuring disk activity.md b/sources/tech/20181227 Linux commands for measuring disk activity.md new file mode 100644 index 0000000000..badda327dd --- /dev/null +++ b/sources/tech/20181227 Linux commands for measuring disk activity.md @@ -0,0 +1,252 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux commands for measuring disk activity) +[#]: via: (https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +Linux commands for measuring disk activity +====== +![](https://images.idgesg.net/images/article/2018/12/tape-measure-100782593-large.jpg) +Linux systems provide a handy suite of commands for helping you see how busy your disks are, not just how full. In this post, we examine five very useful commands for looking into disk activity. Two of the commands (iostat and ioping) may have to be added to your system, and these same two commands require you to use sudo privileges, but all five commands provide useful ways to view disk activity. + +Probably one of the easiest and most obvious of these commands is **dstat**. + +### dtstat + +In spite of the fact that the **dstat** command begins with the letter "d", it provides stats on a lot more than just disk activity. If you want to view just disk activity, you can use the **-d** option. As shown below, you’ll get a continuous list of disk read/write measurements until you stop the display with a ^c. Note that after the first report, each subsequent row in the display will report disk activity in the following time interval, and the default is only one second. + +``` +$ dstat -d +-dsk/total- + read writ + 949B 73k + 65k 0 <== first second + 0 24k <== second second + 0 16k + 0 0 ^C +``` + +Including a number after the -d option will set the interval to that number of seconds. 
+ +``` +$ dstat -d 10 +-dsk/total- + read writ + 949B 73k + 65k 81M <== first five seconds + 0 21k <== second five second + 0 9011B ^C +``` + +Notice that the reported data may be shown in a number of different units — e.g., M (megabytes), k (kilobytes), and B (bytes). + +Without options, the dstat command is going to show you a lot of other information as well — indicating how the CPU is spending its time, displaying network and paging activity, and reporting on interrupts and context switches. + +``` +$ dstat +You did not select any stats, using -cdngy by default. +--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system-- +usr sys idl wai stl| read writ| recv send| in out | int csw + 0 0 100 0 0| 949B 73k| 0 0 | 0 3B| 38 65 + 0 0 100 0 0| 0 0 | 218B 932B| 0 0 | 53 68 + 0 1 99 0 0| 0 16k| 64B 468B| 0 0 | 64 81 ^C +``` + +The dstat command provides valuable insights into overall Linux system performance, pretty much replacing a collection of older tools, such as vmstat, netstat, iostat, and ifstat, with a flexible and powerful command that combines their features. For more insight into the other information that the dstat command can provide, refer to this post on the [dstat][1] command. + +### iostat + +The iostat command helps monitor system input/output device loading by observing the time the devices are active in relation to their average transfer rates. It's sometimes used to evaluate the balance of activity between disks. + +``` +$ iostat +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +avg-cpu: %user %nice %system %iowait %steal %idle + 0.07 0.01 0.03 0.05 0.00 99.85 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +loop0 0.00 0.00 0.00 1048 0 +loop1 0.00 0.00 0.00 365 0 +loop2 0.00 0.00 0.00 1056 0 +loop3 0.00 0.01 0.00 16169 0 +loop4 0.00 0.00 0.00 413 0 +loop5 0.00 0.00 0.00 1184 0 +loop6 0.00 0.00 0.00 1062 0 +loop7 0.00 0.00 0.00 5261 0 +sda 1.06 0.89 72.66 2837453 232735080 +sdb 0.00 0.02 0.00 48669 40 +loop8 0.00 0.00 0.00 1053 0 +loop9 0.01 0.01 0.00 18949 0 +loop10 0.00 0.00 0.00 56 0 +loop11 0.00 0.00 0.00 7090 0 +loop12 0.00 0.00 0.00 1160 0 +loop13 0.00 0.00 0.00 108 0 +loop14 0.00 0.00 0.00 3572 0 +loop15 0.01 0.01 0.00 20026 0 +loop16 0.00 0.00 0.00 24 0 +``` + +Of course, all the stats provided on Linux loop devices can clutter the display when you want to focus solely on your disks. The command, however, does provide the **-p** option, which allows you to just look at your disks — as shown in the commands below. + +``` +$ iostat -p sda +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +avg-cpu: %user %nice %system %iowait %steal %idle + 0.07 0.01 0.03 0.05 0.00 99.85 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 1.06 0.89 72.54 2843737 232815784 +sda1 1.04 0.88 72.54 2821733 232815784 +``` + +Note that **tps** refers to transfers per second. + +You can also get iostat to provide repeated reports. In the example below, we're getting measurements every five seconds by using the **-d** option. + +``` +$ iostat -p sda -d 5 +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 1.06 0.89 72.51 2843749 232834048 +sda1 1.04 0.88 72.51 2821745 232834048 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 0.80 0.00 11.20 0 56 +sda1 0.80 0.00 11.20 0 56 +``` + +If you prefer to omit the first (stats since boot) report, add a **-y** to your command. 
+ +``` +$ iostat -p sda -d 5 -y +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 0.80 0.00 11.20 0 56 +sda1 0.80 0.00 11.20 0 56 +``` + +Next, we look at our second disk drive. + +``` +$ iostat -p sdb +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +avg-cpu: %user %nice %system %iowait %steal %idle + 0.07 0.01 0.03 0.05 0.00 99.85 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sdb 0.00 0.02 0.00 48669 40 +sdb2 0.00 0.00 0.00 4861 40 +sdb1 0.00 0.01 0.00 35344 0 +``` + +### iotop + +The **iotop** command is top-like utility for looking at disk I/O. It gathers I/O usage information provided by the Linux kernel so that you can get an idea which processes are most demanding in terms in disk I/O. In the example below, the loop time has been set to 5 seconds. The display will update itself, overwriting the previous output. + +``` +$ sudo iotop -d 5 +Total DISK READ: 0.00 B/s | Total DISK WRITE: 1585.31 B/s +Current DISK READ: 0.00 B/s | Current DISK WRITE: 12.39 K/s + TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND +32492 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.12 % [kworker/u8:1-ev~_power_efficient] + 208 be/3 root 0.00 B/s 1585.31 B/s 0.00 % 0.11 % [jbd2/sda1-8] + 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init splash + 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd] + 3 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_gp] + 4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_par_gp] + 8 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq] +``` + +### ioping + +The **ioping** command is an altogether different type of tool, but it can report disk latency — how long it takes a disk to respond to requests — and can be helpful in diagnosing disk problems. + +``` +$ sudo ioping /dev/sda1 +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup) +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms +^C +--- /dev/sda1 (block device 111.8 GiB) ioping statistics --- +3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s +generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s +min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us +``` + +### atop + +The **atop** command, like **top** provides a lot of information on system performance, including some stats on disk activity. 
+ +``` +ATOP - butterfly 2018/12/26 17:24:19 37d3h13m------ 10ed +PRC | sys 0.03s | user 0.01s | #proc 179 | #zombie 0 | #exit 6 | +CPU | sys 1% | user 0% | irq 0% | idle 199% | wait 0% | +cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu000 w 0% | +CPL | avg1 0.00 | avg5 0.00 | avg15 0.00 | csw 677 | intr 470 | +MEM | tot 5.8G | free 223.4M | cache 4.6G | buff 253.2M | slab 394.4M | +SWP | tot 2.0G | free 2.0G | | vmcom 1.9G | vmlim 4.9G | +DSK | sda | busy 0% | read 0 | write 7 | avio 1.14 ms | +NET | transport | tcpi 4 | tcpo stall 8 | udpi 1 | udpo 0swout 2255 | +NET | network | ipi 10 | ipo 7 | ipfrw 0 | deliv 60.67 ms | +NET | enp0s25 0% | pcki 10 | pcko 8 | si 1 Kbps | so 3 Kbp0.73 ms | + + PID SYSCPU USRCPU VGROW RGROW ST EXC THR S CPUNR CPU CMD 1/1673e4 | + 3357 0.01s 0.00s 672K 824K -- - 1 R 0 0% atop + 3359 0.01s 0.00s 0K 0K NE 0 0 E - 0% + 3361 0.00s 0.01s 0K 0K NE 0 0 E - 0% + 3363 0.01s 0.00s 0K 0K NE 0 0 E - 0% +31357 0.00s 0.00s 0K 0K -- - 1 S 1 0% bash + 3364 0.00s 0.00s 8032K 756K N- - 1 S 1 0% sleep + 2931 0.00s 0.00s 0K 0K -- - 1 I 1 0% kworker/u8:2-e + 3356 0.00s 0.00s 0K 0K -E 0 0 E - 0% + 3360 0.00s 0.00s 0K 0K NE 0 0 E - 0% + 3362 0.00s 0.00s 0K 0K NE 0 0 E - 0% +``` + +If you want to look at _just_ the disk stats, you can easily manage that with a command like this: + +``` +$ atop | grep DSK +$ atop | grep DSK +DSK | sda | busy 0% | read 122901 | write 3318e3 | avio 0.67 ms | +DSK | sdb | busy 0% | read 1168 | write 103 | avio 0.73 ms | +DSK | sda | busy 2% | read 0 | write 92 | avio 2.39 ms | +DSK | sda | busy 2% | read 0 | write 94 | avio 2.47 ms | +DSK | sda | busy 2% | read 0 | write 99 | avio 2.26 ms | +DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms | +DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms | +DSK | sda | busy 2% | read 0 | write 92 | avio 2.43 ms | +^C +``` + +### Being in the know with disk I/O + +Linux provides enough commands to give you good insights into how hard your disks are working and help you focus on potential problems or slowdowns. Hopefully, one of these commands will tell you just what you need to know when it's time to question disk performance. Occasional use of these commands will help ensure that especially busy or slow disks will be obvious when you need to check them. + +Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3291616/linux/examining-linux-system-performance-with-dstat.html +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20181228 The office coffee model of concurrent garbage collection.md b/sources/tech/20181228 The office coffee model of concurrent garbage collection.md new file mode 100644 index 0000000000..825eb4b536 --- /dev/null +++ b/sources/tech/20181228 The office coffee model of concurrent garbage collection.md @@ -0,0 +1,62 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The office coffee model of concurrent garbage collection) +[#]: via: (https://dave.cheney.net/2018/12/28/the-office-coffee-model-of-concurrent-garbage-collection) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +The office coffee model of concurrent garbage collection +====== + +Garbage collection is a field with its own terminology. Concepts like like _mutator_ s, _card marking_ , and _write barriers_ create a hurdle to understanding how garbage collectors work. Here’s an analogy to explain the operations of a concurrent garbage collector using everyday items found in the workplace. + +Before we discuss the operation of _concurrent_ garbage collection, let’s introduce the dramatis personae. In offices around the world you’ll find one of these: + +![][1] + +In the workplace coffee is a natural resource. Employees visit the break room and fill their cups as required. That is, until the point someone goes to fill their cup only to discover the pot is _empty_! + +Immediately the office is thrown into chaos. Meeting are called. Investigations are held. The perpetrator who took the last cup without refilling the machine is found and [reprimanded][2]. Despite many passive aggressive notes the situation keeps happening, thus a committee is formed to decide if a larger coffee pot should be requisitioned. Once the coffee maker is again full office productivity slowly returns to normal. + +This is the model of _stop the world_ garbage collection. The various parts of your program proceed through their day consuming memory, or in our analogy coffee, without a care about the next allocation that needs to be made. Eventually one unlucky attempt to allocate memory is made only to find the heap, or the coffee pot, exhausted, triggering a stop the world garbage collection. + +* * * + +Down the road at a more enlightened workplace, management have adopted a different strategy for mitigating their break room’s coffee problems. Their policy is simple: if the pot is more than half full, fill your cup and be on your way. However, if the pot is less than half full, _before_ filling your cup, you must add a little coffee and a little water to the top of the machine. In this way, by the time the next person arrives for their re-up, the level in the pot will hopefully have risen higher than when the first person found it. 
+ +This policy does come at a cost to office productivity. Rather than filling their cup and hoping for the best, each worker may, depending on the aggregate level of consumption in the office, have to spend a little time refilling the percolator and topping up the water. However, this is time spent by a person who was already heading to the break room. It costs a few extra minutes to maintain the coffee machine, but does not impact their officemates who aren’t in need of caffeination. If several people take a break at the same time, they will all find the level in the pot below the half way mark and all proceed to top up the coffee maker–the more consumption, the greater the rate the machine will be refilled, although this takes a little longer as the break room becomes congested. + +This is the model of _concurrent garbage collection_ as practiced by the Go runtime (and probably other language runtimes with concurrent collectors). Rather than each heap allocation proceeding blindly until the heap is exhausted, leading to a long stop the world pause, concurrent collection algorithms spread the work of walking the heap to find memory which is no longer reachable over the parts of the program allocating memory. In this way the parts of the program which allocate memory each pay a small cost–in terms of latency–for those allocations rather than the whole program being forced to halt when the heap is exhausted. + +Lastly, in keeping with the office coffee model, if the rate of coffee consumption in the office is so high that management discovers that their staff are always in the break room trying desperately to refill the coffee machine, it’s time to invest in a machine with a bigger pot–or in garbage collection terms, grow the heap. + +### Related posts: + + 1. [Visualising the Go garbage collector][3] + 2. [A whirlwind tour of Go’s runtime environment variables][4] + 3. [Why is a Goroutine’s stack infinite ?][5] + 4. [Introducing Go 2.0][6] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2018/12/28/the-office-coffee-model-of-concurrent-garbage-collection + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://dave.cheney.net/wp-content/uploads/2018/12/20181204175004_79256.jpg +[2]: https://www.youtube.com/watch?v=ww86iaucd2A +[3]: https://dave.cheney.net/2014/07/11/visualising-the-go-garbage-collector (Visualising the Go garbage collector) +[4]: https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables (A whirlwind tour of Go’s runtime environment variables) +[5]: https://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-infinite (Why is a Goroutine’s stack infinite ?) 
+[6]: https://dave.cheney.net/2016/10/25/introducing-go-2-0 (Introducing Go 2.0) diff --git a/sources/tech/20181231 Easily Upload Text Snippets To Pastebin-like Services From Commandline.md b/sources/tech/20181231 Easily Upload Text Snippets To Pastebin-like Services From Commandline.md new file mode 100644 index 0000000000..58b072f2fc --- /dev/null +++ b/sources/tech/20181231 Easily Upload Text Snippets To Pastebin-like Services From Commandline.md @@ -0,0 +1,259 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Easily Upload Text Snippets To Pastebin-like Services From Commandline) +[#]: via: (https://www.ostechnix.com/how-to-easily-upload-text-snippets-to-pastebin-like-services-from-commandline/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Easily Upload Text Snippets To Pastebin-like Services From Commandline +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-720x340.png) + +Whenever there is need to share the code snippets online, the first one probably comes to our mind is Pastebin.com, the online text sharing site launched by **Paul Dixon** in 2002. Now, there are several alternative text sharing services available to upload and share text snippets, error logs, config files, a command’s output or any sort of text files. If you happen to share your code often using various Pastebin-like services, I do have a good news for you. Say hello to **Wgetpaste** , a command line BASH utility to easily upload text snippets to pastebin-like services. Using Wgetpaste script, anyone can quickly share text snippets to their friends, colleagues, or whoever wants to see/use/review the code from command line in Unix-like systems. + +### Installing Wgetpaste + +Wgetpaste is available in Arch Linux [Community] repository. To install it on Arch Linux and its variants like Antergos and Manjaro Linux, just run the following command: + +``` +$ sudo pacman -S wgetpaste +``` + +For other distributions, grab the source code from [**Wgetpaste website**][1] and install it manually as described below. + +First download the latest Wgetpaste tar file: + +``` +$ wget http://wgetpaste.zlin.dk/wgetpaste-2.28.tar.bz2 +``` + +Extract it: + +``` +$ tar -xvjf wgetpaste-2.28.tar.bz2 +``` + +It will extract the contents of the tar file in a folder named “wgetpaste-2.28”. + +Go to that directory: + +``` +$ cd wgetpaste-2.28/ +``` + +Copy the wgetpaste binary to your $PATH, for example **/usr/local/bin/**. + +``` +$ sudo cp wgetpaste /usr/local/bin/ +``` + +Finally, make it executable using command: + +``` +$ sudo chmod +x /usr/local/bin/wgetpaste +``` + +### Upload Text Snippets To Pastebin-like Services + +Uploading text snippets using Wgetpaste is trivial. Let me show you a few examples. + +**1\. Upload text files** + +To upload any text file using Wgetpaste, just run: + +``` +$ wgetpaste mytext.txt +``` + +This command will upload the contents of mytext.txt file. + +Sample output: + +``` +Your paste can be seen here: https://paste.pound-python.org/show/eO0aQjTgExP0wT5uWyX7/ +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-1.png) + +You can share the pastebin URL via any medium like mail, message, whatsapp or IRC etc. Whoever has this URL can visit it and view the contents of the text file in a web browser of their choice. 
+ +Here is the contents of mytext.txt file in web browser: + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-2.png) + +You can also use **‘tee’** command to display what is being pasted, instead of uploading them blindly. + +To do so, use **-t** option like below. + +``` +$ wgetpaste -t mytext.txt +``` + +![][3] + +**2. Upload text snippets to different services +** + +By default, Wgetpaste will upload the text snippets to **poundpython** () service. + +To view the list of supported services, run: + +``` +$ wgetpaste -S +``` + +Sample output: + +``` +Services supported: (case sensitive): +Name: | Url: +=============|================= +bpaste | https://bpaste.net/ +codepad | http://codepad.org/ +dpaste | http://dpaste.com/ +gists | https://api.github.com/gists +*poundpython | https://paste.pound-python.org/ +``` + +Here, ***** indicates the default service. + +As you can see, Wgetpaste currently supports five text sharing services. I didn’t try all of them, but I believe all services will work. + +To upload the contents to other services, for example **bpaste.net** , use **-s** option like below. + +``` +$ wgetpaste -s bpaste mytext.txt +Your paste can be seen here: https://bpaste.net/show/5199e127e733 +``` + +**3\. Read input from stdin** + +Wgetpaste can also read the input from stdin. + +``` +$ uname -a | wgetpaste +``` + +This command will upload the output of ‘uname -a’ command. + +**4. Upload the COMMAND and the output of COMMAND together +** + +Sometimes, you may need to paste a COMMAND and its output. To do so, specify the contents of the command within quotes like below. + +``` +$ wgetpaste -c 'ls -l' +``` + +This will upload the command ‘ls -l’ along with its output to the pastebin service. + +This can be useful when you wanted to let others to clearly know what was the exact command you just ran and its output. + +![][4] + +As you can see in the output, I ran ‘ls -l’ command. + +**5. Upload system log files, config files +** + +Like I already said, we can upload any sort of text files, not just an ordinary text file, in your system such as log files, a specific command’s output etc. Say for example, you just updated your Arch Linux box and ended up with a broken system. You ask your colleague how to fix it and s/he wants to read the pacman.log file. Here is the command to upload the contents of the pacman.log file: + +``` +$ wgetpaste /var/log/pacman.log +``` + +Share the pastebin URL with your Colleague, so s/he will review the pacman.log and may help you to fix the problem by reviewing the log file. + +Usually, the contents of log files might be too long and you don’t want to share them all. In such cases, just use **cat** command to read the output and use **tail** command with the **-n** switch to define the number of lines to share and finally pipe the output to Wgetpaste as shown below. + +``` +$ cat /var/log/pacman.log | tail -n 50 | wgetpaste +``` + +The above command will upload only the **last 50 lines** of pacman.log file. + +**6\. Convert input url to tinyurl** + +By default, Wgetpaste will display the full pastebin URL in the output. If you want to convert the input URL to a tinyurl, just use **-u** option. + +``` +$ wgetpaste -u mytext.txt +Your paste can be seen here: http://tinyurl.com/y85d8gtz +``` + +**7. Set language +** + +By default, Wgetpaste will upload text snippets in **plain text**. + +To list languages supported by the specified service, use **-L** option. 
+ +``` +$ wgetpaste -L +``` + +This command will list all languages supported by default service i.e **poundpython** (). + +We can change this using **-l** option. + +``` +$ wgetpaste -l Bash mytext.txt +``` + +**8\. Disable syntax highlighting or html in the output** + +As I mentioned above, the text snippets will be displayed in a specific language format (plaintext, Bash etc.). + +You can, however, change this behaviour to display the raw text snippets using **-r** option. + +``` +$ wgetpaste -r mytext.txt +Your raw paste can be seen here: https://paste.pound-python.org/raw/CUJhQ3jEmr2UvfmD2xCL/ +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-5.png) + +As you can see in the above output, there is no syntax highlighting, no html formatting. Just a raw output. + +**9\. Change Wgetpaste defaults** + +All Defaults values (DEFAULT_{NICK,LANGUAGE,EXPIRATION}[_${SERVICE}] and DEFAULT_SERVICE) can be changed globally in **/etc/wgetpaste.conf** or per user in **~/.wgetpaste.conf** files. These files, however, are not available by default in my system. I guess we need to manually create them. The developer has given the sample contents for both files [**here**][5] and [**here**][6]. Just create these files manually with given sample contents and modify the parameters accordingly to change Wgetpaste defaults. + +**10\. Getting help** + +To display the help section, run: + +``` +$ wgetpaste -h +``` + +And, that’s all for now. Hope this was useful. We will publish more useful content in the days to come. Stay tuned! + +On behalf of **OSTechNix** , I wish you all a very **Happy New Year 2019**. I am grateful to all our readers, contributors, and mentors for supporting us from the beginning of our journey. We couldn’t come this far without your support and guidance. Thank you everyone! Have a great year ahead!! + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-easily-upload-text-snippets-to-pastebin-like-services-from-commandline/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: http://wgetpaste.zlin.dk/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: http://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-3.png +[4]: http://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-4.png +[5]: http://wgetpaste.zlin.dk/zlin.conf +[6]: http://wgetpaste.zlin.dk/wgetpaste.example diff --git a/sources/tech/20181231 Troubleshooting hardware problems in Linux.md b/sources/tech/20181231 Troubleshooting hardware problems in Linux.md new file mode 100644 index 0000000000..dcc89034db --- /dev/null +++ b/sources/tech/20181231 Troubleshooting hardware problems in Linux.md @@ -0,0 +1,141 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Troubleshooting hardware problems in Linux) +[#]: via: (https://opensource.com/article/18/12/troubleshooting-hardware-problems-linux) +[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) + +Troubleshooting hardware problems in Linux +====== +Learn what's causing your Linux hardware to malfunction so you can get it back up and running quickly. 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other11x_cc.png?itok=I_kCDYj0) + +[Linux servers][1] run mission-critical business applications in many different types of infrastructures including physical machines, virtualization, private cloud, public cloud, and hybrid cloud. It's important for Linux sysadmins to understand how to manage Linux hardware infrastructure—including software-defined functionalities related to [networking][2], storage, Linux containers, and multiple tools on Linux servers. + +It can take some time to troubleshoot and solve hardware-related issues on Linux. Even highly experienced sysadmins sometimes spend hours working to solve mysterious hardware and software discrepancies. + +The following tips should make it quicker and easier to troubleshoot hardware in Linux. Many different things can cause problems with Linux hardware; before you start trying to diagnose them, it's smart to learn about the most common issues and where you're most likely to find them. + +### Quick-diagnosing devices, modules, and drivers + +The first step in troubleshooting usually is to display a list of the hardware installed on your Linux server. You can obtain detailed information on the hardware using **ls** commands such as **[lspci][3]** , **[lsblk][4]** , **[lscpu][5]** , and **[lsscsi][6]**. For example, here is output of the **lsblk** command: + +``` +# lsblk +NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT +xvda    202:0    0  50G  0 disk +├─xvda1 202:1    0   1M  0 part +└─xvda2 202:2    0  50G  0 part / +xvdb    202:16   0  20G  0 disk +└─xvdb1 202:17   0  20G  0 part +``` + +If the **ls** commands don't reveal any errors, use init processes (e.g., **systemd** ) to see how the Linux server is working. **systemd** is the most popular init process for bootstrapping user spaces and controlling multiple system processes. For example, here is output of the **systemctl status** command: + +``` +# systemctl status +● bastion.f347.internal +    State: running +     Jobs: 0 queued +   Failed: 0 units +    Since: Wed 2018-11-28 01:29:05 UTC; 2 days ago +   CGroup: / +           ├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 21 +           ├─kubepods.slice +           │ ├─kubepods-pod3881728a_f2af_11e8_af77_06af52f87498.slice +           │ │ ├─docker-88b27385f4bae77bba834fbd60a61d19026bae13d18eb147783ae27819c34967.scope +           │ │ │ └─23860 /opt/bridge/bin/bridge --public-dir=/opt/bridge/static --config=/var/console-config/console-c +           │ │ └─docker-a4433f0d523c7e5bc772ee4db1861e4fa56c4e63a2d48f6bc831458c2ce9fd2d.scope +           │ │   └─23639 /usr/bin/pod +.... +``` + +### Digging into multiple loggings + +**Dmesg** allows you to figure out errors and warnings in the kernel's latest messages. For example, here is output of the **dmesg | more** command: + +``` +# dmesg | more +.... +[ 1539.027419] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready +[ 1539.042726] IPv6: ADDRCONF(NETDEV_UP): veth61f37018: link is not ready +[ 1539.048706] IPv6: ADDRCONF(NETDEV_CHANGE): veth61f37018: link becomes ready +[ 1539.055034] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready +[ 1539.098550] device veth61f37018 entered promiscuous mode +[ 1541.450207] device veth61f37018 left promiscuous mode +[ 1542.493266] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue) +[ 9965.292788] SELinux: mount invalid.  
Same superblock, different security settings for (dev mqueue, type mqueue) +[ 9965.449401] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready +[ 9965.462738] IPv6: ADDRCONF(NETDEV_UP): vetheacc333c: link is not ready +[ 9965.468942] IPv6: ADDRCONF(NETDEV_CHANGE): vetheacc333c: link becomes ready +.... +``` + +You can also look at all Linux system logs in the **/var/log/messages** file, which is where you'll find errors related to specific issues. It's worthwhile to monitor the messages via the **tail** command in real time when you make modifications to your hardware, such as mounting an extra disk or adding an Ethernet network interface. For example, here is output of the **tail -f /var/log/messages** command: + +``` +# tail -f /var/log/messages +Dec  1 13:20:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain in-addr.arpa +Dec  1 13:20:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain cluster.local +Dec  1 13:21:03 bastion dnsmasq[30201]: setting upstream servers from DBus +Dec  1 13:21:03 bastion dnsmasq[30201]: using nameserver 192.199.0.2#53 +Dec  1 13:21:03 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain in-addr.arpa +Dec  1 13:21:03 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain cluster.local +Dec  1 13:21:33 bastion dnsmasq[30201]: setting upstream servers from DBus +Dec  1 13:21:33 bastion dnsmasq[30201]: using nameserver 192.199.0.2#53 +Dec  1 13:21:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain in-addr.arpa +Dec  1 13:21:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain cluster.local +``` + +### Analyzing networking functions + +You may have hundreds of thousands of cloud-native applications to serve business services in a complex networking environment; these may include virtualization, multiple cloud, and hybrid cloud. This means you should analyze whether networking connectivity is working correctly as part of your troubleshooting. Useful commands to figure out networking functions in the Linux server include **ip addr** , **traceroute** , **nslookup** , **dig** , and **ping** , among others. For example, here is output of the **ip addr show** command: + +``` +# ip addr show +1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 +    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 +    inet 127.0.0.1/8 scope host lo +       valid_lft forever preferred_lft forever +    inet6 ::1/128 scope host +       valid_lft forever preferred_lft forever +2: eth0: mtu 9001 qdisc mq state UP group default qlen 1000 +    link/ether 06:af:52:f8:74:98 brd ff:ff:ff:ff:ff:ff +    inet 192.199.0.169/24 brd 192.199.0.255 scope global noprefixroute dynamic eth0 +       valid_lft 3096sec preferred_lft 3096sec +    inet6 fe80::4af:52ff:fef8:7498/64 scope link +       valid_lft forever preferred_lft forever +3: docker0: mtu 1500 qdisc noqueue state DOWN group default +    link/ether 02:42:67:fb:1a:a2 brd ff:ff:ff:ff:ff:ff +    inet 172.17.0.1/16 scope global docker0 +       valid_lft forever preferred_lft forever +    inet6 fe80::42:67ff:fefb:1aa2/64 scope link +       valid_lft forever preferred_lft forever +.... +``` + +### In conclusion + +Troubleshooting Linux hardware requires considerable knowledge, including how to use powerful command-line tools and figure out system loggings. You should also know how to diagnose the kernel space, which is where you can find the root cause of many hardware problems. 
Keep in mind that hardware issues in Linux may come from many different sources, including devices, modules, drivers, BIOS, networking, and even plain old hardware malfunctions. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/troubleshooting-hardware-problems-linux + +作者:[Daniel Oh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/daniel-oh +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/18/5/what-linux-server +[2]: https://opensource.com/article/18/11/intro-software-defined-networking +[3]: https://linux.die.net/man/8/lspci +[4]: https://linux.die.net/man/8/lsblk +[5]: https://linux.die.net/man/1/lscpu +[6]: https://linux.die.net/man/8/lsscsi diff --git a/sources/tech/20190102 Using Yarn on Ubuntu and Other Linux Distributions.md b/sources/tech/20190102 Using Yarn on Ubuntu and Other Linux Distributions.md new file mode 100644 index 0000000000..71555454f5 --- /dev/null +++ b/sources/tech/20190102 Using Yarn on Ubuntu and Other Linux Distributions.md @@ -0,0 +1,265 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using Yarn on Ubuntu and Other Linux Distributions) +[#]: via: (https://itsfoss.com/install-yarn-ubuntu) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Using Yarn on Ubuntu and Other Linux Distributions +====== + +**This quick tutorial shows you the official way of installing Yarn package manager on Ubuntu and Debian Linux. You’ll also learn some basic Yarn commands and the steps to remove Yarn completely.** + +[Yarn][1] is an open source JavaScript package manager developed by Facebook. It is an alternative or should I say improvement to the popular npm package manager. [Facebook developers’ team][2] created Yarn to overcome the shortcomings of [npm][3]. Facebook claims that Yarn is faster, reliable and more secure than npm. + +Like npm, Yarn provides you a way to automate the process of installing, updating, configuring, and removing packages retrieved from a global registry. + +The advantage of Yarn is that it is faster as it caches every package it downloads so it doesn’t need to download it again. It also parallelizes operations to maximize resource utilization. Yarn also uses [checksums to verify the integrity][4] of every installed package before its code is executed. Yarn also guarantees that an install that worked on one system will work exactly the same way on any other system. + +If you are [using nodejs on Ubuntu][5], probably you already have npm installed on your system. In that case, you can use npm to install Yarn globally in the following manner: + +``` +sudo npm install yarn -g +``` + +However, I would recommend using the official way to install Yarn on Ubuntu/Debian. + +### Installing Yarn on Ubuntu and Debian [The Official Way] + +![Yarn JS][6] + +The instructions mentioned here should be applicable to all versions of Ubuntu such as Ubuntu 18.04, 16.04 etc. The same set of instructions are also valid for Debian and other Debian based distributions. + +Since the tutorial uses Curl to add the GPG key of Yarn project, it would be a good idea to verify whether you have Curl installed already or not. 
+ +``` +sudo apt install curl +``` + +The above command will install Curl if it wasn’t installed already. Now that you have curl, you can use it to add the GPG key of Yarn project in the following fashion: + +``` +curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add - +``` + +After that, add the repository to your sources list so that you can easily upgrade the Yarn package in future with the rest of the system updates: + +``` +sudo sh -c 'echo "deb https://dl.yarnpkg.com/debian/ stable main" >> /etc/apt/sources.list.d/yarn.list' +``` + +You are set to go now. [Update Ubuntu][7] or Debian system to refresh the list of available packages and then install yarn: + +``` +sudo apt update +sudo apt install yarn +``` + +This will install Yarn along with nodejs. Once the process completes, verify that Yarn has been installed successfully. You can do that by checking the Yarn version. + +``` +yarn --version +``` + +For me, it showed an output like this: + +``` +yarn --version +1.12.3 +``` + +This means that I have Yarn version 1.12.3 installed on my system. + +### Using Yarn + +I presume that you have some basic understandings of JavaScript programming and how dependencies work. I am not going to go in details here. I’ll show you some of the basic Yarn commands that will help you getting started with it. + +#### Creating a new project with Yarn + +Like npm, Yarn also works with a package.json file. This is where you add your dependencies. All the packages of the dependencies are cached in the node_modules directory in the root directory of your project. + +In the root directory of your project, run the following command to generate a fresh package.json file: + +It will ask you a number of questions. You can skip the questions r go with the defaults by pressing enter. + +``` +yarn init +yarn init v1.12.3 +question name (test_yarn): test_yarn_proect +question version (1.0.0): 0.1 +question description: Test Yarn +question entry point (index.js): +question repository url: +question author: abhishek +question license (MIT): +question private: +success Saved package.json +Done in 82.42s. +``` + +With this, you get a package.json file of this sort: + +``` +{ + "name": "test_yarn_proect", + "version": "0.1", + "description": "Test Yarn", + "main": "index.js", + "author": "abhishek", + "license": "MIT" +} +``` + +Now that you have the package.json, you can either manually edit it to add or remove package dependencies or use Yarn commands (preferred). + +#### Adding dependencies with Yarn + +You can add a dependency on a certain package in the following fashion: + +``` +yarn add +``` + +For example, if you want to use [Lodash][8] in your project, you can add it using Yarn like this: + +``` +yarn add lodash +yarn add v1.12.3 +info No lockfile found. +[1/4] Resolving packages… +[2/4] Fetching packages… +[3/4] Linking dependencies… +[4/4] Building fresh packages… +success Saved lockfile. +success Saved 1 new dependency. +info Direct dependencies +└─ [email protected] +info All dependencies +└─ [email protected] +Done in 2.67s. +``` + +And you can see that this dependency has been added automatically in the package.json file: + +``` +{ + "name": "test_yarn_proect", + "version": "0.1", + "description": "Test Yarn", + "main": "index.js", + "author": "abhishek", + "license": "MIT", + "dependencies": { + "lodash": "^4.17.11" + } +} +``` + +By default, Yarn will add the latest version of a package in the dependency. If you want to use a specific version, you may specify it while adding. 
As always, you can also update the package.json file manually.

#### Upgrading dependencies with Yarn

You can upgrade a particular dependency to its latest version with the following command:

```
yarn upgrade <package_name>
```

It will see if the package in question has a newer version and will update it accordingly.

You can also change the version of an already added dependency with `yarn upgrade <package_name>@<version_or_tag>`.

You can also upgrade all the dependencies of your project to their latest version with one single command:

```
yarn upgrade
```

It will check the versions of all the dependencies and will update them if there are any newer versions.

#### Removing dependencies with Yarn

You can remove a package from the dependencies of your project in this way:

```
yarn remove <package_name>
```

#### Install all project dependencies

If you made any changes to the package.json file, you should run either

```
yarn
```

or

```
yarn install
```

to install all the dependencies at once.

### How to remove Yarn from Ubuntu or Debian

I’ll complete this tutorial by mentioning the steps to remove Yarn from your system if you used the above steps to install it. If you ever realize that you don’t need Yarn anymore, you will be able to remove it.

Use the following command to remove Yarn and its dependencies.

```
sudo apt purge yarn
```

You should also remove the Yarn repository from the repository list:

```
sudo rm /etc/apt/sources.list.d/yarn.list
```

The optional next step is to remove the GPG key you had added to the trusted keys. But for that, you need to know the key. You can get that by listing the trusted keys with the `apt-key list` command, which prints something like this:

```
Warning: apt-key output should not be parsed (stdout is not a terminal)
pub   rsa4096 2016-10-05 [SC]
      72EC F46A 56B4 AD39 C907 BBB7 1646 B01B 86E5 0310
uid           [ unknown] Yarn Packaging <yarn@dan.cx>
sub   rsa4096 2016-10-05 [E]
sub   rsa4096 2019-01-02 [S] [expires: 2020-02-02]
```

The key here is the last 8 characters of the GPG key’s fingerprint in the pub section.

So, in my case, the key is 86E50310 and I’ll remove it using this command:

```
sudo apt-key del 86E50310
```

You’ll see an OK in the output and the GPG key of the Yarn package will be removed from the list of GPG keys your system trusts.

I hope this tutorial helped you to install Yarn on Ubuntu, Debian, Linux Mint, elementary OS etc. I provided some basic Yarn commands to get you started along with complete steps to remove Yarn from your system.

I hope you liked this tutorial and if you have any questions or suggestions, please feel free to leave a comment below.
+ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-yarn-ubuntu + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://yarnpkg.com/lang/en/ +[2]: https://code.fb.com/ +[3]: https://www.npmjs.com/ +[4]: https://itsfoss.com/checksum-tools-guide-linux/ +[5]: https://itsfoss.com/install-nodejs-ubuntu/ +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/yarn-js-ubuntu-debian.jpeg?resize=800%2C450&ssl=1 +[7]: https://itsfoss.com/update-ubuntu/ +[8]: https://lodash.com/ diff --git a/sources/tech/20190104 Search, Study And Practice Linux Commands On The Fly.md b/sources/tech/20190104 Search, Study And Practice Linux Commands On The Fly.md new file mode 100644 index 0000000000..fa92d3450a --- /dev/null +++ b/sources/tech/20190104 Search, Study And Practice Linux Commands On The Fly.md @@ -0,0 +1,223 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Search, Study And Practice Linux Commands On The Fly!) +[#]: via: (https://www.ostechnix.com/search-study-and-practice-linux-commands-on-the-fly/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Search, Study And Practice Linux Commands On The Fly! +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/tldr-720x340.png) + +The title may look like sketchy and click bait. Allow me to explain what I am about to explain in this tutorial. Let us say you want to download an archive file, extract it and move the file from one location to another from command line. As per the above scenario, we may need at least three Linux commands, one for downloading the file, one for extracting the downloaded file and one for moving the file. If you’re intermediate or advanced Linux user, you could do this easily with an one-liner command or a script in few seconds/minutes. But, if you are a noob who don’t know much about Linux commands, you might need little help. + +Of course, a quick google search may yield many results. Or, you could use [**man pages**][1]. But some man pages are really long, comprehensive and lack in useful example. You might need to scroll down for quite a long time when you’re looking for a particular information on the specific flags/options. Thankfully, there are some [**good alternatives to man pages**][2], which are focused on mostly practical commands. One such good alternative is **TLDR pages**. Using TLDR pages, we can quickly and easily learn a Linux command with practical examples. To access the TLDR pages, we require a TLDR client. There are many clients available. Today, we are going to learn about one such client named **“Tldr++”**. + +Tldr++ is a fast and interactive tldr client written with **Go** programming language. Unlike the other Tldr clients, it is fully interactive. That means, you can pick a command, read all examples , and immediately run any command without having to retype or copy/paste each command in the Terminal. Still don’t get it? No problem. Read on to learn and practice Linux commands on the fly. + +### Install Tldr++ + +Installing Tldr++ is very simple. Download tldr++ latest version from the [**releases page**][3]. Extract it and move the tldr++ binary to your $PATH. 
+ +``` +$ wget https://github.com/isacikgoz/tldr/releases/download/v0.5.0/tldr_0.5.0_linux_amd64.tar.gz + +$ tar xzf tldr_0.5.0_linux_amd64.tar.gz + +$ sudo mv tldr /usr/local/bin + +$ sudo chmod +x /usr/local/bin/tldr +``` + +Now, run ‘tldr’ binary to populate the tldr pages in your local system. + +``` +$ tldr +``` + +Sample output: + +``` +Enumerating objects: 6, done. +Counting objects: 100% (6/6), done. +Compressing objects: 100% (6/6), done. +Total 18157 (delta 0), reused 3 (delta 0), pack-reused 18151 +Successfully cloned into: /home/sk/.local/share/tldr +``` + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/tldr-2.png) + +Tldr++ is available in AUR. If you’re on Arch Linux, you can install it using any AUR helper, for example [**YaY**][4]. Make sure you have removed any existing tldr client from your system and run the following command to install tldr++. + +``` +$ yay -S tldr++ +``` + +Alternatively, you can build from source as described below. Since Tldr++ is written using Go language, make sure you have installed it on your Linux box. If it isn’t installed yet, refer the following guide. + ++ [How To Install Go Language In Linux](https://www.ostechnix.com/install-go-language-linux/) + +After installing Go, run the following command to install Tldr++. + +``` +$ go get -u github.com/isacikgoz/tldr +``` + +This command will download the contents of tldr repository in a folder named **‘go’** in the current working directory. + +Now run the ‘tldr’ binary to populate all tldr pages in your local system using command: + +``` +$ go/bin/tldr +``` + +Sample output: + +![][6] + +Finally, copy the tldr binary to your PATH. + +``` +$ sudo mv tldr /usr/local/bin +``` + +It is time to see some examples. + +### Tldr++ Usage + +Type ‘tldr’ command without any options to display all command examples in alphabetical order. + +![][7] + +Use the **UP/DOWN arrows** to navigate through the commands, type any letters to search or type a command name to view the examples of that respective command. Press **?** for more and **Ctrl+c** to return/exit. + +To display the example commands of a specific command, for example **apt** , simply do: + +``` +$ tldr apt +``` + +![][8] + +Choose any example command from the list and hit ENTER. You will see a *** symbol** before the selected command. For example, I choose the first command i.e ‘sudo apt update’. Now, it will ask you whether to continue or not. If the command is correct, just type ‘y’ to continue and type your sudo password to run the selected command. + +![][9] + +See? You don’t need to copy/paste or type the actual command in the Terminal. Just choose it from the list and run on the fly! + +There are hundreds of Linux command examples are available in Tldr pages. You can choose one or two commands per day and learn them thoroughly. And keep this practice everyday to learn as much as you can. + +### Learn And Practice Linux Commands On The Fly Using Tldr++ + +Now think of the scenario that I mentioned in the first paragraph. You want to download a file, extract it and move it to different location and make it executable. Let us see how to do it interactively using Tldr++ client. + +**Step 1 – Download a file from Internet** + +To download a file from command line, we mostly use **‘curl’** or **‘wget’** commands. Let me use ‘wget’ to download the file. To open tldr page of wget command, just run: + +``` +$ tldr wget +``` + +Here is the examples of wget command. 
+ +![](https://www.ostechnix.com/wp-content/uploads/2019/01/wget-tldr.png) + +You can use **UP/DOWN** arrows to go through the list of commands. Once you choose the command of your choice, press ENTER. Here I chose the first command. + +Now, enter the path of the file to download. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/tldr-3.png) + +You will then be asked to confirm if it is the correct command or not. If the command is correct, simply type ‘yes’ or ‘y’ to start downloading the file. + +![][10] + +We have downloaded the file. Let us go ahead and extract this file. + +**Step 2 – Extract downloaded archive** + +We downloaded the **tar.gz** file. So I am going to open the ‘tar’ tldr page. + +``` +$ tldr tar +``` + +You will see the list of example commands. Go through the examples and find which command is suitable to extract tar.gz(gzipped archive) file and hit ENTER key. In our case, it is the third command. + +![][11] + +Now, you will be prompted to enter the path of the tar.gz file. Just type the path and hit ENTER key. Tldr++ supports smart file suggestions. That means it will suggest the file name automatically as you type. Just press TAB key for auto-completion. + +![][12] + +If you downloaded the file to some other location, just type the full path, for example **/home/sk/Downloads/tldr_0.5.0_linux_amd64.tar.gz.** + +Once you enter the path of the file to extract, press ENTER and then, type ‘y’ to confirm. + +![][13] + +**Step 3 – Move file from one location to another** + +We extracted the archive. Now we need to move the file to another location. To move the files from one location to another, we use ‘mv’ command. So, let me open the tldr page for mv command. + +``` +$ tldr mv +``` + +Choose the correct command to move the files from one location to another. In our case, the first command will work, so let me choose it. + +![][14] + +Type the path of the file that you want to move and enter the destination path and hit ENTER key. + +![][15] + +**Note:** Type **y!** or **yes!** to run command with **sudo** privileges. + +As you see in the above screenshot, I moved the file named **‘tldr’** to **‘/usr/local/bin/’** location. + +For more details, refer the project’s GitHub page given at the end. + + +### Conclusion + +Don’t get me wrong. **Man pages are great!** There is no doubt about it. But, as I already said, many man pages are comprehensive and doesn’t have useful examples. There is no way I could memorize all lengthy commands with tricky flags. Some times I spent much time on man pages and remained clueless. The Tldr pages helped me to find what I need within few minutes. Also, we use some commands once in a while and then we forget them completely. Tldr pages on the other hand actually helps when it comes to using commands we rarely use. Tldr++ client makes this task much easier with smart user interaction. Give it a go and let us know what you think about this tool in the comment section below. + +And, that’s all. More good stuffs to come. Stay tuned! + +Good luck! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/search-study-and-practice-linux-commands-on-the-fly/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/learn-use-man-pages-efficiently/ +[2]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/ +[3]: https://github.com/isacikgoz/tldr/releases +[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[5]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[6]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-1.png +[7]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-11.png +[8]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-12.png +[9]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-13.png +[10]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-4.png +[11]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-6.png +[12]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-7.png +[13]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-8.png +[14]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-9.png +[15]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-10.png diff --git a/sources/tech/20190104 Three Ways To Reset And Change Forgotten Root Password on RHEL 7-CentOS 7 Systems.md b/sources/tech/20190104 Three Ways To Reset And Change Forgotten Root Password on RHEL 7-CentOS 7 Systems.md new file mode 100644 index 0000000000..6619cfe65a --- /dev/null +++ b/sources/tech/20190104 Three Ways To Reset And Change Forgotten Root Password on RHEL 7-CentOS 7 Systems.md @@ -0,0 +1,254 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Three Ways To Reset And Change Forgotten Root Password on RHEL 7/CentOS 7 Systems) +[#]: via: (https://www.2daygeek.com/linux-reset-change-forgotten-root-password-in-rhel-7-centos-7/) +[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/) + +Three Ways To Reset And Change Forgotten Root Password on RHEL 7/CentOS 7 Systems +====== + +If you are forget to remember your root password for RHEL 7 and CentOS 7 systems and want to reset the forgotten root password? + +If so, don’t worry we are here to help you out on this. + +Navigate to the following link if you want to **[reset forgotten root password on RHEL 6/CentOS 6][1]**. + +This is generally happens when you use different password in vast environment or if you are not maintaining the proper inventory. + +Whatever it is. No issues, we will help you through this article. + +It can be done in many ways but we are going to show you the best three methods which we tried many times for our clients. + +In Linux servers there are three different users are available. These are, Normal User, System User and Super User. + +As everyone knows the Root user is known as super user in Linux and Administrator is in Windows. + +We can’t perform any major activity without root password so, make sure you should have the right root password when you perform any major tasks. + +If you don’t know or don’t have it, try to reset using one of the below method. 
+ + * Reset Forgotten Root Password By Booting into Single User Mode using `rd.break` + * Reset Forgotten Root Password By Booting into Single User Mode using `init=/bin/bash` + * Reset Forgotten Root Password By Booting into Rescue Mode + + + +### Method-1: Reset Forgotten Root Password By Booting into Single User Mode + +Just follow the below procedure to reset the forgotten root password in RHEL 7/CentOS 7 systems. + +To do so, reboot your system and follow the instructions carefully. + +**`Step-1:`** Reboot your system and interrupt at the boot menu by hitting **`e`** key to modify the kernel arguments. +![][3] + +**`Step-2:`** In the GRUB options, find `linux16` word and add the `rd.break` word in the end of the file then press `Ctrl+x` or `F10` to boot into single user mode. +![][4] + +**`Step-3:`** At this point of time, your root filesystem will be mounted in Read only (RO) mode to /sysroot. Run the below command to confirm this. + +``` +# mount | grep root +``` + +![][5] + +**`Step-4:`** Based on the above output, i can say that i’m in single user mode and my root file system is mounted in read only mode. + +It won’t allow you to make any changes on your system until you mount the root filesystem with Read and write (RW) mode to /sysroot. To do so, use the following command. + +``` +# mount -o remount,rw /sysroot +``` + +![][6] + +**`Step-5:`** Currently your file systems are mounted as a temporary partition. Now, your command prompt shows **switch_root:/#**. + +Run the following command to get into a chroot jail so that /sysroot is used as the root of the file system. + +``` +# chroot /sysroot +``` + +![][7] + +**`Step-6:`** Now you can able to reset the root password with help of `passwd` command. + +``` +# echo "CentOS7$#123" | passwd --stdin root +``` + +![][8] + +**`Step-7:`** By default CentOS 7/RHEL 7 use SELinux in enforcing mode, so create a following hidden file which will automatically perform a relabel of all files on next boot. + +It allow us to fix the context of the **/etc/shadow** file. + +``` +# touch /.autorelabel +``` + +![][9] + +**`Step-8:`** Issue `exit` twice to exit from the chroot jail environment and reboot the system. +![][10] + +**`Step-9:`** Now you can login to your system with your new password. +![][11] + +### Method-2: Reset Forgotten Root Password By Booting into Single User Mode + +Alternatively we can use the below procedure to reset the forgotten root password in RHEL 7/CentOS 7 systems. + +**`Step-1:`** Reboot your system and interrupt at the boot menu by hitting **`e`** key to modify the kernel arguments. +![][3] + +**`Step-2:`** In the GRUB options, find `rhgb quiet` word and replace with the `init=/bin/bash` or `init=/bin/sh` word then press `Ctrl+x` or `F10` to boot into single user mode. + +Screenshot for **`init=/bin/bash`**. +![][12] + +Screenshot for **`init=/bin/sh`**. +![][13] + +**`Step-3:`** At this point of time, your root system will be mounted in Read only mode to /. Run the below command to confirm this. + +``` +# mount | grep root +``` + +![][14] + +**`Step-4:`** Based on the above ouput, i can say that i’m in single user mode and my root file system is mounted in read only (RO) mode. + +It won’t allow you to make any changes on your system until you mount the root file system with Read and write (RW) mode. To do so, use the following command. + +``` +# mount -o remount,rw / +``` + +![][15] + +**`Step-5:`** Now you can able to reset the root password with help of `passwd` command. 
+ +``` +# echo "RHEL7$#123" | passwd --stdin root +``` + +![][16] + +**`Step-6:`** By default CentOS 7/RHEL 7 use SELinux in enforcing mode, so create a following hidden file which will automatically perform a relabel of all files on next boot. + +It allow us to fix the context of the **/etc/shadow** file. + +``` +# touch /.autorelabel +``` + +![][17] + +**`Step-7:`** Finally `Reboot` the system. + +``` +# exec /sbin/init 6 +``` + +![][18] + +**`Step-9:`** Now you can login to your system with your new password. +![][11] + +### Method-3: Reset Forgotten Root Password By Booting into Rescue Mode + +Alternatively, we can reset the forgotten Root password for RHEL 7 and CentOS 7 systems using Rescue mode. + +**`Step-1:`** Insert the bootable media through USB or DVD drive which is compatible for you and reboot your system. It will take to you to the below screen. + +Hit `Troubleshooting` to launch the `Rescue` mode. +![][19] + +**`Step-2:`** Choose `Rescue a CentOS system` and hit `Enter` button. +![][20] + +**`Step-3:`** Here choose `1` and the rescue environment will now attempt to find your Linux installation and mount it under the directory `/mnt/sysimage`. +![][21] + +**`Step-4:`** Simple hit `Enter` to get a shell. +![][22] + +**`Step-5:`** Run the following command to get into a chroot jail so that /mnt/sysimage is used as the root of the file system. + +``` +# chroot /mnt/sysimage +``` + +![][23] + +**`Step-6:`** Now you can able to reset the root password with help of **passwd** command. + +``` +# echo "RHEL7$#123" | passwd --stdin root +``` + +![][24] + +**`Step-7:`** By default CentOS 7/RHEL 7 use SELinux in enforcing mode, so create a following hidden file which will automatically perform a relabel of all files on next boot. +It allow us to fix the context of the /etc/shadow file. + +``` +# touch /.autorelabel +``` + +![][25] + +**`Step-8:`** Remove the bootable media then initiate the reboot. + +**`Step-9:`** Issue `exit` twice to exit from the chroot jail environment and reboot the system. +![][26] + +**`Step-10:`** Now you can login to your system with your new password. 
+![][11] + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-reset-change-forgotten-root-password-in-rhel-7-centos-7/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/linux-reset-change-forgotten-root-password-in-rhel-6-centos-6/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-2.png +[4]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-3.png +[5]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-5.png +[6]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-6.png +[7]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-8.png +[8]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-10.png +[9]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-10a.png +[10]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-11.png +[11]: https://www.2daygeek.com/wp-content/uploads/2018/12/reset-forgotten-root-password-on-rhel-7-centos-7-12.png +[12]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-1.png +[13]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-1a.png +[14]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-3.png +[15]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-4.png +[16]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-5.png +[17]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-6.png +[18]: https://www.2daygeek.com/wp-content/uploads/2018/12/method-reset-forgotten-root-password-on-rhel-7-centos-7-7.png +[19]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-1.png +[20]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-2.png +[21]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-3.png +[22]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-4.png +[23]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-5.png +[24]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-6.png +[25]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-7.png +[26]: https://www.2daygeek.com/wp-content/uploads/2018/12/rescue-reset-forgotten-root-password-on-rhel-7-centos-7-8.png diff --git a/sources/tech/20190105 Setting 
up an email server, part 1- The Forwarder.md b/sources/tech/20190105 Setting up an email server, part 1- The Forwarder.md new file mode 100644 index 0000000000..c6c520e339 --- /dev/null +++ b/sources/tech/20190105 Setting up an email server, part 1- The Forwarder.md @@ -0,0 +1,224 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Setting up an email server, part 1: The Forwarder) +[#]: via: (https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/) +[#]: author: (Julian Andres Klode https://blog.jak-linux.org/) + +Setting up an email server, part 1: The Forwarder +====== + +This week, I’ve been working on rolling out mail services on my server. I started working on a mail server setup at the end of November, while the server was not yet in use, but only for about two days, and then let it rest. + +As my old shared hosting account expired on January 1, I had to move mail forwarding duties over to the new server. Yes forwarding - I do plan to move hosting the actual email too, but at the moment it’s “just” forwarding to gmail. + +### The Software + +As you might know from the web server story, my server runs on Ubuntu 18.04. I set up a mail server on this system using + + * [Postfix][1] for SMTP duties (warning, they oddly do not have an https page) + * [rspamd][2] for spam filtering, and signing DKIM / ARC + * [bind9][3] for DNS resolving + * [postsrsd][4] for SRS + + + +You might wonder why bind9 is in there. It turns out that DNS blacklists used by spam filters block the caching DNS servers you usually use, so you have to use your own recursive DNS server. Ubuntu offers you the choice between bind9 and dnsmasq in main, and it seems like bind9 is more appropriate here than dnsmasq. + +### Setting up postfix + +Most of the postfix configuration is fairly standard. So, let’s skip TLS configuration and outbound SMTP setups (this is email, and while they support TLS, it’s all optional, so let’s not bother that much here). + +The most important part is restrictions in `main.cf`. + +First of all, relay restrictions prevent us from relaying emails to weird domains: + +``` +# Relay Restrictions +smtpd_relay_restrictions = reject_non_fqdn_recipient reject_unknown_recipient_domain permit_mynetworks permit_sasl_authenticated defer_unauth_destination +``` + +We also only accept mails from hosts that know their own full qualified name: + +``` +# Helo restrictions (hosts not having a proper fqdn) +smtpd_helo_required = yes +smtpd_helo_restrictions = permit_mynetworks reject_invalid_helo_hostname reject_non_fqdn_helo_hostname reject_unknown_helo_hostname +``` + +We also don’t like clients (other servers) that send data too early, or have an unknown hostname: + +``` +smtpd_data_restrictions = reject_unauth_pipelining +smtpd_client_restrictions = permit_mynetworks reject_unknown_client_hostname +``` + +I also set up a custom apparmor profile that’s pretty lose, I plan to migrate to the one in the apparmor git eventually but it needs more testing and some cleanup. 
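Whatever restrictions you end up with, it is worth sanity-checking the configuration before moving on. A rough check, assuming the stock Ubuntu 18.04 postfix packaging and run as root (or via sudo), could be:

```
# show only the settings that differ from the compiled-in defaults
postconf -n

# warn about obvious permission or configuration problems
postfix check

# pick up the changes without a full restart
systemctl reload postfix
```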
+ +### Sender rewriting scheme + +For SRS using postsrsd, we define the `SRS_DOMAIN` in `/etc/default/postsrsd` and then configure postfix to talk to it: + +``` +# Handle SRS for forwarding +recipient_canonical_maps = tcp:localhost:10002 +recipient_canonical_classes= envelope_recipient,header_recipient + +sender_canonical_maps = tcp:localhost:10001 +sender_canonical_classes = envelope_sender +``` + +This has a minor issue that it also rewrites the `Return-Path` when it delivers emails locally, but as I am only forwarding, I’m worrying about that later. + +### rspamd basics + +rspamd is a great spam filtering system. It uses a small core written in C and a bunch of Lua plugins, such as: + + * IP score, which keeps track of how good a specific server was in the past + * Replies, which can check whether an email is a reply to another one + * DNS blacklisting + * DKIM and ARC validation and signing + * DMARC validation + * SPF validation + + + +It also has a nice web UI: + +![rspamd web ui status][5] + +rspamd web ui status + +![rspamd web ui investigating a spam message][6] + +rspamd web ui investigating a spam message + +Setting up rspamd is quite easy. You basically just drop a bunch of configuration overrides into `/etc/rspamd/local.d` and you’re done. Heck, it mostly works out of the box. There’s a fancy `rspamadm configwizard` too. + +What you do want for rspamd is a redis server. redis is needed in [many places][7], such as rate limiting, greylisting, dmarc, reply tracking, ip scoring, neural networks. + +I made a few changes to the defaults: + + * I enabled subject rewriting instead of adding headers, so spam mail subjects get `[SPAM]` prepended, in `local.d/actions.conf`: + +``` + reject = 15; +rewrite_subject = 6; +add_header = 6; +greylist = 4; +subject = "[SPAM] %s"; +``` + + * I set `autolearn = true;` in `local.d/classifier-bayes.conf` to make it learn that an email that has a score of at least 15 (those that are rejected) is spam, and emails with negative scores are ham. + + * I set `extended_spam_headers = true;` in `local.d/milter_headers.conf` to get a report from rspamd in the header seeing the score and how the score came to be. + + + + +### ARC setup + +[ARC][8] is the ‘Authenticated Received Chain’ and is currently a DMARC working group work item. It allows forwarders / mailing lists to authenticate their forwarding of the emails and the checks they have performed. + +rspamd is capable of validating and signing emails with ARC, but I’m not sure how much influence ARC has on gmail at the moment, for example. + +There are three parts to setting up ARC: + + 1. Generate a DKIM key pair (use `rspamadm dkim_keygen`) + 2. Setup rspamd to sign incoming emails using the private key + 3. Add a DKIM `TXT` record for the public key. `rspamadm` helpfully tells you how it looks like. + + + +For step two, what we need to do is configure `local.d/arc.conf`. You can basically use the example configuration from the [rspamd page][9], the key point for signing incoming email is to specifiy `sign_incoming = true;` and `use_domain_sign_inbound = "recipient";` (FWIW, none of these options are documented, they are fairly new, and nobody updated the documentation for them). 
+ +My configuration looks like this at the moment: + +``` +# If false, messages with empty envelope from are not signed +allow_envfrom_empty = true; +# If true, envelope/header domain mismatch is ignored +allow_hdrfrom_mismatch = true; +# If true, multiple from headers are allowed (but only first is used) +allow_hdrfrom_multiple = false; +# If true, username does not need to contain matching domain +allow_username_mismatch = false; +# If false, messages from authenticated users are not selected for signing +auth_only = true; +# Default path to key, can include '$domain' and '$selector' variables +path = "${DBDIR}/arc/$domain.$selector.key"; +# Default selector to use +selector = "arc"; +# If false, messages from local networks are not selected for signing +sign_local = true; +# +sign_inbound = true; +# Symbol to add when message is signed +symbol_signed = "ARC_SIGNED"; +# Whether to fallback to global config +try_fallback = true; +# Domain to use for ARC signing: can be "header" or "envelope" +use_domain = "header"; +use_domain_sign_inbound = "recipient"; +# Whether to normalise domains to eSLD +use_esld = true; +# Whether to get keys from Redis +use_redis = false; +# Hash for ARC keys in Redis +key_prefix = "ARC_KEYS"; +``` + +This would also sign any outgoing email, but I’m not sure that’s necessary - my understanding is that we only care about ARC when forwarding/receiving incoming emails, not when sending them (at least that’s what gmail does). + +### Other Issues + +There are few other things to keep in mind when running your own mail server. I probably don’t know them all yet, but here we go: + + * You must have a fully qualified hostname resolving to a public IP address + + * Your public IP address must resolve back to the fully qualified host name + + * Again, you should run a recursive DNS resolver so your DNS blacklists work (thanks waldi for pointing that out) + + * Setup an SPF record. Mine looks like this: + +`jak-linux.org. 3600 IN TXT "v=spf1 +mx ~all"` + +this states that all my mail servers may send email, but others probably should not (a softfail). Not having an SPF record can punish you; for example, rspamd gives missing SPF and DKIM a score of 1. + + * All of that software is sandboxed using AppArmor. Makes you question its security a bit less! + + + + +### Source code, outlook + +As always, you can find the Ansible roles on [GitHub][10]. Feel free to point out bugs! 😉 + +In the next installment of this series, we will be looking at setting up Dovecot, and configuring DKIM. We probably also want to figure out how to run notmuch on the server, keep messages in matching maildirs, and have my laptop synchronize the maildir and notmuch state with the server. Ugh, sounds like a lot of work. 
+ +-------------------------------------------------------------------------------- + +via: https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/ + +作者:[Julian Andres Klode][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.jak-linux.org/ +[b]: https://github.com/lujun9972 +[1]: http://www.postfix.org/ +[2]: https://rspamd.com/ +[3]: https://www.isc.org/downloads/bind/ +[4]: https://github.com/roehling/postsrsd +[5]: https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/rspamd-status.png +[6]: https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/rspamd-spam.png +[7]: https://rspamd.com/doc/configuration/redis.html +[8]: http://arc-spec.org/ +[9]: https://rspamd.com/doc/modules/arc.html +[10]: https://github.com/julian-klode/ansible.jak-linux.org diff --git a/sources/tech/20190107 Different Ways To Update Linux Kernel For Ubuntu.md b/sources/tech/20190107 Different Ways To Update Linux Kernel For Ubuntu.md new file mode 100644 index 0000000000..32a6a7dd3e --- /dev/null +++ b/sources/tech/20190107 Different Ways To Update Linux Kernel For Ubuntu.md @@ -0,0 +1,232 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Different Ways To Update Linux Kernel For Ubuntu) +[#]: via: (https://www.ostechnix.com/different-ways-to-update-linux-kernel-for-ubuntu/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Different Ways To Update Linux Kernel For Ubuntu +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/ubuntu-linux-kernel-720x340.png) + +In this guide, we have given 7 different ways to update Linux kernel for Ubuntu. Among the 7 methods, five methods requires system reboot to apply the new Kernel and two methods don’t. Before updating Linux Kernel, it is **highly recommended to backup your important data!** All methods mentioned here are tested on Ubuntu OS only. We are not sure if they will work on other Ubuntu flavors (Eg. Xubuntu) and Ubuntu derivatives (Eg. Linux Mint). + +### Part A – Kernel Updates with reboot + +The following methods requires you to reboot your system to apply the new Linux Kernel. All of the following methods are recommended for personal or testing systems. Again, please backup your important data, configuration files and any other important stuff from your Ubuntu system. + +#### Method 1 – Update the Linux Kernel with dpkg (The manual way) + +This method helps you to manually download and install the latest available Linux kernel from **[kernel.ubuntu.com][1]** website. If you want to install most recent version (either stable or release candidate), this method will help. Download the Linux kernel version from the above link. As of writing this guide, the latest available version was **5.0-rc1** and latest stable version was **v4.20**. + +![][3] + +Click on the Linux Kernel version link of your choice and find the section for your architecture (‘Build for XXX’). In that section, download the two files with these patterns (where X.Y.Z is the highest version): + + 1. linux-image-*X.Y.Z*-generic-*.deb + 2. 
linux-modules-X.Y.Z*-generic-*.deb + + + +In a terminal, change directory to where the files are and run this command to manually install the kernel: + +``` +$ sudo dpkg --install *.deb +``` + +Reboot to use the new kernel: + +``` +$ sudo reboot +``` + +Check the kernel is as expected: + +``` +$ uname -r +``` + +For step by step instructions, please check the section titled under “Install Linux Kernel 4.15 LTS On DEB based systems” in the following guide. + ++ [Install Linux Kernel 4.15 In RPM And DEB Based Systems](https://www.ostechnix.com/install-linux-kernel-4-15-rpm-deb-based-systems/) + +The above guide is specifically written for 4.15 version. However, all the steps are same for installing latest versions too. + +**Pros:** No internet needed (You can download the Linux Kernel from any system). + +**Cons:** Manual update. Reboot necessary. + +#### Method 2 – Update the Linux Kernel with apt-get (The recommended method) + +This is the recommended way to install latest Linux kernel on Ubuntu-like systems. Unlike the previous method, this method will download and install latest Kernel version from Ubuntu official repositories instead of **kernel.ubuntu.com** website.. + +To update the whole system including the Kernel, just do: + +``` +$ sudo apt-get update + +$ sudo apt-get upgrade +``` + +If you want to update the Kernel only, run: + +``` +$ sudo apt-get upgrade linux-image-generic +``` + +**Pros:** Simple. Recommended method. + +**Cons:** Internet necessary. Reboot necessary. + +Updating Kernel from official repositories will mostly work out of the box without any problems. If it is the production system, this is the recommended way to update the Kernel. + +Method 1 and 2 requires user intervention to update Linux Kernels. The following methods (3, 4 & 5) are mostly automated. + +#### Method 3 – Update the Linux Kernel with Ukuu + +**Ukuu** is a Gtk GUI and command line tool that downloads the latest main line Linux kernel from **kernel.ubuntu.com** , and install it automatically in your Ubuntu desktop and server editions. Ukku is not only simplifies the process of manually downloading and installing new Kernels, but also helps you to safely remove the old and unnecessary Kernels. For more details, refer the following guide. + ++ [Ukuu – An Easy Way To Install And Upgrade Linux Kernel In Ubuntu-based Systems](https://www.ostechnix.com/ukuu-an-easy-way-to-install-and-upgrade-linux-kernel-in-ubuntu-based-systems/) + +**Pros:** Easy to install and use. Automatically installs main line Kernel. + +**Cons:** Internet necessary. Reboot necessary. + +#### Method 4 – Update the Linux Kernel with UKTools + +Just like Ukuu, the **UKTools** also fetches the latest stable Kernel from from **kernel.ubuntu.com** site and installs it automatically on Ubuntu and its derivatives like Linux Mint. More details about UKTools can be found in the link given below. + ++ [UKTools – Upgrade Latest Linux Kernel In Ubuntu And Derivatives](https://www.ostechnix.com/uktools-upgrade-latest-linux-kernel-in-ubuntu-and-derivatives/) + +**Pros:** Simple. Automated. + +**Cons:** Internet necessary. Reboot necessary. + +#### Method 5 – Update the Linux Kernel with Linux Kernel Utilities + +**Linux Kernel Utilities** is yet another program that makes the process of updating Linux kernel easy in Ubuntu-like systems. It is actually a set of BASH shell scripts used to compile and/or update latest Linux kernels for Debian and derivatives. 
It consists of three utilities, one for manually compiling and installing Kernel from source from [**http://www.kernel.org**][4] website, another for downloading and installing pre-compiled Kernels from from **** website. and third script is for removing the old kernels. For more details, please have a look at the following link. + ++ [Linux Kernel Utilities – Scripts To Compile And Update Latest Linux Kernel For Debian And Derivatives](https://www.ostechnix.com/linux-kernel-utilities-scripts-compile-update-latest-linux-kernel-debian-derivatives/) + +**Pros:** Simple. Automated. + +**Cons:** Internet necessary. Reboot necessary. + + +### Part B – Kernel Updates without reboot + +As I already said, all of above methods need you to reboot the server before the new kernel is active. If they are personal systems or testing machines, you could simply reboot and start using the new Kernel. But, what if they are production systems that requires zero downtime? No problem. This is where **Livepatching** comes in handy! + +The **livepatching** (or hot patching) allows you to install Linux updates or patches without rebooting, keeping your server at the latest security level, without any downtime. This is attractive for ‘always-on’ servers, such as web hosts, gaming servers, in fact, any situation where the server needs to stay on all the time. Linux vendors maintain patches only for security fixes, so this approach is best when security is your main concern. + +The following two methods doesn’t require system reboot and useful for updating Linux Kernel on production and mission-critical Ubuntu servers. + +#### Method 6 – Update the Linux Kernel Canonical Livepatch Service + +![][5] + +[**Canonical Livepatch Service**][6] applies Kernel updates, patches and security hotfixes automatically without rebooting the Ubuntu systems. It reduces the Ubuntu systems downtime and keep them secure. Canonical Livepatch Service can be set up either during or after installation. If you are using desktop Ubuntu, the Software Updater will automatically check for kernel patches and notify you. In a console-based system, it is up to you to run apt-get update regularly. It will install kernel security patches only when you run the command “apt-get upgrade”, hence is semi-automatic. + +Livepatch is free for three systems. If you have more than three, you need to upgrade to enterprise support solution named **Ubuntu Advantage** suite. This suite includes **Kernel Livepatching** and other services such as, + + * Extended Security Maintenance – critical security updates after Ubuntu end-of-life. + * Landscape – the systems management tool for using Ubuntu at scale. + * Knowledge Base – A private collection of articles and tutorials written by Ubuntu experts. + * Phone and web-based support. + + + +**Cost** + +Ubuntu Advantage includes three paid plans namely, Essential, Standard and Advanced. The basic plan (Essential plan) starts from **225 USD per year for one physical node** and **75 USD per year for one VPS**. It seems there is no monthly subscription for Ubuntu servers and desktops. You can view detailed information on all plans [**here**][7]. + +**Pros:** Simple. Semi-automatic. No reboot necessary. Free for 3 systems. + +**Cons:** Expensive for 4 or more hosts. No patch rollback. + +**Enable Canonical Livepatch Service** + +If you want to setup Livepatch service after installation, just do the following steps. + +Get a key at [**https://auth.livepatch.canonical.com/**][8]. 
+ +``` +$ sudo snap install canonical-livepatch + +$ sudo canonical-livepatch enable your-key +``` + +#### Method 7 – Update the Linux Kernel with KernelCare + +![][9] + +[**KernelCare**][10] is the newest of all the live patching solutions. It is the product of [CloudLinux][11]. KernelCare runs on Ubuntu and other flavors of Linux. It checks for patch releases every 4 hours and will install them without confirmation. Patches can be rolled back if there are problems. + +**Cost** + +Fees, per server: **4 USD per month** , **45 USD per year**. + +Compared to Ubuntu Livepatch, kernelCare seems very cheap and affordable. Good thing is **monthly subscriptions are also available**. Another notable feature is it supports other Linux distributions, such as Red Hat, CentOS, Debian, Oracle Linux, Amazon Linux and virtualization platforms like OpenVZ, Proxmox etc. + +You can read all the features and benefits of KernelCare [**here**][12] and check all available plan details [**here**][13]. + +**Pros:** Simple. Fully automated. Wide OS coverage. Patch rollback. No reboot necessary. Free license for non-profit organizations. Low cost. + +**Cons:** Not free (except for 30 day trial). + +**Enable KernelCare Service** + +Get a 30-day trial key at [**https://cloudlinux.com/kernelcare-free-trial5**][14]. + +Run the following commands to enable KernelCare and register the key. + +``` +$ sudo wget -qq -O - https://repo.cloudlinux.com/kernelcare/kernelcare_install.sh | bash + +$ sudo /usr/bin/kcarectl --register KEY +``` + +If you’re looking for an affordable and reliable commercial service to keep the Linux Kernel updated on your Linux servers, KernelCare is good to go. + +*with inputs from **Paul A. Jacobs** , a Technical Evangelist and Content Writer from Cloud Linux.* + +**Suggested read:** + +And, that’s all for now. Hope this was useful. If you believe any other tools/methods should include in this list, feel free to let us know in the comment section below. I will check and update this guide accordingly. + +More good stuffs to come. Stay tuned! + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/different-ways-to-update-linux-kernel-for-ubuntu/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: http://kernel.ubuntu.com/~kernel-ppa/mainline/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: http://www.ostechnix.com/wp-content/uploads/2019/01/Ubuntu-mainline-kernel.png +[4]: http://www.kernel.org +[5]: http://www.ostechnix.com/wp-content/uploads/2019/01/Livepatch.png +[6]: https://www.ubuntu.com/livepatch +[7]: https://www.ubuntu.com/support/plans-and-pricing +[8]: https://auth.livepatch.canonical.com/ +[9]: http://www.ostechnix.com/wp-content/uploads/2019/01/KernelCare.png +[10]: https://www.kernelcare.com/ +[11]: https://www.cloudlinux.com/ +[12]: https://www.kernelcare.com/update-kernel-linux/ +[13]: https://www.kernelcare.com/pricing/ +[14]: https://cloudlinux.com/kernelcare-free-trial5 diff --git a/sources/tech/20190107 DriveSync - Easy Way to Sync Files Between Local And Google Drive from Linux CLI.md b/sources/tech/20190107 DriveSync - Easy Way to Sync Files Between Local And Google Drive from Linux CLI.md new file mode 100644 index 0000000000..6552cc3905 --- /dev/null +++ b/sources/tech/20190107 DriveSync - Easy Way to Sync Files Between Local And Google Drive from Linux CLI.md @@ -0,0 +1,239 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (DriveSync – Easy Way to Sync Files Between Local And Google Drive from Linux CLI) +[#]: via: (https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +DriveSync – Easy Way to Sync Files Between Local And Google Drive from Linux CLI +====== + +Google Drive is an one of the best cloud storage compared with other cloud storage. + +It’s one of the application which is used by millions of users in daily basics. + +It allow users to access the application anywhere irrespective of devices. + +We can upload, download & share documents, photo, files, docs, spreadsheet, etc to anyone with securely. + +We had already written few articles in 2daygeek website about google drive mapping with Linux. + +If you would like to check those, navigate to the following link. + +GNOME desktop offers easy way to **[Integrate Google Drive Using Gnome Nautilus File Manager in Linux][1]** without headache. + +Also, you can give a try with **[Google Drive Ocamlfuse Client][2]**. + +### What’s DriveSync? + +[DriveSync][3] is a command line utility that synchronizes your files between local system and Google Drive via command line. + +Downloads new remote files, uploads new local files to your Drive and deletes or updates files both locally and on Drive if they have changed in one place. + +Allows blacklisting or whitelisting of files and folders that should not / should be synced. + +It was written in Ruby scripting language so, make sure your system should have ruby installed. If it’s not installed then install it as a prerequisites for DriveSync. 
+ +### DriveSync Features + + * Downloads new remote files + * Uploads new local files + * Delete or Update files in both locally and Drive + * Allow blacklist to disable sync for files and folders + * Automate the sync using cronjob + * Allow us to set file upload/download size (Defautl 512MB) + * Allow us to modify Timeout threshold + + + +### How to Install Ruby Scripting Language in Linux? + +Ruby is an interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (like in Perl). It is simple, straight-forward, and extensible. + +It’s available in all the Linux distribution official repository. Hence we can easily install it with help of distribution official **[Package Manager][4]**. + +For **`Fedora`** system, use **[DNF Command][5]** to install Ruby. + +``` +$ sudo dnf install ruby rubygem-bundler +``` + +For **`Debian/Ubuntu`** systems, use **[APT-GET Command][6]** or **[APT Command][7]** to install Ruby. + +``` +$ sudo apt install ruby ruby-bundler +``` + +For **`Arch Linux`** based systems, use **[Pacman Command][8]** to install Ruby. + +``` +$ sudo pacman -S ruby ruby-bundler +``` + +For **`RHEL/CentOS`** systems, use **[YUM Command][9]** to install Ruby. + +``` +$ sudo yum install ruby ruby-bundler +``` + +For **`openSUSE Leap`** system, use **[Zypper Command][10]** to install Ruby. + +``` +$ sudo zypper install ruby ruby-bundler +``` + +### How to Install DriveSync in Linux? + +DriveSync installation also easy to do it. Follow the below procedure to get it done. + +``` +$ git clone https://github.com/MStadlmeier/drivesync.git +$ cd drivesync/ +$ bundle install +``` + +### How to Set Up DriveSync in Linux? + +As of now, we had successfully installed DriveSync and still we need to perform few steps to use this. + +Run the following command to set up this and Sync the files. + +``` +$ ruby drivesync.rb +``` + +When you ran the above command you will be getting the below url. +![][12] + +Navigate to the given URL in your preferred Web Browser and follow the instruction. It will open a google sign-in page in default web browser. Enter your credentials then hit Sign in button. +![][13] + +Input your password. +![][14] + +Hit **`Allow`** button to allow DriveSync to access your Google Drive. +![][15] + +Finally, it will give you an authorization code. +![][16] + +Just copy and past it on the terminal and hit **`Enter`** button to start the sync. +![][17] + +Yes, it’s syncing the files from Google Drive to my local folder. + +``` +$ ruby drivesync.rb +Warning: Could not find config file at /home/daygeek/.drivesync/config.yml . Creating default config... +Open the following URL in the browser and enter the resulting code after authorization +https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force&client_id=xxxxxxxxxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com&include_granted_scopes=true&redirect_uri=urn:ietf:wg:oauth:2.0:oob&response_type=code&scope=https://www.googleapis.com/auth/drive +4/ygAxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx +Local folder is 1437 files behind and 0 files ahead of remote +Starting sync at 2019-01-06 19:48:49 +0530 +Downloading file 2018-07-31-17-48-54-635_1533039534635_XXXPM0534X_ITRV.zip ... +Downloading file 5459XXXXXXXXXX25_11-03-2018.PDF ... +Downloading file 2g-image-design/new-design-28-Mar-2018/new-base-format-icon-theme.svg ... +Downloading file 2g-image-design/new-design-28-Mar-2018/2g-banner-format.svg ... 
+Downloading file 2g-image-design/new-design-28-Mar-2018/new-base-format.svg ... +Downloading file documents/Magesh_Resume_Updated_26_Mar_2018.doc ... +Downloading file documents/Magesh_Resume_updated-new.doc ... +Downloading file documents/Aadhaar-Thanu.PNG ... +Downloading file documents/Aadhaar-Magesh.PNG ... +Downloading file documents/Copy of PP-docs.pdf ... +Downloading file EAadhaar_2189821080299520170807121602_25082017123052_172991.pdf ... +Downloading file Tanisha/VID_20170223_113925.mp4 ... +Downloading file Tanisha/VID_20170224_073234.mp4 ... +Downloading file Tanisha/VID_20170304_170457.mp4 ... +Downloading file Tanisha/IMG_20170225_203243.jpg ... +Downloading file Tanisha/IMG_20170226_123949.jpg ... +Downloading file Tanisha/IMG_20170226_123953.jpg ... +Downloading file Tanisha/IMG_20170304_184227.jpg ... +. +. +. +Sync complete. +``` + +It will create the **`drive`** folder under **`/home/user/Documents/`** and sync all the files in it. +![][18] + +DriveSync configuration files are located in the following location **`/home/user/.drivesync/`** if you had installed it on your **home** directory. + +``` +$ ls -lh ~/.drivesync/ +total 176K +-rw-r--r-- 1 daygeek daygeek 1.9K Jan 6 19:42 config.yml +-rw-r--r-- 1 daygeek daygeek 170K Jan 6 21:31 manifest +``` + +You can make your changes by modifying the **`config.yml`** file. + +### How to Verify Whether Sync is Working Fine or Not? + +To test this, we are going to create a new folder called **`2g-docs-2019`**. Also, adding an image file in it. Once it’s done, run the **`drivesync.rb`** command again. + +``` +$ ruby drivesync.rb +Local folder is 0 files behind and 1 files ahead of remote +Starting sync at 2019-01-06 21:59:32 +0530 +Uploading file 2g-docs-2019/Asciinema - Record And Share Your Terminal Activity On The Web.png ... +``` + +Yes, it has been synced to Google Drive. The same has been verified through Web Browser. +![][19] + +Create the below **CronJob** to enable an auto sync. The following “CronJob” will be running an every mins. + +``` +$ vi crontab +*/1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated ruby ~/drivesync/drivesync.rb +``` + +I have added one more file to test this. Yes, it got success. + +``` +Jan 07 09:36:01 daygeek-Y700 crond[590]: (daygeek) RELOAD (/var/spool/cron/daygeek) +Jan 07 09:36:01 daygeek-Y700 crond[20942]: pam_unix(crond:session): session opened for user daygeek by (uid=0) +Jan 07 09:36:01 daygeek-Y700 CROND[20943]: (daygeek) CMD (ruby ~/drivesync/drivesync.rb) +Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Local folder is 0 files behind and 1 files ahead of remote) +Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Starting sync at 2019-01-07 09:36:26 +0530) +Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Uploading file 2g-docs-2019/Check CPU And HDD Temperature In Linux.png ...) +Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT () +Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Sync complete.) 
+Jan 07 09:36:29 daygeek-Y700 CROND[20942]: pam_unix(crond:session): session closed for user daygeek +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/mount-access-setup-google-drive-in-linux/ +[2]: https://www.2daygeek.com/mount-access-google-drive-on-linux-with-google-drive-ocamlfuse-client/ +[3]: https://github.com/MStadlmeier/drivesync +[4]: https://www.2daygeek.com/category/package-management/ +[5]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[6]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[7]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[8]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[9]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[11]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-1.jpg +[13]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-2.png +[14]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-3.png +[15]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-4.png +[16]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-5.png +[17]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-6.jpg +[18]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-7.jpg +[19]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-8.png diff --git a/sources/tech/20190107 How to manage your media with Kodi.md b/sources/tech/20190107 How to manage your media with Kodi.md new file mode 100644 index 0000000000..cea446c5b0 --- /dev/null +++ b/sources/tech/20190107 How to manage your media with Kodi.md @@ -0,0 +1,303 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to manage your media with Kodi) +[#]: via: (https://opensource.com/article/19/1/manage-your-media-kodi) +[#]: author: (Steve Ovens https://opensource.com/users/stratusss) + +How to manage your media with Kodi +====== + +![](Get control over your home media content with Kodi media player software.) + +If you, like me, like to own your own data, chances are you also like to purchase movies and TV shows on Blu-Ray or DVD discs. 
And you may also like to make [ISOs][1] of the videos to keep exact digital copies, as I do. + +For a little while, it might be manageable to have a bunch of files stored in some sort of directory structure. However, as your collection grows, you may want features like resuming from a specific spot; keeping track of where you left off watching a video (i.e., its watched status); storing episode or movie summaries and movie trailers; buying media in multiple languages; or just having a sane way to play all those ISOs you ripped. + +This is where Kodi comes in. + +### What is Kodi? + +Modern [Kodi][2] is the successor to Xbox Media Player, which was discontinued way back in 2003. In June 2004, Xbox Media Center (XBMC) was born. For over three years, XBMC remained on the Xbox. Then in 2007, work began in earnest to port the media player over to Linux. + +![](https://opensource.com/sites/default/files/uploads/00_xbmc_500x300.png) + +Aside from some uninteresting technical history, things remained fairly stable, and XBMC grew in prominence. By 2014, XBMC had a thriving community, and its core functionality grew to include playing games, streaming content from the web, and connecting to mobile devices. This, combined with legal issues involving Xbox in the name, lead the team behind XBMC to rename it Kodi. Kodi is now branded as an "entertainment hub that brings all your digital media together into a beautiful and user-friendly package." + +Today, Kodi has an extensible interface that has allowed the open source community to build new functionality using plugins. Note that, as with all open source software, Kodi's developers are not responsible for the ecosystem's plugins. + +### How do I start? + +For Ubuntu-based distributions, Kodi is just a few short commands away: + +``` +sudo apt install software-properties-common +sudo add-apt-repository ppa:team-xbmc/ppa +sudo apt update +sudo apt install kodi +``` + +In Arch Linux, you can install the latest version from the community repo: + +``` +sudo pacman -S kodi +``` + +Packages were maintained for Fedora 26 by RPM Fusion (referenced in the [Kodi documentation][3]). I tried it on Fedora 29, and it was quite unstable. I'm sure that this will improve over time, but my experience is that Fedora 29 is not the ideal platform for Kodi. + +### OK, it's installed… now what? + +Before we proceed, note that I am making two assumptions about your media content: + + 1. You have your own local, legally attained content. + 2. You have already transferred this content from your DVDs, Blu-Rays, or another digital distribution source to your local directory or network. + + + +Kodi uses a scraping service to pull down TV and movie metadata. For Kodi to match things appropriately, I recommend adopting a directory and file-naming structure similar to this: + +``` +Utopia +├── Utopia.S01.dvd_rip.x264 +│   ├── Utopia.S01E01.dvd_rip.x264.mkv +│   ├── Utopia.S01E02.dvd_rip.x264.mkv +│   ├── Utopia.S01E03.dvd_rip.x264.mkv +│   ├── Utopia.S01E04.dvd_rip.x264.mkv +│   ├── Utopia.S01E05.dvd_rip.x264.mkv +│   ├── Utopia.S01E06.dvd_rip.x264.mkv +└── Utopia.S02.dvd_rip.x264 +    ├── Utopia.S02E01.dvd_rip.x264.mkv +    ├── Utopia.S02E02.dvd_rip.x264.mkv +    ├── Utopia.S02E03.dvd_rip.x264.mkv +    ├── Utopia.S02E04.dvd_rip.x264.mkv +    ├── Utopia.S02E05.dvd_rip.x264.mkv +    └── Utopia.S02E06.dvd_rip.x264.mkv +``` + +I put the source (my DVD) and the codec (x264) in the title, but these are optional. 
For a TV series, you can include the episode title in the filename if you like. The important part is **SxxExx** , which stands for Season and Episode. This is how Kodi (and by extension the scrapers) can identify your media. + +Assuming you have organized your media like this, let's do some basic Kodi configuration. + +### Add video sources + +Adding video sources is a simple, six-step process: + + 1. Enter the files section + 2. Select **Files** + 3. Click **Add source** + 4. Browse to your source + 5. Define the video content type + 6. Refresh the metadata + + + +If you're impatient, feel free to navigate these steps on your own. But if you want details, keep reading. + +When you first launch Kodi, you'll see the home screen below. Click **Enter files section**. It doesn't matter whether you do this under Movies (as shown here) or TV shows. + +![](https://opensource.com/sites/default/files/uploads/01_fresh_kodi_main_screen.png) + +Next, select the **Videos** folder, click **Files** , and choose **Add videos**. + +![](https://opensource.com/sites/default/files/uploads/02_videos_folder.png) + +![](https://opensource.com/sites/default/files/uploads/03_add_videos.png) + +Either click on **None** and start typing the path to your files or click **Browse** and use the file navigation. + +![](https://opensource.com/sites/default/files/uploads/04_browse_video_source.png) + +![](https://opensource.com/sites/default/files/uploads/05_add_video_source_name.png) + +As you can see in this screenshot, I added my local **Videos** directory. You can set some default options through **Browse** , such as specifying your home folder and any drives you have mounted—maybe on a network file system (NFS), universal plug and play (UPnP) device, Windows Network ([SMB/CIFS][4]), or [zeroconf][5]. I won't cover most of these, as they are outside the scope of this article, but we will use NFS later for one of Kodi's advanced features. + +After you select your path and click OK, identify the type of content you're working with. + +![](https://opensource.com/sites/default/files/uploads/06_define_video_content.png) + +Next, Kodi prompts you to refresh the metadata for the content in the selected directory. This is how Kodi knows what videos you have and their synopsis, cast information, thumbnails, fan art, etc. Select **Yes** , and you can watch the video-scanning progress in the top right-hand corner. + +![](https://opensource.com/sites/default/files/uploads/07_refresh.png) + +![](https://opensource.com/sites/default/files/uploads/08_active_scan_in_progress.png) + +When the scan completes, you'll see lots of useful information, such as video overviews and season and episode descriptions for TV shows. + +![](https://opensource.com/sites/default/files/uploads/09_screen_after_scan.png) + +![](https://opensource.com/sites/default/files/uploads/10_show_description.png) + +You can use the same process for other types of content, such as music or music videos. + +### Increase functionality with add-ons + +One of the most interesting things about open source projects is that the community often extends them well beyond their initial scope. Kodi has a very robust add-on infrastructure. Most of them are produced by Kodi fans who want to extend its default functionality, and sometimes companies (such as the [Plex][6] content streaming service) release official plugins. Be very careful about adding plugins from untrusted sources. Just because you find an add-on on the internet does not mean it is safe! 
+ +**Be warned:** Add-ons are not supported by Kodi's core team! + +Having said that, there are many useful add-ons that are worth your consideration. In my house, we use Kodi for local playback and Plex when we want to access our content outside the house—with one exception. One of our rooms has a poor WiFi signal. I rip my Blu-Rays to very large MKV files (usually 20–40GB each), and the WiFi (and therefore Kodi) can't handle the files without stuttering. Although you can (and we have) dug into some of the advanced buffering options, even those tweaks have proved insufficient with very large files. Since we already have a Plex server that can transcode content, we solved our problem with a Kodi add-on. + +To show how to install an add-on, I'll use Plex as an example. First, click on **Add-ons** in the side panel and select **Enter add-on browser**. Either use the search function or scroll down until you find Plex. + +![](https://opensource.com/sites/default/files/uploads/11_addons.png) + +Select the Plex add-on and click the **Install** button in the lower right-hand corner. + +![](https://opensource.com/sites/default/files/uploads/13_install_plex_addon.png) + +Once the download completes, you can access Plex on the main Kodi screen under **Add-ons**. + +![](https://opensource.com/sites/default/files/uploads/14_addons_finished_installing.png) + +There are several ways to configure an add-on. Some add-ons, such as NHL TV, are configured via a menu accessed by right-clicking the add-on and selecting Configure. Others, such as Plex, display a configuration walk-through when they launch. If an add-on doesn't seem to be configured when you first launch it, try right-clicking its menu and see if a settings option is available there. + +### Coordinating metadata across Kodi devices + +In our house, we have multiple machines that run Kodi. By default, Kodi tracks metadata, such as a video's watched status and show information, locally. Therefore, content updates on one machine won't appear on any other machine—unless you configure all your Kodi devices to store metadata inside an SQL database (which is a feature Kodi supports). This technique is not particularly difficult, but it is more advanced. If you're willing to put in the effort, here's how to do it. + +#### Before you begin + +There are a few things you need to know before configuring shared status for Kodi. + + 1. All content must be on a network share ([Samba][7], NFS, etc.). + 2. All content must be mounted via the network protocol, even if the disks are local to the machine. That means that no matter where the content is physically located, each client must be configured to use a network fileshare source. + 3. You need to be running an SQL-style database. Kodi's official guide walks through MySQL, but I chose MariaDB. + 4. All clients need to have the database port open (port 3306 in the case of MySQL/MariaDB) or the firewalls disabled. + 5. All clients must be running the same version of Kodi + + + +#### Install and configure the database + +If you're running Ubuntu, you can install MariaDB with the following commands: + +``` +sudo apt update +sudo apt install mariadb-server -y +``` + +I am running MariaDB on an Arch Linux machine. The [Arch Wiki][8] documents the initial setup process well, but I'll summarize it here. + +To install, issue the following command: + +``` +sudo pacman -S mariadb +``` + +Most distributions of MariaDB will have the same setup commands. 
I recommend that you understand what the commands do, but you can safely take the defaults if you're in a home environment.

```
sudo systemctl start mariadb
sudo mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
sudo mysql_secure_installation
```

Next, edit the MariaDB config file. This file is different depending on your distribution. On Ubuntu, you want to edit **/etc/mysql/mariadb.conf.d/50-server.cnf**. On Arch, the file is either **/etc/my.cnf** or **/etc/mysql/my.cnf**. Locate the line that says **bind-address = 127.0.0.1** and change it to your desired Ethernet port's IP address or to **bind-address = 0.0.0.0** if you want it to listen on all interfaces.

Restart the service so the change will take effect:

```
sudo systemctl restart mariadb
```

#### Configure Kodi and MariaDB/MySQL

To enable Kodi to write to the database, one of two things needs to happen: You can create the database yourself, or you can let Kodi do it for you. In this case, since the only database on this system is for Kodi, I'll create a user with the rights to create any databases that Kodi requires. Do NOT do this if the machine runs more than one database.

```
mysql -u root -p
CREATE USER 'kodi' IDENTIFIED BY 'kodi';
GRANT ALL ON *.* TO 'kodi';
flush privileges;
\q
```

This grants the user all rights—essentially enabling it to act as a root user. For my purposes, this is fine.

Next, on each Kodi device where you want to share metadata, create the following file: **`/home/<username>/.kodi/userdata/advancedsettings.xml`**. This file can contain a lot of very advanced, tweakable settings. My devices have these settings:

```
<advancedsettings>
    <videodatabase>
        <type>mysql</type>
        <host>mysql-arch.example.com</host>
        <port>3306</port>
        <user>kodi</user>
        <pass>kodi</pass>
    </videodatabase>
    <videolibrary>
        <importwatchedstate>true</importwatchedstate>
        <importresumepoint>true</importresumepoint>
    </videolibrary>
    <cache>
        <buffermode>1</buffermode>
        <memorysize>322122547</memorysize>
        <readfactor>20</readfactor>
    </cache>
</advancedsettings>
```

The **`<cache>`** section—which sets how much of a file Kodi will buffer over the network—is optional in this scenario. See the [Kodi wiki][9] for a full breakdown of this file and its options.

Once the configuration is complete, it's a good idea to close and reopen Kodi to make sure the settings are applied.

The final step is configuring all the Kodi clients to use the same network share for all their content. Only one client needs to scrape/refresh the metadata if everything is created successfully. When data is collected, you should see that Kodi creates a new database on your SQL server:

```
[kodi@kodi-mysql ~]$ mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 180
Server version: 10.1.37-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| MyVideos107        |
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)
```

### Wrapping up

This article walked through how to get up and running with the basic functionality of Kodi. You should be able to add content and pull down metadata to make browsing your media more convenient.

You also know how to search for, install, and potentially configure add-ons for additional features.
Be extra careful when downloading add-ons, as they are provided by the community at large and not the core developers. It's best to use add-ons only from organizations or companies you trust. + +And you know a bit about sharing metadata across multiple devices. You've been introduced to **advancedsettings.xml** ; hopefully it has piqued your interest. Kodi has a lot of dials and knobs to turn, and you can squeeze a lot of performance and functionality out of the platform with enough experimentation. + +Are you interested in doing more tweaking? What are some of your favorite add-ons or settings? Do you want to know how to change the user interface? What are some of your favorite skins? Let me know in the comments! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/manage-your-media-kodi + +作者:[Steve Ovens][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/stratusss +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/ISO_image +[2]: https://kodi.tv/ +[3]: https://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux#Fedora +[4]: https://en.wikipedia.org/wiki/Server_Message_Block +[5]: https://en.wikipedia.org/wiki/Zero-configuration_networking +[6]: https://www.plex.tv +[7]: https://www.samba.org/ +[8]: https://wiki.archlinux.org/index.php/MySQL +[9]: https://kodi.wiki/view/Advancedsettings.xml diff --git a/sources/tech/20190107 Testing isn-t everything.md b/sources/tech/20190107 Testing isn-t everything.md new file mode 100644 index 0000000000..b2a2daaaac --- /dev/null +++ b/sources/tech/20190107 Testing isn-t everything.md @@ -0,0 +1,135 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Testing isn't everything) +[#]: via: (https://arp242.net/weblog/testing.html) +[#]: author: (Martin Tournoij https://arp242.net/) + +Testing isn't everything +====== + +This is adopted from a discussion about [Want to write good unit tests in go? Don’t panic… or should you?][1] While this mainly talks about Go a lot of the points also apply to other languages. + +Some of the most difficult code I’ve worked with is code that is “easily testable”. Code that abstracts everything to the point where you have no idea what’s going on, just so that it can add a “unit test” to what would otherwise be a very straightforward function. DHH called this [Test-induced design damage][2]. + +Testing is just one tool to make sure that your program works, out of several. Another very important tool is writing code in such a way that it is easy to understand and reason about (“simplicity”). + +Books that advocate extensive testing – such as Robert C. Martin’s Clean Code – were written, in part, as a response to ever more complex programs, where you read 1,000 lines of code but still had no idea what’s going on. I recently had to port a simple Java “emoji replacer” (😂 ➙ 😂) to Go. To ensure compatibility I looked up the im­ple­men­ta­tion. It was a whole bunch of classes, factories, and whatnot which all just resulted in calling a regexp on a string. 
🤷 + +In dynamic languages like Ruby and Python tests are important for a different reason, as something like this will “work” just fine: + +``` +if condition: + print('w00t') +else: + nonexistent_function() +``` + +Except of course if that `else` branch is entered. It’s easy to typo stuff, or mix stuff up. + +In Go, both of these problems are less of a concern. It has a good static type system, and the focus is on simple straightforward code that is easy to comprehend. Even for a number of dynamic languages there are optional typing systems (function annotations in Python, TypeScript for JavaScript). + +Sometimes you can do a straightforward implementation that doesn’t sacrifice anything for testability; great! But sometimes you have to strike a balance. For some code, not adding a unit test is fine. + +Intensive focus on “unit tests” can be incredibly damaging to a code base. Some codebases have a gazillion unit tests, which makes any change excessively time-consuming as you’re fixing up a whole bunch of tests for even trivial changes. Often times a lot of these tests are just duplicates; adding tests to every layer of a simple CRUD HTTP endpoint is a common example. In many apps it’s fine to just rely on a single integration test. + +Stuff like SQL mocks is another great example. It makes code more complex, harder to change, all so we can say we added a “unit test” to `select * from foo where x=?`. The worst part is, it doesn’t even test anything other than verifying you didn’t typo an SQL query. As soon as the test starts doing anything useful, such as verifying that it actually returns the correct rows from the database, the Unit Test purists will start complaining that it’s not a True Unit Test™ and that You’re Doing It Wrong™. +For most queries, the integration tests and/or manual tests are fine, and extensive SQL mocks are entirely superfluous at best, and harmful at worst. + +There are exceptions, of course; if you’ve got a lot of `if cond { q += "more sql" }` then adding SQL mocks to verify the correctness of that logic might be a good idea. Even in those cases a “non-unit unit test” (e.g. one that just accesses the database) is still a viable option. Integration tests are also still an option. A lot of applications don’t have those kind of complex queries anyway. + +One important reason for the focus on unit tests is to ensure test code runs fast. This was a response to massive test harnesses that take a day to run. This, again, is not really a problem in Go. All integration tests I’ve written run in a reasonable amount of time (several seconds at most, usually faster). The test cache introduced in Go 1.10 makes it even less of a concern. + +Last year a coworker refactored our ETag-based caching library. The old code was very straightforward and easy to understand, and while I’m not claiming it was guaranteed bug-free, it did work very well for a long time. + +It should have been written with some tests in place, but it wasn’t (I didn’t write the original version). Note that the code was not completely untested, as we did have integration tests. + +The refactored version is much more complex. Aside from the two weeks lost on refactoring a working piece of code to … another working piece of code (topic for another post), I’m not so convinced it’s actually that much better. I consider myself a reasonably accomplished and experienced programmer, with a reasonable knowledge and experience in Go. 
I think that in general, based on feedback from peers and performance reviews, I am at least a programmer of “average” skill level, if not more. + +If an average programmer has trouble comprehending what is in essence a handful of simple functions because there are so many layers of abstractions, then something has gone wrong. The refactor traded one tool to verify correctness (simplicity) with another (testing). Simplicity is hardly a guarantee to ensure correctness, but neither are unit tests. Ideally, we should do both. + +Postscript: the refactor introduced a bug and removed a feature that was useful, but is now harder to add, not in the least because the code is much more complex. + +All units working correctly gives exactly zero guarantees that the program is working correctly. A lot of logic errors won’t be caught because the logic consists of several units working together. So you need integration tests, and if the integration tests duplicate half of your unit tests, then why bother with those unit tests? + +Test Driven Development (TDD) is also just one tool. It works well for some problems; not so much for others. In particular, I think that “forced to write code in tiny units” can be terribly harmful in some cases. Some code is just a serial script which says “do this, and then that, and then this”. Splitting that up in a whole bunch of “tiny units” can greatly reduce how easy the code is to understand, and thus harder to verify that it is correct. + +I’ve had to fix some Ruby code where everything was in tiny units – there is a strong culture of TDD in the Ruby community – and even though the units were easy to understand I found it incredibly hard to understand the application logic. If everything is split in “tiny units” then understanding how everything fits together to create an actual program that does something useful will be much harder. + +You see the same friction in the old microkernel vs. monolithic kernel debate, or the more recent microservices vs. monolithic app one. In principle splitting everything up in small parts sounds like a great idea, but in practice it turns out that making all the small parts work together is a very hard problem. A hybrid approach seems to work best for kernels and app design, balancing the ad­van­tages and downsides of both approaches. I think the same applies to code. + +To be clear, I am not against unit tests or TDD and claiming we should all gung-go cowboy code our way through life 🤠. I write unit tests and practice TDD, when it makes sense. My point is that unit tests and TDD are not the solution to every single last problem and should applied indiscriminately. This is why I use words such as “some” and “often” so frequently. + +This brings me to the topic of testing frameworks. I have never understood what problem libraries such as [goblin][3] are solving. How is this: + +``` +Expect(err).To(nil) +Expect(out).To(test.wantOut) +``` + +An improvement over this? + +``` +if err != nil { + t.Fatal(err) +} + +if out != tt.want { + t.Errorf("out: %q\nwant: %q", out, tt.want) +} +``` + +What’s wrong with `if` and `==`? Why do we need to abstract it? Note that with table-driven tests you’re only typing these checks once, so you’re saving just a few lines here. + +[Ginkgo][4] is even worse. It turns a very simple, straightforward, and understandable piece of code and doesn’t just abstract `if`, it also chops up the execution in several different functions (`BeforeEach()` and `DescribeTable()`). 
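
For comparison, here is a minimal sketch of that style using Ginkgo and Gomega. It is not taken from any real project; the `lookup` function and the expected value are invented purely for illustration:

```
// A made-up sketch, not from a real suite: in practice this lives in a
// _test.go file next to a RunSpecs bootstrap.
package config_test

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

// lookup stands in for whatever function is actually under test.
func lookup(key string) string { return "some value" }

var _ = Describe("lookup", func() {
	var out string

	// The setup is pulled out into one block...
	BeforeEach(func() {
		out = lookup("some-key")
	})

	// ...and the check itself lives in another.
	It("returns the expected value", func() {
		Expect(out).To(Equal("some value"))
	})
})
```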
+ +This is known as Behaviour-driven development (BDD). I am not entirely sure what to think of BDD. I am skeptical, but I’ve never properly used it in a large project so I’m hesitant to just dismiss it. Note that I said “properly”: most projects don’t really use BDD, they just use a library with a BDD syntax and shoehorn their testing code in to that. That’s ad-hoc BDD, or faux-BDD. + +Whatever merits BDD may have, they are not present simply because your testing code vaguely resembles BDD-style syntax. This on its own demonstrates that BDD is perhaps not a great idea for many projects. + +I think there are real problems with these BDD(-ish) test tools, as they obfuscate what you’re actually doing. No matter what, testing remains a matter of getting the output of a function and checking if that matches what you expected. No testing methodology is going to change that fundamental. The more layers you add on top of that, the harder it will be to debug. + +When determining if something is “easy” then my prime concern is not how easy something is to write, but how easy something is to debug when things fail. I will gladly spend a bit more effort writing things if that makes things a lot easier to debug. + +All code – including testing code – can fail in confusing, surprising, and unexpected ways (a “bug”), and then you’re expected to debug that code. The more complex the code, the harder it is to debug. + +You should expect all code – including testing code – to go through several debugging cycles. Note that with debugging cycle I don’t mean “there is a bug in the code you need to fix”, but rather “I need to look at this code to fix the bug”. + +In general, I already find testing code harder to debug than regular code, as the “code surface” tends to be larger. You have the testing code and the actual implementation code to think of. That’s a lot more than just thinking of the implementation code. + +Adding these abstractions means you will now also have to think about that, too! This might be okay if the abstractions would reduce the scope of what you have to think about, which is a common reason to add abstractions in regular code, but it doesn’t. It just adds more things to think about. + +So these are exactly the wrong kind of abstractions: they wrap and obfuscate, rather than separate concerns and reduce the scope. + +If you’re interested in soliciting contributions from other people in open source projects then making your tests understandable is a very important concern (it’s also important in business context, but a bit less so, as you’ve got actual time to train people). + +Seeing PRs with “here’s the code, it works, but I couldn’t figure out the tests, plz halp!” is not uncommon; and I’m fairly sure that at least a few people never even bothered to submit PRs just because they got stuck on the tests. I know I have. + +There is one open source project that I contributed to, and would like to contribute more to, but don’t because it’s just too hard to write and run tests. Every change is “write working code in 15 minutes, spend 45 minutes dealing with tests”. It’s … no fun at all. + +Writing good software is hard. I’ve got some ideas on how to do it, but don’t have a comprehensive view. I’m not sure if anyone really does. I do know that “always add unit tests” and “always practice TDD” isn’t the answer, in spite of them being useful concepts. 
To give an analogy: most people would agree that a free market is a good idea, but at the same time even most libertarians would agree it’s not the complete solution to every single problem (well, [some do][5], but those ideas are … rather misguided). + +You can mail me at [martin@arp242.net][6] or [create a GitHub issue][7] for feedback, questions, etc. + +-------------------------------------------------------------------------------- + +via: https://arp242.net/weblog/testing.html + +作者:[Martin Tournoij][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://arp242.net/ +[b]: https://github.com/lujun9972 +[1]: https://medium.com/@jens.neuse/want-to-write-good-unit-tests-in-go-dont-panic-or-should-you-ba3eb5bf4f51 +[2]: http://david.heinemeierhansson.com/2014/test-induced-design-damage.html +[3]: https://github.com/franela/goblin +[4]: https://github.com/onsi/ginkgo +[5]: https://en.wikipedia.org/wiki/Murray_Rothbard#Children's_rights_and_parental_obligations +[6]: mailto:martin@arp242.net +[7]: https://github.com/Carpetsmoker/arp242.net/issues/new diff --git a/sources/tech/20190108 Avoid package names like base, util, or common.md b/sources/tech/20190108 Avoid package names like base, util, or common.md new file mode 100644 index 0000000000..b2c70b0b2e --- /dev/null +++ b/sources/tech/20190108 Avoid package names like base, util, or common.md @@ -0,0 +1,57 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Avoid package names like base, util, or common) +[#]: via: (https://dave.cheney.net/2019/01/08/avoid-package-names-like-base-util-or-common) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +Avoid package names like base, util, or common +====== + +Writing a good Go package starts with its name. Think of your package’s name as an elevator pitch, you have to describe what it does using just one word. + +A common cause of poor package names are _utility packages_. These are packages where helpers and utility code congeal. These packages contain an assortment of unrelated functions, as such their utility is hard to describe in terms of what the package _provides_. This often leads to a package’s name being derived from what the package _contains_ —utilities. + +Package names like `utils` or `helpers` are commonly found in projects which have developed deep package hierarchies and want to share helper functions without introducing import loops. Extracting utility functions to new package breaks the import loop, but as the package stems from a design problem in the project, its name doesn’t reflect its purpose, only its function in breaking the import cycle. + +> [A little] duplication is far cheaper than the wrong abstraction. + +— [Sandy Metz][1] + +My recommendation to improve the name of `utils` or `helpers` packages is to analyse where they are imported and move the relevant functions into the calling package. Even if this results in some code duplication this is preferable to introducing an import dependency between two packages. In the case where utility functions are used in many places, prefer multiple packages, each focused on a single aspect with a correspondingly descriptive name. 
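
To make that concrete with an invented example (the package and function names here are hypothetical, not from any real codebase): instead of reaching for `utils.NewRateLimiter()`, the same helper can live in a small package named after what it provides, so the call site reads `rate.New()`:

```
// Package rate provides simple call-rate limiting.
package rate

import "time"

// Limiter releases one token per interval.
type Limiter struct {
	ticks <-chan time.Time
}

// New returns a Limiter that allows one event per interval.
func New(interval time.Duration) *Limiter {
	return &Limiter{ticks: time.Tick(interval)}
}

// Wait blocks until the next event may proceed.
func (l *Limiter) Wait() {
	<-l.ticks
}
```

And if only a single caller ever needs the helper, the advice above still applies: move it into that caller's package instead of creating a new one.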
+ +Packages with names like `base` or `common` are often found when functionality common to two or more related facilities, for example common types between a client and server or a server and its mock, has been refactored into a separate package. Instead the solution is to reduce the number of packages by combining client, server, and common code into a single package named after the facility the package provides. + +For example, the `net/http` package does not have `client` and `server` packages, instead it has `client.go` and `server.go` files, each holding their respective types. `transport.go` holds for the common message transport code used by both HTTP clients and servers. + +Name your packages after what they _provide_ , not what they _contain_. + +### Related posts: + + 1. [Simple profiling package moved, updated][2] + 2. [The package level logger anti pattern][3] + 3. [How to include C code in your Go package][4] + 4. [Why I think Go package management is important][5] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2019/01/08/avoid-package-names-like-base-util-or-common + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://www.sandimetz.com/blog/2016/1/20/the-wrong-abstraction +[2]: https://dave.cheney.net/2014/10/22/simple-profiling-package-moved-updated (Simple profiling package moved, updated) +[3]: https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern (The package level logger anti pattern) +[4]: https://dave.cheney.net/2013/09/07/how-to-include-c-code-in-your-go-package (How to include C code in your Go package) +[5]: https://dave.cheney.net/2013/10/10/why-i-think-go-package-management-is-important (Why I think Go package management is important) diff --git a/sources/tech/20190108 Create your own video streaming server with Linux.md b/sources/tech/20190108 Create your own video streaming server with Linux.md new file mode 100644 index 0000000000..24dd44524d --- /dev/null +++ b/sources/tech/20190108 Create your own video streaming server with Linux.md @@ -0,0 +1,301 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create your own video streaming server with Linux) +[#]: via: (https://opensource.com/article/19/1/basic-live-video-streaming-server) +[#]: author: (Aaron J.Prisk https://opensource.com/users/ricepriskytreat) + +Create your own video streaming server with Linux +====== +Set up a basic live streaming server on a Linux or BSD operating system. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/shortcut_command_function_editing_key.png?itok=a0sEc5vo) + +Live video streaming is incredibly popular—and it's still growing. Platforms like Amazon's Twitch and Google's YouTube boast millions of users that stream and consume countless hours of live and recorded media. These services are often free to use but require you to have an account and generally hold your content behind advertisements. Some people don't need their videos to be available to the masses or just want more control over their content. Thankfully, with the power of open source software, anyone can set up a live streaming server. 
+ +### Getting started + +In this tutorial, I'll explain how to set up a basic live streaming server with a Linux or BSD operating system. + +This leads to the inevitable question of system requirements. These can vary, as there are a lot of variables involved with live streaming, such as: + + * **Stream quality:** Do you want to stream in high definition or will standard definition fit your needs? + * **Viewership:** How many viewers are you expecting for your videos? + * **Storage:** Do you plan on keeping saved copies of your video stream? + * **Access:** Will your stream be private or open to the world? + + + +There are no set rules when it comes to system requirements, so I recommend you experiment and find what works best for your needs. I installed my server on a virtual machine with 4GB RAM, a 20GB hard drive, and a single Intel i7 processor core. + +This project uses the Real-Time Messaging Protocol (RTMP) to handle audio and video streaming. There are other protocols available, but I chose RTMP because it has broad support. As open standards like WebRTC become more compatible, I would recommend that route. + +It's also very important to know that "live" doesn't always mean instant. A video stream must be encoded, transferred, buffered, and displayed, which often adds delays. The delay can be shortened or lengthened depending on the type of stream you're creating and its attributes. + +### Setting up a Linux server + +You can use many different distributions of Linux, but I prefer Ubuntu, so I downloaded the [Ubuntu Server][1] edition for my operating system. If you prefer your server to have a graphical user interface (GUI), feel free to use [Ubuntu Desktop][2] or one of its many flavors. Then, I fired up the Ubuntu installer on my computer or virtual machine and chose the settings that best matched my environment. Below are the steps I took. + +Note: Because this is a server, you'll probably want to set some static network settings. + +![](https://opensource.com/sites/default/files/uploads/stream-server_profilesetup.png) + +After the installer finishes and your system reboots, you'll be greeted with a lovely new Ubuntu system. As with any newly installed operating system, install any updates that are available: + +``` +sudo apt update +sudo apt upgrade +``` + +This streaming server will use the very powerful and versatile Nginx web server, so you'll need to install it: + +``` +sudo apt install nginx +``` + +Then you'll need to get the RTMP module so Nginx can handle your media stream: + +``` +sudo add-apt-repository universe +sudo apt install libnginx-mod-rtmp +``` + +Adjust your web server's configuration so it can accept and deliver your media stream. + +``` +sudo nano /etc/nginx/nginx.conf +``` + +Scroll to the bottom of the configuration file and add the following code: + +``` +rtmp { +        server { +                listen 1935; +                chunk_size 4096; + +                application live { +                        live on; +                        record off; +                } +        } +} +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_config.png) + +Save the config. Because I'm a heretic, I use [Nano][3] for editing configuration files. In Nano, you can save your config by pressing **Ctrl+X** , **Y** , and then **Enter.** + +This is a very minimal config that will create a working streaming server. You'll add to this config later, but this is a great starting point. 
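
Before you restart anything, you can ask Nginx to check the edited file for syntax errors. This uses the standard `-t` flag and nothing specific to the RTMP module:

```
sudo nginx -t
```

If it complains, recheck the braces and semicolons in the block you just added before moving on.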
+ +However, before you can begin your first stream, you'll need to restart Nginx with its new configuration: + +``` +sudo systemctl restart nginx +``` + +### Setting up a BSD server + +If you're of the "beastie" persuasion, getting a streaming server up and running is also devilishly easy. + +Head on over to the [FreeBSD][4] website and download the latest release. Fire up the FreeBSD installer on your computer or virtual machine and go through the initial steps and choose settings that best match your environment. Since this is a server, you'll likely want to set some static network settings. + +After the installer finishes and your system reboots, you should have a shiny new FreeBSD system. Like any other freshly installed system, you'll likely want to get everything updated (from this step forward, make sure you're logged in as root): + +``` +pkg update +pkg upgrade +``` + +I install [Nano][3] for editing configuration files: + +``` +pkg install nano +``` + +This streaming server will use the very powerful and versatile Nginx web server. You can build Nginx using the excellent ports system that FreeBSD boasts. + +First, update your ports tree: + +``` +portsnap fetch +portsnap extract +``` + +Browse to the Nginx ports directory: + +``` +cd /usr/ports/www/nginx +``` + +And begin building Nginx by running: + +``` +make install +``` + +You'll see a screen asking what modules to include in your Nginx build. For this project, you'll need to add the RTMP module. Scroll down until the RTMP module is selected and press **Space**. Then Press **Enter** to proceed with the rest of the build and installation. + +Once Nginx has finished installing, it's time to configure it for streaming purposes. + +First, add an entry into **/etc/rc.conf** to ensure the Nginx server starts when your system boots: + +``` +nano /etc/rc.conf +``` + +Add this text to the file: + +``` +nginx_enable="YES" +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_streamingconfig.png) + +Next, create a webroot directory from where Nginx will serve its content. I call mine **stream** : + +``` +cd /usr/local/www/ +mkdir stream +chmod -R 755 stream/ +``` + +Now that you have created your stream directory, configure Nginx by editing its configuration file: + +``` +nano /usr/local/etc/nginx/nginx.conf +``` + +Load your streaming modules at the top of the file: + +``` +load_module /usr/local/libexec/nginx/ngx_stream_module.so; +load_module /usr/local/libexec/nginx/ngx_rtmp_module.so; +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_modules.png) + +Under the **Server** section, change the webroot location to match the one you created earlier: + +``` +Location / { +root /usr/local/www/stream +} +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_webroot.png) + +And finally, add your RTMP settings so Nginx will know how to handle your media streams: + +``` +rtmp { +        server { +                listen 1935; +                chunk_size 4096; + +                application live { +                        live on; +                        record off; +                } +        } +} +``` + +Save the config. In Nano, you can do this by pressing **Ctrl+X** , **Y** , and then **Enter.** + +As you can see, this is a very minimal config that will create a working streaming server. Later, you'll add to this config, but this will provide you with a great starting point. 
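
Since this configuration loads the RTMP module explicitly with `load_module`, a quick syntax test will also catch a build where the module never got compiled in. As on Linux, the standard `-t` flag does the job (run it as root, like the other commands in this section):

```
nginx -t
```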
+ +However, before you can begin your first stream, you'll need to restart Nginx with its new config: + +``` +service nginx restart +``` + +### Set up your streaming software + +#### Broadcasting with OBS + +Now that your server is ready to accept your video streams, it's time to set up your streaming software. This tutorial uses the powerful and open source Open Broadcast Studio (OBS). + +Head over to the [OBS website][5] and find the build for your operating system and install it. Once OBS launches, you should see a first-time-run wizard that will help you configure OBS with the settings that best fit your hardware. + +![](https://opensource.com/sites/default/files/uploads/stream-server_autoconfig.png) + +OBS isn't capturing anything because you haven't supplied it with a source. For this tutorial, you'll just capture your desktop for the stream. Simply click the **+** button under **Source** , choose **Screen Capture** , and select which desktop you want to capture. + +Click OK, and you should see OBS mirroring your desktop. + +Now it's time to send your newly configured video stream to your server. In OBS, click **File** > **Settings**. Click on the **Stream** section, and set **Stream Type** to **Custom Streaming Server**. + +In the URL box, enter the prefix **rtmp://** followed the IP address of your streaming server followed by **/live**. For example, **rtmp://IP-ADDRESS/live**. + +Next, you'll probably want to enter a Stream key—a special identifier required to view your stream. Enter whatever key you want (and can remember) in the **Stream key** box. + +![](https://opensource.com/sites/default/files/uploads/stream-server_streamkey.png) + +Click **Apply** and then **OK**. + +Now that OBS is configured to send your stream to your server, you can start your first stream. Click **Start Streaming**. + +If everything worked, you should see the button change to **Stop Streaming** and some bandwidth metrics will appear at the bottom of OBS. + +![](https://opensource.com/sites/default/files/uploads/stream-server_metrics.png) + +If you receive an error, double-check Stream Settings in OBS for misspellings. If everything looks good, there could be another issue preventing it from working. + +### Viewing your stream + +A live video isn't much good if no one is watching it, so be your first viewer! + +There are a multitude of open source media players that support RTMP, but the most well-known is probably [VLC media player][6]. + +After you install and launch VLC, open your stream by clicking on **Media** > **Open Network Stream**. Enter the path to your stream, adding the Stream Key you set up in OBS, then click **Play**. For example, **rtmp://IP-ADDRESS/live/SECRET-KEY**. + +You should now be viewing your very own live video stream! + +![](https://opensource.com/sites/default/files/uploads/stream-server_livevideo.png) + +### Where to go next? + +This is a very simple setup that will get you off the ground. Here are two other features you likely will want to use. + + * **Limit access:** The next step you might want to take is to limit access to your server, as the default setup allows anyone to stream to and from the server. There are a variety of ways to set this up, such as an operating system firewall, [.htaccess file][7], or even using the [built-in access controls in the STMP module][8]. + + * **Record streams:** This simple Nginx configuration will only stream and won't save your videos, but this is easy to add. 
In the Nginx config, under the RTMP section, set up the recording options and the location where you want to save your videos. Make sure the path you set exists and Nginx is able to write to it. + + + + +``` +application live { +             live on; +             record all; +             record_path /var/www/html/recordings; +             record_unique on; +} +``` + +The world of live streaming is constantly evolving, and if you're interested in more advanced uses, there are lots of other great resources you can find floating around the internet. Good luck and happy streaming! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/basic-live-video-streaming-server + +作者:[Aaron J.Prisk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ricepriskytreat +[b]: https://github.com/lujun9972 +[1]: https://www.ubuntu.com/download/server +[2]: https://www.ubuntu.com/download/desktop +[3]: https://www.nano-editor.org/ +[4]: https://www.freebsd.org/ +[5]: https://obsproject.com/ +[6]: https://www.videolan.org/vlc/index.html +[7]: https://httpd.apache.org/docs/current/howto/htaccess.html +[8]: https://github.com/arut/nginx-rtmp-module/wiki/Directives#access diff --git a/sources/tech/20190109 Automating deployment strategies with Ansible.md b/sources/tech/20190109 Automating deployment strategies with Ansible.md new file mode 100644 index 0000000000..175244e760 --- /dev/null +++ b/sources/tech/20190109 Automating deployment strategies with Ansible.md @@ -0,0 +1,152 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Automating deployment strategies with Ansible) +[#]: via: (https://opensource.com/article/19/1/automating-deployment-strategies-ansible) +[#]: author: (Jario da Silva Junior https://opensource.com/users/jairojunior) + +Automating deployment strategies with Ansible +====== +Use automation to eliminate time sinkholes due to repetitive tasks and unplanned work. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M) + +When you examine your technology stack from the bottom layer to the top—hardware, operating system (OS), middleware, and application—with their respective configurations, it's clear that changes are far more frequent as you go up in the stack. Your hardware will hardly change, your OS has a long lifecycle, and your middleware will keep up with the application's needs, but even if your release cycle is long (weeks or months), your applications will be the most volatile. + +![](https://opensource.com/sites/default/files/uploads/osdc-deployment-strategies.png) + +In [The Practice of System and Network Administration][1], the authors categorize the biggest "time sinkholes" in IT as manual/non-standard provisioning of OSes and application deployments. These time sinkholes will consume you with repetitive tasks or unplanned work. + +How so? Let's say you provision a new server without Network Time Protocol (NTP) properly configured, and a small percentage of your requests—in a cluster of dozens of servers—start to behave strangely because an application uses some sort of scheduler that relies on correct time. 
When you look at it like this, it is an easy problem to fix, but how long would it take your team to figure it out? Incidents or unplanned work consume a lot of your time and, even worse, your greatest talents. Should you really be wasting time investigating production systems like this? Wouldn't it be better to set this server aside and automatically provision a new one from scratch?

What about manual deployment? Imagine 20 binaries deployed across a farm of nodes with their respective configuration files. How error-prone is this? Inevitably, it will eventually end up as unplanned work.

The [State of DevOps Report 2018][2] introduces the stages of DevOps adoption, and it's no surprise that Stage 0 includes deployment automation and reuse of deployment patterns, while Stages 1 and 2 focus on standardization of your infrastructure stack to reduce inconsistencies across your environment.

Note that, more than once, I have seen an ops team using this "standardization" as an excuse to limit a development team's ability to deliver, forcing them to use a hammer on something that is definitely not a nail. Don't do it; the price is extremely high.

The lesson to be learned here is that lack of automation not only increases your lead time but also the rate of problems in your process and the amount of unplanned work you face. If you've read [The Phoenix Project][3], you know this is the root of all evil in any value stream, and if you don't get rid of it, it will eventually kill your business.

When trying to fill the biggest time sinkholes, why not start with automating operating system installation? We could, but the results would take longer to appear since new virtual machines are not created as frequently as applications are deployed. In other words, this may not free up the time we need to power our initiative, so it could die prematurely.

Still not convinced? Smaller and more frequent releases are also extremely positive from the development side. Let's explain a little further…

### Deploy ≠ Release

The first thing to understand is that, although they're used interchangeably, deployment and release do **NOT** mean the same thing. Release refers to providing the user with a new version, while deployment is the technical process of deploying the new version. Let's focus on the technical process of deployment.

### Tasks, groups, and Ansible

We need to understand the deployment process from the beginning to the end, including everything in the middle—the tasks, which servers are involved in the process, and which steps are executed—to avoid falling into the pitfalls described by Mattias Geniar in [Automating the unknown][4].

#### Tasks

The steps commonly executed in a regular deployment process include:

  * Deploy application(s)/database(s) or database change(s)
  * Stop/start services and monitoring
  * Add/remove the server from our load balancers
  * Verify application state—is it ready to serve requests?
  * Manual approval—is it necessary?



For some people, automating the deployment process but leaving a manual approval step is like riding a bike with training wheels. As someone once told me: "It's better to ride with training wheels than not ride at all."

What if a tool doesn't include an API or a command-line interface (CLI) to enable task automation? Well, maybe it's time to think about changing tools.
There are many open source application servers, databases, monitoring systems, and load balancers that are easily automated—thanks in large part to the [Unix way][5]. When adopting a new technology, eliminate options that are not automated and use your creativity to support your legacy technologies. For example, I've seen people versioning network appliance configuration files and updating them using FTP. + +And guess what? It's a wonderful time to adopt open source tools. The recent [Accelerate: State of DevOps][6] report found that open source technologies are in predominant use in high-performance organizations. The logic is pretty simple: open source projects function in a "Darwinist" model, where those that do not adapt and evolve will die for lack of a user base or contributions. Feedback is paramount to software evolution. + +#### Groups + +To identify groups of servers to target for automation, think about the most tasks you want to automate, such as those that: + + * Deploy application(s)/database(s) or database change(s) + * Stop/start services and monitoring + * Add/remove server(s) from load balancer(s) + * Verify application state—is it ready to serve requests? + + + +#### The playbook + +A high-level deployment process could be: + + 1. Stop monitoring (to avoid false-positives) + 2. Remove server from the load balancer (to prevent the user from receiving an error code) + 3. Stop the service (to enable a graceful shutdown) + 4. Deploy the new version of the application + 5. Wait for the application to be ready to receive new requests + 6. Execute steps 3, 2, and 1. + 7. Do the same for the next N servers. + + + +Having documentation of your process is nice, but having an executable documenting your deployment is better! Here's what steps 1–5 would look like in Ansible for a fully open source stack: + +``` +- name: Disable alerts +  nagios: +    action: disable_alerts +    host: "{{ inventory_hostname }}" +    services: webserver +  delegate_to: "{{ item }}" +  loop: "{{ groups.monitoring }}" + +- name: Disable servers in the LB +  haproxy: +    host: "{{ inventory_hostname }}" +    state: disabled +    backend: app +  delegate_to: "{{ item }}" +  loop: " {{ groups.lbserver }}" + +- name: Stop the service +  service: name=httpd state=stopped + +- name: Deploy a new version +  unarchive: src=app.tar.gz dest=/var/www/app + +- name: Verify application state +  uri: +    url: "http://{{ inventory_hostname }}/app/healthz" +    status_code: 200 +  retries: 5 +``` + +### Why Ansible? + +There are other alternatives for application deployment, but the things that make Ansible an excellent choice include: + + * Multi-tier orchestration (i.e., **delegate_to** ) allowing you to orderly target different groups of servers: monitoring, load balancer, application server, database, etc. + * Rolling upgrade (i.e., serial) to control how changes are made (e.g., 1 by 1, N by N, X% at a time, etc.) + * Error control, **max_fail_percentage** and **any_errors_fatal** , is my process all-in or will it tolerate fails? + * A vast library of modules for: + * Monitoring (e.g., Nagios, Zabbix, etc.) + * Load balancers (e.g., HAProxy, F5, Netscaler, Cisco, etc.) 
+ * Services (e.g., service, command, file) + * Deployment (e.g., copy, unarchive) + * Programmatic verifications (e.g., command, Uniform Resource Identifier) + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/automating-deployment-strategies-ansible + +作者:[Jario da Silva Junior][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jairojunior +[b]: https://github.com/lujun9972 +[1]: https://www.amazon.com/Practice-System-Network-Administration-Enterprise/dp/0321919165/ref=dp_ob_title_bk +[2]: https://puppet.com/resources/whitepaper/state-of-devops-report +[3]: https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592 +[4]: https://ma.ttias.be/automating-unknown/ +[5]: https://en.wikipedia.org/wiki/Unix_philosophy +[6]: https://cloudplatformonline.com/2018-state-of-devops.html diff --git a/sources/tech/20190111 Build a retro gaming console with RetroPie.md b/sources/tech/20190111 Build a retro gaming console with RetroPie.md new file mode 100644 index 0000000000..eedac575c9 --- /dev/null +++ b/sources/tech/20190111 Build a retro gaming console with RetroPie.md @@ -0,0 +1,82 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Build a retro gaming console with RetroPie) +[#]: via: (https://opensource.com/article/19/1/retropie) +[#]: author: (Jay LaCroix https://opensource.com/users/jlacroix) + +Build a retro gaming console with RetroPie +====== +Play your favorite classic Nintendo, Sega, and Sony console games on Linux. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_gaming_games_roundup_news.png?itok=KM0ViL0f) + +The most common question I get on [my YouTube channel][1] and in person is what my favorite Linux distribution is. If I limit the answer to what I run on my desktops and laptops, my answer will typically be some form of an Ubuntu-based Linux distro. My honest answer to this question may surprise many. My favorite Linux distribution is actually [RetroPie][2]. + +As passionate as I am about Linux and open source software, I'm equally passionate about classic gaming, specifically video games produced in the '90s and earlier. I spend most of my surplus income on older games, and I now have a collection of close to a thousand games for over 20 gaming consoles. In my spare time, I raid flea markets, yard sales, estate sales, and eBay buying games for various consoles, including almost every iteration made by Nintendo, Sega, and Sony. There's something about classic games that I adore, a charm that seems lost in games released nowadays. + +Unfortunately, collecting retro games has its fair share of challenges. Cartridges with memory for save files will lose their charge over time, requiring the battery to be replaced. While it's not hard to replace save batteries (if you know how), it's still time-consuming. Games on CD-ROMs are subject to disc rot, which means that even if you take good care of them, they'll still lose data over time and become unplayable. Also, sometimes it's difficult to find replacement parts for some consoles. This wouldn't be so much of an issue if the majority of classic games were available digitally, but the vast majority are never re-released on a digital platform. 
+ +### Gaming on RetroPie + +RetroPie is a great project and an asset to retro gaming enthusiasts like me. RetroPie is a Raspbian-based distribution designed for use on the Raspberry Pi (though it is possible to get it working on other platforms, such as a PC). RetroPie boots into a graphical interface that is completely controllable via a gamepad or joystick and allows you to easily manage digital copies (ROMs) of your favorite games. You can scrape information from the internet to organize your collection better and manage lists of favorite games, and the entire interface is very user-friendly and efficient. From the interface, you can launch directly into a game, then exit the game by pressing a combination of buttons on your gamepad. You rarely need a keyboard, unless you have to enter your WiFi password or manually edit configuration files. + +I use RetroPie to host a digital copy of every physical game I own in my collection. When I purchase a game from a local store or eBay, I also download the ROM. As a collector, this is very convenient. If I don't have a particular physical console within arms reach, I can boot up RetroPie and enjoy a game quickly without having to connect cables or clean cartridge contacts. There's still something to be said about playing a game on the original hardware, but if I'm pressed for time, RetroPie is very convenient. I also don't have to worry about dead save batteries, dirty cartridge contacts, disc rot, or any of the other issues collectors like me have to regularly deal with. I simply play the game. + +Also, RetroPie allows me to be very clever and utilize my technical know-how to achieve additional functionality that's not normally available. For example, I have three RetroPies set up, each of them synchronizing their files between each other by leveraging [Syncthing][3], a popular open source file synchronization tool. The synchronization happens automatically, and it means I can start a game on one television and continue in the same place on another unit since the save files are included in the synchronization. To take it a step further, I also back up my save and configuration files to [Backblaze B2][4], so I'm protected if an SD card becomes defective. + +### Setting up RetroPie + +Setting up RetroPie is very easy, and if you've ever set up a Raspberry Pi Linux distribution before (such as Raspbian) the process is essentially the same—you simply download the IMG file and flash it to your SD card by utilizing another tool, such as [Etcher][5], and insert it into your RetroPie. Then plug in an AC adapter and gamepad and hook it up to your television via HDMI. Optionally, you can buy a case to protect your RetroPie from outside elements and add visual appeal. Here is a listing of things you'll need to get started: + + * Raspberry Pi board (Model 3B+ or higher recommended) + * SD card (16GB or larger recommended) + * A USB gamepad + * UL-listed micro USB power adapter, at least 2.5 amp + + + +If you choose to add the optional Raspberry Pi case, I recommend the Super NES and Super Famicom themed cases from [RetroFlag][6]. Not only do these cases look cool, but they also have fully functioning power and reset buttons. This means you can configure the reset and power buttons to directly trigger the operating system's halt process, rather than abruptly terminating power. This definitely makes for a more professional experience, but it does require the installation of a special script. The instructions are on [RetroFlag's GitHub page][7]. 
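In rough terms, that setup amounts to fetching and running RetroFlag's installer script from the repository. The exact command below is an assumption based on what the project's README showed at the time of writing, so double-check it on the GitHub page before piping anything into sudo:

```
# Assumed example based on the repository's README at the time of writing;
# verify the current command on RetroFlag's GitHub page before running it.
wget -O - "https://raw.githubusercontent.com/RetroFlag/retroflag-picase/master/install.sh" | sudo bash
```

Once the script is in place, the case's power and reset buttons trigger a clean halt or restart instead of abruptly cutting power.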
Be wary: there are many cases available on Amazon and eBay of varying quality. Some of them are cheap knock-offs of RetroFlag cases, and others are just a lower quality overall. In fact, even cases by RetroFlag vary in quality—I had some power-distribution issues with the NES-themed case that made for an unstable experience. If in doubt, I've found that RetroFlag's Super NES and Super Famicom themed cases work very well. + +### Adding games + +When you boot RetroPie for the first time, it will resize the filesystem to ensure you have full access to the available space on your SD card and allow you to set up your gamepad. I can't give you links for game ROMs, so I'll leave that part up to you to figure out. When you've found them, simply add them to the RetroPie SD card in the designated folder, which would be located under **/home/pi/RetroPie/roms/ **. You can use your favorite tool for transferring the ROMs to the Pi, such as [SCP][8] in a terminal, [WinSCP][9], [Samba][10], etc. Once you've added the games, you can rescan them by pressing start and choosing the option to restart EmulationStation. When it restarts, it should automatically add menu entries for the ROMs you've added. That's basically all there is to it. + +(The rescan updates EmulationStation’s game inventory. If you don’t do that, it won’t list any newly added games you copy over.) + +Regarding the games' performance, your mileage will vary depending on which consoles you're emulating. For example, I've noticed that Sega Dreamcast games barely run at all, and most Nintendo 64 games will run sluggishly with a bad framerate. Many PlayStation Portable (PSP) games also perform inconsistently. However, all of the 8-bit and 16-bit consoles emulate seemingly perfectly—I haven't run into a single 8-bit or 16-bit game that doesn't run well. Surprisingly, games designed for the original PlayStation run great for me, which is a great feat considering the lower-performance potential of the Raspberry Pi. + +Overall, RetroPie's performance is great, but the Raspberry Pi is not as powerful as a gaming PC, so adjust your expectations accordingly. + +### Conclusion + +RetroPie is a fantastic open source project dedicated to preserving classic games and an asset to game collectors everywhere. Having a digital copy of my physical game collection is extremely convenient. If I were to tell my childhood self that one day I could have an entire game collection on one device, I probably wouldn't believe it. But RetroPie has become a staple in my household and provides hours of fun and enjoyment. + +If you want to see the parts I mentioned as well as a quick installation overview, I have [a video][11] on [my YouTube channel][12] that goes over the process and shows off some gameplay at the end. 
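One last practical note on the ROM-copying step from earlier: if you go the command-line route, transferring a game over SSH is a one-liner. This example assumes RetroPie's default pi user and hostname (or use its IP address) and a SNES title; adjust the system subfolder to match the console you are adding to:

```
# Copy a ROM into the matching system folder under /home/pi/RetroPie/roms/
scp "Super Game (USA).sfc" pi@retropie:/home/pi/RetroPie/roms/snes/
```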
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/retropie + +作者:[Jay LaCroix][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jlacroix +[b]: https://github.com/lujun9972 +[1]: https://www.youtube.com/channel/UCxQKHvKbmSzGMvUrVtJYnUA +[2]: https://retropie.org.uk/ +[3]: https://syncthing.net/ +[4]: https://www.backblaze.com/b2/cloud-storage.html +[5]: https://www.balena.io/etcher/ +[6]: https://www.amazon.com/shop/learnlinux.tv?listId=1N9V89LEH5S8K +[7]: https://github.com/RetroFlag/retroflag-picase +[8]: https://en.wikipedia.org/wiki/Secure_copy +[9]: https://winscp.net/eng/index.php +[10]: https://www.samba.org/ +[11]: https://www.youtube.com/watch?v=D8V-KaQzsWM +[12]: http://www.youtube.com/c/LearnLinuxtv diff --git a/sources/tech/20190113 Editing Subtitles in Linux.md b/sources/tech/20190113 Editing Subtitles in Linux.md new file mode 100644 index 0000000000..1eaa6a68fd --- /dev/null +++ b/sources/tech/20190113 Editing Subtitles in Linux.md @@ -0,0 +1,168 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Editing Subtitles in Linux) +[#]: via: (https://itsfoss.com/editing-subtitles) +[#]: author: (Shirish https://itsfoss.com/author/shirish/) + +Editing Subtitles in Linux +====== + +I have been a world movie and regional movies lover for decades. Subtitles are the essential tool that have enabled me to enjoy the best movies in various languages and from various countries. + +If you enjoy watching movies with subtitles, you might have noticed that sometimes the subtitles are not synced or not correct. + +Did you know that you can edit subtitles and make them better? Let me show you some basic subtitle editing in Linux. + +![Editing subtitles in Linux][1] + +### Extracting subtitles from closed captions data + +Around 2012, 2013 I came to know of a tool called [CCEextractor.][2] As time passed, it has become one of the vital tools for me, especially if I come across a media file which has the subtitle embedded in it. + +CCExtractor analyzes video files and produces independent subtitle files from the closed captions data. + +CCExtractor is a cross-platform, free and open source tool. The tool has matured quite a bit from its formative years and has been part of [GSOC][3] and Google Code-in now and [then.][4] + +The tool, to put it simply, is more or less a set of scripts which work one after another in a serialized order to give you an extracted subtitle. + +You can follow the installation instructions for CCExtractor on [this page][5]. + +After installing when you want to extract subtitles from a media file, do the following: + +``` +ccextractor +``` + +The output of the command will be something like this: + +It basically scans the media file. In this case, it found that the media file is in malyalam and that the media container is an [.mkv][6] container. It extracted the subtitle file with the same name as the video file adding _eng to it. + +CCExtractor is a wonderful tool which can be used to enhance subtitles along with Subtitle Edit which I will share in the next section. + +``` +Interesting Read: There is an interesting synopsis of subtitles at [vicaps][7] which tells and shares why subtitles are important to us. 
It goes into quite a bit of detail of movie-making as well for those interested in such topics. +``` + +### Editing subtitles with SubtitleEditor Tool + +You probably are aware that most subtitles are in [.srt format][8] . The beautiful thing about this format is and was you could load it in your text editor and do little fixes in it. + +A srt file looks something like this when launched into a simple text-editor: + +The excerpt subtitle I have shared is from a pretty Old German Movie called [The Cabinet of Dr. Caligari (1920)][9] + +Subtitleeditor is a wonderful tool when it comes to editing subtitles. Subtitle Editor is and can be used to manipulate time duration, frame-rate of the subtitle file to be in sync with the media file, duration of breaks in-between and much more. I’ll share some of the basic subtitle editing here. + +![][10] + +First install subtitleeditor the same way you installed ccextractor, using your favorite installation method. In Debian, you can use this command: + +``` +sudo apt install subtitleeditor +``` + +When you have it installed, let’s see some of the common scenarios where you need to edit a subtitle. + +#### Manipulating Frame-rates to sync with Media file + +If you find that the subtitles are not synced with the video, one of the reasons could be the difference between the frame rates of the video file and the subtitle file. + +How do you know the frame rates of these files, then? + +To get the frame rate of a video file, you can use the mediainfo tool. You may need to install it first using your distribution’s package manager. + +Using mediainfo is simple: + +``` +$ mediainfo somefile.mkv | grep Frame + Format settings : CABAC / 4 Ref Frames + Format settings, ReFrames : 4 frames + Frame rate mode : Constant + Frame rate : 25.000 FPS + Bits/(Pixel*Frame) : 0.082 + Frame rate : 46.875 FPS (1024 SPF) +``` + +Now you can see that framerate of the video file is 25.000 FPS. The other Frame-rate we see is for the audio. While I can share why particular fps are used in Video-encoding, Audio-encoding etc. it would be a different subject matter. There is a lot of history associated with it. + +Next is to find out the frame rate of the subtitle file and this is a slightly complicated. + +Usually, most subtitles are in a zipped format. Unzipping the .zip archive along with the subtitle file which ends in something.srt. Along with it, there is usually also a .info file with the same name which sometime may have the frame rate of the subtitle. + +If not, then it usually is a good idea to go some site and download the subtitle from a site which has that frame rate information. For this specific German file, I will be using [Opensubtitle.org][11] + +As you can see in the link, the frame rate of the subtitle is 23.976 FPS. Quite obviously, it won’t play well with my video file with frame rate 25.000 FPS. + +In such cases, you can change the frame rate of the subtitle file using the Subtitle Editor tool: + +Select all the contents from the subtitle file by doing CTRL+A. Go to Timings -> Change Framerate and change frame rates from 23.976 fps to 25.000 fps or whatever it is that is desired. Save the changed file. + +![synchronize frame rates of subtitles in Linux][12] + +#### Changing the Starting position of a subtitle file + +Sometimes the above method may be enough, sometimes though it will not be enough. + +You might find some cases when the start of the subtitle file is different from that in the movie or a media file while the frame rate is the same. 
In such cases, do the following:

Select all the contents of the subtitle file with CTRL+A. Go to Timings -> Move Subtitle.

![Move subtitles using Subtitle Editor on Linux][13]

Enter the new starting position of the subtitle file and save the changed file.

![Move subtitles using Subtitle Editor in Linux][14]

If you want to be more accurate, use [mpv][15] to play the movie or media file and click on the timing bar, which shows how much of the movie or media file has elapsed; clicking on it will also reveal the microsecond.

I usually like to be accurate, so I try to be as precise as possible. It is very difficult in mpv because human reaction time is imprecise. If I want to be super accurate, I use something like [Audacity][16], but that is another ball-game altogether, as you can do so much more with it. That may be something to explore in a future blog post as well.

#### Manipulating Duration

Sometimes even doing both is not enough, and you have to shrink or extend the duration of individual lines to make them sync with the media file. This is one of the more tedious jobs, as you have to fix the duration of each sentence individually. It can happen especially if you have variable frame rates in the media file (rare nowadays, but you still get such files).

In such a scenario, you may have to edit the durations manually, and automation is not possible. The best way is either to fix the video file (not possible without degrading the video quality) or to get the video from another source at a higher quality and then [transcode][17] it with the settings you prefer. That again is a major undertaking which I could shed some light on in a future blog post.

### Conclusion

What I have shared above is more or less about improving existing subtitle files. If you were to start from scratch, you would need loads of time. I haven't covered that at all, because subtitling even an hour of video can easily take anywhere from 4-6 hours or more, depending on the skills of the subtitler, patience, context, jargon, accents, whether the speakers are native English speakers, the translator, and so on, all of which make a difference to the quality of the subtitle.

I hope you find this interesting, and that from now onward you'll handle your subtitles slightly better. If you have any suggestions to add, please leave a comment below. 
+ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/editing-subtitles + +作者:[Shirish][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/shirish/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/editing-subtitles-in-linux.jpeg?resize=800%2C450&ssl=1 +[2]: https://www.ccextractor.org/ +[3]: https://itsfoss.com/best-open-source-internships/ +[4]: https://www.ccextractor.org/public:codein:google_code-in_2018 +[5]: https://github.com/CCExtractor/ccextractor/wiki/Installation +[6]: https://en.wikipedia.org/wiki/Matroska +[7]: https://www.vicaps.com/blog/history-of-silent-movies-and-subtitles/ +[8]: https://en.wikipedia.org/wiki/SubRip#SubRip_text_file_format +[9]: https://www.imdb.com/title/tt0010323/ +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/subtitleeditor.jpg?ssl=1 +[11]: https://www.opensubtitles.org/en/search/sublanguageid-eng/idmovie-4105 +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/subtitleeditor-frame-rate-sync.jpg?resize=800%2C450&ssl=1 +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Move-subtitles-Caligiri.jpg?resize=800%2C450&ssl=1 +[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/move-subtitles.jpg?ssl=1 +[15]: https://itsfoss.com/mpv-video-player/ +[16]: https://www.audacityteam.org/ +[17]: https://en.wikipedia.org/wiki/Transcoding +[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/editing-subtitles-in-linux.jpeg?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190115 Linux Desktop Setup - HookRace Blog.md b/sources/tech/20190115 Linux Desktop Setup - HookRace Blog.md new file mode 100644 index 0000000000..29d5f63d2a --- /dev/null +++ b/sources/tech/20190115 Linux Desktop Setup - HookRace Blog.md @@ -0,0 +1,514 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux Desktop Setup · HookRace Blog) +[#]: via: (https://hookrace.net/blog/linux-desktop-setup/) +[#]: author: (Dennis Felsing http://felsin9.de/nnis/) + +Linux Desktop Setup +====== + + +My software setup has been surprisingly constant over the last decade, after a few years of experimentation since I initially switched to Linux in 2006. It might be interesting to look back in another 10 years and see what changed. A quick overview of what’s running as I’m writing this post: + +[![htop overview][1]][2] + +### Motivation + +My software priorities are, in no specific order: + + * Programs should run on my local system so that I’m in control of them, this excludes cloud solutions. + * Programs should run in the terminal, so that they can be used consistently from anywhere, including weak computers or a phone. + * Keyboard focused is nearly automatic by using terminal software. I prefer to use the mouse where it makes sense only, reaching for the mouse all the time during typing feels like a waste of time. Occasionally it took me an hour to notice that the mouse wasn’t even plugged in. + * Ideally use fast and efficient software, I don’t like hearing the fan and feeling the room heat up. I can also keep running older hardware for much longer, my 10 year old Thinkpad x200s is still fine for all the software I use. + * Be composable. 
I don’t want to do every step manually, instead automate more when it makes sense. This naturally favors the shell. + + + +### Operating Systems + +I had a hard start with Linux 12 years ago by removing Windows, armed with just the [Gentoo Linux][3] installation CD and a printed manual to get a functioning Linux system. It took me a few days of compiling and tinkering, but in the end I felt like I had learnt a lot. + +I haven’t looked back to Windows since then, but I switched to [Arch Linux][4] on my laptop after having the fan fail from the constant compilation stress. Later I also switched all my other computers and private servers to Arch Linux. As a rolling release distribution you get package upgrades all the time, but the most important breakages are nicely reported in the [Arch Linux News][5]. + +One annoyance though is that Arch Linux removes the old kernel modules once you upgrade it. I usually notice that once I try plugging in a USB flash drive and the kernel fails to load the relevant module. Instead you’re supposed to reboot after each kernel upgrade. There are a few [hacks][6] around to get around the problem, but I haven’t been bothered enough to actually use them. + +Similar problems happen with other programs, commonly Firefox, cron or Samba requiring a restart after an upgrade, but annoyingly not warning you that that’s the case. [SUSE][7], which I use at work, nicely warns about such cases. + +For the [DDNet][8] production servers I prefer [Debian][9] over Arch Linux, so that I have a lower chance of breakage on each upgrade. For my firewall and router I used [OpenBSD][10] for its clean system, documentation and great [pf firewall][11], but right now I don’t have a need for a separate router anymore. + +### Window Manager + +Since I started out with Gentoo I quickly noticed the huge compile time of KDE, which made it a no-go for me. I looked around for more minimal solutions, and used [Openbox][12] and [Fluxbox][13] initially. At some point I jumped on the tiling window manager train in order to be more keyboard-focused and picked up [dwm][14] and [awesome][15] close to their initial releases. + +In the end I settled on [xmonad][16] thanks to its flexibility, extendability and being written and configured in pure [Haskell][17], a great functional programming language. One example of this is that at home I run a single 40” 4K screen, but often split it up into four virtual screens, each displaying a workspace on which my windows are automatically arranged. Of course xmonad has a [module][18] for that. + +[dzen][19] and [conky][20] function as a simple enough status bar for me. My entire conky config looks like this: + +``` +out_to_console yes +update_interval 1 +total_run_times 0 + +TEXT +${downspeed eth0} ${upspeed eth0} | $cpu% ${loadavg 1} ${loadavg 2} ${loadavg 3} $mem/$memmax | ${time %F %T} +``` + +And gets piped straight into dzen2 with `conky | dzen2 -fn '-xos4-terminus-medium-r-normal-*-12-*-*-*-*-*-*-*' -bg '#000000' -fg '#ffffff' -p -e '' -x 1000 -w 920 -xs 1 -ta r`. + +One important feature for me is to make the terminal emit a beep sound once a job is done. This is done simply by adding a `\a` character to the `PR_TITLEBAR` variable in zsh, which is shown whenever a job is done. Of course I disable the actual beep sound by blacklisting the `pcspkr` kernel module with `echo "blacklist pcspkr" > /etc/modprobe.d/nobeep.conf`. Instead the sound gets turned into an urgency by urxvt’s `URxvt.urgentOnBell: true` setting. 
Then xmonad has an urgency hook to capture this and I can automatically focus the currently urgent window with a key combination. In dzen I get the urgent windowspaces displayed with a nice and bright `#ff0000`. + +The final result in all its glory on my Laptop: + +[![Laptop screenshot][21]][22] + +I hear that [i3][23] has become quite popular in the last years, but it requires more manual window alignment instead of specifying automated methods to do it. + +I realize that there are also terminal multiplexers like [tmux][24], but I still require a few graphical applications, so in the end I never used them productively. + +### Terminal Persistency + +In order to keep terminals alive I use [dtach][25], which is just the detach feature of screen. In order to make every terminal on my computer detachable I wrote a [small wrapper script][26]. This means that even if I had to restart my X server I could keep all my terminals running just fine, both local and remote. + +### Shell & Programming + +Instead of [bash][27] I use [zsh][28] as my shell for its huge number of features. + +As a terminal emulator I found [urxvt][29] to be simple enough, support Unicode and 256 colors and has great performance. Another great feature is being able to run the urxvt client and daemon separately, so that even a large number of terminals barely takes up any memory (except for the scrollback buffer). + +There is only one font that looks absolutely clean and perfect to me: [Terminus][30]. Since i’s a bitmap font everything is pixel perfect and renders extremely fast and at low CPU usage. In order to switch fonts on-demand in each terminal with `CTRL-WIN-[1-7]` my ~/.Xdefaults contains: + +``` +URxvt.font: -xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-* +dzen2.font: -xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-* + +URxvt.keysym.C-M-1: command:\033]50;-xos4-terminus-medium-r-normal-*-12-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-2: command:\033]50;-xos4-terminus-medium-r-normal-*-14-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-3: command:\033]50;-xos4-terminus-medium-r-normal-*-18-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-4: command:\033]50;-xos4-terminus-medium-r-normal-*-22-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-5: command:\033]50;-xos4-terminus-medium-r-normal-*-24-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-6: command:\033]50;-xos4-terminus-medium-r-normal-*-28-*-*-*-*-*-*-*\007 +URxvt.keysym.C-M-7: command:\033]50;-xos4-terminus-medium-r-normal-*-32-*-*-*-*-*-*-*\007 + +URxvt.keysym.C-M-n: command:\033]10;#ffffff\007\033]11;#000000\007\033]12;#ffffff\007\033]706;#00ffff\007\033]707;#ffff00\007 +URxvt.keysym.C-M-b: command:\033]10;#000000\007\033]11;#ffffff\007\033]12;#000000\007\033]706;#0000ff\007\033]707;#ff0000\007 +``` + +For programming and writing I use [Vim][31] with syntax highlighting and [ctags][32] for indexing, as well as a few terminal windows with grep, sed and the other usual suspects for search and manipulation. This is probably not at the same level of comfort as an IDE, but allows me more automation. + +One problem with Vim is that you get so used to its key mappings that you’ll want to use them everywhere. + +[Python][33] and [Nim][34] do well as scripting languages where the shell is not powerful enough. + +### System Monitoring + +[htop][35] (look at the background of that site, it’s a live view of the server that’s hosting it) works great for getting a quick overview of what the software is currently doing. [lm_sensors][36] allows monitoring the hardware temperatures, fans and voltages. 
[powertop][37] is a great little tool by Intel to find power savings. [ncdu][38] lets you analyze disk usage interactively. + +[nmap][39], iptraf-ng, [tcpdump][40] and [Wireshark][41] are essential tools for analyzing network problems. + +There are of course many more great tools. + +### Mails & Synchronization + +On my home server I have a [fetchmail][42] daemon running for each email acccount that I have. Fetchmail just retrieves the incoming emails and invokes [procmail][43]: + +``` +#!/bin/sh +for i in /home/deen/.fetchmail/*; do + FETCHMAILHOME=$i /usr/bin/fetchmail -m 'procmail -d %T' -d 60 +done +``` + +The configuration is as simple as it could be and waits for the server to inform us of fresh emails: + +``` +poll imap.1und1.de protocol imap timeout 120 user "dennis@felsin9.de" password "XXX" folders INBOX keep ssl idle +``` + +My `.procmailrc` config contains a few rules to backup all mails and sort them into the correct directories, for example based on the mailing list id or from field in the mail header: + +``` +MAILDIR=/home/deen/shared/Maildir +LOGFILE=$HOME/.procmaillog +LOGABSTRACT=no +VERBOSE=off +FORMAIL=/usr/bin/formail +NL=" +" + +:0wc +* ! ? test -d /media/mailarchive/`date +%Y` +| mkdir -p /media/mailarchive/`date +%Y` + +# Make backups of all mail received in format YYYY/YYYY-MM +:0c +/media/mailarchive/`date +%Y`/`date +%Y-%m` + +:0 +* ^From: .*(.*@.*.kit.edu|.*@.*.uka.de|.*@.*.uni-karlsruhe.de) +$MAILDIR/.uni/ + +:0 +* ^list-Id:.*lists.kit.edu +$MAILDIR/.uni-ml/ + +[...] +``` + +To send emails I use [msmtp][44], which is also great to configure: + +``` +account default +host smtp.1und1.de +tls on +tls_trust_file /etc/ssl/certs/ca-certificates.crt +auth on +from dennis@felsin9.de +user dennis@felsin9.de +password XXX + +[...] +``` + +But so far the emails are still on the server. My documents are all stored in a directory that I synchronize between all computers using [Unison][45]. Think of Unison as a bidirectional interactive [rsync][46]. My emails are part of this documents directory and thus they end up on my desktop computers. + +This also means that while the emails reach my server immediately, I only fetch them on deman instead of getting instant notifications when an email comes in. + +From there I read the mails with [mutt][47], using the sidebar plugin to display my mail directories. The `/etc/mailcap` file is essential to display non-plaintext mails containing HTML, Word or PDF: + +``` +text/html;w3m -I %{charset} -T text/html; copiousoutput +application/msword; antiword %s; copiousoutput +application/pdf; pdftotext -layout /dev/stdin -; copiousoutput +``` + +### News & Communication + +[Newsboat][48] is a nice little RSS/Atom feed reader in the terminal. I have it running on the server in a `tach` session with about 150 feeds. Filtering feeds locally is also possible, for example: + +``` +ignore-article "https://forum.ddnet.tw/feed.php" "title =~ \"Map Testing •\" or title =~ \"Old maps •\" or title =~ \"Map Bugs •\" or title =~ \"Archive •\" or title =~ \"Waiting for mapper •\" or title =~ \"Other mods •\" or title =~ \"Fixes •\"" +``` + +I use [Irssi][49] the same way for communication via IRC. + +### Calendar + +[remind][50] is a calendar that can be used from the command line. 
Setting new reminders is done by editing the `rem` files: + +``` +# One time events +REM 2019-01-20 +90 Flight to China %b + +# Recurring Holidays +REM 1 May +90 Holiday "Tag der Arbeit" %b +REM [trigger(easterdate(year(today()))-2)] +90 Holiday "Karfreitag" %b + +# Time Change +REM Nov Sunday 1 --7 +90 Time Change (03:00 -> 02:00) %b +REM Apr Sunday 1 --7 +90 Time Change (02:00 -> 03:00) %b + +# Birthdays +FSET birthday(x) "'s " + ord(year(trigdate())-x) + " birthday is %b" +REM 16 Apr +90 MSG Andreas[birthday(1994)] + +# Sun +SET $LatDeg 49 +SET $LatMin 19 +SET $LatSec 49 +SET $LongDeg -8 +SET $LongMin -40 +SET $LongSec -24 + +MSG Sun from [sunrise(trigdate())] to [sunset(trigdate())] +[...] +``` + +Unfortunately there is no Chinese Lunar calendar function in remind yet, so Chinese holidays can’t be calculated easily. + +I use two aliases for remind: + +``` +rem -m -b1 -q -g +``` + +to see a list of the next events in chronological order and + +``` +rem -m -b1 -q -cuc12 -w$(($(tput cols)+1)) | sed -e "s/\f//g" | less +``` + +to show a calendar fitting just the width of my terminal: + +![remcal][51] + +### Dictionary + +[rdictcc][52] is a little known dictionary tool that uses the excellent dictionary files from [dict.cc][53] and turns them into a local database: + +``` +$ rdictcc rasch +====================[ A => B ]==================== +rasch: + - apace + - brisk [speedy] + - cursory + - in a timely manner + - quick + - quickly + - rapid + - rapidly + - sharpish [Br.] [coll.] + - speedily + - speedy + - swift + - swiftly +rasch [gehen]: + - smartly [quickly] +Rasch {n} [Zittergras-Segge]: + - Alpine grass [Carex brizoides] + - quaking grass sedge [Carex brizoides] +Rasch {m} [regional] [Putzrasch]: + - scouring pad +====================[ B => A ]==================== +Rasch model: + - Rasch-Modell {n} +``` + +### Writing and Reading + +I have a simple todo file containing my tasks, that is basically always sitting open in a Vim session. For work I also use the todo file as a “done” file so that I can later check what tasks I finished on each day. + +For writing documents, letters and presentations I use [LaTeX][54] for its superior typesetting. A simple letter in German format can be set like this for example: + +``` +\documentclass[paper = a4, fromalign = right]{scrlttr2} +\usepackage{german} +\usepackage{eurosym} +\usepackage[utf8]{inputenc} +\setlength{\parskip}{6pt} +\setlength{\parindent}{0pt} + +\setkomavar{fromname}{Dennis Felsing} +\setkomavar{fromaddress}{Meine Str. 1\\69181 Leimen} +\setkomavar{subject}{Titel} + +\setkomavar*{enclseparator}{Anlagen} + +\makeatletter +\@setplength{refvpos}{89mm} +\makeatother + +\begin{document} +\begin{letter} {Herr Soundso\\Deine Str. 2\\69121 Heidelberg} +\opening{Sehr geehrter Herr Soundso,} + +Sie haben bei mir seit dem Bla Bla Bla. + +Ich fordere Sie hiermit zu Bla Bla Bla auf. + +\closing{Mit freundlichen Grüßen} + +\end{letter} +\end{document} +``` + +Further example documents and presentations can be found over at [my private site][55]. + +To read PDFs [Zathura][56] is fast, has Vim-like controls and even supports two different PDF backends: Poppler and MuPDF. [Evince][57] on the other hand is more full-featured for the cases where I encounter documents that Zathura doesn’t like. + +### Graphical Editing + +[GIMP][58] and [Inkscape][59] are easy choices for photo editing and interactive vector graphics respectively. 
+ +In some cases [Imagemagick][60] is good enough though and can be used straight from the command line and thus automated to edit images. Similarly [Graphviz][61] and [TikZ][62] can be used to draw graphs and other diagrams. + +### Web Browsing + +As a web browser I’ve always used [Firefox][63] for its extensibility and low resource usage compared to Chrome. + +Unfortunately the [Pentadactyl][64] extension development stopped after Firefox switched to Chrome-style extensions entirely, so I don’t have satisfying Vim-like controls in my browser anymore. + +### Media Players + +[mpv][65] with hardware decoding allows watching videos at 5% CPU load using the `vo=gpu` and `hwdec=vaapi` config settings. `audio-channels=2` in mpv seems to give me clearer downmixing to my stereo speakers / headphones than what PulseAudio does by default. A great little feature is exiting with `Shift-Q` instead of just `Q` to save the playback location. When watching with someone with another native tongue you can use `--secondary-sid=` to show two subtitles at once, the primary at the bottom, the secondary at the top of the screen + +My wirelss mouse can easily be made into a remote control with mpv with a small `~/.config/mpv/input.conf`: + +``` +MOUSE_BTN5 run "mixer" "pcm" "-2" +MOUSE_BTN6 run "mixer" "pcm" "+2" +MOUSE_BTN1 cycle sub-visibility +MOUSE_BTN7 add chapter -1 +MOUSE_BTN8 add chapter 1 +``` + +[youtube-dl][66] works great for watching videos hosted online, best quality can be achieved with `-f bestvideo+bestaudio/best --all-subs --embed-subs`. + +As a music player [MOC][67] hasn’t been actively developed for a while, but it’s still a simple player that plays every format conceivable, including the strangest Chiptune formats. In the AUR there is a [patch][68] adding PulseAudio support as well. Even with the CPU clocked down to 800 MHz MOC barely uses 1-2% of a single CPU core. + +![moc][69] + +My music collection sits on my home server so that I can access it from anywhere. It is mounted using [SSHFS][70] and automount in the `/etc/fstab/`: + +``` +root@server:/media/media /mnt/media fuse.sshfs noauto,x-systemd.automount,idmap=user,IdentityFile=/root/.ssh/id_rsa,allow_other,reconnect 0 0 +``` + +### Cross-Platform Building + +Linux is great to build packages for any major operating system except Linux itself! In the beginning I used [QEMU][71] to with an old Debian, Windows and Mac OS X VM to build for these platforms. + +Nowadays I switched to using chroot for the old Debian distribution (for maximum Linux compatibility), [MinGW][72] to cross-compile for Windows and [OSXCross][73] to cross-compile for Mac OS X. + +The script used to [build DDNet][74] as well as the [instructions for updating library builds][75] are based on this. + +### Backups + +As usual, we nearly forgot about backups. Even if this is the last chapter, it should not be an afterthought. + +I wrote [rrb][76] (reverse rsync backup) 10 years ago to wrap rsync so that I only need to give the backup server root SSH rights to the computers that it is backing up. Surprisingly rrb needed 0 changes in the last 10 years, even though I kept using it the entire time. + +The backups are stored straight on the filesystem. Incremental backups are implemented using hard links (`--link-dest`). 
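Under the hood this is the classic rsync hard-link pattern; a simplified version of the kind of command involved (not the exact invocation rrb builds) looks like this:

```
# Pull a new snapshot from the client; files unchanged since yesterday's
# snapshot are hard-linked against it, so they consume no extra space.
rsync -a --delete --link-dest=../2019-01-14 root@client:/home/ /backups/client/2019-01-15/
```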
A simple [config][77] defines how long backups are kept, which defaults to: + +``` +KEEP_RULES=( \ + 7 7 \ # One backup a day for the last 7 days + 31 8 \ # 8 more backups for the last month + 365 11 \ # 11 more backups for the last year +1825 4 \ # 4 more backups for the last 5 years +) +``` + +Since some of my computers don’t have a static IP / DNS entry and I still want to back them up using rrb I use a reverse SSH tunnel (as a systemd service) for them: + +``` +[Unit] +Description=Reverse SSH Tunnel +After=network.target + +[Service] +ExecStart=/usr/bin/ssh -N -R 27276:localhost:22 -o "ExitOnForwardFailure yes" server +KillMode=process +Restart=always + +[Install] +WantedBy=multi-user.target +``` + +Now the server can reach the client through `ssh -p 27276 localhost` while the tunnel is running to perform the backup, or in `.ssh/config` format: + +``` +Host cr-remote + HostName localhost + Port 27276 +``` + +While talking about SSH hacks, sometimes a server is not easily reachable thanks to some bad routing. In that case you can route the SSH connection through another server to get better routing, in this case going through the USA to reach my Chinese server which had not been reliably reachable from Germany for a few weeks: + +``` +Host chn.ddnet.tw + ProxyCommand ssh -q usa.ddnet.tw nc -q0 chn.ddnet.tw 22 + Port 22 +``` + +### Final Remarks + +Thanks for reading my random collection of tools. I probably forgot many programs that I use so naturally every day that I don’t even think about them anymore. Let’s see how stable my software setup stays in the next years. If you have any questions, feel free to get in touch with me at [dennis@felsin9.de][78]. + +Comments on [Hacker News][79]. + +-------------------------------------------------------------------------------- + +via: https://hookrace.net/blog/linux-desktop-setup/ + +作者:[Dennis Felsing][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://felsin9.de/nnis/ +[b]: https://github.com/lujun9972 +[1]: https://hookrace.net/public/linux-desktop/htop_small.png +[2]: https://hookrace.net/public/linux-desktop/htop.png +[3]: https://gentoo.org/ +[4]: https://www.archlinux.org/ +[5]: https://www.archlinux.org/news/ +[6]: https://www.reddit.com/r/archlinux/comments/4zrsc3/keep_your_system_fully_functional_after_a_kernel/ +[7]: https://www.suse.com/ +[8]: https://ddnet.tw/ +[9]: https://www.debian.org/ +[10]: https://www.openbsd.org/ +[11]: https://www.openbsd.org/faq/pf/ +[12]: http://openbox.org/wiki/Main_Page +[13]: http://fluxbox.org/ +[14]: https://dwm.suckless.org/ +[15]: https://awesomewm.org/ +[16]: https://xmonad.org/ +[17]: https://www.haskell.org/ +[18]: http://hackage.haskell.org/package/xmonad-contrib-0.15/docs/XMonad-Layout-LayoutScreens.html +[19]: http://robm.github.io/dzen/ +[20]: https://github.com/brndnmtthws/conky +[21]: https://hookrace.net/public/linux-desktop/laptop_small.png +[22]: https://hookrace.net/public/linux-desktop/laptop.png +[23]: https://i3wm.org/ +[24]: https://github.com/tmux/tmux/wiki +[25]: http://dtach.sourceforge.net/ +[26]: https://github.com/def-/tach/blob/master/tach +[27]: https://www.gnu.org/software/bash/ +[28]: http://www.zsh.org/ +[29]: http://software.schmorp.de/pkg/rxvt-unicode.html +[30]: http://terminus-font.sourceforge.net/ +[31]: https://www.vim.org/ +[32]: http://ctags.sourceforge.net/ +[33]: https://www.python.org/ +[34]: 
https://nim-lang.org/ +[35]: https://hisham.hm/htop/ +[36]: http://lm-sensors.org/ +[37]: https://01.org/powertop/ +[38]: https://dev.yorhel.nl/ncdu +[39]: https://nmap.org/ +[40]: https://www.tcpdump.org/ +[41]: https://www.wireshark.org/ +[42]: http://www.fetchmail.info/ +[43]: http://www.procmail.org/ +[44]: https://marlam.de/msmtp/ +[45]: https://www.cis.upenn.edu/~bcpierce/unison/ +[46]: https://rsync.samba.org/ +[47]: http://www.mutt.org/ +[48]: https://newsboat.org/ +[49]: https://irssi.org/ +[50]: https://www.roaringpenguin.com/products/remind +[51]: https://hookrace.net/public/linux-desktop/remcal.png +[52]: https://github.com/tsdh/rdictcc +[53]: https://www.dict.cc/ +[54]: https://www.latex-project.org/ +[55]: http://felsin9.de/nnis/research/ +[56]: https://pwmt.org/projects/zathura/ +[57]: https://wiki.gnome.org/Apps/Evince +[58]: https://www.gimp.org/ +[59]: https://inkscape.org/ +[60]: https://imagemagick.org/Usage/ +[61]: https://www.graphviz.org/ +[62]: https://sourceforge.net/projects/pgf/ +[63]: https://www.mozilla.org/en-US/firefox/new/ +[64]: https://github.com/5digits/dactyl +[65]: https://mpv.io/ +[66]: https://rg3.github.io/youtube-dl/ +[67]: http://moc.daper.net/ +[68]: https://aur.archlinux.org/packages/moc-pulse/ +[69]: https://hookrace.net/public/linux-desktop/moc.png +[70]: https://github.com/libfuse/sshfs +[71]: https://www.qemu.org/ +[72]: http://www.mingw.org/ +[73]: https://github.com/tpoechtrager/osxcross +[74]: https://github.com/ddnet/ddnet-scripts/blob/master/ddnet-release.sh +[75]: https://github.com/ddnet/ddnet-scripts/blob/master/ddnet-lib-update.sh +[76]: https://github.com/def-/rrb/blob/master/rrb +[77]: https://github.com/def-/rrb/blob/master/config.example +[78]: mailto:dennis@felsin9.de +[79]: https://news.ycombinator.com/item?id=18979731 diff --git a/sources/tech/20190116 Best Audio Editors For Linux.md b/sources/tech/20190116 Best Audio Editors For Linux.md new file mode 100644 index 0000000000..d588c886e2 --- /dev/null +++ b/sources/tech/20190116 Best Audio Editors For Linux.md @@ -0,0 +1,156 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Best Audio Editors For Linux) +[#]: via: (https://itsfoss.com/best-audio-editors-linux) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Best Audio Editors For Linux +====== + +You’ve got a lot of choices when it comes to audio editors for Linux. No matter whether you are a professional music producer or just learning to create awesome music, the audio editors will always come in handy. + +Well, for professional-grade usage, a [DAW][1] (Digital Audio Workstation) is always recommended. However, not everyone needs all the functionalities, so you should know about some of the most simple audio editors as well. + +In this article, we will talk about a couple of DAWs and basic audio editors which are available as **free and open source** solutions for Linux and (probably) for other operating systems. + +### Top Audio Editors for Linux + +![Best audio editors and DAW for Linux][2] + +We will not be focusing on all the functionalities that DAWs offer – but the basic audio editing capabilities. You may still consider this as the list of best DAW for Linux. + +**Installation instruction:** You will find all the mentioned audio editors or DAWs in your AppCenter or Software center. In case, you do not find them listed, please head to their official website for more information. + +#### 1\. 
Audacity + +![audacity audio editor][3] + +Audacity is one of the most basic yet a capable audio editor available for Linux. It is a free and open-source cross-platform tool. A lot of you must be already knowing about it. + +It has improved a lot when compared to the time when it started trending. I do recall that I utilized it to “try” making karaokes by removing the voice from an audio file. Well, you can still do it – but it depends. + +**Features:** + +It also supports plug-ins that include VST effects. Of course, you should not expect it to support VST Instruments. + + * Live audio recording through a microphone or a mixer + * Export/Import capability supporting multiple formats and multiple files at the same time + * Plugin support: LADSPA, LV2, Nyquist, VST and Audio Unit effect plug-ins + * Easy editing with cut, paste, delete and copy functions. + * Spectogram view mode for analyzing frequencies + + + +#### 2\. LMMS + +![][4] + +LMMS is a free and open source (cross-platform) digital audio workstation. It includes all the basic audio editing functionalities along with a lot of advanced features. + +You can mix sounds, arrange them, or create them using VST instruments. It does support them. Also, it comes baked in with some samples, presets, VST Instruments, and effects to get started. In addition, you also get a spectrum analyzer for some advanced audio editing. + +**Features:** + + * Note playback via MIDI + * VST Instrument support + * Native multi-sample support + * Built-in compressor, limiter, delay, reverb, distortion and bass enhancer + + + +#### 3\. Ardour + +![Ardour audio editor][5] + +Ardour is yet another free and open source digital audio workstation. If you have an audio interface, Ardour will support it. Of course, you can add unlimited multichannel tracks. The multichannel tracks can also be routed to different mixer tapes for the ease of editing and recording. + +You can also import a video to it and edit the audio to export the whole thing. It comes with a lot of built-in plugins and supports VST plugins as well. + +**Features:** + + * Non-linear editing + * Vertical window stacking for easy navigation + * Strip silence, push-pull trimming, Rhythm Ferret for transient and note onset-based editing + + + +#### 4\. Cecilia + +![cecilia audio editor][6] + +Cecilia is not an ordinary audio editor application. It is meant to be used by sound designers or if you are just in the process of becoming one. It is technically an audio signal processing environment. It lets you create ear-bending sound out of them. + +You get in-build modules and plugins for sound effects and synthesis. It is tailored for a specific use – if that is what you were looking for – look no further! + +**Features:** + + * Modules to achieve more (UltimateGrainer – A state-of-the-art granulation processing, RandomAccumulator – Variable speed recording accumulator, +UpDistoRes – Distortion with upsampling and resonant lowpass filter) + * Automatic Saving of modulations + + + +#### 5\. Mixxx + +![Mixxx audio DJ ][7] + +If you want to mix and record something while being able to have a virtual DJ tool, [Mixxx][8] would be a perfect tool. You get to know the BPM, key, and utilize the master sync feature to match the tempo and beats of a song. Also, do not forget that it is yet another free and open source application for Linux! + +It supports custom DJ equipment as well. So, if you have one or a MIDI – you can record your live mixes using this tool. 
+ +**Features** + + * Broadcast and record DJ Mixes of your song + * Ability to connect your equipment and perform live + * Key detection and BPM detection + + + +#### 6\. Rosegarden + +![rosegarden audio editor][9] + +Rosegarden is yet another impressive audio editor for Linux which is free and open source. It is neither a fully featured DAW nor a basic audio editing tool. It is a mixture of both with some scaled down functionalities. + +I wouldn’t recommend this for professionals but if you have a home studio or just want to experiment, this would be one of the best audio editors for Linux to have installed. + +**Features:** + + * Music notation editing + * Recording, Mixing, and samples + + + +### Wrapping Up + +These are some of the best audio editors you could find out there for Linux. No matter whether you need a DAW, a cut-paste editing tool, or a basic mixing/recording audio editor, the above-mentioned tools should help you out. + +Did we miss any of your favorite? Let us know about it in the comments below. + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-audio-editors-linux + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Digital_audio_workstation +[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/linux-audio-editors-800x450.jpeg?resize=800%2C450&ssl=1 +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/audacity-audio-editor.jpg?fit=800%2C591&ssl=1 +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/lmms-daw.jpg?fit=800%2C472&ssl=1 +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/ardour-audio-editor.jpg?fit=800%2C639&ssl=1 +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/cecilia.jpg?fit=800%2C510&ssl=1 +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/mixxx.jpg?fit=800%2C486&ssl=1 +[8]: https://itsfoss.com/dj-mixxx-2/ +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/rosegarden.jpg?fit=800%2C391&ssl=1 +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/linux-audio-editors.jpeg?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190116 GameHub - An Unified Library To Put All Games Under One Roof.md b/sources/tech/20190116 GameHub - An Unified Library To Put All Games Under One Roof.md new file mode 100644 index 0000000000..bdaae74b43 --- /dev/null +++ b/sources/tech/20190116 GameHub - An Unified Library To Put All Games Under One Roof.md @@ -0,0 +1,139 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (GameHub – An Unified Library To Put All Games Under One Roof) +[#]: via: (https://www.ostechnix.com/gamehub-an-unified-library-to-put-all-games-under-one-roof/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +GameHub – An Unified Library To Put All Games Under One Roof +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/gamehub-720x340.png) + +**GameHub** is an unified gaming library that allows you to view, install, run and remove games on GNU/Linux operating system. It supports both native and non-native games from various sources including Steam, GOG, Humble Bundle, and Humble Trove etc. 
The non-native games are supported by [Wine][1], Proton, [DOSBox][2], ScummVM and RetroArch. It also allows you to add custom emulators and download bonus content and DLCs for GOG games. Simply put, Gamehub is a frontend for Steam/GoG/Humblebundle/Retroarch. It can use steam technologies like Proton to run windows gog games. GameHub is free, open source gaming platform written in **Vala** using **GTK+3**. If you’re looking for a way to manage all games under one roof, GameHub might be a good choice. + +### Installing GameHub + +The author of GameHub has designed it specifically for elementary OS. So, you can install it on Debian, Ubuntu, elementary OS and other Ubuntu-derivatives using GameHub PPA. + +``` +$ sudo apt install --no-install-recommends software-properties-common +$ sudo add-apt-repository ppa:tkashkin/gamehub +$ sudo apt update +$ sudo apt install com.github.tkashkin.gamehub +``` + +GameHub is available in [**AUR**][3], so just install it on Arch Linux and its variants using any AUR helpers, for example [**YaY**][4]. + +``` +$ yay -S gamehub-git +``` + +It is also available as **AppImage** and **Flatpak** packages in [**releases page**][5]. + +If you prefer AppImage package, do the following: + +``` +$ wget https://github.com/tkashkin/GameHub/releases/download/0.12.1-91-dev/GameHub-bionic-0.12.1-91-dev-cd55bb5-x86_64.AppImage -O gamehub +``` + +Make it executable: + +``` +$ chmod +x gamehub +``` + +And, run GameHub using command: + +``` +$ ./gamehub +``` + +If you want to use Flatpak installer, run the following commands one by one. + +``` +$ git clone https://github.com/tkashkin/GameHub.git +$ cd GameHub +$ scripts/build.sh build_flatpak +``` + +### Put All Games Under One Roof + +Launch GameHub from menu or application launcher. At first launch, you will see the following welcome screen. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/gamehub1.png) + +As you can see in the above screenshot, you need to login to the given sources namely Steam, GoG or Humble Bundle. If you don’t have Steam client on your Linux system, you need to install it first to access your steam account. For GoG and Humble bundle sources, click on the icon to log in to the respective source. + +Once you logged in to your account(s), all games from the all sources can be visible on GameHub dashboard. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/gamehub2.png) + +You will see list of logged-in sources on the top left corner. To view the games from each source, just click on the respective icon. + +You can also switch between list view or grid view, sort the games by applying the filters and search games from the list in GameHub dashboard. + +#### Installing a game + +Click on the game of your choice from the list and click Install button. If the game is non-native, GameHub will automatically choose the compatibility layer (E.g Wine) that suits to run the game and install the selected game. As you see in the below screenshot, Indiana Jones game is not available for Linux platform. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/gamehub3-1.png) + +If it is a native game (i.e supports Linux), simply press the Install button. + +![][7] + +If you don’t want to install the game, just hit the **Download** button to save it in your games directory. It is also possible to add locally installed games to GameHub using the **Import** option. 
+ +![](https://www.ostechnix.com/wp-content/uploads/2019/01/gamehub5.png) + +#### GameHub Settings + +GameHub Settings window can be launched by clicking on the four straight lines on top right corner. + +From Settings section, we can enable, disable and set various settings such as, + + * Switch between light/dark themes. + * Use Symbolic icons instead of colored icons for games. + * Switch to compact list. + * Enable/disable merging games from different sources. + * Enable/disable compatibility layers. + * Set games collection directory. The default directory for storing the collection is **$HOME/Games/_Collection**. + * Set games directories for each source. + * Add/remove emulators, + * And many. + + + +For more details, refer the project links given at the end of this guide. + +**Related read:** + +And, that’s all for now. Hope this helps. I will be soon here with another guide. Until then, stay tuned with OSTechNix. + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/gamehub-an-unified-library-to-put-all-games-under-one-roof/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/run-windows-games-softwares-ubuntu-16-04/ +[2]: https://www.ostechnix.com/how-to-run-ms-dos-games-and-programs-in-linux/ +[3]: https://aur.archlinux.org/packages/gamehub-git/ +[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[5]: https://github.com/tkashkin/GameHub/releases +[6]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[7]: http://www.ostechnix.com/wp-content/uploads/2019/01/gamehub4.png diff --git a/sources/tech/20190116 Zipping files on Linux- the many variations and how to use them.md b/sources/tech/20190116 Zipping files on Linux- the many variations and how to use them.md new file mode 100644 index 0000000000..fb98f78b06 --- /dev/null +++ b/sources/tech/20190116 Zipping files on Linux- the many variations and how to use them.md @@ -0,0 +1,324 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Zipping files on Linux: the many variations and how to use them) +[#]: via: (https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-the-many-variations-and-how-to-use-them.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +Zipping files on Linux: the many variations and how to use them +====== +![](https://images.idgesg.net/images/article/2019/01/zipper-100785364-large.jpg) + +Some of us have been zipping files on Unix and Linux systems for many decades — to save some disk space and package files together for archiving. Even so, there are some interesting variations on zipping that not all of us have tried. So, in this post, we’re going to look at standard zipping and unzipping as well as some other interesting zipping options. + +### The basic zip command + +First, let’s look at the basic **zip** command. It uses what is essentially the same compression algorithm as **gzip** , but there are a couple important differences. For one thing, the gzip command is used only for compressing a single file where zip can both compress files and join them together into an archive. 
For another, the gzip command zips “in place”. In other words, it leaves a compressed file — not the original file alongside the compressed copy. Here's an example of gzip at work: + +``` +$ gzip onefile +$ ls -l +-rw-rw-r-- 1 shs shs 10514 Jan 15 13:13 onefile.gz +``` + +And here's zip. Notice how this command requires that a name be provided for the zipped archive where gzip simply uses the original file name and adds the .gz extension. + +``` +$ zip twofiles.zip file* + adding: file1 (deflated 82%) + adding: file2 (deflated 82%) +$ ls -l +-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1 +-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2 +-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip +``` + +Notice also that the original files are still sitting there. + +The amount of disk space that is saved (i.e., the degree of compression obtained) will depend on the content of each file. The variation in the example below is considerable. + +``` +$ zip mybin.zip ~/bin/* + adding: bin/1 (deflated 26%) + adding: bin/append (deflated 64%) + adding: bin/BoD_meeting (deflated 18%) + adding: bin/cpuhog1 (deflated 14%) + adding: bin/cpuhog2 (stored 0%) + adding: bin/ff (deflated 32%) + adding: bin/file.0 (deflated 1%) + adding: bin/loop (deflated 14%) + adding: bin/notes (deflated 23%) + adding: bin/patterns (stored 0%) + adding: bin/runme (stored 0%) + adding: bin/tryme (deflated 13%) + adding: bin/tt (deflated 6%) +``` + +### The unzip command + +The **unzip** command will recover the contents from a zip file and, as you'd likely suspect, leave the zip file intact, whereas a similar gunzip command would leave only the uncompressed file. + +``` +$ unzip twofiles.zip +Archive: twofiles.zip + inflating: file1 + inflating: file2 +$ ls -l +-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1 +-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2 +-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip +``` + +### The zipcloak command + +The **zipcloak** command encrypts a zip file, prompting you to enter a password twice (to help ensure you don't "fat finger" it) and leaves the file in place. You can expect the file size to vary a little from the original. + +``` +$ zipcloak twofiles.zip +Enter password: +Verify password: +encrypting: file1 +encrypting: file2 +$ ls -l +total 204 +-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1 +-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2 +-rw-rw-r-- 1 shs shs 21313 Jan 15 13:46 twofiles.zip <== slightly larger than + unencrypted version +``` + +Keep in mind that the original files are still sitting there unencrypted. + +### The zipdetails command + +The **zipdetails** command is going to show you details — a _lot_ of details about a zipped file, likely a lot more than you care to absorb. Even though we're looking at an encrypted file, zipdetails does display the file names along with file modification dates, user and group information, file length data, etc. Keep in mind that this is all "metadata." We don't see the contents of the files. 
+ +``` +$ zipdetails twofiles.zip + +0000 LOCAL HEADER #1 04034B50 +0004 Extract Zip Spec 14 '2.0' +0005 Extract OS 00 'MS-DOS' +0006 General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +0008 Compression Method 0008 'Deflated' +000A Last Mod Time 4E2F6B24 'Tue Jan 15 13:25:08 2019' +000E CRC F1B115BD +0012 Compressed Length 00002904 +0016 Uncompressed Length 0000E2A5 +001A Filename Length 0005 +001C Extra Length 001C +001E Filename 'file1' +0023 Extra ID #0001 5455 'UT: Extended Timestamp' +0025 Length 0009 +0027 Flags '03 mod access' +0028 Mod Time 5C3E2584 'Tue Jan 15 13:25:08 2019' +002C Access Time 5C3E27BB 'Tue Jan 15 13:34:35 2019' +0030 Extra ID #0002 7875 'ux: Unix Extra Type 3' +0032 Length 000B +0034 Version 01 +0035 UID Size 04 +0036 UID 000003E8 +003A GID Size 04 +003B GID 000003E8 +003F PAYLOAD + +2943 LOCAL HEADER #2 04034B50 +2947 Extract Zip Spec 14 '2.0' +2948 Extract OS 00 'MS-DOS' +2949 General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +294B Compression Method 0008 'Deflated' +294D Last Mod Time 4E2F6C56 'Tue Jan 15 13:34:44 2019' +2951 CRC EC214569 +2955 Compressed Length 00002913 +2959 Uncompressed Length 0000E635 +295D Filename Length 0005 +295F Extra Length 001C +2961 Filename 'file2' +2966 Extra ID #0001 5455 'UT: Extended Timestamp' +2968 Length 0009 +296A Flags '03 mod access' +296B Mod Time 5C3E27C4 'Tue Jan 15 13:34:44 2019' +296F Access Time 5C3E27BD 'Tue Jan 15 13:34:37 2019' +2973 Extra ID #0002 7875 'ux: Unix Extra Type 3' +2975 Length 000B +2977 Version 01 +2978 UID Size 04 +2979 UID 000003E8 +297D GID Size 04 +297E GID 000003E8 +2982 PAYLOAD + +5295 CENTRAL HEADER #1 02014B50 +5299 Created Zip Spec 1E '3.0' +529A Created OS 03 'Unix' +529B Extract Zip Spec 14 '2.0' +529C Extract OS 00 'MS-DOS' +529D General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +529F Compression Method 0008 'Deflated' +52A1 Last Mod Time 4E2F6B24 'Tue Jan 15 13:25:08 2019' +52A5 CRC F1B115BD +52A9 Compressed Length 00002904 +52AD Uncompressed Length 0000E2A5 +52B1 Filename Length 0005 +52B3 Extra Length 0018 +52B5 Comment Length 0000 +52B7 Disk Start 0000 +52B9 Int File Attributes 0001 + [Bit 0] 1 Text Data +52BB Ext File Attributes 81B40000 +52BF Local Header Offset 00000000 +52C3 Filename 'file1' +52C8 Extra ID #0001 5455 'UT: Extended Timestamp' +52CA Length 0005 +52CC Flags '03 mod access' +52CD Mod Time 5C3E2584 'Tue Jan 15 13:25:08 2019' +52D1 Extra ID #0002 7875 'ux: Unix Extra Type 3' +52D3 Length 000B +52D5 Version 01 +52D6 UID Size 04 +52D7 UID 000003E8 +52DB GID Size 04 +52DC GID 000003E8 + +52E0 CENTRAL HEADER #2 02014B50 +52E4 Created Zip Spec 1E '3.0' +52E5 Created OS 03 'Unix' +52E6 Extract Zip Spec 14 '2.0' +52E7 Extract OS 00 'MS-DOS' +52E8 General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +52EA Compression Method 0008 'Deflated' +52EC Last Mod Time 4E2F6C56 'Tue Jan 15 13:34:44 2019' +52F0 CRC EC214569 +52F4 Compressed Length 00002913 +52F8 Uncompressed Length 0000E635 +52FC Filename Length 0005 +52FE Extra Length 0018 +5300 Comment Length 0000 +5302 Disk Start 0000 +5304 Int File Attributes 0001 + [Bit 0] 1 Text Data +5306 Ext File Attributes 81B40000 +530A Local Header Offset 00002943 +530E Filename 'file2' +5313 Extra ID #0001 5455 'UT: Extended Timestamp' +5315 Length 0005 +5317 Flags '03 mod access' +5318 Mod Time 5C3E27C4 'Tue Jan 15 13:34:44 2019' +531C Extra ID #0002 7875 'ux: Unix Extra Type 3' +531E Length 000B 
+5320 Version 01 +5321 UID Size 04 +5322 UID 000003E8 +5326 GID Size 04 +5327 GID 000003E8 + +532B END CENTRAL HEADER 06054B50 +532F Number of this disk 0000 +5331 Central Dir Disk no 0000 +5333 Entries in this disk 0002 +5335 Total Entries 0002 +5337 Size of Central Dir 00000096 +533B Offset to Central Dir 00005295 +533F Comment Length 0000 +Done +``` + +### The zipgrep command + +The **zipgrep** command is going to use a grep-type feature to locate particular content in your zipped files. If the file is encrypted, you will need to enter the password provided for the encryption for each file you want to examine. If you only want to check the contents of a single file from the archive, add its name to the end of the zipgrep command as shown below. + +``` +$ zipgrep hazard twofiles.zip file1 +[twofiles.zip] file1 password: +Certain pesticides should be banned since they are hazardous to the environment. +``` + +### The zipinfo command + +The **zipinfo** command provides information on the contents of a zipped file whether encrypted or not. This includes the file names, sizes, dates and permissions. + +``` +$ zipinfo twofiles.zip +Archive: twofiles.zip +Zip file size: 21313 bytes, number of entries: 2 +-rw-rw-r-- 3.0 unx 58021 Tx defN 19-Jan-15 13:25 file1 +-rw-rw-r-- 3.0 unx 58933 Tx defN 19-Jan-15 13:34 file2 +2 files, 116954 bytes uncompressed, 20991 bytes compressed: 82.1% +``` + +### The zipnote command + +The **zipnote** command can be used to extract comments from zip archives or add them. To display comments, just preface the name of the archive with the command. If no comments have been added previously, you will see something like this: + +``` +$ zipnote twofiles.zip +@ file1 +@ (comment above this line) +@ file2 +@ (comment above this line) +@ (zip file comment below this line) +``` + +If you want to add comments, write the output from the zipnote command to a file: + +``` +$ zipnote twofiles.zip > comments +``` + +Next, edit the file you've just created, inserting your comments above the **(comment above this line)** lines. Then add the comments using a zipnote command like this one: + +``` +$ zipnote -w twofiles.zip < comments +``` + +### The zipsplit command + +The **zipsplit** command can be used to break a zip archive into multiple zip archives when the original file is too large — maybe because you're trying to add one of the files to a small thumb drive. The easiest way to do this seems to be to specify the max size for each of the zipped file portions. This size must be large enough to accomodate the largest included file. + +``` +$ zipsplit -n 12000 twofiles.zip +2 zip files will be made (100% efficiency) +creating: twofile1.zip +creating: twofile2.zip +$ ls twofile*.zip +-rw-rw-r-- 1 shs shs 10697 Jan 15 14:52 twofile1.zip +-rw-rw-r-- 1 shs shs 10702 Jan 15 14:52 twofile2.zip +-rw-rw-r-- 1 shs shs 21377 Jan 15 14:27 twofiles.zip +``` + +Notice how the extracted files are sequentially named "twofile1" and "twofile2". + +### Wrap-up + +The **zip** command, along with some of its zipping compatriots, provide a lot of control over how you generate and work with compressed file archives. + +**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][1] ]** + +Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind. 
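+
+As a final recap of how the commands covered above can work together, here is one possible sequence rather than a prescription: archive a directory, password-protect it, inspect it, then split it for transport. The archive and directory names are placeholders, and the only option not shown earlier is zip's -r flag, which recurses into directories.
+
+```
+$ zip -r project.zip project/        # bundle a whole directory tree
+$ zipcloak project.zip               # add password protection
+$ zipinfo project.zip                # confirm contents, sizes and dates
+$ zipsplit -n 10000000 project.zip   # split into roughly 10MB pieces if needed
+```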
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-the-many-variations-and-how-to-use-them.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190117 Pyvoc - A Command line Dictionary And Vocabulary Building Tool.md b/sources/tech/20190117 Pyvoc - A Command line Dictionary And Vocabulary Building Tool.md new file mode 100644 index 0000000000..b0aa45d618 --- /dev/null +++ b/sources/tech/20190117 Pyvoc - A Command line Dictionary And Vocabulary Building Tool.md @@ -0,0 +1,239 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Pyvoc – A Command line Dictionary And Vocabulary Building Tool) +[#]: via: (https://www.ostechnix.com/pyvoc-a-command-line-dictionary-and-vocabulary-building-tool/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Pyvoc – A Command line Dictionary And Vocabulary Building Tool +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/pyvoc-720x340.jpg) + +Howdy! I have a good news for non-native English speakers. Now, you can improve your English vocabulary and find the meaning of English words, right from your Terminal. Say hello to **Pyvoc** , a cross-platform, open source, command line dictionary and vocabulary building tool written in **Python** programming language. Using this tool, you can brush up some English words meanings, test or improve your vocabulary skill or simply use it as a CLI dictionary on Unix-like operating systems. + +### Installing Pyvoc + +Since Pyvoc is written using Python language, you can install it using [**Pip3**][1] package manager. + +``` +$ pip3 install pyvoc +``` + +Once installed, run the following command to automatically create necessary configuration files in your $HOME directory. + +``` +$ pyvoc word +``` + +Sample output: + +``` +|Creating necessary config files +/getting api keys. please handle with care! +| + +word +Noun: single meaningful element of speech or writing +example: I don't like the word ‘unofficial’ + +Verb: express something spoken or written +example: he words his request in a particularly ironic way + +Interjection: used to express agreement or affirmation +example: Word, that's a good record, man +``` + +Done! Let us go ahead and brush the English skills. + +### Use Pyvoc as a command line Dictionary tool + +Pyvoc fetches the word meaning from **Oxford Dictionary API**. + +Let us say, you want to find the meaning of a word **‘digression’**. To do so, run: + +``` +$ pyvoc digression +``` + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/pyvoc1.png) + +See? Pyvoc not only displays the meaning of word **‘digression’** , but also an example sentence which shows how to use that word in practical. + +Let us see an another example. 
+ +``` +$ pyvoc subterfuge +| + +subterfuge +Noun: deceit used in order to achieve one's goal +example: he had to use subterfuge and bluff on many occasions +``` + +It also shows the word classes as well. As you already know, English has four major **word classes** : + + 1. Nouns, + + 2. Verbs, + + 3. Adjectives, + + 4. Adverbs. + + + + +Take a look at the following example. + +``` +$ pyvoc welcome + / + +welcome +Noun: instance or manner of greeting someone +example: you will receive a warm welcome + +Interjection: used to greet someone in polite or friendly way +example: welcome to the Wildlife Park + +Verb: greet someone arriving in polite or friendly way +example: hotels should welcome guests in their own language + +Adjective: gladly received +example: I'm pleased to see you, lad—you're welcome +``` + +As you see in the above output, the word ‘welcome’ can be used as a verb, noun, adjective and interjection. Pyvoc has given example for each class. + +If you misspell a word, it will inform you to check the spelling of the given word. + +``` +$ pyvoc wlecome +\ +No definition found. Please check the spelling!! +``` + +Useful, isn’t it? + +### Create vocabulary groups + +A vocabulary group is nothing but a collection words added by the user. You can later revise or take quiz from these groups. 100 groups of 60 words are **reserved** for the user. + +To add a word (E.g **sporadic** ) to a group, just run: + +``` +$ pyvoc sporadic -a +- + +sporadic +Adjective: occurring at irregular intervals or only in few places +example: sporadic fighting broke out + + +writing to vocabulary group... +word added to group number 51 +``` + +As you can see, I didn’t provide any group number and pyvoc displayed the meaning of given word and automatically added that word to group number **51**. If you don’t provide the group number, Pyvoc will **incrementally add words** to groups **51-100**. + +Pyvoc also allows you to specify a group number if you want to. You can specify a group from 1-50 using **-g** option. For example, I am going to add a word to Vocabulary group 20 using the following command. + +``` +$ pyvoc discrete -a -g 20 + / + +discrete +Adjective: individually separate and distinct +example: speech sounds are produced as a continuous sound signal rather + than discrete units + +creating group Number 20... +writing to vocabulary group... +word added to group number 20 +``` + +See? The above command displays the meaning of ‘discrete’ word and adds it to the vocabulary group 20. If the group doesn’t exists, Pyvoc will create it and add the word. + +By default, Pyvoc includes three predefined vocabulary groups (101, 102, and 103). These custom groups has 800 words of each. All words in these groups are taken from **GRE** and **SAT** preparation websites. + +To view the user-generated groups, simply run: + +``` +$ pyvoc word -l + - + +word +Noun: single meaningful element of speech or writing +example: I don't like the word ‘unofficial’ + +Verb: express something spoken or written +example: he words his request in a particularly ironic way + +Interjection: used to express agreement or affirmation +example: Word, that's a good record, man + + +USER GROUPS +Group no. No. of words +20 1 + +DEFAULT GROUP +Group no. No. of words +51 1 +``` +``` + +``` + +As you see, I have created one group (20) including the default group (51). + +### Test and improve English vocabulary + +As I already said, you can use the Vocabulary groups to revise or take quiz from them. + +For instance, to revise the group no. 
**101** , use **-r** option like below. + +``` +$ pyvoc 101 -r +``` + +You can now revise the meaning of all words in the Vocabulary group 101 in random order. Just hit ENTER to go through next questions. Once done, hit **CTRL+C** to exit. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/pyvoc2-1.png) + +Also, you take quiz from the existing groups to brush up your vocabulary. To do so, use **-q** option like below. + +``` +$ pyvoc 103 -q 50 +``` + +This command allows you to take quiz of 50 questions from vocabulary group 103. Choose the correct answer from the list by entering the appropriate number. You will get 1 point for every correct answer. The more you score the more your vocabulary skill will be. + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/pyvoc3.png) + +Pyvoc is in the early-development stage. I hope the developer will improve it and add more features in the days to come. + +As a non-native English speaker, I personally find it useful to test and learn new word meanings in my free time. If you’re a heavy command line user and wanted to quickly check the meaning of a word, Pyvoc is the right tool. You can also test your English Vocabulary at your free time to memorize and improve your English language skill. Give it a try. You won’t be disappointed. + +And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned! + +Cheers! + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/pyvoc-a-command-line-dictionary-and-vocabulary-building-tool/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/manage-python-packages-using-pip/ diff --git a/sources/tech/20190121 How to Resize OpenStack Instance (Virtual Machine) from Command line.md b/sources/tech/20190121 How to Resize OpenStack Instance (Virtual Machine) from Command line.md new file mode 100644 index 0000000000..e235cabdbf --- /dev/null +++ b/sources/tech/20190121 How to Resize OpenStack Instance (Virtual Machine) from Command line.md @@ -0,0 +1,149 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Resize OpenStack Instance (Virtual Machine) from Command line) +[#]: via: (https://www.linuxtechi.com/resize-openstack-instance-command-line/) +[#]: author: (Pradeep Kumar http://www.linuxtechi.com/author/pradeep/) + +How to Resize OpenStack Instance (Virtual Machine) from Command line +====== + +Being a Cloud administrator, resizing or changing resources of an instance or virtual machine is one of the most common tasks. + +![](https://www.linuxtechi.com/wp-content/uploads/2019/01/Resize-openstack-instance.jpg) + +In Openstack environment, there are some scenarios where cloud user has spin a vm using some flavor( like m1.smalll) where root partition disk size is 20 GB, but at some point of time user wants to extends the root partition size to 40 GB. So resizing of vm’s root partition can be accomplished by using the resize option in nova command. During the resize, we need to specify the new flavor that will include disk size as 40 GB. + +**Note:** Once you extend the instance resources like RAM, CPU and disk using resize option in openstack then you can’t reduce it. 
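+
+Because of that one-way constraint, it is worth double-checking what the target flavor actually provides before you grow an instance. The flavor show subcommand below is standard openstackclient usage rather than anything specific to this setup, and the flavor name will differ from cloud to cloud:
+
+```
+# openstack flavor show m1.medium    # verify VCPUs, RAM and disk before resizing
+```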
+ +**Read More on** : [**How to Create and Delete Virtual Machine(VM) from Command line in OpenStack**][1] + +In this tutorial I will demonstrate how to resize an openstack instance from command line. Let’s assume I have an existing instance named “ **test_resize_vm** ” and it’s associated flavor is “m1.small” and root partition disk size is 20 GB. + +Execute the below command from controller node to check on which compute host our vm “test_resize_vm” is provisioned and its flavor details + +``` +:~# openstack server show test_resize_vm | grep -E "flavor|hypervisor" +| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-57    | +| flavor                               | m1.small (2)  | +:~# +``` + +Login to VM as well and check the root partition size, + +``` +[[email protected] ~]# df -Th +Filesystem     Type      Size  Used Avail Use% Mounted on +/dev/vda1      xfs        20G  885M   20G   5% / +devtmpfs       devtmpfs  900M     0  900M   0% /dev +tmpfs          tmpfs     920M     0  920M   0% /dev/shm +tmpfs          tmpfs     920M  8.4M  912M   1% /run +tmpfs          tmpfs     920M     0  920M   0% /sys/fs/cgroup +tmpfs          tmpfs     184M     0  184M   0% /run/user/1000 +[[email protected] ~]# echo "test file for resize operation" > demofile +[[email protected] ~]# cat demofile +test file for resize operation +[[email protected] ~]# +``` + +Get the available flavor list using below command, + +``` +:~# openstack flavor list ++--------------------------------------|-----------------|-------|------|-----------|-------|-----------+ +| ID                                   | Name            |   RAM | Disk | Ephemeral | VCPUs | Is Public | ++--------------------------------------|-----------------|-------|------|-----------|-------|-----------+ +| 2                                    | m1.small        |  2048 |   20 |         0 |     1 | True      | +| 3                                    | m1.medium       |  4096 |   40 |         0 |     2 | True      | +| 4                                    | m1.large        |  8192 |   80 |         0 |     4 | True      | +| 5                                    | m1.xlarge       | 16384 |  160 |         0 |     8 | True      | ++--------------------------------------|-----------------|-------|------|-----------|-------|-----------+ +``` + +So we will be using the flavor “m1.medium” for resize operation, Run the beneath nova command to resize “test_resize_vm”, + +Syntax: # nova resize {VM_Name} {flavor_id} —poll + +``` +:~# nova resize test_resize_vm 3 --poll +Server resizing... 
100% complete +Finished +:~# +``` + +Now confirm the resize operation using “ **openstack server –confirm”** command, + +``` +~# openstack server list | grep -i test_resize_vm +| 1d56f37f-94bd-4eef-9ff7-3dccb4682ce0 | test_resize_vm | VERIFY_RESIZE |private-net=10.20.10.51                                  | +:~# +``` + +As we can see in the above command output the current status of the vm is “ **verify_resize** “, execute below command to confirm resize, + +``` +~# openstack server resize --confirm 1d56f37f-94bd-4eef-9ff7-3dccb4682ce0 +~# +``` + +After the resize confirmation, status of VM will become active, now re-verify hypervisor and flavor details for the vm + +``` +:~# openstack server show test_resize_vm | grep -E "flavor|hypervisor" +| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-58   | +| flavor                               | m1.medium (3)| +``` + +Login to your VM now and verify the root partition size + +``` +[[email protected] ~]# df -Th +Filesystem     Type      Size  Used Avail Use% Mounted on +/dev/vda1      xfs        40G  887M   40G   3% / +devtmpfs       devtmpfs  1.9G     0  1.9G   0% /dev +tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm +tmpfs          tmpfs     1.9G  8.4M  1.9G   1% /run +tmpfs          tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup +tmpfs          tmpfs     380M     0  380M   0% /run/user/1000 +[[email protected] ~]# cat demofile +test file for resize operation +[[email protected] ~]# +``` + +This confirm that VM root partition has been resized successfully. + +**Note:** Due to some reason if resize operation was not successful and you want to revert the vm back to previous state, then run the following command, + +``` +# openstack server resize --revert {instance_uuid} +``` + +If have noticed “ **openstack server show** ” commands output, VM is migrated from compute-57 to compute-58 after resize. This is the default behavior of “nova resize” command ( i.e nova resize command will migrate the instance to another compute & then resize it based on the flavor details) + +In case if you have only one compute node then nova resize will not work, but we can make it work by changing the below parameter in nova.conf file on compute node, + +Login to compute node, verify the parameter value + +If “ **allow_resize_to_same_host** ” is set as False then change it to True and restart the nova compute service. + +**Read More on** [**OpenStack Deployment using Devstack on CentOS 7 / RHEL 7 System**][2] + +That’s all from this tutorial, in case it helps you technically then please do share your feedback and comments. 
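+
+As a footnote to the single compute node case mentioned above, the change usually amounts to one line in nova.conf on the compute node plus a service restart. Treat this as a sketch: /etc/nova/nova.conf is the common default path, and the compute service name differs between distributions (openstack-nova-compute on RHEL/CentOS, nova-compute on Ubuntu/Debian).
+
+```
+[DEFAULT]
+allow_resize_to_same_host = True
+```
+
+```
+# systemctl restart openstack-nova-compute
+```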
+ +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/resize-openstack-instance-command-line/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/create-delete-virtual-machine-command-line-openstack/ +[2]: https://www.linuxtechi.com/openstack-deployment-devstack-centos-7-rhel-7/ diff --git a/sources/tech/20190123 Dockter- A container image builder for researchers.md b/sources/tech/20190123 Dockter- A container image builder for researchers.md new file mode 100644 index 0000000000..359d0c1d1e --- /dev/null +++ b/sources/tech/20190123 Dockter- A container image builder for researchers.md @@ -0,0 +1,121 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Dockter: A container image builder for researchers) +[#]: via: (https://opensource.com/article/19/1/dockter-image-builder-researchers) +[#]: author: (Nokome Bentley https://opensource.com/users/nokome) + +Dockter: A container image builder for researchers +====== +Dockter supports the specific requirements of researchers doing data analysis, including those using R. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/building_skyscaper_organization.jpg?itok=Ir5epxm8) + +Dependency hell is ubiquitous in the world of software for research, and this affects research transparency and reproducibility. Containerization is one solution to this problem, but it creates new challenges for researchers. Docker is gaining popularity in the research community—but using it efficiently requires solid Dockerfile writing skills. + +As a part of the [Stencila][1] project, which is a platform for creating, collaborating on, and sharing data-driven content, we are developing [Dockter][2], an open source tool that makes it easier for researchers to create Docker images for their projects. Dockter scans a research project's source code, generates a Dockerfile, and builds a Docker image. It has a range of features that allow flexibility and can help researchers learn more about working with Docker. + +Dockter also generates a JSON file with information about the software environment (based on [CodeMeta][3] and [Schema.org][4]) to enable further processing and interoperability with other tools. + +Several other projects create Docker images from source code and/or requirements files, including: [alibaba/derrick][5], [jupyter/repo2docker][6], [Gueils/whales][7], [o2r-project/containerit][8]; [openshift/source-to-image][9], and [ViDA-NYU/reprozip][10]. Dockter is similar to repo2docker, containerit, and ReproZip in that it is aimed at researchers doing data analysis (and supports R), whereas most other tools are aimed at software developers (and don't support R). 
+ +Dockter differs from these projects principally in that it: + + * Performs static code analysis for multiple languages to determine package requirements + * Uses package databases to determine package system dependencies and generate linked metadata (containerit does this for R) + * Installs language package dependencies quicker (which can be useful during research projects where dependencies often change) + * By default but optionally, installs Stencila packages so that Stencila client interfaces can execute code in the container + + + +### Dockter's features + +Following are some of the ways researchers can use Dockter. + +#### Generating Docker images from code + +Dockter scans a research project folder and builds a Docker image for it. If the folder already has a Dockerfile, Dockter will build the image from that. If not, Dockter will scan the source code files in the folder and generate one. Dockter currently handles R, Python, and Node.js source code. The .dockerfile (with the dot at the beginning) it generates is fully editable so users can take over from Dockter and carry on with editing the file as they see fit. + +If the folder contains an R package [DESCRIPTION][11] file, Dockter will install the R packages listed under Imports into the image. If the folder does not contain a DESCRIPTION file, Dockter will scan all the R files in the folder for package import or usage statements and create a .DESCRIPTION file. + +If the folder contains a [requirements.txt][12] file for Python, Dockter will copy it into the Docker image and use [pip][13] to install the specified packages. If the folder does not contain either of those files, Dockter will scan all the folder's .py files for import statements and create a .requirements.txt file. + +If the folder contains a [package.json][14] file, Dockter will copy it into the Docker image and use npm to install the specified packages. If the folder does not contain a package.json file, Dockter will scan all the folder's .js files for require calls and create a .package.json file. + +#### Capturing system requirements automatically + +One of the headaches researchers face when hand-writing Dockerfiles is figuring out which system dependencies their project needs. Often this involves a lot of trial and error. Dockter automatically checks if any dependencies (or dependencies of dependencies, or dependencies of…) require system packages and installs those into the image. No more trial and error cycles of build, fail, add dependency, repeat… + +#### Reinstalling language packages faster + +If you have ever built a Docker image, you know it can be frustrating waiting for all your project's dependencies to reinstall when you add or remove just one. + +This happens because of Docker's layered filesystem: When you update a requirements file, Docker throws away all the subsequent layers—including the one where you previously installed your dependencies. That means all the packages have to be reinstalled. + +Dockter takes a different approach. It leaves the installation of language packages to the language package managers: Python's pip, Node.js's npm, and R's install.packages. These package managers are good at the job they were designed for: checking which packages need to be updated and updating only them. The result is much faster rebuilds, especially for R packages, which often involve compilation. + +Dockter does this by looking for a special **# dockter** comment in a Dockerfile. 
Instead of throwing away layers, it executes all instructions after this comment in the same layer—thereby reusing packages that were previously installed. + +#### Generating structured metadata for a project + +Dockter uses [JSON-LD][15] as its internal data structure. When it parses a project's source code, it generates a JSON-LD tree using vocabularies from schema.org and CodeMeta. + +Dockter also fetches metadata on a project's dependencies, which could be used to generate a complete software citation for the project. + +### Easy to pick up, easy to throw away + +Dockter is designed to make it easier to get started creating Docker images for your project. But it's also designed to not get in your way or restrict you from using bare Docker. You can easily and individually override any of the steps Dockter takes to build an image. + + * **Code analysis:** To stop Dockter from doing code analysis and specify your project's package dependencies, just remove the leading **.** (dot) from the .DESCRIPTION, .requirements.txt, or .package.json files. + + * **Dockerfile generation:** Dockter aims to generate readable Dockerfiles that conform to best practices. They include comments on what each section does and are a good way to start learning how to write your own Dockerfiles. To stop Dockter from generating a .Dockerfile and start editing it yourself, just rename it Dockerfile (without the leading dot). + + + + +### Install Dockter + +[Dockter is available][16] as pre-compiled, standalone command line tool or as a Node.js package. Click [here][17] for a demo. + +We welcome and encourage all [contributions][18]! + +A longer version of this article is available on the project's [GitHub page][19]. + +Aleksandra Pawlik will present [Building reproducible computing environments: a workshop for non-experts][20] at [linux.conf.au][21], January 21-25 in Christchurch, New Zealand. 
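+
+To make the "throw away" half of that concrete, the hand-off described in the "Easy to pick up, easy to throw away" section is just a couple of renames followed by plain Docker. The image tag below is a placeholder, and the dotted file names are the ones Dockter generates as described earlier:
+
+```
+$ mv .Dockerfile Dockerfile               # take ownership of the generated Dockerfile
+$ mv .requirements.txt requirements.txt   # keep and edit the detected Python dependencies yourself
+$ docker build -t myproject .             # carry on with bare Docker from here
+```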
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/dockter-image-builder-researchers + +作者:[Nokome Bentley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nokome +[b]: https://github.com/lujun9972 +[1]: https://stenci.la/ +[2]: https://stencila.github.io/dockter/ +[3]: https://codemeta.github.io/index.html +[4]: http://Schema.org +[5]: https://github.com/alibaba/derrick +[6]: https://github.com/jupyter/repo2docker +[7]: https://github.com/Gueils/whales +[8]: https://github.com/o2r-project/containerit +[9]: https://github.com/openshift/source-to-image +[10]: https://github.com/ViDA-NYU/reprozip +[11]: http://r-pkgs.had.co.nz/description.html +[12]: https://pip.readthedocs.io/en/1.1/requirements.html +[13]: https://pypi.org/project/pip/ +[14]: https://docs.npmjs.com/files/package.json +[15]: https://json-ld.org/ +[16]: https://github.com/stencila/dockter/releases/ +[17]: https://asciinema.org/a/pOHpxUqIVkGdA1dqu7bENyxZk?size=medium&cols=120&autoplay=1 +[18]: https://github.com/stencila/dockter/blob/master/CONTRIBUTING.md +[19]: https://github.com/stencila/dockter +[20]: https://2019.linux.conf.au/schedule/presentation/185/ +[21]: https://linux.conf.au/ diff --git a/sources/tech/20190123 GStreamer WebRTC- A flexible solution to web-based media.md b/sources/tech/20190123 GStreamer WebRTC- A flexible solution to web-based media.md new file mode 100644 index 0000000000..bb7e129ff3 --- /dev/null +++ b/sources/tech/20190123 GStreamer WebRTC- A flexible solution to web-based media.md @@ -0,0 +1,108 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (GStreamer WebRTC: A flexible solution to web-based media) +[#]: via: (https://opensource.com/article/19/1/gstreamer) +[#]: author: (Nirbheek Chauhan https://opensource.com/users/nirbheek) + +GStreamer WebRTC: A flexible solution to web-based media +====== +GStreamer's WebRTC implementation eliminates some of the shortcomings of using WebRTC in native apps, server applications, and IoT devices. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe) + +Currently, [WebRTC.org][1] is the most popular and feature-rich WebRTC implementation. It is used in Chrome and Firefox and works well for browsers, but the Native API and implementation have several shortcomings that make it a less-than-ideal choice for uses outside of browsers, including native apps, server applications, and internet of things (IoT) devices. + +Last year, our company ([Centricular][2]) made an independent implementation of a Native WebRTC API available in GStreamer 1.14. This implementation is much easier to use and more flexible than the WebRTC.org Native API, is transparently compatible with WebRTC.org, has been tested with all browsers, and is already in production use. + +### What are GStreamer and WebRTC? 
+ +[GStreamer][3] is an open source, cross-platform multimedia framework and one of the easiest and most flexible ways to implement any application that needs to play, record, or transform media-like data across a diverse scale of devices and products, including embedded (IoT, in-vehicle infotainment, phones, TVs, etc.), desktop (video/music players, video recording, non-linear editing, video conferencing, [VoIP][4] clients, browsers, etc.), servers (encode/transcode farms, video/voice conferencing servers, etc.), and [more][5]. + +The main feature that makes GStreamer the go-to multimedia framework for many people is its pipeline-based model, which solves one of the hardest problems in API design: catering to applications of varying complexity; from the simplest one-liners and quick solutions to those that need several hundreds of thousands of lines of code to implement their full feature set. If you want to learn how to use GStreamer, [Jan Schmidt's tutorial][6] from [LCA 2018][7] is a good place to start. + +[WebRTC][8] is a set of draft specifications that build upon existing [RTP][9], [RTCP][10], [SDP][11], [DTLS][12], [ICE][13], and other real-time communication (RTC) specifications and define an API for making them accessible using browser JavaScript (JS) APIs. + +People have been doing real-time communication over [IP][14] for [decades][15] with the protocols WebRTC builds upon. WebRTC's real innovation was creating a bridge between native applications and web apps by defining a standard yet flexible API that browsers can expose to untrusted JavaScript code. + +These specifications are [constantly being improved][16], which, combined with the ubiquitous nature of browsers, means WebRTC is fast becoming the standard choice for video conferencing on all platforms and for most applications. + +### **Everything is great, let's build amazing apps!** + +Not so fast, there's more to the story! For web apps, the [PeerConnection API][17] is [everywhere][18]. There are some browser-specific quirks, and the API keeps changing, but the [WebRTC JS adapter][19] handles most of that. Overall, the web app experience is mostly 👍. + +Unfortunately, for native code or applications that need more flexibility than a sandboxed JavaScript app can achieve, there haven't been a lot of great options. + +[Libwebrtc][20] (Google's implementation), [Janus][21], [Kurento][22], and [OpenWebRTC][23] have traditionally been the main contenders, but each implementation has its own inflexibilities, shortcomings, and constraints. + +Libwebrtc is still the most mature implementation, but it is also the most difficult to work with. Since it's embedded inside Chrome, it's a moving target and the project [is quite difficult to build and integrate][24]. These are all obstacles for native or server app developers trying to quickly prototype and experiment with things. + +Also, WebRTC was not built for multimedia, so the lower layers get in the way of non-browser use cases and applications. It is quite painful to do anything other than the default "set raw media, transmit" and "receive from remote, get raw media." This means if you want to use your own filters or hardware-specific codecs or sinks/sources, you end up having to fork libwebrtc. + +[**OpenWebRTC**][23] by Ericsson was the first attempt to rectify this situation. It was built on top of GStreamer. 
Its target audience was app developers, and it fit the bill quite well as a proof of concept—even though it used a custom API and some of the architectural decisions made it quite inflexible for most other uses. However, after an initial flurry of activity around the project, momentum petered out, the project failed to gather a community, and it is now effectively dead. Full disclosure: Centricular worked with Ericsson to polish some of the rough edges around the project immediately prior to its public release. + +### WebRTC in GStreamer + +GStreamer's WebRTC implementation gives you full control, as it does with any other [GStreamer pipeline][25]. + +As we said, the WebRTC standards build upon existing standards and protocols that serve similar purposes. GStreamer has supported almost all of them for a while now because they were being used for real-time communication, live streaming, and many other IP-based applications. This led Ericsson to choose GStreamer as the base for its OpenWebRTC project. + +Combined with the [SRTP][26] and DTLS plugins that were written during OpenWebRTC's development, it means that the implementation is built upon a solid and well-tested base, and implementing WebRTC features does not involve as much code-from-scratch work as one might presume. However, WebRTC is a large collection of standards, and reaching feature-parity with libwebrtc is an ongoing task. + +Due to decisions made while architecting WebRTCbin's internals, the API follows the PeerConnection specification quite closely. Therefore, almost all its missing features involve writing code that would plug into clearly defined sockets. For instance, since the GStreamer 1.14 release, the following features have been added to the WebRTC implementation and will be available in the next release of the GStreamer WebRTC: + + * Forward error correction + * RTP retransmission (RTX) + * RTP BUNDLE + * Data channels over SCTP + + + +We believe GStreamer's API is the most flexible, versatile, and easy to use WebRTC implementation out there, and it will only get better as time goes by. Bringing the power of pipeline-based multimedia manipulation to WebRTC opens new doors for interesting, unique, and highly efficient applications. If you'd like to demo the technology and play with the code, build and run [these demos][27], which include C, Rust, Python, and C# examples. + +Matthew Waters will present [GStreamer WebRTC—The flexible solution to web-based media][28] at [linux.conf.au][29], January 21-25 in Christchurch, New Zealand. 
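+
+If the pipeline-based model discussed above is new to you, a one-liner is the quickest way to get a feel for it before building the WebRTC demos linked above. These are standard GStreamer command line tools and elements rather than anything WebRTC-specific, and the second command assumes gst-plugins-bad 1.14 or newer so that the webrtcbin element is present:
+
+```
+$ gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink   # minimal source ! convert ! sink pipeline
+$ gst-inspect-1.0 webrtcbin                                    # check that the WebRTC element is available
+```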
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/gstreamer + +作者:[Nirbheek Chauhan][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nirbheek +[b]: https://github.com/lujun9972 +[1]: http://webrtc.org/ +[2]: https://www.centricular.com/ +[3]: https://gstreamer.freedesktop.org/documentation/application-development/introduction/gstreamer.html +[4]: https://en.wikipedia.org/wiki/Voice_over_IP +[5]: https://wiki.ligo.org/DASWG/GstLAL +[6]: https://www.youtube.com/watch?v=ZphadMGufY8 +[7]: http://lca2018.linux.org.au/ +[8]: https://en.wikipedia.org/wiki/WebRTC +[9]: https://en.wikipedia.org/wiki/Real-time_Transport_Protocol +[10]: https://en.wikipedia.org/wiki/RTP_Control_Protocol +[11]: https://en.wikipedia.org/wiki/Session_Description_Protocol +[12]: https://en.wikipedia.org/wiki/Datagram_Transport_Layer_Security +[13]: https://en.wikipedia.org/wiki/Interactive_Connectivity_Establishment +[14]: https://en.wikipedia.org/wiki/Internet_Protocol +[15]: https://en.wikipedia.org/wiki/Session_Initiation_Protocol +[16]: https://datatracker.ietf.org/wg/rtcweb/documents/ +[17]: https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection +[18]: https://caniuse.com/#feat=rtcpeerconnection +[19]: https://github.com/webrtc/adapter +[20]: https://github.com/aisouard/libwebrtc +[21]: https://janus.conf.meetecho.com/ +[22]: https://www.kurento.org/kurento-architecture +[23]: https://en.wikipedia.org/wiki/OpenWebRTC +[24]: https://webrtchacks.com/building-webrtc-from-source/ +[25]: https://gstreamer.freedesktop.org/documentation/application-development/introduction/basics.html +[26]: https://en.wikipedia.org/wiki/Secure_Real-time_Transport_Protocol +[27]: https://github.com/centricular/gstwebrtc-demos/ +[28]: https://linux.conf.au/schedule/presentation/143/ +[29]: https://linux.conf.au/ diff --git a/sources/tech/20190124 Orpie- A command-line reverse Polish notation calculator.md b/sources/tech/20190124 Orpie- A command-line reverse Polish notation calculator.md new file mode 100644 index 0000000000..10e666f625 --- /dev/null +++ b/sources/tech/20190124 Orpie- A command-line reverse Polish notation calculator.md @@ -0,0 +1,128 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Orpie: A command-line reverse Polish notation calculator) +[#]: via: (https://opensource.com/article/19/1/orpie) +[#]: author: (Peter Faller https://opensource.com/users/peterfaller) + +Orpie: A command-line reverse Polish notation calculator +====== +Orpie is a scientific calculator that functions much like early, well-loved HP calculators. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calculator_money_currency_financial_tool.jpg?itok=2QMa1y8c) +Orpie is a text-mode [reverse Polish notation][1] (RPN) calculator for the Linux console. It works very much like the early, well-loved Hewlett-Packard calculators. + +### Installing Orpie + +RPM and DEB packages are available for most distributions, so installation is just a matter of using either: + +``` +$ sudo apt install orpie +``` + +or + +``` +$ sudo yum install orpie +``` + +Orpie has a comprehensive man page; new users may want to have it open in another terminal window as they get started. 
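+
+For example, you could keep these open in a second terminal while following along. This assumes your distribution ships the manual pages with the package; the orpierc(5) page documents the configuration file introduced next.
+
+```
+$ man orpie       # command reference and key bindings
+$ man 5 orpierc   # configuration file format
+```
+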
Orpie can be customized for each user by editing the **~/.orpierc** configuration file. The [orpierc(5)][2] man page describes the contents of this file, and **/etc/orpierc** describes the default configuration. + +### Starting up + +Start Orpie by typing **orpie** at the command line. The main screen shows context-sensitive help on the left and the stack on the right. The cursor, where you enter numbers you want to calculate, is at the bottom-right corner. + +![](https://opensource.com/sites/default/files/uploads/orpie_start.png) + +### Example calculation + +For a simple example, let's calculate the factorial of **5 (2 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 3 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 4 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 5)**. First the long way: + +| Keys | Result | +| --------- | --------- | +| 2 | Push 2 onto the stack | +| 3 | Push 3 onto the stack | +| * | Multiply to get 6 | +| 4 | Push 4 onto the stack | +| * | Multiply to get 24 | +| 5 | Push 5 onto the stack | +| * | Multiply to get 120 | + +Note that the multiplication happens as soon as you type *****. If you hit **< enter>** after ***** , Orpie will duplicate the value at position 1 on the stack. (If this happens, you can drop the duplicate with **\**.) + +Equivalent sequences are: + +| Keys | Result | +| ------------- | ------------- | +| 2 3 * 4 * 5 * | Faster! | +| 2 3 4 5 * * * | Same result | +| 5 ' fact | Fastest: Use the built-in function | + +Observe that when you enter **'** , the left pane changes to show matching functions as you type. In the example above, typing **fa** is enough to get the **fact** function. Orpie offers many functions—experiment by typing **'** and a few letters to see what's available. + +![](https://opensource.com/sites/default/files/uploads/orpie_functions.png) + +Note that each operation replaces one or more values on the stack. If you want to store the value at position 1 in the stack, key in (for example) **@factot ** and **S'**. To retrieve the value, key in (for example) **@factot ** then **;** (if you want to see it; otherwise just leave **@factot** as the value for the next calculation). + +### Constants and units + +Orpie understands units and predefines many useful scientific constants. For example, to calculate the energy in a blue light photon at 400nm, calculate **E=hc/(400nm)**. The key sequences are: + +| Keys | Result | +| -------------- | -------------- | +| C c | Get the speed of light in m/s | +| C h | Get Planck's constant in Js | +| * | Calculate h*c | +| 400 9 n _ m | Input 4 _ 10^-9 m | +| / | Do the division and get the result: 4.966 _ 10^-19 J | + +Like choosing functions after typing **'** , typing **C** shows matching constants based on what you type. + +![](https://opensource.com/sites/default/files/uploads/orpie_constants.png) + +### Matrices + +Orpie can also do operations with matrices. For example, to multiply two 2x2 matrices: + +| Keys | Result | +| -------- | -------- | +| [ 1 , 2 [ 3 , 4 | Stack contains the matrix [[ 1, 2 ][ 3, 4 ]] | +| [ 1 , 0 [ 1 , 1 | Push the multiplier matrix onto the stack | +| * | The result is: [[ 3, 2 ][ 7, 4 ]] | + +Note that the **]** characters are automatically inserted—entering **[** starts a new row. + +### Complex numbers + +Orpie can also calculate with complex numbers. 
They can be entered or displayed in either polar or rectangular form. You can toggle between the polar and rectangular display using the **p** key, and between degrees and radians using the **r** key. For example, to multiply **3 + 4i** by **4 + 4i** : + +| Keys | Result | +| -------- | -------- | +| ( 3 , 4 | The stack contains (3, 4) | +| ( 4 , 4 | Push (4, 4) | +| * | Get the result: (-4, 28) | + +Note that as you go, the results are kept on the stack so you can observe intermediate results in a lengthy calculation. + +![](https://opensource.com/sites/default/files/uploads/orpie_final.png) + +### Quitting Orpie + +You can exit from Orpie by typing **Q**. Your state is saved, so the next time you start Orpie, you'll find the stack as you left it. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/orpie + +作者:[Peter Faller][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/peterfaller +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Reverse_Polish_notation +[2]: https://github.com/pelzlpj/orpie/blob/master/doc/orpierc.5 diff --git a/sources/tech/20190124 ffsend - Easily And Securely Share Files From Linux Command Line Using Firefox Send Client.md b/sources/tech/20190124 ffsend - Easily And Securely Share Files From Linux Command Line Using Firefox Send Client.md new file mode 100644 index 0000000000..fcbdd3c5c7 --- /dev/null +++ b/sources/tech/20190124 ffsend - Easily And Securely Share Files From Linux Command Line Using Firefox Send Client.md @@ -0,0 +1,330 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (ffsend – Easily And Securely Share Files From Linux Command Line Using Firefox Send Client) +[#]: via: (https://www.2daygeek.com/ffsend-securely-share-files-folders-from-linux-command-line-using-firefox-send-client/) +[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/) + +ffsend – Easily And Securely Share Files From Linux Command Line Using Firefox Send Client +====== + +Linux users were preferred to go with scp or rsync for files or folders copy. + +However, so many new options are coming to Linux because it’s a opensource. + +Anyone can develop a secure software for Linux. + +We had written multiple articles in our site in the past about this topic. + +Even, today we are going to discuss the same kind of topic called ffsend. + +Those are **[OnionShare][1]** , **[Magic Wormhole][2]** , **[Transfer.sh][3]** and **[Dcp – Dat Copy][4]**. + +### What’s ffsend? + +[ffsend][5] is a command line Firefox Send client that allow users to transfer and receive files and folders through command line. + +It allow us to easily and securely share files and directories from the command line through a safe, private and encrypted link using a single simple command. + +Files are shared using the Send service and the allowed file size is up to 2GB. + +Others are able to download these files with this tool, or through their web browser. + +All files are always encrypted on the client, and secrets are never shared with the remote host. + +Additionally you can add a password for the file upload. + +The uploaded files will be removed after the download (default count is 1 up to 10) or after 24 hours. 
This will make sure that your files does not remain online forever. + +This tool is currently in the alpha phase. Use at your own risk. Also, only limited installation options are available right now. + +### ffsend Features: + + * Fully featured and friendly command line tool + * Upload and download files and directories securely + * Always encrypted on the client + * Additional password protection, generation and configurable download limits + * Built-in file and directory archiving and extraction + * History tracking your files for easy management + * Ability to use your own Send host + * Inspect or delete shared files + * Accurate error reporting + * Low memory footprint, due to encryption and download/upload streaming + * Intended to be used in scripts without interaction + + + +### How To Install ffsend in Linux? + +There is no package for each distributions except Debian and Arch Linux systems. However, we can easily get this utility by downloading the prebuilt appropriate binaries file based on the operating system and architecture. + +Run the below command to download the latest available version for your operating system. + +``` +$ wget https://github.com/timvisee/ffsend/releases/download/v0.1.2/ffsend-v0.1.2-linux-x64.tar.gz +``` + +Extract the tar archive using the following command. + +``` +$ tar -xvf ffsend-v0.1.2-linux-x64.tar.gz +``` + +Run the following command to identify your path variable. + +``` +$ echo $PATH +/home/daygeek/.cargo/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl +``` + +As i told previously, just move the executable file to your path directory. + +``` +$ sudo mv ffsend /usr/local/sbin +``` + +Run the `ffsend` command alone to get the basic usage information. + +``` +$ ffsend +ffsend 0.1.2 +Usage: ffsend [FLAGS] ... + +Easily and securely share files from the command line. +A fully featured Firefox Send client. + +Missing subcommand. Here are the most used: + ffsend upload ... + ffsend download ... + +To show all subcommands, features and other help: + ffsend help [SUBCOMMAND] +``` + +For Arch Linux based users can easily install it with help of **[AUR Helper][6]** , as this package is available in AUR repository. + +``` +$ yay -S ffsend +``` + +For **`Debian/Ubuntu`** systems, use **[DPKG Command][7]** to install ffsend. + +``` +$ wget https://github.com/timvisee/ffsend/releases/download/v0.1.2/ffsend_0.1.2_amd64.deb +$ sudo dpkg -i ffsend_0.1.2_amd64.deb +``` + +### How To Send A File Using ffsend? + +It’s not complicated. We can easily send a file using simple syntax. + +**Syntax:** + +``` +$ ffsend upload [/Path/to/the/file/name] +``` + +In the following example, we are going to upload a file called `passwd-up1.sh`. Once you upload the file then you will be getting the unique URL. + +``` +$ ffsend upload passwd-up1.sh --copy +Upload complete +Share link: https://send.firefox.com/download/a4062553f4/#yy2_VyPaUMG5HwXZzYRmpQ +``` + +![][9] + +Just download the above unique URL to get the file in any remote system. + +**Syntax:** + +``` +$ ffsend download [Generated URL] +``` + +Output for the above command. + +``` +$ ffsend download https://send.firefox.com/download/a4062553f4/#yy2_VyPaUMG5HwXZzYRmpQ +Download complete +``` + +![][10] + +Use the following syntax format for directory upload. + +``` +$ ffsend upload [/Path/to/the/Directory] --copy +``` + +In this example, we are going to upload `2g` directory. 
+ +``` +$ ffsend upload /home/daygeek/2g --copy +You've selected a directory, only a single file may be uploaded. +Archive the directory into a single file? [Y/n]: y +Archiving... +Upload complete +Share link: https://send.firefox.com/download/90aa5cfe67/#hrwu6oXZRG2DNh8vOc3BGg +``` + +Just download the above generated the unique URL to get a folder in any remote system. + +``` +$ ffsend download https://send.firefox.com/download/90aa5cfe67/#hrwu6oXZRG2DNh8vOc3BGg +You're downloading an archive, extract it into the selected directory? [Y/n]: y +Extracting... +Download complete +``` + +As this already send files through a safe, private, and encrypted link. However, if you would like to add a additional security at your level. Yes, you can add a password for a file. + +``` +$ ffsend upload file-copy-rsync.sh --copy --password +Password: +Upload complete +Share link: https://send.firefox.com/download/0742d24515/#P7gcNiwZJ87vF8cumU71zA +``` + +It will prompt you to update a password when you are trying to download a file in the remote system. + +``` +$ ffsend download https://send.firefox.com/download/0742d24515/#P7gcNiwZJ87vF8cumU71zA +This file is protected with a password. +Password: +Download complete +``` + +Alternatively you can limit a download speed by providing the download speed while uploading a file. + +``` +$ ffsend upload file-copy-scp.sh --copy --downloads 10 +Upload complete +Share link: https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw +``` + +Just download the above unique URL to get a file in any remote system. + +``` +ffsend download https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw +Download complete +``` + +If you want to see more details about the file, use the following format. It will shows you the file name, file size, Download counts and when it will going to expire. + +**Syntax:** + +``` +$ ffsend info [Generated URL] + +$ ffsend info https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw +ID: 23cb923c4e +Name: file-copy-scp.sh +Size: 115 B +MIME: application/x-sh +Downloads: 3 of 10 +Expiry: 23h58m (86280s) +``` + +You can view your transaction history using the following format. + +``` +$ ffsend history +# LINK EXPIRY +1 https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw 23h57m +2 https://send.firefox.com/download/0742d24515/#P7gcNiwZJ87vF8cumU71zA 23h55m +3 https://send.firefox.com/download/90aa5cfe67/#hrwu6oXZRG2DNh8vOc3BGg 23h52m +4 https://send.firefox.com/download/a4062553f4/#yy2_VyPaUMG5HwXZzYRmpQ 23h46m +5 https://send.firefox.com/download/74ff30e43e/#NYfDOUp_Ai-RKg5g0fCZXw 23h44m +6 https://send.firefox.com/download/69afaab1f9/#5z51_94jtxcUCJNNvf6RcA 23h43m +``` + +If you don’t want the link anymore then we can delete it. + +**Syntax:** + +``` +$ ffsend delete [Generated URL] + +$ ffsend delete https://send.firefox.com/download/69afaab1f9/#5z51_94jtxcUCJNNvf6RcA +File deleted +``` + +Alternatively this can be done using firefox browser by opening the page . + +Just drag and drop a file to upload it. +![][11] + +Once the file is downloaded, it will show you that 100% download completed. +![][12] + +To check other possible options, navigate to man page or help page. + +``` +$ ffsend --help +ffsend 0.1.2 +Tim Visee +Easily and securely share files from the command line. +A fully featured Firefox Send client. 
+ +USAGE: + ffsend [FLAGS] [OPTIONS] [SUBCOMMAND] + +FLAGS: + -f, --force Force the action, ignore warnings + -h, --help Prints help information + -i, --incognito Don't update local history for actions + -I, --no-interact Not interactive, do not prompt + -q, --quiet Produce output suitable for logging and automation + -V, --version Prints version information + -v, --verbose Enable verbose information and logging + -y, --yes Assume yes for prompts + +OPTIONS: + -H, --history Use the specified history file [env: FFSEND_HISTORY] + -t, --timeout Request timeout (0 to disable) [env: FFSEND_TIMEOUT] + -T, --transfer-timeout Transfer timeout (0 to disable) [env: FFSEND_TRANSFER_TIMEOUT] + +SUBCOMMANDS: + upload Upload files [aliases: u, up] + download Download files [aliases: d, down] + debug View debug information [aliases: dbg] + delete Delete a shared file [aliases: del] + exists Check whether a remote file exists [aliases: e] + help Prints this message or the help of the given subcommand(s) + history View file history [aliases: h] + info Fetch info about a shared file [aliases: i] + parameters Change parameters of a shared file [aliases: params] + password Change the password of a shared file [aliases: pass, p] + +The public Send service that is used as default host is provided by Mozilla. +This application is not affiliated with Mozilla, Firefox or Firefox Send. +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/ffsend-securely-share-files-folders-from-linux-command-line-using-firefox-send-client/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/onionshare-secure-way-to-share-files-sharing-tool-linux/ +[2]: https://www.2daygeek.com/wormhole-securely-share-files-from-linux-command-line/ +[3]: https://www.2daygeek.com/transfer-sh-easy-fast-way-share-files-over-internet-from-command-line/ +[4]: https://www.2daygeek.com/dcp-dat-copy-secure-way-to-transfer-files-between-linux-systems/ +[5]: https://github.com/timvisee/ffsend +[6]: https://www.2daygeek.com/category/aur-helper/ +[7]: https://www.2daygeek.com/dpkg-command-to-manage-packages-on-debian-ubuntu-linux-mint-systems/ +[8]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-1.png +[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-2.png +[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-3.png +[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-4.png diff --git a/sources/tech/20190125 Using Antora for your open source documentation.md b/sources/tech/20190125 Using Antora for your open source documentation.md new file mode 100644 index 0000000000..3df2862ba1 --- /dev/null +++ b/sources/tech/20190125 Using Antora for your open source documentation.md @@ -0,0 +1,208 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: 
publisher: ( ) +[#]: url: ( ) +[#]: subject: (Using Antora for your open source documentation) +[#]: via: (https://fedoramagazine.org/using-antora-for-your-open-source-documentation/) +[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/) + +Using Antora for your open source documentation +====== +![](https://fedoramagazine.org/wp-content/uploads/2019/01/antora-816x345.jpg) + +Are you looking for an easy way to write and publish technical documentation? Let me introduce [Antora][1] — an open source documentation site generator. Simple enough for a tiny project, but also complex enough to cover large documentation sites such as [Fedora Docs][2]. + +With sources stored in git, written in a simple yet powerful markup language AsciiDoc, and a static HTML as an output, Antora makes writing, collaborating on, and publishing your documentation a no-brainer. + +### The basic concepts + +Before we build a simple site, let’s have a look at some of the core concepts Antora uses to make the world a happier place. Or, at least, to build a documentation website. + +#### Organizing the content + +All sources that are used to build your documentation site are stored in a **git repository**. Or multiple ones — potentially owned by different people. For example, at the time of writing, the Fedora Docs had its sources stored in 24 different repositories owned by different groups having their own rules around contributions. + +The content in Antora is organized into **components** , usually representing different areas of your project, or, well, different components of the software you’re documenting — such as the backend, the UI, etc. Components can be independently versioned, and each component gets a separate space on the docs site with its own menu. + +Components can be optionally broken down into so-called **modules**. Modules are mostly invisible on the site, but they allow you to organize your sources into logical groups, and even store each in different git repository if that’s something you need to do. We use this in Fedora Docs to separate [the Release Notes, the Installation Guide, and the System Administrator Guide][3] into three different source repositories with their own rules, while preserving a single view in the UI. + +What’s great about this approach is that, to some extent, the way your sources are physically structured is not reflected on the site. + +#### Virtual catalog + +When assembling the site, Antora builds a **virtual catalog** of all pages, assigning a [unique ID][4] to each one based on its name and the component, the version, and module it belongs to. The page ID is then used to generate URLs for each page, and for internal links as well. So, to some extent, the source repository structure doesn’t really matter as far as the site is concerned. + +As an example, if we’d for some reason decided to merge all the 24 repositories of Fedora Docs into one, nothing on the site would change. Well, except the “Edit this page” link on every page that would suddenly point to this one repository. + +#### Independent UI + +We’ve covered the content, but how it’s going to look like? + +Documentation sites generated with Antora use a so-called [UI bundle][5] that defines the look and feel of your site. The UI bundle holds all graphical assets such as CSS, images, etc. to make your site look beautiful. + +It is expected that the UI will be developed independently of the documentation content, and that’s exactly what Antora supports. 
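Before putting these pieces together, it may help to see the page ID idea from the virtual catalog section in action. The snippet below is only a sketch, not taken from the demo content: the component, version, module, and page names are placeholders, and the `xref:` coordinates follow the `version@component:module:page` pattern from the Antora documentation.

```
// Fully qualified reference to a page in another component and version
// (the version, component, module and page names here are hypothetical)
xref:2.0@component-b:ROOT:index.adoc[Component B start page]

// Within the same component and version, the coordinates can be omitted
xref:special-characters.adoc[Special Characters & Symbols]
```

Because links are resolved against the virtual catalog rather than against file paths, they keep working even if the underlying repositories are later split or merged.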
+ +#### Putting it all together + +Having sources distributed in multiple repositories might raise a question: How do you build the site? The answer is: [Antora Playbook][6]. + +Antora Playbook is a file that points to all the source repositories and the UI bundle. It also defines additional metadata such as the name of your site. + +The Playbook is the only file you need to have locally available in order to build the site. Everything else gets fetched automatically as a part of the build process. + +### Building a site with Antora + +Demo time! To build a minimal site, you need three things: + + 1. At least one component holding your AsciiDoc sources. + 2. An Antora Playbook. + 3. A UI bundle + + + +Good news is the nice people behind Antora provide [example Antora sources][7] we can try right away. + +#### The Playbook + +Let’s first have a look at [the Playbook][8]: + +``` +site: + title: Antora Demo Site +# the 404 page and sitemap files only get generated when the url property is set + url: https://example.org/docs + start_page: component-b::index.adoc +content: + sources: + - url: https://gitlab.com/antora/demo/demo-component-a.git + branches: master + - url: https://gitlab.com/antora/demo/demo-component-b.git + branches: [v2.0, v1.0] + start_path: docs +ui: + bundle: + url: https://gitlab.com/antora/antora-ui-default/-/jobs/artifacts/master/raw/build/ui-bundle.zip?job=bundle-stable + snapshot: true +``` + +As we can see, the Playbook defines some information about the site, lists the content repositories, and points to the UI bundle. + +There are two repositories. The [demo-component-a][9] with a single branch, and the [demo-component-b][10] having two branches, each representing a different version. + +#### Components + +The minimal source repository structure is nicely demonstrated in the [demo-component-a][9] repository: + +``` +antora.yml <- component metadata +modules/ + ROOT/ <- the default module + nav.adoc <- menu definition + pages/ <- a directory with all the .adoc sources + source1.adoc + source2.adoc + ... +``` + +The following + +``` +antora.yml +``` + +``` +name: component-a +title: Component A +version: 1.5.6 +start_page: ROOT:inline-text-formatting.adoc +nav: + - modules/ROOT/nav.adoc +``` + +contains metadata for this component such as the name and the version of the component, the starting page, and it also points to a menu definition file. + +The menu definition file is a simple list that defines the structure of the menu and the content. It uses the [page ID][4] to identify each page. + +``` +* xref:inline-text-formatting.adoc[Basic Inline Text Formatting] +* xref:special-characters.adoc[Special Characters & Symbols] +* xref:admonition.adoc[Admonition] +* xref:sidebar.adoc[Sidebar] +* xref:ui-macros.adoc[UI Macros] +* Lists +** xref:lists/ordered-list.adoc[Ordered List] +** xref:lists/unordered-list.adoc[Unordered List] + +And finally, there's the actual content under modules/ROOT/pages/ — you can see the repository for examples, or the AsciiDoc syntax reference +``` + +#### The UI bundle + +For the UI, we’ll be using the example UI provided by the project. + +Going into the details of Antora UI would be above the scope of this article, but if you’re interested, please see the [Antora UI documentation][5] for more info. + +#### Building the site + +Note: We’ll be using Podman to run Antora in a container. You can [learn about Podman on the Fedora Magazine][11]. + +To build the site, we only need to call Antora on the Playbook file. 
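One way to do that is to install the Antora command-line tools locally. The following is a rough sketch, assuming Node.js and npm are already available; the container approach described next avoids this local setup entirely:

```
# One-time setup: install the Antora CLI and the default site generator
$ npm install -g @antora/cli@2.0 @antora/site-generator-default@2.0

# Build the site by pointing the CLI at the playbook file
$ antora site.yml
```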
+ +The easiest way to get antora at the moment is to use the container image provided by the project. You can get it by running: + +``` +$ podman pull antora/antora +``` + +Let’s get the playbook repository: + +``` +$ git clone https://gitlab.com/antora/demo/demo-site.git +$ cd demo-site +``` + +And run Antora using the following command: + +``` +$ podman run --rm -it -v $(pwd):/antora:z antora/antora site.yml +``` + +The site will be available in the + +public + +``` +$ cd public +$ python3 -m http.server 8080 +``` + +directory. You can either open it in your web browser directly, or start a local web server using: + +Your site will be available on . + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/using-antora-for-your-open-source-documentation/ + +作者:[Adam Šamalík][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/asamalik/ +[b]: https://github.com/lujun9972 +[1]: https://antora.org/ +[2]: http://docs.fedoraproject.org/ +[3]: https://docs.fedoraproject.org/en-US/fedora/f29/ +[4]: https://docs.antora.org/antora/2.0/page/page-id/#structure +[5]: https://docs.antora.org/antora-ui-default/ +[6]: https://docs.antora.org/antora/2.0/playbook/ +[7]: https://gitlab.com/antora/demo +[8]: https://gitlab.com/antora/demo/demo-site/blob/master/site.yml +[9]: https://gitlab.com/antora/demo/demo-component-a +[10]: https://gitlab.com/antora/demo/demo-component-b +[11]: https://fedoramagazine.org/running-containers-with-podman/ diff --git a/sources/tech/20190127 Eliminate error handling by eliminating errors.md b/sources/tech/20190127 Eliminate error handling by eliminating errors.md new file mode 100644 index 0000000000..6eac4740eb --- /dev/null +++ b/sources/tech/20190127 Eliminate error handling by eliminating errors.md @@ -0,0 +1,204 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Eliminate error handling by eliminating errors) +[#]: via: (https://dave.cheney.net/2019/01/27/eliminate-error-handling-by-eliminating-errors) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +Eliminate error handling by eliminating errors +====== + +Go 2 aims to improve the overhead of [error handling][1], but do you know what is better than an improved syntax for handling errors? Not needing to handle errors at all. Now, I’m not saying “delete your error handling code”, instead I’m suggesting changing your code so you don’t have as many errors to handle. + +This article draws inspiration from a chapter in John Ousterhout’s, _[A philosophy of Software Design,][2]_ “Define Errors Out of Existence”. I’m going to try to apply his advice to Go. + +* * * + +Here’s a function to count the number of lines in a file, + +``` +func CountLines(r io.Reader) (int, error) { + var ( + br = bufio.NewReader(r) + lines int + err error + ) + + for { + _, err = br.ReadString('\n') + lines++ + if err != nil { + break + } + } + + if err != io.EOF { + return 0, err + } + return lines, nil + } +``` + +We construct a `bufio.Reader`, then sit in a loop calling the `ReadString` method, incrementing a counter until we reach the end of the file, then we return the number of lines read. That’s the code we _wanted_ to write, instead `CountLines` is made more complicated by its error handling. 
For example, there is this strange construction:

```
_, err = br.ReadString('\n')
lines++
if err != nil {
    break
}
```

We increment the count of lines _before_ checking the error—that looks odd. The reason we have to write it this way is that `ReadString` will return an error if it encounters an end-of-file—`io.EOF`—before hitting a newline character. This can happen if there is no trailing newline.

To address this problem, we rearrange the logic to increment the line count, then see if we need to exit the loop.1

But we’re not done checking errors yet. `ReadString` will return `io.EOF` when it hits the end of the file. This is expected: `ReadString` needs some way of saying _stop, there is nothing more to read_. So before we return the error to the caller of `CountLines`, we need to check if the error was _not_ `io.EOF`, and in that case propagate it up, otherwise we return `nil` to say that everything worked fine. This is why the final line of the function is not simply

```
return lines, err
```

I think this is a good example of Russ Cox’s [observation that error handling can obscure the operation of the function][3]. Let’s look at an improved version.

```
func CountLines(r io.Reader) (int, error) {
    sc := bufio.NewScanner(r)
    lines := 0

    for sc.Scan() {
        lines++
    }

    return lines, sc.Err()
}
```

This improved version switches from using `bufio.Reader` to `bufio.Scanner`. Under the hood `bufio.Scanner` uses `bufio.Reader`, adding a layer of abstraction which helps remove the error handling that obscured the operation of our previous version of `CountLines`.2

The method `sc.Scan()` returns `true` if the scanner _has_ matched a line of text and _has not_ encountered an error. So, the body of our `for` loop will be called only when there is a line of text in the scanner’s buffer. This means our revised `CountLines` correctly handles the case where there is no trailing newline. It also correctly handles the case where the file is empty.

Secondly, as `sc.Scan` returns `false` once an error is encountered, our `for` loop will exit when the end-of-file is reached or an error is encountered. The `bufio.Scanner` type memoises the first error it encounters and we recover that error once we’ve exited the loop using the `sc.Err()` method.

Lastly, `bufio.Scanner` takes care of handling `io.EOF` and will convert it to a `nil` if the end of file was reached without encountering another error.

* * *

My second example is inspired by Rob Pike’s _[Errors are values][4]_ blog post.

When dealing with opening, writing and closing files, the error handling is present but not overwhelming, as the operations can be encapsulated in helpers like `ioutil.ReadFile` and `ioutil.WriteFile`. However, when dealing with low level network protocols it often becomes necessary to build the response directly using I/O primitives, thus the error handling can become repetitive. Consider this fragment of an HTTP server which is constructing an HTTP/1.1 response.
+ +``` +type Header struct { + Key, Value string +} + +type Status struct { + Code int + Reason string +} + +func WriteResponse(w io.Writer, st Status, headers []Header, body io.Reader) error { + _, err := fmt.Fprintf(w, "HTTP/1.1 %d %s\r\n", st.Code, st.Reason) + if err != nil { + return err + } + + for _, h := range headers { + _, err := fmt.Fprintf(w, "%s: %s\r\n", h.Key, h.Value) + if err != nil { + return err + } + } + + if _, err := fmt.Fprint(w, "\r\n"); err != nil { + return err + } + + _, err = io.Copy(w, body) + return err +} +``` + +First we construct the status line using `fmt.Fprintf`, and check the error. Then for each header we write the header key and value, checking the error each time. Lastly we terminate the header section with an additional `\r\n`, check the error, and copy the response body to the client. Finally, although we don’t need to check the error from `io.Copy`, we do need to translate it from the two return value form that `io.Copy` returns into the single return value that `WriteResponse` expects. + +Not only is this a lot of repetitive work, each operation—fundamentally writing bytes to an `io.Writer`—has a different form of error handling. But we can make it easier on ourselves by introducing a small wrapper type. + +``` +type errWriter struct { + io.Writer + err error +} + +func (e *errWriter) Write(buf []byte) (int, error) { + if e.err != nil { + return 0, e.err + } + + var n int + n, e.err = e.Writer.Write(buf) + return n, nil +} +``` + +`errWriter` fulfils the `io.Writer` contract so it can be used to wrap an existing `io.Writer`. `errWriter` passes writes through to its underlying writer until an error is detected. From that point on, it discards any writes and returns the previous error. + +``` +func WriteResponse(w io.Writer, st Status, headers []Header, body io.Reader) error { + ew := &errWriter{Writer: w} + fmt.Fprintf(ew, "HTTP/1.1 %d %s\r\n", st.Code, st.Reason) + + for _, h := range headers { + fmt.Fprintf(ew, "%s: %s\r\n", h.Key, h.Value) + } + + fmt.Fprint(ew, "\r\n") + io.Copy(ew, body) + + return ew.err +} +``` + +Applying `errWriter` to `WriteResponse` dramatically improves the clarity of the code. Each of the operations no longer needs to bracket itself with an error check. Reporting the error is moved to the end of the function by inspecting the `ew.err` field, avoiding the annoying translation from `io.Copy`’s return values. + +* * * + +When you find yourself faced with overbearing error handling, try to extract some of the operations into a helper type. + + 1. This logic _still_ isn’t correct, can you spot the bug? + 2. `bufio.Scanner` can scan for any pattern, by default it looks for newlines. + + + +### Related posts: + + 1. [Error handling vs. exceptions redux][5] + 2. [Stack traces and the errors package][6] + 3. [Subcommand handling in Go][7] + 4. 
[Constant errors][8] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2019/01/27/eliminate-error-handling-by-eliminating-errors + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://go.googlesource.com/proposal/+/master/design/go2draft-error-handling-overview.md +[2]: https://www.amazon.com/Philosophy-Software-Design-John-Ousterhout/dp/1732102201 +[3]: https://www.youtube.com/watch?v=6wIP3rO6On8 +[4]: https://blog.golang.org/errors-are-values +[5]: https://dave.cheney.net/2014/11/04/error-handling-vs-exceptions-redux (Error handling vs. exceptions redux) +[6]: https://dave.cheney.net/2016/06/12/stack-traces-and-the-errors-package (Stack traces and the errors package) +[7]: https://dave.cheney.net/2013/11/07/subcommand-handling-in-go (Subcommand handling in Go) +[8]: https://dave.cheney.net/2016/04/07/constant-errors (Constant errors) diff --git a/sources/tech/20190129 A small notebook for a system administrator.md b/sources/tech/20190129 A small notebook for a system administrator.md new file mode 100644 index 0000000000..45d6ba50eb --- /dev/null +++ b/sources/tech/20190129 A small notebook for a system administrator.md @@ -0,0 +1,552 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A small notebook for a system administrator) +[#]: via: (https://habr.com/en/post/437912/) +[#]: author: (sukhe https://habr.com/en/users/sukhe/) + +A small notebook for a system administrator +====== + +I am a system administrator, and I need a small, lightweight notebook for every day carrying. Of course, not just to carry it, but for use it to work. + +I already have a ThinkPad x200, but it’s heavier than I would like. And among the lightweight notebooks, I did not find anything suitable. All of them imitate the MacBook Air: thin, shiny, glamorous, and they all critically lack ports. Such notebook is suitable for posting photos on Instagram, but not for work. At least not for mine. + +After not finding anything suitable, I thought about how a notebook would turn out if it were developed not with design, but the needs of real users in mind. System administrators, for example. Or people serving telecommunications equipment in hard-to-reach places — on roofs, masts, in the woods, literally in the middle of nowhere. + +The results of my thoughts are presented in this article. + + +[![Figure to attract attention][1]][2] + +Of course, your understanding of the admin notebook does not have to coincide with mine. But I hope you will find a couple of interesting thoughts here. + +Just keep in mind that «system administrator» is just the name of my position. And in fact, I have to work as a network engineer, and installer, and perform a significant part of other work related to hardware. Our company is tiny, we are far from large settlements, so all of us have to be universal specialist. + +In order not to constantly clarify «this notebook», later in the article I will call it the “adminbook”. Although it can be useful not only to administrators, but also to all who need a small, lightweight notebook with a lot of connectors. In fact, even large laptops don’t have as many connectors. + +So let's get started… + +### 1\. 
Dimensions and weight + +Of course, you want it smaller and more lightweight, but the keyboard with the screen should not be too small. And there has to be space for connectors, too. + +In my opinion, a suitable option is a notebook half the size of an x200. That is, approximately the size of a sheet of A5 paper (210x148mm). In addition, the side pockets of many bags and backpacks are designed for this size. This means that the adminbook doesn’t even have to be carried in the main compartment. + +Though I couldn’t fit everything I wanted into 210mm. To make a comfortable keyboard, the width had to be increased to 230mm. + +In the illustrations the adminbook may seem too thick. But that’s only an optical illusion. In fact, its thickness is 25mm (28mm taking the rubber feet into account). + +Its size is close to the usual hardcover book, 300-350 pages thick. + +It’s lightweight, too — about 800 grams (half the weight of the ThinkPad). + +The case of the adminbook is made of mithril aluminum. It’s a lightweight, durable metal with good thermal conductivity. + +### 2\. Keyboard and trackpoint + +A quality keyboard is very important for me. “Quality” there means the fastest possible typing and hotkey speed. It needs to be so “matter-of-fact” I don’t have to think about it at all, as if it types seemingly by force of thought. + +This is possible if the keys are normal size and in their typical positions. But the adminbook is too small for that. In width, it is even smaller than the main block of keys of a desktop keyboard. So, you have to work around that somehow. + +After a long search and numerous tests, I came up with what you see in the picture: + +![](https://habrastorage.org/webt/2-/mh/ag/2-mhagvoofl7vgqiadv3rcnclb0.jpeg) +Fig.2.1 — Adminbook keyboard + +This keyboard has the same vertical key distance as on a regular keyboard. A horizontal distance decreased just only 2mm (17 instead of 19). + +You can even type blindly on this keyboard! To do this, some keys have small bumps for tactile orientation. + +However, if you do not sit at a table, the main input method will be to press the keys “at a glance”. And here the muscle memory does not help — you have to look at the keys with your eyes. + +To hit the buttons faster, different key colors are used. + +For example, the numeric row is specifically colored gray to visually separate it from the QWERTY row, and NumLock is mapped to the “6” key, colored black to stand out. + +To the right of NumLock, gray indicates the area of the numeric keypad. These (and neighboring) buttons work like a numeric keypad in NumLock mode or when you press Fn. I must say, this is a useful feature for the admin computer — some users come up with passwords on the numpad in the form of a “cross”, “snake”, “spiral”, etc. I want to be able to type them that way too. + +As for the function keys. I don’t know about you, but it annoys me when, in a 15-inch laptop, this row is half-height and only accessible through pressing Fn. Given that there’s a lot free space around the keyboard! + +The adminbook doesn’t have free space at all. But the function keys can be pressed without Fn. These are separate keys that are even divided into groups of 4 using color coding and location. + +By the way, have you seen which key is to the right of AltGr on modern ThinkPads? I don’t know what they were thinking, but now they have PrintScreen there! + +Where? Where, I ask, is the context menu key that I use every day? It’s not there. + +So the adminbook has it. Two, even! 
You can call it up by pressing Fn + Alt. Sorry, I couldn’t map it to a separate key due to lack of space. Just in case, I added the “Right Win” key as Fn + CtrlR. Maybe some people use it for something.

However, the adminbook allows you to customize the keyboard to your liking. The keyboard is fully reprogrammable. You can assign the scan codes you need to the keys. Setting the keyboard parameters is done via the “KEY” button (Fn + F3).

Of course, the adminbook has a keyboard backlight. It is turned on with Fn + B (below the trackpoint, so you can find it even in the dark). The backlight here is similar to the ThinkPad ThinkLight. That is, it’s an LED above the display, illuminating the keyboard from the top. In this case, it is better than a backlight from below, because it allows you to distinguish the colors of the keys. In addition, the keys have several characters printed on them, while usually only the English letters are made translucent for a backlight from below.

Since we’re on the topic of characters… The red letters are Ukrainian and Russian. I specifically drew them to show that the keys have space for several alphabets: after all, English is not a native language for most of humanity.

Since there isn’t enough space for a full touchpad, the trackpoint is used as the positioning device. If you have no experience working with it — don’t worry, it’s actually quite handy. The mouse cursor moves with slight tilts of the trackpoint, like an analog joystick, and its three buttons (under the spacebar) work the same as on a mouse.

To the left of the trackpoint buttons is a fingerprint scanner. It makes it possible to log in by fingerprint, which is very convenient in most cases.

The space bar has a mark showing the location of the NFC antenna. You can simply read data from devices equipped with NFC, and you can also use it to lock the system while not in use. For example, if you wear an NFC-equipped ring, it works like this: when you take your hands off the keyboard, the computer locks after a certain time, and unlocks when you put your hands back on the keyboard.

And now the unexpected part. The keyboard and the trackpoint can work as a USB keyboard and mouse for an external computer! For this, there are USB Type C and MicroUSB connectors on the back, labeled «OTG». You can connect to an external computer using a standard USB cable from a phone (which is usually always with you).

![](https://habrastorage.org/webt/e2/wa/m5/e2wam5d1bbckfdxpvqwl-i6aqle.jpeg)
Fig.2.2 — On the right: the power connector 5.5x2.5mm, the main LAN connector, POE indicator, USB 3.0 Type A, USB Type C (with alternate HDMI mode), microSD card reader and two «magic» buttons

Switching to the external keyboard mode is done with the «K» button on the right side of the adminbook. And there are actually three modes, since the keyboard+trackpoint combo can also work as a Bluetooth keyboard/mouse!

Moreover, to save energy, the keyboard and trackpoint can work autonomously from the rest of the adminbook. When the adminbook is turned off, pressing «K» can turn on only the keyboard and trackpoint, so they can be used by connecting them to another computer.

Of course, the keyboard is water-resistant. Excess water is drained down through the drainage holes.

### 3\. Video subsystem

There are some devices that normally do not need a monitor and keyboard. For example, industrial computers, servers or DVRs. And since the monitor is «not needed», it is, in most cases, absent.
+ +And when there is a need to configure such a device from the console, it can be a big surprise that the entire office is working on laptops and there is not a single stationary monitor within reach. Therefore, in some cases you have to take a monitor with you. + +But you don’t need to worry about this if you have the adminbook. + +The fact is that the video outputs of the adminbook can switch «in the opposite direction» and work as video inputs by displaying the incoming image on the built-in screen. So, the adminbook can also replace the monitor (in addition to replace the mouse and keyboard). + +![](https://habrastorage.org/webt/4a/qr/f-/4aqrf-1sgstwwffhx-n4wr0p7ws.jpeg) +Fig.3.1 — On the left side of the adminbook, there are Mini DisplayPort, USB Type C (with alternate DisplayPort mode), SD card reader, USB 3.0 Type A connectors, HDMI, four audio connectors, VGA and power button + +Switching modes between input and output is done by pressing the «M» button on the right side of the adminbook. + +The video subsystem, as well as the keyboard, can work autonomously — that is, when used as a monitor, the other parts of the adminbook remain disabled. To turn on to this mode also uses the «M» button. + +Detailed screen adjustment (contrast, geometry, video input selection, etc.) is performed using the menu, brought up with the «SCR» button (Fn + F4). + +The adminbook has HDMI, MiniDP, VGA and USB Type C connectors (with DisplayPort and HDMI alternate mode) for video input / output. The integrated GPU can display the image simultaneously in three directions (including the integrated display). + +The adminbook display is FullHD (1920x1080), 9.5’’, matte screen. The brightness is sufficient for working outside during the day. And to do it better, the set includes folding blinds for protection from sunlight. + +![](https://habrastorage.org/webt/k-/nc/rh/k-ncrhphspvcoimfds1wurnzk3i.jpeg) +Fig.3.2 — Blinds to protect from sunlight + +In addition to video output via these connectors, the adminbook can use wireless transmission via WiDi or Miracast protocols. + +### 4\. Emulation of external drives + +One of the options for installing the operating system is to install it from a CD / DVD, but now very few computers have optical drives. USB connectors are everywhere, though. Therefore, the adminbook can pretend to be an external optical drive connected via USB. + +That allows connecting it to any computer to install an operating system on it, while also running boot discs with test programs or antiviruses. + +To connect, it uses the same USB cable that’s used for connecting it to a desktop as an external keyboard/mouse. + +The “CD” button (Fn + F2) controls the drive emulation — select a disc image (in an .iso file) and mount / unmount it. + +If you need to copy data from a computer or to it, the adminbook can emulate an external hard drive connected via the same USB cable. HDD emulation is also enabled by the “CD” button. + +This button also turns on the emulation of bootable USB flash drives. They are now used to install operating systems almost more often than CDs. Therefore, the adminbook can pretend to be a bootable flash drive. + +The .iso files are located on a separate partition of the hard disk. This allows you to use them regardless of the operating system. Moreover, in the emulation menu you can connect a virtual drive to one of the USB interfaces of the adminbook. This makes it possible to install an operating system on the adminbook using itself as an installation disc drive. 
+ +By the way, the adminbook is designed to work under Windows 10 and Debian / Kali / Ubuntu. The menu system called via function buttons with Fn works autonomously on a separate microcontroller. + +### 5\. Rear connectors + +First, a classic DB-9 connector for RS-232. Any admin notebook simply has to have it. We have it here, too, and galvanically isolated from the rest of the notebook. + +In addition to RS-232, RS-485 widely used in industrial automation is supported. It has a two-wire and four-wire version, with a terminating resistor and without, with the ability to enable a protective offset. It can also work in RS-422 and UART modes. + +All these protocols are configured in the on-screen menu, called by the «COM» button (Fn + F8). + +Since there are multiple protocols, it is possible to accidentally connect the equipment to a wrong connector and break it. + +To prevent this from happening, when you turn off the computer (or go into sleep mode, or close the display lid), the COM port switches to the default mode. This may be a “port disabled” state, or enabling one of the protocols. + +![](https://habrastorage.org/webt/uz/ii/ig/uziiig_yr86yzdcnivkbapkbbgi.jpeg) +Fig.5.1 — The rear connectors: DB-9, SATA + SATA Power, HD Mini SAS, the second wired LAN connector, two USB 3.0 Type A connectors, two USB 2.0 MicroB connectors, three USB Type C connectors, a USIM card tray, a PBD-12 pin connector (jack) + +The adminbook has one more serial port. But if the first one uses the hardware UART chipset, the second one is connected to the USB 2.0 line through the FT232H converter. + +Thanks to this, via COM2, you can exchange data via I2C, SMBus, SPI, JTAG, UART protocols or use it as 8 outputs for Bit-bang / GPIO. These protocols are used when working with microcontrollers, flashing firmware on routers and debugging any other electronics. For this purpose, pin connectors are usually used with a 2.54mm pitch. Therefore, COM2 is made to look like one of these connectors. + +![](https://habrastorage.org/webt/qd/rc/ln/qdrclnoljgnlohthok4hgjb0be4.jpeg) +Fig.5.2 — USB to UART adapter replaced by COM2 port + +There is also a secondary LAN interface at the back. Like the main one, it is gigabit-capable, with support for VLAN. Both interfaces are able to test the integrity of the cable (for pair length and short circuits), the presence of connected devices, available communication speeds, the presence of POE voltage. With the using a wiremap adapter on the other side (see chapter 17) it is possible to determine how the cable is connected to crimps. + +The network interface menu is called with the “LAN” button (Fn + F6). + +The adminbook has a combined SATA + SATA Power connector, connected directly to the chipset. That makes it possible to perform low-level tests of hard drives that do not work through USB-SATA adapters. Previously, you had to do it through ExpressCards-type adapters, but the adminbook can do without them because it has a true SATA output. + +![](https://habrastorage.org/webt/dr/si/in/drsiinbafiyz8ztzwrowtvi0lk8.jpeg) +Fig.5.3 — USB to SATA/IDE and ExpressCard to SATA adapters + +The adminbook also has a connector that no other laptops have — HD Mini SAS (SFF-8643). PCIe x4 is routed outside through this connector. Thus, it's possible to connect an external U.2 (directly) or M.2 type (through an adapter) drives. Or even a typical desktop PCIe expansion card (like a graphics card). 
+ +![](https://habrastorage.org/webt/ud/ph/86/udph860bshazyd6lvuzvwgymwnk.jpeg) +Fig.5.4 — HD Mini SAS (SFF-8643) to U.2 cable + +![](https://habrastorage.org/webt/kx/dd/99/kxdd99krcllm5ooz67l_egcttym.jpeg) +Fig.5.5 — U.2 drive + +![](https://habrastorage.org/webt/xn/de/gx/xndegxy5i1g7h2lwefs2jt1scpq.jpeg) +Fig.5.6 — U.2 to M.2 adapter + +![](https://habrastorage.org/webt/z2/dd/hd/z2ddhdoioezdwov_nv9e3b0egsa.jpeg) +Fig.5.7 — Combined adapter from U.2 to M.2 and PCIe (sample M.2 22110 drive is installed) + +Unfortunately, the limitations of the chipset don’t allow arbitrary use of PCIe lanes. In addition, the processor uses the same data lanes for PCIe and SATA. Therefore, the rear connectors can only work in two ways: +— all four PCIe lanes go to the Mini SAS connector (the second network interface and SATA don’t work) +— two PCIe lanes go to the Mini SAS, and two lanes to the second network interface and SATA connector + +On the back there are also two USB connectors (usual and Type C), which are constantly powered. That allows you to charge other devices from your notebook, even when the notebook is turned off. + +### 6\. Power Supply + +The adminbook is designed to work in difficult and unpredictable conditions, therefore, it is able to receive power in various ways. + +**Method number one** is Power Delivery. The power supply cable can be connected to any USB Type C connector (except the one marked “OTG”). + +**The second option** is from a normal 5V phone charger with a microUSB or USB Type C connector. At the same time, if you connect to the ports labeled QC 3.0, the QuickCharge fast charging standard will be supported. + +**The third option** — from any source of 12-60V DC power. To connect, use a coaxial ( also known as “barrel”) 5.5x2.5mm power connector, often found in laptop power supplies. + +For greater safety, the 12-60V power supply is galvanically isolated from the rest of the notebook. In addition, there’s reverse polarity protection. In fact, the adminbook can receive energy even if positive and negative ends are mismatched. + +![](https://habrastorage.org/webt/ju/xo/c3/juxoc3lxi7urqwgegyd6ida5h_8.jpeg) +Fig.6.1 — The cable, connecting the power supply to the adminbook (terminated with 5.5x2.5mm connectors) + +Adapters for a car cigarette lighter and crocodile clips are included in the box. + +![](https://habrastorage.org/webt/l6/-v/gv/l6-vgvqjrssirnvyi14czhi0mrc.jpeg) +Fig.6.2 — Adapter from 5.5x2.5mm coaxial connector to crocodile clips + +![](https://habrastorage.org/webt/zw/an/gs/zwangsvfdvoievatpbfxqvxrszg.png) +Fig.6.3 — Adapter to a car cigarette lighter + +**The fourth option** — Power Over Ethernet (POE) through the main network adapter. Supported options are 802.3af, 802.3at and Passive POE. Input voltage from 12 to 60V. This method is convenient if you have to work on the roof or on the tower, setting up Wi-Fi antennas. Power to them comes through Ethernet cables, and there is no other electricity on the tower. + +POE electricity can be used in three ways: + + * power the notebook only + * forward to a second network adapter and power the notebook from batteries + * power the notebook and the antenna at the same time + + + +To prevent equipment damage, if one of the Ethernet cables is disconnected, the power to the second network interface is terminated. The power can only be turned on manually through the corresponding menu item. + +When using the 802.3af / at protocols, you can set the power class that the adminbook will request from the power supply device. 
This and other POE properties are configured from the menu called with the “LAN” button (Fn + F6). + +By the way, you can remotely reset Ubiquity access points (which is done by closing certain wires in the cable) with the second network interface. + +The indicator next to the main network interface shows the presence and type of POE: green — 802.3af / at, red — Passive POE. + +**The last, fifth** power supply is the battery. Here it’s a LiPol, 42W/hour battery. + +In case the external power supply does not provide sufficient power, the missing power can be drawn from the battery. Thus, it can draw power from the battery and external sources at the same time. + +### 7\. Display unit + +The display can tilt 180 degrees, and it’s locked with latches while closed (opens with a button on the front side). When the display is closed, adminbook doesn’t react to pressing any external buttons. + +In addition to the screen, the notebook lid contains: + + * front and rear cameras with lights, microphones, activity LEDs and mechanical curtains + * LED of the upper backlight of the keyboard (similar to ThinkLight) + * LED indicators for Wi-Fi, Bluetooth, HDD and others + * wireless protocol antennas (in the blue plastic insert) + * photo sensors and LEDs for the infrared remote + * gyroscope, accelerometer, magnetometer + + + +The plastic insert for the antennas does not reach the corners of the display lid. This is done because in the «traveling» notebooks the corners are most affected by impacts, and it's desirable that they be made of metal. + +### 8\. Webcams + +The notebook has 2 webcams. The front-facing one is 8MP (4K / UltraHD), while the “selfie” one is 2MP (FullHD). Both cameras have a backlight controlled by separate buttons (Fn + G and Fn + H). Each camera has a mechanical curtain and an activity LED. The shifted mechanical curtain also turns off the microphones of the corresponding side (configurable). + +The external camera has two quick launch buttons — Fn + 1 takes an instant photo, Fn + 2 turns on video recording. The internal camera has a combination of Fn + Q and Fn + W. + +You can configure cameras and microphones from the menu called up by the “CAM” button (Fn + F10). + +### 9\. Indicator row + +It has the following indicators: Microphone, NumLock, ScrollLock, hard drive access, battery charge, external power connection, sleep mode, mobile connection, WiFi, Bluetooth. + +Three indicators are made to shine through the back side of the display lid, so that they can be seen while the lid is closed: external power connection, battery charge, sleep mode. + +Indicators are color-coded. + +Microphone — lights up red when all microphones are muted + +Battery charge: more than 60% is green, 30-60% is yellow, less than 30% is red, less than 10% is blinking red. + +External power: green — power is supplied, the battery is charged; yellow — power is supplied, the battery is charging; red — there is not enough external power to operate, the battery is drained + +Mobile: 4G (LTE) — green, 3G — yellow, EDGE / GPRS — red, blinking red — on, but no connection + +Wi-Fi: green — connected to 5 GHz, yellow — to 2.4 GHz, red — on, but not connected + +You can configure the indication with the “IND” button (Fn + F9) + +### 10\. Infrared remote control + +Near the indicators (on the front and back of the display lid) there are infrared photo sensors and LEDs to recording and playback commands from IR remotes. You can set it up, as well as emulate a remote control by pressing the “IR” button (Fn + F5). 
+ +### 11\. Wireless interfaces + +WiFi — dual-band, 802.11a/b/g/n/ac with support for Wireless Direct, Intel WI-Di / Miracast, Wake On Wireless LAN. + +You ask, why is Miracast here? Because is already embedded in many WiFi chips, so its presence does not lead to additional costs. But you can transfer the image wirelessly to TVs, projectors and TV set-top boxes, that already have Miracast built in. + +Regarding Bluetooth, there’s nothing special. It’s version 4.2 or newest. By the way, the keyboard and trackpoint have a separate Bluetooth module. This is much easier than connect them to the system-wide module. + +Of course, the adminbook has a built-in cellular modem for 4G (LTE) / 3G / EDGE / GPRS, as well as a GPS / GLONASS / Galileo / Beidou receiver. This receiver also doesn’t cost much, because it’s already built into the 4G modem. + +There is also an NFC communication module, with the antenna under the spacebar. Antennas of all other wireless interfaces are in a plastic insert above the display. + +You can configure wireless interfaces with the «WRLS» button (Fn + F7). + +### 12\. USB connectors + +In total, four USB 3.0 Type A connectors and four USB 3.1 Type C connectors are built into the adminbook. Peripherals are connected to the adminbook through these. + +One more Type C and MicroUSB are allocated only for keyboard / mouse / drive emulation (denoted as “OTG”). + +«QC 3.0» labeled MicroUSB connector can not only be used for power, but it can switch to normal USB 2.0 port mode, except using MicroB instead of normal Type A. Why is it necessary? Because to flash some electronics you sometimes need non-standard USB A to USB A cables. + +In order to not make adapters outselves, you can use a regular phone charging cable by plugging it into this Micro B connector. Or use an USB A to USB Type C cable (if you have one). + +![](https://habrastorage.org/webt/0p/90/7e/0p907ezbunekqwobeogjgs5fgsa.jpeg) +Fig.12.1 — Homemade USB A to USB A cable + +Since USB Type C supports alternate modes, it makes sense to use it. Alternate modes are when the connector works as HDMI or DisplayPort video outputs. Though you’ll need adapters to connect it to a TV or monitor. Or appropriate cables that have Type C on one end and HDMI / DP on the other. However, USB Type C to USB Type C cables might soon become the most common video transfer cable. + +The Type C connector on the left side of the adminbook supports an alternate Display Port mode, and on the right side, HDMI. Like the other video outputs of the adminbook, they can work as both input and output. + +The one thing left to say is that Type C is bidirectional in regard to power delivery — it can both take in power as well as output it. + +### 13\. Other + +On the left side there are four audio connectors: Line In, Line Out, Microphone and the combo headset jack (headphones + microphone). Supports simple stereo, quad and 5.1 mode output. + +Audio outputs are specially placed next to the video connectors, so that when connected to any equipment, the wires are on one side. + +Built-in speakers are on the sides. Outside, they are covered with grills and acoustic fabric with water-repellent impregnation. + +There are also two slots for memory cards — full-size SD and MicroSD. If you think that the first slot is needed only for copying photos from the camera — you are mistaken. Now, both single-board computers like Raspberry Pi and even rack-mount servers are loaded from SD cards. MicroSD cards are also commonly found outside of phones. 
In general, you need both card slots. + +Sensors more familiar to phones — a gyroscope, an accelerometer and a magnetometer — are built into the lid of the notebook. Thanks to this, one can determine where the notebook cameras are directed and use this for augmented reality, as well as navigation. Sensors are controlled via the menu using the “SNSR” button (Fn + F11). + +Among the function buttons with Fn, F1 (“MAN”) and F12 (“ETC”) I haven’t described yet. The first is a built-in guide on connectors, modes and how to use the adminbook. The second is the settings of non-standard subsystems that have not separate buttons. + +### 14\. What's inside + +The adminbook is based on the Core i5-7Y57 CPU (Kaby Lake architecture). Although it’s less of a CPU, but more of a real SOC (System On a Chip). That is, almost the entire computer (without peripherals) fits in one chip the size of a thumb nail (2x1.6 cm). + +It emits from 3.5W to 7W of heat (depends on the frequency). So, a passive cooling system is adequate in this case. + +8GB of RAM are installed by default, expandable up to 16GB. + +A 256GB M.2 2280 SSD, connected with two PCIe lanes, is used as the hard drive. + +Wi-Fi + Bluetooth and WWAN + GNSS adapters are also designed as M.2 modules. + +RAM, the hard drive and wireless adapters are located on the top of the motherboard and can be replaced by the user — just unscrew and lift the keyboard. + +The battery is assembled from four LP545590 cells and can also be replaced. + +SOC and other irreplaceable hardware are located on the bottom of the motherboard. The heating components for cooling are pressed directly against the case. + +External connectors are located on daughter boards connected to the motherboard via ribbon cables. That allows to release different versions of the adminbook based on the same motherboard. + +For example, one of the possible version: + +![](https://habrastorage.org/webt/j9/sw/vq/j9swvqfi1-ituc4u9nr6-ijv3nq.jpeg) +Fig.14.1 — Adminbook A4 (front view) + +![](https://habrastorage.org/webt/pw/fq/ag/pwfqagvrluf1dbnmcd0rt-0eyc0.jpeg) +Fig.14.2 — Adminbook A4 (back view) + +![](https://habrastorage.org/webt/mn/ir/8i/mnir8in1pssve0m2tymevz2sue4.jpeg) +Fig.14.3 — Adminbook A4 (keyboard) + +This is an adminbook with a 12.5” display, its overall dimensions are 210x297mm (A4 paper format). The keyboard is full-size, with a standard key size (only the top row is a bit narrower). All the standard keys are there, except for the numpad and the Right Win, available with Fn keys. And trackpad added. + +### 15\. The underside of the adminbook + +Not expecting anything interesting from the bottom? But there is! + +First I will say a few words about the rubber feet. On my ThinkPad, they sometimes fall away and lost. I don't know if it's a bad glue, or a backpack is not suitable for a notebook, but it happens. + +Therefore, in the adminbook, the rubber feet are screwed in (the screws are slightly buried in rubber, so as not to scratch the tables). The feet are sufficiently streamlined so that they cling less to other objects. + +On the bottom there are visible drainage holes marked with a water drop. + +And the four threaded holes for connecting the adminbook with fasteners. + +![](https://habrastorage.org/webt/3d/q9/ku/3dq9kus6t7ql3rh5mbpfo3_xqng.jpeg) +Fig.15.1 — The underside of the adminbook + +Large hole in the center has a tripod thread. + +![](https://habrastorage.org/webt/t5/e5/ps/t5e5ps3iasu2j-22uc2rgl_5x_y.jpeg) +Fig.15.2 — Camera clamp mount + +Why is it necessary? 
Because sometimes you have to hang on high, holding the mast with one hand, holding the notebook with the second, and typing something on the third… Unfortunately, I am not Shiva, so these tricks are not easy for me. And you can just screw the adminbook by a camera mount to any protruding part of the structure and free your hands! + +No protruding parts? No problem. A plate with neodymium magnets is screwed to three other holes and the adminbook is magnetised to any steel surface — even vertical! As you see, opening the display by 180° is pretty useful. + +![](https://habrastorage.org/webt/ua/28/ub/ua28ubhpyrmountubiqjegiibem.jpeg) +Fig.15.3 — Fastening with magnets and shaped holes for nails / screws + +And if there is no metal? For example, working on the roof, and next to only a wooden wall. Then you can screw 1-2 screws in the wall and hang the adminbook on them. To do this, there are special slots in the mount, plus an eyelet on the handle. + +For especially difficult cases, there’s an arm mount. This is not very convenient, but better than nothing. Besides, it allows you to navigate even with a working notebook. + +![](https://habrastorage.org/webt/tp/fo/0y/tpfo0y_8gku4bmlbeqwfux1j4me.jpeg) +Fig.15.4 — Arm mount + +In general, these three holes use a regular metric thread, specifically so that you can make some DIY fastening and fasten it with ordinary screws. + +Except fasteners, an additional radiator can be screwed to these holes, so that you can work for a long time under high load or at high ambient temperature. + +![](https://habrastorage.org/webt/k4/jo/eq/k4joeqhmaxgvzhnxno6z3alg5go.jpeg) +Fig.15.5 — Adminbook with additional radiator + +### 16\. Accessories + +The adminbook has some unique features, and some of them are implemented using equipment designed specifically for the adminbook. Therefore, these accessories are immediately included. However, non-unique accessories are also available immediately. + +Here is a complete list of both: + + * fasteners with magnets + * arm mount + * heatsink + * screen blinds covering it from sunlight + * HD Mini SAS to U.2 cable + * combined adapter from U.2 to M.2 and PCIe + * power cable, terminated by coaxial 5.5x2.5mm connectors + * adapter from power cable to cigarette lighter + * adapter from power cable to crocodile clips + * different adapters from the power cable to coaxial connectors + * universal power supply and power cord from it into the outlet + + + +### 17\. Power supply + +Since this is a power supply for a system administrator's notebook, it would be nice to make it universal, capable of powering various electronic devices. Fortunately, the vast majority of devices are connected via coaxial connectors or USB. I mean devices with external power supplies: routers, switches, notebooks, nettops, single-board computers, DVRs, IPTV set top boxes, satellite tuners and more. + +![](https://habrastorage.org/webt/jv/zs/ve/jvzsveqavvi2ihuoajjnsr1xlp0.jpeg) +Fig.17.1 — Adapters from 5.5x2.5mm coaxial connector to other types of connectors + +There aren’t many connector types, which allows to get by with an adjustable-voltage PSU and adapters for the necessary connectors. It also needs to support various power delivery standards. 
+ +In our case, the power supply supports the following modes: + + * Power Delivery — displayed as **[pd]** + * Quick Charge **[qc]** + * 802.3af/at **[at]** + * voltage from 5 to 54 volts in 0.5V increments (displayed voltage) + + + +![](https://habrastorage.org/webt/fj/jm/qv/fjjmqvdhezywuyh9ew3umy9wgmg.jpeg) +Fig.17.2 — Mode display on the 7-segment indicator (1.9. = 19.5V) + +![](https://habrastorage.org/webt/h9/zg/u0/h9zgu0ngl01rvhgivlw7fb49gpq.jpeg) +Fig.17.3 — Front and top sides of power supply + +USB outputs on the power supply (5V 2A) are always on. On the other outputs the voltage is applied by pressing the ON/OFF button. + +The desired mode is selected with the MODE button and this selection is remembered even when the power is turned off. The modes are listed like this: pd, qc, at, then a series of voltages. + +Voltage increases by pressing and holding the MODE button, decreases by short pressing. Step to the right — 1 Volt, step to the left — 0.5 Volt. Half a volt is needed because some equipment requires, for example, 19.5 volts. These half volts are displayed on the display with decimal points (19V -> **[19]** , 19.5V -> **[1.9.]** ). + +When power is on, the green LED is on. When a short-circuit or overcurrent protection is triggered, **[SH]** is displayed, and the LED lights up red. + +In the Power Delivery and Quick Charge modes, voltage is applied to the USB outputs (Type A and Type C). Only one of them can be used at one time. + +In 802.3af/at modes, the power supply acts as an injector, combining the supply voltage with data from the LAN connector and supplying it to the POE connector. Power is supplied only if a device with 802.3af or 802.3at support is plugged into the POE connector. + +But in the simple voltage supply mode, electricity throu the POE connector is issued immediately, without any checks. This is the so-called Passive POE — positive charge goes to conductors 4 and 5, and negative charge to conductors 7 and 8. At the same time, the voltage is applied to the coaxial connector. Adapters for various types of connectors are used in this mode. + +The power supply unit has a built-in button to remotely reset Ubiquity access points. This is a very useful feature that allows you to reset the antenna to factory settings without having to climb on the mast. I don’t know — is any other manufacturers support a feature like this? + +The power supply also has the passive wiremap adapter, which allows you to determine the correct Ethernet cable crimping. The active part is located in the Ethernet ports of the adminbook. + +![](https://habrastorage.org/webt/pp/bm/ws/ppbmws4g1o5j05eyqqulnwuuwge.jpeg) +Fig.17.4 — Back side and wiremap adapter + +Of course, the network cable tester built into the adminbook will not replace a professional OTDR, but for most tasks it will be enough. + +To prevent overheating, part of the PSU’s body acts as an aluminum heatsink. Power supply power — 65 watts, size 10x5x4cm. + +### 18\. Afterword + +“It won’t fit into such a small case!” — the sceptics will probably say. To be frank, I also sometimes think that way when re-reading what I wrote above. + +And then I open the 3D model and see, that all parts fits. Of course, I am not an electronic engineer, and for sure I miss some important things. But, I hope that if there are mistakes, they are “overcorrections”. That is, real engineers would fit all of that into a case even smaller. 
+ +By and large, the adminbook can be divided into 5 functional parts: + + * the usual part, as in all notebooks — processor, memory, hard drive, etc. + * keyboard and trackpoint that can work separately + * autonomous video subsystem + * subsystem for managing non-standard features (enable / disable POE, infrared remote control, PCIe mode switching, LAN testing, etc.) + * power subsystem + + + +If we consider them separately, then everything looks quite feasible. + +The **SOC Kaby Lake** contains a CPU, a graphics accelerator, a memory controller, PCIe, SATA controller, USB controller for 6 USB3 and 10 USB2 outputs, Gigabit Ethernet controller, 4 lanes to connect webcams, integrated audio and etc. + +All that remains is to trace the lanes to connectors and supply power to it. + +**Keyboard and trackpoint** is a separate module that connects via USB to the adminbook or to an external connector. Nothing complicated here: USB and Bluetooth keyboards are very widespread. In our case, in addition, needs to make a rewritable table of scan codes and transfer non-standard keys over a separate interface other than USB. + +**The video subsystem** receives the video signal from the adminbook or from external connectors. In fact, this is a regular monitor with a video switchboard plus a couple of VGA converters. + +**Non-standard features** are managed independently of the operating system. The easiest way to do it with via a separate microcontroller which receives codes for pressing non-standard keys (those that are pressed with Fn) and performs the corresponding actions. + +Since you have to display a menu to change the settings, the microcontroller has a video output, connected to the adminbook for the duration of the setup. + +**The internal PSU** is galvanically isolated from the rest of the system. Why not? On habr.com there was an article about making a 100W, 9.6mm thickness planar transformer! And it only costs $0.5. + +So the electronic part of the adminbook is quite feasible. There is the programming part, and I don’t know which part will harder. + +This concludes my fairly long article. It long, even though I simplified, shortened and threw out minor details. + +The ideal end of the article was a link to an online store where you can buy an adminbook. But it's not yet designed and released. Since this requires money. + +Unfortunately, I have no experience with Kickstarter or Indigogo. Maybe you have this experience? Let's do it together! + +### Update + +Many people asked for a simplified version. Ok. Done. Sorry — just a 3d model, without render. + +Deleted: second LAN adapter, micro SD card reader, one USB port Type C, second camera, camera lights and camera curtines, display latch, unnecessary audio connectors. + +Also in this version there will be no infrared remote control, a reprogrammable keyboard, QC 3.0 charging standard, and getting power by POE. 
+ +![](https://habrastorage.org/webt/3l/lg/vm/3llgvmv4pebiruzgldqckab0uyc.jpeg) +![](https://habrastorage.org/webt/sp/x6/rv/spx6rvmn6zlumbwg46xwfmjnako.jpeg) +![](https://habrastorage.org/webt/sm/g0/xz/smg0xzdspfm3vr3gep__6bcqae8.jpeg) + + +-------------------------------------------------------------------------------- + +via: https://habr.com/en/post/437912/ + +作者:[sukhe][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://habr.com/en/users/sukhe/ +[b]: https://github.com/lujun9972 +[1]: https://habrastorage.org/webt/_1/mp/vl/_1mpvlyujldpnad0cvvzvbci50y.jpeg +[2]: https://habrastorage.org/webt/mr/m6/d3/mrm6d3szvghhpghfchsl_-lzgb4.jpeg diff --git a/sources/tech/20190129 Create an online store with this Java-based framework.md b/sources/tech/20190129 Create an online store with this Java-based framework.md new file mode 100644 index 0000000000..b72a8551de --- /dev/null +++ b/sources/tech/20190129 Create an online store with this Java-based framework.md @@ -0,0 +1,235 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create an online store with this Java-based framework) +[#]: via: (https://opensource.com/article/19/1/scipio-erp) +[#]: author: (Paul Piper https://opensource.com/users/madppiper) + +Create an online store with this Java-based framework +====== +Scipio ERP comes with a large range of applications and functionality. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_whitehurst_money.png?itok=ls-SOzM0) + +So you want to sell products or services online, but either can't find a fitting software or think customization would be too costly? [Scipio ERP][1] may just be what you are looking for. + +Scipio ERP is a Java-based open source e-commerce framework that comes with a large range of applications and functionality. The project was forked from [Apache OFBiz][2] in 2014 with a clear focus on better customization and a more modern appeal. The e-commerce component is quite extensive and works in a multi-store setup, internationally, and with a wide range of product configurations, and it's also compatible with modern HTML frameworks. The software also provides standard applications for many other business cases, such as accounting, warehouse management, or sales force automation. It's all highly standardized and therefore easy to customize, which is great if you are looking for more than a virtual cart. + +The system makes it very easy to keep up with modern web standards, too. All screens are constructed using the system's "[templating toolkit][3]," an easy-to-learn macro set that separates HTML from all applications. Because of it, every application is already standardized to the core. Sounds confusing? It really isn't—it all looks a lot like HTML, but you write a lot less of it. + +### Initial setup + +Before you get started, make sure you have Java 1.8 (or greater) SDK and a Git client installed. Got it? Great! Next, check out the master branch from GitHub: + +``` +git clone https://github.com/ilscipio/scipio-erp.git +cd scipio-erp +git checkout master +``` + +To set up the system, simply run **./install.sh** and select either option from the command line. Throughout development, it is best to stick to an **installation for development** (Option 1), which will also install a range of demo data. 
For professional installations, you can modify the initial config data ("seed data") so it will automatically set up the company and catalog data for you. By default, the system will run with an internal database, but it [can also be configured][4] with a wide range of relational databases such as PostgreSQL and MariaDB. + +![Setup wizard][6] + +Follow the setup wizard to complete your initial configuration, + +Start the system with **./start.sh** and head over to **** to complete the configuration. If you installed with demo data, you can log in with username **admin** and password **scipio**. During the setup wizard, you can set up a company profile, accounting, a warehouse, your product catalog, your online store, and additional user profiles. Keep the website entries on the product store configuration screen for now. The system allows you to run multiple webstores with different underlying code; unless you want to do that, it is easiest to stick to the defaults. + +Congratulations, you just installed Scipio ERP! Play around with the screens for a minute or two to get a feel for the functionality. + +### Shortcuts + +Before you jump into the customization, here are a few handy commands that will help you along the way: + + * Create a shop-override: **./ant create-component-shop-override** + * Create a new component: **./ant create-component** + * Create a new theme component: **./ant create-theme** + * Create admin user: **./ant create-admin-user-login** + * Various other utility functions: **./ant -p** + * Utility to install & update add-ons: **./git-addons help** + + + +Also, make a mental note of the following locations: + + * Scripts to run Scipio as a service: **/tools/scripts/** + * Log output directory: **/runtime/logs** + * Admin application: **** + * E-commerce application: **** + + + +Last, Scipio ERP structures all code in the following five major directories: + + * Framework: framework-related sources, the application server, generic screens, and configurations + * Applications: core applications + * Addons: third-party extensions + * Themes: modifies the look and feel + * Hot-deploy: your own components + + + +Aside from a few configurations, you will be working within the hot-deploy and themes directories. + +### Webstore customizations + +To really make the system your own, start thinking about [components][7]. Components are a modular approach to override, extend, and add to the system. Think of components as self-contained web modules that capture information on databases ([entity][8]), functions ([services][9]), screens ([views][10]), [events and actions][11], and web applications. Thanks to components, you can add your own code while remaining compatible with the original sources. + +Run **./ant create-component-shop-override** and follow the steps to create your webstore component. A new directory will be created inside of the hot-deploy directory, which extends and overrides the original e-commerce application. + +![component directory structure][13] + +A typical component directory structure. 
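
To make that concrete, here is a rough, hypothetical round trip from the project root. The component name `mycomponent` is only an example (it matches the names used in the snippets further down), and the exact prompts and output of the Ant task may differ slightly between versions:

```
$ ./ant create-component-shop-override      # answer the prompts, e.g. with "mycomponent"
$ ls hot-deploy/mycomponent
config  data  entitydef  script  servicedef  src  webapp  widget
$ ./start.sh                                # restart so the new component gets loaded
```
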
+ +Your component will have the following directory structure: + + * config: configurations + * data: seed data + * entitydef: database table definitions + * script: Groovy script location + * servicedef: service definitions + * src: Java classes + * webapp: your web application + * widget: screen definitions + + + +Additionally, the **ivy.xml** file allows you to add Maven libraries to the build process and the **ofbiz-component.xml** file defines the overall component and web application structure. Apart from the obvious, you will also find a **controller.xml** file inside the web apps' **WEB-INF** directory. This allows you to define request entries and connect them to events and screens. For screens alone, you can also use the built-in CMS functionality, but stick to the core mechanics first. Familiarize yourself with **/applications/shop/** before introducing changes. + +#### Adding custom screens + +Remember the [templating toolkit][3]? You will find it used on every screen. Think of it as a set of easy-to-learn macros that structure all content. Here's an example: + +``` +<@section title="Title"> +    <@heading id="slider">Slider +    <@row> +        <@cell columns=6> +            <@slider id="" class="" controls=true indicator=true> +                <@slide link="#" image="https://placehold.it/800x300">Just some content… +                <@slide title="This is a title" link="#" image="https://placehold.it/800x300"> +            +        +        <@cell columns=6>Second column +    + +``` + +Not too difficult, right? Meanwhile, themes contain the HTML definitions and styles. This hands the power over to your front-end developers, who can define the output of each macro and otherwise stick to their own build tools for development. + +Let's give it a quick try. First, define a request on your own webstore. You will modify the code for this. A built-in CMS is also available at **** , which allows you to create new templates and screens in a much more efficient way. It is fully compatible with the templating toolkit and comes with example templates that can be adopted to your preferences. But since we are trying to understand the system here, let's go with the more complicated way first. + +Open the **[controller.xml][14]** file inside of your shop's webapp directory. The controller keeps track of request events and performs actions accordingly. The following will create a new request under **/shop/test** : + +``` + + +      +      + +``` + +You can define multiple responses and, if you want, you could use an event or a service call inside the request to determine which response you may want to use. I opted for a response of type "view." A view is a rendered response; other types are request-redirects, forwards, and alike. The system comes with various renderers and allows you to determine the output later; to do so, add the following: + +``` + + +``` + +Replace **my-component** with your own component name. Then you can define your very first screen by adding the following inside the tags within the **widget/CommonScreens.xml** file: + +``` + +       
                  +            +            +            +                +                    +                        +                    +                +            +       
                  +   
                  +``` + +Screens are actually quite modular and consist of multiple elements ([widgets, actions, and decorators][15]). For the sake of simplicity, leave this as it is for now, and complete the new webpage by adding your very first templating toolkit file. For that, create a new **webapp/mycomponent/test/test.ftl** file and add the following: + +``` +<@alert type="info">Success! +``` + +![Custom screen][17] + +A custom screen. + +Open **** and marvel at your own accomplishments. + +#### Custom themes + +Modify the look and feel of the shop by creating your very own theme. All themes can be found as components inside of the themes folder. Run **./ant create-theme** to add your own. + +![theme component layout][19] + +A typical theme component layout. + +Here's a list of the most important directories and files: + + * Theme configuration: **data/*ThemeData.xml** + * Theme-specific wrapping HTML: **includes/*.ftl** + * Templating Toolkit HTML definition: **includes/themeTemplate.ftl** + * CSS class definition: **includes/themeStyles.ftl** + * CSS framework: **webapp/theme-title/*** + + + +Take a quick look at the Metro theme in the toolkit; it uses the Foundation CSS framework and makes use of all the things above. Afterwards, set up your own theme inside your newly constructed **webapp/theme-title** directory and start developing. The Foundation-shop theme is a very simple shop-specific theme implementation that you can use as a basis for your own work. + +Voila! You have set up your own online store and are ready to customize! + +![Finished Scipio ERP shop][21] + +A finished shop based on Scipio ERP. + +### What's next? + +Scipio ERP is a powerful framework that simplifies the development of complex e-commerce applications. For a more complete understanding, check out the project [documentation][7], try the [online demo][22], or [join the community][23]. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/scipio-erp + +作者:[Paul Piper][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/madppiper +[b]: https://github.com/lujun9972 +[1]: https://www.scipioerp.com +[2]: https://ofbiz.apache.org/ +[3]: https://www.scipioerp.com/community/developer/freemarker-macros/ +[4]: https://www.scipioerp.com/community/developer/installation-configuration/configuration/#database-configuration +[5]: /file/419711 +[6]: https://opensource.com/sites/default/files/uploads/setup_step5_sm.jpg (Setup wizard) +[7]: https://www.scipioerp.com/community/developer/architecture/components/ +[8]: https://www.scipioerp.com/community/developer/entities/ +[9]: https://www.scipioerp.com/community/developer/services/ +[10]: https://www.scipioerp.com/community/developer/views-requests/ +[11]: https://www.scipioerp.com/community/developer/events-actions/ +[12]: /file/419716 +[13]: https://opensource.com/sites/default/files/uploads/component_structure.jpg (component directory structure) +[14]: https://www.scipioerp.com/community/developer/views-requests/request-controller/ +[15]: https://www.scipioerp.com/community/developer/views-requests/screen-widgets-decorators/ +[16]: /file/419721 +[17]: https://opensource.com/sites/default/files/uploads/success_screen_sm.jpg (Custom screen) +[18]: /file/419726 +[19]: https://opensource.com/sites/default/files/uploads/theme_structure.jpg (theme component layout) +[20]: /file/419731 +[21]: https://opensource.com/sites/default/files/uploads/finished_shop_1_sm.jpg (Finished Scipio ERP shop) +[22]: https://www.scipioerp.com/demo/ +[23]: https://forum.scipioerp.com/ diff --git a/sources/tech/20190129 How To Configure System-wide Proxy Settings Easily And Quickly.md b/sources/tech/20190129 How To Configure System-wide Proxy Settings Easily And Quickly.md new file mode 100644 index 0000000000..0848111d08 --- /dev/null +++ b/sources/tech/20190129 How To Configure System-wide Proxy Settings Easily And Quickly.md @@ -0,0 +1,309 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Configure System-wide Proxy Settings Easily And Quickly) +[#]: via: (https://www.ostechnix.com/how-to-configure-system-wide-proxy-settings-easily-and-quickly/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +How To Configure System-wide Proxy Settings Easily And Quickly +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/01/ProxyMan-720x340.png) + +Today, we will be discussing a simple, yet useful command line utility named **“ProxyMan”**. As the name says, it helps you to apply and manage proxy settings on our system easily and quickly. Using ProxyMan, we can set or unset proxy settings automatically at multiple points, without having to configure them manually one by one. It also allows you to save the settings for later use. In a nutshell, ProxyMan simplifies the task of configuring system-wide proxy settings with a single command. It is free, open source utility written in **Bash** and standard POSIX tools, no dependency required. ProxyMan can be helpful if you’re behind a proxy server and you want to apply the proxy settings system-wide in one go. 
+ +### Installing ProxyMan + +Download the latest ProxyMan version from the [**releases page**][1]. It is available as zip and tar file. I am going to download zip file. + +``` +$ wget https://github.com/himanshub16/ProxyMan/archive/v3.1.1.zip +``` + +Extract the downloaded zip file: + +``` +$ unzip v3.1.1.zip +``` + +The above command will extract the contents in a folder named “ **ProxyMan-3.1.1** ” in your current working directory. Cd to that folder and install ProxyMan as shown below: + +``` +$ cd ProxyMan-3.1.1/ + +$ ./install +``` + +If you see **“Installed successfully”** message as output, congratulations! ProxyMan has been installed. + +Let us go ahead and see how to configure proxy settings. + +### Configure System-wide Proxy Settings + +ProxyMan usage is pretty simple and straight forward. Like I already said, It allows us to set/unset proxy settings, list current proxy settings, list available configs, save settings in a profile and load profile later. Proxyman currently manages proxy settings for **GNOME gsettings** , **bash** , **apt** , **dnf** , **git** , **npm** and **Dropbox**. + +**Set proxy settings** + +To set proxy settings system-wide, simply run: + +``` +$ proxyman set +``` + +You will asked to answer a series of simple questions such as, + + 1. HTTP Proxy host IP address, + 2. HTTP port, + 3. Use username/password authentication, + 4. Use same settings for HTTPS and FTP, + 5. Save profile for later use, + 6. Finally, choose the list of targets to apply the proxy settings. You can choose all at once or separate multiple choices with space. + + + +Sample output for the above command: + +``` +Enter details to set proxy +HTTP Proxy Host 192.168.225.22 +HTTP Proxy Port 8080 +Use auth - userid/password (y/n)? n +Use same for HTTPS and FTP (y/n)? y +No Proxy (default localhost,127.0.0.1,192.168.1.1,::1,*.local) +Save profile for later use (y/n)? y +Enter profile name : proxy1 +Saved to /home/sk/.config/proxyman/proxy1. + +Select targets to modify +| 1 | All of them ... Don't bother me +| 2 | Terminal / bash / zsh (current user) +| 3 | /etc/environment +| 4 | apt/dnf (Package manager) +| 5 | Desktop settings (GNOME/Ubuntu) +| 6 | npm & yarn +| 7 | Dropbox +| 8 | Git +| 9 | Docker + +Separate multiple choices with space +? 1 +Setting proxy... +To activate in current terminal window +run source ~/.bashrc +[sudo] password for sk: +Done +``` + +**List proxy settings** + +To view the current proxy settings, run: + +``` +$ proxyman list +``` + +Sample output: + +``` +Hmm... listing it all + +Shell proxy settings : /home/sk/.bashrc +export http_proxy="http://192.168.225.22:8080/" +export ftp_proxy="ftp://192.168.225.22:8080/" +export rsync_proxy="rsync://192.168.225.22:8080/" +export no_proxy="localhost,127.0.0.1,192.168.1.1,::1,*.local" +export HTTP_PROXY="http://192.168.225.22:8080/" +export FTP_PROXY="ftp://192.168.225.22:8080/" +export RSYNC_PROXY="rsync://192.168.225.22:8080/" +export NO_PROXY="localhost,127.0.0.1,192.168.1.1,::1,*.local" +export https_proxy="/" +export HTTPS_PROXY="/" + +git proxy settings : +http http://192.168.225.22:8080/ +https https://192.168.225.22:8080/ + +APT proxy settings : +3 +Done +``` + +**Unset proxy settings** + +To unset proxy settings, the command would be: + +``` +$ proxyman unset +``` + +You can unset proxy settings for all targets at once by entering number **1** or enter any given number to unset proxy settings for the respective target. + +``` +Select targets to modify +| 1 | All of them ... 
Don't bother me +| 2 | Terminal / bash / zsh (current user) +| 3 | /etc/environment +| 4 | apt/dnf (Package manager) +| 5 | Desktop settings (GNOME/Ubuntu) +| 6 | npm & yarn +| 7 | Dropbox +| 8 | Git +| 9 | Docker + +Separate multiple choices with space +? 1 +Unset all proxy settings +To activate in current terminal window +run source ~/.bashrc +Done +``` + +To apply the changes, simply run: + +``` +$ source ~/.bashrc +``` + +On ZSH, use this command instead: + +``` +$ source ~/.zshrc +``` + +To verify if the proxy settings have been removed, simply run “proxyman list” command: + +``` +$ proxyman list +Hmm... listing it all + +Shell proxy settings : /home/sk/.bashrc +None + +git proxy settings : +http +https + +APT proxy settings : +None +Done +``` + +As you can see, there is no proxy settings for all targets. + +**View list of configs (profiles)** + +Remember we saved proxy settings as a profile in the “Set proxy settings” section? You can view the list of available profiles with command: + +``` +$ proxyman configs +``` + +Sample output: + +``` +Here are available configs! +proxy1 +Done +``` + +As you can see, we have only one profile i.e **proxy1**. + +**Load profiles** + +The profiles will be available until you delete them permanently, so you can load a profile (E.g proxy1) at any time using command: + +``` +$ proxyman load proxy1 +``` + +This command will list the proxy settings for proxy1 profile. You can apply these settings to all or multiple targets by entering the respective number with space-separated. + +``` +Loading profile : proxy1 +HTTP > 192.168.225.22 8080 +HTTPS > 192.168.225.22 8080 +FTP > 192.168.225.22 8080 +no_proxy > localhost,127.0.0.1,192.168.1.1,::1,*.local +Use auth > n +Use same > y +Config > +Targets > +Select targets to modify +| 1 | All of them ... Don't bother me +| 2 | Terminal / bash / zsh (current user) +| 3 | /etc/environment +| 4 | apt/dnf (Package manager) +| 5 | Desktop settings (GNOME/Ubuntu) +| 6 | npm & yarn +| 7 | Dropbox +| 8 | Git +| 9 | Docker + +Separate multiple choices with space +? 1 +Setting proxy... +To activate in current terminal window +run source ~/.bashrc +Done +``` + +Finally, activate the changes using command: + +``` +$ source ~/.bashrc +``` + +For ZSH: + +``` +$ source ~/.zshrc +``` + +**Deleting profiles** + +To delete a profile, run: + +``` +$ proxyman delete proxy1 +``` + +Output: + +``` +Deleting profile : proxy1 +Done +``` + +To display help, run: + +``` +$ proxyman help +``` + + +### Conclusion + +Before I came to know about Proxyman, I used to apply proxy settings manually at multiple places, for example package manager, web browser etc. Not anymore! ProxyMan did this job automatically in couple seconds. + +And, that’s all for now. Hope this was useful. More good stuffs to come. Stay tuned. + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-configure-system-wide-proxy-settings-easily-and-quickly/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/himanshub16/ProxyMan/releases/ diff --git a/sources/tech/20190129 You shouldn-t name your variables after their types for the same reason you wouldn-t name your pets -dog- or -cat.md b/sources/tech/20190129 You shouldn-t name your variables after their types for the same reason you wouldn-t name your pets -dog- or -cat.md new file mode 100644 index 0000000000..75ad9e93c6 --- /dev/null +++ b/sources/tech/20190129 You shouldn-t name your variables after their types for the same reason you wouldn-t name your pets -dog- or -cat.md @@ -0,0 +1,83 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (You shouldn’t name your variables after their types for the same reason you wouldn’t name your pets “dog” or “cat”) +[#]: via: (https://dave.cheney.net/2019/01/29/you-shouldnt-name-your-variables-after-their-types-for-the-same-reason-you-wouldnt-name-your-pets-dog-or-cat) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +You shouldn’t name your variables after their types for the same reason you wouldn’t name your pets “dog” or “cat” +====== + +The name of a variable should describe its contents, not the _type_ of the contents. Consider this example: + +``` +var usersMap map[string]*User +``` + +What are some good properties of this declaration? We can see that it’s a map, and it has something to do with the `*User` type, so that’s probably good. But `usersMap` _is_ a map and Go, being a statically typed language, won’t let us accidentally use a map where a different type is required, so the `Map` suffix as a safety precaution is redundant. + +Now, consider what happens if we declare other variables using this pattern: + +``` +var ( + companiesMap map[string]*Company + productsMap map[string]*Products +) +``` + +Now we have three map type variables in scope, `usersMap`, `companiesMap`, and `productsMap`, all mapping `string`s to different `struct` types. We know they are maps, and we also know that their declarations prevent us from using one in place of another—​the compiler will throw an error if we try to use `companiesMap` where the code is expecting a `map[string]*User`. In this situation it’s clear that the `Map` suffix does not improve the clarity of the code, its just extra boilerplate to type. + +My suggestion is avoid any suffix that resembles the _type_ of the variable. Said another way, if `users` isn’t descriptive enough, then `usersMap` won’t be either. + +This advice also applies to function parameters. For example: + +``` +type Config struct { + // +} + +func WriteConfig(w io.Writer, config *Config) +``` + +Naming the `*Config` parameter `config` is redundant. We know it’s a pointer to a `Config`, it says so right there in the declaration. Instead consider if `conf` will do, or maybe just `c` if the lifetime of the variable is short enough. + +This advice is more than just a desire for brevity. 
If there is more that one `*Config` in scope at any one time, calling them `config1` and `config2` is less descriptive than calling them `original` and `updated` . The latter are less likely to be accidentally transposed—something the compiler won’t catch—while the former differ only in a one character suffix. + +Finally, don’t let package names steal good variable names. The name of an imported identifier includes its package name. For example the `Context` type in the `context` package will be known as `context.Context` when imported into another package . This makes it impossible to use `context` as a variable or type, unless of course you rename the import, but that’s throwing good after bad. This is why the local declaration for `context.Context` types is traditionally `ctx`. eg. + +``` +func WriteLog(ctx context.Context, message string) +``` + +* * * + +A variable’s name should be independent of its type. You shouldn’t name your variables after their types for the same reason you wouldn’t name your pets “dog” or “cat”. You shouldn’t include the name of your type in the name of your variable for the same reason. + +### Related posts: + + 1. [On declaring variables][1] + 2. [Go, without package scoped variables][2] + 3. [A whirlwind tour of Go’s runtime environment variables][3] + 4. [Declaration scopes in Go][4] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2019/01/29/you-shouldnt-name-your-variables-after-their-types-for-the-same-reason-you-wouldnt-name-your-pets-dog-or-cat + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://dave.cheney.net/2014/05/24/on-declaring-variables (On declaring variables) +[2]: https://dave.cheney.net/2017/06/11/go-without-package-scoped-variables (Go, without package scoped variables) +[3]: https://dave.cheney.net/2015/11/29/a-whirlwind-tour-of-gos-runtime-environment-variables (A whirlwind tour of Go’s runtime environment variables) +[4]: https://dave.cheney.net/2016/12/15/declaration-scopes-in-go (Declaration scopes in Go) diff --git a/sources/tech/20190131 VA Linux- The Linux Company That Once Ruled NASDAQ.md b/sources/tech/20190131 VA Linux- The Linux Company That Once Ruled NASDAQ.md new file mode 100644 index 0000000000..78e0d0ecfd --- /dev/null +++ b/sources/tech/20190131 VA Linux- The Linux Company That Once Ruled NASDAQ.md @@ -0,0 +1,147 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (VA Linux: The Linux Company That Once Ruled NASDAQ) +[#]: via: (https://itsfoss.com/story-of-va-linux/) +[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/) + +VA Linux: The Linux Company That Once Ruled NASDAQ +====== + +This is our first article in the Linux and open source history series. We will be covering more trivia, anecdotes and other nostalgic events from the past. + +At its time, _VA Linux_ was indeed a crusade to free the world from Microsoft’s domination. + +On a historical incident in December 1999, the shares of a private firm skyrocketed from just $30 to a whopping $239 within just a day of its [IPO][1]! It was a record-breaking development that day. 
+ +The company was _VA Linux_ , a firm with only 200 employees that was based on the idea of deploying Intel Hardware with Linux and FOSS, had begun a fantastic journey [on the likes of Sun and Dell][2]. + +It traded with a symbol LNUX and gained around 700 percent on its first day of trading. But hardly one year later, the [LNUX stocks were selling below $9 per share][3]. + +How come a successful Linux based company become a subsidiary of [Gamestop][4], a gaming company? + +Let us look back into the highs and lows of this record-breaking Linux corporation by knowing their history in brief. + +### How did it all actually begin? + +In the year 1993, a graduate student at Stanford University wanted to own a powerful workstation but could not afford to buy expensive [Sun][5] Workstations, which used to be sold at extremely high prices of $7,000 per system at that time. + +So, he decided to do build one on his own ([DIY][6] [FTW][7]!). Using an Intel 486-chip running at just 33 megahertz, he installed Linux and finally had a machine that was twice as fast as Sun’s but at a much lower price tag: $2,000. + +That student was none other than _VA Research_ founder [Larry Augustin][8], whose idea was loved by many at that exciting time in the Stanford campus. People started buying machines with similar configurations from him and his friend and co-founder, James Vera. This is how _VA Research_ was formed. + +![VA Linux founder, Larry Augustin][9] + +> Once software goes into the GPL, you can’t take it back. People can stop contributing, but the code that exists, people can continue to develop on it. +> +> Without a doubt, a futuristic quote from VA Linux founder, Larry Augustin, 10 years ago | Read the whole interview [here][10] + +#### Some screenshots of their web domains from the early days + +![Linux Powered Machines on sale on varesearch.com | July 15, 1997][11] + +![varesearch.com reveals emerging growth | February 16, 1998][12] + +![On June 26, 2001, they transitioned from hardware to software | valinux.com as on June 22, 2001][13] + +### The spectacular rise and the devastating fall of VA Linux + +VA Research had a big year in 1999 and perhaps it was the biggest for them as they acquired many growing companies and competitors at that time, along with starting many innovative initiatives. The next year in 2000, they created a subsidiary in Japan named _VA Linux Systems Japan K.K._ They were at their peak that year. + +After they transitioned completely from hardware to software, stock prices started to fall drastically since 2002. It all happened because of slower-than-expected sales growth from new customers in the dot-com sector. In the later years they sold off a few brands and top employees also resigned in 2010. + +Gamestop finally [acquired][14] Geeknet Inc. (the new name of VA Linux) for $140 million on June 2, 2015. + +In case you’re curious for a detailed chronicle, I have separately created this [timeline][15], highlighting events year-wise. + +![Image Credit: Wikipedia][16] + +### What happened to VA Linux afterward? + +Geeknet owned by Gamestop is now an online retailer for the global geek community as [ThinkGeek][17]. + +SourceForge and Slashdot were what still kept them linked with Linux and Open Source until _Dice Holdings_ acquired Slashdot, SourceForge, and Freecode. 
+ +An [article][18] from 2016 sadly quotes in its final paragraph: + +> “Being acquired by a company that caters to gamers and does not have anything in particular to do with open source software may be a lackluster ending for what was once a spectacularly valuable Linux business.” + +Did we note Linux and Gamers? Does Linux really not have anything to do with Gaming? Are these two terms really so far apart? What about [Gaming on Linux][19]? What about [Open Source Games][20]? + +How could have the stalwarts from _VA Linux_ with years and years of experience in the Linux arena contributed to the Linux Gaming community? What could have happened had [Valve][21] (who are currently so [dedicated][22] towards Linux Gaming) acquired _VA Linux_ instead of Gamestop? Can we ponder? + +The seeds of ideas that were planted by _VA Research_ will continue to inspire the Linux and FOSS community because of its significant contributions in the world of Open Source. At _It’s FOSS,_ our heartfelt salute goes out to those noble ideas! + +Want to feel the nostalgia? Use the [timeline][15] dates with the [Way Back Machine][23] to check out previously owned _VA_ domains like _valinux.com_ or _varesearch.com_ in the past three decades! You can even check _linux.com_ that was once owned by _VA Linux Systems_. + +But wait, are we really done here? What happened to the subsidiary named _VA Linux Systems Japan K.K._? Well, it’s [a different story there][24] and still going strong with the original ideologies of _VA Linux_! + +![VA Linux booth circa 2000 | Image Credit: Storem][25] + +#### _VA Linux_ Subsidiary Still Operational in Japan! + +VA Linux is still operational through its [Japanese subsidiary][26]. It provides the following services: + + * Failure Analysis and Support Services: [_VA Quest_][27] + * Entrusted Development Service + * Consulting Service + + + +_VA_ _Quest_ , in particular, continues its services as a failure-analysis solution for tracking down and dealing with kernel bugs which might be getting in its customers’ way since 2005. [Tetsuro Yogo][28] took over as the New President and CEO on April 3, 2017. Check out their timeline [here][29]! They are also [on GitHub][30]! + +You can also read about a recent development reported on August 2 last year, on this [translated][31] version of a Japanese IT news page. It’s an update about _VA Linux_ providing technical support service of “[Kubernetes][32]” container management software in Japan. + +Its good to know that their 18-year-old subsidiary is still doing well in Japan and the name of _VA Linux_ continues to flourish there even today! + +What are your views? Do you want to share anything on _VA Linux_? Please let us know in the comments section below. + +I hope you liked this first article in the Linux history series. If you know such interesting facts from the past that you would like us to cover here, please let us know. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/story-of-va-linux/ + +作者:[Avimanyu Bandyopadhyay][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/avimanyu/ +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Initial_public_offering +[2]: https://www.forbes.com/1999/05/03/feat.html +[3]: https://www.channelfutures.com/open-source/open-source-history-the-spectacular-rise-and-fall-of-va-linux +[4]: https://www.gamestop.com/ +[5]: http://www.sun.com/ +[6]: https://en.wikipedia.org/wiki/Do_it_yourself +[7]: https://www.urbandictionary.com/define.php?term=FTW +[8]: https://www.linkedin.com/in/larryaugustin/ +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-Founder-Larry-Augustin.jpg?ssl=1 +[10]: https://www.linuxinsider.com/story/SourceForges-Larry-Augustin-A-Better-Way-to-Build-Web-Apps-62155.html +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-July-15-1997.jpg?ssl=1 +[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Research-com-Snapshot-Feb-16-1998.jpg?ssl=1 +[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/09/VA-Linux-com-Snapshot-June-22-2001.jpg?ssl=1 +[14]: http://geekgirlpenpals.com/geeknet-parent-company-to-thinkgeek-entered-agreement-with-gamestop/ +[15]: https://medium.com/@avimanyu786/a-timeline-of-va-linux-through-the-years-6813e2bd4b13 +[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/LNUX-stock-fall.png?ssl=1 +[17]: https://www.thinkgeek.com/ +[18]: https://www.channelfutures.com/open-source/open-source-history-spectacular-rise-and-fall-va-linux +[19]: https://itsfoss.com/linux-gaming-distributions/ +[20]: https://en.wikipedia.org/wiki/Open-source_video_game +[21]: https://www.valvesoftware.com/ +[22]: https://itsfoss.com/steam-play-proton/ +[23]: https://archive.org/web/web.php +[24]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fwww.valinux.co.jp%2Fcorp%2Fstatement%2F&edit-text= +[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/va-linux-team-booth.jpg?resize=800%2C600&ssl=1 +[26]: https://www.valinux.co.jp/english/ +[27]: https://www.linux.com/news/va-linux-announces-linux-failure-analysis-service +[28]: https://www.linkedin.com/in/yogo45/ +[29]: https://www.valinux.co.jp/english/about/timeline/ +[30]: https://github.com/vaj +[31]: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=https%3A%2F%2Fit.impressbm.co.jp%2Farticles%2F-%2F16499 +[32]: https://en.wikipedia.org/wiki/Kubernetes diff --git a/sources/tech/20190202 CrossCode is an Awesome 16-bit Sci-Fi RPG Game.md b/sources/tech/20190202 CrossCode is an Awesome 16-bit Sci-Fi RPG Game.md new file mode 100644 index 0000000000..15349fbf32 --- /dev/null +++ b/sources/tech/20190202 CrossCode is an Awesome 16-bit Sci-Fi RPG Game.md @@ -0,0 +1,98 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (CrossCode is an Awesome 16-bit Sci-Fi RPG Game) +[#]: via: (https://itsfoss.com/crosscode-game/) +[#]: author: (Phillip Prado https://itsfoss.com/author/phillip/) + +CrossCode is an Awesome 16-bit Sci-Fi RPG Game +====== + +What starts off as an obvious sci-fi 16-bit 2D action RPG quickly turns into a JRPG 
inspired pseudo-MMO open-world puzzle platformer. Though at first glance this sounds like a jumbled mess, [CrossCode][1] manages to bundle all of its influences into a seamless gaming experience that feels nothing shy of excellent. + +Note: CrossCode is not open source software. We have covered it because it is Linux specific. + +![][2] + +### Story + +You play as Lea, a girl who has forgotten her identity, where she comes from, and how to speak. As you walk through the early parts of the story, you come to find that you are a character in a digital world — a video game. But not just any video game — an MMO. And you, Lea, must venture into the digital world known as CrossWorlds in order to unravel the secrets of your past. + +As you progress through the game, you unveil more and more about yourself, learning how you got to this point in the first place. This doesn’t sound too crazy of a story, but the gameplay implementation and appropriately paced storyline make for quite a captivating experience. + +The story unfolds at a satisfying speed and the character’s development is genuinely gratifying — both fictionally and mechanically. The only critique I had was that it felt like the introductory segment took a little too long — dragging the tutorial into the gameplay for quite some time, and keeping the player from getting into the real meat of the game. + +All-in-all, CrossCode’s story did not leave me wanting, not even in the slightest. It’s deep, fun, heartwarming, intelligent, and all while never sacrificing great character development. Without spoiling anything, I will say that if you are someone that enjoys a good story, you will need to give CrossCode a look. + +![][3] + +### Gameplay + +Yes, the story is great and all, but if there is one place that CrossCode truly shines, it has to be its gameplay. The game’s mechanics are fast-paced, challenging, intuitive, and downright fun! + +You start off with a dodge, block, melee, and ranged attack, each slowly developing overtime as the character tree is unlocked. This all-too-familiar mix of combat elements balances skill and hack-n-slash mechanics in a way that doesn’t conflict with one another. + +The game utilizes this mix of skills to create some amazing puzzle solving and combat that helps CrossCode’s gameplay truly stand out. Whether you are making your way through one of the four main dungeons, or you are taking a boss head on, you can’t help but periodically stop and think “wow, this game is great!” + +Though this has to be the game’s strongest feature, it can also be the game’s biggest downfall. Part of the reason that the story and character progression is so satisfying is because the combat and puzzle mechanics can be incredibly challenging, and that’s putting it lightly. + +There are times in which CrossCode’s gameplay feels downright impossible. Bosses take an expert amount of focus, and dungeons require all of the patience you can muster up just to simply finish them. + +![][4] + +The game requires a type of dexterity I have not quite had to master yet. I mean, sure there are more challenging puzzle games out there, yes there are more difficult platformers, and of course there are more grueling RPGs, but adding all of these elements into one game while spurring the player along with an alluring story requires a level of mechanical balance that I haven’t found in many other games. + +And though there were times I felt the gameplay was flat out punishing, I was constantly reminded that this is simply not the case. 
Death doesn’t cause serious character regression, you can take a break from dungeons when you feel overwhelmed, and there is a plethora of checkpoints throughout the game’s most difficult parts to help the player along.

Where other games fall short by giving the player nothing to lose, this reality redeems CrossCode amid its rigorous gameplay. CrossCode may be one of the only games I know that takes two common flaws in games and holds the tension between them so well that it becomes one of the game’s best strengths.

![][5]

### Design

One of the things that surprised me most about CrossCode was how well its world and sound design come together. Right off the bat, from the moment you boot the game up, it is clear the developers meant business when designing CrossCode.

Being in a fictional MMO world, the game’s character ensemble is vibrant and distinctive, each having its own tone and personality. The game’s sound and motion graphics are tactile and responsive, giving the player a healthy amount of feedback during gameplay. And the soundtrack behind the game is simply beautiful, ebbing and flowing between intense moments of combat and blissful moments of exploration.

If I had to fault CrossCode in this category, it would have to be in the size of the map. Yes, the dungeons are long, and yes, the CrossWorlds map looks gigantic, but I still wanted more to explore outside the crippling dungeons. The game is beautiful and fluid, but akin to RPG games of yore — that is, Zelda games pre-Breath of the Wild — I wish there was just a little more for me to freely explore.

It is obvious that the developers really cared about this aspect of the game, and you can tell they spent an incredible amount of time developing its design. CrossCode set itself up for success here in its plot and content, and the developers capitalize on the opportunity, knocking another category out of the park.

![][6]

### Conclusion

In the end, it is obvious how I feel about this game. And just in case you haven’t caught on yet…I love it. It holds a near-perfect balance between being difficult and rewarding, simple and complex, linear and open, making CrossCode one of [the best Linux games][7] out there.

Developed by [Radical Fish Games][8], CrossCode was officially released for Linux on September 21, 2018, seven years after development began. You can pick up the game over on [Steam][9], [GOG][10], or [Humble Bundle][11].

If you play games regularly, you may want to [subscribe to Humble Monthly][12] ([affiliate][13] link). For $12 per month, you’ll get games worth over $100 (not all for Linux). Over 450,000 gamers worldwide use Humble Monthly.
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/crosscode-game/ + +作者:[Phillip Prado][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/phillip/ +[b]: https://github.com/lujun9972 +[1]: http://www.cross-code.com/en/home +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Level-up.png?fit=800%2C451&ssl=1 +[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Equpiment.png?fit=800%2C451&ssl=1 +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-character-development.png?fit=800%2C451&ssl=1 +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-Environment.png?fit=800%2C451&ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/CrossCode-dungeon.png?fit=800%2C451&ssl=1 +[7]: https://itsfoss.com/free-linux-games/ +[8]: http://www.radicalfishgames.com/ +[9]: https://store.steampowered.com/app/368340/CrossCode/ +[10]: https://www.gog.com/game/crosscode +[11]: https://www.humblebundle.com/store/crosscode +[12]: https://www.humblebundle.com/monthly?partner=itsfoss +[13]: https://itsfoss.com/affiliate-policy/ diff --git a/sources/tech/20190204 Top 5 open source network monitoring tools.md b/sources/tech/20190204 Top 5 open source network monitoring tools.md new file mode 100644 index 0000000000..5b6e7f1bfa --- /dev/null +++ b/sources/tech/20190204 Top 5 open source network monitoring tools.md @@ -0,0 +1,125 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top 5 open source network monitoring tools) +[#]: via: (https://opensource.com/article/19/2/network-monitoring-tools) +[#]: author: (Paul Bischoff https://opensource.com/users/paulbischoff) + +Top 5 open source network monitoring tools +====== +Keep an eye on your network to avoid downtime with these monitoring tools. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3) + +Maintaining a live network is one of a system administrator's most essential tasks, and keeping a watchful eye over connected systems is essential to keeping a network functioning at its best. + +There are many different ways to keep tabs on a modern network. Network monitoring tools are designed for the specific purpose of monitoring network traffic and response times, while application performance management solutions use agents to pull performance data from the application stack. If you have a live network, you need network monitoring to make sure you aren't vulnerable to an attacker. Likewise, if you rely on lots of different applications to run your daily operations, you will need an [application performance management][1] solution as well. + +This article will focus on open source network monitoring tools. These tools help monitor individual nodes and applications for signs of poor performance. Through one window, you can view the performance of an entire network and even get alerts to keep you in the loop if you're away from your desk. + +Before we get into the top five network monitoring tools, let's look more closely at the reasons you need to use one. + +### Why do I need a network monitoring tool? 
+ +Network monitoring tools are vital to maintaining networks because they allow you to keep an eye on devices connected to the network from a central location. These tools help flag devices with subpar performance so you can step in and run troubleshooting to get to the root of the problem. + +Running in-depth troubleshooting can minimize performance problems and prevent security breaches. In practical terms, this keeps the network online and eliminates the risk of falling victim to unnecessary downtime. Regular network maintenance can also help prevent outages that could take thousands of users offline. + +A network monitoring tool enables you to: + + * Autodiscover devices connected to your network + * View live and historic performance data for a range of devices and applications + * Configure alerts to notify you of unusual activity + * Generate graphs and reports to analyze network activity in greater depth + +### The top 5 open source network monitoring tools + +Now, that you know why you need a network monitoring tool, take a look at the top 5 open source tools to see which might best meet your needs. + +#### Cacti + +![](https://opensource.com/sites/default/files/uploads/cacti_network-monitoring-tools.png) + +If you know anything about open source network monitoring tools, you've probably heard of [Cacti][2]. It's a graphing solution that acts as an addition to [RRDTool][3] and is used by many network administrators to collect performance data in LANs. Cacti comes with Simple Network Management Protocol (SNMP) support on Windows and Linux to create graphs of traffic data. + +Cacti typically works by using data sourced from user-created scripts that ping hosts on a network. The values returned by the scripts are stored in a MySQL database, and this data is used to generate graphs. + +This sounds complicated, but Cacti has templates to help speed the process along. You can also create a graph or data source template that can be used for future monitoring activity. If you'd like to try it out, [download Cacti][4] for free on Linux and Windows. + +#### Nagios Core + +![](https://opensource.com/sites/default/files/uploads/nagioscore_network-monitoring-tools.png) + +[Nagios Core][5] is one of the most well-known open source monitoring tools. It provides a network monitoring experience that combines open source extensibility with a top-of-the-line user interface. With Nagios Core, you can auto-discover devices, monitor connected systems, and generate sophisticated performance graphs. + +Support for customization is one of the main reasons Nagios Core has become so popular. For example, [Nagios V-Shell][6] was added as a PHP web interface built in AngularJS, searchable tables and a RESTful API designed with CodeIgniter. + +If you need more versatility, you can check the Nagios Exchange, which features a range of add-ons that can incorporate additional features into your network monitoring. These range from the strictly cosmetic to monitoring enhancements like [nagiosgraph][7]. You can try it out by [downloading Nagios Core][8] for free. + +#### Icinga 2 + +![](https://opensource.com/sites/default/files/uploads/icinga2_network-monitoring-tools.png) + +[Icinga 2][9] is another widely used open source network monitoring tool. It builds on the groundwork laid by Nagios Core. It has a flexible RESTful API that allows you to enter your own configurations and view live performance data through the dashboard. 
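As a rough illustration, pulling the current host objects straight from that API can look something like this (port 5665 is the Icinga 2 default; the user, password, and the `-k` flag for a self-signed certificate are placeholders you would adapt to your own setup):

```
$ curl -k -s -u apiuser:apipassword \
    'https://localhost:5665/v1/objects/hosts' | less
```
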
Dashboards are customizable, so you can choose exactly what information you want to monitor in your network. + +Visualization is an area where Icinga 2 performs particularly well. It has native support for Graphite and InfluxDB, which can turn performance data into full-featured graphs for deeper performance analysis. + +Icinga2 also allows you to monitor both live and historical performance data. It offers excellent alerts capabilities for live monitoring, and you can configure it to send notifications of performance problems by email or text. You can [download Icinga 2][10] for free for Windows, Debian, DHEL, SLES, Ubuntu, Fedora, and OpenSUSE. + +#### Zabbix + +![](https://opensource.com/sites/default/files/uploads/zabbix_network-monitoring-tools.png) + +[Zabbix][11] is another industry-leading open source network monitoring tool, used by companies from Dell to Salesforce on account of its malleable network monitoring experience. Zabbix does network, server, cloud, application, and services monitoring very well. + +You can track network information such as network bandwidth usage, network health, and configuration changes, and weed out problems that need to be addressed. Performance data in Zabbix is connected through SNMP, Intelligent Platform Management Interface (IPMI), and IPv6. + +Zabbix offers a high level of convenience compared to other open source monitoring tools. For instance, you can automatically detect devices connected to your network before using an out-of-the-box template to begin monitoring your network. You can [download Zabbix][12] for free for CentOS, Debian, Oracle Linux, Red Hat Enterprise Linux, Ubuntu, and Raspbian. + +#### Prometheus + +![](https://opensource.com/sites/default/files/uploads/promethius_network-monitoring-tools.png) + +[Prometheus][13] is an open source network monitoring tool with a large community following. It was built specifically for monitoring time-series data. You can identify time-series data by metric name or key-value pairs. Time-series data is stored on local disks so that it's easy to access in an emergency. + +Prometheus' [Alertmanager][14] allows you to view notifications every time it raises an event. Alertmanager can send notifications via email, PagerDuty, or OpsGenie, and you can silence alerts if necessary. + +Prometheus' visual elements are excellent and allow you to switch from the browser to the template language and Grafana integration. You can also integrate various third-party data sources into Prometheus from Docker, StatsD, and JMX to customize your Prometheus experience. + +As a network monitoring tool, Prometheus is suitable for organizations of all sizes. The onboard integrations and the easy-to-use Alertmanager make it capable of handling any workload, regardless of its size. You can [download Prometheus][15] for free. + +### Which are best? + +No matter what industry you're working in, if you rely on a network to do business, you need to implement some form of network monitoring. Network monitoring tools are an invaluable resource that help provide you with the visibility to keep your systems online. Monitoring your systems will give you the best chance to keep your equipment in working order. + +As the tools on this list show, you don't need to spend an exorbitant amount of money to reap the rewards of network monitoring. Of the five, I believe Icinga 2 and Zabbix are the best options for providing you with everything you need to start monitoring your network to keep it online. 
Staying vigilant will help to minimize the chance of being caught off-guard by performance issues.

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/2/network-monitoring-tools

作者:[Paul Bischoff][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/paulbischoff
[b]: https://github.com/lujun9972
[1]: https://www.comparitech.com/net-admin/application-performance-management/
[2]: https://www.cacti.net/index.php
[3]: https://en.wikipedia.org/wiki/RRDtool
[4]: https://www.cacti.net/download_cacti.php
[5]: https://www.nagios.org/projects/nagios-core/
[6]: https://exchange.nagios.org/directory/Addons/Frontends-%28GUIs-and-CLIs%29/Web-Interfaces/Nagios-V-2DShell/details
[7]: https://exchange.nagios.org/directory/Addons/Graphing-and-Trending/nagiosgraph/details#_ga=2.79847774.890594951.1545045715-2010747642.1545045715
[8]: https://www.nagios.org/downloads/nagios-core/
[9]: https://icinga.com/products/icinga-2/
[10]: https://icinga.com/download/
[11]: https://www.zabbix.com/
[12]: https://www.zabbix.com/download
[13]: https://prometheus.io/
[14]: https://prometheus.io/docs/alerting/alertmanager/
[15]: https://prometheus.io/download/
diff --git a/sources/tech/20190205 12 Methods To Check The Hard Disk And Hard Drive Partition On Linux.md b/sources/tech/20190205 12 Methods To Check The Hard Disk And Hard Drive Partition On Linux.md
new file mode 100644
index 0000000000..ef8c8dc460
--- /dev/null
+++ b/sources/tech/20190205 12 Methods To Check The Hard Disk And Hard Drive Partition On Linux.md
@@ -0,0 +1,435 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (12 Methods To Check The Hard Disk And Hard Drive Partition On Linux)
[#]: via: (https://www.2daygeek.com/linux-command-check-hard-disks-partitions/)
[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)

12 Methods To Check The Hard Disk And Hard Drive Partition On Linux
======

Linux admins usually check the available hard disks and their partitions whenever they want to add a new disk or an additional partition to the system.

We check the partition table of the hard disk to view the disk partitions.

This helps you see how many partitions have already been created on the disk. It also lets us verify whether there is any free space left.

In general, hard disks can be divided into one or more logical disks called partitions.

Each partition can be used as a separate disk with its own file system, and the partition information is stored in a partition table.

The partition table is a 64-byte data structure. It is part of the master boot record (MBR), which is a small program that is executed when a computer boots.

The partition information is saved in sector 0 of the disk. Note that all partitions must be formatted with an appropriate file system before files can be written to them.

This can be verified using the following 12 methods.

  * **`fdisk:`** manipulate disk partition table
  * **`sfdisk:`** display or manipulate a disk partition table
  * **`cfdisk:`** display or manipulate a disk partition table
  * **`parted:`** a partition manipulation program
  * **`lsblk:`** lsblk lists information about all available or the specified block devices.
  * **`blkid:`** locate/print block device attributes.
  * **`hwinfo:`** hwinfo stands for hardware information; it is another great utility used to probe the hardware present in the system.
  * **`lshw:`** lshw is a small tool to extract detailed information on the hardware configuration of the machine.
  * **`inxi:`** inxi is a command line system information script built for console and IRC.
  * **`lsscsi:`** list SCSI devices (or hosts) and their attributes
  * **`cat /proc/partitions:`** print the partitions that the kernel currently knows about, straight from procfs.
  * **`ls -lh /dev/disk/:`** this directory contains the disk manufacturer name, serial number and partition ID as symlinks that point to the real block device files.



### How To Check Hard Disk And Hard Drive Partition In Linux Using fdisk Command?

**[fdisk][1]** stands for fixed disk or format disk; it is a CLI utility that allows users to perform the following actions on disks: view, create, resize, delete, move and copy partitions.

```
# fdisk -l

Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xeab59449

Device Boot Start End Sectors Size Id Type
/dev/sda1 * 20973568 62914559 41940992 20G 83 Linux


Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8cc8f9e5

Device Boot Start End Sectors Size Id Type
/dev/sdc1 2048 2099199 2097152 1G 83 Linux
/dev/sdc3 4196352 6293503 2097152 1G 83 Linux
/dev/sdc4 6293504 20971519 14678016 7G 5 Extended
/dev/sdc5 6295552 8392703 2097152 1G 83 Linux


Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
```

### How To Check Hard Disk And Hard Drive Partition In Linux Using sfdisk Command?

sfdisk is a script-oriented tool for partitioning any block device. sfdisk supports MBR (DOS), GPT, SUN and SGI disk labels, but no longer provides any functionality for CHS (Cylinder-Head-Sector) addressing.
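
Because it is script-oriented, sfdisk is also commonly used to back up and restore a partition table from a shell script. A minimal sketch of that workflow is shown below; adjust the device name to match your own system before trying it:

```
# sfdisk -d /dev/sda > sda-table.dump    # dump the partition table to a plain-text file
# sfdisk /dev/sda < sda-table.dump       # write the saved table back to the disk later
```

The listing that follows is simply the output of `sfdisk -l` on the same test system used above:
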
+ +``` +# sfdisk -l + +Disk /dev/sda: 30 GiB, 32212254720 bytes, 62914560 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +Disklabel type: dos +Disk identifier: 0xeab59449 + +Device Boot Start End Sectors Size Id Type +/dev/sda1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 20973568 62914559 41940992 20G 83 Linux + + +Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes + + +Disk /dev/sdc: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +Disklabel type: dos +Disk identifier: 0x8cc8f9e5 + +Device Boot Start End Sectors Size Id Type +/dev/sdc1 2048 2099199 2097152 1G 83 Linux +/dev/sdc3 4196352 6293503 2097152 1G 83 Linux +/dev/sdc4 6293504 20971519 14678016 7G 5 Extended +/dev/sdc5 6295552 8392703 2097152 1G 83 Linux + + +Disk /dev/sdd: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes + + +Disk /dev/sde: 10 GiB, 10737418240 bytes, 20971520 sectors +Units: sectors of 1 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated 512 = 512 bytes +Sector size (logical/physical): 512 bytes / 512 bytes +I/O size (minimum/optimal): 512 bytes / 512 bytes +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using cfdisk Command? + +cfdisk is a curses-based program for partitioning any block device. The default device is /dev/sda. It provides basic partitioning functionality with a user-friendly interface. + +``` +# cfdisk /dev/sdc + Disk: /dev/sdc + Size: 10 GiB, 10737418240 bytes, 20971520 sectors + Label: dos, identifier: 0x8cc8f9e5 + + Device Boot Start End Sectors Size Id Type +>> /dev/sdc1 2048 2099199 2097152 1G 83 Linux + Free space 2099200 4196351 2097152 1G + /dev/sdc3 4196352 6293503 2097152 1G 83 Linux + /dev/sdc4 6293504 20971519 14678016 7G 5 Extended + ├─/dev/sdc5 6295552 8392703 2097152 1G 83 Linux + └─Free space 8394752 20971519 12576768 6G + + + + ┌───────────────────────────────────────────────────────────────────────────────┐ + │ Partition type: Linux (83) │ + │Filesystem UUID: d17e3c31-e2c9-4f11-809c-94a549bc43b7 │ + │ Filesystem: ext2 │ + │ Mountpoint: /part1 (mounted) │ + └───────────────────────────────────────────────────────────────────────────────┘ + [Bootable] [ Delete ] [ Quit ] [ Type ] [ Help ] [ Write ] + [ Dump ] + + Quit program without writing changes +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using parted Command? + +**[parted][2]** is a program to manipulate disk partitions. It supports multiple partition table formats, including MS-DOS and GPT. 
It is useful for creating space for new operating systems, reorganising disk usage, and copying data to new hard disks. + +``` +# parted -l + +Model: ATA VBOX HARDDISK (scsi) +Disk /dev/sda: 32.2GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos +Disk Flags: + +Number Start End Size Type File system Flags + 1 10.7GB 32.2GB 21.5GB primary ext4 boot + + +Model: ATA VBOX HARDDISK (scsi) +Disk /dev/sdb: 10.7GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos +Disk Flags: + +Model: ATA VBOX HARDDISK (scsi) +Disk /dev/sdc: 10.7GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos +Disk Flags: + +Number Start End Size Type File system Flags + 1 1049kB 1075MB 1074MB primary ext2 + 3 2149MB 3222MB 1074MB primary ext4 + 4 3222MB 10.7GB 7515MB extended + 5 3223MB 4297MB 1074MB logical + + +Model: ATA VBOX HARDDISK (scsi) +Disk /dev/sdd: 10.7GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos +Disk Flags: + +Model: ATA VBOX HARDDISK (scsi) +Disk /dev/sde: 10.7GB +Sector size (logical/physical): 512B/512B +Partition Table: msdos +Disk Flags: +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using lsblk Command? + +lsblk lists information about all available or the specified block devices. The lsblk command reads the sysfs filesystem and udev db to gather information. + +If the udev db is not available or lsblk is compiled without udev support than it tries to read LABELs, UUIDs and filesystem types from the block device. In this case root permissions are necessary. The command prints all block devices (except RAM disks) in a tree-like format by default. + +``` +# lsblk +NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT +sda 8:0 0 30G 0 disk +└─sda1 8:1 0 20G 0 part / +sdb 8:16 0 10G 0 disk +sdc 8:32 0 10G 0 disk +├─sdc1 8:33 0 1G 0 part /part1 +├─sdc3 8:35 0 1G 0 part /part2 +├─sdc4 8:36 0 1K 0 part +└─sdc5 8:37 0 1G 0 part +sdd 8:48 0 10G 0 disk +sde 8:64 0 10G 0 disk +sr0 11:0 1 1024M 0 rom +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using blkid Command? + +blkid is a command-line utility to locate/print block device attributes. It uses libblkid library to get disk partition UUID in Linux system. + +``` +# blkid +/dev/sda1: UUID="d92fa769-e00f-4fd7-b6ed-ecf7224af7fa" TYPE="ext4" PARTUUID="eab59449-01" +/dev/sdc1: UUID="d17e3c31-e2c9-4f11-809c-94a549bc43b7" TYPE="ext2" PARTUUID="8cc8f9e5-01" +/dev/sdc3: UUID="ca307aa4-0866-49b1-8184-004025789e63" TYPE="ext4" PARTUUID="8cc8f9e5-03" +/dev/sdc5: PARTUUID="8cc8f9e5-05" +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using hwinfo Command? + +**[hwinfo][3]** stands for hardware information tool is another great utility that used to probe for the hardware present in the system and display detailed information about varies hardware components in human readable format. + +``` +# hwinfo --block --short +disk: + /dev/sdd VBOX HARDDISK + /dev/sdb VBOX HARDDISK + /dev/sde VBOX HARDDISK + /dev/sdc VBOX HARDDISK + /dev/sda VBOX HARDDISK +partition: + /dev/sdc1 Partition + /dev/sdc3 Partition + /dev/sdc4 Partition + /dev/sdc5 Partition + /dev/sda1 Partition +cdrom: + /dev/sr0 VBOX CD-ROM +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using lshw Command? 
+ +**[lshw][4]** (stands for Hardware Lister) is a small nifty tool that generates detailed reports about various hardware components on the machine such as memory configuration, firmware version, mainboard configuration, CPU version and speed, cache configuration, usb, network card, graphics cards, multimedia, printers, bus speed, etc. + +``` +# lshw -short -class disk -class volume +H/W path Device Class Description +=================================================== +/0/3/0.0.0 /dev/cdrom disk CD-ROM +/0/4/0.0.0 /dev/sda disk 32GB VBOX HARDDISK +/0/4/0.0.0/1 /dev/sda1 volume 19GiB EXT4 volume +/0/5/0.0.0 /dev/sdb disk 10GB VBOX HARDDISK +/0/6/0.0.0 /dev/sdc disk 10GB VBOX HARDDISK +/0/6/0.0.0/1 /dev/sdc1 volume 1GiB Linux filesystem partition +/0/6/0.0.0/3 /dev/sdc3 volume 1GiB EXT4 volume +/0/6/0.0.0/4 /dev/sdc4 volume 7167MiB Extended partition +/0/6/0.0.0/4/5 /dev/sdc5 volume 1GiB Linux filesystem partition +/0/7/0.0.0 /dev/sdd disk 10GB VBOX HARDDISK +/0/8/0.0.0 /dev/sde disk 10GB VBOX HARDDISK +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using inxi Command? + +**[inxi][5]** is a nifty tool to check hardware information on Linux and offers wide range of option to get all the hardware information on Linux system that i never found in any other utility which are available in Linux. It was forked from the ancient and mindbendingly perverse yet ingenius infobash, by locsmif. + +``` +# inxi -Dp +Drives: HDD Total Size: 75.2GB (22.3% used) + ID-1: /dev/sda model: VBOX_HARDDISK size: 32.2GB + ID-2: /dev/sdb model: VBOX_HARDDISK size: 10.7GB + ID-3: /dev/sdc model: VBOX_HARDDISK size: 10.7GB + ID-4: /dev/sdd model: VBOX_HARDDISK size: 10.7GB + ID-5: /dev/sde model: VBOX_HARDDISK size: 10.7GB +Partition: ID-1: / size: 20G used: 16G (85%) fs: ext4 dev: /dev/sda1 + ID-3: /part1 size: 1008M used: 1.3M (1%) fs: ext2 dev: /dev/sdc1 + ID-4: /part2 size: 976M used: 2.6M (1%) fs: ext4 dev: /dev/sdc3 +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using lsscsi Command? + +Uses information in sysfs (Linux kernel series 2.6 and later) to list SCSI devices (or hosts) currently attached to the system. Options can be used to control the amount and form of information provided for each device. + +By default in this utility device node names (e.g. “/dev/sda” or “/dev/root_disk”) are obtained by noting the major and minor numbers for the listed device obtained from sysfs + +``` +# lsscsi +[0:0:0:0] cd/dvd VBOX CD-ROM 1.0 /dev/sr0 +[2:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sda +[3:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sdb +[4:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sdc +[5:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sdd +[6:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sde +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using ProcFS? + +The proc filesystem (procfs) is a special filesystem in Unix-like operating systems that presents information about processes and other system information. + +It’s sometimes referred to as a process information pseudo-file system. It doesn’t contain ‘real’ files but runtime system information (e.g. system memory, devices mounted, hardware configuration, etc). + +``` +# cat /proc/partitions +major minor #blocks name + + 11 0 1048575 sr0 + 8 0 31457280 sda + 8 1 20970496 sda1 + 8 16 10485760 sdb + 8 32 10485760 sdc + 8 33 1048576 sdc1 + 8 35 1048576 sdc3 + 8 36 1 sdc4 + 8 37 1048576 sdc5 + 8 48 10485760 sdd + 8 64 10485760 sde +``` + +### How To Check Hard Disk And Hard Drive Partition In Linux Using /dev/disk Path? 
+ +This directory contains four directories, it’s by-id, by-uuid, by-path and by-partuuid. Each directory contains some useful information and it’s symlinked with real block device files. + +``` +# ls -lh /dev/disk/by-id +total 0 +lrwxrwxrwx 1 root root 9 Feb 2 23:08 ata-VBOX_CD-ROM_VB0-01f003f6 -> ../../sr0 +lrwxrwxrwx 1 root root 9 Feb 3 00:14 ata-VBOX_HARDDISK_VB26e827b5-668ab9f4 -> ../../sda +lrwxrwxrwx 1 root root 10 Feb 3 00:14 ata-VBOX_HARDDISK_VB26e827b5-668ab9f4-part1 -> ../../sda1 +lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VB3774c742-fb2b3e4e -> ../../sdd +lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e -> ../../sdc +lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part1 -> ../../sdc1 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part3 -> ../../sdc3 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part4 -> ../../sdc4 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 ata-VBOX_HARDDISK_VBe72672e5-029a918e-part5 -> ../../sdc5 +lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VBed1cf451-9f51c5f6 -> ../../sdb +lrwxrwxrwx 1 root root 9 Feb 2 23:39 ata-VBOX_HARDDISK_VBf242dbdd-49a982eb -> ../../sde +``` + +Output of by-uuid + +``` +# ls -lh /dev/disk/by-uuid +total 0 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 ca307aa4-0866-49b1-8184-004025789e63 -> ../../sdc3 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 d17e3c31-e2c9-4f11-809c-94a549bc43b7 -> ../../sdc1 +lrwxrwxrwx 1 root root 10 Feb 3 00:14 d92fa769-e00f-4fd7-b6ed-ecf7224af7fa -> ../../sda1 +``` + +Output of by-path + +``` +# ls -lh /dev/disk/by-path +total 0 +lrwxrwxrwx 1 root root 9 Feb 2 23:08 pci-0000:00:01.1-ata-1 -> ../../sr0 +lrwxrwxrwx 1 root root 9 Feb 3 00:14 pci-0000:00:0d.0-ata-1 -> ../../sda +lrwxrwxrwx 1 root root 10 Feb 3 00:14 pci-0000:00:0d.0-ata-1-part1 -> ../../sda1 +lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-2 -> ../../sdb +lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-3 -> ../../sdc +lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part1 -> ../../sdc1 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part3 -> ../../sdc3 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part4 -> ../../sdc4 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 pci-0000:00:0d.0-ata-3-part5 -> ../../sdc5 +lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-4 -> ../../sdd +lrwxrwxrwx 1 root root 9 Feb 2 23:39 pci-0000:00:0d.0-ata-5 -> ../../sde +``` + +Output of by-partuuid + +``` +# ls -lh /dev/disk/by-partuuid +total 0 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-01 -> ../../sdc1 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-03 -> ../../sdc3 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-04 -> ../../sdc4 +lrwxrwxrwx 1 root root 10 Feb 2 23:39 8cc8f9e5-05 -> ../../sdc5 +lrwxrwxrwx 1 root root 10 Feb 3 00:14 eab59449-01 -> ../../sda1 +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-command-check-hard-disks-partitions/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/linux-fdisk-command-to-manage-disk-partitions/ +[2]: https://www.2daygeek.com/how-to-manage-disk-partitions-using-parted-command/ +[3]: 
https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/ +[4]: https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/ +[5]: https://www.2daygeek.com/inxi-system-hardware-information-on-linux/ diff --git a/sources/tech/20190205 5 Linux GUI Cloud Backup Tools.md b/sources/tech/20190205 5 Linux GUI Cloud Backup Tools.md new file mode 100644 index 0000000000..45e0bf1342 --- /dev/null +++ b/sources/tech/20190205 5 Linux GUI Cloud Backup Tools.md @@ -0,0 +1,251 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 Linux GUI Cloud Backup Tools) +[#]: via: (https://www.linux.com/blog/learn/2019/2/5-linux-gui-cloud-backup-tools) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +5 Linux GUI Cloud Backup Tools +====== +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloud-m-wong-unsplash.jpg?itok=aW0mpzio) + +We have reached a point in time where most every computer user depends upon the cloud … even if only as a storage solution. What makes the cloud really important to users, is when it’s employed as a backup. Why is that such a game changer? By backing up to the cloud, you have access to those files, from any computer you have associated with your cloud account. And because Linux powers the cloud, many services offer Linux tools. + +Let’s take a look at five such tools. I will focus on GUI tools, because they offer a much lower barrier to entry to many of the CLI tools. I’ll also be focusing on various, consumer-grade cloud services (e.g., [Google Drive][1], [Dropbox][2], [Wasabi][3], and [pCloud][4]). And, I will be demonstrating on the Elementary OS platform, but all of the tools listed will function on most Linux desktop distributions. + +Note: Of the following backup solutions, only Duplicati is licensed as open source. With that said, let’s see what’s available. + +### Insync + +I must confess, [Insync][5] has been my cloud backup of choice for a very long time. Since Google refuses to release a Linux desktop client for Google Drive (and I depend upon Google Drive daily), I had to turn to a third-party solution. Said solution is Insync. This particular take on syncing the desktop to Drive has not only been seamless, but faultless since I began using the tool. + +The cost of Insync is a one-time $29.99 fee (per Google account). Trust me when I say this tool is worth the price of entry. With Insync you not only get an easy-to-use GUI for managing your Google Drive backup and sync, you get a tool (Figure 1) that gives you complete control over what is backed up and how it is backed up. Not only that, but you can also install Nautilus integration (which also allows you to easy add folders outside of the configured Drive sync destination). + +![Insync app][7] + +Figure 1: The Insync app window on Elementary OS. + +[Used with permission][8] + +You can download Insync for Ubuntu (or its derivatives), Linux Mint, Debian, and Fedora from the [Insync download page][9]. Once you’ve installed Insync (and associated it with your account), you can then install Nautilus integration with these steps (demonstrating on Elementary OS): + + 1. Open a terminal window and issue the command sudo nano /etc/apt/sources.list.d/insync.list. + + 2. Paste the following into the new file: deb precise non-free contrib. + + 3. Save and close the file. + + 4. Update apt with the command sudo apt-get update. + + 5. 
Install the necessary package with the command sudo apt-get install insync-nautilus. + + + + +Allow the installation to complete. Once finished, restart Nautilus with the command nautilus -q (or log out and back into the desktop). You should now see an Insync entry in the Nautilus right-click context menu (Figure 2). + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/insync_2.jpg?itok=-kA0epiH) + +Figure 2: Insync/Nautilus integration in action. + +[Used with permission][8] + +### Dropbox + +Although [Dropbox][2] drew the ire of many in the Linux community (by dropping support for all filesystems but unencrypted ext4), it still supports a great deal of Linux desktop deployments. In other words, if your distribution still uses the ext4 file system (and you do not opt to encrypt your full drive), you’re good to go. + +The good news is the Dropbox Linux desktop client is quite good. The tool offers a system tray icon that allows you to easily interact with your cloud syncing. Dropbox also includes CLI tools and a Nautilus integration (by way of an additional addon found [here][10]). + +The Linux Dropbox desktop sync tool works exactly as you’d expect. From the Dropbox system tray drop-down (Figure 3) you can open the Dropbox folder, launch the Dropbox website, view recently changed files, get more space, pause syncing, open the preferences window, find help, and quite Dropbox. + +![Dropbox][12] + +Figure 3: The Dropbox system tray drop-down on Elementary OS. + +[Used with permission][8] + +The Dropbox/Nautilus integration is an important component, as it makes quickly adding to your cloud backup seamless and fast. From the Nautilus file manager, locate and right-click the folder to bad added, and select Dropbox > Move to Dropbox (Figure 4). + +The only caveat to the Dropbox/Nautilus integration is that the only option is to move a folder to Dropbox. To some this might not be an option. The developers of this package would be wise to instead have the action create a link (instead of actually moving the folder). + +Outside of that one issue, the Dropbox cloud sync/backup solution for Linux is a great route to go. + +### pCloud + +pCloud might well be one of the finest cloud backup solutions you’ve never heard of. This take on cloud storage/backup includes features like: + + * Encryption (subscription service required for this feature); + + * Mobile apps for Android and iOS; + + * Linux, Mac, and Windows desktop clients; + + * Easy file/folder sharing; + + * Built-in audio/video players; + + * No file size limitation; + + * Sync any folder from the desktop; + + * Panel integration for most desktops; and + + * Automatic file manager integration. + + + + +pCloud offers both Linux desktop and CLI tools that function quite well. pCloud offers both a free plan (with 10GB of storage), a Premium Plan (with 500GB of storage for a one-time fee of $175.00), and a Premium Plus Plan (with 2TB of storage for a one-time fee of $350.00). Both non-free plans can also be paid on a yearly basis (instead of the one-time fee). + +The pCloud desktop client is quite user-friendly. Once installed, you have access to your account information (Figure 5), the ability to create sync pairs, create shares, enable crypto (which requires an added subscription), and general settings. + +![pCloud][14] + +Figure 5: The pCloud desktop client is incredibly easy to use. + +[Used with permission][8] + +The one caveat to pCloud is there’s no file manager integration for Linux. 
That’s overcome by the Sync folder in the pCloud client. + +### CloudBerry + +The primary focus for [CloudBerry][15] is for Managed Service Providers. The business side of CloudBerry does have an associated cost (one that is probably well out of the price range for the average user looking for a simple cloud backup solution). However, for home usage, CloudBerry is free. + +What makes CloudBerry different than the other tools is that it’s not a backup/storage solution in and of itself. Instead, CloudBerry serves as a link between your desktop and the likes of: + + * AWS + + * Microsoft Azure + + * Google Cloud + + * BackBlaze + + * OpenStack + + * Wasabi + + * Local storage + + * External drives + + * Network Attached Storage + + * Network Shares + + * And more + + + + +In other words, you use CloudBerry as the interface between the files/folders you want to share and the destination with which you want send them. This also means you must have an account with one of the many supported solutions. +Once you’ve installed CloudBerry, you create a new Backup plan for the target storage solution. For that configuration, you’ll need such information as: + + * Access Key + + * Secret Key + + * Bucket + + + + +What you’ll need for the configuration will depend on the account you’re connecting to (Figure 6). + +![CloudBerry][17] + +Figure 6: Setting up a CloudBerry backup for Wasabi. + +[Used with permission][8] + +The one caveat to CloudBerry is that it does not integrate with any file manager, nor does it include a system tray icon for interaction with the service. + +### Duplicati + +[Duplicati][18] is another option that allows you to sync your local directories with either locally attached drives, network attached storage, or a number of cloud services. The options supported include: + + * Local folders + + * Attached drives + + * FTP/SFTP + + * OpenStack + + * WebDAV + + * Amazon Cloud Drive + + * Amazon S3 + + * Azure Blob + + * Box.com + + * Dropbox + + * Google Cloud Storage + + * Google Drive + + * Microsoft OneDrive + + * And many more + + + + +Once you install Duplicati (download the installer for Debian, Ubuntu, Fedora, or RedHat from the [Duplicati downloads page][19]), click on the entry in your desktop menu, which will open a web page to the tool (Figure 7), where you can configure the app settings, create a new backup, restore from a backup, and more. + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/duplicati_1.jpg?itok=SVf06tnv) + +To create a backup, click Add backup and walk through the easy-to-use wizard (Figure 8). The backup service you choose will dictate what you need for a successful configuration. + +![Duplicati backup][21] + +Figure 8: Creating a new Duplicati backup for Google Drive. + +[Used with permission][8] + +For example, in order to create a backup to Google Drive, you’ll need an AuthID. For that, click the AuthID link in the Destination section of the setup, where you’ll be directed to select the Google Account to associate with the backup. Once you’ve allowed Duplicati access to the account, the AuthID will fill in and you’re ready to continue. Click Test connection and you’ll be asked to okay the creation of a new folder (if necessary). Click Next to complete the setup of the backup. + +### More Where That Came From + +These five cloud backup tools aren’t the end of this particular rainbow. There are plenty more options where these came from (including CLI-only tools). 
But any of these backup clients will do a great job of serving your Linux desktop-to-cloud backup needs. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2019/2/5-linux-gui-cloud-backup-tools + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: https://www.google.com/drive/ +[2]: https://www.dropbox.com/ +[3]: https://wasabi.com/ +[4]: https://www.pcloud.com/ +[5]: https://www.insynchq.com/ +[6]: /files/images/insync1jpg +[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/insync_1.jpg?itok=_SDP77uE (Insync app) +[8]: /licenses/category/used-permission +[9]: https://www.insynchq.com/downloads +[10]: https://www.dropbox.com/install-linux +[11]: /files/images/dropbox1jpg +[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dropbox_1.jpg?itok=BYbg-sKB (Dropbox) +[13]: /files/images/pcloud1jpg +[14]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/pcloud_1.jpg?itok=cAUz8pya (pCloud) +[15]: https://www.cloudberrylab.com +[16]: /files/images/cloudberry1jpg +[17]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cloudberry_1.jpg?itok=s0aP5xuN (CloudBerry) +[18]: https://www.duplicati.com/ +[19]: https://www.duplicati.com/download +[20]: /files/images/duplicati2jpg +[21]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/duplicati_2.jpg?itok=Xkn8s3jg (Duplicati backup) diff --git a/sources/tech/20190205 5 Streaming Audio Players for Linux.md b/sources/tech/20190205 5 Streaming Audio Players for Linux.md new file mode 100644 index 0000000000..1ddd4552f5 --- /dev/null +++ b/sources/tech/20190205 5 Streaming Audio Players for Linux.md @@ -0,0 +1,172 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 Streaming Audio Players for Linux) +[#]: via: (https://www.linux.com/blog/2019/2/5-streaming-audio-players-linux) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +5 Streaming Audio Players for Linux +====== +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/music-main.png?itok=bTxfvadR) + +As I work, throughout the day, music is always playing in the background. Most often, that music is in the form of vinyl spinning on a turntable. But when I’m not in purist mode, I’ll opt to listen to audio by way of a streaming app. Naturally, I’m on the Linux platform, so the only tools I have at my disposal are those that play well on my operating system of choice. Fortunately, plenty of options exist for those who want to stream audio to their Linux desktops. + +In fact, Linux offers a number of solid offerings for music streaming, and I’ll highlight five of my favorite tools for this task. A word of warning, not all of these players are open source. But if you’re okay running a proprietary app on your open source desktop, you have some really powerful options. Let’s take a look at what’s available. + +### Spotify + +Spotify for Linux isn’t some dumb-downed, half-baked app that crashes every other time you open it, and doesn’t offer the full-range of features found on the macOS and Windows equivalent. In fact, the Linux version of Spotify is exactly the same as you’ll find on other platforms. 
With the Spotify streaming client you can listen to music and podcasts, create playlists, discover new artists, and so much more. And the Spotify interface (Figure 1) is quite easy to navigate and use. + +![Spotify][2] + +Figure 1: The Spotify interface makes it easy to find new music and old favorites. + +[Used with permission][3] + +You can install Spotify either using snap (with the command sudo snap install spotify), or from the official repository, with the following commands: + + * sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 931FF8E79F0876134EDDBDCCA87FF9DF48BF1C90 + + * sudo echo deb stable non-free | sudo tee /etc/apt/sources.list.d/spotify.list + + * sudo apt-get update + + * sudo apt-get install spotify-client + + + + +Once installed, you’ll want to log into your Spotify account, so you can start streaming all of the great music to help motivate you to get your work done. If you have Spotify installed on other devices (and logged into the same account), you can dictate to which device the music should stream (by clicking the Devices Available icon near the bottom right corner of the Spotify window). + +### Clementine + +Clementine one of the best music players available to the Linux platform. Clementine not only allows user to play locally stored music, but to connect to numerous streaming audio services, such as: + + * Amazon Cloud Drive + + * Box + + * Dropbox + + * Icecast + + * Jamendo + + * Magnatune + + * RockRadio.com + + * Radiotunes.com + + * SomaFM + + * SoundCloud + + * Spotify + + * Subsonic + + * Vk.com + + * Or internet radio streams + + + + +There are two caveats to using Clementine. The first is you must be using the most recent version (as the build available in some repositories is out of date and won’t install the necessary streaming plugins). Second, even with the most recent build, some streaming services won’t function as expected. For example, with Spotify, you’ll only have available to you the Top Tracks (and not your playlist … or the ability to search for songs). + +With Clementine Internet radio streaming, you’ll find musicians and bands you’ve never heard of (Figure 2), and plenty of them to tune into. + +![Clementine][5] + +Figure 2: Clementine Internet radio is a great way to find new music. + +[Used with permission][3] + +### Odio + +Odio is a cross-platform, proprietary app (available for Linux, MacOS, and Windows) that allows you to stream internet music stations of all genres. Radio stations are curated from [www.radio-browser.info][6] and the app itself does an incredible job of presenting the streams for you (Figure 3). + + +![Odio][8] + +Figure 3: The Odio interface is one of the best you’ll find. + +[Used with permission][3] + +Odio makes it very easy to find unique Internet radio stations and even add those you find and enjoy to your library. Currently, the only way to install Odio on Linux is via Snap. If your distribution supports snap packages, install this streaming app with the command: + +sudo snap install odio + +Once installed, you can open the app and start using it. There is no need to log into (or create) an account. Odio is very limited in its settings. In fact, it only offers the choice between a dark or light theme in the settings window. However, as limited as it might be, Odio is one of your best bets for playing Internet radio on Linux. + +Streamtuner2 is an outstanding Internet radio station GUI tool. 
With it you can stream music from the likes of: + + * Internet radio stations + + * Jameno + + * MyOggRadio + + * Shoutcast.com + + * SurfMusic + + * TuneIn + + * Xiph.org + + * YouTube + + +### StreamTuner2 + +Streamtuner2 offers a nice (if not slightly outdated) interface, that makes it quite easy to find and stream your favorite music. The one caveat with StreamTuner2 is that it’s really just a GUI for finding the streams you want to hear. When you find a station, double-click on it to open the app associated with the stream. That means you must have the necessary apps installed, in order for the streams to play. If you don’t have the proper apps, you can’t play the streams. Because of this, you’ll spend a good amount of time figuring out what apps to install for certain streams (Figure 4). + +![Streamtuner2][10] + +Figure 4: Configuring Streamtuner2 isn’t for the faint of heart. + +[Used with permission][3] + +### VLC + +VLC has been, for a very long time, dubbed the best media playback tool for Linux. That’s with good reason, as it can play just about anything you throw at it. Included in that list is streaming radio stations. Although you won’t find VLC connecting to the likes of Spotify, you can head over to Internet-Radio, click on a playlist and have VLC open it without a problem. And considering how many internet radio stations are available at the moment, you won’t have any problem finding music to suit your tastes. VLC also includes tools like visualizers, equalizers (Figure 5), and more. + +![VLC ][12] + +Figure 5: The VLC visualizer and equalizer features in action. + +[Used with permission][3] + +The only caveat to VLC is that you do have to have a URL for the Internet Radio you wish you hear, as the tool itself doesn’t curate. But with those links in hand, you won’t find a better media player than VLC. + +### Always More Where That Came From + +If one of these five tools doesn’t fit your needs, I suggest you open your distribution’s app store and search for one that will. There are plenty of tools to make streaming music, podcasts, and more not only possible on Linux, but easy. + +Learn more about Linux through the free ["Introduction to Linux" ][13] course from The Linux Foundation and edX. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2019/2/5-streaming-audio-players-linux + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[2]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/spotify_0.jpg?itok=8-Ym-R61 (Spotify) +[3]: https://www.linux.com/licenses/category/used-permission +[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/clementine_0.jpg?itok=5oODJO3b (Clementine) +[6]: http://www.radio-browser.info +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/odio.jpg?itok=sNPTSS3c (Odio) +[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/streamtuner2.jpg?itok=1MSbafWj (Streamtuner2) +[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/vlc_0.jpg?itok=QEOsq7Ii (VLC ) +[13]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20190205 CFS- Completely fair process scheduling in Linux.md b/sources/tech/20190205 CFS- Completely fair process scheduling in Linux.md new file mode 100644 index 0000000000..be44e75fea --- /dev/null +++ b/sources/tech/20190205 CFS- Completely fair process scheduling in Linux.md @@ -0,0 +1,122 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (CFS: Completely fair process scheduling in Linux) +[#]: via: (https://opensource.com/article/19/2/fair-scheduling-linux) +[#]: author: (Marty kalin https://opensource.com/users/mkalindepauledu) + +CFS: Completely fair process scheduling in Linux +====== +CFS gives every task a fair share of processor resources in a low-fuss but highly efficient way. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fail_progress_cycle_momentum_arrow.png?itok=q-ZFa_Eh) + +Linux takes a modular approach to processor scheduling in that different algorithms can be used to schedule different process types. A scheduling class specifies which scheduling policy applies to which type of process. Completely fair scheduling (CFS), which became part of the Linux 2.6.23 kernel in 2007, is the scheduling class for normal (as opposed to real-time) processes and therefore is named **SCHED_NORMAL**. + +CFS is geared for the interactive applications typical in a desktop environment, but it can be configured as **SCHED_BATCH** to favor the batch workloads common, for example, on a high-volume web server. In any case, CFS breaks dramatically with what might be called "classic preemptive scheduling." Also, the "completely fair" claim has to be seen with a technical eye; otherwise, the claim might seem like an empty boast. + +Let's dig into the details of what sets CFS apart from—indeed, above—other process schedulers. Let's start with a quick review of some core technical terms. + +### Some core concepts + +Linux inherits the Unix view of a process as a program in execution. As such, a process must contend with other processes for shared system resources: memory to hold instructions and data, at least one processor to execute instructions, and I/O devices to interact with the external world. 
Process scheduling is how the operating system (OS) assigns tasks (e.g., crunching some numbers, copying a file) to processors—a running process then performs the task. A process has one or more threads of execution, which are sequences of machine-level instructions. To schedule a process is to schedule one of its threads on a processor. + +In a simplifying move, Linux turns process scheduling into thread scheduling by treating a scheduled process as if it were single-threaded. If a process is multi-threaded with N threads, then N scheduling actions would be required to cover the threads. Threads within a multi-threaded process remain related in that they share resources such as memory address space. Linux threads are sometimes described as lightweight processes, with the lightweight underscoring the sharing of resources among the threads within a process. + +Although a process can be in various states, two are of particular interest in scheduling. A blocked process is awaiting the completion of some event such as an I/O event. The process can resume execution only after the event completes. A runnable process is one that is not currently blocked. + +A process is processor-bound (aka compute-bound) if it consumes mostly processor as opposed to I/O resources, and I/O-bound in the opposite case; hence, a processor-bound process is mostly runnable, whereas an I/O-bound process is mostly blocked. As examples, crunching numbers is processor-bound, and accessing files is I/O-bound. Although an entire process might be characterized as either processor-bound or I/O-bound, a given process may be one or the other during different stages of its execution. Interactive desktop applications, such as browsers, tend to be I/O-bound. + +A good process scheduler has to balance the needs of processor-bound and I/O-bound tasks, especially in an operating system such as Linux that thrives on so many hardware platforms: desktop machines, embedded devices, mobile devices, server clusters, supercomputers, and more. + +### Classic preemptive scheduling versus CFS + +Unix popularized classic preemptive scheduling, which other operating systems including VAX/VMS, Windows NT, and Linux later adopted. At the center of this scheduling model is a fixed timeslice, the amount of time (e.g., 50ms) that a task is allowed to hold a processor until preempted in favor of some other task. If a preempted process has not finished its work, the process must be rescheduled. This model is powerful in that it supports multitasking (concurrency) through processor time-sharing, even on the single-CPU machines of yesteryear. + +The classic model typically includes multiple scheduling queues, one per process priority: Every process in a higher-priority queue gets scheduled before any process in a lower-priority queue. As an example, VAX/VMS uses 32 priority queues for scheduling. + +CFS dispenses with fixed timeslices and explicit priorities. The amount of time for a given task on a processor is computed dynamically as the scheduling context changes over the system's lifetime. Here is a sketch of the motivating ideas and technical details: + + * Imagine a processor, P, which is idealized in that it can execute multiple tasks simultaneously. For example, tasks T1 and T2 can execute on P at the same time, with each receiving 50% of P's magical processing power. This idealization describes perfect multitasking, which CFS strives to achieve on actual as opposed to idealized processors. CFS is designed to approximate perfect multitasking. 
+ + * The CFS scheduler has a target latency, which is the minimum amount of time—idealized to an infinitely small duration—required for every runnable task to get at least one turn on the processor. If such a duration could be infinitely small, then each runnable task would have had a turn on the processor during any given timespan, however small (e.g., 10ms, 5ns, etc.). Of course, an idealized infinitely small duration must be approximated in the real world, and the default approximation is 20ms. Each runnable task then gets a 1/N slice of the target latency, where N is the number of tasks. For example, if the target latency is 20ms and there are four contending tasks, then each task gets a timeslice of 5ms. By the way, if there is only a single task during a scheduling event, this lucky task gets the entire target latency as its slice. The fair in CFS comes to the fore in the 1/N slice given to each task contending for a processor. + + * The 1/N slice is, indeed, a timeslice—but not a fixed one because such a slice depends on N, the number of tasks currently contending for the processor. The system changes over time. Some processes terminate and new ones are spawned; runnable processes block and blocked processes become runnable. The value of N is dynamic and so, therefore, is the 1/N timeslice computed for each runnable task contending for a processor. The traditional **nice** value is used to weight the 1/N slice: a low-priority **nice** value means that only some fraction of the 1/N slice is given to a task, whereas a high-priority **nice** value means that a proportionately greater fraction of the 1/N slice is given to a task. In summary, **nice** values do not determine the slice, but only modify the 1/N slice that represents fairness among the contending tasks. + + * The operating system incurs overhead whenever a context switch occurs; that is, when one process is preempted in favor of another. To keep this overhead from becoming unduly large, there is a minimum amount of time (with a typical setting of 1ms to 4ms) that any scheduled process must run before being preempted. This minimum is known as the minimum granularity. If many tasks (e.g., 20) are contending for the processor, then the minimum granularity (assume 4ms) might be more than the 1/N slice (in this case, 1ms). If the minimum granularity turns out to be larger than the 1/N slice, the system is overloaded because there are too many tasks contending for the processor—and fairness goes out the window. + + * When does preemption occur? CFS tries to minimize context switches, given their overhead: time spent on a context switch is time unavailable for other tasks. Accordingly, once a task gets the processor, it runs for its entire weighted 1/N slice before being preempted in favor of some other task. Suppose task T1 has run for its weighted 1/N slice, and runnable task T2 currently has the lowest virtual runtime (vruntime) among the tasks contending for the processor. The vruntime records, in nanoseconds, how long a task has run on the processor. In this case, T1 would be preempted in favor of T2. + + * The scheduler tracks the vruntime for all tasks, runnable and blocked. The lower a task's vruntime, the more deserving the task is for time on the processor. CFS accordingly moves low-vruntime tasks towards the front of the scheduling line. Details are forthcoming because the line is implemented as a tree, not a list. + + * How often should the CFS scheduler reschedule? 
There is a simple way to determine the scheduling period. Suppose that the target latency (TL) is 20ms and the minimum granularity (MG) is 4ms:

`TL / MG = (20 / 4) = 5 ## five or fewer tasks are ok`

In this case, five or fewer tasks would allow each task a turn on the processor during the target latency. For example, if the task number is five, each runnable task has a 1/N slice of 4ms, which happens to equal the minimum granularity; if the task number is three, each task gets a 1/N slice of almost 7ms. In either case, the scheduler would reschedule in 20ms, the duration of the target latency.

Trouble occurs if the number of tasks (e.g., 10) exceeds TL / MG because now each task must get the minimum time of 4ms instead of the computed 1/N slice, which is 2ms. In this case, the scheduler would reschedule in 40ms:

`(number of tasks) * MG = (10 * 4) = 40ms ## period = 40ms`




Linux schedulers that predate CFS use heuristics to promote the fair treatment of interactive tasks with respect to scheduling. CFS takes a quite different approach by letting the vruntime facts speak mostly for themselves, which happens to support sleeper fairness. An interactive task, by its very nature, tends to sleep a lot in the sense that it awaits user inputs and so becomes I/O-bound; hence, such a task tends to have a relatively low vruntime, which tends to move the task towards the front of the scheduling line.

### Special features

CFS supports symmetrical multiprocessing (SMP) in which any process (whether kernel or user) can execute on any processor. Yet configurable scheduling domains can be used to group processors for load balancing or even segregation. If several processors share the same scheduling policy, then load balancing among them is an option; if a particular processor has a scheduling policy different from the others, then this processor would be segregated from the others with respect to scheduling.

Configurable scheduling groups are another CFS feature. As an example, consider the Nginx web server that's running on my desktop machine. At startup, this server has a master process and four worker processes, which act as HTTP request handlers. For any HTTP request, the particular worker that handles the request is irrelevant; it matters only that the request is handled in a timely manner, and so the four workers together provide a pool from which to draw a task-handler as requests come in. It thus seems fair to treat the four Nginx workers as a group rather than as individuals for scheduling purposes, and a scheduling group can be used to do just that. The four Nginx workers could be configured to have a single vruntime among them rather than individual vruntimes. Configuration is done in the traditional Linux way, through files. For vruntime-sharing, a file named **cpu.shares** , with the details given through familiar shell commands, would be created.

As noted earlier, Linux supports scheduling classes so that different scheduling policies, together with their implementing algorithms, can coexist on the same platform. A scheduling class is implemented as a code module in C. CFS, the scheduling class described so far, is **SCHED_NORMAL**.
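
As a quick aside, the policy constants are visible from ordinary user space through the standard POSIX scheduling API; nothing here is CFS-specific. The short C program below is just an illustrative sketch: it prints the policy and nice value of the calling process, and when run as a normal task it reports **SCHED_OTHER**, which is the user-space name for **SCHED_NORMAL**.

```
#include <sched.h>         /* sched_getscheduler() and the SCHED_* constants */
#include <sys/resource.h>  /* getpriority() and PRIO_PROCESS */
#include <stdio.h>

int main(void) {
    int policy = sched_getscheduler(0);          /* 0 means "the calling process" */
    int nice_value = getpriority(PRIO_PROCESS, 0);

    const char* name = "unknown";
    if (policy == SCHED_OTHER)     name = "SCHED_OTHER (the user-space name for SCHED_NORMAL)";
    else if (policy == SCHED_FIFO) name = "SCHED_FIFO (real-time)";
    else if (policy == SCHED_RR)   name = "SCHED_RR (real-time)";

    printf("scheduling policy: %s\n", name);
    printf("nice value: %d (weights the task's 1/N slice under CFS)\n", nice_value);
    return 0;
}
```
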
There are also scheduling classes specifically for real-time tasks, **SCHED_FIFO** (first in, first out) and **SCHED_RR** (round robin). Under **SCHED_FIFO** , tasks run to completion; under **SCHED_RR** , tasks run until they exhaust a fixed timeslice and are preempted.
+
+### CFS implementation
+
+CFS requires efficient data structures to track task information and high-performance code to generate the schedules. Let's begin with a central term in scheduling, the runqueue. This is a data structure that represents a timeline for scheduled tasks. Despite the name, the runqueue need not be implemented in the traditional way, as a FIFO list. CFS breaks with tradition by using a time-ordered red-black tree as a runqueue. The data structure is well-suited for the job because it is a self-balancing binary search tree, with efficient **insert** and **remove** operations that execute in **O(log N)** time, where N is the number of nodes in the tree. Also, a tree is an excellent data structure for organizing entities into a hierarchy based on a particular property, in this case a vruntime.
+
+In CFS, the tree's internal nodes represent tasks to be scheduled, and the tree as a whole, like any runqueue, represents a timeline for task execution. Red-black trees are in wide use beyond scheduling; for example, Java uses this data structure to implement its **TreeMap**.
+
+Under CFS, every processor has a specific runqueue of tasks, and no task occurs at the same time in more than one runqueue. Each runqueue is a red-black tree. The tree's internal nodes represent tasks or task groups, and these nodes are indexed by their vruntime values so that (in the tree as a whole or in any subtree) the internal nodes to the left have lower vruntime values than the ones to the right:
+
+```
+    25     ## 25 is a task vruntime
+    /\
+  17  29   ## 17 roots the left subtree, 29 the right one
+  /\  ...
+ 5  19     ## and so on
+...  \
+     nil   ## leaf nodes are nil
+```
+
+In summary, tasks with the lowest vruntime—and, therefore, the greatest need for a processor—reside somewhere in the left subtree; tasks with relatively high vruntimes congregate in the right subtree. A preempted task would go into the right subtree, thus giving other tasks a chance to move leftwards in the tree. A task with the smallest vruntime winds up in the tree's leftmost (internal) node, which is thus the front of the runqueue.
+
+The CFS scheduler has an instance, the C **task_struct** , to track detailed information about each task to be scheduled. This structure embeds a **sched_entity** structure, which in turn has scheduling-specific information, in particular, the vruntime per task or task group:
+
+```
+struct task_struct {       /** info on a task **/
+  ...
+  struct sched_entity se;  /** vruntime, etc. **/
+  ...
+};
+```
+
+The red-black tree is implemented in familiar C fashion, with a premium on pointers for efficiency. A **cfs_rq** structure instance embeds a **rb_root** field named **tasks_timeline** , which points to the root of a red-black tree. Each of the tree's internal nodes has pointers to the parent and the two child nodes; the leaf nodes have nil as their value.
+
+CFS illustrates how a straightforward idea—give every task a fair share of processor resources—can be implemented in a low-fuss but highly efficient way. 
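+
+These structures can also be observed on a live system. If the kernel was built with CONFIG_SCHED_DEBUG (true for most distribution kernels), /proc/<pid>/sched exports a task's sched_entity fields, including its vruntime; the value shown below is only illustrative, and the exact field formatting differs slightly across kernel versions.
+
+```
+# Peek at the virtual runtime of the current shell's scheduling entity.
+$ grep vruntime /proc/$$/sched
+se.vruntime                                  :        215735.583419
+```
+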
It's worth repeating that CFS achieves fair and efficient scheduling without traditional artifacts such as fixed timeslices and explicit task priorities. The pursuit of even better schedulers goes on, of course; for the moment, however, CFS is as good as it gets for general-purpose processor scheduling. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/fair-scheduling-linux + +作者:[Marty kalin][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mkalindepauledu +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md new file mode 100644 index 0000000000..7ce1201c4f --- /dev/null +++ b/sources/tech/20190205 Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS.md @@ -0,0 +1,443 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS) +[#]: via: (https://www.ostechnix.com/install-apache-mysql-php-lamp-stack-on-ubuntu-18-04-lts/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS +====== + +![](https://www.ostechnix.com/wp-content/uploads/2019/02/lamp-720x340.jpg) + +**LAMP** stack is a popular, open source web development platform that can be used to run and deploy dynamic websites and web-based applications. Typically, LAMP stack consists of Apache webserver, MariaDB/MySQL databases, PHP/Python/Perl programming languages. LAMP is the acronym of **L** inux, **M** ariaDB/ **M** YSQL, **P** HP/ **P** ython/ **P** erl. This tutorial describes how to install Apache, MySQL, PHP (LAMP stack) in Ubuntu 18.04 LTS server. + +### Install Apache, MySQL, PHP (LAMP) Stack On Ubuntu 18.04 LTS + +For the purpose of this tutorial, we will be using the following Ubuntu testbox. + + * **Operating System** : Ubuntu 18.04.1 LTS Server Edition + * **IP address** : 192.168.225.22/24 + + + +#### 1. Install Apache web server + +First of all, update Ubuntu server using commands: + +``` +$ sudo apt update + +$ sudo apt upgrade +``` + +Next, install Apache web server: + +``` +$ sudo apt install apache2 +``` + +Check if Apache web server is running or not: + +``` +$ sudo systemctl status apache2 +``` + +Sample output would be: + +``` +● apache2.service - The Apache HTTP Server + Loaded: loaded (/lib/systemd/system/apache2.service; enabled; vendor preset: en + Drop-In: /lib/systemd/system/apache2.service.d + └─apache2-systemd.conf + Active: active (running) since Tue 2019-02-05 10:48:03 UTC; 1min 5s ago + Main PID: 2025 (apache2) + Tasks: 55 (limit: 2320) + CGroup: /system.slice/apache2.service + ├─2025 /usr/sbin/apache2 -k start + ├─2027 /usr/sbin/apache2 -k start + └─2028 /usr/sbin/apache2 -k start + +Feb 05 10:48:02 ubuntuserver systemd[1]: Starting The Apache HTTP Server... +Feb 05 10:48:03 ubuntuserver apachectl[2003]: AH00558: apache2: Could not reliably +Feb 05 10:48:03 ubuntuserver systemd[1]: Started The Apache HTTP Server. +``` + +Congratulations! Apache service is up and running!! 
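+
+Before moving on, you may also want to make sure Apache is enabled to start at boot (it usually already is) and do a quick sanity check from the terminal itself (install curl first if it is not present). The version string below is just an example of what the reply might look like:
+
+```
+$ sudo systemctl enable apache2
+$ curl -I http://localhost
+HTTP/1.1 200 OK
+Server: Apache/2.4.29 (Ubuntu)
+```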
+ +##### 1.1 Adjust firewall to allow Apache web server + +By default, the apache web browser can’t be accessed from remote systems if you have enabled the UFW firewall in Ubuntu 18.04 LTS. You must allow the http and https ports by following the below steps. + +First, list out the application profiles available on your Ubuntu system using command: + +``` +$ sudo ufw app list +``` + +Sample output: + +``` +Available applications: +Apache +Apache Full +Apache Secure +OpenSSH +``` + +As you can see, Apache and OpenSSH applications have installed UFW profiles. You can list out information about each profile and its included rules using “ **ufw app info “Profile Name”** command. + +Let us look into the **“Apache Full”** profile. To do so, run: + +``` +$ sudo ufw app info "Apache Full" +``` + +Sample output: + +``` +Profile: Apache Full +Title: Web Server (HTTP,HTTPS) +Description: Apache v2 is the next generation of the omnipresent Apache web +server. + +Ports: +80,443/tcp +``` + +As you see, “Apache Full” profile has included the rules to enable traffic to the ports **80** and **443** : + +Now, run the following command to allow incoming HTTP and HTTPS traffic for this profile: + +``` +$ sudo ufw allow in "Apache Full" +Rules updated +Rules updated (v6) +``` + +If you don’t want to allow https traffic, but only http (80) traffic, run: + +``` +$ sudo ufw app info "Apache" +``` + +##### 1.2 Test Apache Web server + +Now, open your web browser and access Apache test page by navigating to **** or ****. + +![](https://www.ostechnix.com/wp-content/uploads/2016/06/apache-2.png) + +If you are see a screen something like above, you are good to go. Apache server is working! + +#### 2. Install MySQL + +To install MySQL On Ubuntu, run: + +``` +$ sudo apt install mysql-server +``` + +Verify if MySQL service is running or not using command: + +``` +$ sudo systemctl status mysql +``` + +**Sample output:** + +``` +● mysql.service - MySQL Community Server +Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enab +Active: active (running) since Tue 2019-02-05 11:07:50 UTC; 17s ago +Main PID: 3423 (mysqld) +Tasks: 27 (limit: 2320) +CGroup: /system.slice/mysql.service +└─3423 /usr/sbin/mysqld --daemonize --pid-file=/run/mysqld/mysqld.pid + +Feb 05 11:07:49 ubuntuserver systemd[1]: Starting MySQL Community Server... +Feb 05 11:07:50 ubuntuserver systemd[1]: Started MySQL Community Server. +``` + +Mysql is running! + +##### 2.1 Setup database administrative user (root) password + +By default, MySQL **root** user password is blank. You need to secure your MySQL server by running the following script: + +``` +$ sudo mysql_secure_installation +``` + +You will be asked whether you want to setup **VALIDATE PASSWORD plugin** or not. This plugin allows the users to configure strong password for database credentials. If enabled, It will automatically check the strength of the password and enforces the users to set only those passwords which are secure enough. **It is safe to leave this plugin disabled**. However, you must use a strong and unique password for database credentials. If don’t want to enable this plugin, just press any key to skip the password validation part and continue the rest of the steps. + +If your answer is **Yes** , you will be asked to choose the level of password validation. + +``` +Securing the MySQL server deployment. + +Connecting to MySQL using a blank password. + +VALIDATE PASSWORD PLUGIN can be used to test passwords +and improve security. 
It checks the strength of password +and allows the users to set only those passwords which are +secure enough. Would you like to setup VALIDATE PASSWORD plugin? + +Press y|Y for Yes, any other key for No y +``` + +The available password validations are **low** , **medium** and **strong**. Just enter the appropriate number (0 for low, 1 for medium and 2 for strong password) and hit ENTER key. + +``` +There are three levels of password validation policy: + +LOW Length >= 8 +MEDIUM Length >= 8, numeric, mixed case, and special characters +STRONG Length >= 8, numeric, mixed case, special characters and dictionary file + +Please enter 0 = LOW, 1 = MEDIUM and 2 = STRONG: +``` + +Now, enter the password for MySQL root user. Please be mindful that you must use password for mysql root user depending upon the password policy you choose in the previous step. If you didn’t enable the plugin, just use any strong and unique password of your choice. + +``` +Please set the password for root here. + +New password: + +Re-enter new password: + +Estimated strength of the password: 50 +Do you wish to continue with the password provided?(Press y|Y for Yes, any other key for No) : y +``` + +Once you entered the password twice, you will see the password strength (In our case it is **50** ). If it is OK for you, press Y to continue with the provided password. If not satisfied with password length, press any other key and set a strong password. I am OK with my current password, so I chose **y**. + +For the rest of questions, just type **y** and hit ENTER. This will remove anonymous user, disallow root user login remotely and remove test database. + +``` +Remove anonymous users? (Press y|Y for Yes, any other key for No) : y +Success. + +Normally, root should only be allowed to connect from +'localhost'. This ensures that someone cannot guess at +the root password from the network. + +Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y +Success. + +By default, MySQL comes with a database named 'test' that +anyone can access. This is also intended only for testing, +and should be removed before moving into a production +environment. + +Remove test database and access to it? (Press y|Y for Yes, any other key for No) : y +- Dropping test database... +Success. + +- Removing privileges on test database... +Success. + +Reloading the privilege tables will ensure that all changes +made so far will take effect immediately. + +Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y +Success. + +All done! +``` + +That’s it. Password for MySQL root user has been set. + +##### 2.2 Change authentication method for MySQL root user + +By default, MySQL root user is set to authenticate using the **auth_socket** plugin in MySQL 5.7 and newer versions on Ubuntu. Even though it enhances the security, it will also complicate things when you access your database server using any external programs, for example phpMyAdmin. To fix this issue, you need to change authentication method from **auth_socket** to **mysql_native_password**. 
To do so, login to your MySQL prompt using command: + +``` +$ sudo mysql +``` + +Run the following command at the mysql prompt to find the current authentication method for all mysql user accounts: + +``` +SELECT user,authentication_string,plugin,host FROM mysql.user; +``` + +**Sample output:** + +``` ++------------------|-------------------------------------------|-----------------------|-----------+ +| user | authentication_string | plugin | host | ++------------------|-------------------------------------------|-----------------------|-----------+ +| root | | auth_socket | localhost | +| mysql.session | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost | +| mysql.sys | *THISISNOTAVALIDPASSWORDTHATCANBEUSEDHERE | mysql_native_password | localhost | +| debian-sys-maint | *F126737722832701DD3979741508F05FA71E5BA0 | mysql_native_password | localhost | ++------------------|-------------------------------------------|-----------------------|-----------+ +4 rows in set (0.00 sec) +``` + +![][2] + +As you see, mysql root user uses `auth_socket` plugin for authentication. + +To change this authentication to **mysql_native_password** method, run the following command at mysql prompt. Don’t forget to replace **“password”** with a strong and unique password of your choice. If you have enabled VALIDATION plugin, make sure you have used a strong password based on the current policy requirements. + +``` +ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password'; +``` + +Update the changes using command: + +``` +FLUSH PRIVILEGES; +``` + +Now check again if the authentication method is changed or not using command: + +``` +SELECT user,authentication_string,plugin,host FROM mysql.user; +``` + +Sample output: + +![][3] + +Good! Now the myql root user can authenticate using password to access mysql shell. + +Exit from the mysql prompt: + +``` +exit +``` + +#### 3\. Install PHP + +To install PHP, run: + +``` +$ sudo apt install php libapache2-mod-php php-mysql +``` + +After installing PHP, create **info.php** file in the Apache root document folder. Usually, the apache root document folder will be **/var/www/html/** or **/var/www/** in most Debian based Linux distributions. In Ubuntu 18.04 LTS, it is **/var/www/html/**. + +Let us create **info.php** file in the apache root folder: + +``` +$ sudo vi /var/www/html/info.php +``` + +Add the following lines: + +``` + +``` + +Press ESC key and type **:wq** to save and quit the file. Restart apache service to take effect the changes. + +``` +$ sudo systemctl restart apache2 +``` + +##### 3.1 Test PHP + +Open up your web browser and navigate to **** URL. + +You will see the php test page now. + +![](https://www.ostechnix.com/wp-content/uploads/2019/02/php-test-page.png) + +Usually, when a user requests a directory from the web server, Apache will first look for a file named **index.html**. If you want to change Apache to serve php files rather than others, move **index.php** to first position in the **dir.conf** file as shown below + +``` +$ sudo vi /etc/apache2/mods-enabled/dir.conf +``` + +Here is the contents of the above file. + +``` + +DirectoryIndex index.html index.cgi index.pl index.php index.xhtml index.htm + + +# vim: syntax=apache ts=4 sw=4 sts=4 sr noet +``` + +Move the “index.php” file to first. Once you made the changes, your **dir.conf** file will look like below. 
+ +``` + +DirectoryIndex index.php index.html index.cgi index.pl index.xhtml index.htm + + +# vim: syntax=apache ts=4 sw=4 sts=4 sr noet +``` + +Press **ESC** key and type **:wq** to save and close the file. Restart Apache service to take effect the changes. + +``` +$ sudo systemctl restart apache2 +``` + +##### 3.2 Install PHP modules + +To improve the functionality of PHP, you can install some additional PHP modules. + +To list the available PHP modules, run: + +``` +$ sudo apt-cache search php- | less +``` + +**Sample output:** + +![][4] + +Use the arrow keys to go through the result. To exit, type **q** and hit ENTER key. + +To find the details of any particular php module, for example **php-gd** , run: + +``` +$ sudo apt-cache show php-gd +``` + +To install a php module run: + +``` +$ sudo apt install php-gd +``` + +To install all modules (not necessary though), run: + +``` +$ sudo apt-get install php* +``` + +Do not forget to restart Apache service after installing any php module. To check if the module is loaded or not, open info.php file in your browser and check if it is present. + +Next, you might want to install any database management tools to easily manage databases via a web browser. If so, install phpMyAdmin as described in the following link. + +Congratulations! We have successfully setup LAMP stack in Ubuntu 18.04 LTS server. + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/install-apache-mysql-php-lamp-stack-on-ubuntu-18-04-lts/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: http://www.ostechnix.com/wp-content/uploads/2019/02/mysql-1.png +[3]: http://www.ostechnix.com/wp-content/uploads/2019/02/mysql-2.png +[4]: http://www.ostechnix.com/wp-content/uploads/2016/06/php-modules.png diff --git a/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md b/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md new file mode 100644 index 0000000000..e8722c63cc --- /dev/null +++ b/sources/tech/20190205 Installing Kali Linux on VirtualBox- Quickest - Safest Way.md @@ -0,0 +1,146 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Installing Kali Linux on VirtualBox: Quickest & Safest Way) +[#]: via: (https://itsfoss.com/install-kali-linux-virtualbox/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Installing Kali Linux on VirtualBox: Quickest & Safest Way +====== + +_**This tutorial shows you how to install Kali Linux on Virtual Box in Windows and Linux in the quickest way possible.**_ + +[Kali Linux][1] is one of the [best Linux distributions for hacking][2] and security enthusiasts. + +Since it deals with a sensitive topic like hacking, it’s like a double-edged sword. We have discussed it in the detailed Kali Linux review in the past so I am not going to bore you with the same stuff again. + +While you can install Kali Linux by replacing the existing operating system, using it via a virtual machine would be a better and safer option. + +With Virtual Box, you can use Kali Linux as a regular application in your Windows/Linux system. 
It’s almost the same as running VLC or a game in your system. + +Using Kali Linux in a virtual machine is also safe. Whatever you do inside Kali Linux will NOT impact your ‘host system’ (i.e. your original Windows or Linux operating system). Your actual operating system will be untouched and your data in the host system will be safe. + +![][3] + +### How to install Kali Linux on VirtualBox + +I’ll be using [VirtualBox][4] here. It is a wonderful open source virtualization solution for just about anyone (professional or personal use). It’s available free of cost. + +In this tutorial, we will talk about Kali Linux in particular but you can install almost any other OS whose ISO file exists or a pre-built virtual machine save file is available. + +**Note:** _The same steps apply for Windows/Linux running VirtualBox._ + +As I already mentioned, you can have either Windows or Linux installed as your host. But, in this case, I have Windows 10 installed (don’t hate me!) where I try to install Kali Linux in VirtualBox step by step. + +And, the best part is – even if you happen to use a Linux distro as your primary OS, the same steps will be applicable! + +Wondering, how? Let’s see… + +[Subscribe to Our YouTube Channel for More Linux Videos][5] + +### Step by Step Guide to install Kali Linux on VirtualBox + +_We are going to use a custom Kali Linux image made for VirtualBox specifically. You can also download the ISO file for Kali Linux and create a new virtual machine – but why do that when you have an easy alternative?_ + +#### 1\. Download and install VirtualBox + +The first thing you need to do is to download and install VirtualBox from Oracle’s official website. + +[Download VirtualBox][6] + +Once you download the installer, just double click on it to install VirtualBox. It’s the same for [installing VirtualBox on Ubuntu][7]/Fedora Linux as well. + +#### 2\. Download ready-to-use virtual image of Kali Linux + +After installing it successfully, head to [Offensive Security’s download page][8] to download the VM image for VirtualBox. If you change your mind to utilize [VMware][9], that is available too. + +![][10] + +As you can see the file size is well over 3 GB, you should either use the torrent option or download it using a [download manager][11]. + +[Kali Linux Virtual Image][8] + +#### 3\. Install Kali Linux on Virtual Box + +Once you have installed VirtualBox and downloaded the Kali Linux image, you just need to import it to VirtualBox in order to make it work. + +Here’s how to import the VirtualBox image for Kali Linux: + +**Step 1** : Launch VirtualBox. You will notice an **Import** button – click on it + +![Click on Import button][12] + +**Step 2:** Next, browse the file you just downloaded and choose it to be imported (as you can see in the image below). The file name should start with ‘kali linux‘ and end with . **ova** extension. + +![Importing Kali Linux image][13] + +**S** Once selected, proceed by clicking on **Next**. + +**Step 3** : Now, you will be shown the settings for the virtual machine you are about to import. So, you can customize them or not – that is your choice. It is okay if you go with the default settings. + +You need to select a path where you have sufficient storage available. I would never recommend the **C:** drive on Windows. + +![Import hard drives as VDI][14] + +Here, the hard drives as VDI refer to virtually mount the hard drives by allocating the storage space set. + +After you are done with the settings, hit **Import** and wait for a while. 
+ +**Step 4:** You will now see it listed. So, just hit **Start** to launch it. + +You might get an error at first for USB port 2.0 controller support, you can disable it to resolve it or just follow the on-screen instruction of installing an additional package to fix it. And, you are done! + +![Kali Linux running in VirtualBox][15] + +The default username in Kali Linux is root and the default password is toor. You should be able to login to the system with it. + +Do note that you should [update Kali Linux][16] before trying to install a new applications or trying to hack your neighbor’s WiFi. + +I hope this guide helps you easily install Kali Linux on Virtual Box. Of course, Kali Linux has a lot of useful tools in it for penetration testing – good luck with that! + +**Tip** : Both Kali Linux and Ubuntu are Debian-based. If you face any issues or error with Kali Linux, you may follow the tutorials intended for Ubuntu or Debian on the internet. + +### Bonus: Free Kali Linux Guide Book + +If you are just starting with Kali Linux, it will be a good idea to know how to use Kali Linux. + +Offensive Security, the company behind Kali Linux, has created a guide book that explains the basics of Linux, basics of Kali Linux, configuration, setups. It also has a few chapters on penetration testing and security tools. + +Basically, it has everything you need to get started with Kali Linux. And the best thing is that the book is available to download for free. + +[Download Kali Linux Revealed for FREE][17] + +Let us know in the comments below if you face an issue or simply share your experience with Kali Linux on VirtualBox. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-kali-linux-virtualbox/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://www.kali.org/ +[2]: https://itsfoss.com/linux-hacking-penetration-testing/ +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box.png?resize=800%2C450&ssl=1 +[4]: https://www.virtualbox.org/ +[5]: https://www.youtube.com/c/itsfoss?sub_confirmation=1 +[6]: https://www.virtualbox.org/wiki/Downloads +[7]: https://itsfoss.com/install-virtualbox-ubuntu/ +[8]: https://www.offensive-security.com/kali-linux-vm-vmware-virtualbox-image-download/ +[9]: https://itsfoss.com/install-vmware-player-ubuntu-1310/ +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-virtual-box-image.jpg?resize=800%2C347&ssl=1 +[11]: https://itsfoss.com/4-best-download-managers-for-linux/ +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-import-kali-linux.jpg?ssl=1 +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-linux-next.jpg?ssl=1 +[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmbox-kali-linux-settings.jpg?ssl=1 +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kali-linux-on-windows-virtualbox.jpg?resize=800%2C429&ssl=1 +[16]: https://linuxhandbook.com/update-kali-linux/ +[17]: https://kali.training/downloads/Kali-Linux-Revealed-1st-edition.pdf diff --git a/sources/tech/20190206 Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI.md b/sources/tech/20190206 Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI.md new file 
mode 100644 index 0000000000..603ae570eb --- /dev/null +++ b/sources/tech/20190206 Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI.md @@ -0,0 +1,96 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI) +[#]: via: (https://itsfoss.com/flowblade-video-editor-release/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Flowblade 2.0 is Here with New Video Editing Tools and a Refreshed UI +====== + +[Flowblade][1] is one of the rare [video editors that are only available for Linux][2]. It is not the feature set – but the simplicity, flexibility, and being an open source project that counts. + +However, with Flowblade 2.0 – released recently – it is now more powerful and useful. A lot of new tools along with a complete overhaul in the workflow can be seen. + +In this article, we shall take a look at what’s new with Flowblade 2.0. + +### New Features in Flowblade 2.0 + +Here are some of the major new changes in the latest release of Flowblade. + +#### GUI Updates + +![Flowblade 2.0][3] + +This was a much needed change. I’m always looking for open source solutions that works as expected along with a great GUI. + +So, in this update, you will observe a new custom theme set as the default – which looks good though. + +Overall, the panel design, the toolbox and stuff has been taken care of to make it look modern. The overhaul considers small changes like the cursor icon upon tool selection and so on. + +#### Workflow Overhaul + +No matter what features you get to utilize, the workflow matters to people who regularly edit videos. So, it has to be intuitive. + +With the recent release, they have made sure that you can configure and set the workflow as per your preference. Well, that is definitely flexible because not everyone has the same requirement. + +#### New Tools + +![Flowblade Video Editor Interface][4] + +**Keyframe tool** : This enables editing and adjusting the Volume and Brightness [keyframes][5] on timeline. + +**Multitrim** : A combination of trill, roll, and slip tool. + +**Cut:** Available now as a tool in addition to the traditional cut at the playhead. + +**Ripple trim:** It is a mode of Trim tool – not often used by many – now available as a separate tool. + +#### More changes? + +In addition to these major changes listed above, they have added some keyframe editing updates and compositors ( _AlphaXOR, Alpha Out, and Alpha_ ) to utilize alpha channel data to combine images. + +A lot of more tiny little changes have taken place as well – you can check those out in the official [changelog][6] on GitHub. + +### Installing Flowblade 2.0 + +If you use Debian or Ubuntu based Linux distributions, there are .deb binaries available for easily installing Flowblade 2.0. + +For the rest, you’ll have to [install it using the source code][7]. + +All the files are available on it’s GitHub page. You can download it from the page below. + +[Download Flowblade 2.0][8] + +### Wrapping Up + +If you are interested in video editing, perhaps you would like to follow the development of [Olive][9], a new open source video editor in development. + +Now that you know about the latest changes and additions. What do you think about Flowblade 2.0 as a video editor? Is it good enough for you? + +Let us know your thoughts in the comments below. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/flowblade-video-editor-release/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/jliljebl/flowblade +[2]: https://itsfoss.com/best-video-editing-software-linux/ +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2.jpg?ssl=1 +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/flowblade-2-1.jpg?resize=800%2C450&ssl=1 +[5]: https://en.wikipedia.org/wiki/Key_frame +[6]: https://github.com/jliljebl/flowblade/blob/master/flowblade-trunk/docs/RELEASE_NOTES.md +[7]: https://itsfoss.com/install-software-from-source-code/ +[8]: https://github.com/jliljebl/flowblade/releases/tag/v2.0 +[9]: https://itsfoss.com/olive-video-editor/ diff --git a/sources/tech/20190207 Review of Debian System Administrator-s Handbook.md b/sources/tech/20190207 Review of Debian System Administrator-s Handbook.md new file mode 100644 index 0000000000..7b51459c6b --- /dev/null +++ b/sources/tech/20190207 Review of Debian System Administrator-s Handbook.md @@ -0,0 +1,133 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Review of Debian System Administrator’s Handbook) +[#]: via: (https://itsfoss.com/debian-administrators-handbook/) +[#]: author: (Shirish https://itsfoss.com/author/shirish/) + +Review of Debian System Administrator’s Handbook +====== + +_**Debian System Administrator’s Handbook is a free-to-download book that covers all the essential part of Debian that a sysadmin might need.**_ + +This has been on my to-do review list for quite some time. The book was started by two French Debian Developers Raphael Hertzog and Roland Mas to increase awareness about the Debian project in France. The book was a huge hit among francophone Linux users. The English translation followed soon after that. + +### Debian Administrator’s Handbook + +![][1] + +[Debian Administrator’s Handbook][2] is targeted from a newbie who may be looking to understand what the [Debian project][3] is all about to somebody who might be running a Debian in a production server. + +The latest version of the book covers Debian 8 while the current stable version is Debian 9. But it doesn’t mean that book is outdated and is of no use to Debian 9 users. Most of the part of the book is valid for all Debian and Linux users. + +Let me give you a quick summary of what this book covers. + +#### Section 1 – Debian Project + +The first section sets the tone of the book where it gives a solid foundation to somebody who might be looking into Debian as to what it actually means. Some of it will probably be updated to match the current scenario. + +#### Section 2 – Using fictional case studies for different needs + +The second section deals with the various case-scenarios as to where Debian could be used. The idea being how Debian can be used in various hierarchical or functional scenarios. One aspect which I felt that should have stressed upon is the culture mindshift and openness which at least should have been mentioned. + +#### Section 3 & 4- Setups and Installation + +The third section goes into looking in existing setups. 
I do think it should have stressed more into documenting existing setups, migrating partial services and users before making a full-fledged transition. While all of the above seem minor points, I have seen many of them come and bit me on the back during a transition. + +Section Four covers the various ways you could install, how the installation process flows and things to keep in mind before installing a Debian System. Unfortunately, UEFI was not present at that point so it was not talked about. + +#### Section 5 & 6 – Packaging System and Updates + +Section Five starts on how a binary package is structured and then goes on to tell how a source package is structured as well. It does mention several gotchas or tricky ways in which a sys-admin can be caught. + +Section Six is perhaps where most of the sysadmins spend most of the time apart from troubleshooting which is another chapter altogether. While it starts from many of the most often used sysadmin commands, the interesting point which I liked was on page 156 which is on better solver algorithims. + +#### Section 7 – Solving Problems and finding Relevant Solutions + +Section Seven, on the other hand, speaks of the various problem scenarios and various ways when you find yourself with a problem. In Debian and most GNU/Linux distributions, the keyword is ‘patience’. If you are patient then many problems in Debian are resolved or can be resolved after a good night’s sleep. + +#### Section 8 – Basic Configuration, Network, Accounts, Printing + +Section Eight introduces you to the basics of networking and having single or multiple user accounts on the workstation. It goes a bit into user and group configuration and practices then gives a brief introduction to the bash shell and gets a brief overview of the [CUPS][4] printing daemon. There is much to explore here. + +#### Section 9 – Unix Service + +Section 9 starts with the introduction to specific Unix services. While it starts with the much controversial, hated and reviled in many quarters [systemd][5], they also shared System V which is still used by many a sysadmin. + +#### Section 10, 11 & 12 – Networking and Adminstration + +Section 10 makes you dive into network infrastructure where it goes into the basics of Virtual Private Networks (OpenVPN), OpenSSH, the PKI credentials and some basics of information security. It also gets into basics of DNS, DHCP and IPv6 and ends with some tools which could help in troubleshooting network issues. + +Section 11 starts with basic configuration and workflow of mail server and postfix. It tries to a bit into depth as there is much to play with. It then goes into the popular web server Apache, FTP File server, NFS and CIFS with Windows shares via Samba. Again, much to explore therein. + +Section 12 starts with Advanced Administration topics such as RAID, LVM, when one is better than the other. Then gets into Virtualization, Xen and give brief about lxc. Again, there is much more to explore than shared herein. + +![Author Raphael Hertzog at a Debian booth circa 2013 | Image Credit][6] + +#### Section 13 – Workstation + +Section 13 shares about having schemas for xserver, display managers, window managers, menu management, the different desktops i.e. GNOME, KDE, XFCE and others. It does mention about lxde in the others. The one omission I felt which probably will be updated in a new release would be [Wayland][7] and [Xwayland][8]. Again much to explore in this section as well. 
This is rectified in the conclusion + +#### Section 14 – Security + +Section 14 is somewhat comprehensive on what constitues security and bits of threats analysis but stops short as it shares in the introduction of the chapter itself that it’s a vast topic. + +#### Section 15 – Creating a Debian package + +Section 15 explains the tools and processes to ‘ _debianize_ ‘ an application so it becomes part of the Debian archive and available for distribution on the 10 odd hardware architectures that Debian supports. + +### Pros and Cons + +Where Raphael and Roland have excelled is at breaking the visual monotony of the book by using a different style and structure wherever possible from the rest of the reading material. This compels the reader to refresh her eyes while at the same time focus on the important matter at the hand. The different visual style also indicates that this is somewhat more important from the author’s point of view. + +One of the drawbacks, if I may call it that, is the absolute absence of humor in the book. + +### Final Thoughts + +I have been [using Debian][9] for a decade so lots of it was a refresher for myself. Some of it is outdated if I look it from a buster perspective but is invaluable as a historical artifact. + +If you are looking to familiarize yourself with Debian or looking to run Debian 8 or 9 as a production server for your business wouldn’t be able to recommend a better book than this. + +### Download Debian Administrator’s Handbook + +The Debian Handbook has been available in every Debian release after 2012. The [liberation][10] of the Debian Handbook was done in 2012 using [ulule][11]. + +You can download an electronic version of the Debian Administrator’s Handbook in PDF, ePub or Mobi format from the link below: + +[Download Debian Administrator’s Handbook][12] + +You can also buy the book paperback edition of the book if you want to support the amazing work of the authors. + +[Buy the paperback edition][13] + +Lastly, if you want to motivate Raphael, you can reward by donating to his PayPal [account][14]. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/debian-administrators-handbook/ + +作者:[Shirish][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/shirish/ +[b]: https://github.com/lujun9972 +[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/Debian-Administrators-Handbook-review.png?resize=800%2C450&ssl=1 +[2]: https://debian-handbook.info/ +[3]: https://www.debian.org/ +[4]: https://www.cups.org +[5]: https://itsfoss.com/systemd-features/ +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/stand-debian-Raphael.jpg?resize=800%2C600&ssl=1 +[7]: https://wayland.freedesktop.org/ +[8]: https://en.wikipedia.org/wiki/X.Org_Server#XWayland +[9]: https://itsfoss.com/reasons-why-i-love-debian/ +[10]: https://debian-handbook.info/liberation/ +[11]: https://www.ulule.com/debian-handbook/ +[12]: https://debian-handbook.info/get/now/ +[13]: https://debian-handbook.info/get/ +[14]: https://raphaelhertzog.com/ diff --git a/sources/tech/20190208 3 Ways to Install Deb Files on Ubuntu Linux.md b/sources/tech/20190208 3 Ways to Install Deb Files on Ubuntu Linux.md new file mode 100644 index 0000000000..55c1067d12 --- /dev/null +++ b/sources/tech/20190208 3 Ways to Install Deb Files on Ubuntu Linux.md @@ -0,0 +1,185 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (3 Ways to Install Deb Files on Ubuntu Linux) +[#]: via: (https://itsfoss.com/install-deb-files-ubuntu) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +3 Ways to Install Deb Files on Ubuntu Linux +====== + +**This beginner article explains how to install deb packages in Ubuntu. It also shows you how to remove those deb packages afterwards.** + +This is another article in the Ubuntu beginner series. If you are absolutely new to Ubuntu, you might wonder about [how to install applications][1]. + +The easiest way is to use the Ubuntu Software Center. Search for an application by its name and install it from there. + +Life would be too simple if you could find all the applications in the Software Center. But that does not happen, unfortunately. + +Some software are available via DEB packages. These are archived files that end with .deb extension. + +You can think of .deb files as the .exe files in Windows. You double click on the .exe file and it starts the installation procedure in Windows. DEB packages are pretty much the same. + +You can find these DEB packages from the download section of the software provider’s website. For example, if you want to [install Google Chrome on Ubuntu][2], you can download the DEB package of Chrome from its website. + +Now the question arises, how do you install deb files? There are multiple ways of installing DEB packages in Ubuntu. I’ll show them to you one by one in this tutorial. + +![Install deb files in Ubuntu][3] + +### Installing .deb files in Ubuntu and Debian-based Linux Distributions + +You can choose a GUI tool or a command line tool for installing a deb package. The choice is yours. + +Let’s go on and see how to install deb files. + +#### Method 1: Use the default Software Center + +The simplest method is to use the default software center in Ubuntu. You have to do nothing special here. 
Simply go to the folder where you have downloaded the .deb file (it should be the Downloads folder) and double click on this file. + +![Google Chrome deb file on Ubuntu][4]Double click on the downloaded .deb file to start installation + +It will open the software center and you should see the option to install the software. All you have to do is to hit the install button and enter your login password. + +![Install Google Chrome in Ubuntu Software Center][5]The installation of deb file will be carried out via Software Center + +See, it’s even simple than installing from a .exe files on Windows, isn’t it? + +#### Method 2: Use Gdebi application for installing deb packages with dependencies + +Again, life would be a lot simpler if things always go smooth. But that’s not life as we know it. + +Now that you know that .deb files can be easily installed via Software Center, let me tell you about the dependency error that you may encounter with some packages. + +What happens is that a program may be dependent on another piece of software (libraries). When the developer is preparing the DEB package for you, he/she may assume that your system already has that piece of software on your system. + +But if that’s not the case and your system doesn’t have those required pieces of software, you’ll encounter the infamous ‘dependency error’. + +The Software Center cannot handle such errors on its own so you have to use another tool called [gdebi][6]. + +gdebi is a lightweight GUI application that has the sole purpose of installing deb packages. + +It identifies the dependencies and tries to install these dependencies along with installing the .deb files. + +![gdebi handling dependency while installing deb package][7]Image Credit: [Xmodulo][8] + +Personally, I prefer gdebi over software center for installing deb files. It is a lightweight application so the installation seems quicker. You can read in detail about [using gDebi and making it the default for installing DEB packages][6]. + +You can install gdebi from the software center or using the command below: + +``` +sudo apt install gdebi +``` + +#### Method 3: Install .deb files in command line using dpkg + +If you want to install deb packages in command lime, you can use either apt command or dpkg command. Apt command actually uses [dpkg command][9] underneath it but apt is more popular and easy to use. + +If you want to use the apt command for deb files, use it like this: + +``` +sudo apt install path_to_deb_file +``` + +If you want to use dpkg command for installing deb packages, here’s how to do it: + +``` +sudo dpkg -i path_to_deb_file +``` + +In both commands, you should replace the path_to_deb_file with the path and name of the deb file you have downloaded. + +![Install deb files using dpkg command in Ubuntu][10]Installing deb files using dpkg command in Ubuntu + +If you get a dependency error while installing the deb packages, you may use the following command to fix the dependency issues: + +``` +sudo apt install -f +``` + +### How to remove deb packages + +Removing a deb package is not a big deal as well. And no, you don’t need the original deb file that you had used for installing the program. + +#### Method 1: Remove deb packages using apt commands + +All you need is the name of the program that you have installed and then you can use apt or dpkg to remove that program. + +``` +sudo apt remove program_name +``` + +Now the question comes, how do you find the exact program name that you need to use in the remove command? 
The apt command has a solution for that as well. + +You can find the list of all installed files with apt command but manually going through this will be a pain. So you can use the grep command to search for your package. + +For example, I installed AppGrid application in the previous section but if I want to know the exact program name, I can use something like this: + +``` +sudo apt list --installed | grep grid +``` + +This will give me all the packages that have grid in their name and from there, I can get the exact program name. + +``` +apt list --installed | grep grid +WARNING: apt does not have a stable CLI interface. Use with caution in scripts. +appgrid/now 0.298 all [installed,local] +``` + +As you can see, a program called appgrid has been installed. Now you can use this program name with the apt remove command. + +#### Method 2: Remove deb packages using dpkg commands + +You can use dpkg to find the installed program’s name: + +``` +dpkg -l | grep grid +``` + +The output will give all the packages installed that has grid in its name. + +``` +dpkg -l | grep grid + +ii appgrid 0.298 all Discover and install apps for Ubuntu +``` + +ii in the above command output means package has been correctly installed. + +Now that you have the program name, you can use dpkg command to remove it: + +``` +dpkg -r program_name +``` + +**Tip: Updating deb packages** +Some deb packages (like Chrome) provide updates through system updates but for most other programs, you’ll have to remove the existing program and install the newer version. + +I hope this beginner guide helped you to install deb packages on Ubuntu. I added the remove part so that you’ll have better control over the programs you installed. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-deb-files-ubuntu + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/remove-install-software-ubuntu/ +[2]: https://itsfoss.com/install-chrome-ubuntu/ +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/deb-packages-ubuntu.png?resize=800%2C450&ssl=1 +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/install-google-chrome-ubuntu-4.jpeg?resize=800%2C347&ssl=1 +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/install-google-chrome-ubuntu-5.jpeg?resize=800%2C516&ssl=1 +[6]: https://itsfoss.com/gdebi-default-ubuntu-software-center/ +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/gdebi-handling-dependency.jpg?ssl=1 +[8]: http://xmodulo.com +[9]: https://help.ubuntu.com/lts/serverguide/dpkg.html.en +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/install-deb-file-with-dpkg.png?ssl=1 +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/deb-packages-ubuntu.png?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190211 How does rootless Podman work.md b/sources/tech/20190211 How does rootless Podman work.md new file mode 100644 index 0000000000..a085ae9014 --- /dev/null +++ b/sources/tech/20190211 How does rootless Podman work.md @@ -0,0 +1,107 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How does rootless Podman work?) 
+[#]: via: (https://opensource.com/article/19/2/how-does-rootless-podman-work) +[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan) + +How does rootless Podman work? +====== +Learn how Podman takes advantage of user namespaces to run in rootless mode. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82) + +In my [previous article][1] on user namespace and [Podman][2], I discussed how you can use Podman commands to launch different containers with different user namespaces giving you better separation between containers. Podman also takes advantage of user namespaces to be able to run in rootless mode. Basically, when a non-privileged user runs Podman, the tool sets up and joins a user namespace. After Podman becomes root inside of the user namespace, Podman is allowed to mount certain filesystems and set up the container. Note there is no privilege escalation here other then additional UIDs available to the user, explained below. + +### How does Podman create the user namespace? + +#### shadow-utils + +Most current Linux distributions include a version of shadow-utils that uses the **/etc/subuid** and **/etc/subgid** files to determine what UIDs and GIDs are available for a user in a user namespace. + +``` +$ cat /etc/subuid +dwalsh:100000:65536 +test:165536:65536 +$ cat /etc/subgid +dwalsh:100000:65536 +test:165536:65536 +``` + +The useradd program automatically allocates 65536 UIDs for each user added to the system. If you have existing users on a system, you would need to allocate the UIDs yourself. The format of these files is **username:STARTUID:TOTALUIDS**. Meaning in my case, dwalsh is allocated UIDs 100000 through 165535 along with my default UID, which happens to be 3265 defined in /etc/passwd. You need to be careful when allocating these UID ranges that they don't overlap with any **real** UID on the system. If you had a user listed as UID 100001, now I (dwalsh) would be able to become this UID and potentially read/write/execute files owned by the UID. + +Shadow-utils also adds two setuid programs (or setfilecap). On Fedora I have: + +``` +$ getcap /usr/bin/newuidmap +/usr/bin/newuidmap = cap_setuid+ep +$ getcap /usr/bin/newgidmap +/usr/bin/newgidmap = cap_setgid+ep +``` + +Podman executes these files to set up the user namespace. You can see the mappings by examining /proc/self/uid_map and /proc/self/gid_map from inside of the rootless container. + +``` +$ podman run alpine cat /proc/self/uid_map /proc/self/gid_map +        0       3267            1 +        1       100000          65536 +        0       3267            1 +        1       100000          65536 +``` + +As seen above, Podman defaults to mapping root in the container to your current UID (3267) and then maps ranges of allocated UIDs/GIDs in /etc/subuid and /etc/subgid starting at 1. Meaning in my example, UID=1 in the container is UID 100000, UID=2 is UID 100001, all the way up to 65536, which is 165535. + +Any item from outside of the user namespace that is owned by a UID or GID that is not mapped into the user namespace appears to belong to the user configured in the **kernel.overflowuid** sysctl, which by default is 35534, which my /etc/passwd file says has the name **nobody**. Since your process can't run as an ID that isn't mapped, the owner and group permissions don't apply, so you can only access these files based on their "other" permissions. 
This includes all files owned by **real** root on the system running the container, since root is not mapped into the user namespace. + +The [Buildah][3] command has a cool feature, [**buildah unshare**][4]. This puts you in the same user namespace that Podman runs in, but without entering the container's filesystem, so you can list the contents of your home directory. + +``` +$ ls -ild /home/dwalsh +8193 drwx--x--x. 290 dwalsh dwalsh 20480 Jan 29 07:58 /home/dwalsh +$ buildah unshare ls -ld /home/dwalsh +drwx--x--x. 290 root root 20480 Jan 29 07:58 /home/dwalsh +``` + +Notice that when listing the home dir attributes outside the user namespace, the kernel reports the ownership as dwalsh, while inside the user namespace it reports the directory as owned by root. This is because the home directory is owned by 3267, and inside the user namespace we are treating that UID as root. + +### What happens next in Podman after the user namespace is set up? + +Podman uses [containers/storage][5] to pull the container image, and containers/storage is smart enough to map all files owned by root in the image to the root of the user namespace, and any other files owned by different UIDs to their user namespace UIDs. By default, this content gets written to ~/.local/share/containers/storage. Container storage works in rootless mode with either the vfs mode or with Overlay. Note: Overlay is supported only if the [fuse-overlayfs][6] executable is installed. + +The kernel only allows user namespace root to mount certain types of filesystems; at this time it allows mounting of procfs, sysfs, tmpfs, fusefs, and bind mounts (as long as the source and destination are owned by the user running Podman. OverlayFS is not supported yet, although the kernel teams are working on allowing it). + +Podman then mounts the container's storage if it is using fuse-overlayfs; if the storage driver is using vfs, then no mounting is required. Podman on vfs requires a lot of space though, since each container copies the entire underlying filesystem. + +Podman then mounts /proc and /sys along with a few tmpfs and creates the devices in the container. + +In order to use networking other than the host networking, Podman uses the [slirp4netns][7] program to set up **User mode networking for unprivileged network namespace**. Slirp4netns allows Podman to expose ports within the container to the host. Note that the kernel still will not allow a non-privileged process to bind to ports less than 1024. Podman-1.1 or later is required for binding to ports. + +Rootless Podman can use user namespace for container separation, but you only have access to the UIDs defined in the /etc/subuid file. + +### Conclusion + +The Podman tool is enabling people to build and use containers without sacrificing the security of the system; you can give your developers the access they need without giving them root. + +And when you put your containers into production, you can take advantage of the extra security provided by the user namespace to keep the workloads isolated from each other. 
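+
+If you want to try this yourself with an account that was created before your distribution's useradd started allocating subordinate IDs automatically, the setup is small. The sketch below assumes a hypothetical user alice with UID 1000 and an unused range starting at 200000; pick values that do not overlap any real UID or GID on your system.
+
+```
+# Append a subordinate UID/GID range for the user, then verify the mapping.
+$ echo "alice:200000:65536" | sudo tee -a /etc/subuid /etc/subgid
+alice:200000:65536
+$ podman run alpine cat /proc/self/uid_map
+         0       1000            1
+         1     200000        65536
+```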
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/how-does-rootless-podman-work + +作者:[Daniel J Walsh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rhatdan +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/18/12/podman-and-user-namespaces +[2]: https://podman.io/ +[3]: https://buildah.io/ +[4]: https://github.com/containers/buildah/blob/master/docs/buildah-unshare.md +[5]: https://github.com/containers/storage +[6]: https://github.com/containers/fuse-overlayfs +[7]: https://github.com/rootless-containers/slirp4netns diff --git a/sources/tech/20190211 What-s the right amount of swap space for a modern Linux system.md b/sources/tech/20190211 What-s the right amount of swap space for a modern Linux system.md new file mode 100644 index 0000000000..c04d47e5ca --- /dev/null +++ b/sources/tech/20190211 What-s the right amount of swap space for a modern Linux system.md @@ -0,0 +1,68 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What's the right amount of swap space for a modern Linux system?) +[#]: via: (https://opensource.com/article/19/2/swap-space-poll) +[#]: author: (David Both https://opensource.com/users/dboth) + +What's the right amount of swap space for a modern Linux system? +====== +Complete our survey and voice your opinion on how much swap space to allocate. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/find-file-linux-code_magnifying_glass_zero.png?itok=E2HoPDg0) + +Swap space is one of those things that everyone seems to have an idea about, and I am no exception. All my sysadmin friends have their opinions, and most distributions make recommendations too. + +Many years ago, the rule of thumb for the amount of swap space that should be allocated was 2X the amount of RAM installed in the computer. Of course that was when a typical computer's RAM was measured in KB or MB. So if a computer had 64KB of RAM, a swap partition of 128KB would be an optimum size. + +This took into account the fact that RAM memory sizes were typically quite small, and allocating more than 2X RAM for swap space did not improve performance. With more than twice RAM for swap, most systems spent more time thrashing than performing useful work. + +RAM memory has become quite inexpensive and many computers now have RAM in the tens of gigabytes. Most of my newer computers have at least 4GB or 8GB of RAM, two have 32GB, and my main workstation has 64GB. When dealing with computers with huge amounts of RAM, the limiting performance factor for swap space is far lower than the 2X multiplier. As a consequence, recommended swap space is considered a function of system memory workload, not system memory. + +Table 1 provides the Fedora Project's recommended size for a swap partition, depending on the amount of RAM in your system and whether you want enough memory for your system to hibernate. To allow for hibernation, you need to edit the swap space in the custom partitioning stage. The "recommended" swap partition size is established automatically during a default installation, but I usually find it's either too large or too small for my needs. 
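+
+Before comparing your own machine against any table of recommendations, it helps to know what you currently have and how much of it is actually used. On most modern Linux systems, two commands are enough (a quick sketch; the output format varies slightly between distributions):
+
+```
+# List active swap partitions and swap files with their sizes
+swapon --show
+
+# Show total RAM and swap, along with current usage
+free -h
+```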
+ +The [Fedora 28 Installation Guide][1] defines current thinking about swap space allocation. Note that other versions of Fedora and other Linux distributions may differ slightly, but this is the same table Red Hat Enterprise Linux uses for its recommendations. These recommendations have not changed since Fedora 19. + +| Amount of RAM installed in system | Recommended swap space | Recommended swap space with hibernation | +| --------------------------------- | ---------------------- | --------------------------------------- | +| ≤ 2GB | 2X RAM | 3X RAM | +| 2GB – 8GB | = RAM | 2X RAM | +| 8GB – 64GB | 4G to 0.5X RAM | 1.5X RAM | +| >64GB | Minimum 4GB | Hibernation not recommended | + +Table 1: Recommended system swap space in Fedora 28's documentation. + +Table 2 contains my recommendations based on my experiences in multiple environments over the years. +| Amount of RAM installed in system | Recommended swap space | +| --------------------------------- | ---------------------- | +| ≤ 2GB | 2X RAM | +| 2GB – 8GB | = RAM | +| > 8GB | 8GB | + +Table 2: My recommended system swap space. + +It's possible that neither of these tables will work for your environment, but they will give you a place to start. The main consideration is that as the amount of RAM increases, adding more swap space simply leads to thrashing well before the swap space comes close to being filled. If you have too little virtual memory, you should add more RAM, if possible, rather than more swap space. + +In order to test the Fedora (and RHEL) swap space recommendations, I used its recommendation of **0.5*RAM** on my two largest systems (the ones with 32GB and 64GB of RAM). Even when running four or five VMs, multiple documents in LibreOffice, Thunderbird, the Chrome web browser, several terminal emulator sessions, the Xfe file manager, and a number of other background applications, the only time I see any use of swap is during backups I have scheduled for every morning at about 2am. Even then, swap usage is no more than 16MB—yes megabytes. These results are for my system with my loads and do not necessarily apply to your real-world environment. + +I recently had a conversation about swap space with some of the other Community Moderators here at [Opensource.com][2], and Chris Short, one of my friends in that illustrious and talented group, pointed me to an old [article][3] where he recommended using 1GB for swap space. This article was written in 2003, and he told me later that he now recommends zero swap space. + +So, we wondered, what you think? What do you recommend or use on your systems for swap space? 
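+
+And if your answer involves adding swap to an existing system, remember that you do not need to repartition to do it: a swap file works on most setups. A rough sketch (adjust the size and path to taste, and note that swap files need extra care on Btrfs or if you rely on hibernation):
+
+```
+sudo fallocate -l 2G /swapfile
+sudo chmod 600 /swapfile
+sudo mkswap /swapfile
+sudo swapon /swapfile
+
+# To make it permanent, add a line like this to /etc/fstab:
+# /swapfile none swap defaults 0 0
+```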
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/swap-space-poll + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://docs.fedoraproject.org/en-US/fedora/f28/install-guide/ +[2]: http://Opensource.com +[3]: https://chrisshort.net/moving-to-linux-partitioning/ diff --git a/sources/tech/20190212 Top 10 Best Linux Media Server Software.md b/sources/tech/20190212 Top 10 Best Linux Media Server Software.md new file mode 100644 index 0000000000..8fcea6343a --- /dev/null +++ b/sources/tech/20190212 Top 10 Best Linux Media Server Software.md @@ -0,0 +1,229 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top 10 Best Linux Media Server Software) +[#]: via: (https://itsfoss.com/best-linux-media-server) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Top 10 Best Linux Media Server Software +====== + +Did someone tell you that Linux is just for programmers? That is so wrong! You have got a lot of great tools for [digital artists][1], [writers][2] and musicians. + +We have covered such tools in the past. Today it’s going to be slightly different. Instead of creating new digital content, let’s talk about consuming it. + +You have probably heard of media servers? Basically these software (and sometimes gadgets) allow you to view your local or cloud media (music, videos etc) in an intuitive interface. You can even use it to stream the content to other devices on your network. Sort of your personal Netflix. + +In this article, we will talk about the best media software available for Linux that you can use as a media player or as a media server software – as per your requirements. + +Some of these applications can also be used with Google’s Chromecast and Amazon’s Firestick. + +### Best Media Server Software for Linux + +![Best Media Server Software for Linux][3] + +The mentioned Linux media server software are in no particular order of ranking. + +I have tried to provide installation instructions for Ubuntu and Debian based distributions. It’s not possible to list installation steps for all Linux distributions for all the media servers mentioned here. Please take no offence for that. + +A couple of software in this list are not open source. If that’s the case, I have highlighted it appropriately. + +### 1\. Kodi + +![Kodi Media Server][4] + +Kod is one of the most popular media server software and player. Recently, Kodi 18.0 dropped in with a bunch of improvements that includes the support for Digital Rights Management (DRM) decryption, game emulators, ROMs, voice control, and more. + +It is a completely free and open source software. An active community for discussions and support exists as well. The user interface for Kodi is beautiful. I haven’t had the chance to use it in its early days – but I was amazed to see such a good UI for a Linux application. + +It has got great playback support – so you can add any supported 3rd party media service for the content or manually add the ripped video files to watch. + +#### How to install Kodi + +Type in the following commands in the terminal to install the latest version of Kodi via its [official PPA][5]. 
+ +``` +sudo apt-get install software-properties-common +sudo add-apt-repository ppa:team-xbmc/ppa +sudo apt-get update +sudo apt-get install kodi +``` + +To know more about installing a development build or upgrading Kodi, refer to the [official installation guide][6]. + +### 2\. Plex + +![Plex Media Server][7] + +Plex is yet another impressive media player or could be used as a media server software. It is a great alternative to Kodi for the users who mostly utilize it to create an offline network of their media collection to sync and watch across multiple devices. + +Unlike Kodi, **Plex is not entirely open source**. It does offer a free account in order to use it. In addition, it offers premium pricing plans to unlock more features and have a greater control over your media while also being able to get a detailed insight on who/what/how Plex is being used. + +If you are an audiophile, you would love the integration of Plex with [TIDAL][8] music streaming service. You can also set up Live TV by adding it to your tuner. + +#### How to install Plex + +You can simply download the .deb file available on their official webpage and install it directly (or using [GDebi][9]) + +### 3\. Jellyfin + +![Emby media server][10] + +Yet another open source media server software with a bunch of features. [Jellyfin][11] is actually a fork of Emby media server. It may be one of the best out there available for ‘free’ but the multi-platform support still isn’t there yet. + +You can run it on a browser or utilize Chromecast – however – you will have to wait if you want the Android app or if you want it to support several devices. + +#### How to install Jellyfin + +Jellyfin provides a [detailed documentation][12] on how to install it from the binary packages/image available for Linux, Docker, and more. + +You will also find it easy to install it from the repository via the command line for Debian-based distribution. Check out their [installation guide][13] for more information. + +### 4\. LibreELEC + +![libreELEC][14] + +LibreELEC is an interesting media server software which is based on Kodi v18.0. They have recently released a new version (9.0.0) with a complete overhaul of the core OS support, hardware compatibility and user experience. + +Of course, being based on Kodi, it also has the DRM support. In addition, you can utilize its generic Linux builds or the special ones tailored for Raspberry Pi builds, WeTek devices, and more. + +#### How to install LibreELEC + +You can download the installer from their [official site][15]. For detailed instructions on how to use it, please refer to the [installation guide][16]. + +### 5\. OpenFLIXR Media Server + +![OpenFLIXR Media Server][17] + +Want something similar that compliments Plex media server but also compatible with VirtualBox or VMWare? You got it! + +OpenFLIXR is an automated media server software which integrates with Plex to provide all the features along with the ability to auto download TV shows and movies from Torrents. It even fetches the subtitles automatically giving you a seamless experience when coupled with Plex media software. + +You can also automate your home theater with this installed. In case you do not want to run it on a physical instance, it supports VMware, VirtualBox and Hyper-V as well. The best part is – it is an open source solution and based on Ubuntu Server. + +#### How to install OpenFLIXR + +The best way to do it is by installing VirtualBox – it will be easier. 
After you do that, just download it from the [official website][18] and import it.
+
+### 6\. MediaPortal
+
+![MediaPortal][19]
+
+MediaPortal is just another open source simple media server software with a decent user interface. It all depends on your personal preference – even though I would recommend Kodi over this.
+
+You can play DVDs, stream videos on your local network, and listen to music as well. It does not offer a fancy set of features but the ones you will mostly need.
+
+It gives you the option to choose from two different versions (one that is stable and the second which tries to incorporate new features – could be unstable).
+
+#### How to install MediaPortal
+
+Depending on what you want to set up (a TV server only or a complete server setup), follow the [official setup guide][20] to install it properly.
+
+### 7\. Gerbera
+
+![Gerbera Media Center][21]
+
+A simple implementation for a media server to be able to stream using your local network. It does support transcoding which will convert the media into the format your device supports.
+
+If you have been following media server options for a very long time, then you might identify this as the rebranded (and improved) version of MediaTomb. Even though it is not a popular choice among Linux users – it is still something usable when all else fails or for someone who prefers a straightforward and basic media server.
+
+#### How to install Gerbera
+
+Type in the following commands in the terminal to install it on any Ubuntu-based distro:
+
+```
+sudo apt install gerbera
+```
+
+For other Linux distributions, refer to the [documentation][22].
+
+### 8\. OSMC (Open Source Media Center)
+
+![OSMC Open Source Media Center][23]
+
+It is an elegant-looking media server software originally based on the Kodi media center. I was quite impressed with the user interface. It is simple and robust, being a free and open source solution. In a nutshell, it has all the essential features you would expect in a media server software.
+
+You can also opt in to purchase OSMC's flagship device. It will play just about anything up to 4K standards with HD audio. In addition, it supports Raspberry Pi builds and 1st-gen Apple TV.
+
+#### How to install OSMC
+
+If your device is compatible, you can just select your operating system and download the device installer from the official [download page][24] and create a bootable image to install.
+
+### 9\. Universal Media Server
+
+![][25]
+
+Yet another simple addition to this list. Universal Media Server does not offer any fancy features but just helps you transcode/stream video and audio without needing much configuration.
+
+It supports Xbox 360, PS3, and just about any other [DLNA][26]-capable device.
+
+#### How to install Universal Media Server
+
+You can find all the packages listed on [FossHub][27] but you should follow the [official forum][28] to know more about how to install the package that you downloaded from the website.
+
+### 10\. Red5 Media Server
+
+![Red5 Media Server][29]Image Credit: [Red5 Server][30]
+
+A free and open source media server tailored for enterprise usage. You can use it for live streaming solutions – no matter if it is for entertainment or just video conferencing.
+
+They also offer paid licensing options for mobiles and high scalability.
+
+#### How to install Red5
+
+Even though it is not the quickest installation method, follow the [installation guide on GitHub][31] to get started with the server without needing to tinker around.
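+
+Whichever server you settle on, a quick sanity check right after installation saves a lot of head-scratching later: confirm the service is running and listening before you go hunting for it from your TV or phone. A rough sketch (the service name and port are placeholders; substitute the ones your chosen server uses):
+
+```
+# Is the service running? (replace "jellyfin" with your server's service name)
+systemctl status jellyfin
+
+# Which TCP ports are being listened on?
+ss -tlnp
+
+# Can the web interface be reached locally? (replace 8096 with your server's port)
+curl -I http://localhost:8096
+```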
+ +### Wrapping Up + +Every media server software listed here has its own advantages – you should pick one up and try the one which suits your requirement. + +Did we miss any of your favorite media server software? Let us know about it in the comments below! + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-linux-media-server + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/best-linux-graphic-design-software/ +[2]: https://itsfoss.com/open-source-tools-writers/ +[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/best-media-server-linux.png?resize=800%2C450&ssl=1 +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/kodi-18-media-server.jpg?fit=800%2C450&ssl=1 +[5]: https://itsfoss.com/ppa-guide/ +[6]: https://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/plex.jpg?fit=800%2C368&ssl=1 +[8]: https://tidal.com/ +[9]: https://itsfoss.com/gdebi-default-ubuntu-software-center/ +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/emby-server.jpg?fit=800%2C373&ssl=1 +[11]: https://jellyfin.github.io/ +[12]: https://jellyfin.readthedocs.io/en/latest/ +[13]: https://jellyfin.readthedocs.io/en/latest/administrator-docs/installing/ +[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/libreelec.jpg?resize=800%2C600&ssl=1 +[15]: https://libreelec.tv/downloads_new/ +[16]: https://libreelec.wiki/libreelec_usb-sd_creator +[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/openflixr-media-server.jpg?fit=800%2C449&ssl=1 +[18]: http://www.openflixr.com/#Download +[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/mediaportal.jpg?ssl=1 +[20]: https://www.team-mediaportal.com/wiki/display/MediaPortal1/Quick+Setup +[21]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/gerbera-server-softwarei.jpg?fit=800%2C583&ssl=1 +[22]: http://docs.gerbera.io/en/latest/install.html +[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/osmc-server.jpg?fit=800%2C450&ssl=1 +[24]: https://osmc.tv/download/ +[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/universal-media-server.jpg?ssl=1 +[26]: https://en.wikipedia.org/wiki/Digital_Living_Network_Alliance +[27]: https://www.fosshub.com/Universal-Media-Server.html?dwl=UMS-7.8.0.tgz +[28]: https://www.universalmediaserver.com/forum/viewtopic.php?t=10275 +[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/red5.jpg?resize=800%2C364&ssl=1 +[30]: https://www.red5server.com/ +[31]: https://github.com/Red5/red5-server/wiki/Installation-on-Linux +[32]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/best-media-server-linux.png?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190213 How to build a WiFi picture frame with a Raspberry Pi.md b/sources/tech/20190213 How to build a WiFi picture frame with a Raspberry Pi.md new file mode 100644 index 0000000000..615f7620ed --- /dev/null +++ b/sources/tech/20190213 How to build a WiFi picture frame with a Raspberry Pi.md @@ -0,0 +1,135 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to build a WiFi picture frame with a Raspberry Pi) +[#]: via: 
(https://opensource.com/article/19/2/wifi-picture-frame-raspberry-pi)
+[#]: author: (Manuel Dewald https://opensource.com/users/ntlx)
+
+How to build a WiFi picture frame with a Raspberry Pi
+======
+DIY a digital photo frame that streams photos from the cloud.
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/raspberrypi_board_vector_red.png?itok=yaqYjYqI)
+
+Digital picture frames are really nice because they let you enjoy your photos without having to print them out. Plus, adding and removing digital files is a lot easier than opening a traditional frame and swapping the picture inside when you want to display a new photo. Even so, it's still a bit of overhead to remove your SD card, USB stick, or other storage from a digital picture frame, plug it into your computer, and copy new pictures onto it.
+
+An easier option is a digital picture frame that gets its pictures over WiFi, for example from a cloud service. Here's how to make one.
+
+### Gather your materials
+
+  * Old [TFT][1] LCD screen
+  * HDMI-to-DVI cable (as the TFT screen supports DVI)
+  * Raspberry Pi 3
+  * Micro SD card
+  * Raspberry Pi power supply
+  * Keyboard
+  * Mouse (optional)
+
+
+
+Connect the Raspberry Pi to the display using the cable and attach the power supply.
+
+### Install Raspbian
+
+Download and flash Raspbian to the Micro SD card by following these [directions][2]. Plug the Micro SD card into the Raspberry Pi, boot it up, and configure your WiFi. My first action after a new Raspbian installation is usually running **sudo raspi-config**. There I change the hostname (e.g., to **picframe**) in Network Options and enable SSH to work remotely on the Raspberry Pi in Interfacing Options. Then connect to the Raspberry Pi over SSH using the hostname you just set.
+
+### Build and install the cloud client
+
+I use [Nextcloud][3] to synchronize my pictures, but you could use NFS, [Dropbox][4], or whatever else fits your needs to upload pictures to the frame.
+
+If you use Nextcloud, get a client for Raspbian by following these [instructions][5]. This is handy for placing new pictures on your picture frame and will give you the client application you may be familiar with on a desktop PC. When connecting the client application to your Nextcloud server, make sure to select only the folder where you'll store the images you want to be displayed on the picture frame.
+
+### Set up the slideshow
+
+The easiest way I've found to set up the slideshow is with a [lightweight slideshow project][6] built for exactly this purpose. There are some alternatives, like configuring a screensaver, but this application appears to be the simplest to set up.
+
+On your Raspberry Pi, download the binaries from the latest release, unpack them, and move them to an executable folder:
+
+```
+wget https://github.com/NautiluX/slide/releases/download/v0.9.0/slide_pi_stretch_0.9.0.tar.gz
+tar xf slide_pi_stretch_0.9.0.tar.gz
+mv slide_0.9.0/slide /usr/local/bin/
+```
+
+Install the dependencies:
+
+```
+sudo apt install libexif12 qt5-default
+```
+
+Run the slideshow by executing the command below (don't forget to modify the path to your images). If you access your Raspberry Pi via SSH, set the **DISPLAY** variable to start the slideshow on the display attached to the Raspberry Pi.
+ +``` +DISPLAY=:0.0 slide -p /home/pi/nextcloud/picframe +``` + +### Autostart the slideshow + +To autostart the slideshow on Raspbian Stretch, create the following folder and add an **autostart** file to it: + +``` +mkdir -p /home/pi/.config/lxsession/LXDE/ +vi /home/pi/.config/lxsession/LXDE/autostart +``` + +Insert the following commands to autostart your slideshow. The **slide** command can be adjusted to your needs: + +``` +@xset s noblank +@xset s off +@xset -dpms +@slide -p -t 60 -o 200 -p /home/pi/nextcloud/picframe +``` + +Disable screen blanking, which the Raspberry Pi normally does after 10 minutes, by editing the following file: + +``` +vi /etc/lightdm/lightdm.conf +``` + +and adding these two lines to the end: + +``` +[SeatDefaults] +xserver-command=X -s 0 -dpms +``` + +### Configure a power-on schedule + +You can schedule your picture frame to turn on and off at specific times by using two simple cronjobs. For example, say you want it to turn on automatically at 7 am and turn off at 11 pm. Run **crontab -e** and insert the following two lines. + +``` +0 23 * * * /opt/vc/bin/tvservice -o + +0 7 * * * /opt/vc/bin/tvservice -p && sudo systemctl restart display-manager +``` + +Note that this won't turn the Raspberry Pi power's on and off; it will just turn off HDMI, which will turn the screen off. The first line will power off HDMI at 11 pm. The second line will bring the display back up and restart the display manager at 7 am. + +### Add a final touch + +By following these simple steps, you can create your own WiFi picture frame. If you want to give it a nicer look, build a wooden frame for the display. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/wifi-picture-frame-raspberry-pi + +作者:[Manuel Dewald][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ntlx +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Thin-film-transistor_liquid-crystal_display +[2]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md +[3]: https://nextcloud.com/ +[4]: http://dropbox.com/ +[5]: https://github.com/nextcloud/client_theming#building-on-debian +[6]: https://github.com/NautiluX/slide/releases/tag/v0.9.0 diff --git a/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md b/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md new file mode 100644 index 0000000000..3b9af595d6 --- /dev/null +++ b/sources/tech/20190214 The Earliest Linux Distros- Before Mainstream Distros Became So Popular.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The Earliest Linux Distros: Before Mainstream Distros Became So Popular) +[#]: via: (https://itsfoss.com/earliest-linux-distros/) +[#]: author: (Avimanyu Bandyopadhyay https://itsfoss.com/author/avimanyu/) + +The Earliest Linux Distros: Before Mainstream Distros Became So Popular +====== + +In this throwback history article, we’ve tried to look back into how some of the earliest Linux distributions evolved and came into being as we know them today. 
+ +![][1] + +In here we have tried to explore how the idea of popular distros such as Red Hat, Debian, Slackware, SUSE, Ubuntu and many others came into being after the first Linux kernel became available. + +As Linux was initially released in the form of a kernel in 1991, the distros we know today was made possible with the help of numerous collaborators throughout the world with the creation of shells, libraries, compilers and related packages to make it a complete Operating System. + +### 1\. The first known “distro” by HJ Lu + +The way we know Linux distributions today goes back to 1992, when the first known distro-like tools to get access to Linux were released by HJ Lu. It consisted of two 5.25” floppy diskettes: + +![Linux 0.12 Boot and Root Disks | Photo Credit][2] + + * **LINUX 0.12 BOOT DISK** : The “boot” disk was used to boot the system first. + * **LINUX 0.12 ROOT DISK** : The second “root” disk for getting a command prompt for access to the Linux file system after booting. + + + +To install 0.12 on a hard drive, one had to use a hex editor to edit its master boot record (MBR) and that was quite a complex process, especially during that era. + +Feeling too nostalgic? + +You can [install cool-retro-term application][3] that gives you a Linux terminal in the vintage looks of the 90’s computers. + +### 2\. MCC Interim Linux + +![MCC Linux 0.99.14, 1993 | Image Credit][4] + +Initially released in the same year as “LINUX 0.12” by Owen Le Blanc of Manchester Computing Centre in England, MCC Interim Linux was the first Linux distribution for novice users with a menu driven installer and end user/programming tools. Also in the form of a collection of diskettes, it could be installed on a system to provide a basic text-based environment. + +MCC Interim Linux was much more user-friendly than 0.12 and the installation process on a hard drive was much easier and similar to modern ways. It did not require using a hex editor to edit the MBR. + +Though it was first released in February 1992, it was also available for download through FTP since November that year. + +### 3\. TAMU Linux + +![TAMU Linux | Image Credit][5] + +TAMU Linux was developed by Aggies at Texas A&M with the Texas A&M Unix & Linux Users Group in May 1992 and was called TAMU 1.0A. It was the first Linux distribution to offer the X Window System instead of just a text based operating system. + +### 4\. Softlanding Linux System (SLS) + +![SLS Linux 1.05, 1994 | Image Credit][6] + +“Gentle Touchdowns for DOS Bailouts” was their slogan! SLS was released by Peter McDonald in May 1992. SLS was quite widely used and popular during its time and greatly promoted the idea of Linux. But due to a decision by the developers to change the executable format in the distro, users stopped using it. + +Many of the popular distros the present community is most familiar with, evolved via SLS. Two of them are: + + * **Slackware** : One of the earliest Linux distros, Slackware was created by Patrick Volkerding in 1993. Slackware is based on SLS and was one of the very first Linux distributions. + * **Debian** : An initiative by Ian Murdock, Debian was also released in 1993 after moving on from the SLS model. The very popular Ubuntu distro we know today is based on Debian. + + + +### 5\. Yggdrasil + +![LGX Yggdrasil Fall 1993 | Image Credit][7] + +Released on December 1992, Yggdrasil was the first distro to give birth to the idea of Live Linux CDs. It was developed by Yggdrasil Computing, Inc., founded by Adam J. Richter in Berkeley, California. 
It could automatically configure itself on system hardware as “Plug-and-Play”, which is a very regular and known feature in today’s time. The later versions of Yggdrasil included a hack for running any proprietary MS-DOS CD-ROM driver within Linux. + +![Yggdrasil’s Plug-and-Play Promo | Image Credit][8] + +Their motto was “Free Software For The Rest of Us”. + +In the late 90s, one very popular distro was [Mandriva][9], first released in 1998, by unifying the French _Mandrake Linux_ distribution with the Brazilian _Conectiva Linux_ distribution. It had a release lifetime of 18 months for updates related to Linux and system software and desktop based updates were released every year. It also had server versions with 5 years of support. Now we have [Open Mandriva][10]. + +If you have more nostalgic distros to share from the earliest days of Linux release, please share with us in the comments below. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/earliest-linux-distros/ + +作者:[Avimanyu Bandyopadhyay][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/avimanyu/ +[b]: https://github.com/lujun9972 +[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/earliest-linux-distros.png?resize=800%2C450&ssl=1 +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Linux-0.12-Floppies.jpg?ssl=1 +[3]: https://itsfoss.com/cool-retro-term/ +[4]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/MCC-Interim-Linux-0.99.14-1993.jpg?fit=800%2C600&ssl=1 +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/TAMU-Linux.jpg?ssl=1 +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/SLS-1.05-1994.jpg?ssl=1 +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/LGX_Yggdrasil_CD_Fall_1993.jpg?fit=781%2C800&ssl=1 +[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Yggdrasil-Linux-Summer-1994.jpg?ssl=1 +[9]: https://en.wikipedia.org/wiki/Mandriva_Linux +[10]: https://www.openmandriva.org/ diff --git a/sources/tech/20190215 Make websites more readable with a shell script.md b/sources/tech/20190215 Make websites more readable with a shell script.md new file mode 100644 index 0000000000..06b748cfb5 --- /dev/null +++ b/sources/tech/20190215 Make websites more readable with a shell script.md @@ -0,0 +1,258 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Make websites more readable with a shell script) +[#]: via: (https://opensource.com/article/19/2/make-websites-more-readable-shell-script) +[#]: author: (Jim Hall https://opensource.com/users/jim-hall) + +Make websites more readable with a shell script +====== +Calculate the contrast ratio between your website's text and background to make sure your site is easy to read. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ) + +If you want people to find your website useful, they need to be able to read it. The colors you choose for your text can affect the readability of your site. Unfortunately, a popular trend in web design is to use low-contrast colors when printing text, such as gray text on a white background. Maybe that looks really cool to the web designer, but it is really hard for many of us to read. 
+ +The W3C provides Web Content Accessibility Guidelines, which includes guidance to help web designers pick text and background colors that can be easily distinguished from each other. This is called the "contrast ratio." The W3C definition of the contrast ratio requires several calculations: given two colors, you first compute the relative luminance of each, then calculate the contrast ratio. The ratio will fall in the range 1 to 21 (typically written 1:1 to 21:1). The higher the contrast ratio, the more the text will stand out against the background. For example, black text on a white background is highly visible and has a contrast ratio of 21:1. And white text on a white background is unreadable at a contrast ratio of 1:1. + +The [W3C says body text][1] should have a contrast ratio of at least 4.5:1 with headings at least 3:1. But that seems to be the bare minimum. The W3C also recommends at least 7:1 for body text and at least 4.5:1 for headings. + +Calculating the contrast ratio can be a chore, so it's best to automate it. I've done that with this handy Bash script. In general, the script does these things: + + 1. Gets the text color and background color + 2. Computes the relative luminance of each + 3. Calculates the contrast ratio + + + +### Get the colors + +You may know that every color on your monitor can be represented by red, green, and blue (R, G, and B). To calculate the relative luminance of a color, my script will need to know the red, green, and blue components of the color. Ideally, my script would read this information as separate R, G, and B values. Web designers might know the specific RGB code for their favorite colors, but most humans don't know RGB values for the different colors. Instead, most people reference colors by names like "red" or "gold" or "maroon." + +Fortunately, the GNOME [Zenity][2] tool has a color-picker app that lets you use different methods to select a color, then returns the RGB values in a predictable format of "rgb( **R** , **G** , **B** )". Using Zenity makes it easy to get a color value: + +``` +color=$( zenity --title 'Set text color' --color-selection --color='black' ) +``` + +In case the user (accidentally) clicks the Cancel button, the script assumes a color: + +``` +if [ $? -ne 0 ] ; then +        echo '** color canceled .. assume black' +        color='rgb(0,0,0)' +fi +``` + +My script does something similar to set the background color value as **$background**. + +### Compute the relative luminance + +Once you have the foreground color in **$color** and the background color in **$background** , the next step is to compute the relative luminance for each. On its website, the [W3C provides an algorithm][3] to compute the relative luminance of a color. 
+ +> For the sRGB colorspace, the relative luminance of a color is defined as +> **L = 0.2126 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated R + 0.7152 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated G + 0.0722 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated B** where R, G and B are defined as: +> +> if RsRGB <= 0.03928 then R = RsRGB/12.92 +> else R = ((RsRGB+0.055)/1.055) ^ 2.4 +> +> if GsRGB <= 0.03928 then G = GsRGB/12.92 +> else G = ((GsRGB+0.055)/1.055) ^ 2.4 +> +> if BsRGB <= 0.03928 then B = BsRGB/12.92 +> else B = ((BsRGB+0.055)/1.055) ^ 2.4 +> +> and RsRGB, GsRGB, and BsRGB are defined as: +> +> RsRGB = R8bit/255 +> +> GsRGB = G8bit/255 +> +> BsRGB = B8bit/255 + +Since Zenity returns color values in the format "rgb( **R** , **G** , **B** )," the script can easily pull apart the R, B, and G values to compute the relative luminance. AWK makes this a simple task, using the comma as the field separator ( **-F,** ) and using AWK's **substr()** string function to pick just the text we want from the "rgb( **R** , **G** , **B** )" color value: + +``` +R=$( echo $color | awk -F, '{print substr($1,5)}' ) +G=$( echo $color | awk -F, '{print $2}' ) +B=$( echo $color | awk -F, '{n=length($3); print substr($3,1,n-1)}' ) +``` + +**(For more on extracting and displaying data with AWK,[Get our AWK cheat sheet][4].)** + +Calculating the final relative luminance is best done using the BC calculator. BC supports the simple if-then-else needed in the calculation, which makes this part simple. But since BC cannot directly calculate exponentiation using a non-integer exponent, we need to do some extra math using the natural logarithm instead: + +``` +echo "scale=4 +rsrgb=$R/255 +gsrgb=$G/255 +bsrgb=$B/255 +if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated l((rsrgb+0.055)/1.055) ) +if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated l((gsrgb+0.055)/1.055) ) +if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated l((bsrgb+0.055)/1.055) ) +0.2126 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated r + 0.7152 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated g + 0.0722 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated b" | bc -l +``` + +This passes several instructions to BC, including the if-then-else statements that are part of the relative luminance formula. BC then prints the final value. + +### Calculate the contrast ratio + +With the relative luminance of the text color and the background color, now the script can calculate the contrast ratio. 
The [W3C determines the contrast ratio][5] with this formula: + +> (L1 + 0.05) / (L2 + 0.05), where +> L1 is the relative luminance of the lighter of the colors, and +> L2 is the relative luminance of the darker of the colors + +Given two relative luminance values **$r1** and **$r2** , it's easy to calculate the contrast ratio using the BC calculator: + +``` +echo "scale=2 +if ( $r1 > $r2 ) { l1=$r1; l2=$r2 } else { l1=$r2; l2=$r1 } +(l1 + 0.05) / (l2 + 0.05)" | bc +``` + +This uses an if-then-else statement to determine which value ( **$r1** or **$r2** ) is the lighter or darker color. BC performs the resulting calculation and prints the result, which the script can store in a variable. + +### The final script + +With the above, we can pull everything together into a final script. I use Zenity to display the final result in a text box: + +``` +#!/bin/sh +# script to calculate contrast ratio of colors + +# read color and background color: +# zenity returns values like 'rgb(255,140,0)' and 'rgb(255,255,255)' + +color=$( zenity --title 'Set text color' --color-selection --color='black' ) +if [ $? -ne 0 ] ; then +        echo '** color canceled .. assume black' +        color='rgb(0,0,0)' +fi + +background=$( zenity --title 'Set background color' --color-selection --color='white' ) +if [ $? -ne 0 ] ; then +        echo '** background canceled .. assume white' +        background='rgb(255,255,255)' +fi + +# compute relative luminance: + +function luminance() +{ +        R=$( echo $1 | awk -F, '{print substr($1,5)}' ) +        G=$( echo $1 | awk -F, '{print $2}' ) +        B=$( echo $1 | awk -F, '{n=length($3); print substr($3,1,n-1)}' ) + +        echo "scale=4 +rsrgb=$R/255 +gsrgb=$G/255 +bsrgb=$B/255 +if ( rsrgb <= 0.03928 ) r = rsrgb/12.92 else r = e( 2.4 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated l((rsrgb+0.055)/1.055) ) +if ( gsrgb <= 0.03928 ) g = gsrgb/12.92 else g = e( 2.4 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated l((gsrgb+0.055)/1.055) ) +if ( bsrgb <= 0.03928 ) b = bsrgb/12.92 else b = e( 2.4 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated l((bsrgb+0.055)/1.055) ) +0.2126 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated r + 0.7152 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated g + 0.0722 core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LICENSE published README.md scripts sources translated b" | bc -l +} + +lum1=$( luminance $color ) +lum2=$( luminance $background ) + +# compute contrast + +function contrast() +{ +        echo "scale=2 +if ( $1 > $2 ) { l1=$1; l2=$2 } else { l1=$2; l2=$1 } +(l1 + 0.05) / (l2 + 0.05)" | bc +} + +rel=$( contrast $lum1 $lum2 ) + +# print results + +( cat< +``` + +Alternatively, add the following to the beginning of each of your BATS test scripts: + +``` +#!/usr/bin/env ./test/libs/bats/bin/bats +load 'libs/bats-support/load' +load 'libs/bats-assert/load' +``` + +and **chmod +x **. This will a) make them executable with the BATS installed in **./test/libs/bats** and b) include these helper libraries. BATS test scripts are typically stored in the **test** directory and named for the script being tested, but with the **.bats** extension. For example, a BATS script that tests **bin/build** should be called **test/build.bats**. 
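+
+Putting those conventions together, a minimal first **test/build.bats** might look something like the sketch below; the single assertion is only a placeholder to prove the harness runs before you write real tests:
+
+```
+#!/usr/bin/env ./test/libs/bats/bin/bats
+load 'libs/bats-support/load'
+load 'libs/bats-assert/load'
+
+@test "bin/build exists and is executable" {
+  [ -x "bin/build" ]
+}
+```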
+ +You can also run an entire set of BATS test files by passing a regular expression to BATS, e.g., **./test/lib/bats/bin/bats test/*.bats**. + +### Organizing libraries and scripts for BATS coverage + +Bash scripts and libraries must be organized in a way that efficiently exposes their inner workings to BATS. In general, library functions and shell scripts that run many commands when they are called or executed are not amenable to efficient BATS testing. + +For example, [build.sh][4] is a typical script that many people write. It is essentially a big pile of code. Some might even put this pile of code in a function in a library. But it's impossible to run a big pile of code in a BATS test and cover all possible types of failures it can encounter in separate test cases. The only way to test this pile of code with sufficient coverage is to break it into many small, reusable, and, most importantly, independently testable functions. + +It's straightforward to add more functions to a library. An added benefit is that some of these functions can become surprisingly useful in their own right. Once you have broken your library function into lots of smaller functions, you can **source** the library in your BATS test and run the functions as you would any other command to test them. + +Bash scripts must also be broken down into multiple functions, which the main part of the script should call when the script is executed. In addition, there is a very useful trick to make it much easier to test Bash scripts with BATS: Take all the code that is executed in the main part of the script and move it into a function, called something like **run_main**. Then, add the following to the end of the script: + +``` +if [[ "${BASH_SOURCE[0]}" == "${0}" ]] +then +  run_main +fi +``` + +This bit of extra code does something special. It makes the script behave differently when it is executed as a script than when it is brought into the environment with **source**. This trick enables the script to be tested the same way a library is tested, by sourcing it and testing the individual functions. For example, here is [build.sh refactored for better BATS testability][5]. + +### Writing and running tests + +As mentioned above, BATS is a TAP-compliant testing framework with a syntax and output that will be familiar to those who have used other TAP-compliant testing suites, such as JUnit, RSpec, or Jest. Its tests are organized into individual test scripts. Test scripts are organized into one or more descriptive **@test** blocks that describe the unit of the application being tested. Each **@test** block will run a series of commands that prepares the test environment, runs the command to be tested, and makes assertions about the exit and output of the tested command. Many assertion functions are imported with the **bats** , **bats-assert** , and **bats-support** libraries, which are loaded into the environment at the beginning of the BATS test script. Here is a typical BATS test block: + +``` +@test "requires CI_COMMIT_REF_SLUG environment variable" { +  unset CI_COMMIT_REF_SLUG +  assert_empty "${CI_COMMIT_REF_SLUG}" +  run some_command +  assert_failure +  assert_output --partial "CI_COMMIT_REF_SLUG" +} +``` + +If a BATS script includes **setup** and/or **teardown** functions, they are automatically executed by BATS before and after each test block runs. This makes it possible to create environment variables, test files, and do other things needed by one or all tests, then tear them down after each test runs. 
[**Build.bats**][6] is a full BATS test of our newly formatted **build.sh** script. (The **mock_docker** command in this test will be explained below, in the section on mocking/stubbing.) + +When the test script runs, BATS uses **exec** to run each **@test** block as a separate subprocess. This makes it possible to export environment variables and even functions in one **@test** without affecting other **@test** s or polluting your current shell session. The output of a test run is a standard format that can be understood by humans and parsed or manipulated programmatically by TAP consumers. Here is an example of the output for the **CI_COMMIT_REF_SLUG** test block when it fails: + +``` + ✗ requires CI_COMMIT_REF_SLUG environment variable +   (from function `assert_output' in file test/libs/bats-assert/src/assert.bash, line 231, +    in test file test/ci_deploy.bats, line 26) +     `assert_output --partial "CI_COMMIT_REF_SLUG"' failed + +   -- output does not contain substring -- +   substring (1 lines): +     CI_COMMIT_REF_SLUG +   output (3 lines): +     ./bin/deploy.sh: join_string_by: command not found +     oc error +     Could not login +   -- + +   ** Did not delete , as test failed ** + +1 test, 1 failure +``` + +Here is the output of a successful test: + +``` +✓ requires CI_COMMIT_REF_SLUG environment variable +``` + +### Helpers + +Like any shell script or library, BATS test scripts can include helper libraries to share common code across tests or enhance their capabilities. These helper libraries, such as **bats-assert** and **bats-support** , can even be tested with BATS. + +Libraries can be placed in the same test directory as the BATS scripts or in the **test/libs** directory if the number of files in the test directory gets unwieldy. BATS provides the **load** function that takes a path to a Bash file relative to the script being tested (e.g., **test** , in our case) and sources that file. Files must end with the prefix **.bash** , but the path to the file passed to the **load** function can't include the prefix. **build.bats** loads the **bats-assert** and **bats-support** libraries, a small **[helpers.bash][7]** library, and a **docker_mock.bash** library (described below) with the following code placed at the beginning of the test script below the interpreter magic line: + +``` +load 'libs/bats-support/load' +load 'libs/bats-assert/load' +load 'helpers' +load 'docker_mock' +``` + +### Stubbing test input and mocking external calls + +The majority of Bash scripts and libraries execute functions and/or executables when they run. Often they are programmed to behave in specific ways based on the exit status or output ( **stdout** , **stderr** ) of these functions or executables. To properly test these scripts, it is often necessary to make fake versions of these commands that are designed to behave in a specific way during a specific test, a process called "stubbing." It may also be necessary to spy on the program being tested to ensure it calls a specific command, or it calls a specific command with specific arguments, a process called "mocking." For more on this, check out this great [discussion of mocking and stubbing][8] in Ruby RSpec, which applies to any testing system. + +The Bash shell provides tricks that can be used in your BATS test scripts to do mocking and stubbing. All require the use of the Bash **export** command with the **-f** flag to export a function that overrides the original function or executable. 
This must be done before the tested program is executed. Here is a simple example that overrides the **cat** executable: + +``` +function cat() { echo "THIS WOULD CAT ${*}" } +export -f cat +``` + +This method overrides a function in the same manner. If a test needs to override a function within the script or library being tested, it is important to source the tested script or library before the function is stubbed or mocked. Otherwise, the stub/mock will be replaced with the actual function when the script is sourced. Also, make sure to stub/mock before you run the command you're testing. Here is an example from **build.bats** that mocks the **raise** function described in **build.sh** to ensure a specific error message is raised by the login fuction: + +``` +@test ".login raises on oc error" { +  source ${profile_script} +  function raise() { echo "${1} raised"; } +  export -f raise +  run login +  assert_failure +  assert_output -p "Could not login raised" +} +``` + +Normally, it is not necessary to unset a stub/mock function after the test, since **export** only affects the current subprocess during the **exec** of the current **@test** block. However, it is possible to mock/stub commands (e.g. **cat** , **sed** , etc.) that the BATS **assert** * functions use internally. These mock/stub functions must be **unset** before these assert commands are run, or they will not work properly. Here is an example from **build.bats** that mocks **sed** , runs the **build_deployable** function, and unsets **sed** before running any assertions: + +``` +@test ".build_deployable prints information, runs docker build on a modified Dockerfile.production and publish_image when its not a dry_run" { +  local expected_dockerfile='Dockerfile.production' +  local application='application' +  local environment='environment' +  local expected_original_base_image="${application}" +  local expected_candidate_image="${application}-candidate:${environment}" +  local expected_deployable_image="${application}:${environment}" +  source ${profile_script} +  mock_docker build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t "${expected_deployable_image}" - +  function publish_image() { echo "publish_image ${*}"; } +  export -f publish_image +  function sed() { +    echo "sed ${*}" >&2; +    echo "FROM application-candidate:environment"; +  } +  export -f sed +  run build_deployable "${application}" "${environment}" +  assert_success +  unset sed +  assert_output --regexp "sed.*${expected_dockerfile}" +  assert_output -p "Building ${expected_original_base_image} deployable ${expected_deployable_image} FROM ${expected_candidate_image}" +  assert_output -p "FROM ${expected_candidate_image} piped" +  assert_output -p "build --build-arg OAUTH_CLIENT_ID --build-arg OAUTH_REDIRECT --build-arg DDS_API_BASE_URL -t ${expected_deployable_image} -" +  assert_output -p "publish_image ${expected_deployable_image}" +} +``` + +Sometimes the same command, e.g. foo, will be invoked multiple times, with different arguments, in the same function being tested. These situations require the creation of a set of functions: + + * mock_foo: takes expected arguments as input, and persists these to a TMP file + * foo: the mocked version of the command, which processes each call with the persisted list of expected arguments. This must be exported with export -f. + * cleanup_foo: removes the TMP file, for use in teardown functions. This can test to ensure that a @test block was successful before removing. 
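+
+Here is a rough sketch of what such a trio can look like for a hypothetical command **foo** (file handling and error reporting are simplified; **${BATS_TMPDIR}** is explained below):
+
+```
+# mock_foo records one expected invocation per line in a TMP file
+function mock_foo() {
+  echo "${*}" >> "${BATS_TMPDIR}/mock_foo_calls"
+}
+
+# foo consumes the next expected invocation and compares it to the actual call
+function foo() {
+  local expected
+  expected="$(head -n 1 "${BATS_TMPDIR}/mock_foo_calls")"
+  sed -i '1d' "${BATS_TMPDIR}/mock_foo_calls"
+  if [ "${expected}" != "${*}" ]; then
+    echo "foo expected [${expected}] but was called with [${*}]" >&2
+    return 1
+  fi
+  echo "foo ${*}"
+}
+export -f foo
+
+# cleanup_foo removes the TMP file, typically from a teardown function
+function cleanup_foo() {
+  rm -f "${BATS_TMPDIR}/mock_foo_calls"
+}
+```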
+ + + +Since this functionality is often reused in different tests, it makes sense to create a helper library that can be loaded like other libraries. + +A good example is **[docker_mock.bash][9]**. It is loaded into **build.bats** and used in any test block that tests a function that calls the Docker executable. A typical test block using **docker_mock** looks like: + +``` +@test ".publish_image fails if docker push fails" { +  setup_publish +  local expected_image="image" +  local expected_publishable_image="${CI_REGISTRY_IMAGE}/${expected_image}" +  source ${profile_script} +  mock_docker tag "${expected_image}" "${expected_publishable_image}" +  mock_docker push "${expected_publishable_image}" and_fail +  run publish_image "${expected_image}" +  assert_failure +  assert_output -p "tagging ${expected_image} as ${expected_publishable_image}" +  assert_output -p "tag ${expected_image} ${expected_publishable_image}" +  assert_output -p "pushing image to gitlab registry" +  assert_output -p "push ${expected_publishable_image}" +} +``` + +This test sets up an expectation that Docker will be called twice with different arguments. With the second call to Docker failing, it runs the tested command, then tests the exit status and expected calls to Docker. + +One aspect of BATS introduced by **mock_docker.bash** is the **${BATS_TMPDIR}** environment variable, which BATS sets at the beginning to allow tests and helpers to create and destroy TMP files in a standard location. The **mock_docker.bash** library will not delete its persisted mocks file if a test fails, but it will print where it is located so it can be viewed and deleted. You may need to periodically clean old mock files out of this directory. + +One note of caution regarding mocking/stubbing: The **build.bats** test consciously violates a dictum of testing that states: [Don't mock what you don't own!][10] This dictum demands that calls to commands that the test's developer didn't write, like **docker** , **cat** , **sed** , etc., should be wrapped in their own libraries, which should be mocked in tests of scripts that use them. The wrapper libraries should then be tested without mocking the external commands. + +This is good advice and ignoring it comes with a cost. If the Docker CLI API changes, the test scripts will not detect this change, resulting in a false positive that won't manifest until the tested **build.sh** script runs in a production setting with the new version of Docker. Test developers must decide how stringently they want to adhere to this standard, but they should understand the tradeoffs involved with their decision. + +### Conclusion + +Introducing a testing regime to any software development project creates a tradeoff between a) the increase in time and organization required to develop and maintain code and tests and b) the increased confidence developers have in the integrity of the application over its lifetime. Testing regimes may not be appropriate for all scripts and libraries. 
+ +In general, scripts and libraries that meet one or more of the following should be tested with BATS: + + * They are worthy of being stored in source control + * They are used in critical processes and relied upon to run consistently for a long period of time + * They need to be modified periodically to add/remove/modify their function + * They are used by others + + + +Once the decision is made to apply a testing discipline to one or more Bash scripts or libraries, BATS provides the comprehensive testing features that are available in other software development environments. + +Acknowledgment: I am indebted to [Darrin Mann][11] for introducing me to BATS testing. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/testing-bash-bats + +作者:[Darin London][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dmlond +[b]: https://github.com/lujun9972 +[1]: https://github.com/sstephenson/bats +[2]: http://testanything.org/ +[3]: https://git-scm.com/book/en/v2/Git-Tools-Submodules +[4]: https://github.com/dmlond/how_to_bats/blob/preBats/build.sh +[5]: https://github.com/dmlond/how_to_bats/blob/master/bin/build.sh +[6]: https://github.com/dmlond/how_to_bats/blob/master/test/build.bats +[7]: https://github.com/dmlond/how_to_bats/blob/master/test/helpers.bash +[8]: https://www.codewithjason.com/rspec-mocks-stubs-plain-english/ +[9]: https://github.com/dmlond/how_to_bats/blob/master/test/docker_mock.bash +[10]: https://github.com/testdouble/contributing-tests/wiki/Don't-mock-what-you-don't-own +[11]: https://github.com/dmann diff --git a/sources/tech/20190222 Q4OS Linux Revives Your Old Laptop with Windows- Looks.md b/sources/tech/20190222 Q4OS Linux Revives Your Old Laptop with Windows- Looks.md new file mode 100644 index 0000000000..93549ac45b --- /dev/null +++ b/sources/tech/20190222 Q4OS Linux Revives Your Old Laptop with Windows- Looks.md @@ -0,0 +1,192 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Q4OS Linux Revives Your Old Laptop with Windows’ Looks) +[#]: via: (https://itsfoss.com/q4os-linux-review) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Q4OS Linux Revives Your Old Laptop with Windows’ Looks +====== + +There are quite a few Linux distros available that seek to make new users feel at home by [imitating the look and feel of Windows][1]. Today, we’ll look at a distro that attempts to do this with limited success We’ll be looking at [Q4OS][2]. + +### Q4OS Linux focuses on performance on low hardware + +![Q4OS Linux desktop after first boot][3]Q4OS after first boot + +> Q4OS is a fast and powerful operating system based on the latest technologies while offering highly productive desktop environment. We focus on security, reliability, long-term stability and conservative integration of verified new features. System is distinguished by speed and very low hardware requirements, runs great on brand new machines as well as legacy computers. It is also very applicable for virtualization and cloud computing. +> +> Q4OS Website + +Q4OS currently has two different release branches: 2.# Scorpion and 3.# Centaurus. Scorpion is the Long-Term-Support (LTS) release and will be supported for five years. That support should last until 2022. 
The most recent version of Scorpion is 2.6, which is based on [Debian][4] 9 Stretch. Centaurus is considered the testing branch and is based on Debian Buster. Centaurus will become the LTS when Debian Buster becomes stable. + +Q4OS is one of the few Linux distros that still support both 32-bit and 64-bit. It has also been ported to ARM devices, specifically the Raspberry PI and the PineBook. + +The one major thing that separates Q4OS from the majority of Linux distros is their use of the Trinity Desktop Environment as the default desktop environment. + +#### The not-so-famous Trinity Desktop Environment + +![][5]Trinity Desktop Environment + +I’m sure that most people are unfamiliar with the [Trinity Desktop Environment (TDE)][6]. I didn’t know until I discovered Q4OS a couple of years ago. TDE is a fork of [KDE][7], specifically KDE 3.5. TDE was created by Timothy Pearson and the first release took place in April 2010. + +From what I read, it sounds like TDE was created for the same reason as [MATE][8]). Early versions of KDE 4 were prone to crash and users were unhappy with the direction the new release was taking, it was decided to fork the previous release. That is where the similarities end. MATE has taken on a life of its own and grew to become an equal among desktop environments. Development of TDE seems to have slowed. There were two years between the last two point releases. + +Quick side note: TDE uses its own fork of Qt 3, named TQt. + +#### System Requirements + +According to the [Q4OS download page][9], the system requirements differ based on the desktop environment you install. + +**TDE Version** + + * At least 300MHz CPU + * 128 MB of RAM + * 3 GB Storage + + + +**KDE Version** + + * At least 1GHz CPU + * 1 GB of RAM + * 5 GB Storage + + + +You can see from the system requirements that Q4OS is a [lightweight Linux distribution suitable for older computers][10]. + +#### Included apps by default + +The following applications are included in the full install of Q4OS: + + * Google Chrome + * Thunderbird + * LibreOffice + * VLC player + * Konqueror browser + * Dolphin file manager + * AisleRiot Solitaire + * Konsole + * Software Center + + + * KMines + * Ockular + * KBounce + * DigiKam + * Kooka + * KolourPaint + * KSnapshot + * Gwenview + * Ark + + + * KMail + * SMPlayer + * KRec + * Brasero + * Amarok player + * qpdfview + * KOrganizer + * KMag + * KNotes + + + +Of course, you can install additional applications through the software center. Since Q4OS is based on Debian, you can also [install applications from deb packages][11]. + +#### Q4OS can be installed from within Windows + +I was able to successfully install TrueOS on my Dell Latitude D630 without any issues. This laptop has an Intel Centrino Duo Core processor running at 2.00 GHz, NVIDIA Quadro NVS 135M graphics chip, and 4 GB of RAM. + +You have a couple of options to choose from when installing Q4OS. You can either install Q4OS with a CD (Live or install) or you can install it from inside Window. The Windows installer asks for the drive location you want to install to, how much space you want Q4OS to take up and what login information do you want to use. + +![][12]Q4OS Windows installer + +Compared to most distros, the Live ISOs are small. The KDE version weighs less than 1GB and the TDE version is just a little north of 500 MB. + +### Experiencing Q4OS: Feels like older Windows versions + +Please note that while there is a KDE installation ISO, I used the TDE installation ISO. 
The KDE Live CD is a recent addition, so TDE is more in line with the project’s long term goals. + +When you boot into Q4OS for the first time, it feels like you jumped through a time portal and are staring at Windows 2000. The initial app offerings are very slim, you have access to a file manager, a web browser and not much else. There isn’t even a screenshot tool installed. + +![][13]Konqueror film manager + +When you try to use the TDE browser (Konqueror), a dialog box pops up recommending using the Desktop Profiler to [install Google Chrome][14] or some other recent web browser. + +The Desktop Profiler allows you to choose between a bare-bones, basic or full desktop and which desktop environment you wish to use as default. You can also use the Desktop Profiler to install other desktop environments, such as MATE, Xfce, LXQT, LXDE, Cinnamon and GNOME. + +![Q4OS Welcome Screen][15]![Q4OS Welcome Screen][15]Q4OS Welcome Screen + +Q4OS comes with its own application center. However, the offerings are limited to less than 20 options, including Synaptic, Google Chrome, Chromium, Firefox, LibreOffice, Update Manager, VLC, Multimedia codecs, Thunderbird, LookSwitcher, NVIDIA drivers, Network Manager, Skype, GParted, Wine, Blueman, X2Go server, X2Go Client, and Virtualbox additions. + +![][16]Q4OS Software Centre + +If you want to install anything else, you need to either use the command line or the [synaptic package manager][17]. Synaptic is a very good package manager and has been very serviceable for many years, but it isn’t quite newbie friendly. + +If you install an application from the Software Centre, you are treated to an installer that looks a lot like a Windows installer. I can only imagine that this is for people converting to Linux from Windows. + +![][18]Firefox installer + +As I mentioned earlier, when you boot into Q4OS’ desktop for the first time it looks like something out of the 1990s. Thankfully, you can install a utility named LookSwitcher to install a different theme. Initially, you are only shown half a dozen themes. There are other themes that are considered works-in-progress. You can also enhance the default theme by picking a more vibrant background and making the bottom panel transparent. + +![][19]Q4OS using the Debonair theme + +### Final Thoughts on Q4OS + +I may have mentioned a few times in this review that Q4OS looks like a dated version of Windows. It is obviously a very conscious decision because great care was taken to make even the control panel and file manager look Windows-eque. The problem is that it reminds me more of [ReactOS][20] than something modern. The Q4OS website says that it is made using the latest technology. The look of the system disagrees and will probably put some new users off. + +The fact that the install ISOs are smaller than most means that they are very quick to download. Unfortunately, it also means that if you want to be productive, you’ll have to spend quite a bit of time downloading software, either manually or automatically. You’ll also need an active internet connection. There is a reason why most ISOs are several gigabytes. + +I made sure to test the Windows installer. I installed a test copy of Windows 10 and ran the Q4OS installer. The process took a few minutes because the installer, which is less than 10 MB had to download an ISO. When the process was done, I rebooted. I selected Q4OS from the menu, but it looked like I was booting into Windows 10 (got the big blue circle). 
I thought that the install failed, but I eventually got to Q4OS. + +One of the few things that I liked about Q4OS was how easy it was to install the NVIDIA drivers. After I logged in for the first time, a little pop-up told me that there were NVIDIA drivers available and asked me if I wanted to install them. + +Using Q4OS was definitely an interesting experience, especially using TDE for the first time and the Windows look and feel. However, the lack of apps in the Software Centre and some of the design choices stop me from recommending this distro. + +**Do you like Q4OS?** + +Have you ever used Q4OS? What is your favorite Debian-based distro? Please let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][21]. + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/q4os-linux-review + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/windows-like-linux-distributions/ +[2]: https://q4os.org/ +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os1.jpg?resize=800%2C500&ssl=1 +[4]: https://www.debian.org/ +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os4.jpg?resize=800%2C412&ssl=1 +[6]: https://www.trinitydesktop.org/ +[7]: https://en.wikipedia.org/wiki/KDE +[8]: https://en.wikipedia.org/wiki/MATE_(software +[9]: https://q4os.org/downloads1.html +[10]: https://itsfoss.com/lightweight-linux-beginners/ +[11]: https://itsfoss.com/list-installed-packages-ubuntu/ +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os-windows-installer.jpg?resize=800%2C610&ssl=1 +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os2.jpg?resize=800%2C606&ssl=1 +[14]: https://itsfoss.com/install-chrome-ubuntu/ +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os10.png?ssl=1 +[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os3.jpg?resize=800%2C507&ssl=1 +[17]: https://www.nongnu.org/synaptic/ +[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os5.jpg?resize=800%2C616&ssl=1 +[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os8Debonaire.jpg?resize=800%2C500&ssl=1 +[20]: https://www.reactos.org/ +[21]: http://reddit.com/r/linuxusersgroup +[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/q4os1.jpg?fit=800%2C500&ssl=1 diff --git a/sources/tech/20190225 How To Identify That The Linux Server Is Integrated With Active Directory (AD).md b/sources/tech/20190225 How To Identify That The Linux Server Is Integrated With Active Directory (AD).md new file mode 100644 index 0000000000..55d30a7910 --- /dev/null +++ b/sources/tech/20190225 How To Identify That The Linux Server Is Integrated With Active Directory (AD).md @@ -0,0 +1,177 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Identify That The Linux Server Is Integrated With Active Directory (AD)?) 
[#]: via: (https://www.2daygeek.com/how-to-identify-that-the-linux-server-is-integrated-with-active-directory-ad/)
[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)

How To Identify That The Linux Server Is Integrated With Active Directory (AD)?
======

Single Sign-On (SSO) authentication is implemented in most organizations because users need access to multiple applications.

It allows a user to log in with a single ID and password to all of the applications available in the organization.

It uses a centralized authentication system for all the applications.

A while ago we wrote an article about **[how to integrate a Linux system with AD][1]**.

Today we are going to show you how to check whether a Linux system is integrated with AD, using several methods.

It can be done in four ways and we will explain them one by one.

 * **`ps Command:`** It reports a snapshot of the current processes.
 * **`id Command:`** It prints user identity.
 * **`/etc/nsswitch.conf file:`** It is the Name Service Switch configuration file.
 * **`/etc/pam.d/system-auth file:`** It is the common configuration file for PAMified services.



### How To Identify That The Linux Server Is Integrated With AD Using PS Command?

The ps command displays information about a selection of the active processes.

To integrate a Linux server with AD, you need to use either the `winbind`, `sssd`, or `ldap` service.

So, use the ps command to look for these services.

If you find any of these services running on the system, you can conclude that the system is currently integrated with AD through the “winbind”, “sssd”, or “ldap” service.

You might get output similar to the following if the system is integrated with AD using the `SSSD` service.

```
# ps -ef | grep -i "winbind\|sssd"

root 29912 1 0 2017 ? 00:19:09 /usr/sbin/sssd -f -D
root 29913 29912 0 2017 ? 04:36:59 /usr/libexec/sssd/sssd_be --domain 2daygeek.com --uid 0 --gid 0 --debug-to-files
root 29914 29912 0 2017 ? 00:29:28 /usr/libexec/sssd/sssd_nss --uid 0 --gid 0 --debug-to-files
root 29915 29912 0 2017 ? 00:09:19 /usr/libexec/sssd/sssd_pam --uid 0 --gid 0 --debug-to-files
root 31584 26666 0 13:41 pts/3 00:00:00 grep sssd
```

You might get output similar to the following if the system is integrated with AD using the `winbind` service.

```
# ps -ef | grep -i "winbind\|sssd"

root 676 21055 0 2017 ? 00:00:22 winbindd
root 958 21055 0 2017 ? 00:00:35 winbindd
root 21055 1 0 2017 ? 00:59:07 winbindd
root 21061 21055 0 2017 ? 11:48:49 winbindd
root 21062 21055 0 2017 ? 00:01:28 winbindd
root 21959 4570 0 13:50 pts/2 00:00:00 grep -i winbind\|sssd
root 27780 21055 0 2017 ? 00:00:21 winbindd
```

### How To Identify That The Linux Server Is Integrated With AD Using id Command?

It prints identity information for a given user name, or for the current user: the UID, GID, user name, primary group name, secondary group names, etc.

If the Linux system is integrated with AD, you might get output like the following. The GID clearly shows that the user is coming from the AD “domain users” group.

```
# id daygeek

uid=1918901106(daygeek) gid=1918900513(domain users) groups=1918900513(domain users)
```

### How To Identify That The Linux Server Is Integrated With AD Using nsswitch.conf file?

The Name Service Switch (NSS) configuration file, `/etc/nsswitch.conf`, is used by the GNU C Library and certain other applications to determine the sources from which to obtain name-service information in a range of categories, and in what order. 
Each category of information is identified by a database name. + +You might get the output similar to below if the system is integrated with AD using `SSSD` service. + +``` +# cat /etc/nsswitch.conf | grep -i "sss\|winbind\|ldap" + +passwd: files sss +shadow: files sss +group: files sss +services: files sss +netgroup: files sss +automount: files sss +``` + +You might get the output similar to below if the system is integrated with AD using `winbind` service. + +``` +# cat /etc/nsswitch.conf | grep -i "sss\|winbind\|ldap" + +passwd: files [SUCCESS=return] winbind +shadow: files [SUCCESS=return] winbind +group: files [SUCCESS=return] winbind +``` + +You might get the output similer to below if the system is integrated with AD using `ldap` service. + +``` +# cat /etc/nsswitch.conf | grep -i "sss\|winbind\|ldap" + +passwd: files ldap +shadow: files ldap +group: files ldap +``` + +### How To Identify That The Linux Server Is Integrated With AD Using system-auth file? + +It is Common configuration file for PAMified services. + +PAM stands for Pluggable Authentication Module that provides dynamic authentication support for applications and services in Linux. + +system-auth configuration file is provide a common interface for all applications and service daemons calling into the PAM library. + +The system-auth configuration file is included from nearly all individual service configuration files with the help of the include directive. + +You might get the output similar to below if the system is integrated with AD using `SSSD` service. + +``` +# cat /etc/pam.d/system-auth | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so" +or +# cat /etc/pam.d/system-auth-ac | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so" + +auth sufficient pam_sss.so use_first_pass +account [default=bad success=ok user_unknown=ignore] pam_sss.so +password sufficient pam_sss.so use_authtok +session optional pam_sss.so +``` + +You might get the output similar to below if the system is integrated with AD using `winbind` service. + +``` +# cat /etc/pam.d/system-auth | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so" +or +# cat /etc/pam.d/system-auth-ac | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so" + +auth sufficient pam_winbind.so cached_login use_first_pass +account [default=bad success=ok user_unknown=ignore] pam_winbind.so cached_login +password sufficient pam_winbind.so cached_login use_authtok +``` + +You might get the output similar to below if the system is integrated with AD using `ldap` service. 
+ +``` +# cat /etc/pam.d/system-auth | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so" +or +# cat /etc/pam.d/system-auth-ac | grep -i "pam_sss.so\|pam_winbind.so\|pam_ldap.so" + +auth sufficient pam_ldap.so cached_login use_first_pass +account [default=bad success=ok user_unknown=ignore] pam_ldap.so cached_login +password sufficient pam_ldap.so cached_login use_authtok +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-identify-that-the-linux-server-is-integrated-with-active-directory-ad/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/join-integrate-rhel-centos-linux-system-to-windows-active-directory-ad-domain/ diff --git a/sources/tech/20190225 How to Install VirtualBox on Ubuntu -Beginner-s Tutorial.md b/sources/tech/20190225 How to Install VirtualBox on Ubuntu -Beginner-s Tutorial.md new file mode 100644 index 0000000000..4ba0580ece --- /dev/null +++ b/sources/tech/20190225 How to Install VirtualBox on Ubuntu -Beginner-s Tutorial.md @@ -0,0 +1,156 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Install VirtualBox on Ubuntu [Beginner’s Tutorial]) +[#]: via: (https://itsfoss.com/install-virtualbox-ubuntu) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +How to Install VirtualBox on Ubuntu [Beginner’s Tutorial] +====== + +**This beginner’s tutorial explains various ways to install VirtualBox on Ubuntu and other Debian-based Linux distributions.** + +Oracle’s free and open source offering [VirtualBox][1] is an excellent virtualization tool, specially for desktop operating systems. I prefer using it over [VMWare Workstation in Linux][2], another virtualization tool. + +You can use virtualization software like VirtualBox for installing and using another operating system within a virtual machine. + +For example, you can [install Linux on VirtualBox inside Windows][3]. Similarly, you can also [install Windows inside Linux using VirtualBox][4]. + +You can also use VirtualBox for installing another Linux distribution in your current Linux system. Actually, this is what I use it for. If I hear about a nice Linux distribution, instead of installing it on a real system, I test it on a virtual machine. It’s more convenient when you just want to try out a distribution before making a decision about installing it on your actual machine. + +![Linux installed inside Linux using VirtualBox][5]Ubuntu 18.10 installed inside Ubuntu 18.04 + +In this beginner’s tutorial, I’ll show you various ways of installing Oracle VirtualBox on Ubuntu and other Debian-based distributions. + +### Installing VirtualBox on Ubuntu and Debian based Linux distributions + +The installation methods mentioned here should also work for other Debian and Ubuntu-based Linux distributions such as Linux Mint, elementary OS etc. + +#### Method 1: Install VirtualBox from Ubuntu Repository + +**Pros** : Easy installation + +**Cons** : Installs older version + +The easiest way to install VirtualBox on Ubuntu would be to search for it in the Software Center and install it from there. 
+ +![VirtualBox in Ubuntu Software Center][6]VirtualBox is available in Ubuntu Software Center + +You can also install it from the command line using the command: + +``` +sudo apt install virtualbox +``` + +However, if you [check the package version before installing it][7], you’ll see that the VirtualBox provided by Ubuntu’s repository is quite old. + +For example, the current VirtualBox version at the time of writing this tutorial is 6.0 but the one in Software Center is 5.2. This means you won’t get the newer features introduced in the [latest version of VirtualBox][8]. + +#### Method 2: Install VirtualBox using Deb file from Oracle’s website + +**Pros** : Easily install the latest version + +**Cons** : Can’t upgrade to newer version + +If you want to use the latest version of VirtualBox on Ubuntu, the easiest way would be to [use the deb file][9]. + +Oracle provides read to use binary files for VirtualBox releases. If you look at its download page, you’ll see the option to download the deb installer files for Ubuntu and other distributions. + +![VirtualBox Linux Download][10] + +You just have to download this deb file and double click on it to install it. It’s as simple as that. + +However, the problem with this method is that you won’t get automatically updated to the newer VirtualBox releases. The only way is to remove the existing version, download the newer version and install it again. That’s not very convenient, is it? + +#### Method 3: Install VirualBox using Oracle’s repository + +**Pros** : Automatically updates with system updates + +**Cons** : Slightly complicated installation + +Now this is the command line method and it may seem complicated to you but it has advantages over the previous two methods. You’ll get the latest version of VirtualBox and it will be automatically updated to the future releases. That’s what you would want, I presume. + +To install VirtualBox using command line, you add the Oracle VirtualBox’s repository in your list of repositories. You add its GPG key so that your system trusts this repository. Now when you install VirtualBox, it will be installed from Oracle’s repository instead of Ubuntu’s repository. If there is a new version released, VirtualBox install will be updated along with the system updates. Let’s see how to do that. + +First, add the key for the repository. You can download and add the key using this single command. + +``` +wget -q https://www.virtualbox.org/download/oracle_vbox_2016.asc -O- | sudo apt-key add - +``` + +``` +Important for Mint users + +The next step will work for Ubuntu only. If you are using Linux Mint or some other distribution based on Ubuntu, replace $(lsb_release -cs) in the command with the Ubuntu version your current version is based on. For example, Linux Mint 19 series users should use bionic and Mint 18 series users should use xenial. Something like this + +sudo add-apt-repository “deb [arch=amd64] **bionic** contrib“ +``` + +Now add the Oracle VirtualBox repository in the list of repositories using this command: + +``` +sudo add-apt-repository "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian $(lsb_release -cs) contrib" +``` + +If you have read my article on [checking Ubuntu version][11], you probably know that ‘lsb_release -cs’ will print the codename of your Ubuntu system. + +**Note** : If you see [add-apt-repository command not found][12] error, you’ll have to install software-properties-common package. 
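
If you do hit that error, the missing package can be installed first with the standard apt commands on Ubuntu and other Debian-based systems:

```
sudo apt update
sudo apt install software-properties-common
```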
+ +Now that you have the correct repository added, refresh the list of available packages through these repositories and install VirtualBox. + +``` +sudo apt update && sudo apt install virtualbox-6.0 +``` + +**Tip** : A good idea would be to type sudo apt install **virtualbox–** and hit tab to see the various VirtualBox versions available for installation and then select one of them by typing it completely. + +![Install VirtualBox via terminal][13] + +### How to remove VirtualBox from Ubuntu + +Now that you have learned to install VirtualBox, I would also mention the steps to remove it. + +If you installed it from the Software Center, the easiest way to remove the application is from the Software Center itself. You just have to find it in the [list of installed applications][14] and click the Remove button. + +Another ways is to use the command line. + +``` +sudo apt remove virtualbox virtualbox-* +``` + +Note that this will not remove the virtual machines and the files associated with the operating systems you installed using VirtualBox. That’s not entirely a bad thing because you may want to keep them safe to use it later or in some other system. + +**In the end…** + +I hope you were able to pick one of the methods to install VirtualBox. I’ll also write about using it effectively in another article. For the moment, if you have and tips or suggestions or any questions, feel free to leave a comment below. + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-virtualbox-ubuntu + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://www.virtualbox.org +[2]: https://itsfoss.com/install-vmware-player-ubuntu-1310/ +[3]: https://itsfoss.com/install-linux-in-virtualbox/ +[4]: https://itsfoss.com/install-windows-10-virtualbox-linux/ +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/linux-inside-linux-virtualbox.png?resize=800%2C450&ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/virtualbox-ubuntu-software-center.jpg?ssl=1 +[7]: https://itsfoss.com/know-program-version-before-install-ubuntu/ +[8]: https://itsfoss.com/oracle-virtualbox-release/ +[9]: https://itsfoss.com/install-deb-files-ubuntu/ +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/virtualbox-download.jpg?resize=800%2C433&ssl=1 +[11]: https://itsfoss.com/how-to-know-ubuntu-unity-version/ +[12]: https://itsfoss.com/add-apt-repository-command-not-found/ +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/install-virtualbox-ubuntu-terminal.png?resize=800%2C165&ssl=1 +[14]: https://itsfoss.com/list-installed-packages-ubuntu/ diff --git a/sources/tech/20190225 Netboot a Fedora Live CD.md b/sources/tech/20190225 Netboot a Fedora Live CD.md new file mode 100644 index 0000000000..2767719b8c --- /dev/null +++ b/sources/tech/20190225 Netboot a Fedora Live CD.md @@ -0,0 +1,187 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Netboot a Fedora Live CD) +[#]: via: (https://fedoramagazine.org/netboot-a-fedora-live-cd/) +[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/) + +Netboot a Fedora Live CD +====== + 
+![](https://fedoramagazine.org/wp-content/uploads/2019/02/netboot-livecd-816x345.jpg) + +[Live CDs][1] are useful for many tasks such as: + + * installing the operating system to a hard drive + * repairing a boot loader or performing other rescue-mode operations + * providing a consistent and minimal environment for web browsing + * …and [much more][2]. + + + +As an alternative to using DVDs and USB drives to store your Live CD images, you can upload them to an [iSCSI][3] server where they will be less likely to get lost or damaged. This guide shows you how to load your Live CD images onto an iSCSI server and access them with the [iPXE][4] boot loader. + +### Download a Live CD Image + +``` +$ MY_RLSE=27 +$ MY_LIVE=$(wget -q -O - https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/$MY_RLSE/Workstation/x86_64/iso | perl -ne '/(Fedora[^ ]*?-Live-[^ ]*?\.iso)(?{print $^N})/;') +$ MY_NAME=fc$MY_RLSE +$ wget -O $MY_NAME.iso https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/$MY_RLSE/Workstation/x86_64/iso/$MY_LIVE +``` + +The above commands download the Fedora-Workstation-Live-x86_64-27-1.6.iso Fedora Live image and save it as fc27.iso. Change the value of MY_RLSE to download other archived versions. Or you can browse to to download the latest Fedora live image. Versions prior to 21 used different naming conventions, and must be [downloaded manually here][5]. If you download a Live CD image manually, set the MY_NAME variable to the basename of the file without the extension. That way the commands in the following sections will reference the correct file. + +### Convert the Live CD Image + +Use the livecd-iso-to-disk tool to convert the ISO file to a disk image and add the netroot parameter to the embedded kernel command line: + +``` +$ sudo dnf install -y livecd-tools +$ MY_SIZE=$(du -ms $MY_NAME.iso | cut -f 1) +$ dd if=/dev/zero of=$MY_NAME.img bs=1MiB count=0 seek=$(($MY_SIZE+512)) +$ MY_SRVR=server-01.example.edu +$ MY_RVRS=$(echo $MY_SRVR | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_SRVR}) +$ MY_LOOP=$(sudo losetup --show --nooverlap --find $MY_NAME.img) +$ sudo livecd-iso-to-disk --format --extra-kernel-args netroot=iscsi:$MY_SRVR:::1:iqn.$MY_RVRS:$MY_NAME $MY_NAME.iso $MY_LOOP +$ sudo losetup -d $MY_LOOP +``` + +### Upload the Live Image to your Server + +Create a directory on your iSCSI server to store your live images and then upload your modified image to it. + +**For releases 21 and greater:** + +``` +$ MY_FLDR=/images +$ scp $MY_NAME.img $MY_SRVR:$MY_FLDR/ +``` + +**For releases prior to 21:** + +``` +$ MY_FLDR=/images +$ MY_LOOP=$(sudo losetup --show --nooverlap --find --partscan $MY_NAME.img) +$ sudo tune2fs -O ^has_journal ${MY_LOOP}p1 +$ sudo e2fsck ${MY_LOOP}p1 +$ sudo dd status=none if=${MY_LOOP}p1 | ssh $MY_SRVR "dd of=$MY_FLDR/$MY_NAME.img" +$ sudo losetup -d $MY_LOOP +``` + +### Define the iSCSI Target + +Run the following commands on your iSCSI server: + +``` +$ sudo -i +# MY_NAME=fc27 +# MY_FLDR=/images +# MY_SRVR=`hostname` +# MY_RVRS=$(echo $MY_SRVR | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_SRVR}) +# cat << END > /etc/tgt/conf.d/$MY_NAME.conf + + backing-store $MY_FLDR/$MY_NAME.img + readonly 1 + allow-in-use yes + +END +# tgt-admin --update ALL +``` + +### Create a Bootable USB Drive + +The [iPXE][4] boot loader has a [sanboot][6] command you can use to connect to and start the live images hosted on your iSCSI server. It can be compiled in many different [formats][7]. 
The format that works best depends on the type of hardware you’re running. As an example, the following instructions show how to [chain load][8] iPXE from [syslinux][9] on a USB drive. + +First, download iPXE and build it in its lkrn format. This should be done as a normal user on a workstation: + +``` +$ sudo dnf install -y git +$ git clone http://git.ipxe.org/ipxe.git $HOME/ipxe +$ sudo dnf groupinstall -y "C Development Tools and Libraries" +$ cd $HOME/ipxe/src +$ make clean +$ make bin/ipxe.lkrn +$ cp bin/ipxe.lkrn /tmp +``` + +Next, prepare a USB drive with a MSDOS partition table and a FAT32 file system. The below commands assume that you have already connected the USB drive to be formatted. **Be careful that you do not format the wrong drive!** + +``` +$ sudo -i +# dnf install -y parted util-linux dosfstools +# echo; find /dev/disk/by-id ! -regex '.*-part.*' -name 'usb-*' -exec readlink -f {} \; | xargs -i bash -c "parted -s {} unit MiB print | perl -0 -ne '/^Model: ([^(]*).*\n.*?([0-9]*MiB)/i && print \"Found: {} = \$2 \$1\n\"'"; echo; read -e -i "$(find /dev/disk/by-id ! -regex '.*-part.*' -name 'usb-*' -exec readlink -f {} \; -quit)" -p "Drive to format: " MY_USB +# umount $MY_USB? +# wipefs -a $MY_USB +# parted -s $MY_USB mklabel msdos mkpart primary fat32 1MiB 100% set 1 boot on +# mkfs -t vfat -F 32 ${MY_USB}1 +``` + +Finally, install syslinux on the USB drive and configure it to chain load iPXE: + +``` +# dnf install -y syslinux-nonlinux +# syslinux -i ${MY_USB}1 +# dd if=/usr/share/syslinux/mbr.bin of=${MY_USB} +# MY_MNT=$(mktemp -d) +# mount ${MY_USB}1 $MY_MNT +# MY_NAME=fc27 +# MY_SRVR=server-01.example.edu +# MY_RVRS=$(echo $MY_SRVR | tr '.' "\n" | tac | tr "\n" '.' | cut -b -${#MY_SRVR}) +# cat << END > $MY_MNT/syslinux.cfg +ui menu.c32 +default $MY_NAME +timeout 100 +menu title SYSLINUX +label $MY_NAME + menu label ${MY_NAME^^} + kernel ipxe.lkrn + append dhcp && sanboot iscsi:$MY_SRVR:::1:iqn.$MY_RVRS:$MY_NAME +END +# cp /usr/share/syslinux/menu.c32 $MY_MNT +# cp /usr/share/syslinux/libutil.c32 $MY_MNT +# cp /tmp/ipxe.lkrn $MY_MNT +# umount ${MY_USB}1 +``` + +You should be able to use this same USB drive to netboot additional iSCSI targets simply by editing the syslinux.cfg file and adding additional menu entries. + +This is just one method of loading iPXE. You could install syslinux directly on your workstation. Another option is to compile iPXE as an EFI executable and place it directly in your [ESP][10]. Yet another is to compile iPXE as a PXE loader and place it on your TFTP server to be referenced by DHCP. The best option depends on your environment. + +### Final Notes + + * You may want to add the –filename \EFI\BOOT\grubx64.efi parameter to the sanboot command if you compile iPXE in its EFI format. + * It is possible to create custom live images. Refer to [Creating and using live CD][11] for more information. + * It is possible to add the –overlay-size-mb and –home-size-mb parameters to the livecd-iso-to-disk command to create live images with persistent storage. However, if you have multiple concurrent users, you’ll need to set up your iSCSI server to manage separate per-user writeable overlays. This is similar to what was shown in the “[How to Build a Netboot Server, Part 4][12]” article. + * The live images support a persistenthome option on their kernel command line (e.g. persistenthome=LABEL=HOME). Used together with CHAP-authenticated iSCSI targets, the persistenthome option provides an interesting alternative to NFS for centralized home directories. 
+ + + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/netboot-a-fedora-live-cd/ + +作者:[Gregory Bartholomew][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/glb/ +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Live_CD +[2]: https://en.wikipedia.org/wiki/Live_CD#Uses +[3]: https://en.wikipedia.org/wiki/ISCSI +[4]: https://ipxe.org/ +[5]: https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/https://dl.fedoraproject.org/pub/archive/fedora/linux/releases/ +[6]: http://ipxe.org/cmd/sanboot/ +[7]: https://ipxe.org/appnote/buildtargets#boot_type +[8]: https://en.wikipedia.org/wiki/Chain_loading +[9]: https://www.syslinux.org/wiki/index.php?title=SYSLINUX +[10]: https://en.wikipedia.org/wiki/EFI_system_partition +[11]: https://docs.fedoraproject.org/en-US/quick-docs/creating-and-using-a-live-installation-image/#proc_creating-and-using-live-cd +[12]: https://fedoramagazine.org/how-to-build-a-netboot-server-part-4/ diff --git a/sources/tech/20190227 How to Display Weather Information in Ubuntu 18.04.md b/sources/tech/20190227 How to Display Weather Information in Ubuntu 18.04.md new file mode 100644 index 0000000000..da0c0df203 --- /dev/null +++ b/sources/tech/20190227 How to Display Weather Information in Ubuntu 18.04.md @@ -0,0 +1,290 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Display Weather Information in Ubuntu 18.04) +[#]: via: (https://itsfoss.com/display-weather-ubuntu) +[#]: author: (Sergiu https://itsfoss.com/author/sergiu/) + +How to Display Weather Information in Ubuntu 18.04 +====== + +You’ve got a fresh Ubuntu install and you’re [customizing Ubuntu][1] to your liking. You want the best experience and the best apps for your needs. + +The only thing missing is a weather app. Luckily for you, we got you covered. Just make sure you have the Universe repository enabled. + +![Tools to Display Weather Information in Ubuntu Linux][2] + +### 8 Ways to Display Weather Information in Ubuntu 18.04 + +Back in the Unity days, there were a few popular options like My Weather Indicator to display weather on your system. Those options are either discontinued or not available in Ubuntu 18.04 and higher versions anymore. + +Fortunately, there are many other options to choose from. Some are minimalist and plain simple to use, some offer detailed information (or even present you with news headlines) and some are made for terminal gurus. Whatever your needs may be, the right app is waiting for you. + +**Note:** The presented apps are in no particular order of ranking. + +**Top Panel Apps** + +These applications usually sit on the top panel of your screen. Good for quick look at the temperature. + +#### 1\. OpenWeather Shell Extension + +![Open Weather Gnome Shell Extesnsion][3] + +**Key features:** + + * Simple to install and customize + * Uses OpenWeatherMap (by default) + * Many Units and Layout options + * Can save multiple locations (that can easily be changed) + + + +This is a great extension presenting you information in a simple manner. There are multiple ways to install this. It is the weather app that I find myself using the most, because it’s just a simple, no-hassle integrated weather display for the top panel. 
+ +**How to Install:** + +I recommend reading this [detailed tutorial about using GNOME extensions][4]. The easiest way to install this extension is to open up a terminal and run: + +``` +sudo apt install gnome-shell-extension-weather +``` + +Then all you have to restart the gnome shell by executing: + +``` +Alt+F2 +``` + +Enter **r** and press **Enter**. + +Now open up **Tweaks** (gnome tweak tool) and enable **Openweather** in the **Extensions** tab. + +#### 2\. gnome-weather + +![Gnome Weather App UI][5] +![Gnome Weather App Top Panel][6] + +**Key features:** + + * Pleasant Design + * Integrated into Calendar (Top Panel) + * Simple Install + * Flatpak install available + + + +This app is great for new users. The installation is only one command and the app is easy to use. Although it doesn’t have as many features as other apps, it is still great if you don’t want to bother with multiple settings and a complex install procedure. + +**How to Install:** + +All you have to do is run: + +``` +sudo apt install gnome-weather +``` + +Now search for **Weather** and the app should pop up. After logging out (and logging back in), the Calendar extension will be displayed. + +If you prefer, you can get a [flatpak][7] version. + +#### 3\. Meteo + +![Meteo Weather App UI][8] +![Meteo Weather System Tray][9] + +**Key features:** + + * Great UI + * Integrated into System Tray (Top Panel) + * Simple Install + * Great features (Maps) + + + +Meteo is a snap app on the heavier side. Most of that weight comes from the great Maps features, with maps presenting temperatures, clouds, precipitations, pressure and wind speed. It’s a distinct feature that I haven’t encountered in any other weather app. + +**Note** : After changing location, you might have to quit and restart the app for the changes to be applied in the system tray. + +**How to Install:** + +Open up the **Ubuntu Software Center** and search for **Meteo**. Install and launch. + +**Desktop Apps** + +These are basically desktop widgets. They look good and provide more information at a glance. + +#### 4\. Temps + +![Temps Weather App UI][10] + +**Key features:** + + * Beautiful Design + * Useful Hotkeys + * Hourly Temperature Graph + + + +Temps is an electron app with a beautiful UI (though not exactly “light”). The most unique features are the temperature graphs. The hotkeys might feel unintuitive at first, but they prove to be useful in the long run. The app will minimize when you click somewhere else. Just press Ctrl+Shift+W to bring it back. + +This app is **Open-Source** , and the developer can’t afford the cost of a faster API key, so you might want to create your own API at [OpenWeatherMap][11]. + +**How to Install:** + +Go to the website and download the version you need (probably 64-bit). Extract the archive. Open the extracted directory and double-click on **Temps**. Press Ctrl+Shift+W if the window minimizes. + +#### 5\. Cumulus + +![Cumulus Weather App UI][12] + +**Key features:** + + * Color Selector for background and text + + * Re-sizable window + + * Tray Icon (temperature only) + + * Allows multiple instances with different locations etc. + + + + +Cumulus is a greatly customizable weather app, with a backend supporting Yahoo! Weather and OpenWeatherMap. The UI is great and the installer is simple to use. This app has amazing features. It’s one of the few weather apps that allow for multiple instances. You should definitely try it you are looking for an experience tailored to your preferences. 
+ +**How to Install:** + +Go to the website and download the (online) installer. Open up a terminal and **cd** (change directory) to the directory where you downloaded the file. + +Then run + +``` +chmod +x Cumulus-online-installer-x64 +./Cumulus-online-installer-x64 +``` + +Search for **Cumulus** and enjoy the app! + +**Terminal Apps** + +You are a terminal dweller? You can check the weather right in your terminal. + +#### 7\. WeGo + +![WeGo Weather App Terminal][13] + +**Key features:** + + * Supports different APIs + * Pretty detailed + * Customizable config + * Multi-language support + * 1 to 7 day forecast + + + +WeGo is a Go app for displaying weather info in the terminal. It’s install can be a little tricky, but it’s easy to set up. You’ll need to register an API Key [here][14] (if using **forecast.io** , which is default). Once you set it up, it’s fairly practical for someone who mostly works in the terminal. + +**How to Install:** + +I recommend you to check out the GitHub page for complete information on installation, setup and features. + +#### 8\. Wttr.in + +![Wttr.in Weather App Terminal][15] + +**Key features:** + + * Simple install + * Easy to use + * Lightweight + * 3 day forecast + * Moon phase + + + +If you really live in the terminal, this is the weather app for you. This is as lightweight as it gets. You can specify location (by default the app tries to detect your current location) and a few other parameters (eg. units). + +**How to Install:** + +Open up a terminal and install Curl: + +``` +sudo apt install curl +``` + +Then: + +``` +curl wttr.in +``` + +That’s it. You can specify location and parameters like so: + +``` +curl wttr.in/london?m +``` + +To check out other options type: + +``` +curl wttr.in/:help +``` + +If you found some settings you enjoy and you find yourself using them frequently, you might want to add an **alias**. To do so, open **~/.bashrc** with your favorite editor (that’s **vim** , terminal wizard). Go to the end and paste in + +``` +alias wttr='curl wttr.in/CITY_NAME?YOUR_PARAMS' +``` + +For example: + +``` +alias wttr='curl wttr.in/london?m' +``` + +Save and close **~/.bashrc** and run the command below to source the new file. + +``` +source ~/.bashrc +``` + +Now, typing **wttr** in the terminal and pressing Enter should execute your custom command. + +**Wrapping Up** + +These are a handful of the weather apps available for Ubuntu. We hope our list helped you discover an app fitting your needs, be that something with pleasant aesthetics or just a quick tool. + +What is your favorite weather app? Tell us about what you enjoy and why in the comments section. 
+ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/display-weather-ubuntu + +作者:[Sergiu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/sergiu/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/gnome-tricks-ubuntu/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/display-weather-ubuntu.png?resize=800%2C450&ssl=1 +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/open_weather_gnome_shell-1-1.jpg?fit=800%2C383&ssl=1 +[4]: https://itsfoss.com/gnome-shell-extensions/ +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/gnome_weather_ui.jpg?fit=800%2C599&ssl=1 +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/gnome_weather_top_panel.png?fit=800%2C587&ssl=1 +[7]: https://flatpak.org/ +[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/meteo_ui.jpg?fit=800%2C547&ssl=1 +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/02/meteo_system_tray.png?fit=800%2C653&ssl=1 +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/temps_ui.png?fit=800%2C623&ssl=1 +[11]: https://openweathermap.org/ +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/cumulus_ui.png?fit=800%2C651&ssl=1 +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/wego_terminal.jpg?fit=800%2C531&ssl=1 +[14]: https://developer.forecast.io/register +[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/wttr_in_terminal.jpg?fit=800%2C526&ssl=1 diff --git a/sources/tech/20190228 3 open source behavior-driven development tools.md b/sources/tech/20190228 3 open source behavior-driven development tools.md new file mode 100644 index 0000000000..9c004a14c2 --- /dev/null +++ b/sources/tech/20190228 3 open source behavior-driven development tools.md @@ -0,0 +1,83 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (3 open source behavior-driven development tools) +[#]: via: (https://opensource.com/article/19/2/behavior-driven-development-tools) +[#]: author: (Christine Ketterlin Fisher https://opensource.com/users/cketterlin) + +3 open source behavior-driven development tools +====== +Having the right motivation is as important as choosing the right tool when implementing BDD. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_code_keyboard_orange_hands.png?itok=G6tJ_64Y) + +[Behavior-driven development][1] (BDD) seems very easy. Tests are written in an easily readable format that allows for feedback from product owners, business sponsors, and developers. Those tests are living documentation for your team, so you don't need requirements. The tools are easy to use and allow you to automate your test suite. Reports are generated with each test run to document every step and show you where tests are failing. + +Quick recap: Easily readable! Living documentation! Automation! Reports! What could go wrong, and why isn't everybody doing this? + +### Getting started with BDD + +So, you're ready to jump in and can't wait to pick the right open source tool for your team. You want it to be easy to use, automate all your tests, and provide easily understandable reports for each test run. Great, let's get started! 
+ +Except, not so fast … First, what is your motivation for trying to implement BDD on your team? If the answer is simply to automate tests, go ahead and choose any of the tools listed below because chances are you're going to see minimal success in the long run. + +### My first effort + +I manage a team of business analysts (BA) and quality assurance (QA) engineers, but my background is on the business analysis side. About a year ago, I attended a talk where a developer talked about the benefits of BDD. He said that he and his team had given it a try during their last project. That should have been the first red flag, but I didn't realize it at the time. You cannot simply choose to "give BDD a try." It takes planning, preparation, and forethought into what you want your team to accomplish. + +However, you can try various parts of BDD without a large investment, and I eventually realized he and his team had written feature files and automated those tests using Cucumber. I also learned it was an experiment done solely by the team's developers, not the BA or QA staff, which defeats the purpose of understanding the end user's behavior. + +During the talk we were encouraged to try BDD, so my test analyst and I went to our boss and said we were willing to give it a shot. And then, we didn't know what to do. We had no guidance, no plan in place, and a leadership team who just wanted to automate testing. I don't think I need to tell you how this story ended. Actually, there wasn't even an end, just a slow fizzle after a few initial attempts at writing behavioral scenarios. + +### A fresh start + +Fast-forward a year, and I'm at a different company with a team of my own and BDD on the brain. I knew there was value there, but I also knew it went deeper than what I had initially been sold. I spent a lot of time thinking about how BDD could make a positive impact, not only on my team, but on our entire development team. Then I read [Discovery: Explore Behaviour Using Examples][2] by Gaspar Nagy and Seb Rose, and one of the first things I learned was that automation of tests is a benefit of BDD, but it should not be the main goal. No wonder we failed! + +This book changed how I viewed BDD and helped me start to fill in the pieces I had been missing. We are now on the (hopefully correct!) path to implementing BDD on our team. It involves active involvement from our product owners, business analysts, and manual and automated testers and buy-in and support from our executive leadership. We have a plan in place for our approach and our measures of success. + +We are still writing requirements (don't ever let anyone tell you that these scenarios can completely replace requirements!), but we are doing so with a more critical eye and evaluating where requirements and test scenarios overlap and how we can streamline the two. + +I have told the team we cannot even try to automate these tests for at least two quarters, at which point we'll evaluate and determine whether we're ready to move forward or not. Our current priorities are defining our team's standard language, practicing writing given/when/then scenarios, learning the Gherkin syntax, determining where to store these tests, and investigating how to integrate these tests into our pipeline. + +### 3 BDD tools to choose + +At its core, BDD is a way to help the entire team understand the end user's actions and behaviors, which will lead to more clear requirements, tests, and ultimately higher-quality applications. 
Before you pick your tool, do your pre-work. Think about your motivation, and understand that while the different parts and pieces of BDD are fairly simple, integrating them into your team is more challenging and needs careful thought and planning. Also, think about where your people fit in. + +Every organization has different roles, and BDD should not belong solely to developers nor test automation engineers. If you don't involve the business side, you're never going to gain the full benefit of this methodology. Once you have a strategy defined and are ready to move forward with automating your BDD scenarios, there are several open source tools for you to choose from. + +#### Cucumber + +[Cucumber][3] is probably the most recognized tool available that supports BDD. It is widely seen as a straightforward tool to learn and is easy to get started with. Cucumber relies on test scenarios that are written in plain text and follow the given/when/then format. Each scenario is an individual test. Scenarios are grouped into features, which is comparable to a test suite. Scenarios must be written in the Gherkin syntax for Cucumber to understand and execute the scenario's steps. The human-readable steps in the scenarios are tied to the step definitions in your code through the Cucumber framework. To successfully write and automate the scenarios, you need the right mix of business knowledge and technical ability. Identify the skill sets on your team to determine who will write and maintain the scenarios and who will automate them; most likely these should be managed by different roles. Because these tests are executed from the step definitions, reporting is very robust and can show you at which exact step your test failed. Cucumber works well with a variety of browser and API automation tools. + +#### JBehave + +[JBehave][4] is very similar to Cucumber. Scenarios are still written in the given/when/then format and are easily understandable by the entire team. JBehave supports Gherkin but also has its own JBehave syntax that can be used. Gherkin is more universal, but either option will work as long as you are consistent in your choice. JBehave has more configuration options than Cucumber, and its reports, although very detailed, need more configuration to get feedback from each step. JBehave is a powerful tool, but because it can be more customized, it is not quite as easy to get started with. Teams need to ask themselves exactly what features they need and whether or not learning the tool's various configurations is worth the time investment. + +#### Gauge + +Where Cucumber and JBehave are specifically designed to work with BDD, [Gauge][5] is not. If automation is your main goal (and not the entire BDD process), it is worth a look. Gauge tests are written in Markdown, which makes them easily readable. However, without a more standard format, such as the given/when/then BDD scenarios, tests can vary widely and, depending on the author, some tests will be much more digestible for business owners than others. Gauge works with multiple languages, so the automation team can leverage what they already use. Gauge also offers reporting with screenshots to show where the tests failed. + +### What are your needs? + +Implementing BDD allows the team to test the users' behaviors. This can be done without automating any tests at all, but when done correctly, can result in a powerful, reusable test suite. 
As a team, you will need to identify exactly what your automation needs are and whether or not you are truly going to use BDD or if you would rather focus on automating tests that are written in plain text. Either way, open source tools are available for you to use and to help support your testing evolution. + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/2/behavior-driven-development-tools + +作者:[Christine Ketterlin Fisher][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/cketterlin +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Behavior-driven_development +[2]: https://www.amazon.com/gp/product/1983591254/ref=dbs_a_def_rwt_bibl_vppi_i0 +[3]: https://cucumber.io/ +[4]: https://jbehave.org/ +[5]: https://www.gauge.org/ diff --git a/sources/tech/20190228 MiyoLinux- A Lightweight Distro with an Old-School Approach.md b/sources/tech/20190228 MiyoLinux- A Lightweight Distro with an Old-School Approach.md new file mode 100644 index 0000000000..3217e304cd --- /dev/null +++ b/sources/tech/20190228 MiyoLinux- A Lightweight Distro with an Old-School Approach.md @@ -0,0 +1,161 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (MiyoLinux: A Lightweight Distro with an Old-School Approach) +[#]: via: (https://www.linux.com/blog/learn/2019/2/miyolinux-lightweight-distro-old-school-approach) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +MiyoLinux: A Lightweight Distro with an Old-School Approach +====== +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_main.jpg?itok=ErLiqGwp) + +I must confess, although I often wax poetic about the old ways of the Linux desktop, I much prefer my distributions to help make my daily workflow as efficient as possible. Because of that, my taste in Linux desktop distributions veers very far toward the modern side of things. I want a distribution that integrates apps seamlessly, gives me notifications, looks great, and makes it easy to work with certain services that I use. + +However, every so often it’s nice to dip my toes back into those old-school waters and remind myself why I fell in love with Linux in the first place. That’s precisely what [MiyoLinux][1] did for me recently. This lightweight distribution is based on [Devuan][2] and makes use of the [i3 Tiling Window Manager][3]. + +Why is it important that MiyoLinux is based on Devuan? Because that means it doesn’t use systemd. There are many within the Linux community who’d be happy to make the switch to an old-school Linux distribution that opts out of systemd. If that’s you, MiyoLinux might just charm you into submission. + +But don’t think MiyoLinux is going to be as easy to get up and running as, say, Ubuntu Linux, Elementary OS, or Linux Mint. Although it’s not nearly as challenging as Arch or Gentoo, MiyoLinux does approach installation and basic usage a bit differently. Let’s take a look at how this particular distro handles things. + +### Installation + +The installation GUI of MiyoLinux is pretty basic. The first thing you’ll notice is that you are presented with a good amount of notes, regarding the usage of the MiyoLinux desktop. 
If you happen to be testing MiyoLinux via VirtualBox, you’ll wind up having to deal with the frustration of not being able to resize the window (Figure 1), as the Guest Additions cannot be installed. This also means mouse integration cannot be enabled during the installation, so you’ll have to tab through the windows and use your keyboard cursor keys and Enter key to make selections. + +![MiyoLinux][5] + +Figure 1: The first step in the MiyoLinux installation. + +[Used with permission][6] + +Once you click the Install MiyoLinux button, you’ll be prompted to continue using either ‘su” or sudo. Click the use sudo button to continue with the installation. + +The next screen of importance is the Installation Options window (Figure 2), where you can select various options for MiyoLinux (such as encryption, file system labels, disable automatic login, etc.). + +![Configuration][8] + +Figure 2: Configuration Installation options for MiyoLinux. + +[Used with permission][6] + +The MiyoLinux installation does not include an automatic partition tool. Instead, you’ll be prompted to run either cfdisk or GParted (Figure 3). If you don’t know your way around cfdisk, select GParted and make use of the GUI tool. + +![partitioning ][10] + +Figure 3: Select your partitioning tool for MiyoLinux. + +[Used with permission][6] + +With your disk partitioned (Figure 4), you’ll be required to take care of the following steps: + + * Configure the GRUB bootloader. + + * Select the filesystem for the bootloader. + + * Configure time zone and locales. + + * Configure keyboard, keyboard language, and keyboard layout. + + * Okay the installation. + + + + +Once, you’ve okay’d the installation, all packages will be installed and you will then be prompted to install the bootloader. Following that, you’ll be prompted to configure the following: + + * Hostname. + + * User (Figure 5). + + * Root password. + + + + +With the above completed, reboot and log into your new MiyoLinux installation. + +![hostname][12] + +Figure 5: Configuring hostname and username. + +[Creative Commons Zero][13] + +### Usage + +Once you’ve logged into the MiyoLinux desktop, you’ll find things get a bit less-than-user-friendly. This is by design. You won’t find any sort of mouse menu available anywhere on the desktop. Instead you use keyboard shortcuts to open the different types of menus. The Alt+m key combination will open the PMenu, which is what one would consider a fairly standard desktop mouse menu (Figure 6). + +The Alt+d key combination will open the dmenu, a search tool at the top of the desktop, where you can scroll through (using the cursor keys) or search for an app you want to launch (Figure 7). + +![dmenu][15] + +Figure 7: The dmenu in action. + +[Used with permission][6] + +### Installing Apps + +If you open the PMenu, click System > Synaptic Package Manager. From within that tool you can search for any app you want to install. However, if you find Synaptic doesn’t want to start from the PMenu, open the dmenu, search for terminal, and (once the terminal opens), issue the command sudo synaptic. That will get the package manager open, where you can start installing any applications you want (Figure 8). + +![Synaptic][17] + +Figure 8: The Synaptic Package Manager on MiyoLinux. + +[Used with permission][6] + +Of course, you can always install applications from the command line. 
MiyoLinux depends upon the Apt package manager, so installing applications is as easy as: + +``` +sudo apt-get install libreoffice -y +``` + +Once installed, you can start the new package from either the PMenu or dmenu tools. + +### MiyoLinux Accessories + +If you find you need a bit more from the MiyoLinux desktop, type the keyboard combination Alt+Ctrl+a to open the MiyoLinux Accessories tool (Figure 9). From this tool you can configure a number of options for the desktop. + +![Accessories][19] + +Figure 9: Configure i3, Conky, Compton, your touchpad, and more with the Accessories tool. + +[Used with permission][6] + +All other necessary keyboard shortcuts are listed on the default desktop wallpaper. Make sure to put those shortcuts to memory, as you won’t get very far in the i3 desktop without them. + +### A Nice Nod to Old-School Linux + +If you’re itching to throw it back to a time when Linux offered you a bit of challenge to your daily grind, MiyoLinux might be just the operating system for you. It’s a lightweight operating system that makes good use of a minimal set of tools. Anyone who likes their distributions to be less modern and more streamlined will love this take on the Linux desktop. However, if you prefer your desktop with the standard bells and whistles, found on modern distributions, you’ll probably find MiyoLinux nothing more than a fun distraction from the standard fare. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2019/2/miyolinux-lightweight-distro-old-school-approach + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: https://sourceforge.net/p/miyolinux/wiki/Home/ +[2]: https://devuan.org/ +[3]: https://i3wm.org/ +[4]: /files/images/miyo1jpg +[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_1.jpg?itok=5PxRDYRE (MiyoLinux) +[6]: /licenses/category/used-permission +[7]: /files/images/miyo2jpg +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_2.jpg?itok=svlVr7VI (Configuration) +[9]: /files/images/miyo3jpg +[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_3.jpg?itok=lpNzZBPz (partitioning) +[11]: /files/images/miyo5jpg +[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_5.jpg?itok=lijIsgZ2 (hostname) +[13]: /licenses/category/creative-commons-zero +[14]: /files/images/miyo7jpg +[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_7.jpg?itok=I8Ow3PX6 (dmenu) +[16]: /files/images/miyo8jpg +[17]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_8.jpg?itok=oa502KfM (Synaptic) +[18]: /files/images/miyo9jpg +[19]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/miyo_9.jpg?itok=gUM4mxEv (Accessories) diff --git a/sources/tech/20190301 Guide to Install VMware Tools on Linux.md b/sources/tech/20190301 Guide to Install VMware Tools on Linux.md new file mode 100644 index 0000000000..e6a43bcde1 --- /dev/null +++ b/sources/tech/20190301 Guide to Install VMware Tools on Linux.md @@ -0,0 +1,143 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Guide to Install VMware Tools on Linux) +[#]: via: 
(https://itsfoss.com/install-vmware-tools-linux) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Guide to Install VMware Tools on Linux +====== + +**VMware Tools enhances your VM experience by allowing you to share clipboard and folder among other things. Learn how to install VMware tools on Ubuntu and other Linux distributions.** + +In an earlier tutorial, you learned to [install VMware Workstation on Ubuntu][1]. You can further enhance the functionality of your virtual machines by installing VMware Tools. + +If you have already installed a guest OS on VMware, you must have noticed the requirement for [VMware tools][2] – even though not completely aware of what it is needed for. + +In this article, we will highlight the importance of VMware tools, the features it offers, and the method to install VMware tools on Ubuntu or any other Linux distribution. + +### VMware Tools: Overview & Features + +![Installing VMware Tools on Ubuntu][3]Installing VMware Tools on Ubuntu + +For obvious reasons, the virtual machine (your Guest OS) will not behave exactly like the host. There will be certain limitations in terms of its performance and operationg. And, that is why a set of utilities (VMware Tools) was introduced. + +VMware tools help in managing the guest OS in an efficient manner while also improving its performance. + +#### What exactly is VMware tool responsible for? + +![How to Install VMware tools on Linux][4] + +You have got a vague idea of what it does – but let us talk about the details: + + * Synchronize the time between the guest OS and the host to make things easier. + * Unlocks the ability to pass messages from host OS to guest OS. For example, you copy a text on the host to your clipboard and you can easily paste it to your guest OS. + * Enables sound in guest OS. + * Improves video resolution. + * Improves the cursor movement. + * Fixes incorrect network speed data. + * Eliminates inadequate color depth. + + + +These are the major changes that happen when you install VMware tools on Guest OS. But, what exactly does it contain / feature in order to unlock/enhance these functionalities? Let’s see.. + +#### VMware tools: Core Feature Details + +![Sharing clipboard between guest and host OS with VMware Tools][5]Sharing clipboard between guest and host OS with VMware Tools + +If you do not want to know what it includes to enable the functionalities, you can skip this part. But, for the curious readers, let us briefly discuss about it: + +**VMware device drivers:** It really depends on the OS. Most of the major operating systems do include device drivers by default. So, you do not have to install it separately. This generally involves – memory control driver, mouse driver, audio driver, NIC driver, VGA driver and so on. + +**VMware user process:** This is where things get really interesting. With this, you get the ability to copy-paste and drag-drop between the host and the guest OS. You can basically copy and paste the text from the host to the virtual machine or vice versa. + +You get to drag and drop files as well. In addition, it enables the pointer release/lock when you do not have an SVGA driver installed. + +**VMware tools lifecycle management** : Well, we will take a look at how to install VMware tools below – but this feature helps you easily install/upgrade VMware tools in the virtual machine. + +**Shared Folders** : In addition to these, VMware tools also allow you to have shared folders between the guest OS and the host. 
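To give a feel for how this looks from inside a Linux guest, here is a rough sketch. It assumes VMware Tools or open-vm-tools is already installed and that a shared folder has been defined in the VM settings; the share name `shared` and the mount point are placeholders:

```
# Mount the VM's shared folders inside the guest (hypothetical share name)
sudo mkdir -p /mnt/hgfs
sudo vmhgfs-fuse .host:/shared /mnt/hgfs -o allow_other
ls /mnt/hgfs
```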
+ +![Sharing folder between guest and host OS using VMware Tools in Linux][6]Sharing folder between guest and host OS using VMware Tools in Linux + +Of course, what it does and facilitates also depends on the host OS. For example, on Windows, you get a Unity mode on VMware to run programs on virtual machine and operate it from the host OS. + +### How to install VMware Tools on Ubuntu & other Linux distributions + +**Note:** For Linux guest operating systems, you should already have “Open VM Tools” suite installed, eliminating the need of installing VMware tools separately, most of the time. + +Most of the time, when you install a guest OS, you will get a prompt as a software update or a popup telling you to install VMware tools if the operating system supports [Easy Install][7]. + +Windows and Ubuntu does support Easy Install. So, even if you are using Windows as your host OS or trying to install VMware tools on Ubuntu, you should first get an option to install the VMware tools easily as popup message. Here’s how it should look like: + +![Pop-up to install VMware Tools][8]Pop-up to install VMware Tools + +This is the easiest way to get it done. So, make sure you have an active network connection when you setup the virtual machine. + +If you do not get any of these pop ups – or options to easily install VMware tools. You have to manually install it. Here’s how to do that: + +1\. Launch VMware Workstation Player. + +2\. From the menu, navigate through **Virtual Machine - > Install VMware tools**. If you already have it installed, and want to repair the installation, you will observe the same option to appear as “ **Re-install VMware tools** “. + +3\. Once you click on that, you will observe a virtual CD/DVD mounted in the guest OS. + +4\. Open that and copy/paste the **tar.gz** file to any location of your choice and extract it, here we choose the **Desktop**. + +![][9] + +5\. After extraction, launch the terminal and navigate to the folder inside by typing in the following command: + +``` +cd Desktop/VMwareTools-10.3.2-9925305/vmware-tools-distrib +``` + +You need to check the name of the folder and path in your case – depending on the version and where you extracted – it might vary. + +![][10] + +Replace **Desktop** with your storage location (such as cd Downloads) and the rest should remain the same if you are installing **10.3.2 version**. + +6\. Now, simply type in the following command to start the installation: + +``` +sudo ./vmware-install.pl -d +``` + +![][11] + +You will be asked the password for permission to install, type it in and you should be good to go. + +That’s it. You are done. These set of steps should be applicable to almost any Ubuntu-based guest operating system. If you want to install VMware tools on Ubuntu Server, or any other OS. + +**Wrapping Up** + +Installing VMware tools on Ubuntu Linux is pretty easy. In addition to the easy method, we have also explained the manual method to do it. If you still need help, or have a suggestion regarding the installation, let us know in the comments down below. 
+ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-vmware-tools-linux + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/install-vmware-player-ubuntu-1310/ +[2]: https://kb.vmware.com/s/article/340 +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-downloading.jpg?fit=800%2C531&ssl=1 +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/install-vmware-tools-linux.png?resize=800%2C450&ssl=1 +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-features.gif?resize=800%2C500&ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-shared-folder.jpg?fit=800%2C660&ssl=1 +[7]: https://docs.vmware.com/en/VMware-Workstation-Player-for-Linux/15.0/com.vmware.player.linux.using.doc/GUID-3F6B9D0E-6CFC-4627-B80B-9A68A5960F60.html +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools.jpg?fit=800%2C481&ssl=1 +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-extraction.jpg?fit=800%2C564&ssl=1 +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-folder.jpg?fit=800%2C487&ssl=1 +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/vmware-tools-installation-ubuntu.jpg?fit=800%2C492&ssl=1 diff --git a/sources/tech/20190304 How to Install MongoDB on Ubuntu.md b/sources/tech/20190304 How to Install MongoDB on Ubuntu.md new file mode 100644 index 0000000000..30d588ddba --- /dev/null +++ b/sources/tech/20190304 How to Install MongoDB on Ubuntu.md @@ -0,0 +1,238 @@ +[#]: collector: (lujun9972) +[#]: translator: (An-DJ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Install MongoDB on Ubuntu) +[#]: via: (https://itsfoss.com/install-mongodb-ubuntu) +[#]: author: (Sergiu https://itsfoss.com/author/sergiu/) + +How to Install MongoDB on Ubuntu +====== + +**This tutorial presents two ways to install MongoDB on Ubuntu and Ubuntu-based Linux distributions.** + +[MongoDB][1] is an increasingly popular free and open-source NoSQL database that stores data in collections of JSON-like, flexible documents, in contrast to the usual table approach you’ll find in SQL databases. + +You are most likely to find MongoDB used in modern web applications. Its document model makes it very intuitive to access and handle with various programming languages. + +![mongodb Ubuntu][2] + +In this article, I’ll cover two ways you can install MongoDB on your Ubuntu system. + +### Installing MongoDB on Ubuntu based Distributions + + 1. Install MongoDB using Ubuntu’s repository. Easy but not the latest version of MongoDB + 2. Install MongoDB using its official repository. Slightly complicated but you get the latest version of MongoDB. + + + +The first installation method is easier, but I recommend the second method if you plan on using the latest release with official support. + +Some people might prefer using snap packages. There are snaps available in the Ubuntu Software Center, but I wouldn’t recommend using them; they’re outdated at the moment and I won’t be covering that. + +#### Method 1. Install MongoDB from Ubuntu Repository + +This is the easy way to install MongoDB on your system, you only need to type in a simple command. 
+ +##### Installing MongoDB + +First, make sure your packages are up-to-date. Open up a terminal and type: + +``` +sudo apt update && sudo apt upgrade -y +``` + +Go ahead and install MongoDB with: + +``` +sudo apt install mongodb +``` + +That’s it! MongoDB is now installed on your machine. + +The MongoDB service should automatically be started on install, but to check the status type + +``` +sudo systemctl status mongodb +``` + +![Check if the MongoDB service is running.][3] + +You can see that the service is **active**. + +##### Running MongoDB + +MongoDB is currently a systemd service, so we’ll use **systemctl** to check and modify it’s state, using the following commands: + +``` +sudo systemctl status mongodb +sudo systemctl stop mongodb +sudo systemctl start mongodb +sudo systemctl restart mongodb +``` + +You can also change if MongoDB automatically starts when the system starts up ( **default** : enabled): + +``` +sudo systemctl disable mongodb +sudo systemctl enable mongodb +``` + +To start working with (creating and editing) databases, type: + +``` +mongo +``` + +This will start up the **mongo shell**. Please check out the [manual][4] for detailed information on the available queries and options. + +**Note:** Depending on how you plan to use MongoDB, you might need to adjust your Firewall. That’s unfortunately more involved than what I can cover here and depends on your configuration. + +##### Uninstall MongoDB + +If you installed MongoDB from the Ubuntu Repository and want to uninstall it (maybe to install using the officially supported way), type: + +``` +sudo systemctl stop mongodb +sudo apt purge mongodb +sudo apt autoremove +``` + +This should completely get rid of your MongoDB install. Make sure to **backup** any collections or documents you might want to keep since they will be wiped out! + +#### Method 2. Install MongoDB Community Edition on Ubuntu + +This is the way the recommended way to install MongoDB, using the package manager. You’ll have to type a few more commands and it might be intimidating if you are newer to the Linux world. + +But there’s nothing to be afraid of! We’ll go through the installation process step by step. + +##### Installing MongoDB + +The package maintained by MongoDB Inc. is called **mongodb-org** , not **mongodb** (this is the name of the package in the Ubuntu Repository). Make sure **mongodb** is not installed on your system before applying this steps. The packages will conflict. Let’s get to it! 
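A quick way to confirm you are starting clean is to check for the distro package and remove it if it turns up. This mirrors the Method 1 uninstall steps above; back up any collections you care about before purging:

```
# Is the Ubuntu repository package installed?
dpkg -s mongodb 2>/dev/null | grep '^Status' || echo "mongodb not installed"

# If it is, stop the service and remove it before continuing
sudo systemctl stop mongodb
sudo apt purge mongodb
sudo apt autoremove
```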
+ +First, we’ll have to import the public key: + +``` +sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4 +``` + +Now, you need to add a new repository in your sources list so that you can install MongoDB Community Edition and also get automatic updates: + +``` +echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu $(lsb_release -cs)/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list +``` + +To be able to install **mongodb-org** , we’ll have to update our package database so that your system is aware of the new packages available: + +``` +sudo apt update +``` + +Now you can ether install the **latest stable version** of MongoDB: + +``` +sudo apt install -y mongodb-org +``` + +or a **specific version** (change the version number after **equal** sign) + +``` +sudo apt install -y mongodb-org=4.0.6 mongodb-org-server=4.0.6 mongodb-org-shell=4.0.6 mongodb-org-mongos=4.0.6 mongodb-org-tools=4.0.6 +``` + +If you choose to install a specific version, make sure you change the version number everywhere. If you only change it in the **mongodb-org=4.0.6** part, the latest version will be installed. + +By default, when updating using the package manager ( **apt-get** ), MongoDB will be updated to the newest updated version. To stop that from happening (and freezing to the installed version), use: + +``` +echo "mongodb-org hold" | sudo dpkg --set-selections +echo "mongodb-org-server hold" | sudo dpkg --set-selections +echo "mongodb-org-shell hold" | sudo dpkg --set-selections +echo "mongodb-org-mongos hold" | sudo dpkg --set-selections +echo "mongodb-org-tools hold" | sudo dpkg --set-selections +``` + +You have now successfully installed MongoDB! + +##### Configuring MongoDB + +By default, the package manager will create **/var/lib/mongodb** and **/var/log/mongodb** and MongoDB will run using the **mongodb** user account. + +I won’t go into changing these default settings since that is beyond the scope of this guide. You can check out the [manual][5] for detailed information. + +The settings in **/etc/mongod.conf** are applied when starting/restarting the **mongodb** service instance. + +##### Running MongoDB + +To start the mongodb daemon **mongod** , type: + +``` +sudo service mongod start +``` + +Now you should verify that the **mongod** process started successfully. This information is stored (by default) at **/var/log/mongodb/mongod.log**. Let’s check the contents of that file: + +``` +sudo cat /var/log/mongodb/mongod.log +``` + +![Check MongoDB logs to see if the process is running properly.][6] + +As long as you get this: **[initandlisten] waiting for connections on port 27017** somewhere in there, the process is running properly. + +**Note: 27017** is the default port of **mongod.** + +To stop/restart **mongod** enter: + +``` +sudo service mongod stop +sudo service mongod restart +``` + +Now, you can use MongoDB by opening the **mongo shell** : + +``` +mongo +``` + +##### Uninstall MongoDB + +Run the following commands + +``` +sudo service mongod stop +sudo apt purge mongodb-org* +``` + +To remove the **databases** and **log files** (make sure to **backup** what you want to keep!): + +``` +sudo rm -r /var/log/mongodb +sudo rm -r /var/lib/mongodb +``` + +**Wrapping Up** + +MongoDB is a great NoSQL database, easy to integrate into modern projects. I hope this tutorial helped you to set it up on your Ubuntu machine! Let us know how you plan on using MongoDB in the comments below. 
+ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-mongodb-ubuntu + +作者:[Sergiu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/sergiu/ +[b]: https://github.com/lujun9972 +[1]: https://www.mongodb.com/ +[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/mongodb-ubuntu.jpeg?resize=800%2C450&ssl=1 +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/mongodb_check_status.jpg?fit=800%2C574&ssl=1 +[4]: https://docs.mongodb.com/manual/tutorial/getting-started/ +[5]: https://docs.mongodb.com/manual/ +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/02/mongodb_org_check_logs.jpg?fit=800%2C467&ssl=1 diff --git a/sources/tech/20190304 What you need to know about Ansible modules.md b/sources/tech/20190304 What you need to know about Ansible modules.md new file mode 100644 index 0000000000..8330d4bd59 --- /dev/null +++ b/sources/tech/20190304 What you need to know about Ansible modules.md @@ -0,0 +1,311 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What you need to know about Ansible modules) +[#]: via: (https://opensource.com/article/19/3/developing-ansible-modules) +[#]: author: (Jairo da Silva Junior https://opensource.com/users/jairojunior) + +What you need to know about Ansible modules +====== +Learn how and when to develop custom modules for Ansible. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_container_block.png?itok=S8MbXEYw) + +Ansible works by connecting to nodes and sending small programs called modules to be executed remotely. This makes it a push architecture, where configuration is pushed from Ansible to servers without agents, as opposed to the pull model, common in agent-based configuration management systems, where configuration is pulled. + +These modules are mapped to resources and their respective states, which are represented in YAML files. They enable you to manage virtually everything that has an API, CLI, or configuration file you can interact with, including network devices like load balancers, switches, firewalls, container orchestrators, containers themselves, and even virtual machine instances in a hypervisor or in a public (e.g., AWS, GCE, Azure) and/or private (e.g., OpenStack, CloudStack) cloud, as well as storage and security appliances and system configuration. + +With Ansible's batteries-included model, hundreds of modules are included and any task in a playbook has a module behind it. + +The contract for building modules is simple: JSON in the stdout. The configurations declared in YAML files are delivered over the network via SSH/WinRM—or any other connection plugin—as small scripts to be executed in the target server(s). Modules can be written in any language capable of returning JSON, although most Ansible modules (except for Windows PowerShell) are written in Python using the Ansible API (this eases the development of new modules). + +Modules are one way of expanding Ansible capabilities. Other alternatives, like dynamic inventories and plugins, can also increase Ansible's power. It's important to know about them so you know when to use one instead of the other. 
+ +Plugins are divided into several categories with distinct goals, like Action, Cache, Callback, Connection, Filters, Lookup, and Vars. The most popular plugins are: + + * **Connection plugins:** These implement a way to communicate with servers in your inventory (e.g., SSH, WinRM, Telnet); in other words, how automation code is transported over the network to be executed. + * **Filters plugins:** These allow you to manipulate data inside your playbook. This is a Jinja2 feature that is harnessed by Ansible to solve infrastructure-as-code problems. + * **Lookup plugins:** These fetch data from an external source (e.g., env, file, Hiera, database, HashiCorp Vault). + + + +Ansible's official docs are a good resource on [developing plugins][1]. + +### When should you develop a module? + +Although many modules are delivered with Ansible, there is a chance that your problem is not yet covered or it's something too specific—for example, a solution that might make sense only in your organization. Fortunately, the official docs provide excellent guidelines on [developing modules][2]. + +**IMPORTANT:** Before you start working on something new, always check for open pull requests, ask developers at #ansible-devel (IRC/Freenode), or search the [development list][3] and/or existing [working groups][4] to see if a module exists or is in development. + +Signs that you need a new module instead of using an existing one include: + + * Conventional configuration management methods (e.g., templates, file, get_url, lineinfile) do not solve your problem properly. + * You have to use a complex combination of commands, shells, filters, text processing with magic regexes, and API calls using curl to achieve your goals. + * Your playbooks are complex, imperative, non-idempotent, and even non-deterministic. + + + +In the ideal scenario, the tool or service already has an API or CLI for management, and it returns some sort of structured data (JSON, XML, YAML). + +### Identifying good and bad playbooks + +> "Make love, but don't make a shell script in YAML." + +So, what makes a bad playbook? + +``` +- name: Read a remote resource +   command: "curl -v http://xpto/resource/abc" + register: resource + changed_when: False + + - name: Create a resource in case it does not exist +   command: "curl -X POST http://xpto/resource/abc -d '{ config:{ client: xyz, url: http://beta, pattern: core.md Dict.md lctt2014.md lctt2016.md lctt2018.md README.md } }'" +   when: "resource.stdout | 404" + + # Leave it here in case I need to remove it hehehe + #- name: Remove resource + #  command: "curl -X DELETE http://xpto/resource/abc" + #  when: resource.stdout == 1 +``` + +Aside from being very fragile—what if the resource state includes a 404 somewhere?—and demanding extra code to be idempotent, this playbook can't update the resource when its state changes. + +Playbooks written this way disrespect many infrastructure-as-code principles. They're not readable by human beings, are hard to reuse and parameterize, and don't follow the declarative model encouraged by most configuration management tools. They also fail to be idempotent and to converge to the declared state. + +Bad playbooks can jeopardize your automation adoption. Instead of harnessing configuration management tools to increase your speed, they have the same problems as an imperative automation approach based on scripts and command execution. 
This creates a scenario where you're using Ansible just as a means to deliver your old scripts, copying what you already have into YAML files. + +Here's how to rewrite this example to follow infrastructure-as-code principles. + +``` +- name: XPTO +  xpto: +    name: abc +    state: present +    config: +      client: xyz +      url: http://beta +      pattern: "*.*" +``` + +The benefits of this approach, based on custom modules, include: + + * It's declarative—resources are properly represented in YAML. + * It's idempotent. + * It converges from the declared state to the current state. + * It's readable by human beings. + * It's easily parameterized or reused. + + + +### Implementing a custom module + +Let's use [WildFly][5], an open source Java application server, as an example to introduce a custom module for our not-so-good playbook: + +``` + - name: Read datasource +   command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:read-resource()'" +   register: datasource + + - name: Create datasource +   command: "jboss-cli.sh -c '/subsystem=datasources/data-source=DemoDS:add(driver-name=h2, user-name=sa, password=sa, min-pool-size=20, max-pool-size=40, connection-url=.jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE..)'" +   when: 'datasource.stdout | outcome => failed' +``` + +Problems: + + * It's not declarative. + * JBoss-CLI returns plaintext in a JSON-like syntax; therefore, this approach is very fragile, since we need a type of parser for this notation. Even a seemingly simple parser can be too complex to treat many [exceptions][6]. + * JBoss-CLI is just an interface to send requests to the management API (port 9990). + * Sending an HTTP request is more efficient than opening a new JBoss-CLI session, connecting, and sending a command. + * It does not converge to the desired state; it only creates the resource when it doesn't exist. + + + +A custom module for this would look like: + +``` +- name: Configure datasource +      jboss_resource: +        name: "/subsystem=datasources/data-source=DemoDS" +        state: present +        attributes: +          driver-name: h2 +          connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE" +          jndi-name: "java:jboss/datasources/DemoDS" +          user-name: sa +          password: sa +          min-pool-size: 20 +          max-pool-size: 40 +``` + +This playbook is declarative, idempotent, more readable, and converges to the desired state regardless of the current state. + +### Why learn to build custom modules? + +Good reasons to learn how to build custom modules include: + + * Improving existing modules + * You have bad playbooks and want to improve them, or … + * You don't, but want to avoid having bad playbooks. + * Knowing how to build a module considerably improves your ability to debug problems in playbooks, thereby increasing your productivity. + + + +> "…abstractions save us time working, but they don't save us time learning." —Joel Spolsky, [The Law of Leaky Abstractions][7] + +#### Custom Ansible modules 101 + + * JSON (JavaScript Object Notation) in stdout: that's the contract! + * They can be written in any language, but … + * Python is usually the best option (or the second best) + * Most modules delivered with Ansible ( **lib/ansible/modules** ) are written in Python and should support compatible versions. 
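To make the "JSON in stdout" contract concrete, here is a minimal sketch of a module written in plain shell rather than Python. The file name, the message, and the argument handling are illustrative only; the point is that any executable which prints a JSON object like this satisfies the contract and can sit in a `library/` directory next to your playbooks:

```
#!/bin/sh
# Hypothetical minimal module: the only hard requirement is a JSON object on stdout.
# Ansible hands non-Python modules their arguments in a file whose path arrives as $1;
# this sketch ignores them and simply reports an unchanged, successful run.
changed="false"
msg="hello from a plain shell module"
printf '{"changed": %s, "msg": "%s", "rc": 0}\n' "$changed" "$msg"
```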
+ + + +#### The Ansible way + + * First step: + +``` +git clone https://github.com/ansible/ansible.git +``` + + * Navigate in **lib/ansible/modules/** and read the existing modules code. + + * Your tools are: Git, Python, virtualenv, pdb (Python debugger) + + * For comprehensive instructions, consult the [official docs][8]. + + + + +#### An alternative: drop it in the library directory + +``` +library/                  # if any custom modules, put them here (optional) +module_utils/             # if any custom module_utils to support modules, put them here (optional) +filter_plugins/           # if any custom filter plugins, put them here (optional) + +site.yml                  # master playbook +webservers.yml            # playbook for webserver tier +dbservers.yml             # playbook for dbserver tier + +roles/ +    common/               # this hierarchy represents a "role" +        library/          # roles can also include custom modules +        module_utils/     # roles can also include custom module_utils +        lookup_plugins/   # or other types of plugins, like lookup in this case +``` + + * It's easier to start. + * Doesn't require anything besides Ansible and your favorite IDE/text editor. + * This is your best option if it's something that will be used internally. + + + +**TIP:** You can use this directory layout to overwrite existing modules if, for example, you need to patch a module. + +#### First steps + +You could do it in your own—including using another language—or you could use the AnsibleModule class, as it is easier to put JSON in the stdout ( **exit_json()** , **fail_json()** ) in the way Ansible expects ( **msg** , **meta** , **has_changed** , **result** ), and it's also easier to process the input ( **params[]** ) and log its execution ( **log()** , **debug()** ). + +``` +def main(): + +  arguments = dict(name=dict(required=True, type='str'), +                  state=dict(choices=['present', 'absent'], default='present'), +                  config=dict(required=False, type='dict')) + +  module = AnsibleModule(argument_spec=arguments, supports_check_mode=True) +  try: +      if module.check_mode: +          # Do not do anything, only verifies current state and report it +          module.exit_json(changed=has_changed, meta=result, msg='Fez alguma coisa ou não...') + +      if module.params['state'] == 'present': +          # Verify the presence of a resource +          # Desired state `module.params['param_name'] is equal to the current state? +          module.exit_json(changed=has_changed, meta=result) + +      if module.params['state'] == 'absent': +          # Remove the resource in case it exists +          module.exit_json(changed=has_changed, meta=result) + +  except Error as err: +      module.fail_json(msg=str(err)) +``` + +**NOTES:** The **check_mode** ("dry run") allows a playbook to be executed or just verifies if changes are required, but doesn't perform them. **** Also, the **module_utils** directory can be used for shared code among different modules. + +For the full Wildfly example, check [this pull request][9]. + +### Running tests + +#### The Ansible way + +The Ansible codebase is heavily tested, and every commit triggers a build in its continuous integration (CI) server, [Shippable][10], which includes linting, unit tests, and integration tests. + +For integration tests, it uses containers and Ansible itself to perform the setup and verify phase. 
Here is a test case (written in Ansible) for our custom module's sample code: + +``` +- name: Configure datasource + jboss_resource: +   name: "/subsystem=datasources/data-source=DemoDS" +   state: present +   attributes: +     connection-url: "jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE" +     ... + register: result + +- name: assert output message that datasource was created + assert: +   that: +      - "result.changed == true" +      - "'Added /subsystem=datasources/data-source=DemoDS' in result.msg" +``` + +#### An alternative: bundling a module with your role + +Here is a [full example][11] inside a simple role: + +``` +[*Molecule*]() + [*Vagrant*]() + [*pytest*](): `molecule init` (inside roles/) +``` + +It offers greater flexibility to choose: + + * Simplified setup + * How to spin up your infrastructure: e.g., Vagrant, Docker, OpenStack, EC2 + * How to verify your infrastructure tests: Testinfra and Goss + + + +But your tests would have to be written using pytest with Testinfra or Goss, instead of plain Ansible. If you'd like to learn more about testing Ansible roles, see my article about [using Molecule][12]. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/developing-ansible-modules + +作者:[Jairo da Silva Junior][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jairojunior +[b]: https://github.com/lujun9972 +[1]: https://docs.ansible.com/ansible/latest/dev_guide/developing_plugins.html#developing-plugins +[2]: https://docs.ansible.com/ansible/latest/dev_guide/developing_modules.html +[3]: https://groups.google.com/forum/#!forum/ansible-devel +[4]: https://github.com/ansible/community/ +[5]: http://www.wildfly.org/ +[6]: https://tools.ietf.org/html/rfc7159 +[7]: https://en.wikipedia.org/wiki/Leaky_abstraction#The_Law_of_Leaky_Abstractions +[8]: https://docs.ansible.com/ansible/latest/dev_guide/developing_modules_general.html#developing-modules-general +[9]: https://github.com/ansible/ansible/pull/43682/files +[10]: https://app.shippable.com/github/ansible/ansible/dashboard +[11]: https://github.com/jairojunior/ansible-role-jboss/tree/with_modules +[12]: https://opensource.com/article/18/12/testing-ansible-roles-molecule diff --git a/sources/tech/20190305 How rootless Buildah works- Building containers in unprivileged environments.md b/sources/tech/20190305 How rootless Buildah works- Building containers in unprivileged environments.md new file mode 100644 index 0000000000..cf046ec1b3 --- /dev/null +++ b/sources/tech/20190305 How rootless Buildah works- Building containers in unprivileged environments.md @@ -0,0 +1,133 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How rootless Buildah works: Building containers in unprivileged environments) +[#]: via: (https://opensource.com/article/19/3/tips-tricks-rootless-buildah) +[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan) + +How rootless Buildah works: Building containers in unprivileged environments +====== +Buildah is a tool and library for building Open Container Initiative (OCI) container images. 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_2015-1-osdc-lead.png?itok=VEB4zwza) + +In previous articles, including [How does rootless Podman work?][1], I talked about [Podman][2], a tool that enables users to manage pods, containers, and container images. + +[Buildah][3] is a tool and library for building Open Container Initiative ([OCI][4]) container images that is complementary to Podman. (Both projects are maintained by the [containers][5] organization, of which I'm a member.) In this article, I will talk about rootless Buildah, including the differences between it and Podman. + +Our goal with Buildah was to build a low-level tool that could be used either directly or vendored into other tools to build container images. + +### Why Buildah? + +Here is how I describe a container image: It is basically a rootfs directory that contains the code needed to run your container. This directory is called a rootfs because it usually looks like **/ (root)** on a Linux machine, meaning you are likely to find directories in a rootfs like **/etc** , **/usr** , **/bin** , etc. + +The second part of a container image is a JSON file that describes the contents of the rootfs. It contains fields like the command to run the container, the entrypoint, the environment variables required to run the container, the working directory of the container, etc. Basically this JSON file allows the developer of the container image to describe how the container image is expected to be used. The fields in this JSON file have been standardized in the [OCI Image Format specification][6] + +The rootfs and the JSON file then get tar'd together to create an image bundle that is stored in a container registry. To create a layered image, you install more software into the rootfs and modify the JSON file. Then you tar up the differences of the new and the old rootfs and store that in another image tarball. The second JSON file refers back to the first JSON file via a checksum. + +Many years ago, Docker introduced Dockerfile, a simplified scripting language for building container images. Dockerfile was great and really took off, but it has many shortcomings that users have complained about. For example: + + * Dockerfile encourages the inclusion of tools used to build containers inside the container image. Container images do not need to include yum/dnf/apt, but most contain one of them and all their dependencies. + + * Each line causes a layer to be created. Because of this, secrets can mistakenly get added to container images. If you create a secret in one line of the Dockerfile and delete it in the next, the secret is still in the image. + + + + +One of my biggest complaints about the "container revolution" is that six years since it started, the only way to build a container image was still with Dockerfiles. Lots of tools other than **docker build** have appeared besides Buildah, but most still deal only with Dockerfile. So users continue hacking around the problems with Dockerfile. + +Note that [umoci][7] is an alternative to **docker build** that allows you to build container images without Dockerfile. + +Our goal with Buildah was to build a simple tool that could just create a rootfs directory on disk and allow other tools to populate the directory, then create the JSON file. Finally, Buildah would create the OCI image and push it to a container registry where it could be used by any container engine, like [Docker][8], Podman, [CRI-O][9], or another Buildah. 
+ +Buildah also supports Dockerfile, since we know the bulk of people building containers have created Dockerfiles. + +### Using Buildah directly + +Lots of people use Buildah directly. A cool feature of Buildah is that you can script up the container build directly in Bash. + +The example below creates a Bash script called **myapp.sh** , which uses Buildah to pull down the Fedora image, and then uses **dnf** and **make** on a machine to install software into the container image rootfs, **$mnt**. It then adds some fields to the JSON file using **buildah config** and commits the container to a container image **myapp**. Finally, it pushes the container image to a container registry, **quay.io**. (It could push it to any container registry.) Now this OCI image can be used by any container engine or Kubernetes. + +``` +cat myapp.sh +#!/bin/sh +ctr=$(buildah from fedora) +mnt=($buildah mount $ctr) +dnf -y install --installroot $mnt httpd +make install DESTDIR=$mnt myapp +rm -rf $mnt/var/cache $mnt/var/log/* +buildah config --command /usr/bin/myapp -env foo=bar --working-dir=/root $ctr +buildah commit $ctr myapp +buildah push myapp http://quay.io/username/myapp +``` + +To create really small images, you could replace **fedora** in the script above with **scratch** , and Buildah will build a container image that only has the requirements for the **httpd** package inside the container image. No need for Python or DNF. + +### Podman's relationship to Buildah + +With Buildah, we have a low-level tool for building container images. Buildah also provides a library for other tools to build container images. Podman was designed to replace the Docker command line interface (CLI). One of the Docker CLI commands is **docker build**. We needed to have **podman build** to support building container images with Dockerfiles. Podman vendored in the Buildah library to allow it to do **podman build**. Any time you do a **podman build** , you are executing Buildah code to build your container images. If you are only going to use Dockerfiles to build container images, we recommend you only use Podman; there's no need for Buildah at all. + +### Other tools using the Buildah library + +Podman is not the only tool to take advantage of the Buildah library. [OpenShift 4 Source-to-Image][10] (S2I) will also use Buildah to build container images. OpenShift S2I allows developers using OpenShift to use Git commands to modify source code; when they push the changes for their source code to the Git repository, OpenShift kicks off a job to compile the source changes and create a container image. It also uses Buildah under the covers to build this image. + +[Ansible-Bender][11] is a new project to build container images via an Ansible playbook. For those familiar with Ansible, Ansible-Bender makes it easy to describe the contents of the container image and then uses Buildah to package up the container image and send it to a container registry. + +We would love to see other tools and languages for describing and building a container image and would welcome others use Buildah to do the conversion. + +### Problems with rootless + +Buildah works fine in rootless mode. It uses user namespace the same way Podman does. If you execute + +``` +$ buildah bud --tag myapp -f Dockerfile . +$ buildah push myapp http://quay.io/username/myapp +``` + +in your home directory, everything works great. + +However, if you execute the script described above, it will fail! 
+ +The problem is that, when running the **buildah mount** command in rootless mode, the **buildah** command must put itself inside the user namespace and create a new mount namespace. Rootless users are not allowed to mount filesystems when not running in a user namespace. + +When the Buildah executable exits, the user namespace and mount namespace disappear, so the mount point no longer exists. This means the commands after **buildah mount** that attempt to write to **$mnt** will fail since **$mnt** is no longer mounted. + +How can we make the script work in rootless mode? + +#### Buildah unshare + +Buildah has a special command, **buildah unshare** , that allows you to enter the user namespace. If you execute it with no commands, it will launch a shell in the user namespace, and your shell will seem like it is running as root and all the contents of the home directory will seem like they are owned by root. If you look at the owner or files in **/usr** , it will list them as owned by **nfsnobody** (or nobody). This is because your user ID (UID) is now root inside the user namespace and real root (UID=0) is not mapped into the user namespace. The kernel represents all files owned by UIDs not mapped into the user namespace as the NFSNOBODY user. When you exit the shell, you will exit the user namespace, you will be back to your normal UID, and the home directory will be owned by your UID again. + +If you want to execute the **myapp.sh** command defined above, you can execute **buildah unshare myapp.sh** and the script will now run correctly. + +#### Conclusion + +Building and running containers in unprivileged environments is now possible and quite useable. There is little reason for developers to develop containers as root. + +If you want to use a traditional container engine, and use Dockerfile's for builds, then you should probably just use Podman. But if you want to experiment with building container images in new ways without using Dockerfile, then you should really take a look at Buildah. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/tips-tricks-rootless-buildah + +作者:[Daniel J Walsh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rhatdan +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/19/2/how-does-rootless-podman-work +[2]: https://podman.io/ +[3]: https://github.com/containers/buildah +[4]: https://www.opencontainers.org/ +[5]: https://github.com/containers +[6]: https://github.com/opencontainers/image-spec +[7]: https://github.com/openSUSE/umoci +[8]: https://github.com/docker +[9]: https://cri-o.io/ +[10]: https://github.com/openshift/source-to-image +[11]: https://github.com/TomasTomecek/ansible-bender diff --git a/sources/tech/20190305 Running the ‘Real Debian- on Raspberry Pi 3- -For DIY Enthusiasts.md b/sources/tech/20190305 Running the ‘Real Debian- on Raspberry Pi 3- -For DIY Enthusiasts.md new file mode 100644 index 0000000000..785a6eeb5a --- /dev/null +++ b/sources/tech/20190305 Running the ‘Real Debian- on Raspberry Pi 3- -For DIY Enthusiasts.md @@ -0,0 +1,134 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Running the ‘Real Debian’ on Raspberry Pi 3+ [For DIY Enthusiasts]) +[#]: via: (https://itsfoss.com/debian-raspberry-pi) +[#]: author: (Shirish https://itsfoss.com/author/shirish/) + +Running the ‘Real Debian’ on Raspberry Pi 3+ [For DIY Enthusiasts] +====== + +If you have ever used a Raspberry Pi device, you probably already know that it recommends a Linux distribution called [Raspbian][1]. + +Raspbian is a heavily customized form of Debian to run on low-powered ARM processors. It’s not bad. In fact, it’s an excellent OS for Raspberry Pi devices but it’s not the real Debian. + +[Debian purists like me][2] would prefer to run the actual Debian over the Raspberry Pi’s customized Debian version. I trust Debian more than any other distribution to provide me a vast amount of properly vetted free software packages. Moreover, a project like this would help other ARM devices as well. + +Above all, running the official Debian on Raspberry Pi is sort of challenge and I like such challenges. + +![Real Debian on Raspberry Pi][3] + +I am not the only one who thinks like this. There are many other Debian users who share the same feeling and this is why there exists an ongoing project to create a [Debian image for Raspberry Pi][4]. + +About two and a half months back, a Debian Developer (DD) named [Gunnar Wolf][5] took over that unofficial Raspberry Pi image generation project. + +I’ll be quickly showing you how can you install this Raspberry Pi Debian Buster preview image on your Raspberry Pi 3 (or higher) devices. + +### Getting Debian on Raspberry Pi [For Experts] + +``` +Warning + +Be aware this Debian image is very raw and unsupported at the moment. Though it’s very new, I believe experienced Raspberry Pi and Debian users should be able to use it. +``` + +Now as far as [Debian][6] is concerned, here is the Debian image and instructions that you could use to put the Debian stock image on your Raspberry pi 3 Model B+. 
+ +#### Step 1: Download the Debian Raspberry Pi Buster image + +You can download the preview images using wget command: + +``` +wget https://people.debian.org/~gwolf/raspberrypi3/20190206/20190206-raspberry-pi-3-buster-PREVIEW.img.xz +``` + +#### Step 2: Verify checksum (optional) + +It’s optional but you should [verify the checksum][7]. You can do that by downloading the SHA256 hashfile and then comparing it with that of the downloaded Raspberry Pi Debian image. + +At my end I had moved both the .sha256 file as img.xz to a directory to make it easier to check although it’s not necessary. + +``` +wget https://people.debian.org/~gwolf/raspberrypi3/20190206/20190206-raspberry-pi-3-buster-PREVIEW.img.xz.sha256 + +sha256sum -c 20190206-raspberry-pi-3-buster-PREVIEW.img.xz.sha256 +``` + +#### Step 3: Write the image to your SD card + +Once you have verified the image, take a look at it. It is around 400MB in the compressed xzip format. You can extract it to get an image of around 1.5GB in size. + +Insert your SD card. **Before you carry on to the next command please change the sdX to a suitable name that corresponds to your SD card.** + +The command basically extracts the img.xz archive to the SD card. The progress switch/flag enables you to see a progress line with a number as to know how much the archive has extracted. + +``` +xzcat 20190206-raspberry-pi-3-buster-PREVIEW.img.xz | dd of=/dev/sdX bs=64k oflag=dsync status=progress$ xzcat 20190206-raspberry-pi-3-buster-PREVIEW.img.xz | dd of=/dev/sdX bs=64k oflag=dsync status=progress +``` + +Once you have successfully flashed your SD card, you should be able test if the installation went ok by sshing into your Raspberry Pi. The default root password is raspberry. + +``` +ssh root@rpi3 +``` + +If you are curious to know how the Raspberry Pi image was built, you can look at the [build scripts][8]. + +You can find more info on the project homepage. + +[DEBIAN RASPBERRY PI IMAGE][15] + +### How to contribute to the Raspberry Pi Buster effort + +There is a mailing list called [debian-arm][9] where people could contribute their efforts and ask questions. As you can see in the list, there is already a new firmware which was released [few days back][10] which might make booting directly a reality instead of the workaround shared above. + +If you want you could make a new image using the raspi3-image-spec shared above or wait for Gunnar to make a new image which might take time. + +Most of the maintainers also hang out at #vmdb2 at #OFTC. You can either use your IRC client or [Riot client][11], register your name at Nickserv and connect with either Gunnar Wolf, Roman Perier or/and Lars Wirzenius, author of [vmdb2][12]. I might do a follow-up on vmdb2 as it’s a nice little tool by itself. + +### The Road Ahead + +If there are enough interest and contributors, for instance, the lowest-hanging fruit would be to make sure that the ARM64 port [wiki page][13] is as current as possible. The benefits are and can be enormous. + +There are a huge number of projects which could benefit from either having a [Pi farm][14] to making your media server or a SiP phone or whatever you want to play/work with. + +Another low-hanging fruit might be synchronization between devices, say an ARM cluster sharing reports to either a Debian desktop by way of notification or on mobile or both ways. 
+ +While I have shared about Raspberry Pi, there are loads of single-board computers on the market already and lot more coming, both from MIPS as well as OpenRISC-V so there is going to plenty of competition in the days ahead. + +Also, OpenRISC-V is and would be open-sourcing lot of its IP so non-free firmware or binary blobs would not be needed. Even MIPS is rumored to be more open which may challenge ARM if MIPS and OpenRISC-V are able to get their logistics and pricing right, but that is a story for another day. + +There are many more vendors, I am just sharing the ones whom I am most interested to see what they come up with. + +I hope the above sheds some light why it makes sense to have Debian on the Raspberry Pi. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/debian-raspberry-pi + +作者:[Shirish][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/shirish/ +[b]: https://github.com/lujun9972 +[1]: https://www.raspberrypi.org/downloads/raspbian/ +[2]: https://itsfoss.com/reasons-why-i-love-debian/ +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/debian-raspberry-pi.png?resize=800%2C450&ssl=1 +[4]: https://wiki.debian.org/RaspberryPi3 +[5]: https://gwolf.org/node/4139 +[6]: https://www.debian.org/ +[7]: https://itsfoss.com/checksum-tools-guide-linux/ +[8]: https://github.com/Debian/raspi3-image-spec +[9]: https://lists.debian.org/debian-arm/2019/02/threads.html +[10]: https://alioth-lists.debian.net/pipermail/pkg-raspi-maintainers/Week-of-Mon-20190225/000310.html +[11]: https://itsfoss.com/riot-desktop/ +[12]: https://liw.fi/vmdb2/ +[13]: https://wiki.debian.org/Arm64Port +[14]: https://raspi.farm/ +[15]: https://wiki.debian.org/RaspberryPi3 diff --git a/sources/tech/20190306 ClusterShell - A Nifty Tool To Run Commands On Cluster Nodes In Parallel.md b/sources/tech/20190306 ClusterShell - A Nifty Tool To Run Commands On Cluster Nodes In Parallel.md new file mode 100644 index 0000000000..8f69143d36 --- /dev/null +++ b/sources/tech/20190306 ClusterShell - A Nifty Tool To Run Commands On Cluster Nodes In Parallel.md @@ -0,0 +1,309 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (ClusterShell – A Nifty Tool To Run Commands On Cluster Nodes In Parallel) +[#]: via: (https://www.2daygeek.com/clustershell-clush-run-commands-on-cluster-nodes-remote-system-in-parallel-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +ClusterShell – A Nifty Tool To Run Commands On Cluster Nodes In Parallel +====== + +We had written two articles in the past to run commands on multiple remote server in parallel. + +These are **[Parallel SSH (PSSH)][1]** or **[Distributed Shell (DSH)][2]**. + +Today also, we are going to discuss about the same kind of topic but it allows us to perform the same on cluster nodes as well. + +You may think, i can write a small shell script to archive this instead of installing these third party packages. + +Of course you are right and if you are going to run some commands in 10-15 remote systems then you don’t need to use this. + +However, the scripts take some time to complete this task as it’s running in a sequential order. + +Think about if you would like to run some commands on 1000+ servers what will be the options? 
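+
+To picture the problem, here is a rough sketch of the kind of sequential script most of us would write first (the host names and the command are only placeholders):
+
+```
+#!/bin/bash
+# Run the same command on every server, one after another.
+# With 1000+ hosts, the total time is the sum of every single run.
+for host in server1 server2 server3
+do
+    ssh "$host" "uptime"
+done
+```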
+ +In this case your script won’t help you. Also, it would take good amount of time to complete a task. + +So, to overcome this kind of issue and situation. We need to run the command in parallel on remote machines. + +For that, we need use in one of the Parallel applications. I hope this explanation might fulfilled your doubts about parallel utilities. + +### What Is ClusterShell? + +clush stands for [ClusterShell][3]. ClusterShell is an event-driven open source Python library, designed to run local or distant commands in parallel on server farms or on large Linux clusters. + +It will take care of common issues encountered on HPC clusters, such as operating on groups of nodes, running distributed commands using optimized execution algorithms, as well as gathering results and merging identical outputs, or retrieving return codes. + +ClusterShell takes advantage of existing remote shell facilities already installed on your systems, like SSH. + +ClusterShell’s primary goal is to improve the administration of high- performance clusters by providing a lightweight but scalable Python API for developers. It also provides clush, clubak and cluset/nodeset, convenient command-line tools that allow traditional shell scripts to benefit from some of the library features. + +ClusterShell’s written in Python and it requires Python (v2.6+ or v3.4+) to run on your system. + +### How To Install ClusterShell On Linux? + +ClusterShell package is available in most of the distribution official package manager. So, use the distribution package manager tool to install it. + +For **`Fedora`** system, use **[DNF Command][4]** to install clustershell. + +``` +$ sudo dnf install clustershell +``` + +Python 2 module and tools are installed and if it’s default on your system then run the following command to install Python 3 development on Fedora System. + +``` +$ sudo dnf install python3-clustershell +``` + +Make sure you should have enabled the **[EPEL repository][5]** on your system before performing clustershell installation. + +For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install clustershell. + +``` +$ sudo yum install clustershell +``` + +Python 2 module and tools are installed and if it’s default on your system then run the following command to install Python 3 development on CentOS/RHEL System. + +``` +$ sudo yum install python34-clustershell +``` + +For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install clustershell. + +``` +$ sudo zypper install clustershell +``` + +Python 2 module and tools are installed and if it’s default on your system then run the following command to install Python 3 development on OpenSUSE System. + +``` +$ sudo zypper install python3-clustershell +``` + +For **`Debian/Ubuntu`** systems, use **[APT-GET Command][8]** or **[APT Command][9]** to install clustershell. + +``` +$ sudo apt install clustershell +``` + +### How To Install ClusterShell In Linux Using PIP? + +Use PIP to install ClusterShell because it’s written in Python. + +Make sure you should have enabled the **[Python][10]** and **[PIP][11]** on your system before performing clustershell installation. + +``` +$ sudo pip install ClusterShell +``` + +### How To Use ClusterShell On Linux? + +It’s straight forward and awesome tool compared with other utilities such as pssh and dsh. It has so many options to perform the remote execution in parallel. + +Make sure you should have enabled the **[password less login][12]** on your system before start using clustershell. 
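+
+If you have not set that up yet, one common way to get key-based (passwordless) SSH from the control machine to a node looks roughly like this (the user name and IP address below just mirror the examples used later in this article, so substitute your own):
+
+```
+# Generate a key pair once on the control machine (accept the defaults).
+$ ssh-keygen -t rsa
+
+# Copy the public key to every node you want clush to reach.
+$ ssh-copy-id daygeek@192.168.1.4
+
+# Confirm that the login no longer asks for a password.
+$ ssh daygeek@192.168.1.4 uptime
+```
+
+Repeat the ssh-copy-id step for each node (or script it), since clush simply rides on top of plain SSH.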
+ +The following configuration file defines system-wide default values. You no need to modify anything here. + +``` +$ cat /etc/clustershell/clush.conf +``` + +If you would like to create a servers group. Here you can go. By default some examples were available so, do the same for your requirements. + +``` +$ cat /etc/clustershell/groups.d/local.cfg +``` + +Just run the clustershell command in the following format to get the information from the given nodes. + +``` +$ clush -w 192.168.1.4,192.168.1.9 cat /proc/version +192.168.1.9: Linux version 4.15.0-45-generic ([email protected]) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019 +192.168.1.4: Linux version 3.10.0-957.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) ) #1 SMP Thu Nov 8 23:39:32 UTC 2018 +``` + +**Option:** + + * **`-w:`** nodes where to run the command. + + + +You can use the regular expressions instead of using full hostname and IPs. + +``` +$ clush -w 192.168.1.[4,9] uname -r +192.168.1.9: 4.15.0-45-generic +192.168.1.4: 3.10.0-957.el7.x86_64 +``` + +Alternatively you can use the following format if you have the servers in the same IP series. + +``` +$ clush -w 192.168.1.[4-9] date +192.168.1.6: Mon Mar 4 21:08:29 IST 2019 +192.168.1.7: Mon Mar 4 21:08:29 IST 2019 +192.168.1.8: Mon Mar 4 21:08:29 IST 2019 +192.168.1.5: Mon Mar 4 09:16:30 CST 2019 +192.168.1.9: Mon Mar 4 21:08:29 IST 2019 +192.168.1.4: Mon Mar 4 09:16:30 CST 2019 +``` + +clustershell allow us to run the command in batch mode. Use the following format to achieve this. + +``` +$ clush -w 192.168.1.4,192.168.1.9 -b +Enter 'quit' to leave this interactive mode +Working with nodes: 192.168.1.[4,9] +clush> hostnamectl +--------------- +192.168.1.4 +--------------- + Static hostname: CentOS7.2daygeek.com + Icon name: computer-vm + Chassis: vm + Machine ID: 002f47b82af248f5be1d67b67e03514c + Boot ID: f9b37a073c534dec8b236885e754cb56 + Virtualization: kvm + Operating System: CentOS Linux 7 (Core) + CPE OS Name: cpe:/o:centos:centos:7 + Kernel: Linux 3.10.0-957.el7.x86_64 + Architecture: x86-64 +--------------- +192.168.1.9 +--------------- + Static hostname: Ubuntu18 + Icon name: computer-vm + Chassis: vm + Machine ID: 27f6c2febda84dc881f28fd145077187 + Boot ID: f176f2eb45524d4f906d12e2b5716649 + Virtualization: oracle + Operating System: Ubuntu 18.04.2 LTS + Kernel: Linux 4.15.0-45-generic + Architecture: x86-64 +clush> free -m +--------------- +192.168.1.4 +--------------- + total used free shared buff/cache available +Mem: 1838 641 217 19 978 969 +Swap: 2047 0 2047 +--------------- +192.168.1.9 +--------------- + total used free shared buff/cache available +Mem: 1993 352 1067 1 573 1473 +Swap: 1425 0 1425 +clush> w +--------------- +192.168.1.4 +--------------- + 09:21:14 up 3:21, 3 users, load average: 0.00, 0.01, 0.05 +USER TTY FROM [email protected] IDLE JCPU PCPU WHAT +daygeek :0 :0 06:02 ?xdm? 1:28 0.30s /usr/libexec/gnome-session-binary --session gnome-classic +daygeek pts/0 :0 06:03 3:17m 0.06s 0.06s bash +daygeek pts/1 192.168.1.6 06:03 52:26 0.10s 0.10s -bash +--------------- +192.168.1.9 +--------------- + 21:13:12 up 3:12, 1 user, load average: 0.08, 0.03, 0.00 +USER TTY FROM [email protected] IDLE JCPU PCPU WHAT +daygeek pts/0 192.168.1.6 20:42 29:41 0.05s 0.05s -bash +clush> quit +``` + +If you would like to run the command on a group of nodes then use the following format. 
+ +``` +$ clush -w @dev uptime +or +$ clush -g dev uptime +or +$ clush --group=dev uptime + +192.168.1.9: 21:10:10 up 3:09, 1 user, load average: 0.09, 0.03, 0.01 +192.168.1.4: 09:18:12 up 3:18, 3 users, load average: 0.01, 0.02, 0.05 +``` + +If you would like to run the command on more than one group of nodes then use the following format. + +``` +$ clush -w @dev,@uat uptime +or +$ clush -g dev,uat uptime +or +$ clush --group=dev,uat uptime + +192.168.1.7: 07:57:19 up 59 min, 1 user, load average: 0.08, 0.03, 0.00 +192.168.1.9: 20:27:20 up 1:00, 1 user, load average: 0.00, 0.00, 0.00 +192.168.1.5: 08:57:21 up 59 min, 1 user, load average: 0.00, 0.01, 0.05 +``` + +clustershell allow us to copy a file to remote machines. To copy local file or directory to the remote nodes in the same location. + +``` +$ clush -w 192.168.1.[4,9] --copy /home/daygeek/passwd-up.sh +``` + +We can verify the same by running the following command. + +``` +$ clush -w 192.168.1.[4,9] ls -lh /home/daygeek/passwd-up.sh +192.168.1.4: -rwxr-xr-x. 1 daygeek daygeek 159 Mar 4 09:00 /home/daygeek/passwd-up.sh +192.168.1.9: -rwxr-xr-x 1 daygeek daygeek 159 Mar 4 20:52 /home/daygeek/passwd-up.sh +``` + +To copy local file or directory to the remote nodes in the different location. + +``` +$ clush -g uat --copy /home/daygeek/passwd-up.sh --dest /tmp +``` + +We can verify the same by running the following command. + +``` +$ clush --group=uat ls -lh /tmp/passwd-up.sh +192.168.1.7: -rwxr-xr-x. 1 daygeek daygeek 159 Mar 6 07:44 /tmp/passwd-up.sh +``` + +To copy file or directory from remote nodes to local system. + +``` +$ clush -w 192.168.1.7 --rcopy /home/daygeek/Documents/magi.txt --dest /tmp +``` + +We can verify the same by running the following command. + +``` +$ ls -lh /tmp/magi.txt.192.168.1.7 +-rw-r--r-- 1 daygeek daygeek 35 Mar 6 20:24 /tmp/magi.txt.192.168.1.7 +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/clustershell-clush-run-commands-on-cluster-nodes-remote-system-in-parallel-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/pssh-parallel-ssh-run-execute-commands-on-multiple-linux-servers/ +[2]: https://www.2daygeek.com/dsh-run-execute-shell-commands-on-multiple-linux-servers-at-once/ +[3]: https://cea-hpc.github.io/clustershell/ +[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[5]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/ +[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[9]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[10]: https://www.2daygeek.com/3-methods-to-install-latest-python3-package-on-centos-6-system/ +[11]: https://www.2daygeek.com/install-pip-manage-python-packages-linux/ +[12]: https://www.2daygeek.com/linux-passwordless-ssh-login-using-ssh-keygen/ diff --git a/sources/tech/20190306 Getting started with the Geany text editor.md b/sources/tech/20190306 Getting started with the 
Geany text editor.md new file mode 100644 index 0000000000..7da5f95686 --- /dev/null +++ b/sources/tech/20190306 Getting started with the Geany text editor.md @@ -0,0 +1,141 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Getting started with the Geany text editor) +[#]: via: (https://opensource.com/article/19/3/getting-started-geany-text-editor) +[#]: author: (James Mawson https://opensource.com/users/dxmjames) + +Getting started with the Geany text editor +====== +Geany is a light and swift text editor with IDE features. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh) + +I have to admit, it took me a rather embarrassingly long time to really get into Linux as a daily driver. One thing I recall from these years in the wilderness was how strange it was to watch open source types get so worked up about text editors. + +It wasn't just that opinions differed. Disagreements were intense. And you'd see them again and again. + +I mean, I suppose it makes some sense. Doing dev or admin work means you're spending a lot of time with a text editor. And when it gets in the way or won't do quite what you want? In that exact moment, that's the most frustrating thing in the world. + +And I know what it means to really hate a text editor. I learned this many years ago in the computer labs at university trying to figure out Emacs. I was quite shocked that a piece of software could have so many sadomasochistic overtones. People were doing that to each other deliberately! + +So perhaps it's a rite of passage that now I have one I very much like. It's called [Geany][1], it's on GPL, and it's [in the repositories][2] of most popular distributions. + +Here's why it works for me. + +### I'm into simplicity + +The main thing I want from a text editor is just to edit text. I don't think there should be any kind of learning curve in the way. I should be able to open it and use it. + +For that reason, I've generally used whatever is included with an operating system. On Windows 10, I used Notepad far longer than I should have. When I finally replaced it, it was with Notepad++. In the Linux terminal, I like Nano. + +I was perfectly aware I was missing out on a lot of useful functionality. But it was never enough of a pain point to make a change. And it's not that I've never tried anything more elaborate. I did some of my first real programming on Visual Basic and Borland Delphi. + +These development environments gave you a graphical interface to design your windows visually, various windows where you could configure properties and settings, a text interface to write your functions, and various odds and ends for debugging. This was a great way to build desktop applications, so long as you used it the way it was intended. + +But if you wanted to do something the authors didn't anticipate, all these extra moving parts suddenly got in the way. As software became more and more about the web and the internet, this situation started happening all the time. + +In the past, I used HTML editing suites like Macromedia Dreamweaver (as it was back then) and FirstPage for static websites. Again, I found the features could get in the way as much as they helped. These applications had their own ideas about how to organize your project, and if you had a different view, it was an awful bother. 
+ +More recently, after a long break from programming, I started learning the people's language: [Python][3]. I bought a book of introductory tutorials, which said to install [IDLE][4], so I did. I think I got about five minutes into it before ditching it to run the interpreter from the command line. It had way too many moving parts to deal with. Especially for HelloWorld.py. + +But I always went back to Notepad++ and Nano whenever I could get away with it. + +So what changed? Well, a few months ago I [ditched Windows 10][5] completely (hooray!). Sticking with what I knew, I used Nano as my main text editor for a few weeks. + +I learned that Nano is great when you're already on the command line and you need to launch a Navy SEAL mission. You know what I mean. A lightning-fast raid. Get in, complete the objective, and get out. + +It's less ideal for long campaigns—or even moderately short ones. Even just adding a new page to a static website turns out to involve many repetitive keystrokes. As much as anything else, I really missed being able to navigate and select text with the mouse. + +### Introducing Geany + +The Geany project began in 2005 and is still actively developed. + +It has minimal dependencies: just the [GTK Toolkit][6] and the libraries that GTK depends on. If you have any kind of desktop environment installed, you almost certainly have GTK on your machine. + +I'm using it on Xfce, but thanks to these minimal dependencies, Geany is portable across desktop environments. + +Geany is fast and light. Installing Geany from the package manager took mere moments, and it uses only 3.1MB of space on my machine. + +So far, I've used it for HTML, CSS, and Python and to edit configuration files. It also recognizes C, Java, JavaScript, Perl, and [more][7]. + +### No-compromise simplicity + +Geany has a lot of great features that make life easier. Just listing them would miss the best bit, which is this: Geany makes sense right out of the box. As soon as it's installed, you can start editing files straightaway, and it just works. + +For all the IDE functionality, none of it gets in the way. The default settings are set intelligently, and the menus are laid out nicely enough that it's no hassle to change them. + +It doesn't try to organize your project for you, and it doesn't have strong opinions about how you should do anything. + +### Handles whitespace beautifully + +By default, every time you press Enter, Geany preserves the indentation on the new line. In addition to saving a few tedious keystrokes, it avoids the inconsistent use of tabs and spaces, which can sometimes sneak in when your mind's elsewhere and make your code hard to follow for anyone with a different text editor. + +But what if you're editing a file that's already suffered this treatment? For example, I needed to edit an HTML file that was indented with a mix of tabs and spaces, making it a nightmare to figure out how the tags were nested. + +With Geany, it took just seconds to hunt through the menus to change the tab length from four spaces to eight. Even better was the option to convert those tabs to spaces. Problem solved! + +### Clever shortcuts and automation + +How often do you write the correct code on the wrong line? I do it all the time. + +Geany makes it easy to move lines of code up and down using Alt+PgUp and Alt+PgDn. This is a little nicer than just a regular cut and paste—instead of needing four or five key presses, you only need one. + +When coding HTML, Geany automatically closes tags for you. 
As well as saving time, this avoids a lot of annoying bugs. When you forget to close a tag, you can spend ages scouring the document looking for something far more complex. + +It gets even better in Python, where indentation is crucial. Whenever you end a line with a colon, Geany automatically indents it for you. + +One nice little side effect is that when you forget to include the colon—something I do with embarrassing regularity—you realize it immediately when you don't get the automatic indentation you expected. + +The default indentation is a single tab, while I prefer two spaces. Because Geany's menus are very well laid out, it took me only a few seconds to figure out how to change it. + +You, of course, get syntax highlighting too. In addition, it tracks your [variable scope][8] and offers useful autocompletion. + +### Large plugin library + +Geany has a [big library of plugins][9], but so far I haven't needed to try any. Even so, I still feel like I benefit from them. How? Well, it means that my editor isn't crammed with functionality I don't use. + +I reckon this attitude of adding extra functionality into a big library of plugins is a great ethos—no matter your specific needs, you get to have all the stuff you want and none of what you don't. + +### Remote file editing + +One thing that's really nice about terminal text editors is that it's no problem to use them in a remote shell. + +Geany handles this beautifully, as well. You can open remote files anywhere you have SSH access as easily as you can open files on your own machine. + +One frustration I had at first was I only seemed to be able to authenticate with a username and password, which was annoying, because certificates are so much nicer. It turned out that this was just me being a noob by keeping certificates in my home directory rather than in ~/.ssh. + +When editing Python scripts remotely, autocompletion doesn't work when you use packages installed on the server and not on your local machine. This isn't really that big a deal for me, but it's there. + +### In summary + +Text editors are such a personal preference that the right one will be different for different people. + +Geany is excellent if you already know what you want to write and want to just get on with it while enjoying plenty of useful shortcuts to speed up the menial parts. + +Geany is a great way to have your cake and eat it too. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/getting-started-geany-text-editor + +作者:[James Mawson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dxmjames +[b]: https://github.com/lujun9972 +[1]: https://www.geany.org/ +[2]: https://www.geany.org/Download/ThirdPartyPackages +[3]: https://opensource.com/resources/python +[4]: https://en.wikipedia.org/wiki/IDLE +[5]: https://blog.dxmtechsupport.com.au/linux-on-the-desktop-are-we-nearly-there-yet/ +[6]: https://www.gtk.org/ +[7]: https://www.geany.org/Main/AllFiletypes +[8]: https://cscircles.cemc.uwaterloo.ca/11b-how-functions-work/ +[9]: https://plugins.geany.org/ diff --git a/sources/tech/20190311 Building the virtualization stack of the future with rust-vmm.md b/sources/tech/20190311 Building the virtualization stack of the future with rust-vmm.md new file mode 100644 index 0000000000..b1e7fbf046 --- /dev/null +++ b/sources/tech/20190311 Building the virtualization stack of the future with rust-vmm.md @@ -0,0 +1,76 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Building the virtualization stack of the future with rust-vmm) +[#]: via: (https://opensource.com/article/19/3/rust-virtual-machine) +[#]: author: (Andreea Florescu ) + +Building the virtualization stack of the future with rust-vmm +====== +rust-vmm facilitates sharing core virtualization components between Rust Virtual Machine Monitors. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_stack_blue_disks.png?itok=3hKDtK20) + +More than a year ago we started developing [Firecracker][1], a virtual machine monitor (VMM) that runs on top of KVM (the kernel-based virtual machine). We wanted to create a lightweight VMM that starts virtual machines (VMs) in a fraction of a second, with a low memory footprint, to enable high-density cloud environments. + +We started out developing Firecracker by forking the Chrome OS VMM ([CrosVM][2]), but we diverged shortly after because we targeted different customer use cases. CrosVM provides Linux application isolation in ChromeOS, while Firecracker is used for running multi-tenant workloads at scale. Even though we now walk different paths, we still have common virtualization components, such as wrappers over KVM input/output controls (ioctls), a minimal kernel loader, and use of the [Virtio][3] device models. + +With this in mind, we started thinking about the best approach for sharing the common code. Having a shared codebase raises the security and quality bar for both projects. Currently, fixing security bugs requires duplicated work in terms of porting the changes from one project to the other and going through different review processes for merging the changes. After open sourcing Firecracker, we've received requests for adding features including GPU support and booting [bzImage][4] files. Some of the requests didn't align with Firecracker's goals, but were otherwise valid use cases that just haven't found the right place for an implementation. + +### The rust-vmm project + +The [rust-vmm][5] project came to life in December 2018 when Amazon, Google, Intel, and Red Hat employees started talking about the best way of sharing virtualization packages. 
More contributors have joined this initiative along the way. We are still at the beginning of this journey, with only one component published to [Crates.io][6] (Rust's package registry) and several others (such as Virtio devices, Linux kernel loaders, and KVM ioctls wrappers) being developed. With two VMMs written in Rust under active development and growing interest in building other specialized VMMs, rust-vmm was born as the host for sharing core virtualization components. + +The goal of rust-vmm is to enable the community to create custom VMMs that import just the required building blocks for their use case. We decided to organize rust-vmm as a multi-repository project, where each repository corresponds to an independent virtualization component. Each individual building block is published on Crates.io. + +### Creating custom VMMs with rust-vmm + +The components discussed below are currently under development. + +![](https://opensource.com/sites/default/files/uploads/custom_vmm.png) + +Each box on the right side of the diagram is a GitHub repository corresponding to one package, which in Rust is called a crate. The functionality of one crate can be further split into modules, for example virtio-devices. Let's have a look at these components and some of their potential use cases. + + * **KVM interface:** Creating our VMM on top of KVM requires an interface that can invoke KVM functionality from Rust. The kvm-bindings crate represents the Rust Foreign Function Interface (FFI) to KVM kernel headers. Because headers only include structures and defines, we also have wrappers over the KVM ioctls (kvm-ioctls) that we use for opening dev/kvm, creating a VM, creating vCPUs, and so on. + + * **Virtio devices and rate limiting:** Virtio has a frontend-backend architecture. Currently in rust-vmm, the frontend is implemented in the virtio-devices crate, and the backend lies in the vhost package. Vhost has support for both user-land and kernel-land drivers, but users can also plug virtio-devices to their custom backend. The virtio-bindings are the bindings for Virtio devices generated using the Virtio Linux headers. All devices in the virtio-devices crate are exported independently as modules using conditional compilation. Some devices, such as block, net, and vsock support rate limiting in terms of I/O per second and bandwidth. This can be achieved by using the functionality provided in the rate-limiter crate. + + * The kernel-loader is responsible for loading the contents of an [ELF][7] kernel image in guest memory. + + + + +For example, let's say we want to build a custom VMM that allows users to create and configure a single VM running on top of KVM. As part of the configuration, users will be able to specify the kernel image file, the root file system, the number of vCPUs, and the memory size. Creating and configuring the resources of the VM can be implemented using the kvm-ioctls crate. The kernel image can be loaded in guest memory with kernel-loader, and specifying a root filesystem can be achieved with the virtio-devices block module. The last thing needed for our VMM is writing VMM Glue, the code that takes care of integrating rust-vmm components with the VMM user interface, which allows users to create and manage VMs. + +### How you can help + +This is the beginning of an exciting journey, and we are looking forward to getting more people interested in VMMs, Rust, and the place where you can find both: [rust-vmm][5]. 
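+
+If you would like to experiment before contributing, a first taste of the kvm-ioctls building block from the walk-through above might look roughly like the sketch below (with kvm-ioctls added as a dependency). Treat it as an illustration only: the method names are assumptions based on the component descriptions here, so check the crate documentation for the current API.
+
+```
+use kvm_ioctls::Kvm;
+
+fn main() {
+    // Open /dev/kvm and confirm the KVM API is reachable.
+    // Note: these calls are assumed from the description above and may
+    // differ slightly between crate versions.
+    let kvm = Kvm::new().expect("failed to open /dev/kvm");
+    println!("KVM API version: {}", kvm.get_api_version());
+
+    // Create an empty virtual machine, then give it a single vCPU.
+    // Guest memory, a kernel loaded via kernel-loader, and virtio-devices
+    // would be the next building blocks, as outlined earlier.
+    let vm = kvm.create_vm().expect("failed to create a VM");
+    let _vcpu = vm.create_vcpu(0).expect("failed to create vCPU 0");
+}
+```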
+ +We currently have [sync meetings][8] every two weeks to discuss the future of the rust-vmm organization. The meetings are open to anyone willing to participate. If you have any questions, please open an issue in the [community repository][9] or send an email to the rust-vmm [mailing list][10] (you can also [subscribe][11]). We also have a [Slack channel][12] and encourage you to join, if you are interested. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/rust-virtual-machine + +作者:[Andreea Florescu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://github.com/firecracker-microvm/firecracker +[2]: https://chromium.googlesource.com/chromiumos/platform/crosvm/ +[3]: https://www.linux-kvm.org/page/Virtio +[4]: https://en.wikipedia.org/wiki/Vmlinux#bzImage +[5]: https://github.com/rust-vmm +[6]: https://crates.io/ +[7]: https://en.wikipedia.org/wiki/Executable_and_Linkable_Format +[8]: http://lists.opendev.org/pipermail/rust-vmm/2019-January/000103.html +[9]: https://github.com/rust-vmm/community +[10]: mailto:rust-vmm@lists.opendev.org +[11]: http://lists.opendev.org/cgi-bin/mailman/listinfo/rust-vmm +[12]: https://join.slack.com/t/rust-vmm/shared_invite/enQtNTI3NDM2NjA5MzMzLTJiZjUxOGEwMTJkZDVkYTcxYjhjMWU3YzVhOGQ0M2Y5NmU5MzExMjg5NGE3NjlmNzNhZDlhMmY4ZjVhYTQ4ZmQ diff --git a/sources/tech/20190312 BackBox Linux for Penetration Testing.md b/sources/tech/20190312 BackBox Linux for Penetration Testing.md new file mode 100644 index 0000000000..b79a4a5cee --- /dev/null +++ b/sources/tech/20190312 BackBox Linux for Penetration Testing.md @@ -0,0 +1,200 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (BackBox Linux for Penetration Testing) +[#]: via: (https://www.linux.com/blog/learn/2019/3/backbox-linux-penetration-testing) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +BackBox Linux for Penetration Testing +====== +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/security-2688911_1920.jpg?itok=yZ9TjAXu) + +Any given task can succeed or fail depending upon the tools at hand. For security engineers in particular, building just the right toolkit can make life exponentially easier. Luckily, with open source, you have a wide range of applications and environments at your disposal, ranging from simple commands to complicated and integrated tools. + +The problem with the piecemeal approach, however, is that you might wind up missing out on something that can make or break a job… or you waste a lot of time hunting down the right tools for the job. To that end, it’s always good to consider an operating system geared specifically for penetration testing (aka pentesting). + +Within the world of open source, the most popular pentesting distribution is [Kali Linux][1]. It is, however, not the only tool in the shop. In fact, there’s another flavor of Linux, aimed specifically at pentesting, called [BackBox][2]. BackBox is based on Ubuntu Linux, which also means you have easy access to a host of other outstanding applications besides those that are included, out of the box. + +### What Makes BackBox Special? + +BackBox includes a suite of ethical hacking tools, geared specifically toward pentesting. 
These testing tools include the likes of: + + * Web application analysis + + * Exploitation testing + + * Network analysis + + * Stress testing + + * Privilege escalation + + * Vulnerability assessment + + * Computer forensic analysis and exploitation + + * And much more + + + + +Out of the box, one of the most significant differences between Kali Linux and BackBox is the number of installed tools. Whereas Kali Linux ships with hundreds of tools pre-installed, BackBox significantly limits that number to around 70. Nonetheless, BackBox includes many of the tools necessary to get the job done, such as: + + * Ettercap + + * Msfconsole + + * Wireshark + + * ZAP + + * Zenmap + + * BeEF Browser Exploitation + + * Sqlmap + + * Driftnet + + * Tcpdump + + * Cryptcat + + * Weevely + + * Siege + + * Autopsy + + + + +BackBox is in active development, the latest version (5.3) was released February 18, 2019. But how is BackBox as a usable tool? Let’s install and find out. + +### Installation + +If you’ve installed one Linux distribution, you’ve installed them all … with only slight variation. BackBox is pretty much the same as any other installation. [Download the ISO][3], burn the ISO onto a USB drive, boot from the USB drive, and click the Install icon. + +The installer (Figure 1) will be instantly familiar to anyone who has installed a Ubuntu or Debian derivative. Just because BackBox is a distribution geared specifically toward security administrators, doesn’t mean the operating system is a challenge to get up and running. In fact, BackBox is a point-and-click affair that anyone, regardless of skills, can install. + +![installation][5] + +Figure 1: The installation of BackBox will be immediately familiar to anyone. + +[Used with permission][6] + +The trickiest section of the installation is the Installation Type. As you can see (Figure 2), even this step is quite simple. + +![BackBox][8] + +Figure 2: Selecting the type of installation for BackBox. + +[Used with permission][6] + +Once you’ve installed BackBox, reboot the system, remove the USB drive, and wait for it to land on the login screen. Log into the desktop and you’re ready to go (Figure 3). + +![desktop][10] + +Figure 3: The BackBox Linux desktop, running as a VirtualBox virtual machine. + +[Used with permission][6] + +### Using BackBox + +Thanks to the [Xfce desktop environment][11], BackBox is easy enough for a Linux newbie to navigate. Click on the menu button in the top left corner to reveal the menu (Figure 4). + +![desktop menu][13] + +Figure 4: The BackBox desktop menu in action. + +[Used with permission][6] + +From the desktop menu, click on any one of the favorites (in the left pane) or click on a category to reveal the related tools (Figure 5). + +![Auditing][15] + +Figure 5: The Auditing category in the BackBox menu. + +[Used with permission][6] + +The menu entries you’ll most likely be interested in are: + + * Anonymous - allows you to start an anonymous networking session. + + * Auditing - the majority of the pentesting tools are found in here. + + * Services - allows you to start/stop services such as Apache, Bluetooth, Logkeys, Networking, Polipo, SSH, and Tor. + + + + +Before you run any of the testing tools, I would recommend you first making sure to update and upgrade BackBox. This can be done via a GUI or the command line. If you opt to go the GUI route, click on the desktop menu, click System, and click Software Updater. 
When the updater completes its check for updates, it will prompt you if any are available, or if (after an upgrade) a reboot is necessary (Figure 6). + +![reboot][17] + +Figure 6: Time to reboot after an upgrade. + +[Used with permission][6] + +Should you opt to go the manual route, open a terminal window and issue the following two commands: + +``` +sudo apt-get update + +sudo apt-get upgrade -y +``` + +Many of the BackBox pentesting tools do require a solid understanding of how each tool works, so before you attempt to use any given tool, make sure you know how to use said tool. Some tools (such as Metasploit) are made a bit easier to work with, thanks to BackBox. To run Metasploit, click on the desktop menu button and click msfconsole from the favorites (left pane). When the tool opens for the first time, you’ll be asked to configure a few options. Simply select each default given by clicking your keyboard Enter key when prompted. Once you see the Metasploit prompt, you can run commands like: + +``` +db_nmap 192.168.0/24 +``` + +The above command will list out all discovered ports on a 192.168.1.x network scheme (Figure 7). + +![Metasploit][19] + +Figure 7: Open port discovery made simple with Metasploit on BackBox. + +[Used with permission][6] + +Even often-challenging tools like Metasploit are made far easier than they are with other distributions (partially because you don’t have to bother with installing the tools). That alone is worth the price of entry for BackBox (which is, of course, free). + +### The Conclusion + +Although BackBox usage may not be as widespread as Kali Linux, it still deserves your attention. For anyone looking to do pentesting on their various environments, BackBox makes the task far easier than so many other operating systems. Give this Linux distribution a go and see if it doesn’t aid you in your journey to security nirvana. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2019/3/backbox-linux-penetration-testing + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: https://www.kali.org/ +[2]: https://linux.backbox.org/ +[3]: https://www.backbox.org/download/ +[4]: /files/images/backbox1jpg +[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_1.jpg?itok=pn4fQVp7 (installation) +[6]: /licenses/category/used-permission +[7]: /files/images/backbox2jpg +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_2.jpg?itok=tf-1zo8Z (BackBox) +[9]: /files/images/backbox3jpg +[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_3.jpg?itok=GLowoAUb (desktop) +[11]: https://www.xfce.org/ +[12]: /files/images/backbox4jpg +[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_4.jpg?itok=VmQXtuZL (desktop menu) +[14]: /files/images/backbox5jpg +[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_5.jpg?itok=UnfM_OxG (Auditing) +[16]: /files/images/backbox6jpg +[17]: https://www.linux.com/sites/lcom/files/styles/floated_images/public/backbox_6.jpg?itok=2t1BiKPn (reboot) +[18]: /files/images/backbox7jpg +[19]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/backbox_7.jpg?itok=Vw_GEub3 (Metasploit) diff --git a/sources/tech/20190312 Star LabTop Mk III Open Source Edition- An Interesting Laptop.md b/sources/tech/20190312 Star LabTop Mk III Open Source Edition- An Interesting Laptop.md new file mode 100644 index 0000000000..2e4b8f098a --- /dev/null +++ b/sources/tech/20190312 Star LabTop Mk III Open Source Edition- An Interesting Laptop.md @@ -0,0 +1,93 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Star LabTop Mk III Open Source Edition: An Interesting Laptop) +[#]: via: (https://itsfoss.com/star-labtop-open-source-edition) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Star LabTop Mk III Open Source Edition: An Interesting Laptop +====== + +[Star Labs Systems][1] have been producing Laptops tailored for Linux for some time. While you can purchase other variants available on their website, they have recently launched a [Kickstarter campaign][2] for their upcoming ‘Open Source Edition’ laptop that incorporates more features as per the requests by the users or reviewers. + +It may not be the best laptop you’ve ever come across for around a **1000 Euros** – but it certainly is interesting for some specific features. + +In this article, we will talk about what makes it an interesting deal and whether or not it’s worth investing for. + +![star labtop mk III][3] + +### Key Highlight: Open-source Coreboot Firmware + +Normally, you will observe proprietary firmware (BIOS) on computers, American Megatrends Inc, for example. + +But, here, Star Labs have tailored the [coreboot firmware][4] (a.k.a known as the LinuxBIOS) which is an open source alternative to proprietary solutions for this laptop. + +Not just open source but it is also a lighter firmware for better control over your laptop. 
With [TianoCore EDK II][5], it ensures that you get the maximum compatibility for most of the major Operating Systems. + +### Other Features of Star LabTop Mk III + +![sat labtop mk III][6] + +In addition to the open source firmware, the laptop features an **8th-gen i7 chipse** t ( **i7-8550u** ) coupled with **16 Gigs of LPDDR4 RAM** clocked at **2400 MHz**. + +The GPU being the integrated **Intel UHD Graphics 620** should be enough for professional tasks – except video editing and gaming. It will be rocking a **Full HD 13.3-inch IPS** panel as the display. + +The storage option includes **480 GB or 960 GB of PCIe SSD** – which is impressive as well. In addition to all this, it comes with the **USB Type-C** support. + +Interestingly, the **BIOS, Embedded Controller and SSD** will be receiving automatic [firmware updates][7] via the [LVFS][8] (the Mk III standard edition has this feature already). + +You should also check out a review video of [Star LabTob Mk III][9] to get an idea of how the open source edition could look like: + +If you are curious about the detailed tech specs, you should check out the [Kickstarter page][2]. + + + +### Our Opinion + +![star labtop mk III][10] + +The inclusion of coreboot firmware and being something tailored for various Linux distributions originally is the reason why it is being termed as the “ **Open Source Edition”**. + +The price for the ultimate bundle on Kickstarter is **1087 Euros**. + +Can you get better laptop deals at this price? **Yes** , definitely. But, it really comes down to your preference and your passion for open source – of what you require. + +However, if you want a performance-driven laptop specifically tailored for Linux, yes, this is an option you might want to consider with something new to offer (and potentially considering your requests for their future builds). + +Of course, you cannot consider this for video editing and gaming – for obvious reasons. So, they should considering adding a dedicated GPU to make it a complete package for computing, gaming, video editing and much more. Maybe even a bigger screen, say 15.6-inch? + +### Wrapping Up + +For what it is worth, if you are a Linux and open source enthusiast and want a performance-driven laptop, this could be an option to go with and back this up on Kickstarter right now. + +What do you think about it? Will you be interested in a laptop like this? If not, why? + +Let us know your thoughts in the comments below. 
+ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/star-labtop-open-source-edition + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://starlabs.systems +[2]: https://www.kickstarter.com/projects/starlabs/star-labtop-mk-iii-open-source-edition +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/star-labtop-mkiii-2.jpg?resize=800%2C450&ssl=1 +[4]: https://en.wikipedia.org/wiki/Coreboot +[5]: https://github.com/tianocore/tianocore.github.io/wiki/EDK-II +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/star-labtop-mkiii-1.jpg?ssl=1 +[7]: https://itsfoss.com/update-firmware-ubuntu/ +[8]: https://fwupd.org/ +[9]: https://starlabs.systems/pages/star-labtop +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/star-labtop-mkiii.jpg?resize=800%2C435&ssl=1 +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/star-labtop-mkiii-2.jpg?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190313 Game Review- Steel Rats is an Enjoyable Bike-Combat Game.md b/sources/tech/20190313 Game Review- Steel Rats is an Enjoyable Bike-Combat Game.md new file mode 100644 index 0000000000..5af0ae30d3 --- /dev/null +++ b/sources/tech/20190313 Game Review- Steel Rats is an Enjoyable Bike-Combat Game.md @@ -0,0 +1,95 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Game Review: Steel Rats is an Enjoyable Bike-Combat Game) +[#]: via: (https://itsfoss.com/steel-rats) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Game Review: Steel Rats is an Enjoyable Bike-Combat Game +====== + +Steel Rats is a quite impressive 2.5D motorbike combat game with exciting stunts involved. It was already available for Windows on [Steam][1] – however, recently it has been made available for Linux and Mac as well. + +In case you didn’t know, you can easily [install Steam on Ubuntu][2] or other distributions and [enable Steam Play feature to run some Windows games on Linux][3]. + +So, in this article, we shall take a look at what the game is all about and if it is a good purchase for you. + +This game is neither free nor open source. We have covered it here because the game developers made an effort to port it to Linux. + +### Story Overview + +![steel rats][4] + +You belong to a biker gang – “ **Steel Rats** ” – who stepped up to protect their city from alien robots invasion. The alien robots aren’t just any tiny toys that you can easily defeat but with deadly weapons and abilities. + +The games features the setting as an alternative version of 1940’s USA – with the retro theme in place. You have to use your bike as the ultimate weapon to go against waves of alien robot and boss fights as well. + +You will encounter 4 different characters with unique abilities to switch from after progressing through a couple of rounds. + +You will start playing as “ **Toshi** ” and unlock other characters as you progress. **Toshi** is a genius and will be using a drone as his gadget to fight the alien robots. **James** – is the leader with the hammer attack as his special ability. **Lisa** would be the one utilizing fire to burn the junk robots. 
And, **Randall** will have his harpoon ready to destroy aerial robots with ease. + +### Gameplay + +![][5] + +Honestly, I am not a fan of 2.5 D (or 2D games). But, games like [Unravel][6] will be the exception – which is still not available for Linux, such a shame – EA. + +In this case, I did end up enjoying “ **Steel Rats** ” as one of the few 2D games I play. + +There is really no rocket science for this game – you just have to get good with the controls. No matter whether you use a controller or a keyboard, it is definitely challenging to get comfortable with the controls. + +You do not need to plan ahead in order to save your health or nitro boost because you will always have it when needed while also having checkpoints to resume your progress. + +You just need to keep the right pace and the perfect jump while hitting every enemy to get the best score in the leader boards. Once you do that, the game ends up being an easy and fun experience. + +If you’re curious about the gameplay, we recommend watching this video: + + +#[wasm_bindgen] +// This is pretty plain Rust code. If you've written Rust before this +// should look extremely familiar. If not, why wait?! Check this out: +// +pub fn excited_greeting(original: &str) -> String { +format!("HELLO, {}", original.to_uppercase()) +} +``` + +Second, we'll have to make two changes to our **Cargo.toml** configuration file: + + * Add **wasm_bindgen** as a dependency. + * Configure the type of library binary to be a **cdylib** or dynamic system library. In this case, our system is **wasm** , and setting this option is how we produce **.wasm** binary files. + + +``` +[package] +name = "my-wasm-library" +version = "0.1.0" +authors = ["$YOUR_INFO"] +edition = "2018" + +[lib] +crate-type = ["cdylib", "rlib"] + +[dependencies] +wasm-bindgen = "0.2.33" +``` + +Now let's build! If we just use **cargo build** , we'll get a **.wasm** binary, but in order to make it easy to call our Rust code from JavaScript, we'd like to have some JavaScript code that converts rich JavaScript types like strings and objects to pointers and passes these pointers to the Wasm module on our behalf. Doing this manually is tedious and prone to bugs. + +Luckily, in addition to being a library, **wasm-bindgen** also has the ability to create this "glue" JavaScript for us. This means in our code we can interact with our Wasm module using normal JavaScript types, and the generated code from **wasm-bindgen** will do the dirty work of converting these rich types into the pointer types that Wasm actually understands. + +We can use the awesome **wasm-pack** to build our Wasm binary, invoke the **wasm-bindgen** CLI tool, and package all of our JavaScript (and any optional generated TypeScript types) into one nice and neat package. Let's do that now! + +First we'll need to install **wasm-pack** : + +``` +$ cargo install wasm-pack +``` + +By default, **wasm-bindgen** produces ES6 modules. We'll use our code from a simple script tag, so we just want it to produce a plain old JavaScript object that gives us access to our Wasm functions. To do this, we'll pass it the **\--target no-modules** option. + +``` +$ wasm-pack build --target no-modules +``` + +We now have a **pkg** directory in our project. 
If we look at the contents, we'll see the following: + + * **package.json** : useful if we want to package this up as an NPM module + * **my_wasm_library_bg.wasm** : our actual Wasm code + * **my_wasm_library.js** : the JavaScript "glue" code + * Some TypeScript definition files + + + +Now we can create an **index.html** file that will make use of our JavaScript and Wasm: + +``` +<[html][9]> +<[head][10]> +<[meta][11] content="text/html;charset=utf-8" http-equiv="Content-Type" /> + +<[body][12]> + +<[script][13] src='./pkg/my_wasm_library.js'> + +<[script][13]> +window.addEventListener('load', async () => { +// Load the wasm file +await wasm_bindgen('./pkg/my_wasm_library_bg.wasm'); +// Once it's loaded the `wasm_bindgen` object is populated +// with the functions defined in our Rust code +const greeting = wasm_bindgen.excited_greeting("Ryan") +console.log(greeting) +}); + + + +``` + +You may be tempted to open the HTML file in your browser, but unfortunately, this is not possible. For security reasons, Wasm files have to be served from the same domain as the HTML file. You'll need an HTTP server. If you have a favorite static HTTP server that can serve files from your filesystem, feel free to use that. I like to use [**basic-http-server**][14], which you can install and run like so: + +``` +$ cargo install basic-http-server +$ basic-http-server +``` + +Now open the **index.html** file through the web server by going to **** and check your JavaScript console. You should see a very exciting greeting there! + +If you have any questions, please [let me know][15]. Next time, we'll take a look at how we can use various browser and JavaScript APIs from within our Rust code. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/calling-rust-javascript + +作者:[Ryan Levick][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ryanlevick +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/javascript_vim.jpg?itok=mqkAeakO (JavaScript in Vim) +[2]: https://opensource.com/article/19/2/why-use-rust-webassembly +[3]: https://rustup.rs/ +[4]: https://doc.rust-lang.org/cargo/ +[5]: https://github.com/rustwasm/wasm-bindgen +[6]: https://github.com/koute/stdweb +[7]: https://github.com/koute/stdweb/issues/318 +[8]: https://www.rust-lang.org/governance/wgs/wasm +[9]: http://december.com/html/4/element/html.html +[10]: http://december.com/html/4/element/head.html +[11]: http://december.com/html/4/element/meta.html +[12]: http://december.com/html/4/element/body.html +[13]: http://december.com/html/4/element/script.html +[14]: https://github.com/brson/basic-http-server +[15]: https://twitter.com/ryan_levick diff --git a/sources/tech/20190318 Install MEAN.JS Stack In Ubuntu 18.04 LTS.md b/sources/tech/20190318 Install MEAN.JS Stack In Ubuntu 18.04 LTS.md new file mode 100644 index 0000000000..925326e0d7 --- /dev/null +++ b/sources/tech/20190318 Install MEAN.JS Stack In Ubuntu 18.04 LTS.md @@ -0,0 +1,266 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Install MEAN.JS Stack In Ubuntu 18.04 LTS) +[#]: via: (https://www.ostechnix.com/install-mean-js-stack-ubuntu/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + 
+Install MEAN.JS Stack In Ubuntu 18.04 LTS +====== + +![Install MEAN.JS Stack][1] + +**MEAN.JS** is an Open-Source, full-Stack JavaScript solution for building fast, and robust web applications. **MEAN.JS** stack consists of **MongoDB** (NoSQL database), **ExpressJs** (NodeJS server-side application web framework), **AngularJS** (Client-side web application framework), and **Node.js** (JavaScript run-time, popular for being a web server platform). In this tutorial, we will be discussing how to install MEAN.JS stack in Ubuntu. This guide was tested in Ubuntu 18.04 LTS server. However, it should work on other Ubuntu versions and Ubuntu variants. + +### Install MongoDB + +**MongoDB** is a free, cross-platform, open source, NoSQL document-oriented database. To install MongoDB on your Ubuntu system, refer the following guide: + + * [**Install MongoDB Community Edition In Linux**][2] + + + +### Install Node.js + +**NodeJS** is an open source, cross-platform, and lightweight JavaScript run-time environment that can be used to build scalable network applications. + +To install NodeJS on your system, refer the following guide: + + * [**How To Install NodeJS On Linux**][3] + + + +After installing, MongoDB, and Node.js, we need to install the other required components such as **Yarn** , **Grunt** , and **Gulp** for MEAN.js stack. + +### Install Yarn package manager + +Yarn is a package manager used by MEAN.JS stack to manage front-end packages. + +To install Bower, run the following command: + +``` +$ npm install -g yarn +``` + +### Install Grunt Task Runner + +Grunt Task Runner is used to to automate the development process. + +To install Grunt, run: + +``` +$ npm install -g grunt-cli +``` + +To verify if Yarn and Grunt have been installed, run: + +``` +$ npm list -g --depth=0 /home/sk/.nvm/versions/node/v11.11.0/lib ├── [email protected] ├── [email protected] └── [email protected] +``` + +### Install Gulp Task Runner (Optional) + +This is optional. You can use Gulp instead of Grunt. To install Gulp Task Runner, run the following command: + +``` +$ npm install -g gulp +``` + +We have installed all required prerequisites. Now, let us deploy MEAN.JS stack. + +### Download and Install MEAN.JS Stack + +Install Git if it is not installed already: + +``` +$ sudo apt-get install git +``` + +Next, git clone the MEAN.JS repository with command: + +``` +$ git clone https://github.com/meanjs/mean.git meanjs +``` + +**Sample output:** + +``` +Cloning into 'meanjs'... +remote: Counting objects: 8596, done. +remote: Compressing objects: 100% (12/12), done. +remote: Total 8596 (delta 3), reused 0 (delta 0), pack-reused 8584 Receiving objects: 100% (8596/8596), 2.62 MiB | 140.00 KiB/s, done. +Resolving deltas: 100% (4322/4322), done. +Checking connectivity... done. +``` + +The above command will clone the latest version of the MEAN.JS repository to **meanjs** folder in your current working directory. + +Go to the meanjs folder: + +``` +$ cd meanjs/ +``` + +Run the following command to install the Node.js dependencies required for testing and running our application: + +``` +$ npm install +``` + +This will take some time. Please be patient. + +* * * + +**Troubleshooting:** + +When I run the above command in Ubuntu 18.04 LTS, I get the following error: + +``` +Downloading binary from https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-67_binding.node +Cannot download "https://github.com/sass/node-sass/releases/download/v4.5.3/linux-x64-67_binding.node": + +HTTP error 404 Not Found + +[....] 
+``` + +If you ever get these type of common errors like “node-sass and gulp-sass”, do the following: + +First uninstall the project and global gulp-sass modules using the following commands: + +``` +$ npm uninstall gulp-sass +$ npm uninstall -g gulp-sass +``` + +Next uninstall the global node-sass module: + +``` +$ npm uninstall -g node-sass +``` + +Install the global node-sass first. Then install the gulp-sass module at the local project level. + +``` +$ npm install -g node-sass +$ npm install gulp-sass +``` + +Now try the npm install again from the project folder using command: + +``` +$ npm install +``` + +Now all dependencies will start to install without any issues. + +* * * + +Once all dependencies are installed, run the following command to install all the front-end modules needed for the application: + +``` +$ yarn --allow-root --config.interactive=false install +``` + +Or, + +``` +$ yarn --allow-root install +``` + +You will see the following message at the end if the installation is successful. + +``` +[...] +> meanjs@0.6.0 snyk-protect /home/sk/meanjs +> snyk protect + +Successfully applied Snyk patches + +Done in 99.47s. +``` + +### Test MEAN.JS + +MEAN.JS stack has been installed. We can now able to start a sample application using command: + +``` +$ npm start +``` + +After a few seconds, you will see a message like below. This means MEAN.JS stack is working! + +``` +[...] +MEAN.JS - Development Environment + +Environment: development +Server: http://0.0.0.0:3000 +Database: mongodb://localhost/mean-dev +App version: 0.6.0 +MEAN.JS version: 0.6.0 +``` + +![][4] + +To verify, open up the browser and navigate to **** or ****. You should see a screen something like below. + +![][5] + +Mean stack test page + +Congratulations! MEAN.JS stack is ready to start building web applications. + +For further details, I recommend you to refer **[MEAN.JS stack official documentation][6]**. + +* * * + +Want to setup MEAN.JS stack in CentOS, RHEL, Scientific Linux? Check the following link for more details. + + * **[Install MEAN.JS Stack in CentOS 7][7]** + + + +* * * + +And, that’s all for now, folks. Hope this tutorial will help you to setup MEAN.JS stack. + +If you find this tutorial useful, please share it on your social, professional networks and support OSTechNix. + +More good stuffs to come. Stay tuned! + +Cheers! 
+ +**Resources:** + + * **[MEAN.JS website][8]** + * [**MEAN.JS GitHub Repository**][9] + + + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/install-mean-js-stack-ubuntu/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: https://www.ostechnix.com/install-mongodb-linux/ +[3]: https://www.ostechnix.com/install-node-js-linux/ +[4]: http://www.ostechnix.com/wp-content/uploads/2016/03/meanjs.png +[5]: http://www.ostechnix.com/wp-content/uploads/2016/03/mean-stack-test-page.png +[6]: http://meanjs.org/docs.html +[7]: http://www.ostechnix.com/install-mean-js-stack-centos-7/ +[8]: http://meanjs.org/ +[9]: https://github.com/meanjs/mean diff --git a/sources/tech/20190318 Let-s try dwm - dynamic window manager.md b/sources/tech/20190318 Let-s try dwm - dynamic window manager.md new file mode 100644 index 0000000000..48f44a33cb --- /dev/null +++ b/sources/tech/20190318 Let-s try dwm - dynamic window manager.md @@ -0,0 +1,150 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Let’s try dwm — dynamic window manager) +[#]: via: (https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/) +[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/) + +Let’s try dwm — dynamic window manager +====== + +![][1] + +If you like efficiency and minimalism, and are looking for a new window manager for your Linux desktop, you should try _dwm_ — dynamic window manager. Written in under 2000 standard lines of code, dwm is extremely fast yet powerful and highly customizable window manager. + +You can dynamically choose between tiling, monocle and floating layouts, organize your windows into multiple workspaces using tags, and quickly navigate through using keyboard shortcuts. This article helps you get started using dwm. + +## **Installation** + +To install dwm on Fedora, run: + +``` +$ sudo dnf install dwm dwm-user +``` + +The _dwm_ package installs the window manager itself, and the _dwm-user_ package significantly simplifies configuration which will be explained later in this article. + +Additionally, to be able to lock the screen when needed, we’ll also install _slock_ — a simple X display locker. + +``` +$ sudo dnf install slock +``` + +However, you can use a different one based on your personal preference. + +## **Quick start** + +To start dwm, choose the _dwm-user_ option on the login screen. + +![][2] + +After you log in, you’ll see a very simple desktop. In fact, the only thing there will be a bar at the top listing our nine tags that represent workspaces and a _[]=_ symbol that represents the layout of your windows. + +### Launching applications + +Before looking into the layouts, first launch some applications so you can play with the layouts as you go. Apps can be started by pressing _Alt+p_ and typing the name of the app followed by _Enter_. There’s also a shortcut _Alt+Shift+Enter_ for opening a terminal. + +Now that some apps are running, have a look at the layouts. + +### Layouts + +There are three layouts available by default: the tiling layout, the monocle layout, and the floating layout. 
+ +The tiling layout, represented by _[]=_ on the bar, organizes windows into two main areas: master on the left, and stack on the right. You can activate the tiling layout by pressing _Alt+t._ + +![][3] + +The idea behind the tiling layout is that you have your primary window in the master area while still seeing the other ones in the stack. You can quickly switch between them as needed. + +To swap windows between the two areas, hover your mouse over one in the stack area and press _Alt+Enter_ to swap it with the one in the master area. + +![][4] + +The monocle layout, represented by _[N]_ on the top bar, makes your primary window take the whole screen. You can switch to it by pressing _Alt+m_. + +Finally, the floating layout lets you move and resize your windows freely. The shortcut for it is _Alt+f_ and the symbol on the top bar is _> <>_. + +### Workspaces and tags + +Each window is assigned to a tag (1-9) listed at the top bar. To view a specific tag, either click on its number using your mouse or press _Alt+1..9._ You can even view multiple tags at once by clicking on their number using the secondary mouse button. + +Windows can be moved between different tags by highlighting them using your mouse, and pressing _Alt+Shift+1..9._ + +## **Configuration** + +To make dwm as minimalistic as possible, it doesn’t use typical configuration files. Instead, you modify a C header file representing the configuration, and recompile it. But don’t worry, in Fedora it’s as simple as just editing one file in your home directory and everything else happens in the background thanks to the _dwm-user_ package provided by the maintainer in Fedora. + +First, you need to copy the file into your home directory using a command similar to the following: + +``` +$ mkdir ~/.dwm +$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h +``` + +You can get the exact path by running _man dwm-start._ + +Second, just edit the _~/.dwm/config.h_ file. As an example, let’s configure a new shortcut to lock the screen by pressing _Alt+Shift+L_. + +Considering we’ve installed the _slock_ package mentioned earlier in this post, we need to add the following two lines into the file to make it work: + +Under the _/* commands */_ comment, add: + +``` +static const char *slockcmd[] = { "slock", NULL }; +``` + +And the following line into _static Key keys[]_ : + +``` +{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } }, +``` + +In the end, it should look like as follows: (added lines are highlighted) + +``` +... + /* commands */ + static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */ + static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL }; + static const char *termcmd[] = { "st", NULL }; + static const char *slockcmd[] = { "slock", NULL }; + + static Key keys[] = { + /* modifier key function argument */ + { MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } }, + { MODKEY, XK_p, spawn, {.v = dmenucmd } }, + { MODKEY|ShiftMask, XK_Return, spawn, {.v = termcmd } }, + ... +``` + +Save the file. + +Finally, just log out by pressing _Alt+Shift+q_ and log in again. The scripts provided by the _dwm-user_ package will recognize that you have changed the _config.h_ file in your home directory and recompile dwm on login. And becuse dwm is so tiny, it’s fast enough you won’t even notice it. 
+ +You can try locking your screen now by pressing _Alt+Shift+L_ , and then logging back in again by typing your password and pressing enter. + +## **Conclusion** + +If you like minimalism and want a very fast yet powerful window manager, dwm might be just what you’ve been looking for. However, it probably isn’t for beginners. There might be a lot of additional configuration you’ll need to do in order to make it just as you like it. + +To learn more about dwm, see the project’s homepage at . + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/lets-try-dwm-dynamic-window-manger/ + +作者:[Adam Šamalík][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/asamalik/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-magazine-image-816x345.png +[2]: https://fedoramagazine.org/wp-content/uploads/2019/03/choosing-dwm-1024x469.png +[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/dwm-desktop-1024x593.png +[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-2019-03-15-at-11.12.32-1024x592.png diff --git a/sources/tech/20190318 Solus 4 ‘Fortitude- Released with Significant Improvements.md b/sources/tech/20190318 Solus 4 ‘Fortitude- Released with Significant Improvements.md new file mode 100644 index 0000000000..c7a8d4bc55 --- /dev/null +++ b/sources/tech/20190318 Solus 4 ‘Fortitude- Released with Significant Improvements.md @@ -0,0 +1,108 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Solus 4 ‘Fortitude’ Released with Significant Improvements) +[#]: via: (https://itsfoss.com/solus-4-release) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Solus 4 ‘Fortitude’ Released with Significant Improvements +====== + +Finally, after a year of work, the much anticipated Solus 4 is here. It’s a significant release not just because this is a major upgrade, but also because this is the first major release after [Ikey Doherty (the founder of Solus) left the project][1] a few months ago. + +Now that everything’s under control with the new _management_ , **Solus 4 Fortitude** with updated Budgie desktop and other significant improvements has officially released. + +### What’s New in Solus 4 + +![Solus 4 Fortitude][2] + +#### Core Improvements + +Solus 4 comes loaded with **[Linux Kernel 4.20.16][3]** which enables better hardware support (like Touchpad support, improved support for Intel Coffee Lake and Ice Lake CPUs, and for AMD Picasso & Raven2 APUs). + +This release also ships with the latest [FFmpeg 4.1.1][4]. Also, they have enabled the support for [dav1d][5] in [VLC][6] – which is an open source AV1 decoder. So, you can consider these upgrades to significantly improve the Multimedia experience. + +It also includes some minor fixes to the Software Center – if you were encountering any issues while finding an application or viewing the description. + +In addition, WPS Office has been removed from the listing. + +#### UI Improvements + +![Budgie 10.5][7] + +The Budgie desktop update includes some minor changes and also comes baked in with the [Plata (Noir) GTK Theme.][8] + +You will no longer observe same applications multiple times in the menu, they’ve fixed this. 
They have also introduced a “ **Caffeine** ” mode as an applet, which prevents the system from suspending, locking the screen or changing the brightness while you are working. You can schedule the time accordingly.

![Caffeine Mode][9]

The new Budgie desktop experience also adds quick actions to the app icons on the task bar, dubbed “ **Icon Tasklist** ”. It makes it easy to manage the active tabs on a browser, or to minimize a window and move it to a new workspace (as shown in the image below).

![Icon Tasklist][10]

As the [change log][11] mentions, the above popover design lets you do more:

  * _Close all instances of the selected application_
  * _Easily access per-window controls for marking it always on top, maximizing / unmaximizing, minimizing, and moving it to various workspaces._
  * _Quickly favorite / unfavorite apps_
  * _Quickly launch a new instance of the selected application_
  * _Scroll up or down on an IconTasklist button when a single window is open to activate and bring it into focus, or minimize it, based on the scroll direction._
  * _Toggle to minimize and unminimize various application windows_

The notification area now groups the notifications from specific applications instead of piling them all up. So, that’s a good improvement.

In addition to these, the sound widget got some cool improvements, while also letting you personalize the look and feel of your desktop in an efficient manner.

To know about all the nitty-gritty details, refer to the official [release notes][11].

### Download Solus 4

You can get the latest version of Solus from its download page below. It is available in the default Budgie, GNOME and MATE desktop flavors.

[Get Solus 4][12]

### Wrapping Up

Solus 4 is definitely an impressive upgrade – it doesn’t introduce any unnecessary fancy features, just useful, subtle changes.

What do you think about the latest Solus 4 Fortitude? Have you tried it yet?

Let us know your thoughts in the comments below.
+ + + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/solus-4-release + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/ikey-leaves-solus/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/solus-4-featured.jpg?fit=800%2C450&ssl=1 +[3]: https://itsfoss.com/kernel-4-20-release/ +[4]: https://www.ffmpeg.org/ +[5]: https://code.videolan.org/videolan/dav1d +[6]: https://www.videolan.org/index.html +[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/Budgie-desktop.jpg?resize=800%2C450&ssl=1 +[8]: https://gitlab.com/tista500/plata-theme +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/caffeine-mode.jpg?ssl=1 +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/IconTasklistPopover.jpg?ssl=1 +[11]: https://getsol.us/2019/03/17/solus-4-released/ +[12]: https://getsol.us/download/ +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/Budgie-desktop.jpg?fit=800%2C450&ssl=1 +[14]: https://www.facebook.com/sharer.php?t=Solus%204%20%E2%80%98Fortitude%E2%80%99%20Released%20with%20Significant%20Improvements&u=https%3A%2F%2Fitsfoss.com%2Fsolus-4-release%2F +[15]: https://twitter.com/intent/tweet?text=Solus+4+%E2%80%98Fortitude%E2%80%99+Released+with+Significant+Improvements&url=https%3A%2F%2Fitsfoss.com%2Fsolus-4-release%2F&via=itsfoss2 +[16]: https://www.linkedin.com/shareArticle?title=Solus%204%20%E2%80%98Fortitude%E2%80%99%20Released%20with%20Significant%20Improvements&url=https%3A%2F%2Fitsfoss.com%2Fsolus-4-release%2F&mini=true +[17]: https://www.reddit.com/submit?title=Solus%204%20%E2%80%98Fortitude%E2%80%99%20Released%20with%20Significant%20Improvements&url=https%3A%2F%2Fitsfoss.com%2Fsolus-4-release%2F diff --git a/sources/tech/20190319 Five Commands To Use Calculator In Linux Command Line.md b/sources/tech/20190319 Five Commands To Use Calculator In Linux Command Line.md new file mode 100644 index 0000000000..c419d15268 --- /dev/null +++ b/sources/tech/20190319 Five Commands To Use Calculator In Linux Command Line.md @@ -0,0 +1,342 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Five Commands To Use Calculator In Linux Command Line?) +[#]: via: (https://www.2daygeek.com/linux-command-line-calculator-bc-calc-qalc-gcalccmd/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +Five Commands To Use Calculator In Linux Command Line? +====== + +As a Linux administrator you may use the command line calculator many times in a day for some purpose. + +I had used this especially when LVM creation using the PE values. + +There are many commands available for this purpose and i’m going to list most used commands in this article. + +These command line calculators are allow us to perform all kind of actions such as scientific, financial, or even simple calculation. + +Also, we can use these commands in shell scripts for complex math. + +In this article, I’m listing the top five command line calculator commands. + +Those command line calculator commands are below. 
+ + * **`bc:`** An arbitrary precision calculator language + * **`calc:`** arbitrary precision calculator + * **`expr:`** evaluate expressions + * **`gcalccmd:`** gnome-calculator – a desktop calculator + * **`qalc:`** + * **`Linux Shell:`** + + + +### How To Perform Calculation In Linux Using bc Command? + +bs stands for Basic Calculator. bc is a language that supports arbitrary precision numbers with interactive execution of statements. There are some similarities in the syntax to the C programming language. + +A standard math library is available by command line option. If requested, the math library is defined before processing any files. bc starts by processing code from all the files listed on the command line in the order listed. + +After all files have been processed, bc reads from the standard input. All code is executed as it is read. + +By default bc command has installed in all the Linux system. If not, use the following procedure to install it. + +For **`Fedora`** system, use **[DNF Command][1]** to install bc. + +``` +$ sudo dnf install bc +``` + +For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install bc. + +``` +$ sudo apt install bc +``` + +For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install bc. + +``` +$ sudo pacman -S bc +``` + +For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install bc. + +``` +$ sudo yum install bc +``` + +For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install bc. + +``` +$ sudo zypper install bc +``` + +### How To Use The bc Command To Perform Calculation In Linux? + +We can use the bc command to perform all kind of calculation right from the terminal. + +``` +$ bc +bc 1.07.1 +Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc. +This is free software with ABSOLUTELY NO WARRANTY. +For details type `warranty'. + +1+2 +3 + +10-5 +5 + +2*5 +10 + +10/2 +5 + +(2+4)*5-5 +25 + +quit +``` + +Use `-l` flag to define the standard math library. + +``` +$ bc -l +bc 1.07.1 +Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc. +This is free software with ABSOLUTELY NO WARRANTY. +For details type `warranty'. + +3/5 +.60000000000000000000 + +quit +``` + +### How To Perform Calculation In Linux Using calc Command? + +calc is an arbitrary precision calculator. It’s a simple calculator that allow us to perform all kind of calculation in Linux command line. + +For **`Fedora`** system, use **[DNF Command][1]** to install calc. + +``` +$ sudo dnf install calc +``` + +For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install calc. + +``` +$ sudo apt install calc +``` + +For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install calc. + +``` +$ sudo pacman -S calc +``` + +For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install calc. + +``` +$ sudo yum install calc +``` + +For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install calc. + +``` +$ sudo zypper install calc +``` + +### How To Use The calc Command To Perform Calculation In Linux? + +We can use the calc command to perform all kind of calculation right from the terminal. + +Intractive mode + +``` +$ calc +C-style arbitrary precision calculator (version 2.12.7.1) +Calc is open software. For license details type: help copyright +[Type "exit" to exit, or "help" for help.] 
+ +; 5+1 + 6 +; 5-1 + 4 +; 5*2 + 10 +; 10/2 + 5 +; quit +``` + +Non-Intractive mode + +``` +$ calc 3/5 + 0.6 +``` + +### How To Perform Calculation In Linux Using expr Command? + +Print the value of EXPRESSION to standard output. A blank line below separates increasing precedence groups. It’s part of coreutils so, we no need to install it. + +### How To Use The expr Command To Perform Calculation In Linux? + +Use the following format for basic calculations. + +For addition + +``` +$ expr 5 + 1 +6 +``` + +For subtraction + +``` +$ expr 5 - 1 +4 +``` + +For division. + +``` +$ expr 10 / 2 +5 +``` + +### How To Perform Calculation In Linux Using gcalccmd Command? + +gnome-calculator is the official calculator of the GNOME desktop environment. gcalccmd is the console version of Gnome Calculator utility. By default it has installed in the GNOME desktop. + +### How To Use The gcalccmd Command To Perform Calculation In Linux? + +I have added few examples on this. + +``` +$ gcalccmd + +> 5+1 +6 + +> 5-1 +4 + +> 5*2 +10 + +> 10/2 +5 + +> sqrt(16) +4 + +> 3/5 +0.6 + +> quit +``` + +### How To Perform Calculation In Linux Using qalc Command? + +Qalculate is a multi-purpose cross-platform desktop calculator. It is simple to use but provides power and versatility normally reserved for complicated math packages, as well as useful tools for everyday needs (such as currency conversion and percent calculation). + +Features include a large library of customizable functions, unit calculations and conversion, symbolic calculations (including integrals and equations), arbitrary precision, uncertainty propagation, interval arithmetic, plotting, and a user-friendly interface (GTK+ and CLI). + +For **`Fedora`** system, use **[DNF Command][1]** to install qalc. + +``` +$ sudo dnf install libqalculate +``` + +For **`Debian/Ubuntu`** systems, use **[APT-GET Command][2]** or **[APT Command][3]** to install qalc. + +``` +$ sudo apt install libqalculate +``` + +For **`Arch Linux`** based systems, use **[Pacman Command][4]** to install qalc. + +``` +$ sudo pacman -S libqalculate +``` + +For **`RHEL/CentOS`** systems, use **[YUM Command][5]** to install qalc. + +``` +$ sudo yum install libqalculate +``` + +For **`openSUSE Leap`** system, use **[Zypper Command][6]** to install qalc. + +``` +$ sudo zypper install libqalculate +``` + +### How To Use The qalc Command To Perform Calculation In Linux? + +I have added few examples on this. + +``` +$ qalc +> 5+1 + + 5 + 1 = 6 + +> ans*2 + + ans * 2 = 12 + +> ans-2 + + ans - 2 = 10 + +> 1 USD to INR +It has been 36 day(s) since the exchange rates last were updated. +Do you wish to update the exchange rates now? y + + error: Failed to download exchange rates from coinbase.com: Resolving timed out after 15000 milliseconds. + 1 * dollar = approx. INR 69.638581 + +> 10 USD to INR + + 10 * dollar = approx. INR 696.38581 + +> quit +``` + +### How To Perform Calculation In Linux Using Linux Shell Command? + +We can use the shell commands such as echo, awk, etc to perform the calculation. + +For Addition using echo command. 
+ +``` +$ echo $((5+5)) +10 +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-command-line-calculator-bc-calc-qalc-gcalccmd/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[2]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[3]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[4]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[5]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[6]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ diff --git a/sources/tech/20190319 How To Set Up a Firewall with GUFW on Linux.md b/sources/tech/20190319 How To Set Up a Firewall with GUFW on Linux.md new file mode 100644 index 0000000000..26b9850109 --- /dev/null +++ b/sources/tech/20190319 How To Set Up a Firewall with GUFW on Linux.md @@ -0,0 +1,365 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Set Up a Firewall with GUFW on Linux) +[#]: via: (https://itsfoss.com/set-up-firewall-gufw) +[#]: author: (Sergiu https://itsfoss.com/author/sergiu/) + +How To Set Up a Firewall with GUFW on Linux +====== + +**UFW (Uncomplicated Firewall)** is a simple to use firewall utility with plenty of options for most users. It is an interface for the **iptables** , which is the classic (and harder to get comfortable with) way to set up rules for your network. + +**Do you really need a firewall for desktop?** + +![][1] + +A **[firewall][2]** is a way to regulate the incoming and outgoing traffic on your network. A well-configured firewall is crucial for the security of servers. + +But what about normal, desktop users? Do you need a firewall on your Linux system? Most likely you are connected to internet via a router linked to your internet service provider (ISP). Some routers already have built-in firewall. On top of that, your actual system is hidden behind NAT. In other words, you probably have a security layer when you are on your home network. + +Now that you know you should be using a firewall on your system, let’s see how you can easily install and configure a firewall on Ubuntu or any other Linux distribution. + +### Setting Up A Firewall With GUFW + +**[GUFW][3]** is a graphical utility for managing [Uncomplicated Firewall][4] ( **UFW** ). In this guide, I’ll go over configuring a firewall using **GUFW** that suits your needs, going over the different modes and rules. + +But first, let’s see how to install GUFW. + +#### Installing GUFW on Ubuntu and other Linux + +GUFW is available in all major Linux distributions. I advise using your distribution’s package manager for installing GUFW. + +If you are using Ubuntu, make sure you have the Universe Repository enabled. To do that, open up a terminal (default hotkey**:** CTRL+ALT+T) and enter: + +``` +sudo add-apt-repository universe +sudo apt update -y +``` + +Now you can install GUFW with this command: + +``` +sudo apt install gufw -y +``` + +That’s it! 
If you prefer not touching the terminal, you can install it from the Software Center as well. + +Open Software Center and search for **gufw** and click on the search result. + +![Search for gufw in software center][5] + +Go ahead and click **Install**. + +![Install GUFW from the Software Center][6] + +To open **gufw** , go to your menu and search for it. + +![Start GUFW][7] + +This will open the firewall application and you’ll be greeted by a “ **Getting Started** ” section. + +![GUFW Interface and Welcome Screen][8] + +#### Turn on the firewall + +The first thing to notice about this menu is the **Status** toggle. Pressing this button will turn on/off the firewall ( **default:** off), applying your preferences (policies and rules). + +![Turn on the firewall][9] + +If turned on, the shield icon turn from grey to colored. The colors, as noted later in this article, reflect your policies. This will also make the firewall **automatically start** on system startup. + +**Note:** _**Home** will be turned **off** by default. The other profiles (see next section) will be turned **on.**_ + +#### Understanding GUFW and its profiles + +As you can see in the menu, you can select different **profiles**. Each profile comes with different **default policies**. What this means is that they offer different behaviors for incoming and outgoing traffic. + +The **default profiles** are: + + * Home + * Public + * Office + + + +You can select another profile by clicking on the current one ( **default: Home** ). + +![][10] + +Selecting one of them will modify the default behavior. Further down, you can change Incoming and Outgoing traffic preferences. + +By default, both in **Home** and in **Office** , these policies are **Deny Incoming** and **Allow Outgoing**. This enables you to use services such as http/https without letting anything get in ( **e.g.** ssh). + +For **Public** , they are **Reject Incoming** and **Allow Outgoing**. **Reject** , similar to **deny** , doesn’t let services in, but also sends feedback to the user/service that tried accessing your machine (instead of simply dropping/hanging the connection). + +Note + +If you are an average desktop user, you can stick with the default profiles. You’ll have to manually change the profiles if you change the network. + +So if you are travelling, set the firewall on public profile and the from here forwards, firewall will be set in public mode on each reboot. + +#### Configuring firewall rules and policies [for advanced users] + +All profiles use the same rules, only the policies the rules build upon will differ. Changing the behavior of a policy ( **Incoming/Outgoing** ) will apply the changes to the selected profile. + +Note that the policies can only be changed while the firewall is active (Status: ON). + +Profiles can easily be added, deleted and renamed from the **Preferences** menu. + +##### Preferences + +In the top bar, click on **Edit**. Select **Preferences**. + +![Open Preferences Menu in GUFW][11] + +This will open up the **Preferences** menu. + +![][12] + +Let’s go over the options you have here! + +**Logging** means exactly what you would think: how much information does the firewall write down in the log files. + +The options under **Gufw** are quite self-explanatory. + +In the section under **Profiles** is where we can add, delete and rename profiles. Double-clicking on a profile will allow you to **rename** it. Pressing **Enter** will complete this process and pressing **Esc** will cancel the rename. 
+ +![][13] + +To **add** a new profile, click on the **+** under the list of profiles. This will add a new profile. However, it won’t notify you about it. You’ll also have to scroll down the list to see the profile you created (using the mouse wheel or the scroll bar on the right side of the list). + +**Note:** _The newly added profile will **Deny Incoming** and **Allow Outgoing** traffic._ + +![][14] + +Clicking a profile highlight that profile. Pressing the **–** button will **delete** the highlighted profile. + +![][15] + +**Note:** _You can’t rename/remove the currently selected profile_. + +You can now click on **Close**. Next, I’ll go into setting up different **rules**. + +##### Rules + +Back to the main menu, somewhere in the middle of the screen you can select different tabs ( **Home, Rules, Report, Logs)**. We already covered the **Home** tab (that’s the quick guide you see when you start the app). + +![][16] + +Go ahead and select **Rules**. + +![][17] + +This will be the bulk of your firewall configuration: networking rules. You need to understand the concepts UFW is based on. That is **allowing, denying, rejecting** and **limiting** traffic. + +**Note:** _In UFW, the rules apply from top to bottom (the top rules take effect first and on top of them are added the following ones)._ + +**Allow, Deny, Reject, Limit:**These are the available policies for the rules you’ll add to your firewall. + +Let’s see exactly what each of them means: + + * **Allow:** allows any entry traffic to a port + * **Deny:** denies any entry traffic to a port + * **Reject:** denies any entry traffic to a port and informs the requester about the rejection + * **Limit:** denies entry traffic if an IP address has attempted to initiate 6 or more connections in the last 30 seconds + + + +##### Adding Rules + +There are three ways to add rules in GUFW. I’ll present all three methods in the following section. + +**Note:** _After you added the rules, changing their order is a very tricky process and it’s easier to just delete them and add them in the right order._ + +But first, click on the **+** at the bottom of the **Rules** tab. + +![][18] + +This should open a pop-up menu ( **Add a Firewall Rule** ). + +![][19] + +At the top of this menu, you can see the three ways you can add rules. I’ll guide you through each method i.e. **Preconfigured, Simple, Advanced**. Click to expand each section. + +**Preconfigured Rules** + +This is the most beginner-friendly way to add rules. + +The first step is choosing a policy for the rule (from the ones detailed above). + +![][20] + +The next step is to choose the direction the rule will affect ( **Incoming, Outgoing, Both** ). + +![][21] + +The **Category** and **Subcategory** choices are plenty. These narrow down the **Applications** you can select + +Choosing an **Application** will set up a set of ports based on what is needed for that particular application. This is especially useful for apps that might operate on multiple ports, or if you don’t want to bother with manually creating rules for handwritten port numbers. + +If you wish to further customize the rule, you can click on the **orange arrow icon**. This will copy the current settings (Application with it’s ports etc.) and take you to the **Advanced** rule menu. I’ll cover that later in this article. + +For this example, I picked an **Office Database** app: **MySQL**. I’ll deny all incoming traffic to the ports used by this app. +To create the rule, click on **Add**. 
+ +![][22] + +You can now **Close** the pop-up (if you don’t want to add any other rules). You can see that the rule has been successfully added. + +![][23] + +The ports have been added by GUFW, and the rules have been automatically numbered. You may wonder why are there two new rules instead of just one; the answer is that UFW automatically adds both a standard **IP** rule and an **IPv6** rule. + +**Simple Rules** + +Although setting up preconfigured rules is nice, there is another easy way to add a rule. Click on the **+** icon again and go to the **Simple** tab. + +![][24] + +The options here are straight forward. Enter a name for your rule and select the policy and the direction. I’ll add a rule for rejecting incoming SSH attempts. + +![][25] + +The **Protocols** you can choose are **TCP, UDP** or **Both**. + +You must now enter the **Port** for which you want to manage the traffic. You can enter a **port number** (e.g. 22 for ssh), a **port range** with inclusive ends separated by a **:** ( **colon** ) (e.g. 81:89) or a **service name** (e.g. ssh). I’ll use **ssh** and select **both TCP and UDP** for this example. As before, click on **Add** to completing the creation of your rule. You can click the **red arrow icon** to copy the settings to the **Advanced** rule creation menu. + +![][26] + +If you select **Close** , you can see that the new rule (along with the corresponding IPv6 rule) has been added. + +![][27] + +**Advanced Rules** + +I’ll now go into how to set up more advanced rules, to handle traffic from specific IP addresses and subnets and targeting different interfaces. + +Let’s open up the **Rules** menu again. Select the **Advanced** tab. + +![][28] + +By now, you should already be familiar with the basic options: **Name, Policy, Direction, Protocol, Port**. These are the same as before. + +![][29] + +**Note:** _You can choose both a receiving port and a requesting port._ + +What changes is that now you have additional options to further specialize our rules. + +I mentioned before that rules are automatically numbered by GUFW. With **Advanced** rules you specify the position of your rule by entering a number in the **Insert** option. + +**Note:** _Inputting **position 0** will add your rule after all existing rules._ + +**Interface** let’s you select any network interface available on your machine. By doing so, the rule will only have effect on traffic to and from that specific interface. + +**Log** changes exactly that: what will and what won’t be logged. + +You can also choose IPs for the requesting and for the receiving port/service ( **From** , **To** ). + +All you have to do is specify an **IP address** (e.g. 192.168.0.102) or an entire **subnet** (e.g. 192.168.0.0/24 for IPv4 addresses ranging from 192.168.0.0 to 192.168.0.255). + +In my example, I’ll set up a rule to allow all incoming TCP SSH requests from systems on my subnet to a specific network interface of the machine I’m currently running. I’ll add the rule after all my standard IP rules, so that it takes effect on top of the other rules I have set up. + +![][30] + +**Close** the menu. + +![][31] + +The rule has been successfully added after the other standard IP rules. + +##### Edit Rules + +Clicking a rule in the rules list will highlight it. Now, if you click on the **little cog icon** at the bottom, you can **edit** the highlighted rule. + +![][32] + +This will open up a menu looking something like the **Advanced** menu I explained in the last section. 
+ +![][33] + +**Note:** _Editing any options of a rule will move it to the end of your list._ + +You can now ether select on **Apply** to modify your rule and move it to the end of the list, or hit **Cancel**. + +##### Delete Rules + +After selecting (highlighting) a rule, you can also click on the **–** icon. + +![][34] + +##### Reports + +Select the **Report** tab. Here you can see services that are currently running (along with information about them, such as Protocol, Port, Address and Application name). From here, you can **Pause Listening Report (Pause Icon)** or **Create a rule from a highlighted service from the listening report (+ Icon)**. + +![][35] + +##### Logs + +Select the **Logs** tab. Here is where you’ll have to check for any errors are suspicious rules. I’ve tried creating some invalid rules to show you what these might look like when you don’t know why you can’t add a certain rule. In the bottom section, there are two icons. Clicking the **first icon copies the logs** to your clipboard and clicking the **second icon** **clears the log**. + +![][36] + +### Wrapping Up + +Having a firewall that is properly configured can greatly contribute to your Ubuntu experience, making your machine safer to use and allowing you to have full control over incoming and outgoing traffic. + +I have covered the different uses and modes of **GUFW** , going into how to set up different rules and configure a firewall to your needs. I hope that this guide has been helpful to you. + +If you are a beginner, this should prove to be a comprehensive guide; even if you are more versed in the Linux world and maybe getting your feet wet into servers and networking, I hope you learned something new. + +Let us know in the comments if this article helped you and why did you decide a firewall would improve your system! 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/set-up-firewall-gufw + +作者:[Sergiu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/sergiu/ +[b]: https://github.com/lujun9972 +[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firewall-linux.png?resize=800%2C450&ssl=1 +[2]: https://en.wikipedia.org/wiki/Firewall_(computing) +[3]: http://gufw.org/ +[4]: https://en.wikipedia.org/wiki/Uncomplicated_Firewall +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu_software_gufw-1.jpg?ssl=1 +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu_software_install_gufw.jpg?ssl=1 +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/show_applications_gufw.jpg?ssl=1 +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw.jpg?ssl=1 +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_toggle_status.jpg?ssl=1 +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_select_profile-1.jpg?ssl=1 +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_open_preferences.jpg?ssl=1 +[12]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preferences.png?fit=800%2C585&ssl=1 +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_rename_profile.png?fit=800%2C551&ssl=1 +[14]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_profile.png?ssl=1 +[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_delete_profile.png?ssl=1 +[16]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_home_tab.png?ssl=1 +[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_rules_tab.png?ssl=1 +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_rule.png?ssl=1 +[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_rules_menu.png?ssl=1 +[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_rule_policy.png?ssl=1 +[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_rule_direction.png?ssl=1 +[22]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_add_rule.png?ssl=1 +[23]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_preconfigured_rule_added.png?ssl=1 +[24]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_simple_rules_menu.png?ssl=1 +[25]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_simple_rule_name_policy_direction.png?ssl=1 +[26]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_simple_rule.png?ssl=1 +[27]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_simple_rule_added.png?ssl=1 +[28]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_advanced_rules_menu.png?ssl=1 +[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_advanced_rule_basic_options.png?ssl=1 +[30]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_add_advanced_rule.png?ssl=1 +[31]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_advanced_rule_added.png?ssl=1 +[32]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_edit_highlighted_rule.png?ssl=1 +[33]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_edit_rule_menu.png?ssl=1 +[34]: 
https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_delete_rule.png?ssl=1 +[35]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_report_tab.png?ssl=1 +[36]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/gufw_log_tab-1.png?ssl=1 +[37]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/firewall-linux.png?fit=800%2C450&ssl=1 diff --git a/sources/tech/20190319 How to set up a homelab from hardware to firewall.md b/sources/tech/20190319 How to set up a homelab from hardware to firewall.md new file mode 100644 index 0000000000..d8bb34395b --- /dev/null +++ b/sources/tech/20190319 How to set up a homelab from hardware to firewall.md @@ -0,0 +1,107 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to set up a homelab from hardware to firewall) +[#]: via: (https://opensource.com/article/19/3/home-lab) +[#]: author: (Michael Zamot https://opensource.com/users/mzamot) + +How to set up a homelab from hardware to firewall +====== + +Take a look at hardware and software options for building your own homelab. + +![][1] + +Do you want to create a homelab? Maybe you want to experiment with different technologies, create development environments, or have your own private cloud. There are many reasons to have a homelab, and this guide aims to make it easier to get started. + +There are three categories to consider when planning a home lab: hardware, software, and maintenance. We'll look at the first two categories here and save maintaining your computer lab for a future article. + +### Hardware + +When thinking about your hardware needs, first consider how you plan to use your lab as well as your budget, noise, space, and power usage. + +If buying new hardware is too expensive, search local universities, ads, and websites like eBay or Craigslist for recycled servers. They are usually inexpensive, and server-grade hardware is built to last many years. You'll need three types of hardware: a virtualization server, storage, and a router/firewall. + +#### Virtualization servers + +A virtualization server allows you to run several virtual machines that share the physical box's resources while maximizing and isolating resources. If you break one virtual machine, you won't have to rebuild the entire server, just the virtual one. If you want to do a test or try something without the risk of breaking your entire system, just spin up a new virtual machine and you're ready to go. + +The two most important factors to consider in a virtualization server are the number and speed of its CPU cores and its memory. If there are not enough resources to share among all the virtual machines, they'll be overallocated and try to steal each other's CPU cycles and memory. + +So, consider a CPU platform with multiple cores. You want to ensure the CPU supports virtualization instructions (VT-x for Intel and AMD-V for AMD). Examples of good consumer-grade processors that can handle virtualization are Intel i5 or i7 and AMD Ryzen. If you are considering server-grade hardware, the Xeon class for Intel and EPYC for AMD are good options. Memory can be expensive, especially the latest DDR4 SDRAM. When estimating memory requirements, factor at least 2GB for the host operating system's memory consumption. + +If your electricity bill or noise is a concern, solutions like Intel's NUC devices provide a small form factor, low power usage, and reduced noise, but at the expense of expandability. 
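Before buying anything, it is also worth checking whether a machine you already own exposes the virtualization instructions mentioned above. A quick check on any Linux box is sketched below; the exact output varies by CPU, with `vmx` indicating Intel VT-x and `svm` indicating AMD-V:

```
$ lscpu | grep -i virtualization
$ grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
```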
#### Network-attached storage (NAS)

If you want a machine loaded with hard drives to store all your personal data, movies, pictures, etc. and provide storage for the virtualization server, network-attached storage (NAS) is what you want.

In most cases, you won’t need a powerful CPU; in fact, many commercial NAS solutions use low-powered ARM CPUs. A motherboard that supports multiple SATA disks is a must. If your motherboard doesn’t have enough ports, use a host bus adapter (HBA) SAS controller to add extras.

Network performance is critical for a NAS, so select a gigabit network interface (or better).

Memory requirements will differ based on your filesystem. ZFS is one of the most popular filesystems for NAS, and you’ll need more memory to use features such as caching or deduplication. Error-correcting code (ECC) memory is your best bet to protect data from corruption (but make sure your motherboard supports it before you buy). Last, but not least, don’t forget an uninterruptible power supply (UPS), because losing power can cause data corruption.

#### Firewall and router

Have you ever realized that a cheap router/firewall is usually the main thing protecting your home network from the outside world? These routers rarely receive timely security updates, if they receive any at all. Scared now? Well, [you should be][2]!

You usually don’t need a powerful CPU or a great deal of memory to build your own router/firewall, unless you are handling a huge throughput or want to do CPU-intensive tasks, like a VPN server or traffic filtering. In such cases, you’ll need a multicore CPU with AES-NI support.

You will want at least two 1-gigabit (or better) Ethernet network interface cards (NICs). A managed switch is not required, but it is recommended: it lets you connect your DIY router to it and create VLANs to further isolate and secure your network.

![Home computer lab PfSense][4]

### Software

After you’ve selected your virtualization server, NAS, and firewall/router, the next step is exploring the different operating systems and software to maximize their benefits. While you could use a regular Linux distribution like CentOS, Debian, or Ubuntu, they usually take more time to configure and administer than the following options.

#### Virtualization software

**[KVM][5]** (Kernel-based Virtual Machine) lets you turn Linux into a hypervisor so you can run multiple virtual machines in the same box. The best thing is that KVM is part of Linux, and it is the go-to option for many enterprises and home users. If you are comfortable, you can install **[libvirt][6]** and **[virt-manager][7]** to manage your virtualization platform.

**[Proxmox VE][8]** is a robust, enterprise-grade solution and a full open source virtualization and container platform. It is based on Debian and uses KVM as its hypervisor and LXC for containers. Proxmox offers a powerful web interface, an API, and can scale out to many clustered nodes, which is helpful because you’ll never know when you’ll run out of capacity in your lab.

**[oVirt][9] (RHV)** is another enterprise-grade solution that uses KVM as the hypervisor. Just because it’s enterprise doesn’t mean you can’t use it at home. oVirt offers a powerful web interface and an API and can handle hundreds of nodes (if you are running that many servers, I don’t want to be your neighbor!).
The potential problem with oVirt for a home lab is that it requires a minimum set of nodes: You'll need one external storage, such as a NAS, and at least two additional virtualization nodes (you can run it just on one, but you'll run into problems in maintenance of your environment). + +#### NAS software + +**[FreeNAS][10]** is the most popular open source NAS distribution, and it's based on the rock-solid FreeBSD operating system. One of its most robust features is its use of the ZFS filesystem, which provides data-integrity checking, snapshots, replication, and multiple levels of redundancy (mirroring, striped mirrors, and striping). On top of that, everything is managed from the powerful and easy-to-use web interface. Before installing FreeNAS, check its hardware support, as it is not as wide as Linux-based distributions. + +Another popular alternative is the Linux-based **[OpenMediaVault][11]**. One of its main features is its modularity, with plugins that extend and add features. Among its included features are a web-based administration interface; protocols like CIFS, SFTP, NFS, iSCSI; and volume management, including software RAID, quotas, access control lists (ACLs), and share management. Because it is Linux-based, it has extensive hardware support. + +#### Firewall/router software + +**[pfSense][12]** is an open source, enterprise-grade FreeBSD-based router and firewall distribution. It can be installed directly on a server or even inside a virtual machine (to manage your virtual or physical networks and save space). It has many features and can be expanded using packages. It is managed entirely using the web interface, although it also has command-line access. It has all the features you would expect from a router and firewall, like DHCP and DNS, as well as more advanced features, such as intrusion detection (IDS) and intrusion prevention (IPS) systems. You can create multiple networks listening on different interfaces or using VLANs, and you can create a secure VPN server with a few clicks. pfSense uses pf, a stateful packet filter that was developed for the OpenBSD operating system using a syntax similar to IPFilter. Many companies and organizations use pfSense. + +* * * + +With all this information in mind, it's time for you to get your hands dirty and start building your lab. In a future article, I will get into the third category of running a home lab: using automation to deploy and maintain it. 
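In the meantime, if you want a head start on the software side, the KVM/libvirt/virt-manager combination described above is usually just a package install away. Here is a rough sketch for a Debian- or Ubuntu-based host; the package and group names are assumptions that can differ between releases and distributions:

```
$ sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virt-manager
$ sudo adduser $USER libvirt    # let your regular user manage VMs
$ virt-manager                  # graphical manager for local and remote KVM hosts
```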
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/home-lab + +作者:[Michael Zamot (Red Hat)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mzamot +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb +[2]: https://opensource.com/article/18/5/how-insecure-your-router +[3]: /file/427426 +[4]: https://opensource.com/sites/default/files/uploads/pfsense2.png (Home computer lab PfSense) +[5]: https://www.linux-kvm.org/page/Main_Page +[6]: https://libvirt.org/ +[7]: https://virt-manager.org/ +[8]: https://www.proxmox.com/en/proxmox-ve +[9]: https://ovirt.org/ +[10]: https://freenas.org/ +[11]: https://www.openmediavault.org/ +[12]: https://www.pfsense.org/ diff --git a/sources/tech/20190320 Choosing an open messenger client- Alternatives to WhatsApp.md b/sources/tech/20190320 Choosing an open messenger client- Alternatives to WhatsApp.md new file mode 100644 index 0000000000..5f940e9b0b --- /dev/null +++ b/sources/tech/20190320 Choosing an open messenger client- Alternatives to WhatsApp.md @@ -0,0 +1,97 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Choosing an open messenger client: Alternatives to WhatsApp) +[#]: via: (https://opensource.com/article/19/3/open-messenger-client) +[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen) + +Choosing an open messenger client: Alternatives to WhatsApp +====== + +Keep in touch with far-flung family, friends, and colleagues without sacrificing your privacy. + +![Team communication, chat][1] + +Like many families, mine is inconveniently spread around, and I have many colleagues in North and South America. So, over the years, I've relied more and more on WhatsApp to stay in touch with people. The claimed end-to-end encryption appeals to me, as I prefer to maintain some shreds of privacy, and moreover to avoid forcing those with whom I communicate to use an insecure mechanism. But all this [WhatsApp/Facebook/Instagram "convergence"][2] has led our family to decide to vote with our feet. We no longer use WhatsApp for anything except communicating with others who refuse to use anything else, and we're working on them. + +So what do we use instead? Before I spill the beans, I'd like to explain what other options we looked at and how we chose. + +### Options we considered and how we evaluated them + +There is an absolutely [crazy number of messaging apps out there][3], and we spent a good deal of time thinking about what we needed for a replacement. We started by reading Dan Arel's article on [five social media alternatives to protect privacy][4]. + +Then we came up with our list of core needs: + + * Our entire family uses Android phones. + * One of us has a Windows desktop; the rest use Linux. + * Our main interest is something we can use to chat, both individually and as a group, on our phones, but it would be nice to have a desktop client available. + * It would also be nice to have voice and video calling as well. + * Our privacy is important. Ideally, the code should be open source to facilitate security reviews. 
If the operation is not pure peer-to-peer, then the organization operating the server components should not operate a business based on the commercialization of our personal information. + + + +At that point, we narrowed the long list down to [Viber][5], [Line][6], [Signal][7], [Threema][8], [Wire][9], and [Riot.im][10]. While I lean strongly to open source, we wanted to include some closed source and paid solutions to make sure we weren't missing something important. Here's how those six alternatives measured up. + +### Line + +[Line][11] is a popular messaging application, and it's part of a larger Line "ecosystem"—online gaming, Taxi (an Uber-like service in Japan), Wow (a food delivery service), Today (a news hub), shopping, and others. For us, Line checks a few too many boxes with all those add-on features. Also, I could not determine its current security quality, and it's not open source. The business model seems to be to build a community and figure out how to make money through that community. + +### Riot.im + +[Riot.im][12] operates on top of the Matrix protocol and therefore lets the user choose a Matrix provider. It also appears to check all of our "needs" boxes, although in operation it looks more like Slack, with a room-oriented and interoperable/federated design. It offers desktop clients, and it's open source. Since the Matrix protocol can be hosted anywhere, any business model would be particular to the Matrix provider. + +### Signal + +[Signal][13] offers a similar user experience to WhatsApp. It checks all of our "needs" boxes, with solid security validated by external audit. It is open source, and it is developed and operated by a not-for-profit foundation, in principle similar to the Mozilla Foundation. Interestingly, Signal's communications protocol appears to be used by other messaging apps, [including WhatsApp][14]. + +### Threema + +[Threema][15] is extremely privacy-focused. It checks some of our "needs" boxes, with decent external audit results of its security. It doesn't offer a desktop client, and it [isn't fully open source][16] though some of its core components are. Threema's business model appears to be to offer paid secure communications. + +### Viber + +[Viber][17] is a very popular messaging application. It checks most of our "needs" boxes; however, it doesn't seem to have solid proof of its security—it seems to use a proprietary encryption mechanism, and as far as I could determine, its current security mechanisms are not externally audited. It's not open source. The owner, Rakuten, seems to be planning for a paid subscription as a business model. + +### Wire + +[Wire][18] was started and is built by some ex-Skype people. It appears to check all of our "needs" boxes, although I am not completely comfortable with its security profile since it stores client data that apparently is not encrypted on its servers. It offers desktop clients and is open source. The developer and operator, Wire Swiss, appears to have a [pay-for-service track][9] as its future business model. + +### The final verdict + +In the end, we picked Signal. We liked its open-by-design approach, its serious and ongoing [privacy and security stance][7] and having a Signal app on our GNOME (and Windows) desktops. It performs very well on our Android handsets and our desktops. Moreover, it wasn't a big surprise to our small user community; it feels much more like WhatsApp than, for example, Riot.im, which we also tried extensively. 
Having said that, if we were trying to replace Slack, we'd probably move to Riot.im. + +_Have a favorite messenger? Tell us about it in the comments below._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/open-messenger-client + +作者:[Chris Hermansen (Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat) +[2]: https://www.cnbc.com/2018/03/28/facebook-new-privacy-settings-dont-address-instagram-whatsapp.html +[3]: https://en.wikipedia.org/wiki/Comparison_of_instant_messaging_clients +[4]: https://opensource.com/article/19/1/open-source-social-media-alternatives +[5]: https://en.wikipedia.org/wiki/Viber +[6]: https://en.wikipedia.org/wiki/Line_(software) +[7]: https://en.wikipedia.org/wiki/Signal_(software) +[8]: https://en.wikipedia.org/wiki/Threema +[9]: https://en.wikipedia.org/wiki/Wire_(software) +[10]: https://en.wikipedia.org/wiki/Riot.im +[11]: https://line.me/en/ +[12]: https://about.riot.im/ +[13]: https://signal.org/ +[14]: https://en.wikipedia.org/wiki/Signal_Protocol +[15]: https://threema.ch/en +[16]: https://threema.ch/en/faq/source_code +[17]: https://www.viber.com/ +[18]: https://wire.com/en/ diff --git a/sources/tech/20190320 Getting started with Jaeger to build an Istio service mesh.md b/sources/tech/20190320 Getting started with Jaeger to build an Istio service mesh.md new file mode 100644 index 0000000000..c4200355e4 --- /dev/null +++ b/sources/tech/20190320 Getting started with Jaeger to build an Istio service mesh.md @@ -0,0 +1,157 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Getting started with Jaeger to build an Istio service mesh) +[#]: via: (https://opensource.com/article/19/3/getting-started-jaeger) +[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) + +Getting started with Jaeger to build an Istio service mesh +====== + +Improve monitoring and tracing of cloud-native apps on a distributed networking system. + +![Mesh networking connected dots][1] + +[Service mesh][2] provides a dedicated network for service-to-service communication in a transparent way. [Istio][3] aims to help developers and operators address service mesh features such as dynamic service discovery, mutual transport layer security (TLS), circuit breakers, rate limiting, and tracing. [Jaeger][4] with Istio augments monitoring and tracing of cloud-native apps on a distributed networking system. This article explains how to get started with Jaeger to build an Istio service mesh on the Kubernetes platform. + +### Spinning up a Kubernetes cluster + +[Minikube][5] allows you to run a single-node Kubernetes cluster based on a virtual machine such as [KVM][6], [VirtualBox][7], or [HyperKit][8] on your local machine. 
[Install Minikube][9] and use the following shell script to run it: + +``` +#!/bin/bash + +export MINIKUBE_PROFILE_NAME=istio-jaeger +minikube profile $MINIKUBE_PROFILE_NAME +minikube config set cpus 3 +minikube config set memory 8192 + +# You need to replace appropriate VM driver on your local machine +minikube config set vm-driver hyperkit + +minikube start +``` + +In the above script, replace the **\--vm-driver=xxx** option with the appropriate virtual machine driver on your operating system (OS). + +### Deploying Istio service mesh with Jaeger + +Download the Istio installation file for your OS from the [Istio release page][10]. In the Istio package directory, you will find the Kubernetes installation YAML files in **install/** and the sample applications in **sample/**. Use the following commands: + +``` +$ curl -L | sh - +$ cd istio-1.0.5 +$ export PATH=$PWD/bin:$PATH +``` + +The easiest way to deploy Istio with Jaeger on your Kubernetes cluster is to use [Custom Resource Definitions][11]. Install Istio with mutual TLS authentication between sidecars with these commands: + +``` +$ kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml +$ kubectl apply -f install/kubernetes/istio-demo-auth.yaml +``` + +Check if all pods of Istio on your Kubernetes cluster are deployed and running correctly by using the following command and review the output: + +``` +$ kubectl get pods -n istio-system +NAME READY STATUS RESTARTS AGE +grafana-59b8896965-p2vgs 1/1 Running 0 3h +istio-citadel-856f994c58-tk8kq 1/1 Running 0 3h +istio-cleanup-secrets-mq54t 0/1 Completed 0 3h +istio-egressgateway-5649fcf57-n5ql5 1/1 Running 0 3h +istio-galley-7665f65c9c-wx8k7 1/1 Running 0 3h +istio-grafana-post-install-nh5rw 0/1 Completed 0 3h +istio-ingressgateway-6755b9bbf6-4lf8m 1/1 Running 0 3h +istio-pilot-698959c67b-d2zgm 2/2 Running 0 3h +istio-policy-6fcb6d655f-lfkm5 2/2 Running 0 3h +istio-security-post-install-st5xc 0/1 Completed 0 3h +istio-sidecar-injector-768c79f7bf-9rjgm 1/1 Running 0 3h +istio-telemetry-664d896cf5-wwcfw 2/2 Running 0 3h +istio-tracing-6b994895fd-h6s9h 1/1 Running 0 3h +prometheus-76b7745b64-hzm27 1/1 Running 0 3h +servicegraph-5c4485945b-mk22d 1/1 Running 1 3h +``` + +### Building sample microservice apps + +You can use the [Bookinfo][12] app to learn about Istio's features. Bookinfo consists of four microservice apps: _productpage_ , _details_ , _reviews_ , and _ratings_ deployed independently on Minikube. 
Each microservice will be deployed with an Envoy sidecar via Istio by using the following commands: + +``` +// Enable sidecar injection automatically +$ kubectl label namespace default istio-injection=enabled +$ kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml + +// Export the ingress IP, ports, and gateway URL +$ kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml + +$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}') +$ export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}') +$ export INGRESS_HOST=$(minikube ip) + +$ export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT +``` + +### Accessing the Jaeger dashboard + +To view tracing information for each HTTP request, create some traffic by running the following commands at the command line: +``` + +``` + +$ while true; do + curl -s http://${GATEWAY_URL}/productpage > /dev/null + echo -n .; + sleep 0.2 +done + +You can access the Jaeger dashboard through a web browser with [http://localhost:16686][13] if you set up port forwarding as follows: + +``` +kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath='{.items[0].metadata.name}') 16686:16686 & +``` + +You can explore all traces by clicking "Find Traces" after selecting the _productpage_ service. Your dashboard will look similar to this: + +![Find traces in Jaeger][14] + +You can also view more details about each trace to dig into performance issues or elapsed time by clicking on a certain trace. + +![Viewing details about a trace][15] + +### Conclusion + +A distributed tracing platform allows you to understand what happened from service to service for individual ingress/egress traffic. Istio sends individual trace information automatically to Jaeger, the distributed tracing platform, even if your modern applications aren't aware of Jaeger at all. In the end, this capability helps developers and operators do troubleshooting easier and quicker at scale. 
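+
+When you are done exploring, you may also want to tear the demo environment down. The commands below are only a minimal cleanup sketch, assuming the same file layout and Minikube profile used in the installation steps above:
+
+```
+# Remove the Bookinfo sample application and its gateway (same files applied earlier).
+kubectl delete -f samples/bookinfo/networking/bookinfo-gateway.yaml
+kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml
+
+# Remove the Istio control plane installed for this walkthrough.
+kubectl delete -f install/kubernetes/istio-demo-auth.yaml
+
+# Finally, delete the local Minikube profile created at the beginning.
+minikube delete -p istio-jaeger
+```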
+ +* * * + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/getting-started-jaeger + +作者:[Daniel Oh (Red Hat)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/daniel-oh +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mesh_networking_dots_connected.png?itok=ovINTRR3 (Mesh networking connected dots) +[2]: https://blog.buoyant.io/2017/04/25/whats-a-service-mesh-and-why-do-i-need-one/ +[3]: https://istio.io/docs/concepts/what-is-istio/ +[4]: https://www.jaegertracing.io/docs/1.9/ +[5]: https://opensource.com/article/18/10/getting-started-minikube +[6]: https://www.linux-kvm.org/page/Main_Page +[7]: https://www.virtualbox.org/wiki/Downloads +[8]: https://github.com/moby/hyperkit +[9]: https://kubernetes.io/docs/tasks/tools/install-minikube/ +[10]: https://github.com/istio/istio/releases +[11]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions +[12]: https://github.com/istio/istio/tree/master/samples/bookinfo +[13]: http://localhost:16686/ +[14]: https://opensource.com/sites/default/files/uploads/traces_productpages.png (Find traces in Jaeger) +[15]: https://opensource.com/sites/default/files/uploads/traces_performance.png (Viewing details about a trace) diff --git a/sources/tech/20190320 Move your dotfiles to version control.md b/sources/tech/20190320 Move your dotfiles to version control.md new file mode 100644 index 0000000000..7d070760c7 --- /dev/null +++ b/sources/tech/20190320 Move your dotfiles to version control.md @@ -0,0 +1,130 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Move your dotfiles to version control) +[#]: via: (https://opensource.com/article/19/3/move-your-dotfiles-version-control) +[#]: author: (Matthew Broberg https://opensource.com/users/mbbroberg) + +Move your dotfiles to version control +====== +Back up or sync your custom configurations across your systems by sharing dotfiles on GitLab or GitHub. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/documents_papers_file_storage_work.png?itok=YlXpAqAJ) + +There is something truly exciting about customizing your operating system through the collection of hidden files we call dotfiles. In [What a Shell Dotfile Can Do For You][1], H. "Waldo" Grunenwald goes into excellent detail about the why and how of setting up your dotfiles. Let's dig into the why and how of sharing them. + +### What's a dotfile? + +"Dotfiles" is a common term for all the configuration files we have floating around our machines. These files usually start with a **.** at the beginning of the filename, like **.gitconfig** , and operating systems often hide them by default. For example, when I use **ls -a** on MacOS, it shows all the lovely dotfiles that would otherwise not be in the output. + +``` +dotfiles on master +➜ ls +README.md  Rakefile   bin       misc    profiles   zsh-custom + +dotfiles on master +➜ ls -a +.               .gitignore      .oh-my-zsh      README.md       zsh-custom +..              
.gitmodules     .tmux           Rakefile +.gemrc          .global_ignore .vimrc           bin +.git            .gvimrc         .zlogin         misc +.gitconfig      .maid           .zshrc          profiles +``` + +If I take a look at one, **.gitconfig** , which I use for Git configuration, I see a ton of customization. I have account information, terminal color preferences, and tons of aliases that make my command-line interface feel like mine. Here's a snippet from the **[alias]** block: + +``` +87   # Show the diff between the latest commit and the current state +88   d = !"git diff-index --quiet HEAD -- || clear; git --no-pager diff --patch-with-stat" +89 +90   # `git di $number` shows the diff between the state `$number` revisions ago and the current state +91   di = !"d() { git diff --patch-with-stat HEAD~$1; }; git diff-index --quiet HEAD -- || clear; d" +92 +93   # Pull in remote changes for the current repository and all its submodules +94   p = !"git pull; git submodule foreach git pull origin master" +95 +96   # Checkout a pull request from origin (of a github repository) +97   pr = !"pr() { git fetch origin pull/$1/head:pr-$1; git checkout pr-$1; }; pr" +``` + +Since my **.gitconfig** has over 200 lines of customization, I have no interest in rewriting it on every new computer or system I use, and either does anyone else. This is one reason sharing dotfiles has become more and more popular, especially with the rise of the social coding site GitHub. The canonical article advocating for sharing dotfiles is Zach Holman's [Dotfiles Are Meant to Be Forked][2] from 2008. The premise is true to this day: I want to share them, with myself, with those new to dotfiles, and with those who have taught me so much by sharing their customizations. + +### Sharing dotfiles + +Many of us have multiple systems or know hard drives are fickle enough that we want to back up our carefully curated customizations. How do we keep these wonderful files in sync across environments? + +My favorite answer is distributed version control, preferably a service that will handle the heavy lifting for me. I regularly use GitHub and continue to enjoy GitLab as I get more experienced with it. Either one is a perfect place to share your information. To set yourself up: + + 1. Sign into your preferred Git-based service. + 2. Create a repository called "dotfiles." (Make it public! Sharing is caring.) + 3. Clone it to your local environment.* + 4. Copy your dotfiles into the folder. + 5. Symbolically link (symlink) them back to their target folder (most often **$HOME** ). + 6. Push them to the remote repository. + + + +* You may need to set up your Git configuration commands to clone the repository. Both GitHub and GitLab will prompt you with the commands to run. + +![](https://opensource.com/sites/default/files/uploads/gitlab-new-project.png) + +Step 4 above is the crux of this effort and can be a bit tricky. Whether you use a script or do it by hand, the workflow is to symlink from your dotfiles folder to the dotfiles destination so that any updates to your dotfiles are easily pushed to the remote repository. 
To do this for my **.gitconfig** file, I would enter: + +``` +$ cd dotfiles/ +$ ln -nfs .gitconfig $HOME/.gitconfig +``` + +The flags added to the symlinking command offer a few additional benefits: + + * **-s** creates a symbolic link instead of a hard link + * **-f** continues with other symlinking when an error occurs (not needed here, but useful in loops) + * **-n** avoids symlinking a symlink (same as **-h** for other versions of **ln** ) + + + +You can review the IEEE and Open Group [specification of **ln**][3] and the version on [MacOS 10.14.3][4] if you want to dig deeper into the available parameters. I had to look up these flags since I pulled them from someone else's dotfiles. + +You can also make updating simpler with a little additional code, like the [Rakefile][5] I forked from [Brad Parbs][6]. Alternatively, you can keep it incredibly simple, as Jeff Geerling does [in his dotfiles][7]. He symlinks files using [this Ansible playbook][8]. Keeping everything in sync at this point is easy: you can cron job or occasionally **git push** from your dotfiles folder. + +### Quick aside: What not to share + +Before we move on, it is worth noting what you should not add to a shared dotfile repository—even if it starts with a dot. Anything that is a security risk, like files in your **.ssh/** folder, is not a good choice to share using this method. Be sure to double-check your configuration files before publishing them online and triple-check that no API tokens are in your files. + +### Where should I start? + +If Git is new to you, my [article about the terminology][9] and [a cheat sheet][10] of my most frequently used commands should help you get going. + +There are other incredible resources to help you get started with dotfiles. Years ago, I came across [dotfiles.github.io][11] and continue to go back to it for a broader look at what people are doing. There is a lot of tribal knowledge hidden in other people's dotfiles. Take the time to scroll through some and don't be shy about adding them to your own. + +I hope this will get you started on the joy of having consistent dotfiles across your computers. + +What's your favorite dotfile trick? Add a comment or tweet me [@mbbroberg][12]. 
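+
+As a parting example, the **-f** flag described above really earns its keep when you link everything in one pass. The following is only a rough sketch, assuming your repository is cloned to **~/dotfiles** and that every top-level dotfile in it (apart from the Git bookkeeping files) should be linked into **$HOME** :
+
+```
+#!/bin/sh
+# Link every top-level dotfile from ~/dotfiles into the home directory.
+# Adjust the source directory and the skip list to match your own layout.
+for file in "$HOME"/dotfiles/.*; do
+  name=$(basename "$file")
+  case "$name" in
+    .|..|.git|.gitignore|.gitmodules) continue ;;  # skip directory entries and Git metadata
+  esac
+  ln -nfs "$file" "$HOME/$name"
+done
+```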
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/move-your-dotfiles-version-control + +作者:[Matthew Broberg][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mbbroberg +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/18/9/shell-dotfile +[2]: https://zachholman.com/2010/08/dotfiles-are-meant-to-be-forked/ +[3]: http://pubs.opengroup.org/onlinepubs/9699919799/utilities/ln.html +[4]: https://www.unix.com/man-page/FreeBSD/1/ln/ +[5]: https://github.com/mbbroberg/dotfiles/blob/master/Rakefile +[6]: https://github.com/bradp/dotfiles +[7]: https://github.com/geerlingguy/dotfiles +[8]: https://github.com/geerlingguy/mac-dev-playbook +[9]: https://opensource.com/article/19/2/git-terminology +[10]: https://opensource.com/downloads/cheat-sheet-git +[11]: http://dotfiles.github.io/ +[12]: https://twitter.com/mbbroberg?lang=en diff --git a/sources/tech/20190320 Nuvola- Desktop Music Player for Streaming Services.md b/sources/tech/20190320 Nuvola- Desktop Music Player for Streaming Services.md new file mode 100644 index 0000000000..ba0d8d550d --- /dev/null +++ b/sources/tech/20190320 Nuvola- Desktop Music Player for Streaming Services.md @@ -0,0 +1,186 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Nuvola: Desktop Music Player for Streaming Services) +[#]: via: (https://itsfoss.com/nuvola-music-player) +[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/) + +Nuvola: Desktop Music Player for Streaming Services +====== + +[Nuvola][1] is not like your usual music players. It’s different because it allows you to play a number of streaming services in a desktop music player. + +Nuvola provides a runtime called [Nuvola Apps Runtime][2] which runs web apps. This is why Nuvola can support a host of streaming services. Some of the major players it supports are: + + * Spotify + * Google Play Music + * YouTube, YouTube Music + * [Pandora][3] + * [SoundCloud][4] + * and many many more. + + + +You can find the full list [here][1] in the Music streaming services section. Apple Music is not supported, if you were wondering. + +Why would you use a streaming music service in a different desktop player when you can run it in a web browser? The advantage with Nuvola is that it provides tight integration with many [desktop environments][5]. + +Ideally it should work with all DEs, but the officially supported ones are GNOME, Unity, and Pantheon (elementary OS). + +### Features of Nuvola Music Player + +Let’s see some of the main features of the open source project Nuvola: + + * Supports a wide variety of music streaming services + * Desktop integration with GNOME, Unity, and Pantheon. + * Keyboard shortcuts with the ability to customize them + * Support for keyboard’s multimedia keys (paid feature) + * Background play with notifications + * [GNOME Media Player][6] extension support + * App Tray indicator + * Dark and Light themes + * Enable or disable features + * Password Manager for web services + * Remote control over internet (paid feature) + * Available for a lot of distros ([Flatpak][7] packages) + + + +Complete list of features is available [here][8]. 
+ +### How to install Nuvola on Ubuntu & other Linux distributions + +Installing Nuvola consists of a few more steps than simply adding a PPA and then installing the software. Since it is based on [Flatpak][7], you have to set up Flatpak first and then you can install Nuvola. + +[Enable Flatpak Support][9] + +The steps are pretty simple. You can follow the guide [here][10] if you want to install using the GUI, however I prefer terminal commands since they’re easier and faster. + +**Warning: If already installed, remove the older version of Nuvola (Click to expand)** + +If you have ever installed Nuvola before, you need to uninstall it to avoid issues. Run these commands in the terminal to do so. + +``` +sudo apt remove nuvolaplayer* +``` + +``` +rm -rf ~/.cache/nuvolaplayer3 ~/.local/share/nuvolaplayer ~/.config/nuvolaplayer3 ~/.local/share/applications/nuvolaplayer3* +``` + +Once you have made sure that your system has Flatpak, you can install Nuvola using this command: + +``` +flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo +flatpak remote-add --if-not-exists nuvola https://dl.tiliado.eu/flatpak/nuvola.flatpakrepo +``` + +This is an optional step but I recommend you install this since the service allows you to commonly configure settings like shortcuts for each of the streaming service that you might use. + +``` +flatpak install nuvola eu.tiliado.Nuvola +``` + +Nuvola supports 29 streaming services. To get them, you need to add those services individually. You can find all the supported music services are available on this [page][10]. + +For the purpose of this tutorial, I’m going to go with [YouTube Music][11]. + +``` +flatpak install nuvola eu.tiliado.NuvolaAppYoutubeMusic +``` + +After this, you should have the app installed and should be able to see the icon if you search for it. + +![Nuvola App specific icons][12] + +Clicking on the icon will pop-up the first time setup. You’ll have to accept the Privacy Policy and then continue. + +![Terms and Conditions page][13] + +After accepting terms and conditions, you should launch into the web app of the respective streaming service, YouTube Music in this case. + +![YouTube Music web app running on Nuvola Runtime][14] + +In case of installation on other distributions, specific guidelines are available on the [Nuvola website][15]. + +### My experience with Nuvola Music Player + +Initially I thought that it wouldn’t be too different than simply running the web app in [Firefox][16], since many desktop environments like KDE support media controls and shortcuts for media playing in Firefox. + +However, this isn’t the case with many other desktops environments and that’s where Nuvola comes in handy. Often, it’s also faster to access than loading the website on the browser. + +Once loaded, it behaves pretty much like a normal web app with the benefit of keyboard shortcuts. Speaking of shortcuts, you should check out the list of must know [Ubuntu shortcuts][17]. + +![Viewing an Artist’s page][18] + +Integration with the DE comes in handy when you quickly want to change a song or play/pause your music without leaving your current application. Nuvola gives you access in GNOME notifications as well as provides an app tray icon. + + * ![Notification music controls][19] + + * ![App tray music controls][20] + + + + +Keyboard shortcuts work well, globally as well as in-app. You get a notification when the song changes. Whether you do it yourself or it automatically switches to the next song. 
+ +![][21] + +By default, very few keyboard shortcuts are provided. However you can enable them for almost everything you can do with the app. For example I set the song change shortcuts to Ctrl + Arrow keys as you can see in the screenshot. + +![Keyboard Shortcuts][22] + +All in all, it works pretty well and it’s fast and responsive. Definitely more so than your usual Snap app. + +**Some criticism** + +Some thing that did not please me as much was the installation size. Since it requires a browser back-end and GNOME integration it essentially installs a browser and necessary GNOME libraries for Flatpak, so that results in having to install almost 350MB in dependencies. + +After that, you install individual apps. The individual apps themselves are not heavy at all. But if you just use one streaming service, having a 300+ MB installation might not be ideal if you’re concerned about disk space. + +Nuvola also does not support local music, at least as far as I could find. + +**Conclusion** + +Hope this article helped you to know more about Nuvola Music Player and its features. If you like such different applications, why not take a look at some of the [lesser known music players for Linux][23]? + +As always, if you have any suggestions or questions, I look forward to reading your comments. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/nuvola-music-player + +作者:[Atharva Lele][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/atharva/ +[b]: https://github.com/lujun9972 +[1]: https://nuvola.tiliado.eu/ +[2]: https://nuvola.tiliado.eu/#fn:1 +[3]: https://itsfoss.com/install-pandora-linux-client/ +[4]: https://itsfoss.com/install-soundcloud-linux/ +[5]: https://itsfoss.com/best-linux-desktop-environments/ +[6]: https://extensions.gnome.org/extension/55/media-player-indicator/ +[7]: https://flatpak.org/ +[8]: http://tiliado.github.io/nuvolaplayer/documentation/4/explore.html +[9]: https://itsfoss.com/flatpak-guide/ +[10]: https://nuvola.tiliado.eu/nuvola/ubuntu/bionic/ +[11]: https://nuvola.tiliado.eu/app/youtube_music/ubuntu/bionic/ +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_youtube_music_icon.png?resize=800%2C450&ssl=1 +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_eula.png?resize=800%2C450&ssl=1 +[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_youtube_music.png?resize=800%2C450&ssl=1 +[15]: https://nuvola.tiliado.eu/index/ +[16]: https://itsfoss.com/why-firefox/ +[17]: https://itsfoss.com/ubuntu-shortcuts/ +[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_web_player.png?resize=800%2C449&ssl=1 +[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_music_controls.png?fit=800%2C450&ssl=1 +[20]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_web_player2.png?fit=800%2C450&ssl=1 +[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_song_change_notification-e1553077619208.png?ssl=1 +[22]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/nuvola_shortcuts.png?resize=800%2C450&ssl=1 +[23]: https://itsfoss.com/lesser-known-music-players-linux/ diff --git a/sources/tech/20190321 4 ways to jumpstart productivity at work.md b/sources/tech/20190321 4 ways to jumpstart productivity at work.md new file mode 100644 index 
0000000000..679fa75607 --- /dev/null +++ b/sources/tech/20190321 4 ways to jumpstart productivity at work.md @@ -0,0 +1,96 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 ways to jumpstart productivity at work) +[#]: via: (https://opensource.com/article/19/3/guide-being-more-productive) +[#]: author: (Sarah Wall https://opensource.com/users/sarahwall) + +4 ways to jumpstart productivity at work +====== + +This article includes six open source productivity tools. + +![][1] + +Time poverty—the idea that there's not enough time to do all the work we need to do—is it a perception or a reality? + +The truth is you'll never get more than 24 hours out of any day. Working longer hours doesn't help. Your productivity actually decreases the longer you work in a given day. Your perception, or intuitive understanding of your time, is what matters. One key to managing productivity is how you use the time you've got. + +You have lots of time that you can use more efficiently, including time lost to ineffective meetings, distractions, and context switching between tasks. By spending your time more wisely, you can get more done and achieve higher overall job performance. You will also have a higher level of job satisfaction and feel lower levels of stress. + +### Jumpstart your productivity + +#### 1\. Eliminate distractions + +When you have too many things vying for your attention, it slows you down and decreases your productivity. Do your best to remove every distraction that pulls you off tasks. + +Cellphones, email, and messaging apps are the most common drains on productivity. Set the ringer on your phone to vibrate, set specific times for checking email, and close irrelevant browser tabs. With this approach, your work will be interrupted less throughout the day. + +#### 2\. Make your to-do list _verb-oriented_ + +To-do lists are a great way to help you focus on exactly what you need to accomplish each day. Some people do best with a physical list, like a notebook, and others do better with digital tools. Check out these suggestions for [open source productivity tools][2] to help you manage your workflow. Or check these six open source tools to stay organized: + + * [Joplin, a note-taking app][3] + * [Wekan, an open source kanban board][4] + * [TaskBoard, a lightweight kanban board][5] + * [Go For It, a flexible to-do list application][6] + * [Org mode without Emacs][7] + * [Freeplane, an open source mind-mapping application][8] + + + +Your list can be as sophisticated or as simple as you like, but just making a list is not enough. What goes on your list makes all the difference. Every item that goes on your list should be actionable. The trick is to make sure there's a verb. For example, "Smith project" is not actionable enough. "Outline key deliverables on Smith project" gives you a more concrete task to complete. + +#### 3\. Stick to the 10-minute rule + +Overwhelmed by an unclear or unwieldy task? Break it into 10-minute mini-tasks instead. This can be a great way to take something unmanageable and turn it into something achievable. + +The beauty of 10-minute tasks is they can be fit into many parts of your day. When you get into the office in the morning and are feeling fresh, kick off your day with a burst of productivity with a few 10-minute tasks. Losing momentum in the afternoon? A 10-minute job can help you regain speed. + +Ten-minute tasks are also a good way to identify tasks that can be delegated to others. 
The ability to delegate work is often one of the most effective management techniques. By finding a simple task that can be accomplished by another member of your team, you can make short work of a big job. + +#### 4\. Take a break + +Another drain on productivity is the urge to keep pressing ahead on a task to complete it without taking a break. Suddenly you feel really fatigued or hungry, and you realize you haven't gone to the bathroom in hours! Your concentration is affected, and therefore your productivity decreases. + +Set benchmarks for taking breaks and stick to them. For example, commit to once per hour to get up and move around for five minutes. If you're pressed for time, stand up and stretch for two minutes. Changing your body position and focusing on the present moment will help relieve any mental tension that has built up. + +Hydrate your mind with a glass of water. When your body is not properly hydrated, it can put increased stress on your brain. As little as a one to three percent decrease in hydration can negatively affect your memory, concentration, and decision-making. + +### Don't fall into the time-poverty trap + +Time is limited and time poverty is just an idea. How you choose to spend the time you have each day is what's important. When you develop new, healthy habits, you can increase your productivity and direct your time in the ways that give the most value. + +* * * + +_This article was adapted from "[The Keys to Productivity][9]" on ImageX's blog._ + +_Sarah Wall will present_ [_Mindless multitasking: a dummy's guide to productivity_][10], _at_ [_DrupalCon_][11] _in Seattle, April 8-12, 2019._ + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/guide-being-more-productive + +作者:[Sarah Wall][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/sarahwall +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_commun_4604_02_mech_connections_rhcz0.5x.png?itok=YPPU4dMj +[2]: https://opensource.com/article/16/11/open-source-productivity-hacks +[3]: https://opensource.com/article/19/1/productivity-tool-joplin +[4]: https://opensource.com/article/19/1/productivity-tool-wekan +[5]: https://opensource.com/article/19/1/productivity-tool-taskboard +[6]: https://opensource.com/article/19/1/productivity-tool-go-for-it +[7]: https://opensource.com/article/19/1/productivity-tool-org-mode +[8]: https://opensource.com/article/19/1/productivity-tool-freeplane +[9]: https://imagexmedia.com/managing-productivity +[10]: https://events.drupal.org/seattle2019/sessions/mindless-multitasking-dummy%E2%80%99s-guide-productivity +[11]: https://events.drupal.org/seattle2019 diff --git a/sources/tech/20190321 How To Setup Linux Media Server Using Jellyfin.md b/sources/tech/20190321 How To Setup Linux Media Server Using Jellyfin.md new file mode 100644 index 0000000000..9c3de11bc5 --- /dev/null +++ b/sources/tech/20190321 How To Setup Linux Media Server Using Jellyfin.md @@ -0,0 +1,268 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Setup Linux Media Server Using Jellyfin) +[#]: via: (https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/) +[#]: author: (sk 
https://www.ostechnix.com/author/sk/) + +How To Setup Linux Media Server Using Jellyfin +====== + +![Setup Linux Media Server Using Jellyfin][1] + +We’ve already written about setting up your own streaming media server on Linux using [**Streama**][2]. Today, we will going to setup yet another media server using **Jellyfin**. Jellyfin is a free, cross-platform and open source alternative to propriety media streaming applications such as **Emby** and **Plex**. The main developer of Jellyfin forked it from Emby after the announcement of Emby transitioning to a proprietary model. Jellyfin doesn’t include any premium features, licenses or membership plans. It is completely free and open source project supported by hundreds of community members. Using jellyfin, we can instantly setup Linux media server in minutes and access it via LAN/WAN from any devices using multiple apps. + +### Setup Linux Media Server Using Jellyfin + +Jellyfin supports GNU/Linux, Mac OS and Microsoft Windows operating systems. You can install it on your Linux distribution as described below. + +##### Install Jellyfin On Linux + +As of writing this guide, Jellyfin packages are available for most popular Linux distributions, such as Arch Linux, Debian, CentOS, Fedora and Ubuntu. + +On **Arch Linux** and its derivatives like **Antergos** , **Manjaro Linux** , you can install Jellyfin using any AUR helper tools, for example [**YaY**][3]. + +``` +$ yay -S jellyfin-git +``` + +On **CentOS/RHEL** : + +Download the latest Jellyfin rpm package from [**here**][4] and install it as shown below. + +``` +$ wget https://repo.jellyfin.org/releases/server/centos/jellyfin-10.2.2-1.el7.x86_64.rpm + +$ sudo yum localinstall jellyfin-10.2.2-1.el7.x86_64.rpm +``` + +On **Fedora** : + +Download Jellyfin for Fedora from [**here**][5]. + +``` +$ wget https://repo.jellyfin.org/releases/server/fedora/jellyfin-10.2.2-1.fc29.x86_64.rpm + +$ sudo dnf install jellyfin-10.2.2-1.fc29.x86_64.rpm +``` + +On **Debian** : + +Install HTTPS transport for APT if it is not installed already: + +``` +$ sudo apt install apt-transport-https +``` + +Import Jellyfin GPG signing key:`` + +``` +$ wget -O - https://repo.jellyfin.org/debian/jellyfin_team.gpg.key | sudo apt-key add - +``` + +Add Jellyfin repository: + +``` +$ sudo touch /etc/apt/sources.list.d/jellyfin.list + +$ echo "deb [arch=amd64] https://repo.jellyfin.org/debian $( lsb_release -c -s ) main" | sudo tee /etc/apt/sources.list.d/jellyfin.list +``` + +Finally, update Jellyfin repository and install Jellyfin using commands:`` + +``` +$ sudo apt update + +$ sudo apt install jellyfin +``` + +On **Ubuntu 18.04 LTS** : + +Install HTTPS transport for APT if it is not installed already: + +``` +$ sudo apt install apt-transport-https +``` + +Import and add Jellyfin GPG signing key:`` + +``` +$ wget -O - https://repo.jellyfin.org/debian/jellyfin_team.gpg.key | sudo apt-key add - +``` + +Add the Jellyfin repository: + +``` +$ sudo touch /etc/apt/sources.list.d/jellyfin.list + +$ echo "deb https://repo.jellyfin.org/ubuntu bionic main" | sudo tee /etc/apt/sources.list.d/jellyfin.list +``` + +For Ubuntu 16.04, just replace **bionic** with **xenial** in the above URL. 
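+
+For instance, on Ubuntu 16.04 the repository entry would be written like this (only the release codename changes, everything else stays the same):
+
+```
+$ echo "deb https://repo.jellyfin.org/ubuntu xenial main" | sudo tee /etc/apt/sources.list.d/jellyfin.list
+```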
+ +Finally, update Jellyfin repository and install Jellyfin using commands:`` + +``` +$ sudo apt update + +$ sudo apt install jellyfin +``` + +##### Start Jellyfin service + +Run the following commands to enable and start jellyfin service on every reboot: + +``` +$ sudo systemctl enable jellyfin + +$ sudo systemctl start jellyfin +``` + +To check if the service has been started or not, run: + +``` +$ sudo systemctl status jellyfin +``` + +Sample output: + +``` +● jellyfin.service - Jellyfin Media Server +Loaded: loaded (/lib/systemd/system/jellyfin.service; enabled; vendor preset: enabled) +Drop-In: /etc/systemd/system/jellyfin.service.d +└─jellyfin.service.conf +Active: active (running) since Wed 2019-03-20 12:20:19 UTC; 1s ago +Main PID: 4556 (jellyfin) +Tasks: 11 (limit: 2320) +CGroup: /system.slice/jellyfin.service +└─4556 /usr/bin/jellyfin --datadir=/var/lib/jellyfin --configdir=/etc/jellyfin --logdir=/var/log/jellyfin --cachedir=/var/cache/jellyfin --r + +Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Photos, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null +Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Server.Implementations, Version=10.2.2.0, Culture=neutral, PublicKeyToken=nu +Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading MediaBrowser.MediaEncoding, Version=10.2.2.0, Culture=neutral, PublicKeyToken=nul +Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Dlna, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null +Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading MediaBrowser.LocalMetadata, Version=10.2.2.0, Culture=neutral, PublicKeyToken=nul +Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading Emby.Notifications, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null +Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading MediaBrowser.XbmcMetadata, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null +Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Loading jellyfin, Version=10.2.2.0, Culture=neutral, PublicKeyToken=null +Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Sqlite version: 3.26.0 +Mar 20 12:20:21 ubuntuserver jellyfin[4556]: [12:20:21] [INF] Sqlite compiler options: COMPILER=gcc-5.4.0 20160609,DEFAULT_FOREIGN_KEYS,ENABLE_COLUMN_M +``` + +If you see an output something, congratulations! Jellyfin service has been started. + +Next, we should do some initial configuration. + +##### Configure Jellyfin + +Once jellyfin is installed, open the browser and navigate to – **http:// :8096** or **http:// :8096** URL. + +You will see the following welcome screen. Select your preferred language and click Next. + +![][6] + +Enter your user details. You can add more users later from the Jellyfin Dashboard. + +![][7] + +The next step is to select media files which we want to stream. To do so, click “Add media Library” button: + +![][8] + +Choose the content type (i.e audio, video, movies etc.,), display name and click plus (+) sign next to the Folders icon to choose the location where you kept your media files. You can further choose other library settings such as the preferred download language, country etc. Click Ok after choosing the preferred options. + +![][9] + +Similarly, add all of the media files. Once you have chosen everything to stream, click Next. 
+ +![][10] + +Choose the Metadata language and click Next: + +![][11] + +Next, you need to configure whether you want to allow remote connections to this media server. Make sure you have allowed the remote connections. Also, enable automatic port mapping and click Next: + +![][12] + +You’re all set! Click Finish to complete Jellyfin configuration. + +![][13] + +You will now be redirected to Jellyfin login page. Click on the username and enter it’s password which we setup earlier. + +![][14] + +This is how Jellyfin dashboard looks like. + +![][15] + +As you see in the screenshot, all of your media files are shown in the dashboard itself under My Media section. Just click on the any media file of your choice and start watching it!! + +![][16] + +You can access this Jellyfin media server from any systems on the network using URL – . You need not to install any extra apps. All you need is a modern web browser. + +If you want to change anything or reconfigure, click on the three horizontal bars from the Home screen. Here, you can add users, media files, change playback settings, add TV/DVR, install plugins, change default port no and a lot more settings. + +![][17] + +For more details, check out [**Jellyfin official documentation**][18] page. + +And, that’s all for now. As you can see setting up a streaming media server on Linux is no big-deal. I tested it on my Ubuntu 18.04 LTS VM. It worked fine out of the box. I can be able to watch the movies from other systems in my LAN. If you’re looking for easy, quick and free solution for hosting a media server, Jellyfin is a good choice. + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: https://www.ostechnix.com/streama-setup-your-own-streaming-media-server-in-minutes/ +[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[4]: https://repo.jellyfin.org/releases/server/centos/ +[5]: https://repo.jellyfin.org/releases/server/fedora/ +[6]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-1.png +[7]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-2-1.png +[8]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-3-1.png +[9]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-4-1.png +[10]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-5-1.png +[11]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-6.png +[12]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-7.png +[13]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-8-1.png +[14]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-9.png +[15]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-10.png +[16]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-11.png +[17]: http://www.ostechnix.com/wp-content/uploads/2019/03/jellyfin-12.png +[18]: https://jellyfin.readthedocs.io/en/latest/ +[19]: https://github.com/jellyfin/jellyfin +[20]: http://feedburner.google.com/fb/a/mailverify?uri=ostechnix (Subscribe to our Email newsletter) +[21]: 
https://www.paypal.me/ostechnix (Donate Via PayPal) +[22]: http://ostechnix.tradepub.com/category/information-technology/1207/ +[23]: https://www.facebook.com/ostechnix/ +[24]: https://twitter.com/ostechnix +[25]: https://plus.google.com/+SenthilkumarP/ +[26]: https://www.linkedin.com/in/ostechnix +[27]: http://feeds.feedburner.com/Ostechnix +[28]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=reddit (Click to share on Reddit) +[29]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=twitter (Click to share on Twitter) +[30]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=facebook (Click to share on Facebook) +[31]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=linkedin (Click to share on LinkedIn) +[32]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=pocket (Click to share on Pocket) +[33]: https://api.whatsapp.com/send?text=How%20To%20Setup%20Linux%20Media%20Server%20Using%20Jellyfin%20https%3A%2F%2Fwww.ostechnix.com%2Fhow-to-setup-linux-media-server-using-jellyfin%2F (Click to share on WhatsApp) +[34]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=telegram (Click to share on Telegram) +[35]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/?share=email (Click to email this to a friend) +[36]: https://www.ostechnix.com/how-to-setup-linux-media-server-using-jellyfin/#print (Click to print) diff --git a/sources/tech/20190321 How to use Spark SQL- A hands-on tutorial.md b/sources/tech/20190321 How to use Spark SQL- A hands-on tutorial.md new file mode 100644 index 0000000000..0e4be0aa01 --- /dev/null +++ b/sources/tech/20190321 How to use Spark SQL- A hands-on tutorial.md @@ -0,0 +1,540 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to use Spark SQL: A hands-on tutorial) +[#]: via: (https://opensource.com/article/19/3/apache-spark-and-dataframes-tutorial) +[#]: author: (Dipanjan Sarkar https://opensource.com/users/djsarkar) + +How to use Spark SQL: A hands-on tutorial +====== + +This tutorial explains how to leverage relational databases at scale using Spark SQL and DataFrames. + +![Team checklist and to dos][1] + +In the [first part][2] of this series, we looked at advances in leveraging the power of relational databases "at scale" using [Apache Spark SQL and DataFrames][3]. We will now do a simple tutorial based on a real-world dataset to look at how to use Spark SQL. We will be using Spark DataFrames, but the focus will be more on using SQL. In a separate article, I will cover a detailed discussion around Spark DataFrames and common operations. + +I love using cloud services for my machine learning, deep learning, and even big data analytics needs, instead of painfully setting up my own Spark cluster. I will be using the Databricks Platform for my Spark needs. Databricks is a company founded by the creators of Apache Spark that aims to help clients with cloud-based big data processing using Spark. + +![Apache Spark and Databricks][4] + +The simplest (and free of charge) way is to go to the [Try Databricks page][5] and [sign up for a community edition][6] account. You get a cloud-based cluster, which is a single-node cluster with 6GB and unlimited notebooks—not bad for a free version! I recommend using the Databricks Platform if you have serious needs for analyzing big data. 
+ +Let's get started with our case study now. Feel free to create a new notebook from your home screen in Databricks or your own Spark cluster. + +![Create a notebook][7] + +You can also import my notebook containing the entire tutorial, but please make sure to run every cell and play around and explore with it, instead of just reading through it. Unsure of how to use Spark on Databricks? Follow [this short but useful tutorial][8]. + +This tutorial will familiarize you with essential Spark capabilities to deal with structured data often obtained from databases or flat files. We will explore typical ways of querying and aggregating relational data by leveraging concepts of DataFrames and SQL using Spark. We will work on an interesting dataset from the [KDD Cup 1999][9] and try to query the data using high-level abstractions like the dataframe that has already been a hit in popular data analysis tools like R and Python. We will also look at how easy it is to build data queries using the SQL language and retrieve insightful information from our data. This also happens at scale without us having to do a lot more since Spark distributes these data structures efficiently in the backend, which makes our queries scalable and as efficient as possible. We'll start by loading some basic dependencies. + +``` +import pandas as pd +import matplotlib.pyplot as plt +plt.style.use('fivethirtyeight') +``` + +#### Data retrieval + +The [KDD Cup 1999][9] dataset was used for the Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, the Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network-intrusion detector, a predictive model capable of distinguishing between _bad connections_ , called intrusions or attacks, and _good, normal connections_. This database contains a standard set of data to be audited, which includes a wide variety of intrusions simulated in a military network environment. + +We will be using the reduced dataset **kddcup.data_10_percent.gz** that contains nearly a half-million network interactions. We will download this Gzip file from the web locally and then work on it. If you have a good, stable internet connection, feel free to download and work with the full dataset, **kddcup.data.gz**. + +#### Working with data from the web + +Dealing with datasets retrieved from the web can be a bit tricky in Databricks. Fortunately, we have some excellent utility packages like **dbutils** that help make our job easier. Let's take a quick look at some essential functions for this module. + +``` +dbutils.help() +``` + +``` +This module provides various utilities for users to interact with the rest of Databricks. 
+ +fs: DbfsUtils -> Manipulates the Databricks filesystem (DBFS) from the console +meta: MetaUtils -> Methods to hook into the compiler (EXPERIMENTAL) +notebook: NotebookUtils -> Utilities for the control flow of a notebook (EXPERIMENTAL) +preview: Preview -> Utilities under preview category +secrets: SecretUtils -> Provides utilities for leveraging secrets within notebooks +widgets: WidgetsUtils -> Methods to create and get bound value of input widgets inside notebooks +``` + +#### Retrieve and store data in Databricks + +We will now leverage the Python **urllib** library to extract the KDD Cup 99 data from its web repository, store it in a temporary location, and move it to the Databricks filesystem, which can enable easy access to this data for analysis + +> **Note:** If you skip this step and download the data directly, you may end up getting a **InvalidInputException: Input path does not exist** error. + +``` +import urllib +urllib.urlretrieve("", "/tmp/kddcup_data.gz") +dbutils.fs.mv("file:/tmp/kddcup_data.gz", "dbfs:/kdd/kddcup_data.gz") +display(dbutils.fs.ls("dbfs:/kdd")) +``` + +![Spark Job kddcup_data.gz][10] + +#### Build the KDD dataset + +Now that we have our data stored in the Databricks filesystem, let's load up our data from the disk into Spark's traditional abstracted data structure, the [Resilient Distributed Dataset][11] (RDD). + +``` +data_file = "dbfs:/kdd/kddcup_data.gz" +raw_rdd = sc.textFile(data_file).cache() +raw_rdd.take(5) +``` + +![Data in Resilient Distributed Dataset \(RDD\)][12] + +You can also verify the type of data structure of our data (RDD) using the following code. + +``` +type(raw_rdd) +``` + +![output][13] + +#### Build a Spark DataFrame on our data + +A Spark DataFrame is an interesting data structure representing a distributed collecion of data. Typically the entry point into all SQL functionality in Spark is the **SQLContext** class. To create a basic instance of this call, all we need is a **SparkContext** reference. In Databricks, this global context object is available as **sc** for this purpose. + +``` +from pyspark.sql import SQLContext +sqlContext = SQLContext(sc) +sqlContext +``` + +![output][14] + +#### Split the CSV data + +Each entry in our RDD is a comma-separated line of data, which we first need to split before we can parse and build our dataframe. + +``` +csv_rdd = raw_rdd.map(lambda row: row.split(",")) +print(csv_rdd.take(2)) +print(type(csv_rdd)) +``` + +![Splitting RDD entries][15] + +#### Check the total number of features (columns) + +We can use the following code to check the total number of potential columns in our dataset. + +``` +len(csv_rdd.take(1)[0]) + +Out[57]: 42 +``` + +#### Understand and parse data + +The KDD 99 Cup data consists of different attributes captured from connection data. You can obtain the [full list of attributes in the data][16] and further details pertaining to the [description for each attribute/column][17]. We will just be using some specific columns from the dataset, the details of which are specified as follows. + +feature num | feature name | description | type +---|---|---|--- +1 | duration | length (number of seconds) of the connection | continuous +2 | protocol_type | type of the protocol, e.g., tcp, udp, etc. | discrete +3 | service | network service on the destination, e.g., http, telnet, etc. 
| discrete +4 | src_bytes | number of data bytes from source to destination | continuous +5 | dst_bytes | number of data bytes from destination to source | continuous +6 | flag | normal or error status of the connection | discrete +7 | wrong_fragment | number of "wrong" fragments | continuous +8 | urgent | number of urgent packets | continuous +9 | hot | number of "hot" indicators | continuous +10 | num_failed_logins | number of failed login attempts | continuous +11 | num_compromised | number of "compromised" conditions | continuous +12 | su_attempted | 1 if "su root" command attempted; 0 otherwise | discrete +13 | num_root | number of "root" accesses | continuous +14 | num_file_creations | number of file creation operations | continuous + +We will be extracting the following columns based on their positions in each data point (row) and build a new RDD as follows. + +``` +from pyspark.sql import Row + +parsed_rdd = csv_rdd.map(lambda r: Row( + duration=int(r[0]), + protocol_type=r[1], + service=r[2], + flag=r[3], + src_bytes=int(r[4]), + dst_bytes=int(r[5]), + wrong_fragment=int(r[7]), + urgent=int(r[8]), + hot=int(r[9]), + num_failed_logins=int(r[10]), + num_compromised=int(r[12]), + su_attempted=r[14], + num_root=int(r[15]), + num_file_creations=int(r[16]), + label=r[-1] + ) +) +parsed_rdd.take(5) +``` + +![Extracting columns][18] + +#### Construct the DataFrame + +Now that our data is neatly parsed and formatted, let's build our DataFrame! +``` + +``` + +df = sqlContext.createDataFrame(parsed_rdd) +display(df.head(10)) + +![DataFrame][19] + +You can also now check out the schema of our DataFrame using the following code. + +``` +df.printSchema() +``` + +![Dataframe schema][20] + +#### Build a temporary table + +We can leverage the **registerTempTable()** function to build a temporary table to run SQL commands on our DataFrame at scale! A point to remember is that the lifetime of this temp table is tied to the session. It creates an in-memory table that is scoped to the cluster in which it was created. The data is stored using Hive's highly optimized, in-memory columnar format. + +You can also check out **saveAsTable()** , which creates a permanent, physical table stored in S3 using the Parquet format. This table is accessible to all clusters. The table metadata, including the location of the file(s), is stored within the Hive metastore. + +``` +help(df.registerTempTable) +``` + +![help\(df.registerTempTable\)][21] + +``` +df.registerTempTable("connections") +``` + +### Execute SQL at Scale + +Let's look at a few examples of how we can run SQL queries on our table based off of our dataframe. We will start with some simple queries and then look at aggregations, filters, sorting, sub-queries, and pivots in this tutorial. + +#### Connections based on the protocol type + +Let's look at how we can get the total number of connections based on the type of connectivity protocol. First, we will get this information using normal DataFrame DSL syntax to perform aggregations. + +``` +display(df.groupBy('protocol_type') +.count() +.orderBy('count', ascending=False)) +``` + +![Total number of connections][22] + +Can we also use SQL to perform the same aggregation? Yes, we can leverage the table we built earlier for this! 
+ +``` +protocols = sqlContext.sql(""" + SELECT protocol_type, count(*) as freq + FROM connections + GROUP BY protocol_type + ORDER BY 2 DESC + """) +display(protocols) +``` + +![protocol type and frequency][23] + +You can clearly see that you get the same results and don't need to worry about your background infrastructure or how the code is executed. Just write simple SQL! + +#### Connections based on good or bad (attack types) signatures + +We will now run a simple aggregation to check the total number of connections based on good (normal) or bad (intrusion attacks) types. + +``` +labels = sqlContext.sql(""" + SELECT label, count(*) as freq + FROM connections + GROUP BY label + ORDER BY 2 DESC +""") +display(labels) +``` + +![Connection by type][24] + +We have a lot of different attack types. We can visualize this in the form of a bar chart. The simplest way is to use the excellent interface options in the Databricks notebook. + +![Databricks chart types][25] + +This gives us a nice-looking bar chart, which you can customize further by clicking on **Plot Options**. + +![Bar chart][26] + +Another way is to write the code to do it. You can extract the aggregated data as a Pandas DataFrame and plot it as a regular bar chart. + +``` +labels_df = pd.DataFrame(labels.toPandas()) +labels_df.set_index("label", drop=True,inplace=True) +labels_fig = labels_df.plot(kind='barh') + +plt.rcParams["figure.figsize"] = (7, 5) +plt.rcParams.update({'font.size': 10}) +plt.tight_layout() +display(labels_fig.figure) +``` + +![Bar chart][27] + +### Connections based on protocols and attacks + +Let's look at which protocols are most vulnerable to attacks by using the following SQL query. + +``` + +attack_protocol = sqlContext.sql(""" + SELECT + protocol_type, + CASE label + WHEN 'normal.' THEN 'no attack' + ELSE 'attack' + END AS state, + COUNT(*) as freq + FROM connections + GROUP BY protocol_type, state + ORDER BY 3 DESC +""") +display(attack_protocol) +``` + +![Protocols most vulnerable to attacks][28] + +Well, it looks like ICMP connections, followed by TCP connections have had the most attacks. + +#### Connection stats based on protocols and attacks + +Let's take a look at some statistical measures pertaining to these protocols and attacks for our connection requests. + +``` +attack_stats = sqlContext.sql(""" + SELECT + protocol_type, + CASE label + WHEN 'normal.' THEN 'no attack' + ELSE 'attack' + END AS state, + COUNT(*) as total_freq, + ROUND(AVG(src_bytes), 2) as mean_src_bytes, + ROUND(AVG(dst_bytes), 2) as mean_dst_bytes, + ROUND(AVG(duration), 2) as mean_duration, + SUM(num_failed_logins) as total_failed_logins, + SUM(num_compromised) as total_compromised, + SUM(num_file_creations) as total_file_creations, + SUM(su_attempted) as total_root_attempts, + SUM(num_root) as total_root_acceses + FROM connections + GROUP BY protocol_type, state + ORDER BY 3 DESC +""") +display(attack_stats) +``` + +![Statistics pertaining to protocols and attacks][29] + +Looks like the average amount of data being transmitted in TCP requests is much higher, which is not surprising. Interestingly, attacks have a much higher average payload of data being transmitted from the source to the destination. + +#### Filtering connection stats based on the TCP protocol by service and attack type + +Let's take a closer look at TCP attacks, given that we have more relevant data and statistics for the same. We will now aggregate different types of TCP attacks based on service and attack type and observe different metrics. 
+ +``` +tcp_attack_stats = sqlContext.sql(""" +SELECT +service, +label as attack_type, +COUNT(*) as total_freq, +ROUND(AVG(duration), 2) as mean_duration, +SUM(num_failed_logins) as total_failed_logins, +SUM(num_file_creations) as total_file_creations, +SUM(su_attempted) as total_root_attempts, +SUM(num_root) as total_root_acceses +FROM connections +WHERE protocol_type = 'tcp' +AND label != 'normal.' +GROUP BY service, attack_type +ORDER BY total_freq DESC +""") +display(tcp_attack_stats) +``` + +![TCP attack data][30] + +There are a lot of attack types, and the preceding output shows a specific section of them. + +#### Filtering connection stats based on the TCP protocol by service and attack type + +We will now filter some of these attack types by imposing some constraints in our query based on duration, file creations, and root accesses. + +``` +tcp_attack_stats = sqlContext.sql(""" +SELECT +service, +label as attack_type, +COUNT(*) as total_freq, +ROUND(AVG(duration), 2) as mean_duration, +SUM(num_failed_logins) as total_failed_logins, +SUM(num_file_creations) as total_file_creations, +SUM(su_attempted) as total_root_attempts, +SUM(num_root) as total_root_acceses +FROM connections +WHERE (protocol_type = 'tcp' +AND label != 'normal.') +GROUP BY service, attack_type +HAVING (mean_duration >= 50 +AND total_file_creations >= 5 +AND total_root_acceses >= 1) +ORDER BY total_freq DESC +""") +display(tcp_attack_stats) +``` + +![Filtered by attack type][31] + +It's interesting to see that [multi-hop attacks][32] can get root accesses to the destination hosts! + +#### Subqueries to filter TCP attack types based on service + +Let's try to get all the TCP attacks based on service and attack type such that the overall mean duration of these attacks is greater than zero ( **> 0** ). For this, we can do an inner query with all aggregation statistics and extract the relevant queries and apply a mean duration filter in the outer query, as shown below. + +``` +tcp_attack_stats = sqlContext.sql(""" +SELECT +t.service, +t.attack_type, +t.total_freq +FROM +(SELECT +service, +label as attack_type, +COUNT(*) as total_freq, +ROUND(AVG(duration), 2) as mean_duration, +SUM(num_failed_logins) as total_failed_logins, +SUM(num_file_creations) as total_file_creations, +SUM(su_attempted) as total_root_attempts, +SUM(num_root) as total_root_acceses +FROM connections +WHERE protocol_type = 'tcp' +AND label != 'normal.' +GROUP BY service, attack_type +ORDER BY total_freq DESC) as t +WHERE t.mean_duration > 0 +""") +display(tcp_attack_stats) +``` + +![TCP attacks based on service and attack type][33] + +This is nice! Now another interesting way to view this data is to use a pivot table, where one attribute represents rows and another one represents columns. Let's see if we can leverage Spark DataFrames to do this! + +#### Build a pivot table from aggregated data + +We will build upon the previous DataFrame object where we aggregated attacks based on type and service. For this, we can leverage the power of Spark DataFrames and the DataFrame DSL. + +``` +display((tcp_attack_stats.groupby('service') +.pivot('attack_type') +.agg({'total_freq':'max'}) +.na.fill(0)) +) +``` + +![Pivot table][34] + +We get a nice, neat pivot table showing all the occurrences based on service and attack type! + +### Next steps + +I would encourage you to go out and play with Spark SQL and DataFrames. You can even [import my notebook][35] and play with it in your own account. 
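If you want a concrete idea to experiment with, window functions (covered in more depth in the repository linked below) are a good place to start. Here is a small sketch, assuming the `connections` temp table registered earlier is still available in your session, that ranks attack labels by frequency within each protocol in plain SQL:

```
attack_ranks = sqlContext.sql("""
    SELECT
        protocol_type,
        label,
        COUNT(*) as freq,
        RANK() OVER (PARTITION BY protocol_type ORDER BY COUNT(*) DESC) as freq_rank
    FROM connections
    GROUP BY protocol_type, label
    ORDER BY protocol_type, freq_rank
""")
display(attack_ranks)
```

The `OVER (PARTITION BY ...)` clause computes the rank within each protocol on top of the grouped counts, and the result comes back as an ordinary DataFrame that you can keep querying, plotting, or converting to Pandas just like the earlier examples.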
+ +Feel free to refer to [my GitHub repository][36] also for all the code and notebooks used in this article. It covers things we didn't cover here, including: + + * Joins + * Window functions + * Detailed operations and transformations of Spark DataFrames + + + +You can also access my tutorial as a [Jupyter Notebook][37], in case you want to use it offline. + +There are plenty of articles and tutorials available online, so I recommend you check them out. One useful resource is Databricks' complete [guide to Spark SQL][38]. + +Thinking of working with JSON data but unsure of using Spark SQL? Databricks supports it! Check out this excellent guide to [JSON support in Spark SQL][39]. + +Interested in advanced concepts like window functions and ranks in SQL? Take a look at "[Introducing Window Functions in Spark SQL][40]." + +I will write another article covering some of these concepts in an intuitive way, which should be easy for you to understand. Stay tuned! + +In case you have any feedback or queries, you can reach out to me on [LinkedIn][41]. + +* * * + +*This article originally appeared on Medium's [Towards Data Science][42] channel and is republished with permission. * + +* * * + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/apache-spark-and-dataframes-tutorial + +作者:[Dipanjan (DJ) Sarkar (Red Hat)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/djsarkar +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos) +[2]: https://opensource.com/article/19/3/sql-scale-apache-spark-sql-and-dataframes +[3]: https://spark.apache.org/sql/ +[4]: https://opensource.com/sites/default/files/uploads/13_spark-databricks.png (Apache Spark and Databricks) +[5]: https://databricks.com/try-databricks +[6]: https://databricks.com/signup#signup/community +[7]: https://opensource.com/sites/default/files/uploads/14_create-notebook.png (Create a notebook) +[8]: https://databricks.com/spark/getting-started-with-apache-spark +[9]: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html +[10]: https://opensource.com/sites/default/files/uploads/15_dbfs-kdd-kddcup_data-gz.png (Spark Job kddcup_data.gz) +[11]: https://spark.apache.org/docs/latest/rdd-programming-guide.html#resilient-distributed-datasets-rdds +[12]: https://opensource.com/sites/default/files/uploads/16_rdd-data.png (Data in Resilient Distributed Dataset (RDD)) +[13]: https://opensource.com/sites/default/files/uploads/16a_output.png (output) +[14]: https://opensource.com/sites/default/files/uploads/16b_output.png (output) +[15]: https://opensource.com/sites/default/files/uploads/17_split-csv.png (Splitting RDD entries) +[16]: http://kdd.ics.uci.edu/databases/kddcup99/kddcup.names +[17]: http://kdd.ics.uci.edu/databases/kddcup99/task.html +[18]: https://opensource.com/sites/default/files/uploads/18_extract-columns.png (Extracting columns) +[19]: https://opensource.com/sites/default/files/uploads/19_build-dataframe.png (DataFrame) +[20]: https://opensource.com/sites/default/files/uploads/20_dataframe-schema.png (Dataframe schema) +[21]: https://opensource.com/sites/default/files/uploads/21_registertemptable.png (help(df.registerTempTable)) +[22]: 
https://opensource.com/sites/default/files/uploads/22_number-of-connections.png (Total number of connections) +[23]: https://opensource.com/sites/default/files/uploads/23_sql.png (protocol type and frequency) +[24]: https://opensource.com/sites/default/files/uploads/24_intrusion-type.png (Connection by type) +[25]: https://opensource.com/sites/default/files/uploads/25_chart-interface.png (Databricks chart types) +[26]: https://opensource.com/sites/default/files/uploads/26_plot-options-chart.png (Bar chart) +[27]: https://opensource.com/sites/default/files/uploads/27_pandas-barchart.png (Bar chart) +[28]: https://opensource.com/sites/default/files/uploads/28_most-attacked.png (Protocols most vulnerable to attacks) +[29]: https://opensource.com/sites/default/files/uploads/29_data-transmissions.png (Statistics pertaining to protocols and attacks) +[30]: https://opensource.com/sites/default/files/uploads/30_tcp-attack-metrics.png (TCP attack data) +[31]: https://opensource.com/sites/default/files/uploads/31_attack-type.png (Filtered by attack type) +[32]: https://attack.mitre.org/techniques/T1188/ +[33]: https://opensource.com/sites/default/files/uploads/32_tcp-attack-types.png (TCP attacks based on service and attack type) +[34]: https://opensource.com/sites/default/files/uploads/33_pivot-table.png (Pivot table) +[35]: https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/3137082781873852/3704545280501166/1264763342038607/latest.html +[36]: https://github.com/dipanjanS/data_science_for_all/tree/master/tds_spark_sql_intro +[37]: http://nbviewer.jupyter.org/github/dipanjanS/data_science_for_all/blob/master/tds_spark_sql_intro/Working%20with%20SQL%20at%20Scale%20-%20Spark%20SQL%20Tutorial.ipynb +[38]: https://docs.databricks.com/spark/latest/spark-sql/index.html +[39]: https://databricks.com/blog/2015/02/02/an-introduction-to-json-support-in-spark-sql.html +[40]: https://databricks.com/blog/2015/07/15/introducing-window-functions-in-spark-sql.html +[41]: https://www.linkedin.com/in/dipanzan/ +[42]: https://towardsdatascience.com/sql-at-scale-with-apache-spark-sql-and-dataframes-concepts-architecture-and-examples-c567853a702f diff --git a/sources/tech/20190321 NVIDIA Jetson Nano is a -99 Raspberry Pi Rival for AI Development.md b/sources/tech/20190321 NVIDIA Jetson Nano is a -99 Raspberry Pi Rival for AI Development.md new file mode 100644 index 0000000000..52f02edc95 --- /dev/null +++ b/sources/tech/20190321 NVIDIA Jetson Nano is a -99 Raspberry Pi Rival for AI Development.md @@ -0,0 +1,98 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development) +[#]: via: (https://itsfoss.com/nvidia-jetson-nano/) +[#]: author: (Atharva Lele https://itsfoss.com/author/atharva/) + +NVIDIA Jetson Nano is a $99 Raspberry Pi Rival for AI Development +====== + +At the [GPU Technology Conference][1] NVIDIA announced the [Jetson Nano Module][2] and the [Jetson Nano Developer Kit][3]. Compared to other Jetson boards which cost between $299 and $1099, the Jetson Nano bears a low cost of $99. This puts it within the reach of many developers, educators, and researchers who could not spend hundreds of dollars to get such a product. + +![The Jetson Nano Development Kit \(left\) and the Jetson Nano Module \(right\)][4] + +### Bringing back AI development from ‘cloud’ + +In the last few years, we have seen a lot of [advances in AI research][5]. 
Traditionally AI computing was always done in the cloud, where there was plenty of processing power available. + +Recently, there’s been a trend in shifting this computation away from the cloud and do it locally. This is called [Edge Computing][6]. Now at the embedded level, products which could do such complex calculations required for AI and Machine Learning were sparse, but we’re seeing a great explosion these days in this product segment. + +Products like the [SparkFun Edge][7] and [OpenMV Board][8] are good examples. The Jetson Nano, is NVIDIA’s latest offering in this market. When connected to your system, it will be able to supply the processing power needed for Machine Learning and AI tasks without having to rely on the cloud. + +This is great for privacy as well as saving on internet bandwidth. It is also more secure since your data always stays on the device itself. + +### Jetson Nano focuses on smaller AI projects + +![Jetson Nano powered JetBot][9] + +Previously released Jetson Boards like the [TX2][10] and [AGX Xavier][11] were used in products like drones and cars, the Jetson Nano is targeting smaller projects, projects where you need to have the processing power which boards like the [Raspberry Pi][12] cannot provide. + +Did you know? + +NVIDIA’s JetPack SDK provides a ‘complete desktop Linux environment based on Ubuntu 18.04 LTS’. In other words, the Jetson Nano is powered by Ubuntu Linux. + +### NVIDIA Jetson Nano Specifications + +For $99, you get 472 GFLOPS of processing power due to 128 NVIDIA Maxwell Architecture CUDA Cores, a quad-core ARM A57 processor, 4GB of LP-DDR4 RAM, 16GB of on-board storage, and 4k video encode/decode capabilities. The port selection is also pretty decent with the Nano having Gigabit Ethernet, MIPI Camera, Display outputs, and a couple of USB ports (1×3.0, 3×2.0). Full range of specifications can be found [here][13]. + +CPU | Quad-core ARM® Cortex®-A57 MPCore processor +---|--- +GPU | NVIDIA Maxwell™ architecture with 128 NVIDIA CUDA® cores +RAM | 4 GB 64-bit LPDDR4 +Storage | 16 GB eMMC 5.1 Flash +Camera | 12 lanes (3×4 or 4×2) MIPI CSI-2 DPHY 1.1 (1.5 Gbps) +Connectivity | Gigabit Ethernet +Display Ports | HDMI 2.0 and DP 1.2 +USB Ports | 1 USB 3.0 and 3 USB 2.0 +Other | 1 x1/2/4 PCIE, 1x SDIO / 2x SPI / 6x I2C / 2x I2S / GPIOs +Size | 69.6 mm x 45 mm + +Along with good hardware, you get support for the majority of popular AI frameworks like TensorFlow, PyTorch, Keras, etc. It also has support for NVIDIA’s [JetPack][14] and [DeepStream][15] SDKs, same as the more expensive TX2 and AGX Boards. + +“Jetson Nano makes AI more accessible to everyone — and is supported by the same underlying architecture and software that powers our nation’s supercomputer. Bringing AI to the maker movement opens up a whole new world of innovation, inspiring people to create the next big thing.” said Deepu Talla, VP and GM of Autonomous Machines at NVIDIA. + +[Subscribe to It’s FOSS YouTube Channel][16] + +**What do you think of Jetson Nano?** + +The availability of Jetson Nano differs from country to country. + +The [Intel Neural Stick][17], is also one such accelerator which is competitively prices at $79. It’s good to see competition stirring up at these lower price points from the big manufacturers. + +I’m looking forward to getting my hands on the product if possible. + +What do you guys think about a product like this? Let us know in the comments below. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/nvidia-jetson-nano/ + +作者:[Atharva Lele][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/atharva/ +[b]: https://github.com/lujun9972 +[1]: https://www.nvidia.com/en-us/gtc/ +[2]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/ +[3]: https://developer.nvidia.com/embedded/buy/jetson-nano-devkit +[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/jetson-nano-family-press-image-hd.jpg?ssl=1 +[5]: https://itsfoss.com/nanotechnology-open-science-ai/ +[6]: https://en.wikipedia.org/wiki/Edge_computing +[7]: https://www.sparkfun.com/news/2886 +[8]: https://openmv.io/ +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/nvidia_jetson_bot.jpg?ssl=1 +[10]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-tx2/ +[11]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-agx-xavier/ +[12]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/ +[13]: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-nano/#specifications +[14]: https://developer.nvidia.com/embedded/jetpack +[15]: https://developer.nvidia.com/deepstream-sdk +[16]: https://www.youtube.com/c/itsfoss?sub_confirmation=1 +[17]: https://software.intel.com/en-us/movidius-ncs-get-started diff --git a/sources/tech/20190321 Top 10 New Linux SBCs to Watch in 2019.md b/sources/tech/20190321 Top 10 New Linux SBCs to Watch in 2019.md new file mode 100644 index 0000000000..f3f1f7c72b --- /dev/null +++ b/sources/tech/20190321 Top 10 New Linux SBCs to Watch in 2019.md @@ -0,0 +1,101 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top 10 New Linux SBCs to Watch in 2019) +[#]: via: (https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019) +[#]: author: (Eric Brown https://www.linux.com/users/ericstephenbrown) + +Top 10 New Linux SBCs to Watch in 2019 +====== + +![UP Xtreme][1] + +Aaeon's Linux-ready UP Xtreme SBC. + +[Used with permission][2] + +A recent [Global Market Insights report][3] projects the single board computer market will grow from $600 million in 2018 to $1 billion by 2025. Yet, you don’t need to read a market research report to realize the SBC market is booming. Driven by the trends toward IoT and AI-enabled edge computing, new boards keep rolling off the assembly lines, many of them [tailored for highly specific applications][4]. + +Much of the action has been in Linux-compatible boards, including the insanely popular Raspberry Pi. The number of different vendors and models has exploded thanks in part to the rise of [community-backed, open-spec SBCs][5]. + +Here we examine 10 of the most intriguing, Linux-driven SBCs among the many products announced in the last four weeks that bookended the recent [Embedded World show][6] in Nuremberg. (There was also some [interesting Linux software news][7] at the show.) Two of the SBCs—the Intel Whiskey Lake based UP Xtreme and Nvidia Jetson Nano driven Jetson Nano Dev Kit—were announced only this week. + +Our mostly open source list also includes a few commercial boards. Processors range from the modest, Cortex-A7 driven STM32MP1 to the high-powered Whiskey Lake and Snapdragon 845. 
Mid-range models include Google’s i.MX8M powered Coral Dev Board and a similarly AI-enhanced, TI AM5729 based BeagleBone AI. Deep learning acceleration chips—and standard RPi 40-pin or 96Boards expansion connectors—are common themes among most of these boards. + +The SBCs are listed in reverse chronological order according to their announcement dates. The links in the product names go to recent LinuxGizmos reports, which link to vendor product pages. + +**[UP Xtreme][8]** —The latest in Aaeon’s line of community-backed SBCs taps Intel’s 8th Gen Whiskey Lake-U CPUs, which maintain a modest 15W TDP while boosting performance with up to quad-core, dual threaded configurations. Depending on when it ships, this Linux-ready model will likely be the most powerful community-backed SBC around -- and possibly the most expensive. + +The SBC supports up to 16GB DDR4 and 128GB eMMC and offers 4K displays via HDMI, DisplayPort, and eDP. Other features include SATA, 2x GbE, 4x USB 3.0, and 40-pin “HAT” and 100-pin GPIO add-on board connectors. You also get mini-PCIe and dual M.2 slots that support wireless modems and more SATA options. The slots also support Aaeon’s new AI Core X modules, which offer Intel’s latest Movidius Myriad X VPUs for 1TOPS neural processing acceleration. + +**[Jetson Nano Dev Kit][9]** —Nvidia just announced a low-end Jetson Nano compute module that’s sort of like a smaller (70 x 45mm) version of the old Jetson TX1. It offers the same 4x Cortex-A57 cores but has an even lower-end 128-core Maxwell GPU. The module has half the RAM and flash (4GB/16GB) of the TX1 and TX2, and no WiFi/Bluetooth radios. Like the hexa-core Jetson TX2, however, it supports 4K video and the GPU offers similar CUDA-X deep learning libraries. + +Although Nvidia has backed all its Linux-driven Jetson modules with development kits, the Jetson Nano Dev Kit is its first community-backed, maker-oriented kit. It does not appear to offer open specifications, but it costs only $99 and there’s a forum and other community resources. Many of the specs match or surpass the Raspberry Pi 3B+, including the addition of a 40-pin GPIO. Highlights include an M.2 slot, GbE with Power-over-Ethernet, HDMI 2.0 and eDP links, and 4x USB 3.0 ports. + +**[Coral Dev Board][10]** —Google’s very first Linux maker board arrived earlier this month featuring an NXP i.MX8M and Google’s Edge TPU AI chip—a stripped-down version of Google’s TPU Unit is designed to run TensorFlow Lite ML models. The $150, Raspberry Pi-like Coral Dev Board was joined by a similarly Edge TPU-enabled Coral USB Accelerator USB stick. These will be followed by an Edge TPU based Coral PCIe Accelerator and a Coral SOM compute module. All these devices are backed with schematics, community resources, and other open-spec resources. + +The Coral Dev Board combines the Edge TPU chip with NXP’s quad-core, 1.5GHz Cortex-A53 i.MX8M with a 3D Vivante GPU/VPU and a Cortex-M4 MCU. The SBC is even more like the Raspberry Pi 3B+ than Nvidia’s Dev Kit, mimicking the size and much of the layout and I/O, including the 40-pin GPIO connector. Highlights include 4K-ready GbE, HDMI 2.0a, 4-lane MIPI-DSI and CSI, and USB 3.0 host and Type-C ports. + +**[SBC-C43][11]** —Seco’s commercial, industrial temperature SBC-C43 board is the first SBC based on NXP’s high-end, up to hexa-core i.MX8. The 3.5-inch SBC supports the i.MX8 QuadMax with 2x Cortex-A72 cores and 4x Cortex-A53 cores, the QuadPlus with a single Cortex-A72 and 4x -A53, and the Quad with no -A72 cores and 4x -A53. 
There are also 2x Cortex-M4F real-time cores and 2x Vivante GPU/VPU cores. Yocto Project, Wind River Linux, and Android are available. + +The feature-rich SBC-C43 supports up to 8GB DDR4 and 32GB eMMC, both soldered for greater reliability. Highlights include dual GbE, HDMI 2.0a in and out ports, WiFi/Bluetooth, and a variety of industrial interfaces. Dual M.2 slots support SATA, wireless, and more. + +**[Nitrogen8M_Mini][12]** —This Boundary Devices cousin to the earlier, i.MX8M based Nitrogen8M is available for $135, with shipments due this Spring. The open-spec Nitrogen8M_Mini is the first SBC to feature NXP’s new i.MX8M Mini SoC. The Mini uses a more advanced 14LPC FinFET process than the i.MX8M, resulting in lower power consumption and higher clock rates for both the 4x Cortex-A53 (1.5GHz to 2GHz) and Cortex-M4 (400MHz) cores. The drawback is that you’re limited to HD video resolution. + +Supported with Linux and Android, the Nitrogen8M_Mini ships with 2GB to 4GB LPDDR4 RAM and 8GB to 128GB eMMC. MIPI-DSI and -CSI interfaces support optional touchscreens and cameras, respectively. A GbE port is standard and PoE and WiFi/BT are optional. Other features include 3x USB ports, one or two PCIe slots, and optional -40 to 85°C support. A Nitrogen8M_Mini SOM module with similar specs is also in the works. + +**[Pine H64 Model B][13]** —Pine64’s latest hacker board was teased in late January as part of an [ambitious roll-out][14] of open source products, including a laptop, tablet, and phone. The Raspberry Pi semi-clone, which recently went on sale for $39 (2GB) or $49 (3GB), showcases the high-end, but low-cost Allwinner H64. The quad -A53 SoC is notable for its 4K video with HDR support. + +The Pine H64 Model B offers up to 128GB eMMC storage, WiFi/BT, and a GbE port. I/O includes 2x USB 2.0 and single USB 3.0 and HDMI 2.0a ports plus SPDIF audio and an RPi-like 40-pin connector. Images include Android 7.0 and an “in progress” Armbian Debian Stretch. + +**[AI-ML Board][15]** —Arrow unveiled this i.MX8X based SBC early this month along with a similarly 96Boards CE Extended format, i.MX8M based Thor96 SBC. While there are plenty of i.MX8M boards these days, we’re more intrigued with the lowest-end i.MX8X member of the i.MX8 family. The AI-ML Board is the first SBC we’ve seen to feature the low-power i.MX8X, which offers up to 4x 64-bit, 1.2GHz Cortex-A35 cores, a 4-shader, 4K-ready Vivante GPU/VPU, a Cortex-M4F chip, and a Tensilica HiFi 4 DSP. + +The open-spec, Yocto Linux driven AI-ML Board is targeted at low-power, camera-equipped applications such as drones. The board has 2GB LPDDR4, Ethernet, WiFi/BT, and a pair each of MIPI-DSI and USB 3.0 ports. Cameras are controlled via the 96Boards 60-pin, high-power GPIO connector, which is joined by the usual 40-pin low-power link. The launch is expected June 1. + +**[BeagleBone AI][16]** —The long-awaited successor to the Cortex-A8 AM3358 based BeagleBone family of boards advances to TIs dual-core Cortex-A15 AM5729, with similar PowerVR GPU and MCU-like PRU cores. The real story, however, is the AI firepower enabled by the SoC’s dual TI C66x DSPs and four embedded-vision-engine (EVE) neural processing cores. BeagleBoard.org claims that calculations for computer-vision models using EVE run at 8x times the performance per watt compared to the similar, but EVE-less, AM5728. The EVE and DSP chips are supported through a TIDL machine learning OpenCL API and pre-installed tools. 
+ +Due to go on sale in April for about $100, the Linux-powered BeagleBone AI is based closely on the BeagleBone Black and offers backward header, mechanical, and software compatibility. It doubles the RAM to 1GB and quadruples the eMMC storage to 16GB. You now get GbE and high-speed WiFi, as well as a USB Type-C port. + +**[Robotics RB3 Platform (DragonBoard 845c)][17]** —Qualcomm and Thundercomm are initially launching their 96Boards CE form factor, Snapdragon 845-based upgrade to the Snapdragon 820-based [DragonBoard 820c][18] SBC as part of a Qualcomm Robotics RB3 Platform. Yet, 96Boards.org has already posted a [DragonBoard 845c product page][17], and we imagine the board will be available in the coming months without all the robotics bells and whistles. A compute module version is also said to be in the works. + +The 10nm, octa-core, “Kryo” based Snapdragon 845 is one of the most powerful Arm SoCs around. It features an advanced Adreno 630 GPU with “eXtended Reality” (XR) VR technology and a Hexagon 685 DSP with a third-gen Neural Processing Engine (NPE) for AI applications. On the RB3 kit, the board’s expansion connectors are pre-stocked with Qualcomm cellular and robotics camera mezzanines. The $449 and up kit also includes standard 4K video and tracking cameras, and there are optional Time-of-Flight (ToF) and stereo SLM camera depth cameras. The SBC runs Linux with ROS (Robot Operating System). + +**[Avenger96][19]** —Like Arrow’s AI-ML Board, the Avenger96 is a 96Boards CE Extended SBC aimed at low-power IoT applications. Yet, the SBC features an even more power-efficient (and slower) SoC: ST’s recently announced [STM32MP153][20]. The Avenger96 runs Linux on the high-end STM32MP157 model, which has dual, 650MHz Cortex-A7 cores, a Cortex-M4, and a Vivante 3D GPU. + +This sandwich-style board features an Avenger96 module with the STM32MP157 SoC, 1GB of DDR3L, 2MB SPI flash, and a power management IC. It’s unclear if the 8GB eMMC and WiFi-ac/Bluetooth 4.2 module are on the module or carrier board. The Avenger96 SBC is further equipped with GbE, HDMI, micro-USB OTG, and dual USB 2.0 host ports. There’s also a microSD slot and the usual 40- and 60-pin GPIO connectors. The board is expected to go on sale in April. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2019/3/top-10-new-linux-sbcs-watch-2019 + +作者:[Eric Brown][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/ericstephenbrown +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/aaeon_upxtreme.jpg?itok=QnwAt3mp (UP Xtreme) +[2]: /LICENSES/CATEGORY/USED-PERMISSION +[3]: https://www.globenewswire.com/news-release/2019/02/13/1724445/0/en/Single-Board-Computer-Market-to-surpass-1bn-by-2025-Global-Market-Insights-Inc.html +[4]: https://www.linux.com/blog/2019/1/linux-hacker-board-trends-2018-and-beyond +[5]: http://linuxgizmos.com/catalog-of-122-open-spec-linux-hacker-boards/ +[6]: https://www.embedded-world.de/en +[7]: https://www.linux.com/news/2019/2/embedded-linux-software-highlights-embedded-world +[8]: http://linuxgizmos.com/latest-up-board-combines-whiskey-lake-with-ai-core-x-modules/ +[9]: http://linuxgizmos.com/trimmed-down-jetson-nano-modules-ships-on-99-linux-dev-kit/ +[10]: http://linuxgizmos.com/google-launches-i-mx8m-dev-board-with-edge-tpu-ai-chip/ +[11]: http://linuxgizmos.com/first-i-mx8-quadmax-sbc-breaks-cover/ +[12]: http://linuxgizmos.com/open-spec-nitrogen8m_mini-sbc-ships-along-with-new-mini-based-som/ +[13]: http://linuxgizmos.com/revised-allwiner-h64-based-pine-h64-sbc-has-rpi-size-and-gpio/ +[14]: https://www.linux.com/blog/2019/2/pine64-launch-open-source-phone-laptop-tablet-and-camera +[15]: http://linuxgizmos.com/arrows-latest-96boards-sbcs-tap-i-mx8x-and-i-mx8m/ +[16]: http://linuxgizmos.com/beaglebone-ai-sbc-features-dual-a15-soc-with-eve-ai-cores/ +[17]: http://linuxgizmos.com/robotics-kit-runs-linux-on-new-dragonboard-845c-96boards-sbc/ +[18]: http://linuxgizmos.com/debian-driven-dragonboard-expands-to-96boards-extended-spec/ +[19]: http://linuxgizmos.com/sandwich-style-96boards-sbc-runs-linux-on-sts-new-cortex-a7-m4-soc/ +[20]: https://www.linux.com/news/2019/2/st-spins-its-first-linux-powered-cortex-soc diff --git a/sources/tech/20190322 12 open source tools for natural language processing.md b/sources/tech/20190322 12 open source tools for natural language processing.md new file mode 100644 index 0000000000..9d2822926f --- /dev/null +++ b/sources/tech/20190322 12 open source tools for natural language processing.md @@ -0,0 +1,113 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (12 open source tools for natural language processing) +[#]: via: (https://opensource.com/article/19/3/natural-language-processing-tools) +[#]: author: (Dan Barker https://opensource.com/users/barkerd427) + +12 open source tools for natural language processing +====== + +Take a look at a dozen options for your next NLP application. + +![Chat bubbles][1] + +Natural language processing (NLP), the technology that powers all the chatbots, voice assistants, predictive text, and other speech/text applications that permeate our lives, has evolved significantly in the last few years. There are a wide variety of open source NLP tools out there, so I decided to survey the landscape to help you plan your next voice- or text-based application. + +For this review, I focused on tools that use languages I'm familiar with, even though I'm not familiar with all the tools. 
(I didn't find a great selection of tools in the languages I'm not familiar with anyway.) That said, I excluded tools in three languages I am familiar with, for various reasons. + +The most obvious language I didn't include might be R, but most of the libraries I found hadn't been updated in over a year. That doesn't always mean they aren't being maintained well, but I think they should be getting updates more often to compete with other tools in the same space. I also chose languages and tools that are most likely to be used in production scenarios (rather than academia and research), and I have mostly used R as a research and discovery tool. + +I was also surprised to see that the Scala libraries are fairly stagnant. It has been a couple of years since I last used Scala, when it was pretty popular. Most of the libraries haven't been updated since that time—or they've only had a few updates. + +Finally, I excluded C++. This is mostly because it's been many years since I last wrote in C++, and the organizations I've worked in have not used C++ for NLP or any data science work. + +### Python tools + +#### Natural Language Toolkit (NLTK) + +It would be easy to argue that [Natural Language Toolkit (NLTK)][2] is the most full-featured tool of the ones I surveyed. It implements pretty much any component of NLP you would need, like classification, tokenization, stemming, tagging, parsing, and semantic reasoning. And there's often more than one implementation for each, so you can choose the exact algorithm or methodology you'd like to use. It also supports many languages. However, it represents all data in the form of strings, which is fine for simple constructs but makes it hard to use some advanced functionality. The documentation is also quite dense, but there is a lot of it, as well as [a great book][3]. The library is also a bit slow compared to other tools. Overall, this is a great toolkit for experimentation, exploration, and applications that need a particular combination of algorithms. + +#### SpaCy + +[SpaCy][4] is probably the main competitor to NLTK. It is faster in most cases, but it only has a single implementation for each NLP component. Also, it represents everything as an object rather than a string, which simplifies the interface for building applications. This also helps it integrate with many other frameworks and data science tools, so you can do more once you have a better understanding of your text data. However, SpaCy doesn't support as many languages as NLTK. It does have a simple interface with a simplified set of choices and great documentation, as well as multiple neural models for various components of language processing and analysis. Overall, this is a great tool for new applications that need to be performant in production and don't require a specific algorithm. + +#### TextBlob + +[TextBlob][5] is kind of an extension of NLTK. You can access many of NLTK's functions in a simplified manner through TextBlob, and TextBlob also includes functionality from the Pattern library. If you're just starting out, this might be a good tool to use while learning, and it can be used in production for applications that don't need to be overly performant. Overall, TextBlob is used all over the place and is great for smaller projects. + +#### Textacy + +This tool may have the best name of any library I've ever used. Say "[Textacy][6]" a few times while emphasizing the "ex" and drawing out the "cy." Not only is it great to say, but it's also a great tool. 
It uses SpaCy for its core NLP functionality, but it handles a lot of the work before and after the processing. If you were planning to use SpaCy, you might as well use Textacy so you can easily bring in many types of data without having to write extra helper code. + +#### PyTorch-NLP + +[PyTorch-NLP][7] has been out for just a little over a year, but it has already gained a tremendous community. It is a great tool for rapid prototyping. It's also updated often with the latest research, and top companies and researchers have released many other tools to do all sorts of amazing processing, like image transformations. Overall, PyTorch is targeted at researchers, but it can also be used for prototypes and initial production workloads with the most advanced algorithms available. The libraries being created on top of it might also be worth looking into. + +### Node tools + +#### Retext + +[Retext][8] is part of the [unified collective][9]. Unified is an interface that allows multiple tools and plugins to integrate and work together effectively. Retext is one of three syntaxes used by the unified tool; the others are Remark for markdown and Rehype for HTML. This is a very interesting idea, and I'm excited to see this community grow. Retext doesn't expose a lot of its underlying techniques, but instead uses plugins to achieve the results you might be aiming for with NLP. It's easy to do things like checking spelling, fixing typography, detecting sentiment, or making sure text is readable with simple plugins. Overall, this is an excellent tool and community if you just need to get something done without having to understand everything in the underlying process. + +#### Compromise + +[Compromise][10] certainly isn't the most sophisticated tool. If you're looking for the most advanced algorithms or the most complete system, this probably isn't the right tool for you. However, if you want a performant tool that has a wide breadth of features and can function on the client side, you should take a look at Compromise. Overall, its name is accurate in that the creators compromised on functionality and accuracy by focusing on a small package with much more specific functionality that benefits from the user understanding more of the context surrounding the usage. + +#### Natural + +[Natural][11] includes most functions you might expect in a general NLP library. It is mostly focused on English, but some other languages have been contributed, and the community is open to additional contributions. It supports tokenizing, stemming, classification, phonetics, term frequency–inverse document frequency, WordNet, string similarity, and some inflections. It might be most comparable to NLTK, in that it tries to include everything in one package, but it is easier to use and isn't necessarily focused around research. Overall, this is a pretty full library, but it is still in active development and may require additional knowledge of underlying implementations to be fully effective. + +#### Nlp.js + +[Nlp.js][12] is built on top of several other NLP libraries, including Franc and Brain.js. It provides a nice interface into many components of NLP, like classification, sentiment analysis, stemming, named entity recognition, and natural language generation. It also supports quite a few languages, which is helpful if you plan to work in something other than English. Overall, this is a great general tool with a simplified interface into several other great tools. 
This will likely take you a long way in your applications before you need something more powerful or more flexible. + +### Java tools + +#### OpenNLP + +[OpenNLP][13] is hosted by the Apache Foundation, so it's easy to integrate it into other Apache projects, like Apache Flink, Apache NiFi, and Apache Spark. It is a general NLP tool that covers all the common processing components of NLP, and it can be used from the command line or within an application as a library. It also has wide support for multiple languages. Overall, OpenNLP is a powerful tool with a lot of features and ready for production workloads if you're using Java. + +#### StanfordNLP + +[Stanford CoreNLP][14] is a set of tools that provides statistical NLP, deep learning NLP, and rule-based NLP functionality. Many other programming language bindings have been created so this tool can be used outside of Java. It is a very powerful tool created by an elite research institution, but it may not be the best thing for production workloads. This tool is dual-licensed with a special license for commercial purposes. Overall, this is a great tool for research and experimentation, but it may incur additional costs in a production system. The Python implementation might also interest many readers more than the Java version. Also, one of the best Machine Learning courses is taught by a Stanford professor on Coursera. [Check it out][15] along with other great resources. + +#### CogCompNLP + +[CogCompNLP][16], developed by the University of Illinois, also has a Python library with similar functionality. It can be used to process text, either locally or on remote systems, which can remove a tremendous burden from your local device. It provides processing functions such as tokenization, part-of-speech tagging, chunking, named-entity tagging, lemmatization, dependency and constituency parsing, and semantic role labeling. Overall, this is a great tool for research, and it has a lot of components that you can explore. I'm not sure it's great for production workloads, but it's worth trying if you plan to use Java. + +* * * + +What are your favorite open source tools and libraries for NLP? Please share in the comments—especially if there's one I didn't include. 
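If you'd like a quick taste of how lightweight these libraries feel in practice before committing to one, here is a minimal sketch using SpaCy. It assumes you have installed the library and downloaded the small English model separately (typically with `python -m spacy download en_core_web_sm`):

```
import spacy

# Load the small English pipeline (a separate download from the library itself)
nlp = spacy.load("en_core_web_sm")

doc = nlp("Stanford CoreNLP and SpaCy both ship tokenizers and named-entity recognizers.")

# Tokens with their part-of-speech tags
for token in doc:
    print(token.text, token.pos_)

# Named entities detected in the sentence
for ent in doc.ents:
    print(ent.text, ent.label_)
```

Most of the other Python libraries above offer similarly short getting-started examples, so it is worth spending an afternoon running two or three of them against your own text before settling on a stack.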
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/natural-language-processing-tools + +作者:[Dan Barker (Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/barkerd427 +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_communication_team.png?itok=CYfZ_gE7 (Chat bubbles) +[2]: http://www.nltk.org/ +[3]: http://www.nltk.org/book_1ed/ +[4]: https://spacy.io/ +[5]: https://textblob.readthedocs.io/en/dev/ +[6]: https://readthedocs.org/projects/textacy/ +[7]: https://pytorchnlp.readthedocs.io/en/latest/ +[8]: https://www.npmjs.com/package/retext +[9]: https://unified.js.org/ +[10]: https://www.npmjs.com/package/compromise +[11]: https://www.npmjs.com/package/natural +[12]: https://www.npmjs.com/package/node-nlp +[13]: https://opennlp.apache.org/ +[14]: https://stanfordnlp.github.io/CoreNLP/ +[15]: https://opensource.com/article/19/2/learn-data-science-ai +[16]: https://github.com/CogComp/cogcomp-nlp diff --git a/sources/tech/20190322 Easy means easy to debug.md b/sources/tech/20190322 Easy means easy to debug.md new file mode 100644 index 0000000000..4b0b4d52d2 --- /dev/null +++ b/sources/tech/20190322 Easy means easy to debug.md @@ -0,0 +1,83 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Easy means easy to debug) +[#]: via: (https://arp242.net/weblog/easy.html) +[#]: author: (Martin Tournoij https://arp242.net/) + + +What does it mean for a framework, library, or tool to be “easy”? There are many possible definitions one could use, but my definition is usually that it’s easy to debug. I often see people advertise a particular program, framework, library, file format, or something else as easy because “look with how little effort I can do task X, this is so easy!” That’s great, but an incomplete picture. + +You only write software once, but will almost always go through several debugging cycles. With debugging cycle I don’t mean “there is a bug in the code you need to fix”, but rather “I need to look at this code to fix the bug”. To debug code, you need to understand it, so “easy to debug” by extension means “easy to understand”. + +Abstractions which make something easier to write often come at the cost of make things harder to understand. Sometimes this is a good trade-off, but often it’s not. In general I will happily spend a little but more effort writing something now if that makes things easier to understand and debug later on, as it’s often a net time-saver. + +Simplicity isn’t the only thing that makes programs easier to debug, but it is probably the most important. Good documentation helps too, but unfortunately good documentation is uncommon (note that quality is not measured by word count!) + +This is not exactly a novel insight; from the 1974 The Elements of Programming Style by Brian W. Kernighan and P. J. Plauger: + +> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you’re as clever as you can be when you write it, how will you ever debug it? + +A lot of stuff I see seems to be written “as clever as can be” and is consequently hard to debug. I’ll list a few examples of this pattern below. 
It’s not my intention to argue that any of these things are bad per se, I just want to highlight the trade-offs in “easy to use” vs. “easy to debug”. + + * When I tried running [Let’s Encrypt][1] a few years ago it required running a daemon as root(!) to automatically rewrite nginx files. I looked at the source a bit to understand how it worked and it was all pretty complex, so I was “let’s not” and opted to just pay €10 to the CA mafia, as not much can go wrong with putting a file in /etc/nginx/, whereas a lot can go wrong with complex Python daemons running as root. + +(I don’t know the current state/options for Let’s Encrypt; at a quick glance there may be better/alternative ACME clients that suck less now.) + + * Some people claim that systemd is easier than SysV init.d scripts because it’s easier to write systemd unit files than it is to write shell scripts. In particular, this is the argument Lennart Poettering used in his [systemd myths][2] post (point 5). + +I think is completely missing the point. I agree with Poettering that shell scripts are hard – [I wrote an entire post about that][3] – but by making the interface easier doesn’t mean the entire system becomes easier. Look at [this issue][4] I encountered and [the fix][5] for it. Does that look easy to you? + + * Many JavaScript frameworks I’ve used can be hard to fully understand. Clever state keeping logic is great and all, until that state won’t work as you expect, and then you better hope there’s a Stack Overflow post or GitHub issue to help you out. + + * Docker is great, right up to the point you get: + +``` + ERROR: for elasticsearch Cannot start service elasticsearch: +oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:258: +applying cgroup configuration for process caused \"failed to write 898 to cgroup.procs: write +/sys/fs/cgroup/cpu,cpuacct/docker/b13312efc203e518e3864fc3f9d00b4561168ebd4d9aad590cc56da610b8dd0e/cgroup.procs: +invalid argument\"" +``` + +or + +``` +ERROR: for elasticsearch Cannot start service elasticsearch: EOF +``` + +And … now what? + + * Many testing libraries can make things harder to debug. Ruby’s rspec is a good example where I’ve occasionally used the library wrong by accident and had to spend quite a long time figuring out what exactly went wrong (as the errors it gave me were very confusing!) + +I wrote a bit more about that in my [Testing isn’t everything][6] post. + + * ORM libraries can make database queries a lot easier, at the cost of making things a lot harder to understand once you want to solve a problem. 
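To make that last point about ORMs concrete, here is a small, self-contained sketch (Python, SQLAlchemy 1.x-style API, in-memory SQLite; the models are made up purely for illustration). The loop at the bottom is trivial to write, but with `echo=True` turned on you can watch it quietly issue one query for the authors plus one extra query per author: the classic N+1 pattern you only notice once the "easy" code needs debugging.

```
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, sessionmaker

Base = declarative_base()

class Author(Base):
    __tablename__ = "author"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    books = relationship("Book")

class Book(Base):
    __tablename__ = "book"
    id = Column(Integer, primary_key=True)
    title = Column(String)
    author_id = Column(Integer, ForeignKey("author.id"))

# echo=True makes SQLAlchemy print every SQL statement it sends.
engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Author(name="Kernighan",
                   books=[Book(title="The Elements of Programming Style")]))
session.commit()

# Easy to write; the hidden per-author SELECTs are what you end up debugging.
for author in session.query(Author).all():
    print(author.name, [book.title for book in author.books])
```

None of this means "never use an ORM"; it means that when you pick the easy-to-write option, you should also know how to see what it is actually doing underneath.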
+ + + + + +-------------------------------------------------------------------------------- + +via: https://arp242.net/weblog/easy.html + +作者:[Martin Tournoij][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://arp242.net/ +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Let%27s_Encrypt +[2]: http://0pointer.de/blog/projects/the-biggest-myths.html +[3]: https://arp242.net/weblog/shell-scripting-trap.html +[4]: https://unix.stackexchange.com/q/185495/33645 +[5]: https://cgit.freedesktop.org/systemd/systemd/commit/?id=6e392c9c45643d106673c6643ac8bf4e65da13c1 +[6]: /weblog/testing.html +[7]: mailto:martin@arp242.net +[8]: https://github.com/Carpetsmoker/arp242.net/issues/new diff --git a/sources/tech/20190322 How to Install OpenLDAP on Ubuntu Server 18.04.md b/sources/tech/20190322 How to Install OpenLDAP on Ubuntu Server 18.04.md new file mode 100644 index 0000000000..a4325fe74b --- /dev/null +++ b/sources/tech/20190322 How to Install OpenLDAP on Ubuntu Server 18.04.md @@ -0,0 +1,205 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Install OpenLDAP on Ubuntu Server 18.04) +[#]: via: (https://www.linux.com/blog/2019/3/how-install-openldap-ubuntu-server-1804) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +How to Install OpenLDAP on Ubuntu Server 18.04 +====== + +![OpenLDAP][1] + +In part one of this short tutorial series, Jack Wallen explains how to install OpenLDAP. + +[Creative Commons Zero][2] + +The Lightweight Directory Access Protocol (LDAP) allows for the querying and modification of an X.500-based directory service. In other words, LDAP is used over a Local Area Network (LAN) to manage and access a distributed directory service. LDAPs primary purpose is to provide a set of records in a hierarchical structure. What can you do with those records? The best use-case is for user validation/authentication against desktops. If both server and client are set up properly, you can have all your Linux desktops authenticating against your LDAP server. This makes for a great single point of entry so that you can better manage (and control) user accounts. + +The most popular iteration of LDAP for Linux is [OpenLDAP][3]. OpenLDAP is a free, open-source implementation of the Lightweight Directory Access Protocol, and makes it incredibly easy to get your LDAP server up and running. + +In this three-part series, I’ll be walking you through the steps of: + + 1. Installing OpenLDAP server. + + 2. Installing the web-based LDAP Account Manager. + + 3. Configuring Linux desktops, such that they can communicate with your LDAP server. + + + + +In the end, all of your Linux desktop machines (that have been configured properly) will be able to authenticate against a centralized location, which means you (as the administrator) have much more control over the management of users on your network. + +In this first piece, I’ll be demonstrating the installation and configuration of OpenLDAP on Ubuntu Server 18.04. All you will need to make this work is a running instance of Ubuntu Server 18.04 and a user account with sudo privileges. +Let’s get to work. + +### Update/Upgrade + +The first thing you’ll want to do is update and upgrade your server. 
Do note, if the kernel gets updated, the server will need to be rebooted (unless you have Live Patch, or a similar service running). Because of this, run the update/upgrade at a time when the server can be rebooted. +To update and upgrade Ubuntu, log into your server and run the following commands: + +``` +sudo apt-get update + +sudo apt-get upgrade -y +``` + +When the upgrade completes, reboot the server (if necessary), and get ready to install and configure OpenLDAP. + +### Installing OpenLDAP + +Since we’ll be using OpenLDAP as our LDAP server software, it can be installed from the standard repository. To install the necessary pieces, log into your Ubuntu Server and issue the following command: + +### sudo apt-get instal slapd ldap-utils -y + +During the installation, you’ll be first asked to create an administrator password for the LDAP directory. Type and verify that password (Figure 1). + +![password][4] + +Figure 1: Creating an administrator password for LDAP. + +[Used with permission][5] + +Configuring LDAP + +With the installation of the components complete, it’s time to configure LDAP. Fortunately, there’s a handy tool we can use to make this happen. From the terminal window, issue the command: + +``` +sudo dpkg-reconfigure slapd +``` + +In the first window, hit Enter to select No and continue on. In the second window of the configuration tool (Figure 2), you must type the DNS domain name for your server. This will serve as the base DN (the point from where a server will search for users) for your LDAP directory. In my example, I’ve used example.com (you’ll want to change this to fit your needs). + +![domain name][6] + +Figure 2: Configuring the domain name for LDAP. + +[Used with permission][5] + +In the next window, type your Organizational name (ie the name of your company or department). You will then be prompted to (once again) create an administrator password (you can use the same one as you did during the installation). Once you’ve taken care of that, you’ll be asked the following questions: + + * Database backend to use - select **MDB**. + + * Do you want the database to be removed with slapd is purged? - Select **No.** + + * Move old database? - Select **Yes.** + + + + +OpenLDAP is now ready for data. + +### Adding Initial Data + +Now that OpenLDAP is installed and running, it’s time to populate the directory with a bit of initial data. In the second piece of this series, we’ll be installing a web-based GUI that makes it much easier to handle this task, but it’s always good to know how to add data the manual way. + +One of the best ways to add data to the LDAP directory is via text file, which can then be imported in with the __ldapadd__ command. 
Create a new file with the command: + +``` +nano ldap_data.ldif +``` + +In that file, paste the following contents: + +``` +dn: ou=People,dc=example,dc=com + +objectClass: organizationalUnit + +ou: People + + +dn: ou=Groups,dc=EXAMPLE,dc=COM + +objectClass: organizationalUnit + +ou: Groups + + +dn: cn=DEPARTMENT,ou=Groups,dc=EXAMPLE,dc=COM + +objectClass: posixGroup + +cn: SUBGROUP + +gidNumber: 5000 + + +dn: uid=USER,ou=People,dc=EXAMPLE,dc=COM + +objectClass: inetOrgPerson + +objectClass: posixAccount + +objectClass: shadowAccount + +uid: USER + +sn: LASTNAME + +givenName: FIRSTNAME + +cn: FULLNAME + +displayName: DISPLAYNAME + +uidNumber: 10000 + +gidNumber: 5000 + +userPassword: PASSWORD + +gecos: FULLNAME + +loginShell: /bin/bash + +homeDirectory: USERDIRECTORY +``` + +In the above file, every entry in all caps needs to be modified to fit your company needs. Once you’ve modified the above file, save and close it with the [Ctrl]+[x] key combination. + +To add the data from the file to the LDAP directory, issue the command: + +``` +ldapadd -x -D cn=admin,dc=EXAMPLE,dc=COM -W -f ldap_data.ldif +``` + +Remember to alter the dc entries (EXAMPLE and COM) in the above command to match your domain name. After running the command, you will be prompted for the LDAP admin password. When you successfully authentication to the LDAP server, the data will be added. You can then ensure the data is there, by running a search like so: + +``` +ldapsearch -x -LLL -b dc=EXAMPLE,dc=COM 'uid=USER' cn gidNumber +``` + +Where EXAMPLE and COM is your domain name and USER is the user to search for. The command should report the entry you searched for (Figure 3). + +![search][7] + +Figure 3: Our search was successful. + +[Used with permission][5] + +Now that you have your first entry into your LDAP directory, you can edit the above file to create even more. Or, you can wait until the next entry into the series (installing LDAP Account Manager) and take care of the process with the web-based GUI. Either way, you’re one step closer to having LDAP authentication on your network. 
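The same command-line tools can also change existing entries. For example, to update a single attribute on the user added above, you could feed a small LDIF snippet to __ldapmodify__ (the names below follow the same placeholders used earlier):

```
cat > ldap_update.ldif <<EOF
dn: uid=USER,ou=People,dc=EXAMPLE,dc=COM
changetype: modify
replace: loginShell
loginShell: /bin/sh
EOF

ldapmodify -x -D cn=admin,dc=EXAMPLE,dc=COM -W -f ldap_update.ldif
```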
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2019/3/how-install-openldap-ubuntu-server-1804 + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap.png?itok=r9viT8n6 (OpenLDAP) +[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO +[3]: https://www.openldap.org/ +[4]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap_1.jpg?itok=vbWScztB (password) +[5]: /LICENSES/CATEGORY/USED-PERMISSION +[6]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap_2.jpg?itok=10CSCm6Z (domain name) +[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldap_3.jpg?itok=df2Y65Dv (search) diff --git a/sources/tech/20190322 How to set up Fedora Silverblue as a gaming station.md b/sources/tech/20190322 How to set up Fedora Silverblue as a gaming station.md new file mode 100644 index 0000000000..2d794f2d29 --- /dev/null +++ b/sources/tech/20190322 How to set up Fedora Silverblue as a gaming station.md @@ -0,0 +1,100 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to set up Fedora Silverblue as a gaming station) +[#]: via: (https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/) +[#]: author: (Michal Konečný https://fedoramagazine.org/author/zlopez/) + +How to set up Fedora Silverblue as a gaming station +====== + +![][1] + +This article gives you a step by step guide to turn your Fedora Silverblue into an awesome gaming station with the help of Flatpak and Steam. + +Note: Do you need the NVIDIA proprietary driver on Fedora 29 Silverblue for a complete experience? Check out [this blog post][2] for pointers. + +### Add the Flathub repository + +This process starts with a clean Fedora 29 Silverblue installation with a user already created for you. + +First, go to and enable the Flathub repository on your system. To do this, click the _Quick setup_ button on the main page. + +![Quick setup button on flathub.org/home][3] + +This redirects you to where you should click on the Fedora icon. + +![Fedora icon on flatpak.org/setup][4] + +Now you just need to click on _Flathub repository file._ Open the downloaded file with the _Software Install_ application. + +![Flathub repository file button on flatpak.org/setup/Fedora][5] + +The GNOME Software application opens. Next, click on the _Install_ button. This action needs _sudo_ permissions, because it installs the Flathub repository for use by the whole system. + +![Install button in GNOME Software][6] + +### Install the Steam flatpak + +You can now search for the S _team_ flatpak in _GNOME Software_. If you can’t find it, try rebooting — or logout and login — in case _GNOME Software_ didn’t read the metadata. That happens automatically when you next login. + +![Searching for Steam][7] + +Click on the _Steam_ row and the _Steam_ page opens in _GNOME Software._ Next, click on _Install_. + +![Steam page in GNOME Software][8] + +And now you have installed _Steam_ flatpak on your system. + +### Enable Steam Play in Steam + +Now that you have _Steam_ installed, launch it and log in. 
To play Windows games too, you need to enable _Steam Play_ in _Steam._ To enable it, choose _Steam > Settings_ from the menu in the main window. + +![Settings button in Steam][9] + +Navigate to the _Steam Play_ section. You should see the option _Enable Steam Play for supported titles_ is already ticked, but it’s recommended you also tick the _Enable Steam Play_ option for all other titles. There are plenty of games that are actually playable, but not whitelisted yet on _Steam._ To see which games are playable, visit [ProtonDB][10] and search for your favorite game. Or just look for the games with the most platinum reports. + +![Steam Play settings menu on Steam][11] + +If you want to know more about Steam Play, you can read the [article][12] about it here on Fedora Magazine: + +> [Play Windows games on Fedora with Steam Play and Proton][12] + +### Appendix + +You’re now ready to play plenty of games on Linux. Please remember to share your experience with others using the _Contribute_ button on [ProtonDB][10] and report bugs you find on [GitHub][13], because sharing is nice. 🙂 + +* * * + +_Photo by _[ _Hardik Sharma_][14]_ on _[_Unsplash_][15]_._ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/set-up-fedora-silverblue-gaming-station/ + +作者:[Michal Konečný][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/zlopez/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/silverblue-gaming-816x345.jpg +[2]: https://blogs.gnome.org/alexl/2019/03/06/nvidia-drivers-in-fedora-silverblue/ +[3]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-29-00.png +[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-36-35-1024x713.png +[5]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-45-12.png +[6]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-12-57-37.png +[7]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-08-21.png +[8]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-13-59-1024x769.png +[9]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-30-20.png +[10]: https://www.protondb.com/ +[11]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-15-13-41-53.png +[12]: https://fedoramagazine.org/play-windows-games-steam-play-proton/ +[13]: https://github.com/ValveSoftware/Proton +[14]: https://unsplash.com/photos/I7rXyzBNVQM?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[15]: https://unsplash.com/search/photos/video-game-laptop?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText diff --git a/sources/tech/20190322 Printing from the Linux command line.md b/sources/tech/20190322 Printing from the Linux command line.md new file mode 100644 index 0000000000..75aec13bb3 --- /dev/null +++ b/sources/tech/20190322 Printing from the Linux command line.md @@ -0,0 +1,177 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Printing from the Linux command line) +[#]: via: 
(https://www.networkworld.com/article/3373502/printing-from-the-linux-command-line.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +Printing from the Linux command line +====== + +There's a lot more to printing from the Linux command line than the lp command. Check out some of the many available options. + +![Sherry \(CC BY 2.0\)][1] + +Printing from the Linux command line is easy. You use the **lp** command to request a print, and **lpq** to see what print jobs are in the queue, but things get a little more complicated when you want to print double-sided or use portrait mode. And there are lots of other things you might want to do — such as printing multiple copies of a document or canceling a print job. Let's check out some options for getting your printouts to look just the way you want them to when you're printing from the command line. + +### Displaying printer settings + +To view your printer settings from the command line, use the **lpoptions** command. The output should look something like this: + +``` +$ lpoptions +copies=1 device-uri=dnssd://HP%20Color%20LaserJet%20CP2025dn%20(F47468)._pdl-datastream._tcp.local/ finishings=3 job-cancel-after=10800 job-hold-until=no-hold job-priority=50 job-sheets=none,none marker-change-time=1553023232 marker-colors=#000000,#00FFFF,#FF00FF,#FFFF00 marker-levels=18,62,62,63 marker-names='Black\ Cartridge\ HP\ CC530A,Cyan\ Cartridge\ HP\ CC531A,Magenta\ Cartridge\ HP\ CC533A,Yellow\ Cartridge\ HP\ CC532A' marker-types=toner,toner,toner,toner number-up=1 printer-commands=none printer-info='HP Color LaserJet CP2025dn (F47468)' printer-is-accepting-jobs=true printer-is-shared=true printer-is-temporary=false printer-location printer-make-and-model='HP Color LaserJet cp2025dn pcl3, hpcups 3.18.7' printer-state=3 printer-state-change-time=1553023232 printer-state-reasons=none printer-type=167964 printer-uri-supported=ipp://localhost/printers/Color-LaserJet-CP2025dn sides=one-sided +``` + +This output is likely to be a little more human-friendly if you turn its blanks into carriage returns. Notice how many settings are listed. + +NOTE: In the output below, some lines have been reconnected to make this output more readable. + +``` +$ lpoptions | tr " " '\n' +copies=1 +device-uri=dnssd://HP%20Color%20LaserJet%20CP2025dn%20(F47468)._pdl-datastream._tcp.local/ +finishings=3 +job-cancel-after=10800 +job-hold-until=no-hold +job-priority=50 +job-sheets=none,none +marker-change-time=1553023232 +marker-colors=#000000,#00FFFF,#FF00FF,#FFFF00 +marker-levels=18,62,62,63 +marker-names='Black\ Cartridge\ HP\ CC530A, +Cyan\ Cartridge\ HP\ CC531A, +Magenta\ Cartridge\ HP\ CC533A, +Yellow\ Cartridge\ HP\ CC532A' +marker-types=toner,toner,toner,toner +number-up=1 +printer-commands=none +printer-info='HP Color LaserJet CP2025dn (F47468)' +printer-is-accepting-jobs=true +printer-is-shared=true +printer-is-temporary=false +printer-location +printer-make-and-model='HP Color LaserJet cp2025dn pcl3, hpcups 3.18.7' +printer-state=3 +printer-state-change-time=1553023232 +printer-state-reasons=none +printer-type=167964 +printer-uri-supported=ipp://localhost/printers/Color-LaserJet-CP2025dn +sides=one-sided +``` + +With the **-v** option, the **lpinfo** command will list drivers and related information. 
+ +``` +$ lpinfo -v +network ipp +network https +network socket +network beh +direct hp +network lpd +file cups-brf:/ +network ipps +network http +direct hpfax +network dnssd://HP%20Color%20LaserJet%20CP2025dn%20(F47468)._pdl-datastream._tcp.local/ <== printer +network socket://192.168.0.23 <== printer IP +``` + +The lpoptions command will show the settings of your default printer. Use the **-p** option to specify one of a number of available printers. + +``` +$ lpoptions -p LaserJet +``` + +The **lpstat -p** command displays the status of a printer while **lpstat -p -d** also lists available printers. + +``` +$ lpstat -p -d +printer Color-LaserJet-CP2025dn is idle. enabled since Tue 19 Mar 2019 05:07:45 PM EDT +system default destination: Color-LaserJet-CP2025dn +``` + +### Useful commands + +To print a document on the default printer, just use the **lp** command followed by the name of the file you want to print. If the filename includes blanks (rare on Linux systems), either put the name in quotes or start entering the file name and press the tab key to invoke file completion (as shown in the second example below). + +``` +$ lp "never leave home angry" +$ lp never\ leave\ home\ angry +``` + +The **lpq** command displays the print queue. + +``` +$ lpq +Color-LaserJet-CP2025dn is ready and printing +Rank Owner Job File(s) Total Size +active shs 234 agenda 2048 bytes +``` + +With the **-n** option, the lp command allows you to specify the number of copies of a printout you want. + +``` +$ lp -n 11 agenda +``` + +To cancel a print job, you can use the **cancel** or **lprm** command. If you don't act quickly, you might see this: + +``` +$ cancel 229 +cancel: cancel-job failed: Job #229 is already completed - can't cancel. +``` + +### Two-sided printing + +To print in two-sided mode, you can issue your lp command with a **sides** option that says both to print on both sides of the paper and which edge to turn the paper on. This setting represents the normal way that you would expect two-sided portrait documents to look. + +``` +$ lp -o sides=two-sided-long-edge Notes.pdf +``` + +If you want all of your documents to print in two-side mode, you can change your lp settings by using the **lpoptions** command to change the setting for **sides**. + +``` +$ lpoptions -o sides=two-sided-short-edge +``` + +To revert to single-sided printing, you would use a command like this one: + +``` +$ lpoptions -o sides=one-sided +``` + +#### Printing in landscape mode + +To print in landscape mode, you would use the **landscape** option with the lp command. + +``` +$ lp -o landscape penguin.jpg +``` + +### CUPS + +The print system used on Linux systems is the standards-based, open source printing system called CUPS, originally standing for **Common Unix Printing System**. It allows a computer to act as a print server. + +Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3373502/printing-from-the-linux-command-line.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/03/printouts-paper-100791390-large.jpg +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190325 Backup on Fedora Silverblue with Borg.md b/sources/tech/20190325 Backup on Fedora Silverblue with Borg.md new file mode 100644 index 0000000000..8aa5c65139 --- /dev/null +++ b/sources/tech/20190325 Backup on Fedora Silverblue with Borg.md @@ -0,0 +1,314 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Backup on Fedora Silverblue with Borg) +[#]: via: (https://fedoramagazine.org/backup-on-fedora-silverblue-with-borg/) +[#]: author: (Steven Snow https://fedoramagazine.org/author/jakfrost/) + +Backup on Fedora Silverblue with Borg +====== + +![][1] + +When it comes to backing up a Fedora Silverblue system, some of the traditional tools may not function as expected. BorgBackup (Borg) is an alternative available that can provide backup capability for your Silverblue based systems. This how-to explains the steps for using BorgBackup 1.1.8 as a layered package to back up Fedora Silverblue 29 system. + +On a normal Fedora Workstation system, _dnf_ is used to install a package. However, on Fedora Silverblue, _rpm-ostree install_ is used to install new software. This is termed layering on the Silverblue system, since the core ostree is an immutable image and the rpm package is layered onto the core system during the install process resulting in a new local image with the layered package. + +> “BorgBackup (short: Borg) is a deduplicating backup program. Optionally, it supports compression and authenticated encryption.” +> +> From the Borg website + +Additionally, the main way to interact with Borg is via the command line. Reading the Quick Start guide it becomes apparent that Borg is well suited to scripting. In fact, it is pretty much necessary to use some form of shell script when performing repeated thorough backup’s of a system. A basic script is provided in the [Borg Quick Start guide][2] , as a point to get started. + +### Installing Borg + +In a terminal, type the following command to install BorgBackup as a layered package: + +``` +$rpm-ostree install borgbackup +``` +This installs BorgBackup to the Fedora Silverblue system. To use it, reboot into the new ostree with: + +``` +$systemctl reboot +``` + +Now Borg is installed, and ready to use. + +### Some notes about Silverblue and its file system, layered packages and flatpaks + +#### The file system + +Silverblue is an immutable operating system based on ostree, with support for layering rpm’s through the use of rpm-ostree. At the user level, this means the path that appears as _/home_ in a flatpak, will actually be _/var/home_ to the system. For programs like Borg, and other backup tools this is important to remember since they often require the actual path, so in this example that would be _/var/home_ instead of just _/home_. 
+ +Before starting a backup it’s a good idea to understand where potential data could be stored, and then if that data should be backed up. Silverblue’s file system layout is very specific with respect to what is writable and what is not. On Silverblue _/etc_ and _/var_ are the only places that are not immutable, therefore writable. On a single user system, typically the user home directory would be a likely choice for data backup. Normally excluding Downloads, but including Documents and more. Also, _/etc_ is a logical choice for some configuration options you don’t want to go through again. Take notes of what to exclude from your home directory and from _/etc_. Some files and subdirectories of /etc you need root or sudo privileges to access. + +#### Flatpaks + +Flatpak applications store data in your home directory under _$HOME/.var/app/flatpakapp_ , regardless of whether they were installed as user or system. If installed at a user level, there is also data found in _$HOME/.local/share/flatpak/app/_ , or if installed at a system level it will be found in _/var/lib/flatpak/app_ For the purposes of this article, it was enough to list the flatpak’s installed and redirect the output to a file for backing up. Reasoning that if there is a need to reinstall them (flatpaks) the list file could be used to do it from. For a more robust approach, examining the flatpak file system layouts can be done [here.][3] + +#### Layering and rpm-ostree + +There is no easy way for a user to retrieve the layered package information aside from the + +$rpm-ostree status + +command. Which shows the current and previous ostree commit’s layered packages, and if any commits are pinned they would be listed too. Below is the output on my system, note the LayeredPackages label at the end of each commit listing. + +![][4] + +The command + +$ostree log + +is useful to retrieve a history of commits for the system. Type it in your terminal to see the output. + +### Preparing the backup repo + +In order to use Borg to back up a system, you need to first initialize a Borg repo. Before initializing, the decision must be made to use encryption (or not) and if so, what mode. + +With Borg the data can be protected using 256-bit AES encryption. The integrity and authenticity of the data, which is encrypted on the clientside, is verified using HMAC-SHA256. The encryption modes are listed below. + +#### Encryption modes + +Hash/MAC | Not encrypted no auth | Not encrypted, but authenticated | Encrypted (AEAD w/ AES) and authenticated +---|---|---|--- +SHA-256 | none | authenticated | repokey keyfile +BLAKE2b | n/a | authenticated-blake2 | repokey-blake2 keyfile-blake2 + +The encryption mode decided on was keyfile-blake2, which requires a passphrase to be entered as well as the keyfile being needed. + +Borg can use the following compression types which you can specify at backup creation time. + + * lz4 (super fast, low compression) + * zstd (wide range from high speed and low compression to high compression and lower speed) + * zlib (medium speed and compression) + * lzma (low speed, high compression) + + + +For compression lzma was chosen at setting 6, the highest sensible compression level. The initial backup took 4 minutes 59.98 seconds to complete, while subsequent ones have taken less than 20 seconds as a rule. + +#### Borg init + +To be able to perform backups with Borg, first, create a directory for your Borg repo: + +``` +$mkdir borg_testdir +``` + +and then change to it. 
+ +``` +$cd borg_testdir +``` + +Next, initialize the Borg repo with the borg init command: + +``` +$borg init -e=keyfile-blake2 . +``` + +Borg will prompt for your passphrase, which is case sensitive, and at creation must be entered twice. A suitable passphrase of alpha-numeric characters and symbols, and of a reasonable length should be created. It can be changed later on if needed without affecting the keyfile, or your encrypted data. The keyfile can be exported and should be for backup purposes, along with the passphrase, and stored somewhere secure. + +#### Creating a backup + +Next, create a test backup of the Documents directory, remember on Silverblue the actual path to the user Documents directory is _/var/home/username/Documents_. In practice on Silverblue, it is suitable to use _~/_ or _$HOME_ to indicate your home directory. The distinction between the actual path and environment variables being the real path does not change whereas the environment variable can be changed. From within the Borg repo, type the following command + +``` +$borg create .::borgtest /var/home/username/Documents +``` + +and that will create a backup of the Documents directory named **borgtest**. To break down the command a bit; **create** requires a **repo location** , in this case **.** since we are in the **top level** of the **repo**. That makes the path **.::borgtest** for the backup name. Finally **/var/home/username/Documents** is the location of the data we are backing up. + +The following command + +``` +$borg list +``` + +returns a listing of your backups, after a few days it look similar to this: + +![Output of borg list command in my backup repo.][5] + +To delete the test backup, type the following in the terminal + +``` +$borg delete .::borgtest +``` + +at this time Borg will prompt for the encryption passphrase in order to delete the backup. + +### Pulling it together into a shell script + +As mentioned Borg is an eminently script friendly tool. The Borg documentation links provided are great places to find out more about BorgBackup, and there is more. The example script provided by Borg was modified to suit this article. Below is a version with the basic parts that others could use as a starting point if desired. It tries to capture the three information pieces of the system and apps mentioned earlier. The output of _flatpak list_ , _rpm-ostree status_ , and _ostree log_ as human readable files given the same names each time so overwritten each time. The repo setup had to be changed since the original example is for a remote server login with ssh, and this was intended to be used locally. The other changes mostly involved correcting directory paths, tailoring the excluded content to suit this systems home directory, and choosing the compression. +``` +#!/bin/sh + + + +# This gets the ostree commit data, this file is overwritten each time + +sudo ostree log fedora-workstation:fedora/29/x86_64/silverblue > ostree.log + + + +rpm-ostree status > rpm-ostree-status.lst + + + +# Flatpaks get listed too + +flatpak list > flatpak.lst + + + +# Setting this, so the repo does not need to be given on the commandline: + +export BORG_REPO=/var/home/usernamehere/borg_testdir + + + +# Setting this, so you won't be asked for your repository passphrase:(Caution advised!) 
+ +export BORG_PASSPHRASE='usercomplexpassphrasehere' + + + +# some helpers and error handling: + +info() { printf "\n%s %s\n\n" "$( date )" "$*" >&2; } + +trap 'echo $( date ) Backup interrupted >&2; exit 2' INT TERM + + + +info "Starting backup" + + + +# Backup the most important directories into an archive named after + +# the machine this script is currently running on: + +borg create \ + + --verbose \ + + --filter AME \ + + --list \ + + --stats \ + + --show-rc \ + + --compression auto,lzma,6 \ + + --exclude-caches \ + + --exclude '/var/home/*/borg_testdir'\ + + --exclude '/var/home/*/Downloads/'\ + + --exclude '/var/home/*/.var/' \ + + --exclude '/var/home/*/Desktop/'\ + + --exclude '/var/home/*/bin/' \ + + \ + + ::'{hostname}-{now}' \ + + /etc \ + + /var/home/ssnow \ + + + + backup_exit=$? + + + + info "Pruning repository" + + + + # Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly + + # archives of THIS machine. The '{hostname}-' prefix is very important to + + # limit prune's operation to this machine's archives and not apply to + + # other machines' archives also: + + + + borg prune \ + + --list \ + + --prefix '{hostname}-' \ + + --show-rc \ + + --keep-daily 7 \ + + --keep-weekly 4 \ + + --keep-monthly 6 \ + + + + prune_exit=$? + + + + # use highest exit code as global exit code + + global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit )) + + + + if [ ${global_exit} -eq 0 ]; then + + info "Backup and Prune finished successfully" + + elif [ ${global_exit} -eq 1 ]; then + + info "Backup and/or Prune finished with warnings" + + else + + info "Backup and/or Prune finished with errors" + + fi + + + + exit ${global_exit} +``` + +This listing is missing some more excludes that were specific to the test system setup and backup intentions, and is very basic with room for customization and improvement. For this test to write an article it wasn’t a problem having the passphrase inside of a shell script file. Under normal use it is better to enter the passphrase each time when performing the backup. 
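A backup is only as useful as your ability to restore from it, so it is worth spot-checking an archive now and then. A minimal check could look like the following; the archive name is just an example (use one reported by _borg list_ ), and note that _borg extract_ restores into the current directory using paths without the leading slash:

```
$ cd $(mktemp -d)
$ borg list /var/home/username/borg_testdir
$ borg extract /var/home/username/borg_testdir::hostname-2019-03-20T10:00:00 var/home/username/Documents
```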
+ +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/backup-on-fedora-silverblue-with-borg/ + +作者:[Steven Snow][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/jakfrost/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/borg-816x345.jpg +[2]: https://borgbackup.readthedocs.io/en/stable/quickstart.html +[3]: https://github.com/flatpak/flatpak/wiki/Filesystem +[4]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-18-17-11-21-1024x285.png +[5]: https://fedoramagazine.org/wp-content/uploads/2019/03/Screenshot-from-2019-03-18-18-56-03.png diff --git a/sources/tech/20190325 Contribute at the Fedora Test Day for Fedora Modularity.md b/sources/tech/20190325 Contribute at the Fedora Test Day for Fedora Modularity.md new file mode 100644 index 0000000000..3de297db06 --- /dev/null +++ b/sources/tech/20190325 Contribute at the Fedora Test Day for Fedora Modularity.md @@ -0,0 +1,50 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Contribute at the Fedora Test Day for Fedora Modularity) +[#]: via: (https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/) +[#]: author: (Sumantro Mukherjee https://fedoramagazine.org/author/sumantrom/) + +Contribute at the Fedora Test Day for Fedora Modularity +====== + +![][1] + +Modularity lets you keep the right version of an application, language runtime, or other software on your Fedora system even as the operating system is updated. You can read more about Modularity in general on the [Fedora documentation site][2]. + +The Modularity folks have been working on Modules for everyone. As a result, the Fedora Modularity and QA teams have organized a test day for **Tuesday, March 26, 2019**. Refer to the [wiki page][3] for links to the test images you’ll need to participate. Read on for more information on the test day. + +### How do test days work? + +A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started. + +To contribute, you only need to be able to do the following things: + + * Download test materials, which include some large files + * Read and follow directions step by step + + + +The [wiki page][3] for the modularity test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day [web application][4]. If you’re available on or around the day of the event, please do some testing and report your results. + +Happy testing, and we hope to see you on test day. 
+ +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/contribute-at-the-fedora-test-day-for-fedora-modularity/ + +作者:[Sumantro Mukherjee][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/sumantrom/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2015/03/test-days-945x400.png +[2]: https://docs.fedoraproject.org/en-US/modularity/ +[3]: https://fedoraproject.org/wiki/Test_Day:2019-03-26_Modularity_Test_Day +[4]: http://testdays.fedorainfracloud.org/events/61 diff --git a/sources/tech/20190325 How Open Source Is Accelerating NFV Transformation.md b/sources/tech/20190325 How Open Source Is Accelerating NFV Transformation.md new file mode 100644 index 0000000000..22f7df8876 --- /dev/null +++ b/sources/tech/20190325 How Open Source Is Accelerating NFV Transformation.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How Open Source Is Accelerating NFV Transformation) +[#]: via: (https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation) +[#]: author: (Pam Baker https://www.linux.com/users/pambaker) + +How Open Source Is Accelerating NFV Transformation +====== + +![NFV][1] + +In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, about the role of open source in innovation for telecommunications service providers. + +[Creative Commons Zero][2] + +Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of [open source as the path to innovation][3] resonates on many levels. + +In anticipation of the upcoming [Open Networking Summit][4], we talked with [Thomas Nadeau][5], Technical Director NFV at Red Hat, who gave a [keynote address][6] at last year’s event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers. + +One reason for open source’s broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs. + +“There are projects now, like Kubernetes, that are too big for any one company to do. There's technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.” + +Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies. + +**Linux.com: Why is open source central to innovation in general for telecommunications service providers?** + +**Nadeau:** The first reason is that the service providers can be in more control of their own destiny. There are some service providers that are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed. + +And third, open source frees service providers from having to struggle with using and managing monolith systems when all they really wanted was a handful of features. 
Fortunately, network equipment providers are responding to this overkill problem. They're becoming much more flexible, more modular, and open source is the best means to achieve that. + +**Linux.com: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.** + +**Nadeau:** Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not to just expand, but to exist in today's marketplace. Without open source in that virtualization space, you’re stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete. + +There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms. + +NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came "disaggregated VMs" where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it's still NFV. + +You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great. + +But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we're back to square one where you lose 80% of the performance because of the latest SOA model that they've implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it's still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations. + +**Linux.com: Tell us about the underlying Linux in NFV, and why that combo is so powerful.** + +**Nadeau:** Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it's the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it's all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It's secure, it's flexible, and scalable, so operators can really use it as a tool now. + +**Linux.com: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. 
How can network function virtualization (NFV) help in either or both strategies?** + +**Nadeau:** Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code which allows them to do things differently than they have in the past. There are two main things they do differently today. One is they design their own network. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot, if not identical, to their competitors’ businesses. + +These telcos are taking a real “in-depth, roll up your sleeves” approach. ow that they understand what they're using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to partner programs that we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions. + +_Learn more at[Open Networking Summit][4], happening April 3-5 at the San Jose McEnery Convention Center._ + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/2019/3/how-open-source-accelerating-nfv-transformation + +作者:[Pam Baker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/pambaker +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/nfv-443852_1920.jpg?itok=uFbzmEPY (NFV) +[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO +[3]: https://www.linuxfoundation.org/blog/2018/02/open-source-standards-team-red-hat-measures-open-source-success/ +[4]: https://events.linuxfoundation.org/events/open-networking-summit-north-america-2019/ +[5]: https://www.linkedin.com/in/tom-nadeau/ +[6]: https://onseu18.sched.com/event/Fmpr diff --git a/sources/tech/20190325 Reducing sysadmin toil with Kubernetes controllers.md b/sources/tech/20190325 Reducing sysadmin toil with Kubernetes controllers.md new file mode 100644 index 0000000000..80ddb77264 --- /dev/null +++ b/sources/tech/20190325 Reducing sysadmin toil with Kubernetes controllers.md @@ -0,0 +1,166 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Reducing sysadmin toil with Kubernetes controllers) +[#]: via: (https://opensource.com/article/19/3/reducing-sysadmin-toil-kubernetes-controllers) +[#]: author: (Paul Czarkowski https://opensource.com/users/paulczar) + +Reducing sysadmin toil with Kubernetes controllers +====== + +Controllers can ease a sysadmin's workload by handling things like creating and managing DNS addresses and SSL certificates. + +![][1] + +Kubernetes is a platform for reducing toil cunningly disguised as a platform for running containers. The element that allows for both running containers and reducing toil is the Kubernetes concept of a **Controller**. + +Most resources in Kubernetes are managed by **kube-controller-manager** , or "controller" for short. A [controller][2] is defined as "a control loop that watches the shared state of a cluster … and makes changes attempting to move the current state toward the desired state." 
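You can watch a control loop like this in action on any cluster you have handy. Delete a Pod that is owned by a ReplicaSet, and its controller creates a replacement almost immediately (the pod name below is only a placeholder):

```
kubectl get pods                            # note the pod names
kubectl delete pod my-app-5d4f8c7b9b-abcde  # remove one pod owned by a ReplicaSet
kubectl get pods                            # a newly created replacement appears
```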
Think of it like this: A Kubernetes controller is to a microservice as a Chef recipe (or an Ansible playbook) is to a monolith. + +Each Kubernetes resource is controlled by its own control loop. This is a step forward from previous systems like Chef or Puppet, which both have control loops at the server level, but not the resource level. A controller is a fairly simple piece of code that creates a control loop over a single resource to ensure the resource is behaving correctly. These control loops can stack together to create complex functionality with simple interfaces. + +The canonical example of this in action is in how we manage Pods in Kubernetes. A Pod is effectively a running copy of an application that a specific worker node is asked to run. If that application crashes, the kubelet running on that node will start it again. However, if that node crashes, the Pod is not recovered, as the control loop (via the kubelet process) responsible for the resource no longer exists. To make applications more resilient, Kubernetes has the ReplicaSet controller. + +The ReplicaSet controller is bundled inside the Kubernetes **controller-manager** , which runs on the Kubernetes master node and contains the controllers for these more advanced resources. The ReplicaSet controller is responsible for ensuring that a set number of copies of your application is always running. To do this, the ReplicaSet controller requests that a given number of Pods is created. It then routinely checks that the correct number of Pods is still running and will request more Pods or destroy existing Pods to do so. + +By requesting a ReplicaSet from Kubernetes, you get a self-healing deployment of your application. You can further add lifecycle management to your workload by requesting [a Deployment][3], which is a controller that manages ReplicaSets and provides rolling upgrades by managing multiple versions of your application's ReplicaSets. + +These controllers are great for managing Kubernetes resources and fantastic for managing resources outside of Kubernetes. The [Cloud Controller Manager][4] is a grouping of Kubernetes controllers that acts on resources external to Kubernetes, specifically resources that provide functionality to Kubernetes on the underlying cloud infrastructure. This is what drives Kubernetes' ability to do things like having a **LoadBalancer** [Service][5] type create and manage a cloud-specific load-balancer (e.g., an Elastic Load Balancer on AWS). + +Furthermore, you can extend Kubernetes by writing a controller that watches for events and annotations and performs extra work, acting on Kubernetes resources or external resources that have some form of programmable API. + +To review: + + * Controllers are a fundamental building block of Kubernetes' functionality. + * A controller forms a control loop to ensure that the state of a given resource matches the requested state. + * Kubernetes provides controllers via Controller Manager and Cloud Controller Manager processes that provide additional resilience and functionality. + * The ReplicaSet controller adds resiliency to pods by ensuring the correct number of replicas is running. + * A Deployment controller adds rolling upgrade capabilities to ReplicaSets. + * You can extend Kubernetes' functionality by writing your own controllers. + + + +### Controllers reduce sysadmin toil + +Some of the most common tickets in a sysadmin's queue are for fairly simple tasks that should be automated, but for various reasons are not. 
For example, creating or updating a DNS record generally requires updating a [zone file][6], but one bad entry and you can take down your entire DNS infrastructure. Or how about those tickets that look like _[SYSAD-42214] Expired SSL Certificate - Production is down_? + +[![DNS Haiku][7]][8] + +DNS haiku, image by HasturHasturHamster + +What if I told you that Kubernetes could manage these things for you by running some additional controllers? + +Imagine a world where asking Kubernetes to run applications for you would automatically create and manage DNS addresses and SSL certificates. What a world we live in! + +#### Example: External DNS controller + +The **[external-dns][9]** controller is a perfect example of Kubernetes treating operations as a microservice. You configure it with your DNS provider, and it will watch resources including Services and Ingress controllers. When one of those resources changes, it will inspect them for annotations that will tell it when it needs to perform an action. + +With the **external-dns** controller running in your cluster, you can add the following annotation to a service, and it will go out and create a matching [DNS A record][10] for that resource: +``` +kubectl annotate service nginx \ +"external-dns.alpha.kubernetes.io/hostname=nginx.example.org." +``` +You can change other characteristics, such as the DNS record's TTL value: +``` +kubectl annotate service nginx \ +"external-dns.alpha.kubernetes.io/ttl=10" +``` +Just like that, you now have automatic DNS management for your applications and services in Kubernetes that reacts to any changes in your cluster to ensure your DNS is correct. + +#### Example: Certificate manager operator + +Like the **external-dns** controller, the [**cert-manager**][11] will react to changes in resources, but it also comes with a custom resource definition (CRD) that will allow you to request certificates as a resource on their own, not just as a byproduct of an annotation. + +**cert-manager** works with [Let's Encrypt][12] and other sources of certificates to request valid, signed Transport Layer Security (TLS) certificates. You can even use it in combination with **external-dns** , like in the following example, which registers **web.example.com** , retrieves a TLS certificate from Let's Encrypt, and stores it in a Secret. + +``` +apiVersion: extensions/v1beta1 +kind: Ingress +metadata: + annotations: + certmanager.k8s.io/acme-http01-edit-in-place: "true" + certmanager.k8s.io/cluster-issuer: letsencrypt-prod + kubernetes.io/tls-acme: "true" + name: example +spec: + rules: + - host: web.example.com + http: + paths: + - backend: + serviceName: example + servicePort: 80 + path: /* + tls: + - hosts: + - web.example.com + secretName: example-tls +``` + +You can also request a certificate directly from the **cert-manager** CRD, like in the following example. As in the above, it will result in a certificate key pair stored in a Kubernetes Secret: +``` +apiVersion: certmanager.k8s.io/v1alpha1 +kind: Certificate +metadata: + name: example-com + namespace: default +spec: + secretName: example-com-tls + issuerRef: + name: letsencrypt-staging + commonName: example.com + dnsNames: + - www.example.com + acme: + config: + - http01: + ingressClass: nginx + domains: + - example.com + - http01: + ingress: my-ingress + domains: + - www.example.com +``` + +### Conclusion + +This was a quick look at one way Kubernetes is helping enable a new wave of changes in how we operate software. 
This is one of my favorite topics, and I look forward to sharing more on [Opensource.com][14] and my [blog][15]. I'd also like to hear how you use controllers—message me on Twitter [@pczarkowski][16]. + +* * * + +_This article is based on[Cloud Native Operations - Kubernetes Controllers][17] originally published on Paul Czarkowski's blog._ + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/reducing-sysadmin-toil-kubernetes-controllers + +作者:[Paul Czarkowski][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/paulczar +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv +[2]: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/ +[3]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ +[4]: https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/ +[5]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types +[6]: https://en.wikipedia.org/wiki/Zone_file +[7]: https://opensource.com/sites/default/files/uploads/dns_haiku.png (DNS Haiku) +[8]: https://www.reddit.com/r/sysadmin/comments/4oj7pv/network_solutions_haiku/ +[9]: https://github.com/kubernetes-incubator/external-dns +[10]: https://en.wikipedia.org/wiki/List_of_DNS_record_types#Resource_records +[11]: http://docs.cert-manager.io/en/latest/ +[12]: https://letsencrypt.org/ +[13]: http://www.example.com +[14]: http://Opensource.com +[15]: https://tech.paulcz.net/blog/ +[16]: https://twitter.com/pczarkowski +[17]: https://tech.paulcz.net/blog/cloud-native-operations-k8s-controllers/ diff --git a/sources/tech/20190326 An inside look at an IIoT-powered smart factory.md b/sources/tech/20190326 An inside look at an IIoT-powered smart factory.md new file mode 100644 index 0000000000..52c7c925dd --- /dev/null +++ b/sources/tech/20190326 An inside look at an IIoT-powered smart factory.md @@ -0,0 +1,74 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (An inside look at an IIoT-powered smart factory) +[#]: via: (https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all) +[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) + +An inside look at an IIoT-powered smart factory +====== + +### Despite housing some 50 robots and 50 people, Tempo Automation’s gleaming connected factory relies on industrial IoT and looks more like a high-tech startup office than a manufacturing plant. + +![Tempo Automation][1] + +As someone who’s spent his whole career working in offices, not factories, I had very little idea what a modern “smart factory” powered by the industrial Internet of Things (IIoT) might look like. That’s why I was so interested in [Tempo Automation][2]’s new 42,000-square-foot facility in San Francisco’s trendy Design District. 
Frankly, I pictured the company’s facility, which uses IIoT to automatically configure, operate, and monitor the prototyping and low-volume production of printed circuit board assemblies (PCBAs), as a cacophony of robots and conveyor belts attended to by a grizzled band of grease-stained technicians. You know, a 21st-century update of Charlie Chaplin’s 1936 classic *Modern Times*, making equipment for customers in the aerospace, medtech, industrial automation, consumer electronics, and automotive industries. (The company just inked a [new contract with Lockheed Martin][3].)

**[ Learn more about the [industrial Internet of Things][4]. | Get regularly scheduled insights by [signing up for Network World newsletters][5]. ]**

Not exactly. As you can see from the pictures below, despite housing some 50 robots and 50 people, this gleaming “connected factory” looks more like a high-tech startup office, with just as many computers and a few more hard-to-identify machines, including Solder Jet and Stencil Printers, zone reflow ovens, 3D X-ray devices, and many more.

![Tempo Automation office space][6]

![Tempo Automation factory floor][7]

## How Tempo Automation's 'smart factory' works

On the front end, Tempo’s customers upload CAD files with their board designs and Bills of Materials (BOM) listing the required parts to be used. After performing feature extraction on the design and developing a virtual model of the finished product, the Tempo system (a platform called Tempocom) creates a manufacturing plan and automatically programs the factory’s machines. Tempocom also creates work plans for the factory employees, uploading them to the networked IIoT mobile devices they all carry. Updated in real time based on design and process changes, this “digital traveler” tells workers where to go and what to work on next.

While Tempocom is planning and organizing the internal work of production, the system is also connected to supplier databases, seeking and ordering the parts that will be used in assembly, optimizing for speed of delivery to the Tempo factory.

## Connecting the digital thread

“There could be up to 20 robots, 400 unique parts, and 25 people working on the factory floor to produce one order start to finish in a matter of hours,” explained [Shashank Samala][8], Tempo’s co-founder and vice president of product, in an email. Tempo “employs IIoT to automatically configure, operate, and monitor” the entire process, coordinated by a “connected manufacturing system” that creates an “unbroken digital thread from design intent of the engineer captured on the website, to suppliers distributed across the country, to robots and people on the factory floor.”

Rather than the machines on the floor functioning as “isolated islands of technology,” Samala added, Tempo Automation uses [Amazon Web Services (AWS) GovCloud][9] to network everything in a bi-directional feedback loop.

“After customers upload their design to the Tempo platform, our software extracts the design features and then streams relevant data down to all the devices, processes, and robots on the factory floor,” he said. “This loop then works the other way: As the robots build the products, they collect data and feedback about the design during production.
This data is then streamed back through the Tempo secure cloud architecture to the customer as a ‘Production Forensics’ report.” + +Samala claimed the system has “streamlined operations, improved collaboration, and simplified remote management and control.” + +## Traditional IoT, too + +Of course, the Tempo factory isn’t all fancy, cutting-edge IIoT implementations. According to Ryan Saul, vice president of manufacturing,the plant also includes an array of IoT sensors that track temperature, humidity, equipment status, job progress, reported defects, and so on to help engineers and executives understand how the facility is operating. + +Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3384378/an-inside-look-at-tempo-automations-iiot-powered-smart-factory.html#tk.rss_all + +作者:[Fredric Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Fredric-Paul/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-floor-100791923-large.jpg +[2]: http://www.tempoautomation.com/ +[3]: https://www.businesswire.com/news/home/20190325005097/en/Tempo-Automation-Announces-Contract-Lockheed-Martin +[4]: https://www.networkworld.com/article/3243928/internet-of-things/what-is-the-industrial-iot-and-why-the-stakes-are-so-high.html#nww-fsb +[5]: https://www.networkworld.com/newsletters/signup.html#nww-fsb +[6]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-2-100791921-large.jpg +[7]: https://images.idgesg.net/images/article/2019/03/tempo-automation-iiot-factory-100791922-large.jpg +[8]: https://www.linkedin.com/in/shashanksamala/ +[9]: https://aws.amazon.com/govcloud-us/ +[10]: https://www.facebook.com/NetworkWorld/ +[11]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190326 Bringing Kubernetes to the bare-metal edge.md b/sources/tech/20190326 Bringing Kubernetes to the bare-metal edge.md new file mode 100644 index 0000000000..836eac23be --- /dev/null +++ b/sources/tech/20190326 Bringing Kubernetes to the bare-metal edge.md @@ -0,0 +1,72 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Bringing Kubernetes to the bare-metal edge) +[#]: via: (https://opensource.com/article/19/3/bringing-kubernetes-bare-metal-edge) +[#]: author: (John Studarus https://opensource.com/users/studarus) + +Bringing Kubernetes to the bare-metal edge +====== +New Kubespray features enable Kubernetes clusters to be deployed across +next-generation edge locations. +![cubes coming together to create a larger cube][1] + +[Kubespray][2], a community project that provides Ansible playbooks for the deployment and management of Kubernetes clusters, recently added support for the bare-metal cloud [Packet][3]. This allows Kubernetes clusters to be deployed across next-generation edge locations, including [cell-tower based micro datacenters][4]. + +Packet, which is unique in its bare-metal focus, expands Kubespray's support beyond the usual clouds—Amazon Web Services, Google Compute Engine, Azure, OpenStack, vSphere, and Oracle Cloud Infrastructure. 
Kubespray removes the complexities of standing up a Kubernetes cluster through automation using Terraform and Ansible. Terraform provisions the infrastructure and installs the prerequisites for the Ansible installation. Terraform provider plugins enable support for a variety of different cloud providers. The Ansible playbook then deploys and configures Kubernetes. + +Since there are already [detailed instructions online][5] for deploying with Kubespray on Packet, I'll focus on why bare-metal support is important for Kubernetes and what's required to make it happen. + +### Why bare metal? + +Historically, Kubernetes deployments relied upon the "creature comforts" of a public cloud or a fully managed private cloud to provide virtual machines and networking infrastructure for running Kubernetes. This adds a layer of abstraction (e.g., a hypervisor with virtual machines) that Kubernetes doesn't necessarily need. In fact, Kubernetes began its life on bare metal as Google's Borg. + +As we move workloads closer to the end user (in the form of edge computing) and deploy to more diverse environments (including hybrid and on-premises infrastructure of different architectures and sizes), relying on a homogenous public cloud substrate isn't always possible or ideal. For instance, with edge locations being resource constrained, it is more efficient and practical to run Kubernetes directly on bare metal. + +### Mind the gaps + +Without a full-featured public cloud underneath a bare-metal cluster, some traditional capabilities, such as load balancing and storage orchestration, will need to be managed directly within the Kubernetes cluster. Luckily there are projects, such as [MetalLB][6] and [Rook][7], that provide this support for Kubernetes. + +MetalLB, a Layer 2 and Layer 3 load balancer, is integrated into Kubespray, and it's easy to install support for Rook, which orchestrates Ceph to provide distributed and replicated storage for a Kubernetes cluster, on a bare-metal cluster. In addition to enabling full functionality, this "bring your own" approach to storage and load balancing removes reliance upon specific cloud services, helping you avoid lock-in with an approach that can be installed anywhere. + +Kubespray has support for ARM64 processors. The ARM architecture (which is starting to show up regularly in datacenter-grade hardware, SmartNICs, and other custom accelerators) has a long history in mobile and embedded devices, making it well-suited for edge deployments. + +Going forward, I hope to see deeper integration with MetalLB and Rook as well as bare-metal continuous integration (CI) of daily builds atop a number of different hardware configurations. Access to automated bare metal at Packet enables testing and maintaining support across various processor types, storage options, and networking setups. This will help ensure that Kubespray-powered Kubernetes can be deployed and managed confidently across public clouds, bare metal, and edge environments. + +### It takes a village + +Kubespray is an open source project driven by the community, indebted to its core developers and contributors as well as the folks that assisted with the Packet integration. Contributors include [Maxime Guyot][8] and [Aivars Sterns][9] for the initial commits and code reviews, [Rong Zhang][10] and [Ed Vielmetti][11] for document reviews, as well as [Tomáš Karásek][12] (who maintains the Packet Go library and Terraform provider). 
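
For readers who want to see what the two-phase flow described above looks like in practice, here is a rough sketch of a Kubespray deployment from the command line. The directory layout, inventory file names, and Packet-specific variable file shown here are assumptions drawn from the Kubespray documentation linked earlier and may differ between releases, so check the project's docs for the exact invocations:

```
# Fetch Kubespray and install its Ansible/Python dependencies
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
sudo pip install -r requirements.txt

# Start from the sample inventory shipped with the project
cp -rfp inventory/sample inventory/mycluster

# Phase 1: Terraform provisions the bare-metal servers on Packet
# (assumes a Packet API token and project ID are set in the variables file)
terraform init contrib/terraform/packet
terraform apply -var-file=cluster.tfvars contrib/terraform/packet

# Phase 2: Ansible deploys and configures Kubernetes on those machines
ansible-playbook -i inventory/mycluster/hosts.ini --become --become-user=root cluster.yml
```

Once the playbook finishes, add-ons such as MetalLB and Rook can be layered on top to cover the load-balancing and storage gaps discussed above.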
+ +* * * + +_John Studarus will present[The Open Micro Edge Data Center][13] at the [Open Infrastructure Summit][14], April 29-May 1 in Denver._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/bringing-kubernetes-bare-metal-edge + +作者:[John Studarus][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/studarus +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube) +[2]: https://kubespray.io/ +[3]: https://www.packet.com/ +[4]: https://twitter.com/packethost/status/1062147355108085760 +[5]: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/packet.md +[6]: https://metallb.universe.tf/ +[7]: https://rook.io/ +[8]: https://twitter.com/Miouge +[9]: https://github.com/Atoms +[10]: https://github.com/riverzhang +[11]: https://twitter.com/vielmetti +[12]: https://t0mk.github.io/ +[13]: https://www.openstack.org/summit/denver-2019/summit-schedule/events/23153/the-open-micro-edge-data-center +[14]: https://openstack.org/summit diff --git a/sources/tech/20190326 Changes in SD-WAN Purchase Drivers Show Maturity of the Technology.md b/sources/tech/20190326 Changes in SD-WAN Purchase Drivers Show Maturity of the Technology.md new file mode 100644 index 0000000000..803b6a993d --- /dev/null +++ b/sources/tech/20190326 Changes in SD-WAN Purchase Drivers Show Maturity of the Technology.md @@ -0,0 +1,65 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Changes in SD-WAN Purchase Drivers Show Maturity of the Technology) +[#]: via: (https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all) +[#]: author: (Cliff Grossner https://www.networkworld.com/author/Cliff-Grossner/) + +Changes in SD-WAN Purchase Drivers Show Maturity of the Technology +====== + +![istock][1] + +[SD-WANs][2] have been available now for the past five years, but adoption has been light compared to that of the overall WAN market. This should be no surprise, as the technology was immature, and customers were dipping their toes in the water first as a test. Recently, however, there are signs that the market is maturing, which also happens to coincide with an acceleration of the market. + +Evidence of the maturation of SD-WANs can be seen in the most recent IHS Markit _Campus LAN and WAN SDN Strategies and Leadership North American Enterprise Survey_. Exhibit 1 shows that the top drivers of SD-WAN deployments are the simplification of WAN provisioning, automation capabilities. and direct cloud connectivity—all of which require an architectural change. + +This is in stark contrast to the approach of early adopters looking for a reduction in opex and capex savings, doing so in the past by shifting to cheap broadband and low-cost branch hardware. The survey data finds that opex savings now ranks tied in fifth place among the purchase drivers of SD-WAN; and that reduced capex is last, indicating that cost savings no longer possess the same level of importance as with early adopters. 
+ +The shift in purchase drivers indicates companies are looking for SD-WAN to provide more value than legacy WAN. + +With [SD-WAN][3], the “software defined” indicates that the control plane has been separated from the data plane, enabling the control plane to be abstracted away from the hardware and allowing centralized, distributed, and hybrid control architectures, working alongside the centralized management of those architectures. This provides many benefits, the biggest of which is to make WAN provisioning easier. + +![Exhibit 1: Simplification and automation are top drivers for SD-WAN.][4] + +With SD-WAN, most mainstream buyers now demand Zero Touch Provisioning, where the SD-WAN appliance automatically calls home when it attaches to the network and pulls its configuration down from a centralized location. Also, changes can be made through a centralized console and then immediately pushed out to every device. This can automate many of the mundane and repetitive tasks associated with running a network. + +Such a setup carries many benefits—the most important being that highly skilled network engineers can dedicate more time to innovation and less time to working on tasks associated with “keeping the lights on.” + +At present, most resources—time and money—associated with running the WAN are allocated to maintaining the status quo. In the cloud era, however, business leaders embracing digital transformation are looking to their IT organization to help drive innovation and leapfrog the competition. SD-WANs can modernize the network, and the technology will tip the IT resource scale back in favor of innovation. + +### Mainstream buyers set new expectations for SD-WAN + +With early adopters, technology innovation is key because adopters are generally tech-savvy buyers and are always looking to use the latest and greatest to gain an edge. With mainstream buyers, other concerns arise. Exhibit 2 from the IHS Markit survey shows that technological innovation now ranks tied in fourth place in what buyers look for from an SD-WAN provider. While innovation is still important, factors such as security, financial stability, and product service and reliability rank higher. And although businesses need a strong technical solution, it cannot be achieved at the expense of security, vendor stability, or quality without putting operations at risk. + +It’s not surprising, then, that security turned out to be the overwhelming top evaluation criterion, as SD-WANs enable businesses to implement local internet breakout and cloud on-ramp features. Overall, SD-WANs help make applications perform better, especially as enterprises deploy workloads in off-premises, cloud-service-provider-operated data centers as they build their hybrid and multi-clouds. + +Another security capability of SD-WANs is their ability to easily implement segmentation, which enables businesses to establish centrally defined and globally consistent security policies that isolate traffic. For example, a retailer could isolate point-of-sale systems from its guest Wi-Fi network. [SD-WAN vendors][5] can also establish partnerships with well-known security vendors that enable the SD-WAN software to be service chained into application traffic flows, in the process allowing mainstream buyers their choice of security technology. 
+ +![Exhibit 2: SD-WAN buyers now want security and financially viable vendors.][6] + +### The bottom line + +The SD-WAN market is maturing, and the shift from early adopters to mainstream businesses will create a “rising tide” that will benefit all SD-WAN buyers in the WAN ecosystem. As a result, vendors will work to meet calls emphasizing greater simplicity and risk reduction, as well as bring about features that provide an integrated connectivity fabric for enterprise edge, hybrid, and multi-clouds. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3384103/changes-in-sd-wan-purchase-drivers-show-maturity-of-the-technology.html#tk.rss_all + +作者:[Cliff Grossner][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Cliff-Grossner/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/03/istock-998475736-100791932-large.jpg +[2]: https://www.silver-peak.com/sd-wan +[3]: https://www.silver-peak.com/sd-wan/sd-wan-explained +[4]: https://images.idgesg.net/images/article/2019/03/chart-1_post-10-100791930-large.jpg +[5]: https://www.silver-peak.com/sd-wan/choosing-an-sd-wan-vendor +[6]: https://images.idgesg.net/images/article/2019/03/chart-2_post-10-100791931-large.jpg diff --git a/sources/tech/20190326 How to use NetBSD on a Raspberry Pi.md b/sources/tech/20190326 How to use NetBSD on a Raspberry Pi.md new file mode 100644 index 0000000000..37c14fec39 --- /dev/null +++ b/sources/tech/20190326 How to use NetBSD on a Raspberry Pi.md @@ -0,0 +1,229 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to use NetBSD on a Raspberry Pi) +[#]: via: (https://opensource.com/article/19/3/netbsd-raspberry-pi) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +How to use NetBSD on a Raspberry Pi +====== + +Experiment with NetBSD, an open source OS with direct lineage back to the original UNIX source code, on your Raspberry Pi. + +![][1] + +Do you have an old Raspberry Pi lying around gathering dust, maybe after a recent Pi upgrade? Are you curious about [BSD Unix][2]? If you answered "yes" to both of these questions, you'll be pleased to know that the first is the solution to the second, because you can run [NetBSD][3], as far back as the very first release, on a Raspberry Pi. + +BSD is the Berkley Software Distribution of [Unix][4]. In fact, it's the only open source Unix with direct lineage back to the original source code written by Dennis Ritchie and Ken Thompson at Bell Labs. Other modern versions are either proprietary (such as AIX and Solaris) or clever re-implementations (such as Minix and GNU/Linux). If you're used to Linux, you'll feel mostly right at home with BSD, but there are plenty of new commands and conventions to discover. If you're still relatively new to open source, trying BSD is a good way to experience a traditional Unix. + +Admittedly, NetBSD isn't an operating system that's perfectly suited for the Pi. It's a minimal install compared to many Linux distributions designed specifically for the Pi, and not all components of recent Pi models are functional under NetBSD yet. However, it's arguably an ideal OS for the older Pi models, since it's lightweight and lovingly maintained. 
And if nothing else, it's a lot of fun for any die-hard Unix geek to experience another side of the [POSIX][5] world. + +### Download NetBSD + +There are different versions of BSD. NetBSD has cultivated a reputation for being lightweight and versatile (its website features the tagline "Of course it runs NetBSD"). It offers an image of the latest version of the OS for every version of the Raspberry Pi since the original. To download a version for your Pi, you must first [determine what variant of the ARM architecture your Pi uses][6]. Some information about this is available on the NetBSD site, but for a comprehensive overview, you can also refer to [RPi Hardware History][7]. + +The Pi I used for this article is, as far as I can tell, a Raspberry Pi Model B Rev 2.0 (with two USB ports and no mounting holes). According to the [Raspberry Pi FAQ][8], this means the architecture is ARMv6, which translates to **earmv6hf** in NetBSD's architecture notation. + +![NetBSD on Raspberry Pi][9] + +If you're not sure what kind of Pi you have, the good news is that there are only two Pi images, so try **earmv7hf** first; if it doesn't work, fall back to **earmv6hf**. + +For the easiest and quickest install, use the binary image instead of an installer. Using the image is the most common method of getting an OS onto your Pi: you copy the image to your SD card and boot it up. There's no install necessary, because the image is a generic installation of the OS, and you've just copied it, bit for bit, onto the media that the Pi uses as its boot drive. + +The image files are found in the **binary > gzimg** directories of the NetBSD installation media server, which you can reach from the [front page][3] of NetBSD.org. The image is **rpi.img.gz** , a compressed **.img** file. Download it to your hard drive. + +Once you have downloaded the entire image, extract it. If you're running Linux, BSD, or MacOS, you can use the **gunzip** command: + +``` +$ gunzip ~/Downloads/rpi.img.gz +``` + +If you're working on Windows, you can install the open source [7-Zip][10] archive utility. + +### Copy the image to your SD card + +Once the image file is uncompressed, you must copy it to your Pi's SD card. There are two ways to do this, so use the one that works best for you. + +#### 1\. Using Etcher + +Etcher is a cross-platform application specifically designed to copy OS images to USB drives and SD cards. Download it from [Etcher.io][11] and launch it. + +In the Etcher interface, select the image file on your hard drive and the SD card you want to flash, then click the Flash button. + +![Etcher][12] + +That's it. + +#### 2\. Using the dd command + +On Linux, BSD, or MacOS, you can use the **dd** command to copy the image to your SD card. + + 1. First, insert your SD card into a card reader. Don't mount the card to your system because **dd** needs the device to be disengaged to copy data onto it. + + 2. Run **dmesg | tail** to find out where the card is located without it being mounted. On MacOS, use **diskutil list**. + + 3. Copy the image file to the SD card: + +``` +$ sudo dd if=~/Downloads/rpi.img of=/dev/mmcblk0 bs=2M status=progress +``` + +Before doing this, you _must be sure_ you have the correct location of the SD card. If you copy the image file to the incorrect device, you could lose data. If you are at all unsure about this, use Etcher instead! + + + + +When either **dd** or Etcher has written the image to the SD card, place the card in your Pi and power it on. 
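
One extra safeguard is worth taking before that first boot: verify that the image you downloaded is intact, since a corrupted download tends to fail in confusing ways once it is on the card. The NetBSD mirrors normally publish checksum files (such as MD5 and SHA512) next to the images; the exact file names can vary by mirror and release, so treat the following as a sketch:

```
# On Linux (GNU coreutils)
$ sha512sum ~/Downloads/rpi.img.gz

# On MacOS
$ shasum -a 512 ~/Downloads/rpi.img.gz
```

Compare the printed hash against the entry for **rpi.img.gz** in the checksum file published in the same directory you downloaded the image from. If they match, you can flash and boot with confidence.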

### First boot

The first time it's booted, NetBSD detects that the SD card's filesystem does not occupy all the free space available and resizes the filesystem accordingly.

![Booting NetBSD on Raspberry Pi][13]

Once that's finished, the Pi reboots and presents a login prompt. Log into your NetBSD system using **root** as the user name. No password is required.

### Set up a user account

First, set a password for the root user:

```
# passwd
```

Then create a user account for yourself with the **-m** option to prompt NetBSD to create a home directory and the **-G wheel** option to add your account to the wheel group so that you can become the administrative user (root) as needed:

```
# useradd -m -G wheel seth
```

Use the **passwd** command again to set a password for your user account:

```
# passwd seth
```

Log out, and then log back in with your new credentials.

### Add software to NetBSD

If you've ever used a Pi, you probably know that the way to add more software to your system is with a special command like **apt** or **dnf** (depending on whether you prefer to run [Raspbian][14] or [FedBerry][15] on your Pi). On NetBSD, use the **pkg_add** command. But some setup is required before the command knows where to go to get the packages you want to install.

There are ready-made (pre-compiled) packages for NetBSD on NetBSD's servers, following the scheme **ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/[PORT]/[VERSION]/All**. Replace PORT with the architecture you are using, either **earmv6hf** or **earmv7hf**. Replace VERSION with the NetBSD release you are using; at the time of this writing, that's **8.0**.

Place the resulting URL in a file called **/etc/pkg_install.conf**. Since that's a system file outside your user folder, you must invoke root privileges to create it. For example, for the **earmv6hf** build of NetBSD 8.0:

```
$ su -

# echo "PKG_PATH=ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/earmv6hf/8.0/All" >> /etc/pkg_install.conf
```

Now you can install packages from the NetBSD software distribution. A good first candidate is Bash, commonly the default shell on a Linux (and Mac) system. Also, if you're not already a Vi text editor user, you may want to try something more intuitive such as [Jove][17] or [Nano][18]:

```
# pkg_add -v bash jove nano
# exit
$
```

Unlike many Linux distributions ([Slackware][19] being a notable exception), NetBSD does very little configuration on your behalf, and this is considered a feature. So, to use Bash, Jove, or Nano as your default toolset, you must set the configuration yourself.

You can set many of your preferences dynamically using environment variables, which are special variables that your whole system can access. For instance, most applications in Unix know that if there is a **VISUAL** or **EDITOR** variable set, the value of those variables should be used as the default text editor. You can set these two variables temporarily, just for your current login session:

```
$ export EDITOR=nano
$ export VISUAL=nano
```

Or you can make them permanent by adding them to the default NetBSD **.profile** file:

```
$ sed -i 's/EDITOR=vi/EDITOR=nano/' ~/.profile
```

Load your new settings:

```
$ . ~/.profile
```

To make Bash your default shell, use the **chsh** (change shell) command, which opens your account's entry in your preferred editor (now Nano). Before running **chsh**, though, make sure you know where Bash is located:

```
$ which bash
/usr/pkg/bin/bash
```

Set the value for **shell** in the **chsh** entry to **/usr/pkg/bin/bash**, then save the document.
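
As a quick sanity check that the new defaults took effect, log out, log back in, and inspect the relevant variables. The expected output below assumes you followed the steps above exactly (Bash installed from pkgsrc and Nano set as your editor):

```
$ echo $SHELL
/usr/pkg/bin/bash
$ echo $EDITOR
nano
```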
+ +### Add sudo + +The **pkg_add** command is a privileged command, which means to use it, you must become the root user with the **su** command. If you prefer, you can also set up the **sudo** command, which allows certain users to use their own password to execute administrative tasks. + +First, install it: + +``` +# pkg_add -v sudo +``` + +And then use the **visudo** command to edit its configuration file. You must use the **visudo** command to edit the **sudo** configuration, and it must be run as root: + +``` +$ su +# SUDO_EDITOR=nano visudo +``` + +Once you are in the editor, find the line allowing members of the wheel group to execute any command, and uncomment it (by removing **#** from the beginning of the line): + +``` +### Uncomment to allow members of group wheel to execute any command +%wheel ALL=(ALL) ALL +``` + +Save the document as described in Nano's bottom menu panel and exit the root shell. + +Now you can use **pkg_add** with **sudo** instead of becoming root: + +``` +$ sudo pkg_add -v fluxbox +``` + +### Net gain + +NetBSD is a full-featured Unix operating system, and now that you have it set up on your Pi, you can explore every nook and cranny. It happens to be a pretty lightweight OS, so even an old Pi with a 700mHz processor and 256MB of RAM can run it with ease. If this article has sparked your interest and you have an old Pi sitting in a drawer somewhere, try it out! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/netbsd-raspberry-pi + +作者:[Seth Kenlon (Red Hat, Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_computer_development_programming.png?itok=4OM29-82 +[2]: https://en.wikipedia.org/wiki/Berkeley_Software_Distribution +[3]: http://netbsd.org/ +[4]: https://en.wikipedia.org/wiki/Unix +[5]: https://en.wikipedia.org/wiki/POSIX +[6]: http://wiki.netbsd.org/ports/evbarm/raspberry_pi +[7]: https://elinux.org/RPi_HardwareHistory +[8]: https://www.raspberrypi.org/documentation/faqs/ +[9]: https://opensource.com/sites/default/files/uploads/pi.jpg (NetBSD on Raspberry Pi) +[10]: https://www.7-zip.org/ +[11]: https://www.balena.io/etcher/ +[12]: https://opensource.com/sites/default/files/uploads/etcher_0.png (Etcher) +[13]: https://opensource.com/sites/default/files/uploads/boot.png (Booting NetBSD on Raspberry Pi) +[14]: http://raspbian.org/ +[15]: http://fedberry.org/ +[16]: ftp://ftp.netbsd.org/pub/pkgsrc/packages/NetBSD/%5BPORT%5D/%5BVERSION%5D/All%3E +[17]: https://opensource.com/article/17/1/jove-lightweight-alternative-vim +[18]: https://www.nano-editor.org/ +[19]: http://www.slackware.com/ diff --git a/sources/tech/20190326 Today-s Retailer is Turning to the Edge for CX.md b/sources/tech/20190326 Today-s Retailer is Turning to the Edge for CX.md new file mode 100644 index 0000000000..babc54c0f7 --- /dev/null +++ b/sources/tech/20190326 Today-s Retailer is Turning to the Edge for CX.md @@ -0,0 +1,52 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Today’s Retailer is Turning to the Edge for CX) +[#]: via: 
(https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all) +[#]: author: (Cindy Waxer https://www.networkworld.com/author/Cindy-Waxer/) + +Today’s Retailer is Turning to the Edge for CX +====== + +### Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the U.S. Census. + +![iStock][1] + +Despite the increasing popularity and convenience of ecommerce, 92% of purchases continue to be made off-line, according to the [U.S. Census][2]. That’s putting enormous pressure on retailers to meet new consumer expectations around real-time access to merchandise and order information. In fact, 85.3% of shoppers expect retailers to provide associates with handheld or fixed devices to check inventory and price within a store, a nearly 51% increase over 2017, according to a [survey from SOTI][3]. + +With an eye on transforming the customer experience of spending time in a store, retailers are investing aggressively in compute power located closer to the buyer, also known as [edge computing][4]. + +So what new and innovative technologies are edge environments supporting? Here’s where retail is headed with customer service and how edge computing will help them get there. + +**Face forward** : Facial recognition technology is on the rise in retail as brands search for new ways to engage customers. Take, CaliBurger, for example. The restaurant chain recently tested out self-ordering kiosks that use AI and facial-recognition technology to identify registered customers and pull up their loyalty accounts and order preferences. By automatically displaying a customer’s most popular purchases, the system aims to help patrons complete their orders in seconds flat for greater speed and convenience. + +**Customer experience on display** : Forget about traditional counter displays. Savvy retailers are experimenting with high-tech, in-store digital signage solutions to attract consumers and gather valuable data. For instance, Glass Media’s projection-based, end-to-end digital retail signage combines display technology, a cloud-based IoT platform, and data analytic capabilities. Through projection, the solution can influence customers at the point-of-decision. + +**Backroom access** : Tracking inventory manually requires substantial human resources. IoT-powered backroom technologies such as RFID, real-time point of sale (POS), and smart shelving systems promise to change that by improving the accuracy of inventory tracking throughout the supply chain. These automated solutions can track and reorder items automatically, eliminating the need for humans to take inventory and reducing the risk of product shortages. + +**Robots to the rescue** : Hoping to transform the branch experience, HSBC recently unveiled Pepper, a concierge robot whose job is to help customers with simple tasks, from answering commonly asked questions to directing them to available tellers. Pepper also acts as an online banking station where customers can log into their mobile banking account or access information about products. By putting Pepper on the payroll, HSBC hopes to reduce customer wait times and free up its “human” bankers. + +These innovative technologies provide retailers with unique opportunities to enhance customer experience, develop new revenue streams, and boost customer loyalty. But many of them require edge computing to work properly. 
Bandwidth-intensive content and vast volumes of data can lead to latency issues, outages, and other IT headaches. Fortunately, by placing computing power and storage capabilities directly on the edge of the network, edge computing can help retailers deliver the best customer experience possible. + +To find out more about how edge computing is transforming the customer experience in retail, visit [APC.com][5]. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3384202/today-s-retailer-is-turning-to-the-edge-for-cx.html#tk.rss_all + +作者:[Cindy Waxer][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Cindy-Waxer/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/03/istock-508154656-100791924-large.jpg +[2]: https://ycharts.com/indicators/ecommerce_sales_as_percent_retail_sales +[3]: https://www.soti.net/resources/newsroom/2019/annual-connected-retailer-survey-new-soti-survey-reveals-us-consumers-prefer-speed-and-convenience-when-shopping-with-limited-human-interaction/ +[4]: https://www.hpe.com/us/en/servers/edgeline-iot-systems.html?pp=false&jumpid=ps_83cqske5um_aid-510380402&gclid=CjwKCAjw6djYBRB8EiwAoAF6oWwk-M6LWcfCbbZ331fXhEHShXGbLWoSwTIzue6mxQg4gDvYx59XZxoC_4oQAvD_BwE&gclsrc=aw.ds +[5]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp diff --git a/sources/tech/20190327 Cisco forms VC firm looking to weaponize fledgling technology companies.md b/sources/tech/20190327 Cisco forms VC firm looking to weaponize fledgling technology companies.md new file mode 100644 index 0000000000..2a0dde5fb3 --- /dev/null +++ b/sources/tech/20190327 Cisco forms VC firm looking to weaponize fledgling technology companies.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Cisco forms VC firm looking to weaponize fledgling technology companies) +[#]: via: (https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +Cisco forms VC firm looking to weaponize fledgling technology companies +====== + +### Decibel, an investment firm focused on early stage funding for enterprise-product startups, will back technologies related to Cisco's core interests. + +![BrianaJackson / Getty][1] + +Cisco this week stepped deeper into the venture capital world by announcing Decibel, an early-stage investment firm that will focus on bringing enterprise-oriented startups to market. + +Veteran VC groundbreaker and former general partner at New Enterprise Associates [Jon Sakoda][2] will lead Decibel. Sakoda had been with NEA since 2006 and focused on startup investments in software and Internet companies. + +**[ Now see[7 free network tools you must have][3]. 
]** + +Of Decibel Sakoda said: “We want to invest in companies that are helping our customers use innovation as a weapon in the game to transform their respective industries.” + +“Decibel combines the speed, agility, and independent risk-taking traditionally found in the best VC firms, while offering differentiated access to the scale, entrepreneurial talent, and deep customer relationships found in one of the largest tech companies in the world,” [Sakoda said][4]. “This approach is an industry first and provides a unique way for entrepreneurs to get access to unparalleled resources at a time and stage when they need it most.” + +“As one of the most prolific strategic venture capitalists in the world, Cisco already has a view into future technologies shaping our markets through our rich portfolio of companies,” wrote Rob Salvagno, vice president of Corporate Development and Cisco Investments in a [blog about Decibel][5]. “But we realized we could do even more by engaging with the startup community earlier in its lifecycle.” + +Indeed Cisco already has an investment arm, Cisco Investments, that focuses on later stage startups, the company says. Cisco said this arm invests $200 to $300 million annually, and it will continue its charter of investing and partnering with best-in-class companies in core and adjacent markets. + +Cisco didn’t talk about how much money would be involved in Decibel, but according to a [CNBC report][6], Cisco is setting up Decibel as an independent firm with a separate pool of cash, an unusual model for corporate investors. The fund hasn’t closed yet, but a [Securities and Exchange Commission filing][7] from October indicated that Sakoda was setting out to [raise $500 million][8], CNBC wrote. + +**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][9] ]** + +Decibel does plan to invest anywhere from $5M – 15M in each start up in their portfolio, Cisco says. + +“Cisco has a culture of leveraging both internal and external innovation – accelerating our rich internal development capabilities by our ability to also partner, invest and acquire, Salvagno said. + +He said the company recognizes that significant innovation happens outside of the walls of Cisco. Cisco has acquired more than 200 companies, accounting for more than one in eight Cisco employees have joined as a result. "We have a deep bench of acquired founders, many of which play leadership roles within the company today, which continues to reinforce this entrepreneurial spirit," Salvagno said. + +Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3385039/cisco-forms-vc-firm-looking-to-weaponize-fledgling-technology-companies.html#tk.rss_all + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/money_salary_magnet_flying-money_money-magnet-by-brianajackson-getty-100787974-large.jpg +[2]: https://twitter.com/jonsakoda +[3]: https://www.networkworld.com/article/2825879/7-free-open-source-network-monitoring-tools.html +[4]: https://www.decibel.vc/the-blast/announcingdecibel +[5]: https://blogs.cisco.com/news/cisco-fuels-innovation-engine-with-investment-in-new-early-stage-vc-fund +[6]: https://www.cnbc.com/2019/03/26/cisco-introduces-decibel-an-early-stage-venture-firm-with-jon-sakoda.html +[7]: https://www.sec.gov/Archives/edgar/data/1754260/000175426018000002/xslFormDX01/primary_doc.xml +[8]: https://www.cnbc.com/2018/10/08/cisco-lead-investor-jon-sakoda-catalyst-labs-500-million.html +[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start +[10]: https://www.facebook.com/NetworkWorld/ +[11]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190327 How to make a Raspberry Pi gamepad.md b/sources/tech/20190327 How to make a Raspberry Pi gamepad.md new file mode 100644 index 0000000000..694c09d4c9 --- /dev/null +++ b/sources/tech/20190327 How to make a Raspberry Pi gamepad.md @@ -0,0 +1,235 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to make a Raspberry Pi gamepad) +[#]: via: (https://opensource.com/article/19/3/gamepad-raspberry-pi) +[#]: author: (Leon Anavi https://opensource.com/users/leon-anavi) + +How to make a Raspberry Pi gamepad +====== + +This DIY retro video game controller for the Raspberry Pi is fun and not difficult to build but requires some time. + +![Raspberry Pi Gamepad device][1] + +From time to time, I get nostalgic about the video games I played during my childhood in the late '80s and the '90s. Although most of my old computers and game consoles are long gone, my Raspberry Pi can fulfill my retro-gaming fix. I enjoy the simple games included in Raspbian, and the open source RetroPie project helped me turn my Raspberry Pi into an advanced retro-gaming machine. + +But, for a more authentic experience, like back in the "old days," I needed a gamepad. There are a lot of options on the market for USB gamepads and joysticks, but as an open source enthusiast, maker, and engineer, I prefer doing it the hard way. So, I made my own simple open source hardware gamepad, which I named the [ANAVI Play pHAT][2]. I designed it as an add-on board for Raspberry Pi using an [EEPROM][3] and a devicetree binary overlay I created for mapping the keys. + +### Get the gamepad buttons and EEPROM + +There are a huge variety of gamepads available for purchase, and some of them are really complex. However, it's not hard to make a gamepad similar to the iconic NES controller using the design I created. 
+ +The gamepad uses eight "momentary" buttons (i.e., switches that are active only while they're pushed): four tactile (tact) switches for movement (Up, Down, Left, Right), two tact buttons for A and B, and two smaller tact buttons for Select and Start. I used [through-hole][4] tact switches: six 6x6x4.3mm switches for movement and the A and B buttons, and two 3x6x4.3mm switches for the Start and Select buttons. + +While the gamepad's primary purpose is to play retro games, the add-on board is large enough to include home-automation features, such as monitoring temperature, humidity, light, or barometric pressure, that you can use when you're not playing games. I added three slots for attaching [I2C][5] sensors to the primary I2C bus on physical pins 3 and 5. + +The most interesting and important part of the hardware design is the EEPROM (electrically erasable programmable read-only memory). A through-hole mounted EEPROM is easier to flash on a breadboard and solder to the gamepad. An article in the [MagPi magazine][6] recommends CAT24C32 EEPROM; if that model isn't available, try to find a model with similar technical specifications. All Raspberry Pi models and versions released after 2014 (Raspberry Pi B+ and newer) have a secondary I2C bus on physical pins 27 and 28. + +Once you have this hardware, use a breadboard to check that it works. + +### Create the printed circuit board + +The next step is to create a printed circuit board (PCB) design and have it manufactured. As an open source enthusiast, I believe that free and open source software should be used for creating open source hardware. I rely on [KiCad][7], electronic design automation (EDA) software available under the GPLv3+ license. KiCad works on Windows, MacOS, and GNU/Linux. (I use KiCad version 5 on Ubuntu 18.04.) + +KiCad allows you to create PCBs with up to 32 copper layers plus 14 fixed-purpose technical layers. It also has an integrated 3D viewer. It's actively developed, including many contributions by CERN developers, and used for industrial applications; for example, Olimex uses KiCad to design complex PCBs with multiple layers, like the one in its [TERES-I][8] DIY open source hardware laptop. + +The KiCad workflow includes three major steps: + + * Designing the schematics in the schematic layout editor + * Drawing the edge cuts, placing the components, and routing the tracks in the PCB layout editor + * Exporting Gerber and drill files for manufacture + + + +If you haven't designed PCBs before, keep in mind there is a steep learning curve. Go through the [examples and user's guides][9] provided by KiCad to learn how to work with the schematic and the PCB layout editor. (If you are not in the mood to do everything from scratch, you can just clone the ANAVI Play pHAT project in my [GitHub repository][10].) + +![KiCad schematic][11] + +In KiCad's schematic layout editor, connect the Raspberry Pi's GPIOs to the buttons, the slots for sensors to the primary I2C, and the EEPROM to the secondary I2C. Assign an appropriate footprint to each component. Perform an electrical rule check and, if there are no errors, generate the [netlist][12], which describes an electronic circuit's connectivity. + +Open the PCB layout editor. It contains several layers. Read the netlist. All components and tracks must be on the front and bottom copper layers (F.Cu and B.Cu), and the board's form must be created in the Edge.Cuts layer. Any text, including button labels, must be on the silkscreen layers. 
+ +![Printable circuit board design][13] + +Finally, export the Gerber and drill files that you'll send to the company that will produce your PCB. The Gerber format is the de facto industry standard for PCBs. It is an open ASCII vector format for 2D binary images; simply explained, it is like a PDF for PCB manufacturing. + +There are numerous companies that can make a simple two-layer board like the gamepad's. For a few prototypes, you can count on [OSHPark in the US][14] or [Aisler in Europe][15]. There are also a lot of Chinese manufacturers, such as JLCPCB, PCBWay, ALLPCB, Seeed Studio, and many more. Alternatively, if you prefer to skip the hassle of PCB manufacturing and sourcing components, you can order the [ANAVI Play pHAT maker kit from Crowd Supply][2] and solder all the through-hole components on your own. + +### Understanding devicetree + +[Devicetree][16] is a specification for a software data structure that describes the hardware components. Its purpose is to allow the compiled Linux kernel to handle a variety of different hardware configurations within a wider architecture family. The bootloader loads the devicetree into memory and passes it to the Linux kernel. + +The devicetree includes three components: + + * Devicetree source (DTS) + * Devicetree blob (DTB) and overlay (DTBO) + * Devicetree compiler (DTC) + + + +The DTC creates binaries from a textual source. Devicetree overlays allow a central DTB to be overlaid on the devicetree. Overlays include a number of fragments. + +For several years, a devicetree has been required for all new ARM systems on a chip (SoCs), including Broadcom SoCs in all Raspberry Pi models and versions. With the default bootloader in Raspberry Pi's popular Raspbian distribution, DTO can be set in the configuration file ( **config.txt** ) on the FAT partition of a bootable microSD card using the keyword **device_tree=**. + +Since 2014, the Raspberry Pi's pin header has been extended to 40 pins. Pins 27 and 28 are dedicated for a secondary I2C bus. This way, the DTBO can be automatically loaded from an EEPROM attached to these pins. Furthermore, additional system information can be saved in the EEPROM. This feature is among the Raspberry Pi Foundation's requirements for any Raspberry Pi HAT (hardware attached on top) add-on board. On Raspbian and other GNU/Linux distributions for Raspberry Pi, the information from the EEPROM can be seen from userspace at **/proc/device-tree/hat/** after booting. + +In my opinion, the devicetree is one of the most fascinating features added in the Linux ecosystem over the past decade. Creating devicetree blobs and overlays is an advanced task and requires some background knowledge. However, it's possible to create a devicetree binary overlay for the Raspberry Pi add-on board and flash it on an appropriate EEPROM. The device binary overlay defines the Linux key codes for each key of the gamepad. The result is a gamepad for Raspberry Pi with keys that work as soon as you boot Raspbian. + +#### Creating the DTBO + +There are three major steps to create a devicetree binary overlay for the gamepad: + + * Creating the devicetree source with mapping for the keys based on the Linux key codes + * Compiling the devicetree binary overlay using the devicetree compiles + * Creating an **.eep** file and flashing it on an EEPROM using the open source tools provided by the Raspberry Pi Foundation + + + +Linux key codes are defined in the file **/usr/include/linux/input-event-codes.h**. 
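
Before writing the source file, it can help to see the actual values involved. If the kernel header package is installed on your system, a quick grep pulls out the codes for the movement keys; these values are stable across kernel releases, and you can see that the Right arrow corresponds to the 106 used in the devicetree snippet that follows (output spacing condensed):

```
$ grep -wE 'KEY_(UP|DOWN|LEFT|RIGHT)' /usr/include/linux/input-event-codes.h
#define KEY_UP      103
#define KEY_LEFT    105
#define KEY_RIGHT   106
#define KEY_DOWN    108
```
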
The device source file should describe which Raspberry Pi GPIO pin is connected to which hardware button and which Linux key code should be triggered when the button is pressed. In this gamepad, GPIO17 (pin 11) is connected to the tactile button for Right, GPIO4 (pin 7) to Left, GPIO22 (pin 15) to Up, GPIO27 (pin 13) to Down, GPIO5 (pin 29) to Start, GPIO6 (pin 31) to Select, GPIO19 (pin 35) to A, and GPIO26 (pin 37) to B. + +Please note there is a difference between the GPIO numbers and the physical position of the pin on the header. For convenience, all pins are located on the second row of the Raspberry Pi's 40-pin header. This approach makes it easier to route the printed circuit board in KiCad. + +The entire devicetree source for the gamepad is [available on GitHub][17]. As an example, the following is a short code snippet that demonstrates how GPIO17, corresponding to physical pin 11 on the Raspberry Pi, is mapped to the tact button for Right: + +``` +button@17 { +label = "right"; +linux,code = <106>; +gpios = <&gpio 17 1>; +}; +``` + +To compile the DTS directly on the Raspberry Pi, install the devicetree compiler on Raspbian by executing the following command in the terminal: +``` +sudo apt-get update +sudo apt-get install device-tree-compiler +``` +Run DTC and provide as arguments the name of the output DTBO and the path to the source file. For example: + +``` +dtc -I dts -O dtb -o anavi-play-phat.dtbo anavi-play-phat.dts +``` + +The Raspberry Pi Foundation provides a [GitHub repository with the mechanical, hardware, and software specifications for HATs][18]. It also includes three very convenient tools: + + * **eepmake:** Creates an **.eep** file from a text file with settings + * **eepdump:** Useful for debugging, as it dumps a binary **.eep** file as human-readable text + * **eepflash:** Writes or reads an **.eep** binary image to/from an EEPROM + + + +The **eeprom_settings.txt** file can be used as a template. [The Raspberry Pi Foundation][19] and [MagPi magazine][6] have helpful articles and tutorials, so I won't go into too many details. As I wrote above, the recommended EEPROM is CAT24C32, but it can be replaced with any other EEPROM with the same technical specifications. Using an EEPROM with an eight-pin, through-hole, dual in-line (DIP) package is easier for hobbyists to flash because it can be done with a breadboard. The following example command creates a file ready to be flashed on the EEPROM using the **eepmake** tool from the Raspberry Pi GitHub repository: + +``` +./eepmake settings.txt settings.eep anavi-play-phat.dtbo +``` + +Before proceeding with flashing, ensure that the EEPROM is connected properly to the primary I2C bus (pins 3 and 5) on the Raspberry Pi. (You can consult the MagPi magazine article linked above for a discussion on wiring schematics.) Then run the following command and follow the onscreen instructions to flash the **.eep** file on the EEPROM: + +``` +sudo ./eepflash.sh -w -f=settings.eep -t=24c32 +``` + +Before soldering the EEPROM to the printed circuit board, move it to the secondary I2C bus on the breadboard and test it to ensure it works as expected. If you detect any issues while testing the EEPROM on the breadboard, correct the settings files, move it back to the primary I2C bus, and flash it again. + +### Testing the gamepad + +Now comes the fun part! It is time to test the add-on board using Raspbian, which you can [download][20] from RaspberryPi.org. 
After booting, open a terminal and enter the following commands:

```
cat /proc/device-tree/hat/product
cat /proc/device-tree/hat/vendor
```

The output should be similar to this:

![Testing output][21]

If it is, congratulations! The data from the EEPROM has been read successfully.

The next step is to verify that the keys on the Play pHAT are set properly and working. In a terminal or a text editor, press each of the eight buttons and verify they are acting as configured.

Finally, it is time to play games! By default, Raspbian's desktop includes [Python Games][22]. Launch them from the application menu. Make an audio output selection and pick a game from the list. My favorite is Wormy, a Snake-like game. As a former Symbian mobile application developer, I find playing Wormy brings back memories of the glorious days of Nokia.

### Retro gaming with RetroPie

![RetroPie with the Play pHAT][23]

Raspbian is amazing, but [RetroPie][24] offers so much more for retro games fans. It is a GNU/Linux distribution optimized for playing retro games that combines the open source projects RetroArch and Emulation Station. It's available for Raspberry Pi, the [Odroid][25] C1/C2, and personal computers running Debian or Ubuntu. It provides emulators for loading ROMs—the digital versions of game cartridges. Keep in mind that no ROMs are included in RetroPie due to copyright issues. You will have to [find appropriate ROMs and copy them][26] to the Raspberry Pi after booting RetroPie.

The open source hardware gamepad works fine in RetroPie's menus, but I discovered that the keys fail after launching some games and emulators. After debugging, I found a solution that ensures they work in the game emulators: add a Python script that provides additional software emulation of the keys. [The script is available on GitHub.][27] Here's how to install the Python dependencies and fetch the script (from the anavi-examples repository) on RetroPie:

```
sudo apt-get update
sudo apt-get install -y python-pip
sudo pip install evdev
cd ~
git clone https://github.com/AnaviTechnology/anavi-examples.git
```

Finally, add the following line to **/etc/rc.local** so it will be executed automatically when RetroPie boots:

```
sudo python /home/pi/anavi-examples/anavi-play-phat/anavi-play-gamepad.py &
```

That's it! After following these steps, you can create an entirely open source hardware gamepad as an add-on board for any Raspberry Pi model with a 40-pin header and use it with Raspbian and RetroPie!

### What's next?

Combining free and open source software with open source hardware is fun and not difficult, but it requires a significant amount of time. After creating the open source hardware gamepad in my spare time, I ran a modest crowdfunding campaign at [Crowd Supply][2] for low-volume manufacturing in my hometown in Plovdiv, Bulgaria. [The Open Source Hardware Association][28] certified the ANAVI Play pHAT as an open source hardware project under [BG000007][29]. Even [the acrylic enclosures][30] that protect the board from dust are open source hardware created with the free and open source software OpenSCAD.

![Game pad in acrylic enclosure][31]

If you enjoyed reading this article, I encourage you to try creating your own open source hardware add-on board for Raspberry Pi with KiCad. If you don't have enough spare time, you can order an [ANAVI Play pHAT maker kit][2], grab your soldering iron, and assemble the through-hole components. If you're not comfortable with the soldering iron, you can just order a fully assembled version.

Happy retro gaming, everybody! 
Next time someone irritably asks what you can learn from playing vintage computer games, tell them about Raspberry Pi, open source hardware, Linux, and devicetree. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/gamepad-raspberry-pi + +作者:[Leon Anavi][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/leon-anavi +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/gamepad_raspberrypi_hardware.jpg?itok=W16gOnay (Raspberry Pi Gamepad device) +[2]: https://www.crowdsupply.com/anavi-technology/anavi-play-phat +[3]: https://en.wikipedia.org/wiki/EEPROM +[4]: https://en.wikipedia.org/wiki/Through-hole_technology +[5]: https://en.wikipedia.org/wiki/I%C2%B2C +[6]: https://www.raspberrypi.org/magpi/make-your-own-hat/ +[7]: http://kicad-pcb.org/ +[8]: https://www.olimex.com/Products/DIY-Laptop/ +[9]: http://kicad-pcb.org/help/getting-started/ +[10]: https://github.com/AnaviTechnology/anavi-play-phat +[11]: https://opensource.com/sites/default/files/uploads/kicad-schematic.png (KiCad schematic) +[12]: https://en.wikipedia.org/wiki/Netlist +[13]: https://opensource.com/sites/default/files/uploads/circuitboard.png (Printable circuit board design) +[14]: https://oshpark.com/ +[15]: https://aisler.net/ +[16]: https://www.devicetree.org/ +[17]: https://github.com/AnaviTechnology/hats/blob/anavi/eepromutils/anavi-play-phat.dts +[18]: https://github.com/raspberrypi/hats +[19]: https://www.raspberrypi.org/blog/introducing-raspberry-pi-hats/ +[20]: https://www.raspberrypi.org/downloads/ +[21]: https://opensource.com/sites/default/files/uploads/testing-output.png (Testing output) +[22]: https://www.raspberrypi.org/documentation/usage/python-games/ +[23]: https://opensource.com/sites/default/files/uploads/retropie.jpg (RetroPie with the Play pHAT) +[24]: https://retropie.org.uk/ +[25]: https://www.hardkernel.com/product-category/odroid-board/ +[26]: https://opensource.com/article/19/1/retropie +[27]: https://github.com/AnaviTechnology/anavi-examples/blob/master/anavi-play-phat/anavi-play-gamepad.py +[28]: https://www.oshwa.org/ +[29]: https://certification.oshwa.org/bg000007.html +[30]: https://github.com/AnaviTechnology/anavi-cases/tree/master/anavi-play-phat +[31]: https://opensource.com/sites/default/files/uploads/gamepad-acrylic.jpg (Game pad in acrylic enclosure) diff --git a/sources/tech/20190327 Identifying exceptional user experience (UX) in IoT platforms.md b/sources/tech/20190327 Identifying exceptional user experience (UX) in IoT platforms.md new file mode 100644 index 0000000000..f7c49381f4 --- /dev/null +++ b/sources/tech/20190327 Identifying exceptional user experience (UX) in IoT platforms.md @@ -0,0 +1,126 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Identifying exceptional user experience (UX) in IoT platforms) +[#]: via: (https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all) +[#]: author: (Steven Hilton https://www.networkworld.com/author/Steven-Hilton/) + +Identifying exceptional user experience (UX) in IoT platforms +====== + +### Examples of excellent IoT platform UX from the perspectives of 5 typical IoT platform personas. 
+ +![Leo Wolfert / Getty Images][1] + +Enterprises are inundated with information about IoT platforms’ features and capabilities. But to find a long-lived IoT platform that minimizes ongoing development costs, enterprises must focus on exceptional user experience (UX) for 5 types of IoT platform users. + +Marketing and sales literature from IoT platform vendors is filled with information about IoT platform features. And no doubt, enterprises choosing to buy IoT platform services need to understand the actual capabilities of IoT platforms – preferably by [testing a variety of IoT platforms][2] – before making a purchase decision. + +However, it is a lot harder to gauge the quality of an IoT platform UX than itemizing an IoT platform’s features. Having excellent UX leads to lower platform deployment and management costs and higher customer satisfaction and retention. So enterprises should make UX one of their top criteria when selecting an IoT platform. + +[RELATED: Storage tank operator turns to IoT for energy savings][3] + +One of the ways to determine excellent IoT platform UX is to simulate the tasks conducted by typical IoT platform users. By completing these tasks, it becomes readily apparent when an IoT platform is exceptional or annoyingly bad. + +In this blog, I describe excellent IoT platform UX from the perspectives of five typical IoT platform users or personas. + +## Persona 1: platform administrator + +A platform administrator’s primary role is to configure, monitor, and maintain the functionality of an IoT platform. A platform administrator is typically an IT employee responsible for maintaining and configuring the various data management, device management, access control, external integration, and monitoring services that comprise an IoT platform. + +Typical platform administrator tasks include + + * configuration of the on-platform data visualization and data aggregation tools + * configuration of available device management functionality or execution of in-bulk device management tasks + * configuration and creation of on-platform complex event processing (CEP) workflows + * management and configuration of platform service orchestration + + + +Enterprises should pick IoT platforms with superlative access to on-platform configuration functionality with an emphasis on declarative interfaces for configuration management. Although many platform administrators are capable of working with RESTful API endpoints, good UX design should not require that platform administrators use third-party tools to automate basic functionality or execute bulk tasks. Some programmatic interfaces, such as SQL syntax for limiting monitoring views or dashboards for setting event processing trigger criteria, are acceptable and expected, although a fully declarative solution that maintains similar functionality is preferred. + +## Persona 2: platform operator + +A platform operator’s primary role is to leverage an IoT platform to execute common day-to-day business-centric operations and services. While the responsibilities of a platform operator will vary based on enterprise vertical and use case, all platform operators conduct business rather than IoT domain tasks. 
+ +Typical platform operator tasks include + + * visualizing and aggregating on-platform data to view key business KPIs + * using device management functionality on a per-device basis + * creating, managing, and monitoring per-device and per-location event processing rules + * executing self-service administrative tasks, such as enrolling downstream operators + + + +Enterprises should pick IoT platforms centered on excellent ease-of-use for a business user. In general, the UX should be focused on providing information immediately required for the execution of day-to-day operational tasks while removing more complex functionality. These platforms should have easy access to well-defined and well-constrained operational functions or data visualization. An effective UX should enable easy creation and modification of data views, graphs, dashboards, and other visualizations by allowing operators to select devices using a declarative rather than SQL or other programmatic interfaces. + +## Persona 3: hardware and systems developer + +A hardware and systems developer’s primary role is the integration and configuration of IoT assets into an IoT platform. The hardware and systems developer possesses very specific, detailed knowledge about IoT hardware (e.g., specific multipoint control units, embedded platforms, or PLC/SCADA control systems), and leverages this knowledge to enable protocol and asset compatibility with northbound platform services. + +Typical hardware and systems developer tasks include + + * designing and implementing firmware for IoT assets based on either standardized IoT SDKs or platform-specific SDKs + * updating firmware or software packages over deployment lifecycles + * integrating manufacturer-specific protocols adapters into either IoT assets or the northbound platform + + + +Enterprises should pick IoT platforms that allow hardware and systems developers to most efficiently design and implement low-level device and protocol functionality. An effective developer experience provides well-documented and fully-featured SDKs supporting a variety of languages and device architectures to enable integration with various types of IoT hardware. + +## Persona 4: platform and backend developer + +A platform and backend developer’s primary role is to execute customer-specific application logic and integrations within an IoT deployment. Customer-specific logic may include on-platform or on-edge custom applications, such as those used for analytics, data aggregation and normalization, or any type of event processing workflow. In addition, a platform and backend developer is responsible for integrating the IoT platform with external databases, analytic solutions, or business systems such as MES, ERP, or CRM applications. + +Typical platform and backend developer tasks include + + * integrating streaming data from the IoT platform into external systems and applications + * configuring inbound and outbound platform actions and interactions with external systems + * configuring complex code-based event processing capabilities beyond the scope of a platform administrator’s knowledge or ability + * debugging low-level platform functionalities that require coding to detect or resolve + + + +Enterprises should pick excellent IoT platforms that provide access to well-documented and well-featured platform-level SDKs for application or service development. A best-in-class platform UX should provide real-time logging tools, debugging tools, and indexed and searchable access to all platform logs. 
Finally, a platform and backend developer is particularly dependent upon high-quality, platform-level documentation, especially for platform APIs. + +## Persona 5: user interface and experience (UI/UX) developer + +A UI/UX developer’s primary role is to design the various operator interfaces and monitoring views for an IoT platform. In more complex IoT deployments, various operator audiences will need to be addressed, including solution domain experts such as a factory manager; role-specific experts such as an equipment operator or factory technician; and business experts such as a supply-chain analyst or company executive. + +Typical UI/UX developer tasks include + + * building and maintaining customer-specific dashboards and monitoring views on either the IoT platform or edge devices + * designing, implementing, and maintaining various operator consoles for a variety of operator audiences and customer-specific use cases + * ensuring good user experience for customers over the lifetime of an IoT implementation + + + +Enterprises should pick IoT platforms that provide an exceptional variety and quality of UI/UX tools, such as dashboarding frameworks for on-platform monitoring solutions that are declaratively or programmatically customizable, as well as various widget and display blocks to help the developer rapidly implement customer-specific views. An IoT platform must also provide a UI/UX developer with appropriate debugging and logging tools for monitoring and operator console frameworks and platform APIs. Finally, a best-in-class platform should provide a sample dashboard, operator console, and on-edge monitoring implementation in order to enable the UI/UX developer to quickly become accustomed with platform paradigms and best practices. + +Enterprises should make UX one of their top criteria when selecting an IoT platform. Having excellent UX allows enterprises to minimize platform deployment and management costs. At the same time, excellent UX allows enterprises to more readily launch new solutions to the market thereby increasing customer satisfaction and retention. + +**This article is published as part of the IDG Contributor Network.[Want to Join?][4]** + +Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3384738/identifying-exceptional-user-experience-ux-in-iot-platforms.html#tk.rss_all + +作者:[Steven Hilton][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Steven-Hilton/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_by_leowolfert_gettyimages-689799380_2400x1600-100788464-large.jpg +[2]: https://www.machnation.com/2018/09/25/announcing-mit-e-2-0-hands-on-benchmarking-for-iot-cloud-edge-and-analytics-platforms/ +[3]: https://www.networkworld.com/article/3169384/internet-of-things/storage-tank-operator-turns-to-iot-for-energy-savings.html#tk.nww-fsb +[4]: /contributor-network/signup.html +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190327 IoT roundup- Keeping an eye on energy use and Volkswagen teams with AWS.md b/sources/tech/20190327 IoT roundup- Keeping an eye on energy use and Volkswagen teams with AWS.md new file mode 100644 index 0000000000..016c5151fb --- /dev/null +++ b/sources/tech/20190327 IoT roundup- Keeping an eye on energy use and Volkswagen teams with AWS.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS) +[#]: via: (https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all) +[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/) + +IoT roundup: Keeping an eye on energy use and Volkswagen teams with AWS +====== + +### This week's roundup features new tech from MIT, big news in the automotive sector and a handy new level of centralization from a smaller IoT-focused company. + +![Getty Images][1] + +Much of what’s exciting about IoT technology has to do with getting data from a huge variety of sources into one place so it can be mined for insight, but sensors used to gather that data are frequently legacy devices from the early days of industrial automation or cheap, lightweight, SoC-based gadgets without a lot of sophistication of their own. + +Researchers at MIT have devised a system that can gather a certain slice of data from unsophisticated devices that are grouped on the same electrical circuit without adding sensors to each device. + +**[ Check out our[corporate guide to addressing IoT security][2]. ]** + +The technology’s called non-intrusive load monitoring, and sits directly on a given building's, vehicle's or other piece of infrastructure’s electrical circuits, identifies devices based on their power usage, and sends alerts when there are irregularities. + +It seems likely to make IIoT-related waves once it’s out of testing and onto the market. + +NLIM was recently tested, said MIT’s news service, on a U.S. 
Coast Guard cutter based in Boston, where it was attached to the outside of an electrical wire “at a single point, without requiring any cutting or splicing of wires.” + +Two such connections allowed the scientists to monitor roughly 20 separate devices on an electrical circuit, and the system was able to detect an anomalous amount of energy use from a component of the ship’s diesel engines known as a jacket water heater. + +“[C]rewmembers were skeptical about the reading but went to check it anyway. The heaters are hidden under protective metal covers, but as soon as the cover was removed from the suspect device, smoke came pouring out, and severe corrosion and broken insulation were clearly revealed,” the MIT report stated. Two other important but slightly less critical faults were also detected by the system. + +It’s easy to see why NLIM could easily prove to be an attractive technology for IIoT use in the future. It sounds as though it’s very simple to install, can operate without any kind of Internet connection (though most implementers will probably want to connect it to a wider monitoring setup for a more holistic picture of their systems) and does all of its computational work locally. It can even be used for general energy audits. What, in short, is not to like? + +**Volkswagen teams up with Amazon** + +AWS has got a new flagship client for its growing IoT services in the form of the Volkswagen Group, which [announced][3] that AWS is going to design and build the Volkswagen Industrial Cloud, a floor-to-ceiling industrial IoT implementation aimed at improving uptime, flexibility, productivity and vehicle quality. + +Real-time data from all 122 of VW’s manufacturing plants around the world will be available to the system, everything from part tracking to comparative analysis of efficiency to even deeper forms of analytics will take place in the company’s “data lake,” as the announcement calls it. Oh, and machine learning is part of it, too. + +The German carmaker clearly believes that AWS’s technology can provide a lot of help to its operations across the board, [even in the wake of a partnership with Microsoft for Azure-based cloud services announced last year.][4] + +**IoT-in-a-box** + +IoT can be very complicated. While individual components of any given implementation are often quite simple, each implementation usually contains a host of technologies that have to work in close concert. That means a lot of orchestration work has to go into making this stuff work. + +Enter Digi International, which rolled out an IoT-in-a-box package called Digi Foundations earlier this month. The idea is to take a lot of the logistical legwork out of IoT implementations by integrating cloud-connection software and edge-computing capabilities into the company’s core industrial router business. Foundations, which is packaged as a software subscription that adds these capabilities and more to the company’s devices, also includes a built-in management layer, allowing for simplified configuration and monitoring. + +OK, so it’s not quite all-in-one, but it’s still an impressive level of integration, particularly from a company that many might not have heard of before. It’s also a potential bellwether for other smaller firms upping their technical sophistication in the IoT sector. + +Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3384697/iot-roundup-keeping-an-eye-on-energy-use-and-volkswagen-teams-with-aws.html#tk.rss_all + +作者:[Jon Gold][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Jon-Gold/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/08/nw_iot-news_internet-of-things_smart-city_smart-home7-100768495-large.jpg +[2]: https://www.networkworld.com/article/3269165/internet-of-things/a-corporate-guide-to-addressing-iot-security-concerns.html +[3]: https://www.volkswagen-newsroom.com/en/press-releases/volkswagen-and-amazon-web-services-to-develop-industrial-cloud-4780 +[4]: https://www.volkswagenag.com/en/news/2018/09/volkswagen-and-microsoft-announce-strategic-partnership.html +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190327 Standardizing WASI- A system interface to run WebAssembly outside the web.md b/sources/tech/20190327 Standardizing WASI- A system interface to run WebAssembly outside the web.md new file mode 100644 index 0000000000..e473614955 --- /dev/null +++ b/sources/tech/20190327 Standardizing WASI- A system interface to run WebAssembly outside the web.md @@ -0,0 +1,347 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Standardizing WASI: A system interface to run WebAssembly outside the web) +[#]: via: (https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/) +[#]: author: (Lin Clark https://twitter.com/linclark) + +Standardizing WASI: A system interface to run WebAssembly outside the web +====== + +Today, we announce the start of a new standardization effort — WASI, the WebAssembly system interface. + +**Why:** Developers are starting to push WebAssembly beyond the browser, because it provides a fast, scalable, secure way to run the same code across all machines. + +But we don’t yet have a solid foundation to build upon. Code outside of a browser needs a way to talk to the system — a system interface. And the WebAssembly platform doesn’t have that yet. + +**What:** WebAssembly is an assembly language for a conceptual machine, not a physical one. This is why it can be run across a variety of different machine architectures. + +Just as WebAssembly is an assembly language for a conceptual machine, WebAssembly needs a system interface for a conceptual operating system, not any single operating system. This way, it can be run across all different OSs. + +This is what WASI is — a system interface for the WebAssembly platform. + +We aim to create a system interface that will be a true companion to WebAssembly and last the test of time. This means upholding the key principles of WebAssembly — portability and security. + +**Who:** We are chartering a WebAssembly subgroup to focus on standardizing [WASI][1]. We’ve already gathered interested partners, and are looking for more to join. 
+ +Here are some of the reasons that we, our partners, and our supporters think this is important: + +### Sean White, Chief R&D Officer of Mozilla + +“WebAssembly is already transforming the way the web brings new kinds of compelling content to people and empowers developers and creators to do their best work on the web. Up to now that’s been through browsers, but with WASI we can deliver the benefits of WebAssembly and the web to more users, more places, on more devices, and as part of more experiences.” + +### Tyler McMullen, CTO of Fastly + +“We are taking WebAssembly beyond the browser, as a platform for fast, safe execution of code in our edge cloud. Despite the differences in environment between our edge and browsers, WASI means WebAssembly developers won’t have to port their code to each different platform.” + +### Myles Borins, Node Technical Steering committee director + +“WebAssembly could solve one of the biggest problems in Node — how to get close-to-native speeds and reuse code written in other languages like C and C++ like you can with native modules, while still remaining portable and secure. Standardizing this system interface is the first step towards making that happen.” + +### Laurie Voss, co-founder of npm + +“npm is tremendously excited by the potential WebAssembly holds to expand the capabilities of the npm ecosystem while hugely simplifying the process of getting native code to run in server-side JavaScript applications. We look forward to the results of this process.” + +So that’s the big news! 🎉 + +There are currently 3 implementations of WASI: + + ++ [wasmtime](https://github.com/CraneStation/wasmtime), Mozilla’s WebAssembly runtime ++ [Lucet](https://www.fastly.com/blog/announcing-lucet-fastly-native-webassembly-compiler-runtime), Fastly’s WebAssembly runtime ++ [a browser polyfill](https://wasi.dev/polyfill/) + + +You can see WASI in action in this video: + + + +And if you want to learn more about our proposal for how this system interface should work, keep reading. + +### What’s a system interface? + +Many people talk about languages like C giving you direct access to system resources. But that’s not quite true. + +These languages don’t have direct access to do things like open or create files on most systems. Why not? + +Because these system resources — such as files, memory, and network connections— are too important for stability and security. + +If one program unintentionally messes up the resources of another, then it could crash the program. Even worse, if a program (or user) intentionally messes with the resources of another, it could steal sensitive data. + +[![A frowning terminal window indicating a crash, and a file with a broken lock indicating a data leak][2]][3] + +So we need a way to control which programs and users can access which resources. People figured this out pretty early on, and came up with a way to provide this control: protection ring security. + +With protection ring security, the operating system basically puts a protective barrier around the system’s resources. This is the kernel. The kernel is the only thing that gets to do operations like creating a new file or opening a file or opening a network connection. + +The user’s programs run outside of this kernel in something called user mode. If a program wants to do anything like open a file, it has to ask the kernel to open the file for it. 
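
Concretely, that request looks like an ordinary function call from the program's point of view. The file name below is made up for illustration; the interesting part is that each of these libc calls ends with the program handing control to the kernel and waiting for an answer.

```
/* Minimal sketch -- "notes.txt" is an invented file name. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* Each call below hands control to the kernel: the kernel checks
     * whether this process may touch the file, does the work, and
     * returns a result to user mode. */
    int fd = open("notes.txt", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    ssize_t n = read(fd, buf, sizeof(buf));
    if (n > 0)
        write(STDOUT_FILENO, buf, (size_t)n);

    close(fd);
    return 0;
}
```
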
+ +[![A file directory structure on the left, with a protective barrier in the middle containing the operating system kernel, and an application knocking for access on the right][4]][5] + +This is where the concept of the system call comes in. When a program needs to ask the kernel to do one of these things, it asks using a system call. This gives the kernel a chance to figure out which user is asking. Then it can see if that user has access to the file before opening it. + +On most devices, this is the only way that your code can access the system’s resources — through system calls. + +[![An application asking the operating system to put data into an open file][6]][7] + +The operating system makes the system calls available. But if each operating system has its own system calls, wouldn’t you need a different version of the code for each operating system? Fortunately, you don’t. + +How is this problem solved? Abstraction. + +Most languages provide a standard library. While coding, the programmer doesn’t need to know what system they are targeting. They just use the interface. + +Then, when compiling, your toolchain picks which implementation of the interface to use based on what system you’re targeting. This implementation uses functions from the operating system’s API, so it’s specific to the system. + +This is where the system interface comes in. For example, `printf` being compiled for a Windows machine could use the Windows API to interact with the machine. If it’s being compiled for Mac or Linux, it will use POSIX instead. + +[![The interface for putc being translated into two different implementations, one implemented using POSIX and one implemented using Windows APIs][8]][9] + +This poses a problem for WebAssembly, though. + +With WebAssembly, you don’t know what kind of operating system you’re targeting even when you’re compiling. So you can’t use any single OS’s system interface inside the WebAssembly implementation of the standard library. + +[![an empty implementation of putc][10]][11] + +I’ve talked before about how WebAssembly is [an assembly language for a conceptual machine][12], not a real machine. In the same way, WebAssembly needs a system interface for a conceptual operating system, not a real operating system. + +But there are already runtimes that can run WebAssembly outside the browser, even without having this system interface in place. How do they do it? Let’s take a look. + +### How is WebAssembly running outside the browser today? + +The first tool for producing WebAssembly was Emscripten. It emulates a particular OS system interface, POSIX, on the web. This means that the programmer can use functions from the C standard library (libc). + +To do this, Emscripten created its own implementation of libc. This implementation was split in two — part was compiled into the WebAssembly module, and the other part was implemented in JS glue code. This JS glue would then call into the browser, which would then talk to the OS. + +[![A Rube Goldberg machine showing how a call goes from a WebAssembly module, into Emscripten's JS glue code, into the browser, into the kernel][13]][14] + +Most of the early WebAssembly code was compiled with Emscripten. So when people started wanting to run WebAssembly without a browser, they started by making Emscripten-compiled code run. + +So these runtimes needed to create their own implementations for all of these functions that were in the JS glue code. + +There’s a problem here, though. 
The interface provided by this JS glue code wasn’t designed to be a standard, or even a public facing interface. That wasn’t the problem it was solving. + +For example, for a function that would be called something like `read` in an API that was designed to be a public interface, the JS glue code instead uses `_system3(which, varargs)`. + +[![A clean interface for read, vs a confusing one for system3][15]][16] + +The first parameter, `which`, is an integer which is always the same as the number in the name (so 3 in this case). + +The second parameter, `varargs`, are the arguments to use. It’s called `varargs` because you can have a variable number of them. But WebAssembly doesn’t provide a way to pass in a variable number of arguments to a function. So instead, the arguments are passed in via linear memory. This isn’t type safe, and it’s also slower than it would be if the arguments could be passed in using registers. + +That was fine for Emscripten running in the browser. But now runtimes are treating this as a de facto standard, implementing their own versions of the JS glue code. They are emulating an internal detail of an emulation layer of POSIX. + +This means they are re-implementing choices (like passing arguments in as heap values) that made sense based on Emscripten’s constraints, even though these constraints don’t apply in their environments. + +[![A more convoluted Rube Goldberg machine, with the JS glue and browser being emulated by a WebAssembly runtime][17]][18] + +If we’re going to build a WebAssembly ecosystem that lasts for decades, we need solid foundations. This means our de facto standard can’t be an emulation of an emulation. + +But what principles should we apply? + +### What principles does a WebAssembly system interface need to uphold? + +There are two important principles that are baked into WebAssembly : + + * portability + * security + + + +We need to maintain these key principles as we move to outside-the-browser use cases. + +As it is, POSIX and Unix’s Access Control approach to security don’t quite get us there. Let’s look at where they fall short. + +### Portability + +POSIX provides source code portability. You can compile the same source code with different versions of libc to target different machines. + +[![One C source file being compiled to multiple binaries][19]][20] + +But WebAssembly needs to go one step beyond this. We need to be able to compile once and run across a whole bunch of different machines. We need portable binaries. + +[![One C source file being compiled to a single binary][21]][22] + +This kind of portability makes it much easier to distribute code to users. + +For example, if Node’s native modules were written in WebAssembly, then users wouldn’t need to run node-gyp when they install apps with native modules, and developers wouldn’t need to configure and distribute dozens of binaries. + +### Security + +When a line of code asks the operating system to do some input or output, the OS needs to determine if it is safe to do what the code asks. + +Operating systems typically handle this with access control that is based on ownership and groups. + +For example, the program might ask the OS to open a file. A user has a certain set of files that they have access to. + +When the user starts the program, the program runs on behalf of that user. If the user has access to the file — either because they are the owner or because they are in a group with access — then the program has that same access, too. 
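
In code there is nothing that marks this inherited access. The path below is invented for illustration, but any function running in the process can ask the same question and get the same answer, because the kernel only checks who the process is running as, not which piece of code is asking.

```
/* Illustrative sketch only -- the path is made up. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* The kernel's answer depends on the process's user and groups,
     * not on whether this particular program needs the file. */
    if (access("/home/alice/tax-return.pdf", R_OK) == 0)
        puts("this process could read that file");
    else
        puts("permission denied (or the file does not exist)");
    return 0;
}
```
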
+ +[![An application asking to open a file that is relevant to what it's doing][23]][24] + +This protects users from each other. That made a lot of sense when early operating systems were developed. Systems were often multi-user, and administrators controlled what software was installed. So the most prominent threat was other users taking a peek at your files. + +That has changed. Systems now are usually single user, but they are running code that pulls in lots of other, third party code of unknown trustworthiness. Now the biggest threat is that the code that you yourself are running will turn against you. + +For example, let’s say that the library you’re using in an application gets a new maintainer (as often happens in open source). That maintainer might have your interest at heart… or they might be one of the bad guys. And if they have access to do anything on your system — for example, open any of your files and send them over the network — then their code can do a lot of damage. + +[![An evil application asking for access to the users bitcoin wallet and opening up a network connection][25]][26] + +This is why using third-party libraries that can talk directly to the system can be dangerous. + +WebAssembly’s way of doing security is different. WebAssembly is sandboxed. + +This means that code can’t talk directly to the OS. But then how does it do anything with system resources? The host (which might be a browser, or might be a wasm runtime) puts functions in the sandbox that the code can use. + +This means that the host can limit what a program can do on a program-by-program basis. It doesn’t just let the program act on behalf of the user, calling any system call with the user’s full permissions. + +Just having a mechanism for sandboxing doesn’t make a system secure in and of itself — the host can still put all of the capabilities into the sandbox, in which case we’re no better off — but it at least gives hosts the option of creating a more secure system. + +[![A runtime placing safe functions into the sandbox with an application][27]][28] + +In any system interface we design, we need to uphold these two principles. Portability makes it easier to develop and distribute software, and providing the tools for hosts to secure themselves or their users is an absolute must., + +### What should this system interface look like? + +Given those two key principles, what should the design of the WebAssembly system interface be? + +That’s what we’ll figure out through the standardization process. We do have a proposal to start with, though: + + * Create a modular set of standard interfaces + * Start with standardizing the most fundamental module, wasi-core + + + +[![Multiple modules encased in the WASI standards effort][29]][30] + +What will be in wasi-core? + +wasi-core will contain the basics that all programs need. It will cover much of the same ground as POSIX, including things such as files, network connections, clocks, and random numbers. + +And it will take a very similar approach to POSIX for many of these things. For example, it will use POSIX’s file-oriented approach, where you have system calls such as open, close, read, and write and everything else basically provides augmentations on top. + +But wasi-core won’t cover everything that POSIX does. For example, the process concept does not map clearly onto WebAssembly. And beyond that, it doesn’t make sense to say that every WebAssembly engine needs to support process operations like `fork`. 
But we also want to make it possible to standardize `fork`. + +This is where the modular approach comes in. This way, we can get good standardization coverage while still allowing niche platforms to use only the parts of WASI that make sense for them. + +[![Modules filled in with possible areas for standardization, such as processes, sensors, 3D graphics, etc][31]][32] + +Languages like Rust will use wasi-core directly in their standard libraries. For example, Rust’s `open` is implemented by calling `__wasi_path_open` when it’s compiled to WebAssembly. + +For C and C++, we’ve created a [wasi-sysroot][33] that implements libc in terms of wasi-core functions. + +[![The Rust and C implementations of openat with WASI][34]][35] + +We expect compilers like Clang to be ready to interface with the WASI API, and complete toolchains like the Rust compiler and Emscripten to use WASI as part of their system implementations + +How does the user’s code call these WASI functions? + +The runtime that is running the code passes the wasi-core functions in as imports. + +[![A runtime placing an imports object into the sandbox][36]][37] + +This gives us portability, because each host can have their own implementation of wasi-core that is specifically written for their platform — from WebAssembly runtimes like Mozilla’s wasmtime and Fastly’s Lucet, to Node, or even the browser. + +It also gives us sandboxing because the host can choose which wasi-core functions to pass in — so, which system calls to allow — on a program-by-program basis. This preserves security. + +[ +][38][![Three runtimes—wastime, Node, and the browser—passing their own implementations of wasi_fd_open into the sandbox][39]][40] + +WASI gives us a way to extend this security even further. It brings in more concepts from capability-based security. + +Traditionally, if code needs to open a file, it calls `open` with a string, which is the path name. Then the OS does a check to see if the code has permission (based on the user who started the program). + +With WASI, if you’re calling a function that needs to access a file, you have to pass in a file descriptor, which has permissions attached to it. This could be for the file itself, or for a directory that contains the file. + +This way, you can’t have code that randomly asks to open `/etc/passwd`. Instead, the code can only operate on the directories that are passed in to it. + +[![Two evil apps in sandboxes. The one on the left is using POSIX and succeeds at opening a file it shouldn't have access to. The other is using WASI and can't open the file.][41]][42] + +This makes it possible to safely give sandboxed code more access to different system calls — because the capabilities of these system calls can be limited. + +And this happens on a module-by-module basis. By default, a module doesn’t have any access to file descriptors. But if code in one module has a file descriptor, it can choose to pass that file descriptor to functions it calls in other modules. Or it can create more limited versions of the file descriptor to pass to the other functions. + +So the runtime passes in the file descriptors that an app can use to the top level code, and then file descriptors get propagated through the rest of the system on an as-needed basis. + +[![The runtime passing a directory to the app, and then then app passing a file to a function][43]][44] + +This gets WebAssembly closer to the principle of least privilege, where a module can only access the exact resources it needs to do its job. 
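
Here is a small C sketch of the difference between the two styles. The file and directory names and the split into functions are invented for illustration, and plain POSIX does not by itself forbid the first style or confine the second; the point of the WASI design is that only the second style exists, with the runtime refusing lookups that try to escape the directory descriptor it handed out. When C code like this is built against the wasi-sysroot described above, the `openat` path is what maps onto wasi-core imports such as `__wasi_path_open`.

```
/* Illustrative sketch -- file and directory names are made up. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Ambient-authority style: any code in the process can name any path
 * the user could reach, whether or not the app has a reason to. */
static int read_secrets_posix(void)
{
    return open("/etc/passwd", O_RDONLY);
}

/* Capability style: the caller hands in a directory descriptor, and
 * paths are resolved relative to it. Under WASI, lookups that try to
 * escape this directory are refused by the runtime. */
static int read_config_capability(int config_dir_fd)
{
    return openat(config_dir_fd, "app.conf", O_RDONLY);
}

int main(void)
{
    /* Stand-in for the runtime granting access to one directory only:
     * open the directory itself to get a descriptor for it. */
    int config_dir_fd = open("./config", O_RDONLY);
    if (config_dir_fd < 0) {
        perror("open ./config");
        return 1;
    }

    int leak_fd = read_secrets_posix();      /* nothing stops this call */
    if (leak_fd >= 0)
        close(leak_fd);

    int fd = read_config_capability(config_dir_fd);
    if (fd >= 0)
        close(fd);

    close(config_dir_fd);
    return 0;
}
```
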
+ +These concepts come from capability-oriented systems, like CloudABI and Capsicum. One problem with capability-oriented systems is that it is often hard to port code to them. But we think this problem can be solved. + +If code already uses `openat` with relative file paths, compiling the code will just work. + +If code uses `open` and migrating to the `openat` style is too much up-front investment, WASI can provide an incremental solution. With [libpreopen][45], you can create a list of file paths that the application legitimately needs access to. Then you can use `open`, but only with those paths. + +### What’s next? + +We think wasi-core is a good start. It preserves WebAssembly’s portability and security, providing a solid foundation for an ecosystem. + +But there are still questions we’ll need to address after wasi-core is fully standardized. Those questions include: + + * asynchronous I/O + * file watching + * file locking + + + +This is just the beginning, so if you have ideas for how to solve these problems, [join us][1]! + +-------------------------------------------------------------------------------- + +via: https://hacks.mozilla.org/2019/03/standardizing-wasi-a-webassembly-system-interface/ + +作者:[Lin Clark][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twitter.com/linclark +[b]: https://github.com/lujun9972 +[1]: https://wasi.dev/ +[2]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-01_crash-data-leak-1-500x220.png +[3]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-01_crash-data-leak-1.png +[4]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-02-protection-ring-sec-1-500x298.png +[5]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-02-protection-ring-sec-1.png +[6]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-03-syscall-1-500x227.png +[7]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/01-03-syscall-1.png +[8]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/02-01-implementations-1-500x267.png +[9]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/02-01-implementations-1.png +[10]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/02-02-implementations-1-500x260.png +[11]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/02-02-implementations-1.png +[12]: https://hacks.mozilla.org/2017/02/creating-and-working-with-webassembly-modules/ +[13]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-01-emscripten-1-500x329.png +[14]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-01-emscripten-1.png +[15]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-02-system3-1-500x179.png +[16]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-02-system3-1.png +[17]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-03-emulation-1-500x341.png +[18]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/03-03-emulation-1.png +[19]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-01-portability-1-500x375.png +[20]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-01-portability-1.png +[21]: 
https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-02-portability-1-500x484.png +[22]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-02-portability-1.png +[23]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-03-access-control-1-500x224.png +[24]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-03-access-control-1.png +[25]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-04-bitcoin-1-500x258.png +[26]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-04-bitcoin-1.png +[27]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-05-sandbox-1-500x278.png +[28]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/04-05-sandbox-1.png +[29]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-01-wasi-1-500x419.png +[30]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-01-wasi-1.png +[31]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-02-wasi-1-500x251.png +[32]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-02-wasi-1.png +[33]: https://github.com/CraneStation/wasi-sysroot +[34]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-03-open-imps-1-500x229.png +[35]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-03-open-imps-1.png +[36]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-04-imports-1-500x285.png +[37]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-04-imports-1.png +[38]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-05-sec-port-1.png +[39]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-05-sec-port-2-500x705.png +[40]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-05-sec-port-2.png +[41]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-06-openat-path-1-500x192.png +[42]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-06-openat-path-1.png +[43]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-07-file-perms-1-500x423.png +[44]: https://2r4s9p1yi1fa2jd7j43zph8r-wpengine.netdna-ssl.com/files/2019/03/05-07-file-perms-1.png +[45]: https://github.com/musec/libpreopen diff --git a/sources/tech/20190328 As memory prices plummet, PCIe is poised to overtake SATA for SSDs.md b/sources/tech/20190328 As memory prices plummet, PCIe is poised to overtake SATA for SSDs.md new file mode 100644 index 0000000000..3dfb93eec7 --- /dev/null +++ b/sources/tech/20190328 As memory prices plummet, PCIe is poised to overtake SATA for SSDs.md @@ -0,0 +1,85 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (As memory prices plummet, PCIe is poised to overtake SATA for SSDs) +[#]: via: (https://www.networkworld.com/article/3384700/as-memory-prices-plummet-pcie-is-poised-to-overtake-sata-for-ssds.html#tk.rss_all) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +As memory prices plummet, PCIe is poised to overtake SATA for SSDs +====== + +### Taiwan vendors believe PCIe and SATA will achieve price and market share parity by years' end. 
+ +![Intel SSD DC P6400 Series][1] + +A collapse in price for NAND flash memory and a shrinking gap between the prices of PCI Express-based and SATA-based [solid-state drives][2] (SSDs) means the shift to PCI Express SSDs will accelerate in 2019, with the newer, faster format replacing the old by years' end. + +According to the Taiwanese tech publication DigiTimes (the stories are now archived and unavailable without a subscription), falling NAND flash prices continue to drag down SSD prices, which will drive the adoption of SSDs in enterprise and data-center applications. This, in turn, will further drive the adoption of PCIe drives, which are a superior format to SATA. + +**[ Read also:[Backup vs. archive: Why it’s important to know the difference][3] ]** + +## SATA vs. PCI Express + +SATA was introduced in 2001 as a replacement for the IDE interface, which had a much larger cable and slower interface. But SATA is a legacy HDD connection and not fast enough for NAND flash memory. + +I used to review SSDs, and it was always the same when it came to benchmarking, with the drives scoring within a few milliseconds of each other despite the memory used. The SATA interface was the bottleneck. A SATA SSD is like a one-lane highway with no speed limit. + +PCIe is several times faster and has much more parallelism, so throughput is more suited to the NAND format. It comes in two physical formats: an [add-in card][4] that plugs into a PCIe slot and M.2, which is about the size of a [stick of gum][5] and sits on the motherboard. PCIe is most widely used in servers, while M.2 is in consumer devices. + +There used to be a significant price difference between PCIe and SATA drives with the same capacity, but they have come into parity thanks to Moore’s Law, said Jim Handy, principal analyst with Objective Analysis, who follows the memory market. + +“The controller used to be a big part of the price of an SSD. But complexity has not grown with transistor count. It can have a lot of transistors, and it doesn’t cost more. SATA got more complicated, but PCIe has not. PCIe is very close to the same price as SATA, and [the controller] was the only thing that justified the price diff between the two,” he said. + +**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][6] ]** + +DigiTimes estimates that the price drop for NAND flash chips will cause global shipments of SSDs to surge 20 to 25 percent in 2019, and PCIe SSDs are expected to emerge as a new mainstream offering by the end of 2019 with a market share of 50 percent, matching SATA SSDs. + +## SSD and NAND memory prices already falling + +Market sources to DigiTimes said that unit price for 512GB PCIe SSD has fallen by 11 percent sequentially in the first quarter of 2019, while SATA SSDs have dropped 9 percent. They added that the current average unit price for 512GB SSDs is now equal to that of 256GB SSDs from one year ago, with prices continuing to drop. + +According to DRAMeXchange, NAND flash contract prices will continue falling but at a slower rate in the second quarter of 2019. Memory makers are cutting production to avoid losing any more profits. + +“We’re in a price collapse. For over a year I’ve been saying the destination for NAND is 8 cents per gigabyte, and some spot markets are 6 cents. It was 30 cents a year ago. Contract pricing is around 15 cents now, it had been 25 to 27 cents last year,” said Handy. + +A contract price is what it sounds like. 
A memory maker like Samsung or Micron signs a contract with a SSD maker like Toshiba or Kingston for X amount for Y cents per gigabyte. Spot prices are prices that take place at the end of a quarter (like now) where a vendor anxious to unload excessive inventory has a fire sale to a drive maker that needs it on short supply. + +DigiTimes’s contacts aren’t the only ones who foresee this. Handy was at an analyst event by Samsung a few months back where they presented their projection that PCIe SSD would outsell SATA by the end of this year, and not just in the enterprise but everywhere. + +**More about backup and recovery:** + + * [Backup vs. archive: Why it’s important to know the difference][3] + * [How to pick an off-site data-backup method][7] + * [Tape vs. disk storage: Why isn’t tape dead yet?][8] + * [The correct levels of backup save time, bandwidth, space][9] + + + +Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3384700/as-memory-prices-plummet-pcie-is-poised-to-overtake-sata-for-ssds.html#tk.rss_all + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/12/intel-ssd-p4600-series1-100782098-large.jpg +[2]: https://www.networkworld.com/article/3326058/what-is-an-ssd.html +[3]: https://www.networkworld.com/article/3285652/storage/backup-vs-archive-why-its-important-to-know-the-difference.html +[4]: https://www.newegg.com/Product/Product.aspx?Item=N82E16820249107 +[5]: https://www.newegg.com/Product/Product.aspx?Item=20-156-199&cm_sp=SearchSuccess-_-INFOCARD-_-m.2+-_-20-156-199-_-2&Description=m.2+ +[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11 +[7]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html +[8]: https://www.networkworld.com/article/3315156/storage/tape-vs-disk-storage-why-isnt-tape-dead-yet.html +[9]: https://www.networkworld.com/article/3302804/storage/the-correct-levels-of-backup-save-time-bandwidth-space.html +[10]: https://www.facebook.com/NetworkWorld/ +[11]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190328 Can Better Task Stealing Make Linux Faster.md b/sources/tech/20190328 Can Better Task Stealing Make Linux Faster.md new file mode 100644 index 0000000000..bae14a2f5c --- /dev/null +++ b/sources/tech/20190328 Can Better Task Stealing Make Linux Faster.md @@ -0,0 +1,133 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Can Better Task Stealing Make Linux Faster?) +[#]: via: (https://www.linux.com/blog/can-better-task-stealing-make-linux-faster) +[#]: author: (Oracle ) + +Can Better Task Stealing Make Linux Faster? 
+====== + +_Oracle Linux kernel developer Steve Sistare contributes this discussion on kernel scheduler improvements._ + +### Load balancing via scalable task stealing + +The Linux task scheduler balances load across a system by pushing waking tasks to idle CPUs, and by pulling tasks from busy CPUs when a CPU becomes idle. Efficient scaling is a challenge on both the push and pull sides on large systems. For pulls, the scheduler searches all CPUs in successively larger domains until an overloaded CPU is found, and pulls a task from the busiest group. This is very expensive, costing 10's to 100's of microseconds on large systems, so search time is limited by the average idle time, and some domains are not searched. Balance is not always achieved, and idle CPUs go unused. + +I have implemented an alternate mechanism that is invoked after the existing search in idle_balance() limits itself and finds nothing. I maintain a bitmap of overloaded CPUs, where a CPU sets its bit when its runnable CFS task count exceeds 1. The bitmap is sparse, with a limited number of significant bits per cacheline. This reduces cache contention when many threads concurrently set, clear, and visit elements. There is a bitmap per last-level cache. When a CPU becomes idle, it searches the bitmap to find the first overloaded CPU with a migratable task, and steals it. This simple stealing yields a higher CPU utilization than idle_balance() alone, because the search is cheap, costing 1 to 2 microseconds, so it may be called every time the CPU is about to go idle. Stealing does not offload the globally busiest queue, but it is much better than running nothing at all. + +### Results + +Stealing improves utilization with only a modest CPU overhead in scheduler code. In the following experiment, hackbench is run with varying numbers of groups (40 tasks per group), and the delta in /proc/schedstat is shown for each run, averaged per CPU, augmented with these non-standard stats: + + * %find - percent of time spent in old and new functions that search for idle CPUs and tasks to steal and set the overloaded CPUs bitmap. + * steal - number of times a task is stolen from another CPU. Elapsed time improves by 8 to 36%, costing at most 0.4% more find time. + + + +![load balancing][1] + +[Used with permission][2] + +​​CPU busy utilization is close to 100% for the new kernel, as shown by the green curve in the following graph, versus the orange curve for the baseline kernel: + +![][3] + +Stealing improves Oracle database OLTP performance by up to 9% depending on load, and we have seen some nice improvements for mysql, pgsql, gcc, java, and networking. In general, stealing is most helpful for workloads with a high context switch rate. + +### The code + +As of this writing, this work is not yet upstream, but the latest patch series is at [https://lkml.org/lkml/2018/12/6/1253. ][4]If your kernel is built with CONFIG_SCHED_DEBUG=y, you can verify that it contains the stealing optimization using + +``` +# grep -q STEAL /sys/kernel/debug/sched_features && echo Yes +Yes +``` + +If you try it, note that stealing is disabled for systems with more than 2 NUMA nodes, because hackbench regresses on such systems, as I explain in [https://lkml.org/lkml/2018/12/6/1250 .][5]However, I suspect this effect is specific to hackbench and that stealing will help other workloads on many-node systems. To try it, reboot with kernel parameter sched_steal_node_limit = 8 (or larger). 
+ +### Future work + +After the basic stealing algorithm is pushed upstream, I am considering the following enhancements: + + * If stealing within the last-level cache does not find a candidate, steal across LLC's and NUMA nodes. + * Maintain a sparse bitmap to identify stealing candidates in the RT scheduling class. Currently pull_rt_task() searches all run queues. + * Remove the core and socket levels from idle_balance(), as stealing handles those levels. Remove idle_balance() entirely when stealing across LLC is supported. + * Maintain a bitmap to identify idle cores and idle CPUs, for push balancing. + + + +_This article originally appeared at[Oracle Developers Blog][6]._ + +_Oracle Linux kernel developer Steve Sistare contributes this discussion on kernel scheduler improvements._ + +### Load balancing via scalable task stealing + +The Linux task scheduler balances load across a system by pushing waking tasks to idle CPUs, and by pulling tasks from busy CPUs when a CPU becomes idle. Efficient scaling is a challenge on both the push and pull sides on large systems. For pulls, the scheduler searches all CPUs in successively larger domains until an overloaded CPU is found, and pulls a task from the busiest group. This is very expensive, costing 10's to 100's of microseconds on large systems, so search time is limited by the average idle time, and some domains are not searched. Balance is not always achieved, and idle CPUs go unused. + +I have implemented an alternate mechanism that is invoked after the existing search in idle_balance() limits itself and finds nothing. I maintain a bitmap of overloaded CPUs, where a CPU sets its bit when its runnable CFS task count exceeds 1. The bitmap is sparse, with a limited number of significant bits per cacheline. This reduces cache contention when many threads concurrently set, clear, and visit elements. There is a bitmap per last-level cache. When a CPU becomes idle, it searches the bitmap to find the first overloaded CPU with a migratable task, and steals it. This simple stealing yields a higher CPU utilization than idle_balance() alone, because the search is cheap, costing 1 to 2 microseconds, so it may be called every time the CPU is about to go idle. Stealing does not offload the globally busiest queue, but it is much better than running nothing at all. + +### Results + +Stealing improves utilization with only a modest CPU overhead in scheduler code. In the following experiment, hackbench is run with varying numbers of groups (40 tasks per group), and the delta in /proc/schedstat is shown for each run, averaged per CPU, augmented with these non-standard stats: + + * %find - percent of time spent in old and new functions that search for idle CPUs and tasks to steal and set the overloaded CPUs bitmap. + * steal - number of times a task is stolen from another CPU. Elapsed time improves by 8 to 36%, costing at most 0.4% more find time. + + + +![load balancing][1] + +[Used with permission][2] + +​​CPU busy utilization is close to 100% for the new kernel, as shown by the green curve in the following graph, versus the orange curve for the baseline kernel: + +![][3] + +Stealing improves Oracle database OLTP performance by up to 9% depending on load, and we have seen some nice improvements for mysql, pgsql, gcc, java, and networking. In general, stealing is most helpful for workloads with a high context switch rate. 
+ +### The code + +As of this writing, this work is not yet upstream, but the latest patch series is at [https://lkml.org/lkml/2018/12/6/1253. ][4]If your kernel is built with CONFIG_SCHED_DEBUG=y, you can verify that it contains the stealing optimization using + +``` +# grep -q STEAL /sys/kernel/debug/sched_features && echo Yes +Yes +``` + +If you try it, note that stealing is disabled for systems with more than 2 NUMA nodes, because hackbench regresses on such systems, as I explain in [https://lkml.org/lkml/2018/12/6/1250 .][5]However, I suspect this effect is specific to hackbench and that stealing will help other workloads on many-node systems. To try it, reboot with kernel parameter sched_steal_node_limit = 8 (or larger). + +### Future work + +After the basic stealing algorithm is pushed upstream, I am considering the following enhancements: + + * If stealing within the last-level cache does not find a candidate, steal across LLC's and NUMA nodes. + * Maintain a sparse bitmap to identify stealing candidates in the RT scheduling class. Currently pull_rt_task() searches all run queues. + * Remove the core and socket levels from idle_balance(), as stealing handles those levels. Remove idle_balance() entirely when stealing across LLC is supported. + * Maintain a bitmap to identify idle cores and idle CPUs, for push balancing. + + + +_This article originally appeared at[Oracle Developers Blog][6]._ + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/can-better-task-stealing-make-linux-faster + +作者:[Oracle][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/linux-load-balancing.png?itok=2Uk1yALt (load balancing) +[2]: /LICENSES/CATEGORY/USED-PERMISSION +[3]: https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b7a700fe-edc3-4ea0-876a-c91e1850b59b/Image/00c074f4282bcbaf0c10dd153c5dfa76/steal_graph.png +[4]: https://lkml.org/lkml/2018/12/6/1253 +[5]: https://lkml.org/lkml/2018/12/6/1250 +[6]: https://blogs.oracle.com/linux/can-better-task-stealing-make-linux-faster diff --git a/sources/tech/20190328 Elizabeth Warren-s right-to-repair plan fails to consider data from IoT equipment.md b/sources/tech/20190328 Elizabeth Warren-s right-to-repair plan fails to consider data from IoT equipment.md new file mode 100644 index 0000000000..1ae1222f6e --- /dev/null +++ b/sources/tech/20190328 Elizabeth Warren-s right-to-repair plan fails to consider data from IoT equipment.md @@ -0,0 +1,65 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment) +[#]: via: (https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all) +[#]: author: (Fredric Paul https://www.networkworld.com/author/Fredric-Paul/) + +Elizabeth Warren's right-to-repair plan fails to consider data from IoT equipment +====== + +### Senator and presidential candidate Elizabeth Warren suggests national legislation focused on farm equipment. But that’s only a first step. The data collected by that equipment must also be considered. 
+ +![Thinkstock][1] + +There’s a surprising battle being fought on America’s farms, between farmers and the companies that sell them tractors, combines, and other farm equipment. Surprisingly, the outcome of that war could have far-reaching implications for the internet of things (IoT) — and now Massachusetts senator and Democratic presidential candidate Elizabeth Warren has weighed in with a proposal that could shift the balance of power in this largely under-the-radar struggle. + +## Right to repair farm equipment + +Here’s the story: As part of a new plan to support family farms, Warren came out in support of a national right-to-repair law for farm equipment. That might not sound like a big deal, but it raises the stakes in a long-simmering fight between farmers and equipment makers over who really controls access to the equipment — and to the increasingly critical data gathered by the IoT capabilities built into it. + +**[ Also read:[Right-to-repair smartphone ruling loosens restrictions on industrial, farm IoT][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3] ]** + +[Warren’s proposal reportedly][4] calls for making all diagnostics tools and manuals freely available to the equipment owners, as well as independent repair shops — not just vendors and their authorized agents — and focuses solely on farm equipment. + +That’s a great start, and kudos to Warren for being by far the most prominent politician to weigh in on the issue. + +## Part of a much bigger IoT data issue + +But Warren's proposal merely scratches the surface of the much larger issue of who actually controls the equipment and devices that consumers and businesses buy. Even more important, it doesn’t address the critical data gathered by IoT sensors in everything ranging from smartphones, wearables, and smart-home devices to private and commercial vehicles and aircraft to industrial equipment. + +And as many farmers can tell you, this isn’t some academic argument. That data has real value — not to mention privacy implications. For farmers, it’s GPS-equipped smart sensors tracking everything — from temperature to moisture to soil acidity — that can determine the most efficient times to plant and harvest crops. For consumers, it might be data that affects their home or auto insurance rates, or even divorce cases. For manufacturers, it might cover everything from which equipment needs maintenance to potential issues with raw materials or finished products. + +The solution is simple: IoT users need consistent regulations that ensure free access to what is really their own data, and give them the option to share that data with the equipment vendors — if they so choose and on their own terms. + +At the very least, users need clear statements of the rules, so they know exactly what they’re getting — and not getting — when they buy IoT-enhanced devices and equipment. And if they’re being honest, most equipment vendors would likely admit that clear rules would benefit them as well by creating a level playing field, reducing potential liabilities and helping to avoid making customers unhappy. + +Sen. Warren made headlines earlier this month by proposing to ["break up" tech giants][5] such as Amazon, Apple, and Facebook. If she really wants to help technology buyers, prioritizing the right-to-repair and the associated right to own your own data seems like a more effective approach. 
+ +**[ Now read this:[Big trouble down on the IoT farm][6] ]** + +Join the Network World communities on [Facebook][7] and [LinkedIn][8] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3385122/elizabeth-warrens-right-to-repair-plan-fails-to-consider-data-from-iot-equipment.html#tk.rss_all + +作者:[Fredric Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Fredric-Paul/ +[b]: https://github.com/lujun9972 +[1]: https://images.techhive.com/images/article/2017/03/ai_agriculture_primary-100715481-large.jpg +[2]: https://www.networkworld.com/article/3317696/the-recent-right-to-repair-smartphone-ruling-will-also-affect-farm-and-industrial-equipment.html +[3]: https://www.networkworld.com/newsletters/signup.html +[4]: https://appleinsider.com/articles/19/03/27/presidential-candidate-elizabeth-warren-focusing-right-to-repair-on-farmers-not-tech +[5]: https://www.nytimes.com/2019/03/08/us/politics/elizabeth-warren-amazon.html +[6]: https://www.networkworld.com/article/3262631/big-trouble-down-on-the-iot-farm.html +[7]: https://www.facebook.com/NetworkWorld/ +[8]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190328 Microsoft introduces Azure Stack for HCI.md b/sources/tech/20190328 Microsoft introduces Azure Stack for HCI.md new file mode 100644 index 0000000000..0400f4db04 --- /dev/null +++ b/sources/tech/20190328 Microsoft introduces Azure Stack for HCI.md @@ -0,0 +1,63 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Microsoft introduces Azure Stack for HCI) +[#]: via: (https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Microsoft introduces Azure Stack for HCI +====== + +### Azure Stack is great for your existing hardware, so Microsoft is covering the bases with a turnkey solution. + +![Thinkstock/Microsoft][1] + +Microsoft has introduced Azure Stack HCI Solutions, a new implementation of its on-premises Azure product specifically for [Hyper Converged Infrastructure][2] (HCI) hardware. + +[Azure Stack][3] is an on-premises version of its Azure cloud service. It gives companies a chance to migrate to an Azure environment within the confines of their own enterprise rather than onto Microsoft’s data centers. Once you have migrated your apps and infrastructure to Azure Stack, moving between your systems and Microsoft’s cloud service is easy. + +HCI is the latest trend in server hardware. It uses scale-out hardware systems and a full software-defined platform to handle [virtualization][4] and management. It’s designed to reduce the complexity of a deployment and on-going management, since everything ships fully integrated, hardware and software. + +**[ Read also:[12 most powerful hyperconverged infrasctructure vendors][5] | Get regularly scheduled insights: [Sign up for Network World newsletters][6] ]** + +It makes sense for Microsoft to take this step. Azure Stack was ideal for an existing enterprise. 
Now you can deploy a whole new hardware configuration setup to run Azure in-house, complete with Hyper-V-based software-defined compute, storage, and networking. + +The Windows Admin Center is the main management tool for Azure Stack HCI. It connects to other Azure tools, such as Azure Monitor, Azure Security Center, Azure Update Management, Azure Network Adapter, and Azure Site Recovery. + +“We are bringing our existing HCI technology into the Azure Stack family for customers to run virtualized applications on-premises with direct access to Azure management services such as backup and disaster recovery,” wrote Julia White, corporate vice president of Microsoft Azure, in a [blog post announcing Azure Stack HCI][7]. + +It’s not so much a new product launch as a rebranding. When Microsoft launched Server 2016, it introduced a version called Windows Server Software-Defined Data Center (SDDC), which was built on the Hyper-V hypervisor, and says so in a [FAQ][8] as part of the announcement. + +"Azure Stack HCI is the evolution of Windows Server Software-Defined (WSSD) solutions previously available from our hardware partners. We brought it into the Azure Stack family because we have started to offer new options to connect seamlessly with Azure for infrastructure management services,” the company said. + +Microsoft introduced Azure Stack in 2017, but it was not the first to offer an on-premises cloud option. That distinction goes to [OpenStack][9], a joint project between Rackspace and NASA built on open-source code. Amazon followed with its own product, called [Outposts][10]. + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3385078/microsoft-introduces-azure-stack-for-hci.html#tk.rss_all + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2017/08/5_microsoft-azure-100733132-large.jpg +[2]: https://www.networkworld.com/article/3207567/what-is-hyperconvergence.html +[3]: https://www.networkworld.com/article/3207748/microsoft-introduces-azure-stack-its-answer-to-openstack.html +[4]: https://www.networkworld.com/article/3234795/what-is-virtualization-definition-virtual-machine-hypervisor.html +[5]: https://www.networkworld.com/article/3112622/hardware/12-most-powerful-hyperconverged-infrastructure-vendors.htmll +[6]: https://www.networkworld.com/newsletters/signup.html +[7]: https://azure.microsoft.com/en-us/blog/enabling-customers-hybrid-strategy-with-new-microsoft-innovation/ +[8]: https://azure.microsoft.com/en-us/blog/announcing-azure-stack-hci-a-new-member-of-the-azure-stack-family/ +[9]: https://www.openstack.org/ +[10]: https://www.networkworld.com/article/3324043/aws-does-hybrid-cloud-with-on-prem-hardware-vmware-help.html +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190328 Motorola taps freed-up wireless spectrum for enterprise LTE networks.md b/sources/tech/20190328 Motorola taps freed-up wireless spectrum for enterprise LTE networks.md new file mode 100644 index 0000000000..ce38f54f79 --- /dev/null +++ 
b/sources/tech/20190328 Motorola taps freed-up wireless spectrum for enterprise LTE networks.md @@ -0,0 +1,68 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Motorola taps freed-up wireless spectrum for enterprise LTE networks) +[#]: via: (https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all) +[#]: author: (Patrick Nelson https://www.networkworld.com/author/Patrick-Nelson/) + +Motorola taps freed-up wireless spectrum for enterprise LTE networks +====== + +### Citizens Broadband Radio Service (CBRS) is developing. Out of the gate, Motorola is creating a land mobile radio (LMR) system that includes enterprise-level, voice handheld devices and fast, private data networks. + +![Jiraroj Praditcharoenkul / Getty Images][1] + +In a move that could upend how workers access data in the enterprise, Motorola has announced a broadband product that it says will deliver data at double the capacity and four-times the range of Wi-Fi for end users. The handheld, walkie-talkie-like device, called Mototrbo Nitro, will, importantly, also include a voice channel. “Business-critical voice with private broadband data,” as [Motorola describes it on its website][2]. + +The company sees the product being implemented in traditional, moving-around, voice communications environments, such as factories and warehouses, that increasingly need data supplementation, too. A shop floor that has an electronically delivered repair manual, with included video demonstration, could be one example. Video could be two-way, even. + +**[ Also read:[Wi-Fi 6 is coming to a router near you][3] | Get regularly scheduled insights: [Sign up for Network World newsletters][4] ]** + +The product takes advantage of upcoming Citizens Broadband Radio Service (CBRS) spectrum. That’s a swath of radio bandwidth that’s being released by the Federal Communications Commission (FCC) in the 3.5GHz band. It’s a frequency chunk that is also expected to be used heavily for 5G. In this case, though, Motorola is creating a private LTE network for the enterprise. + +The CBRS band is the first time publicly available broadband spectrum has been available, [Motorola explains in a white paper][5] (pdf) — organizations don’t have to buy licenses, yet they can get access to useful spectrum: [A tiered sharing system, where auction winners will get priority access licenses, but others will have some access too is proposed][6] by the FCC. The non-prioritized open access could be used by any enterprise for whatever — internet of things (IoT) or private networks. + +## Motorola's pitch for using a private broadband network + +Why a private broadband network and not simply cell phones? One giveaway line is in Motorola’s promotional video: “Without sacrificing control,” it says. What it means is that the firm thinks there’s a market for companies who want to run entire business communications systems — data and voice — without involvement from possibly nosy Mobile Network Operator phone companies. [I’ve written before about how control over security is prompting large industrials to explore private networks][7] more. Motorola manages the network in this case, though, for the enterprise. + +Motorola also refers to potentially limited or intermittent onsite coverage and congestion for public, commercial, single-platform voice and data networks. That’s particularly the case in factories, [Motorola says in an ebook][8]. 
Heavy machinery containing radio-unfriendly metal can hinder Wi-Fi and cellular, it claims. Or that traditional Land Mobile Radios (LMRs), such as walkie-talkies and vehicle-mounted mobile radios, don’t handle data natively. In particular, it says that if you want to get into artificial intelligence (AI) and analytics, say, you need a more evolving voice and fast data communications setup. + +## Industrial IoT uses for Motorola's Nitro network + +Industrial IoT will be another beneficiary, Motorola says. It says its CBRS Nitro network could include instant notifications of equipment failures that traditional products can’t provide. It also suggests merging fixed security cameras with “photos and videos of broken machines and sending real-time video to an expert.” + +**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][9] ]** + +Motorola also suggests that by separating consumer Wi-Fi (as is offered in hospitality and transport verticals, for example) from business-critical systems, one reduces traffic congestion risks. + +The highly complicated CBRS band-sharing system is still not through its government testing. “However, we could deploy customer systems under an experimental license,” a Motorola representative told me. + +Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3385117/motorola-taps-cbrs-spectrum-to-create-private-broadband-lmr-system.html#tk.rss_all + +作者:[Patrick Nelson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Patrick-Nelson/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/industry_4-0_industrial_iot_smart_factory_automation_robotic_arm_gear_engineer_tablet_by_jiraroj_praditcharoenkul_gettyimages-1091790364_2400x1600-100788459-large.jpg +[2]: https://www.motorolasolutions.com/en_us/products/two-way-radios/mototrbo/nitro.html +[3]: https://www.networkworld.com/article/3311921/mobile-wireless/wi-fi-6-is-coming-to-a-router-near-you.html +[4]: https://www.networkworld.com/newsletters/signup.html +[5]: https://www.motorolasolutions.com/content/dam/msi/docs/products/mototrbo/nitro/cbrs-white-paper.pdf +[6]: https://www.networkworld.com/article/3300339/private-lte-using-new-spectrum-approaching-market-readiness.html +[7]: https://www.networkworld.com/article/3319176/private-5g-networks-are-coming.html +[8]: https://img04.en25.com/Web/MotorolaSolutionsInc/%7B293ce809-fde0-4619-8507-2b42076215c3%7D_radio_evolution_eBook_Nitro_03.13.19_MS_V3.pdf?elqTrackId=850d56c6d53f4013afa2290a66d6251f&elqaid=2025&elqat=2 +[9]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture +[10]: https://www.facebook.com/NetworkWorld/ +[11]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190328 Robots in Retail are Real- and so is Edge Computing.md b/sources/tech/20190328 Robots in Retail are Real- and so is Edge Computing.md new file mode 100644 index 0000000000..f62317ae54 --- /dev/null +++ b/sources/tech/20190328 Robots in Retail are Real- and so is Edge Computing.md @@ -0,0 +1,48 
@@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Robots in Retail are Real… and so is Edge Computing) +[#]: via: (https://www.networkworld.com/article/3385046/robots-in-retail-are-real-and-so-is-edge-computing.html#tk.rss_all) +[#]: author: (Wendy Torell https://www.networkworld.com/author/Wendy-Torell/) + +Robots in Retail are Real… and so is Edge Computing +====== + +### I’ve seen plenty of articles touting the promise of edge computing technologies like AI and robotics in retail brick & mortar, but it wasn’t until this past weekend that I had my first encounter with an actual robot in a retail store. + +![Getty][1] + +I’ve seen plenty of articles touting the promise of [edge computing][2] technologies like AI and robotics in retail brick & mortar, but it wasn’t until this past weekend that I had my first encounter with an actual robot in a retail store. I was doing my usual weekly grocery shopping at my local Stop & Shop, and who comes strolling down the aisle, but…. Marty… the autonomous robot. He was friendly looking with his big googly eyes and was wearing a sign that explained he was there for safety, and that he was monitoring the aisles to report spills, debris, and other hazards to employees to improve my shopping experience. He caught the attention of most of the shoppers. + +At the National Retail Federation conference in NY that I attended in January, this was a topic of one of the [panel sessions][3]. It all makes sense… a positive customer experience is critical to retail success. But employee-to-customer (human to human) interaction has also been proven important. That’s where Marty comes in… to free up resources spent on tedious, time consuming tasks so that personnel can spend more time directly helping customers. + +**Use cases for robots in stores** + +Robotics have been utilized by retailers in manufacturing floors, and in distribution warehouses to improve productivity and optimize business processes along the supply chain. But it is only more recently that we’re seeing them make their way into the retail store front, where they are in contact with the customers. Alerting to hazards in the aisles is just one of many use-cases for the robots. They can also be used to scan and re-stock shelves, or as general information sources and greeters upon entering the store to guide your shopping experience. But how does a retailer justify the investment in this type of technology? Determining your ROI isn’t as cut and dry as in a warehouse environment, for example, where costs are directly tied to number of staff, time to complete tasks, etc… I guess time will tell for the retailers that are giving it a go. + +**What does it mean for the IT equipment on-premise ([micro data center][4])** + +Robotics are one of the many ways retail stores are being digitized. Video analytics is another big one, being used to analyze facial expressions for customer satisfaction, obtain customer demographics as input to product development, or ensure queue lines don’t get too long. My colleague, Patrick Donovan, wrote a detailed [blog post][5] about our trip to NRF and the impact on the physical infrastructure in the stores. In a nutshell, the equipment on-premise is becoming more mission critical, more integrated to business applications in the cloud, more tied to positive customer-experiences… and with that comes the need for more secure, more available, more manageable edge. 
But this is easier said than done in an environment that generally has no IT staff on-premise, and with hundreds or potentially thousands of stores spread out geographically. So how do we address this? + +We answer this question in a white paper that Patrick and I are currently writing titled “An Integrated Ecosystem to Solve Edge Computing Infrastructure Challenges”. Here’s a hint, (1) an integrated ecosystem of partners, and (2) an integrated micro data center that emerges from the ecosystem. I’ll be sure to comment on this blog with the link when the white paper becomes publicly available! In the meantime, explore our [edge computing][2] landing page to learn more. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3385046/robots-in-retail-are-real-and-so-is-edge-computing.html#tk.rss_all + +作者:[Wendy Torell][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Wendy-Torell/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/03/gettyimages-828488368-1060x445-100792228-large.jpg +[2]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp +[3]: https://stores.org/2019/01/15/why-is-there-a-robot-in-my-store/ +[4]: https://www.apc.com/us/en/solutions/business-solutions/micro-data-centers.jsp +[5]: https://blog.apc.com/2019/02/06/4-thoughts-edge-computing-infrastructure-retail-sector/ diff --git a/sources/tech/20190329 How to submit a bug report with Bugzilla.md b/sources/tech/20190329 How to submit a bug report with Bugzilla.md new file mode 100644 index 0000000000..ee778410e7 --- /dev/null +++ b/sources/tech/20190329 How to submit a bug report with Bugzilla.md @@ -0,0 +1,102 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to submit a bug report with Bugzilla) +[#]: via: (https://opensource.com/article/19/3/bug-reporting) +[#]: author: (David Both https://opensource.com/users/dboth) + +How to submit a bug report with Bugzilla +====== + +Submitting bug reports is an easy way to give back and it helps everyone. + +![][1] + +I spend a lot of time doing research for my books and [Opensource.com][2] articles. Sometimes this leads me to discover bugs in the software I use, including Fedora and the Linux kernel. As a long-time Linux user and sysadmin, I have benefited greatly from GNU/Linux, and I like to give back. I am not a C language programmer, so I don't create fixes and submit them with bug reports, as some people do. But a way I can return some value to the Linux community is by reporting bugs. + +Product maintainers use a lot of tools to let their users search for existing bugs and report new ones. Bugzilla is a popular tool, and I use the Red Hat [Bugzilla][3] website to report Fedora-related bugs because I primarily use Fedora on the systems I'm responsible for. It's an easy process, but it may seem daunting if you have never done it before. So let's start with the basics. + +### Start with a search + +Even though it's tempting, never assume that seemingly anomalous behavior is the result of a bug. I always start with a search of relevant websites, such as the [Fedora wiki][4], the [CentOS wiki][5], and the documentation for the distro I'm using. I also try to check the various distro listservs. 
+ +If it appears that no one has encountered this problem before (or if they have, they haven't reported it as a bug), I go to the Red Hat Bugzilla site and begin searching for a bug report that might come close to matching the symptoms I encountered. + +You can search the Red Hat Bugzilla site without an account. Go to the Bugzilla site and click on the [Advanced Search tab][6]. + +![Searching for a bug][7] + +For example, if you want to search for bug reports related to Fedora's Rescue mode kernel, enter the following data in the Advanced Search form. + +Field | Logic | Data or Selection +---|---|--- +Summary | Contains the string | Rescue mode kernel +Classification | | Fedora +Product | | Fedora +Component | | grub2 +Status | | New + Assigned + +Then press **Search**. This returns a list of one bug with the ID 1654337 (which happens to be a bug I reported). + +![Bug report list][8] + +Click on the ID to view my bug report details. I entered as much relevant data as possible in the top section of the report. In the comments, I described the problem and included supporting files, other relevant comments (such as the fact that the problem occurred on multiple motherboards), and the steps to reproduce the problem. + +![Bug report details][9] + +The more information you can provide here that pertains to the bug, such as symptoms, the hardware and software environments (if they are applicable), other software that was running at the time, kernel and distro release levels, and so on, the easier it will be to determine where to assign your bug. In this case, I originally chose the kernel component, but it was quickly changed to the GRUB2 component because the problem occurred before the kernel loaded. + +### How to submit a bug report + +The Red Hat [Bugzilla][3] website requires an account to submit new bugs or comment on old ones. It is easy to sign up. On Bugzilla's main page, click **Open a New Account** and fill in the requested information. After you verify your email address, you can fill in the rest of the information to create your account. + +_**Advisory:**_ _Bugzilla is a working website that people count on for support. I strongly suggest not creating an account unless you intend to submit bug reports or comment on existing bugs._ + +To demonstrate how to submit a bug report, I'll use a fictional example of creating a bug against the Xfce4-terminal emulator in Fedora. _Please do not do this unless you have a real bug to report._ + +Log into your account and click on **New** in the menu bar or the **File a Bug** button. You'll need to select a classification for the bug to continue the process. This will narrow down some of the choices on the next page. + +The following image shows how I filled out the required fields (and a couple of others that are not required). + +![Reporting a bug][10] + +When you type a short problem description in the **Summary** field, Bugzilla displays a list of other bugs that might match yours. If one matches, click **Add Me to the CC List** to receive emails when changes are made to the bug. + +If none match, fill in the information requested in the **Description** field. Add as much information as you can, including error messages and screen captures that illustrate the problem. Be sure to describe the exact steps needed to reproduce the problem and how reproducible it is: does it fail every time, every second, third, fourth, random time, or whatever. If it happened only once, it's very unlikely anyone will be able to reproduce the problem you observed. 
+ +When you finish adding as much information as you can, press **Submit Bug**. + +### Be kind + +Bug reporting websites are not for asking questions—they are for searching and reporting bugs. That means you must have performed some work on your own to conclude that there really is a bug. There are many wikis, listservs, and Q&A websites that are appropriate for asking questions. Use sites like Bugzilla to search for existing bug reports on the problem you have found. + +Be sure you submit your bugs on the correct bug reporting website. For example, only submit bugs about Red Hat products on the Red Hat Bugzilla, and submit bugs about LibreOffice by following [LibreOffice's instructions][11]. + +Reporting bugs is not difficult, and it is an important way to participate. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/bug-reporting + +作者:[David Both (Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bug-insect-butterfly-diversity-inclusion-2.png?itok=TcC9eews +[2]: http://Opensource.com +[3]: https://bugzilla.redhat.com/ +[4]: https://fedoraproject.org/wiki/ +[5]: https://wiki.centos.org/ +[6]: https://bugzilla.redhat.com/query.cgi?format=advanced +[7]: https://opensource.com/sites/default/files/uploads/bugreporting-1.png (Searching for a bug) +[8]: https://opensource.com/sites/default/files/uploads/bugreporting-2.png (Bug report list) +[9]: https://opensource.com/sites/default/files/uploads/bugreporting-4.png (Bug report details) +[10]: https://opensource.com/sites/default/files/uploads/bugreporting-3.png (Reporting a bug) +[11]: https://wiki.documentfoundation.org/QA/BugReport diff --git a/sources/tech/20190329 Russia demands access to VPN providers- servers.md b/sources/tech/20190329 Russia demands access to VPN providers- servers.md new file mode 100644 index 0000000000..0c950eb04f --- /dev/null +++ b/sources/tech/20190329 Russia demands access to VPN providers- servers.md @@ -0,0 +1,77 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Russia demands access to VPN providers’ servers) +[#]: via: (https://www.networkworld.com/article/3385050/russia-demands-access-to-vpn-providers-servers.html#tk.rss_all) +[#]: author: (Tim Greene https://www.networkworld.com/author/Tim-Greene/) + +Russia demands access to VPN providers’ servers +====== + +### 10 VPN service providers have been ordered to link their servers in Russia to the state censorship agency by April 26 + +![Getty Images][1] + +The Russian censorship agency Roskomnadzor has ordered 10 [VPN][2] service providers to link their servers in Russia to its network in order to stop users within the country from reaching banned sites. + +If they fail to comply, their services will be blocked, according to a machine translation of the order. + +[RELATED: Best VPN routers for small business][3] + +The 10 VPN providers are ExpressVPN, HideMyAss!, Hola VPN, IPVanish, Kaspersky Secure Connection, KeepSolid, NordVPN, OpenVPN, TorGuard, and VyprVPN. 
+ +In response at least five of the 10 – Express VPN, IPVanish, KeepSolid, NordVPN, TorGuard and – say they are tearing down their servers in Russia but continuing to offer their services to Russian customers if they can reach the providers’ servers located outside of Russia. A sixth provider, Kaspersky Labs, which is based in Moscow, says it will comply with the order. The other four could not be reached for this article. + +IPVanish characterized the order as another phase of “Russia’s censorship agenda” dating back to 2017 when the government enacted a law forbidding the use of VPNs to access blocked Web sites. + +“Up until recently, however, they had done little to enforce such rules,” IPVanish [says in its blog][4]. “These new demands mark a significant escalation.” + +The reactions of those not complying are similar. TorGuard says it has taken steps to remove all its physical servers from Russia. It is also cutting off its business with data centers in the region + +**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]** + +“We would like to be clear that this removal of servers was a voluntary decision by TorGuard management and no equipment seizure occurred,” [TorGuard says in its blog][6]. “We do not store any logs so even if servers were compromised it would be impossible for customer’s data to be exposed.” + +TorGuard says it is deploying more servers in adjacent countries to protect fast download speeds for customers in the region. + +IPVanish says it has faced similar demands from Russia before and responded similarly. In 2016, a new Russian law required online service providers to store customers’ private data for a year. “In response, [we removed all physical server presence in Russia][7], while still offering Russians encrypted connections via servers outside of Russian borders,” the company says. “That decision was made in accordance with our strict zero-logs policy.” + +KeepSolid says it had no servers in Russia, but it will not comply with the order to link with Roskomnadzor's network. KeepSolid says it will [draw on its experience dealing with the Great Firewall of China][8] to fight the Russian censorship attempt. "Our team developed a special [KeepSolid Wise protocol][9] which is designed for use in countries where the use of VPN is blocked," a spokesperson for the company said in an email statement. + +NordVPN says it’s shutting down all its Russian servers, and all of them will be shredded as of April 1. [The company says in a blog][10] that some of its customers who connected to its Russian servers without use of the NordVPN application will have to reconfigure their devices to insure their security. Those customers using the app won’t have to do anything differently because the option to connect to Russia via the app has been removed. + +ExpressVPN is also not complying with the order. "As a matter of principle, ExpressVPN will never cooperate with efforts to censor the internet by any country," said the company's vice presidentn Harold Li in an email, but he said that blocking traffic will be ineffective. "We epect that Russian internet users will still be able to find means of accessing the sites and services they want, albeit perhaps with some additional effort." 
+ +Kaspersky Labs says it will comply with the Russian order and responded to emailed questions about its reaction with this written response: + +“Kaspersky Lab is aware of the new requirements from Russian regulators for VPN providers operating in the country. These requirements oblige VPN providers to restrict access to a number of websites that were listed and prohibited by the Russian Government in the country’s territory. As a responsible company, Kaspersky Lab complies with the laws of all the countries where it operates, including Russia. At the same time, the new requirements don’t affect the main purpose of Kaspersky Secure Connection which protects user privacy and ensures confidentiality and protection against data interception, for example, when using open Wi-Fi networks, making online payments at cafes, airports or hotels. Additionally, the new requirements are relevant to VPN use only in Russian territory and do not concern users in other countries.” + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3385050/russia-demands-access-to-vpn-providers-servers.html#tk.rss_all + +作者:[Tim Greene][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Tim-Greene/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/10/ipsecurity-protocols-network-security-vpn-100775457-large.jpg +[2]: https://www.networkworld.com/article/3268744/understanding-virtual-private-networks-and-why-vpns-are-important-to-sd-wan.html +[3]: http://www.networkworld.com/article/3002228/router/best-vpn-routers-for-small-business.html#tk.nww-fsb +[4]: https://nordvpn.com/blog/nordvpn-servers-roskomnadzor-russia/ +[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[6]: https://torguard.net/blog/why-torguard-has-removed-all-russian-servers/ +[7]: https://blog.ipvanish.com/ipvanish-removes-russian-vpn-servers-from-moscow/ +[8]: https://www.vpnunlimitedapp.com/blog/what-roskomnadzor-demands-from-vpns/ +[9]: https://www.vpnunlimitedapp.com/blog/keepsolid-wise-a-smart-solution-to-get-total-online-freedom/ +[10]: /cms/article/blog%20https:/nordvpn.com/blog/nordvpn-servers-roskomnadzor-russia/ +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190329 ShadowReader- Serverless load tests for replaying production traffic.md b/sources/tech/20190329 ShadowReader- Serverless load tests for replaying production traffic.md new file mode 100644 index 0000000000..3d7f7eaf0c --- /dev/null +++ b/sources/tech/20190329 ShadowReader- Serverless load tests for replaying production traffic.md @@ -0,0 +1,176 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (ShadowReader: Serverless load tests for replaying production traffic) +[#]: via: (https://opensource.com/article/19/3/shadowreader-serverless) +[#]: author: (Yuki Sawa https://opensource.com/users/yukisawa1/users/yongsanchez) + +ShadowReader: Serverless load tests for replaying production traffic +====== +This open source tool 
recreates serverless production conditions to +pinpoint causes of memory leaks and other errors that aren't visible in +the QA environment. +![Traffic lights at night][1] + +While load testing has become more accessible, configuring load tests that faithfully re-create production conditions can be difficult. A good load test must use a set of URLs that are representative of production traffic and achieve request rates that mimic real users. Even performing distributed load tests requires the upkeep of a fleet of servers. + +[ShadowReader][2] aims to solve these problems. It gathers URLs and request rates straight from production logs and replays them using AWS Lambda. Being serverless, it is more cost-efficient and performant than traditional distributed load tests; in practice, it has scaled beyond 50,000 requests per minute. + +At Edmunds, we have been able to utilize these capabilities to solve problems, such as Node.js memory leaks that were happening only in production, by recreating the same conditions in our QA environment. We're also using it daily to generate load for pre-production canary deployments. + +The memory leak problem we faced in our Node.js application confounded our engineering team; as it was only occurring in our production environment; we could not reproduce it in QA until we introduced ShadowReader to replay production traffic into QA. + +### The incident + +On Christmas Eve 2017, we suffered an incident where there was a jump in response time across the board with error rates tripling and impacting many users of our website. + +![Christmas Eve 2017 incident][3] + +![Christmas Eve 2017 incident][4] + +Monitoring during the incident helped identify and resolve the issue quickly, but we still needed to understand the root cause. + +At Edmunds, we leverage a robust continuous delivery (CD) pipeline that releases new updates to production multiple times a day. We also dynamically scale up our applications to accommodate peak traffic and scale down to save costs. Unfortunately, this had the side effect of masking a memory leak. + +In our investigation, we saw that the memory leak had existed for weeks, since early December. Memory usage would climb to 60%, along with a slow increase in 99th percentile response time. + +Between our CD pipeline and autoscaling events, long-running containers were frequently being shut down and replaced by newer ones. This inadvertently masked the memory leak until December, when we decided to stop releasing software to ensure stability during the holidays. + +![Slow increase in 99th percentile response time][5] + +### Our CD pipeline + +At a glance, Edmunds' CD pipeline looks like this: + + 1. Unit test + 2. Build a Docker image for the application + 3. Integration test + 4. Load test/performance test + 5. Canary release + + + +The solution is fully automated and requires no manual cutover. The final step is a canary deployment directly into the live website, allowing us to release multiple times a day. + +For our load testing, we leveraged custom tooling built on top of JMeter. It takes random samples of production URLs and can simulate various percentages of traffic. Unfortunately, however, our load tests were not able to reproduce the memory leak in any of our pre-production environments. + +### Solving the memory leak + +When looking at the memory patterns in QA, we noticed there was a very healthy pattern. 
Our initial hypothesis was that our JMeter load testing in QA was unable to simulate production traffic in a way that allows us to predict how our applications will perform. + +While the load test takes samples from production URLs, it can't precisely simulate the URLs customers use and the exact frequency of calls (i.e., the burst rate). + +Our first step was to re-create the problem in QA. We used a new tool called ShadowReader, a project that evolved out of our hackathons. While many projects we considered were product-focused, this was the only operations-centric one. It is a load-testing tool that runs on AWS Lambda and can replay production traffic and usage patterns against our QA environment. + +The results it returned were immediate: + +![QA results in ShadowReader][6] + +Knowing that we could re-create the problem in QA, we took the additional step to point ShadowReader to our local environment, as this allowed us to trigger Node.js heap dumps. After analyzing the contents of the dumps, it was obvious the memory leak was coming from two excessively large objects containing only strings. At the time the snapshot dumped, these objects contained 373MB and 63MB of strings! + +![Heap dumps show source of memory leak][7] + +We found that both objects were temporary lookup caches containing metadata to be used on the client side. Neither of these caches was ever intended to be persisted on the server side. The user's browser cached only its own metadata, but on the server side, it cached the metadata for all users. This is why we were unable to reproduce the leak with synthetic testing. Synthetic tests always resulted in the same fixed set of metadata in the server-side caches. The leak surfaced only when we had a sufficient amount of unique metadata being generated from a variety of users. + +Once we identified the problem, we were able to remove the large caches that we observed in the heap dumps. We've since instrumented the application to start collecting metrics that can help detect issues like this faster. + +![Collecting metrics][8] + +After making the fix in QA, we saw that the memory usage was constant and the leak was plugged. + +![Graph showing memory leak fixed][9] + +### What is ShadowReader? + +ShadowReader is a serverless load-testing framework powered by AWS Lambda and S3 to replay production traffic. It mimics real user traffic by replaying URLs from production at the same rate as the live website. We are happy to announce that after months of internal usage, we have released it as open source! + +#### Features + + * ShadowReader mimics real user traffic by replaying user requests (URLs). It can also replay certain headers, such as True-Client-IP and User-Agent, along with the URL. + + + * It is more efficient cost- and performance-wise than traditional distributed load tests that run on a fleet of servers. Managing a fleet of servers for distributed load testing can cost $1,000 or more per month; with a serverless stack, it can be reduced to $100 per month by provisioning compute resources on demand. + + + * We've scaled it up to 50,000 requests per minute, but it should be able to handle more than 100,000 reqs/min. + + + * New load tests can be spun up and stopped instantly, unlike traditional load-testing tools, which can take many minutes to generate the test plan and distribute the test data to the load-testing servers. + + + * It can ramp traffic up or down by a percentage value to function as a more traditional load test. 
+ + + * Its plugin system enables you to switch out plugins to change its behavior. For instance, you can switch from past replay (i.e., replays past requests) to live replay (i.e., replays requests as they come in). + + + * Currently, it can replay logs from the [Application Load Balancer][10] and [Classic Load Balancer][11] Elastic Load Balancers (ELBs), and support for other load balancers is coming soon. + + + +### How it works + +ShadowReader is composed of four different Lambdas: a Parser, an Orchestrator, a Master, and a Worker. + +![ShadowReader architecture][12] + +When a user visits a website, a load balancer (in this case, an ELB) typically routes the request. As the ELB routes the request, it will log the event and ship it to S3. + +Next, ShadowReader triggers a Parser Lambda every minute via a CloudWatch event, which parses the latest access (ELB) logs on S3 for that minute, then ships the parsed URLs into another S3 bucket. + +On the other side of the system, ShadowReader also triggers an Orchestrator lambda every minute. This Lambda holds the configurations and state of the system. + +The Orchestrator then invokes a Master Lambda function. From the Orchestrator, the Master receives information on which time slice to replay and downloads the respective data from the S3 bucket of parsed URLs (deposited there by the Parser). + +The Master Lambda divides the load-test URLs into smaller batches, then invokes and passes each batch into a Worker Lambda. If 800 requests must be sent out, then eight Worker Lambdas will be invoked, each one handling 100 URLs. + +Finally, the Worker receives the URLs passed from the Master and starts load-testing the chosen test environment. + +### The bigger picture + +The challenge of reproducibility in load testing serverless infrastructure becomes increasingly important as we move from steady-state application sizing to on-demand models. While ShadowReader is designed and used with Edmunds' infrastructure in mind, any application leveraging ELBs can take full advantage of it. Soon, it will have support to replay the traffic of any service that generates traffic logs. + +As the project moves forward, we would love to see it evolve to be compatible with next-generation serverless runtimes such as Knative. We also hope to see other open source communities build similar toolchains for their infrastructure as serverless becomes more prevalent. + +### Getting started + +If you would like to test drive ShadowReader, check out the [GitHub repo][2]. The README contains how-to guides and a batteries-included [demo][13] that will deploy all the necessary resources to try out live replay in your AWS account. + +We would love to hear what you think and welcome contributions. See the [contributing guide][14] to get started! 
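+If you want a feel for what the Worker stage boils down to before diving into the repo, here is a minimal, hypothetical sketch of a Lambda-style handler that replays a batch of URLs against a target environment. It is not ShadowReader's actual source (the real Worker also handles rate control, header replay, and metrics), and the target URL and event shape are placeholders.
+
+```
+# Hypothetical worker sketch -- not ShadowReader's real code.
+import urllib.request
+from concurrent.futures import ThreadPoolExecutor
+
+TARGET = "https://qa.example.com"          # environment under test (placeholder)
+
+def replay(path):
+    """Send one request and record only the HTTP status."""
+    try:
+        with urllib.request.urlopen(TARGET + path, timeout=10) as resp:
+            return resp.status
+    except OSError:                        # URLError/HTTPError are OSError subclasses
+        return None
+
+def handler(event, context=None):
+    """Entry point; the Master passes a batch of parsed URL paths in `event`."""
+    paths = event.get("urls", [])
+    with ThreadPoolExecutor(max_workers=20) as pool:
+        statuses = list(pool.map(replay, paths))
+    ok = sum(1 for s in statuses if s is not None and s < 500)
+    return {"requested": len(paths), "succeeded": ok}
+
+if __name__ == "__main__":                 # quick local smoke test
+    print(handler({"urls": ["/", "/health"]}))
+```
+
+The division of labor matters more than the details: the Parser and Master do the bookkeeping, so each Worker only has to fire its batch as quickly as possible, which is what lets the system scale by simply invoking more Workers.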
+ +* * * + +_This article is based on "[How we fixed a Node.js memory leak by using ShadowReader to replay production traffic into QA][15]," published on the_ _Edmunds Tech Blog_ _with the help of Carlos Macasaet, Sharath Gowda, and Joey Davis._ _Yuki_ _Sawa_ _also presented this_ as* [ShadowReader—Serverless load tests for replaying production traffic][16] at ([SCaLE 17x][17]) March 7-10 in Pasadena, Calif.* + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/3/shadowreader-serverless + +作者:[Yuki Sawa][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/yukisawa1/users/yongsanchez +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/traffic-light-go.png?itok=nC_851ys (Traffic lights at night) +[2]: https://github.com/edmunds/shadowreader +[3]: https://opensource.com/sites/default/files/uploads/shadowreader_incident1_0.png (Christmas Eve 2017 incident) +[4]: https://opensource.com/sites/default/files/uploads/shadowreader_incident2.png (Christmas Eve 2017 incident) +[5]: https://opensource.com/sites/default/files/uploads/shadowreader_99thpercentile.png (Slow increase in 99th percentile response time) +[6]: https://opensource.com/sites/default/files/uploads/shadowreader_qa.png (QA results in ShadowReader) +[7]: https://opensource.com/sites/default/files/uploads/shadowreader_heapdumps.png (Heap dumps show source of memory leak) +[8]: https://opensource.com/sites/default/files/uploads/shadowreader_code.png (Collecting metrics) +[9]: https://opensource.com/sites/default/files/uploads/shadowreader_leakplugged.png (Graph showing memory leak fixed) +[10]: https://docs.aws.amazon.com/elasticloadbalancing/latest/application/introduction.html +[11]: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/introduction.html +[12]: https://opensource.com/sites/default/files/uploads/shadowreader_architecture.png (ShadowReader architecture) +[13]: https://github.com/edmunds/shadowreader#live-replay +[14]: https://github.com/edmunds/shadowreader/blob/master/CONTRIBUTING.md +[15]: https://technology.edmunds.com/2018/08/25/Investigating-a-Memory-Leak-and-Introducing-ShadowReader/ +[16]: https://www.socallinuxexpo.org/scale/17x/speakers/yuki-sawa +[17]: https://www.socallinuxexpo.org/ diff --git a/sources/tech/20190401 Build and host a website with Git.md b/sources/tech/20190401 Build and host a website with Git.md new file mode 100644 index 0000000000..32a07d3490 --- /dev/null +++ b/sources/tech/20190401 Build and host a website with Git.md @@ -0,0 +1,226 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Build and host a website with Git) +[#]: via: (https://opensource.com/article/19/4/building-hosting-website-git) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Build and host a website with Git +====== +Publishing your own website is easy if you let Git help you out. Learn +how in the first article in our series about little-known Git uses. 
+![web development and design, desktop and browser][1] + +[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. + +Creating a website used to be both sublimely simple and a form of black magic all at once. Back in the old days of Web 1.0 (that's not what anyone actually called it), you could just open up any website, view its source code, and reverse engineer the HTML—with all its inline styling and table-based layout—and you felt like a programmer after an afternoon or two. But there was still the matter of getting the page you created on the internet, which meant dealing with servers and FTP and webroot directories and file permissions. While the modern web has become far more complex since then, self-publication can be just as easy (or easier!) if you let Git help you out. + +### Create a website with Hugo + +[Hugo][3] is an open source static site generator. Static sites are what the web used to be built on (if you go back far enough, it was _all_ the web was). There are several advantages to static sites: they're relatively easy to write because you don't have to code them, they're relatively secure because there's no code executed on the pages, and they can be quite fast because there's no processing aside from transferring whatever you have on the page. + +Hugo isn't the only static site generator out there. [Grav][4], [Pico][5], [Jekyll][6], [Podwrite][7], and many others provide an easy way to create a full-featured website with minimal maintenance. Hugo happens to be one with GitLab integration built in, which means you can generate and host your website with a free GitLab account. + +Hugo has some pretty big fans, too. For instance, if you've ever gone to the Let's Encrypt website, then you've used a site built with Hugo. + +![Let's Encrypt website][8] + +#### Install Hugo + +Hugo is cross-platform, and you can find installation instructions for MacOS, Windows, Linux, OpenBSD, and FreeBSD in [Hugo's getting started resources][9]. + +If you're on Linux or BSD, it's easiest to install Hugo from a software repository or ports tree. The exact command varies depending on what your distribution provides, but on Fedora you would enter: + +``` +$ sudo dnf install hugo +``` + +Confirm you have installed it correctly by opening a terminal and typing: + +``` +$ hugo help +``` + +This prints all the options available for the **hugo** command. If you don't see that, you may have installed Hugo incorrectly or need to [add the command to your path][10]. + +#### Create your site + +To build a Hugo site, you must have a specific directory structure, which Hugo will generate for you by entering: + +``` +$ hugo new site mysite +``` + +You now have a directory called **mysite** , and it contains the default directories you need to build a Hugo website. + +Git is your interface to get your site on the internet, so change directory to your new **mysite** folder and initialize it as a Git repository: + +``` +$ cd mysite +$ git init . +``` + +Hugo is pretty Git-friendly, so you can even use Git to install a theme for your site. 
Unless you plan on developing the theme you're installing, you can use the **\--depth** option to clone the latest state of the theme's source: + +``` +$ git clone --depth 1 \ + +themes/mero +``` + + +Now create some content for your site: + +``` +$ hugo new posts/hello.md +``` + +Use your favorite text editor to edit the **hello.md** file in the **content/posts** directory. Hugo accepts Markdown files and converts them to themed HTML files at publication, so your content must be in [Markdown format][11]. + +If you want to include images in your post, create a folder called **images** in the **static** directory. Place your images into this folder and reference them in your markup using the absolute path starting with **/images**. For example: + +``` +![A picture of a thing](/images/thing.jpeg) +``` + +#### Choose a theme + +You can find more themes at [themes.gohugo.io][12], but it's best to stay with a basic theme while testing. The canonical Hugo test theme is [Ananke][13]. Some themes have complex dependencies, and others don't render pages the way you might expect without complex configuration. The Mero theme used in this example comes bundled with a detailed **config.toml** configuration file, but (for the sake of simplicity) I'll provide just the basics here. Open the file called **config.toml** in a text editor and add three configuration parameters: + +``` + +languageCode = "en-us" +title = "My website on the web" +theme = "mero" + +[params] + author = "Seth Kenlon" + description = "My hugo demo" +``` + +#### Preview your site + +You don't have to put anything on the internet until you're ready to publish it. While you work, you can preview your site by launching the local-only web server that ships with Hugo. + +``` +$ hugo server --buildDrafts --disableFastRender +``` + +Open a web browser and navigate to **** to see your work in progress. + +### Publish with Git to GitLab + +To publish and host your site on GitLab, create a repository for the contents of your site. + +To create a repository in GitLab, click on the **New Project** button in your GitLab Projects page. Create an empty repository called **yourGitLabUsername.gitlab.io** , replacing **yourGitLabUsername** with your GitLab user name or group name. You must use this scheme as the name of your project. If you want to add a custom domain later, you can. + +Do not include a license or a README file (because you've started a project locally, adding these now would make pushing your data to GitLab more complex, and you can always add them later). + +Once you've created the empty repository on GitLab, add it as the remote location for the local copy of your Hugo site, which is already a Git repository: + +``` +$ git remote add origin git@gitlab.com:skenlon/mysite.git +``` + +Create a GitLab site configuration file called **.gitlab-ci.yml** and enter these options: + +``` +image: monachus/hugo + +variables: + GIT_SUBMODULE_STRATEGY: recursive + +pages: + script: + - hugo + artifacts: + paths: + - public + only: + - master +``` + +The **image** parameter defines a containerized image that will serve your site. The other parameters are instructions telling GitLab's servers what actions to execute when you push new code to your remote repository. For more information on GitLab's CI/CD (Continuous Integration and Delivery) options, see the [CI/CD section of GitLab's docs][14]. + +#### Set the excludes + +Your Git repository is configured, the commands to build your site on GitLab's servers are set, and your site ready to publish. 
For your first Git commit, you must take a few extra precautions so you're not version-controlling files you don't intend to version-control. + +First, add the **/public** directory that Hugo creates when building your site to your **.gitignore** file. You don't need to manage the finished site in Git; all you need to track are your source Hugo files. + +``` +$ echo "/public" >> .gitignore +``` + +You can't maintain a Git repository within a Git repository without creating a Git submodule. For the sake of keeping this simple, move the embedded **.git** directory so that the theme is just a theme. + +Note that you _must_ add your theme files to your Git repository so GitLab will have access to the theme. Without committing your theme files, your site cannot successfully build. + +``` +$ mv themes/mero/.git ~/.local/share/Trash/files/ +``` + +Alternately, use a **trash** command such as [Trashy][15]: + +``` +$ trash themes/mero/.git +``` + +Now you can add all the contents of your local project directory to Git and push it to GitLab: + +``` +$ git add . +$ git commit -m 'hugo init' +$ git push -u origin HEAD +``` + +### Go live with GitLab + +Once your code has been pushed to GitLab, take a look at your project page. An icon indicates GitLab is processing your build. It might take several minutes the first time you push your code, so be patient. However, don't be _too_ patient, because the icon doesn't always update reliably. + +![GitLab processing your build][16] + +While you're waiting for GitLab to assemble your site, go to your project settings and find the **Pages** panel. Once your site is ready, its URL will be provided for you. The URL is **yourGitLabUsername.gitlab.io/yourProjectName**. Navigate to that address to view the fruits of your labor. + +![Previewing Hugo site][17] + +If your site fails to assemble correctly, GitLab provides insight into the CI/CD pipeline logs. Review the error message for an indication of what went wrong. + +### Git and the web + +Hugo (or Jekyll or similar tools) is just one way to leverage Git as your web publishing tool. With server-side Git hooks, you can design your own Git-to-web pipeline with minimal scripting. With the community edition of GitLab, you can self-host your own GitLab instance or you can use an alternative like [Gitolite][18] or [Gitea][19] and use this article as inspiration for a custom solution. Have fun! 
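
If you want a concrete starting point for that hook-based approach, here is a minimal sketch of a server-side **post-receive** hook. It assumes a bare repository on a server that already has Hugo installed, and the paths (**/var/tmp/mysite-src** for the checkout, **/var/www/mysite** for the web root) are placeholders you should adjust for your own host:

```
#!/usr/bin/env bash
# post-receive: rebuild the site whenever new commits arrive.
# The paths below are placeholders -- change them for your server.
SRC=/var/tmp/mysite-src      # working copy of the site source
DEST=/var/www/mysite         # directory your web server publishes

mkdir -p "$SRC"
git --work-tree="$SRC" checkout -f master    # check out the pushed branch
hugo --source "$SRC" --destination "$DEST"   # build the site into the web root
```

Save it as **hooks/post-receive** inside the bare repository and mark it executable, and every push will rebuild the published site.
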
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/building-hosting-website-git + +作者:[Seth Kenlon (Red Hat, Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh (web development and design, desktop and browser) +[2]: https://git-scm.com/ +[3]: http://gohugo.io +[4]: http://getgrav.org +[5]: http://picocms.org/ +[6]: https://jekyllrb.com +[7]: http://slackermedia.info/podwrite/ +[8]: https://opensource.com/sites/default/files/uploads/letsencrypt-site.jpg (Let's Encrypt website) +[9]: https://gohugo.io/getting-started/installing +[10]: https://opensource.com/article/17/6/set-path-linux +[11]: https://commonmark.org/help/ +[12]: https://themes.gohugo.io/ +[13]: https://themes.gohugo.io/gohugo-theme-ananke/ +[14]: https://docs.gitlab.com/ee/ci/#overview +[15]: http://slackermedia.info/trashy +[16]: https://opensource.com/sites/default/files/uploads/hugo-gitlab-cicd.jpg (GitLab processing your build) +[17]: https://opensource.com/sites/default/files/uploads/hugo-demo-site.jpg (Previewing Hugo site) +[18]: http://gitolite.com +[19]: http://gitea.io diff --git a/sources/tech/20190401 Meta Networks builds user security into its Network-as-a-Service.md b/sources/tech/20190401 Meta Networks builds user security into its Network-as-a-Service.md new file mode 100644 index 0000000000..777108f639 --- /dev/null +++ b/sources/tech/20190401 Meta Networks builds user security into its Network-as-a-Service.md @@ -0,0 +1,87 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Meta Networks builds user security into its Network-as-a-Service) +[#]: via: (https://www.networkworld.com/article/3385531/meta-networks-builds-user-security-into-its-network-as-a-service.html#tk.rss_all) +[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/) + +Meta Networks builds user security into its Network-as-a-Service +====== + +### Meta Networks has a unique approach to the security of its Network-as-a-Service. A tight security perimeter is built around every user and the specific resources each person needs to access. + +![MF3d / Getty Images][1] + +Network-as-a-Service (NaaS) is growing in popularity and availability for those organizations that don’t want to host their own LAN or WAN, or that want to complement or replace their traditional network with something far easier to manage. + +With NaaS, a service provider creates a multi-tenant wide area network comprised of geographically dispersed points of presence (PoPs) connected via high-speed Tier 1 carrier links that create the network backbone. The PoPs peer with cloud services to facilitate customer access to cloud applications such as SaaS offerings, as well as to infrastructure services from the likes of Amazon, Google and Microsoft. User organizations connect to the network from whatever facilities they have — data centers, branch offices, or even individual client devices — typically via SD-WAN appliances and/or VPNs. + +Numerous service providers now offer Network-as-a-Service. 
As the network backbone and the PoPs become more of a commodity, the providers are distinguishing themselves on other value-added services, such as integrated security or WAN optimization. + +**[ Also read:[What to consider when deploying a next generation firewall][2] | Get regularly scheduled insights: [Sign up for Network World newsletters][3]. ]** + +Ever since its launch about a year ago, [Meta Networks][4] has staked security as its primary value-add. What’s different about the Meta NaaS is the philosophy that the network is built around users, not around specific sites or offices. Meta Networks does this by building a software-defined perimeter (SDP) for each user, giving workers micro-segmented access to only the applications and network resources they need. The vendor was a little ahead of its time with SDP, but the market is starting to catch up. Companies are beginning to show interest in SDP as a VPN replacement or VPN alternative. + +Meta NaaS has a zero-trust architecture where each user is bound by an SDP. Each user has a unique, fixed identity no matter from where they connect to this network. The SDP security framework allows one-to-one network connections that are dynamically created on demand between the user and the specific resources they need to access. Everything else on the NaaS is invisible to the user. No access is possible unless it is explicitly granted, and it’s continuously verified at the packet level. This model effectively provides dynamically provisioned secure network segmentation. + +## SDP tightly controls access to specific resources + +This approach works very well when a company wants to securely connect employees, contractors, and external partners to specific resources on the network. For example, one of Meta Networks’ customers is Via Transportation, a New York-based company that has a ride-sharing platform. The company operates its own ride-sharing services in various cities in North America and Europe, and it licenses its technology to other transit systems around the world. + +Via’s operations are completely cloud-native, and so it has no legacy-style site-based WAN to connect its 400-plus employees and contractors to their cloud-based applications. Via’s partners, primarily transportation operators in different cities and countries, also need controlled access to specific portions of Via’s software platform to manage rideshares. Giving each group of users access to the applications they need — and _only_ to the ones they specifically need – was a challenge using a VPN. Using the Meta NaaS instead gives Via more granular control over who has what access. + +**[[Prepare to become a Certified Information Security Systems Professional with this comprehensive online course from PluralSight. Now offering a 10-day free trial!][5] ]** + +Via’s employees with managed devices connect to the Meta NaaS using client software on the device, and they are authenticated using Okta and a certificate. Contractors and customers with unmanaged devices use a browser-based access solution from Meta that doesn’t require installation or setup. New users can be on-boarded quickly and assigned granular access policies based on their role. Integration with Okta provides information that facilitates identity-based access policies. Once users connect to the network, they can see only the applications and network resources that their policy allows; everything else is invisible to them under the SDP architecture. 
+ +For Via, there are several benefits to the Meta NaaS approach. First and foremost, the company doesn’t have to own or operate its own WAN infrastructure. Everything is a managed service located in the cloud — the same business model that Via itself espouses. Next, this solution scales easily to support the company’s growth. Meta’s security integrates with Via’s existing identity management system, so identities and access policies can be centrally managed. And finally, the software-defined perimeter hides resources from unauthorized users, creating security by obscurity. + +## Tightening security even further + +Meta Networks further tightens the security around the user by doing device posture checks — “NAC lite,” if you will. A customer can define the criteria that devices have to meet before they are allowed to connect to the NaaS. For example, the check could be whether a security certificate is installed, if a registry key is set to a specific value, or if anti-virus software is installed and running. It’s one more way to enforce company policies on network access. + +When end users use the browser-based method to connect to the Meta NaaS, all activity is recorded in a rich log so that everything can be audited, but also to set alerts and look for anomalies. This data can be exported to a SIEM if desired, but Meta has its own notification and alert system for security incidents. + +Meta Networks recently implemented some new features around management, including smart groups and support for the System for Cross-Domain Identity Management (SCIM) protocol. The smart groups feature provides the means to add an extra notation or tag to elements such as devices, services, network subnets or segments, and basically everything that’s in the system. These tags can then be applied to policy. For example, a customer could label some of their services as a production, staging, or development environment. Then a policy could be implemented to say that only sales people can access the production environment. Smart groups are just one more way to get even more granular about policy. + +The SCIM support makes on-boarding new users simple. SCIM is a protocol that is used to synchronize and provision users and identities from a third-party identity provider such as Okta, Azure AD, or OneLogin. A customer can use SCIM to provision all the users from the IdP into the Meta system, synchronize in real time the groups and attributes, and then use that information to build the access policies inside Meta NaaS. + +These and other security features fit into Meta Networks’ vision that the security perimeter goes with you no matter where you are, and the perimeter includes everything that was formerly delivered through the data center. It is delivered through the cloud to your client device with always-on security. It’s a broad approach to SDP and a unique approach to NaaS. + +**Reviews: 4 free, open-source network monitoring tools** + + * [Icinga: Enterprise-grade, open-source network-monitoring that scales][6] + * [Nagios Core: Network-monitoring software with lots of plugins, steep learning curve][7] + * [Observium open-source network monitoring tool: Won’t run on Windows but has a great user interface][8] + * [Zabbix delivers effective no-frills network monitoring][9] + + + +Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3385531/meta-networks-builds-user-security-into-its-network-as-a-service.html#tk.rss_all + +作者:[Linda Musthaler][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Linda-Musthaler/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/10/firewall_network-security_lock_padlock_cyber-security-100776989-large.jpg +[2]: https://www.networkworld.com/article/3236448/lan-wan/what-to-consider-when-deploying-a-next-generation-firewall.html +[3]: https://www.networkworld.com/newsletters/signup.html +[4]: https://www.metanetworks.com/ +[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fcertified-information-systems-security-professional-cisspr +[6]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb +[7]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html +[8]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb +[9]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html +[10]: https://www.facebook.com/NetworkWorld/ +[11]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190401 Top Ten Reasons to Think Outside the Router -2- Simplify and Consolidate the WAN Edge.md b/sources/tech/20190401 Top Ten Reasons to Think Outside the Router -2- Simplify and Consolidate the WAN Edge.md new file mode 100644 index 0000000000..8177390648 --- /dev/null +++ b/sources/tech/20190401 Top Ten Reasons to Think Outside the Router -2- Simplify and Consolidate the WAN Edge.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top Ten Reasons to Think Outside the Router #2: Simplify and Consolidate the WAN Edge) +[#]: via: (https://www.networkworld.com/article/3384928/top-ten-reasons-to-think-outside-the-router-2-simplify-and-consolidate-the-wan-edge.html#tk.rss_all) +[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/) + +Top Ten Reasons to Think Outside the Router #2: Simplify and Consolidate the WAN Edge +====== + +![istock][1] + +We’re now near reaching the end of our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as [Silver Peak][2] counts down the *Top Ten Reasons to Think Outside the Router. *Click for the [#3][3], [#4][4], [#5][5], [#6][6], [#7][7], [#8][8], [#9][9] and [#10][10] reasons to retire traditional branch routers. + +_The #2 reason it’s time to retire branch routers: conventional router-centric WAN architectures are rigid and complex to manage!_ + +### **Challenges of conventional WAN edge architecture** + +A conventional WAN edge architecture consists of a disparate array of devices, including routers, firewalls, WAN optimization appliances, wireless controllers and so on. This architecture was born in the era when applications were hosted exclusively in the data center. 
With this model, deploying new applications or provisioning new policies or making policy changes has become an arduous and time-consuming task. Configuration, deployment and management requires specialized on-premise IT expertise to manually program and configure each device with its own management interface, often using an arcane CLI. This process has hit the wall in the cloud era proving too slow, complex, error-prone, costly and inefficient. + +As cloud-first enterprises increasingly migrate applications and infrastructure to the cloud, the traditional WAN architecture is no longer efficient. IT is now faced with a new set of challenges when it comes to connecting users securely and directly to the applications that run their businesses: + + * How do you manage and consistently apply QoS and security policies across the distributed enterprise? + * How do you intelligently automate traffic steering across multiple WAN transport services based on application type and unique requirements? + * How do you deliver the highest quality of experiences to users when running applications over broadband, especially voice and video? + * How do you quickly respond to continuously changing business requirements? + + + +These are just some of the new challenges facing IT teams in the cloud era. To be successful, enterprises will need to shift toward a business-first networking model where top-down business intent drives how the network behaves. And they would be well served to deploy a business-driven unified [SD-WAN][11] edge platform to transform their networks from a business constraint to a business accelerant. + +### **Shifting toward a business-driven WAN edge platform** + +A business-driven WAN edge platform is designed to enable enterprises to realize the full transformation promise of the cloud. It is a model where top-down business intent is the driver, not bottoms-up technology constraints. It’s outcome oriented, utilizing automation, artificial intelligence (AI) and machine learning to get smarter every day. Through this continuous adaptation, and the ability to improve the performance of underlying transport and applications, it delivers the highest quality of experience to end users. This is in stark contrast to the router-centric model where application policies must be shoe-horned to fit within the constraints of the network. A business-driven, top-down approach continuously stays in compliance with business intent and centrally defined security policies. + +### **A unified platform for simplifying and consolidating the WAN Edge** + +Achieving a business-driven architecture requires a unified platform, designed from the ground up as one system, uniting [SD-WAN][12], [firewall][13], [segmentation][14], [routing][15], [WAN optimization][16], application visibility and control in a single-platform. Furthermore, it requires [centralized orchestration][17] with complete observability of the entire wide area network through a single pane of glass. + +The use case “[Simplifying WAN Architecture][18]” describes in detail key capabilities of the Silver Peak [Unity EdgeConnect™][19] SD-WAN edge platform. It illustrates how EdgeConnect enables enterprises to simplify branch office WAN edge infrastructure and streamline deployment, configuration and ongoing management. 
+ +![][20] + +### **Business and IT outcomes of a business-driven SD-WAN** + + * Accelerates deployment, leveraging consistent hardware, software, cloud delivery models + * Saves up to 40 percent on hardware, software, installation, management and maintenance costs when replacing traditional routers + * Protects existing investment in security through simplified service chaining with our broadest ecosystem partners: [Check Point][21], [Forcepoint][22], [McAfee][23], [OPAQ][24], [Palo Alto Networks][25], [Symantec][26] and [Zscaler][27]. + * Reduces foot print by 75 percent as it unifies network functions into a single platform + * Saves more than 50 percent on WAN optimization costs by selectively applying it when and where is needed on an application-by-application basis + * Accelerates time-to-resolution of application or network performance bottlenecks from days to minutes with simple, visual application and WAN analytics + + + +Calculate your [ROI][28] today and learn why the time is now to [think outside the router][29] and deploy the business-driven Silver Peak EdgeConnect SD-WAN edge platform! + +![][30] + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3384928/top-ten-reasons-to-think-outside-the-router-2-simplify-and-consolidate-the-wan-edge.html#tk.rss_all + +作者:[Rami Rammaha][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Rami-Rammaha/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/04/silverpeak_main-100792490-large.jpg +[2]: https://www.silver-peak.com/why-silver-peak +[3]: http://blog.silver-peak.com/think-outside-the-router-reason-3-mpls-contract-renewal +[4]: http://blog.silver-peak.com/top-ten-reasons-to-think-outside-the-router-4-broadband-is-used-only-for-failover +[5]: http://blog.silver-peak.com/think-outside-the-router-reason-5-manual-cli-based-configuration-and-management +[6]: http://blog.silver-peak.com/https-blog-silver-peak-com-think-outside-the-router-reason-6 +[7]: http://blog.silver-peak.com/think-outside-the-router-reason-7-exorbitant-router-support-and-maintenance-costs +[8]: http://blog.silver-peak.com/think-outside-the-router-reason-8-garbled-voip-pixelated-video +[9]: http://blog.silver-peak.com/think-outside-router-reason-9-sub-par-saas-performance +[10]: http://blog.silver-peak.com/think-outside-router-reason-10-its-getting-cloudy +[11]: https://www.silver-peak.com/sd-wan/sd-wan-explained +[12]: https://www.silver-peak.com/sd-wan +[13]: https://www.silver-peak.com/products/unity-edge-connect/orchestrated-security-policies +[14]: https://www.silver-peak.com/resource-center/centrally-orchestrated-end-end-segmentation +[15]: https://www.silver-peak.com/products/unity-edge-connect/bgp-routing +[16]: https://www.silver-peak.com/products/unity-boost +[17]: https://www.silver-peak.com/products/unity-orchestrator +[18]: https://www.silver-peak.com/use-cases/simplifying-wan-architecture +[19]: https://www.silver-peak.com/products/unity-edge-connect +[20]: https://images.idgesg.net/images/article/2019/04/sp_linkthrough-copy-100792505-large.jpg +[21]: https://www.silver-peak.com/resource-center/check-point-silver-peak-securing-internet-sd-wan +[22]: https://www.silver-peak.com/company/tech-partners/forcepoint +[23]: 
https://www.silver-peak.com/company/tech-partners/mcafee +[24]: https://www.silver-peak.com/company/tech-partners/opaq-networks +[25]: https://www.silver-peak.com/resource-center/palo-alto-networks-and-silver-peak +[26]: https://www.silver-peak.com/company/tech-partners/symantec +[27]: https://www.silver-peak.com/resource-center/zscaler-and-silver-peak-solution-brief +[28]: https://www.silver-peak.com/sd-wan-interactive-roi-calculator +[29]: https://www.silver-peak.com/think-outside-router +[30]: https://images.idgesg.net/images/article/2019/04/roi-100792506-large.jpg diff --git a/sources/tech/20190402 3 Essentials for Achieving Resiliency at the Edge.md b/sources/tech/20190402 3 Essentials for Achieving Resiliency at the Edge.md new file mode 100644 index 0000000000..38cbc70e94 --- /dev/null +++ b/sources/tech/20190402 3 Essentials for Achieving Resiliency at the Edge.md @@ -0,0 +1,83 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (3 Essentials for Achieving Resiliency at the Edge) +[#]: via: (https://www.networkworld.com/article/3386438/3-essentials-for-achieving-resiliency-at-the-edge.html#tk.rss_all) +[#]: author: (Anne Taylor https://www.networkworld.com/author/Anne-Taylor/) + +3 Essentials for Achieving Resiliency at the Edge +====== + +### Edge computing requires different thinking and management to ensure the always-on availability that users have come to demand. + +![iStock][1] + +> “The IT industry has done a good job of making robust data centers that are highly manageable, highly secure, with redundant systems,” [says Kevin Brown][2], SVP Innovation and CTO for Schneider Electric’s Secure Power Division. + +However, he continues, companies then connect these data centers to messy edge closets and server rooms, which over time have become “micro mission-critical data centers” in their own right — making system availability vital. If not designed and managed correctly, the situation can be disastrous if users cannot connect to business-critical applications. + +To avoid unacceptable downtime, companies should incorporate three essential ingredients into their edge computing deployments: remote management, physical security, and rapid deployments. + +**Remote management** + +Depending on the company’s size, staff could be managing several — or many multiple — edge sites. Not only is this time consuming and costly, it’s also complex, especially if protocols differ from site to site. + +While some organizations might deploy traditional remote monitoring technology to manage these sites, it’s important to note these tools: don’t provide real-time status updates; are largely reactionary rather than proactive; and are sometimes limited in terms of data output. + +Coupled with the need to overcome these limitations, the economics for managing edge sites necessitate that organizations consider a digital, or cloud-based, solution. In addition to cost savings, these platforms provide: + + * Simplification in monitoring across edge sites + * Real-time visibility, right down to any device on the network + * Predictive analytics, including data-driven intelligence and recommendations to ensure proactive service delivery + + + +**Physical security** + +Small, local edge computing sites are often situated within larger corporate or wide-open spaces, sometimes in highly accessible, shared offices and public areas. And sometimes they’re set up on-the-fly for a time-sensitive project. 
+ +However, when there is no dedicated location and open racks are unsecured, the risks of malicious and accidental incidents escalate. + +To prevent unauthorized access to IT equipment at edge computing sites, proper physical security is critical and requires: + + * Physical space monitoring, with environmental sensors for temperature and humidity + * Access control, with biometric sensors as an option + * Audio and video surveillance and monitoring with recording + * If possible, install IT equipment within a secure enclosure + + + +**Rapid deployments** + +The [benefits of edge computing][3] are significant, especially the ability to bring bandwidth-intensive computing closer to the user, which leads to faster speed to market and greater productivity. + +Create a holistic plan that will enable the company to quickly deploy edge sites, while ensuring resiliency and reliability. That means having a standardized, repeatable process including: + + * Pre-configured, integrated equipment that combines server, storage, networking, and software in a single enclosure — a prefabricated micro data center, if you will + * Designs that specify supporting racks, UPSs, PDUs, cable management, airflow practices, and cooling systems + + + +These best practices as well as a balanced, systematic approach to edge computing deployments will ensure the always-on availability that today’s employees and users have come to expect. + +Learn how to enable resiliency within your edge computing deployment at [APC.com][4]. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3386438/3-essentials-for-achieving-resiliency-at-the-edge.html#tk.rss_all + +作者:[Anne Taylor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Anne-Taylor/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/04/istock-900882382-100792635-large.jpg +[2]: https://www.youtube.com/watch?v=IfsCTFSH6Jc +[3]: https://www.networkworld.com/article/3342455/how-edge-computing-will-bring-business-to-the-next-level.html +[4]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp diff --git a/sources/tech/20190402 Automate password resets with PWM.md b/sources/tech/20190402 Automate password resets with PWM.md new file mode 100644 index 0000000000..0bc7012c21 --- /dev/null +++ b/sources/tech/20190402 Automate password resets with PWM.md @@ -0,0 +1,94 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Automate password resets with PWM) +[#]: via: (https://opensource.com/article/19/4/automate-password-resets-pwm) +[#]: author: (James Mawson https://opensource.com/users/dxmjames) + +Automate password resets with PWM +====== +PWM puts responsibility for password resets in users' hands, freeing IT +for more pressing tasks. +![Password][1] + +One of the things that can be "death by a thousand cuts" for any IT team's sanity and patience is constantly being asked to reset passwords. + +The best way we've found to handle this is to ditch your hashing algorithms and store your passwords in plaintext so that your users can retrieve them at any time. + +Ha! I am, of course, kidding. That's a terrible idea. + +When your users forget their passwords, you'll still need to reset them. 
But is there a way to break free from the monotonous, repetitive task of doing it manually? + +### PWM puts password resets in users' hands + +[PWM][2] is an open source ([GPLv2][3]) [JavaServer Pages][4] application that provides a webpage where users can submit their own password resets. If certain conditions are met—which you can configure—PWM will send a password reset instruction to whichever directory service you've connected it to. + +![PWM password reset screen][5] + +One thing that's great about PWM is it's very easy to add it to an existing network. If you're largely happy with what you've already built—just sick of processing password requests manually—you can just throw PWM into the mix. + +PWM works with any implementation of [LDAP][6] and written to run on [Apache Tomcat][7]. Once you get it up and running, you can administer it through a browser-based dashboard. + +### Why PWM is better than Microsoft SSPR + +As much as our team prefers open source, we still have to deal with Windows networks. Of course, Microsoft has its own password-reset tool, called Self Service Password Reset (SSPR). But I prefer PWM, and not just because of a general preference for open source. I believe PWM is better for my use case for the following reasons: + + * **SSPR has a very complex licensing system**. You need different products depending on what servers you're running and whose metal they're running on. This is a constraint on your flexibility and a whole extra pain in the neck when it's time to move to new architecture. For [the busy admin who wants to go home on time][8], it's extra bureaucracy to get the purchase approved. PWM just works on what it's configured to work on at no cost. + + * **PWM is not just for Windows**. It works with any kind of LDAP server. So, it's one less part you need to worry about if you ever stop using Windows for a certain role. It also means that, once you've gotten the hang of it, you have something in your bag of tricks that you can use in many different environments. + + * **PWM is easy to install**. If you know how to install Linux as a virtual machine—and, let's face it, if you're running a network, you probably do—then you're already most of the way there. + + + + +PWM can run on Windows, but we prefer to include it in a Windows network by running it on a Linux virtual machine, [for example, Ubuntu Server 16.04][9]. + +### Risks and rewards of automation + +Password resets are an attack vector, so be thoughtful about where and how you use PWM. Automating your password resets can mean an attacker is potentially just one unencrypted email connection away from resetting a password. + +To some extent, automating your password resets trades a bit of security for some convenience. So maybe this isn't the right way to handle C-suite user accounts that approve large payments. + +On the other hand, manual resets are not 100% secure either—they can be gamed with targeted attacks like spear phishing and social engineering. It's much easier to fall for these scams if your team gets frequent reset requests and is sick of dealing with them. You may benefit from automating the bulk of lower-risk requests so you can focus on protecting the higher-risk accounts manually; this is possible given the time you can save using PWM. + +Some of the risks associated with shifting resets to users can be mitigated with PWM's built-in features, such as insisting users verify their password reset request by email or SMS. You can also make PWM accessible only on the intranet. 
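
How you enforce that intranet-only rule depends on your environment. As one hedged example, if you run PWM on an Ubuntu virtual machine as suggested above, a simple host firewall rule does the job. The sketch below assumes the Tomcat instance serving PWM listens on port 8443 and that your internal clients come from 10.0.0.0/8; substitute your own port and address range:

```
$ sudo ufw allow from 10.0.0.0/8 to any port 8443 proto tcp
$ sudo ufw deny 8443/tcp
```

Because ufw matches rules in the order they were added, traffic from the internal range is accepted before the general deny takes effect.
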
+ +![PWM configuration options][10] + +PWM doesn't store any passwords, so that's one less headache. It does, however, store answers to users' secret questions in a MySQL database that can be configured to be stored locally or on a separate server, depending on your preference. + +There are a ton of ways to make PWM look and feel like a polished part of your team's infrastructure. With a little bit of CSS know-how, you can customize the user interface for your business' branding. There are also more options for implementation than you can shake a stick at. + +### Wrapping up + +PWM is a great open source project, it's actively developed, and it has a helpful online community. It's a great alternative to Microsoft's Azure SSPR solution for small to midsized businesses that have to keep a tight grip on the purse strings, and it slots in neatly to any existing Active Directory infrastructure. It also saves IT's time by outsourcing this mundane task to users. + +I advise every network admin to dive in and have a look at the cool stuff PWM offers. Check out the [getting started resources][11] and reach out to the community if you have any questions. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/automate-password-resets-pwm + +作者:[James Mawson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dxmjames +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/password.jpg?itok=ec6z6YgZ (Password) +[2]: https://github.com/pwm-project/pwm +[3]: https://github.com/pwm-project/pwm/blob/master/LICENSE +[4]: https://www.oracle.com/technetwork/java/index-jsp-138231.html +[5]: https://opensource.com/sites/default/files/uploads/pwm_password-reset.png (PWM password reset screen) +[6]: https://opensource.com/business/14/5/top-4-open-source-ldap-implementations +[7]: http://tomcat.apache.org/ +[8]: https://opensource.com/article/18/7/tools-admin +[9]: https://blog.dxmtechsupport.com.au/adding-pwm-password-reset-tool-to-windows-network/ +[10]: https://opensource.com/sites/default/files/uploads/pwm-configuration.png (PWM configuration options) +[11]: https://github.com/pwm-project/pwm#links diff --git a/sources/tech/20190402 How to Install and Configure Plex on Ubuntu Linux.md b/sources/tech/20190402 How to Install and Configure Plex on Ubuntu Linux.md new file mode 100644 index 0000000000..8b5010a2ec --- /dev/null +++ b/sources/tech/20190402 How to Install and Configure Plex on Ubuntu Linux.md @@ -0,0 +1,202 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Install and Configure Plex on Ubuntu Linux) +[#]: via: (https://itsfoss.com/install-plex-ubuntu) +[#]: author: (Chinmay https://itsfoss.com/author/chinmay/) + +How to Install and Configure Plex on Ubuntu Linux +====== + +When you are a media hog and have a big collection of movies, photos or music, the below capabilities would be very handy. + + * Share media with family and other people. + * Access media from different devices and platforms. + + + +Plex ticks all of those boxes and more. Plex is a client-server media player system with additional features. Plex supports a wide array of platforms, both for the server and the player. 
No wonder it is considered one of the [best media servers for Linux][1]. + +Note: Plex is not a completely open source media player. We have covered it because this is one of the frequently [requested tutorial][2]. + +### Install Plex on Ubuntu + +For this guide I am installing Plex on Elementary OS, an Ubuntu based distribution. You can still follow along if you are installing it on a headless Linux machine. + +Go to the Plex [downloads][3] page, select Ubuntu 64-bit (I would not recommend installing it on a 32-bit CPU) and download the .deb file. + +![][4] + +[Download Plex][3] + +You can [install the .deb file][5] by just clicking on the package. If it does not work, you can use an installer like **Eddy** or **[GDebi][6].** + +You can also install it via the terminal using dpkg as shown below. + +Install Plex on a headless Linux system + +For a [headless system][7], you can use **wget** to download the .deb package. This example uses the current link for Ubuntu, at the time of writing. Be sure to use the up-to-date version supplied on the Plex website. + +``` +wget https://downloads.plex.tv/plex-media-server-new/1.15.1.791-8bec0f76c/debian/plexmediaserver_1.15.1.791-8bec0f76c_amd64.deb +``` + +The above command downloads the 64-bit .deb package. Once downloaded install the package using the following command. + +``` +dpkg -i plexmediaserver*.deb +``` + +Enable version upgrades for Plex + +The .deb installation does create an entry in sources.d, but [repository updates][8] are not enabled by default and the contents of _plexmediaserver.list_ are commented out. This means that if there is a new Plex version available, your system will not be able to update your Plex install. + +To enable repository updates you can either remove the # from the line starting with deb or run the following commands. + +``` +echo deb https://downloads.plex.tv/repo/deb public main | sudo tee /etc/apt/sources.list.d/plexmediaserver.list +``` + +The above command updates the entry in sources.d directory. + +We also need to add Plex’s public key to facilitate secure and safe downloads. You can try running the command below, unfortunately this **did not work for me** and the [GPG][9] key was not added. + +``` +curl https://downloads.plex.tv/plex-keys/PlexSign.key | sudo apt-key add - +``` + +To fix this issue I found out the key hash for from the error message after running _sudo apt-get update._ + +![][10] + +``` +97203C7B3ADCA79D +``` + +The above hash can be used to add the key from the key-server. Run the below commands to add the key. + +``` +gpg --keyserver https://downloads.plex.tv/plex-keys/PlexSign.key --recv-keys 97203C7B3ADCA79D +``` + +``` +gpg --export --armor 97203C7B3ADCA79D|sudo apt-key add - +``` + +You should see an **OK** once the key is added. + +Run the below command to verify that the repository is added to the sources list successfully. + +``` +sudo apt update +``` + +To update Plex to the newest version available on the repository, run the below [apt-get command][11]. + +``` +sudo apt-get --only-upgrade install plexmediaserver +``` + +Once installed the Plex service automatically starts running. You can check if its running by running the this command in a terminal. + +``` +systemctl status plexmediaserver +``` + +If the service is running properly you should see something like this. + +![Check the status of Plex Server][12] + +### Configuring Plex as a Media Server + +The Plex server is accessible on the ports 32400 and 32401. 
Navigate to **localhost:32400** or **localhost:32401** using a browser. You should replace the ‘localhost’ with the IP address of the machine running Plex server if you are going headless. + +The first time you are required to sign up or log in to your Plex account. + +![Plex Login Page][13] + +Now you can go ahead and give a friendly name to your Plex Server. This name will be used to identify the server over the network. You can also have multiple Plex servers identified by different names on the same network. + +![Plex Server Setup][14] + +Now it is finally time to add all your collections to the Plex library. Here your collections will be automatically get indexed and organized. + +You can click the add library button to add all your collections. + +![Add Media Library][15] + +![][16] + +Navigate to the location of the media you want to add to Plex . + +![][17] + +You can add multiple folders and different types of media. + +When you are done, you are taken to a very slick looking Plex UI. You can already see the contents of your libraries showing up on the home screen. It also automatically selects a thumbnail and also fills the metadata. + +![][18] + +You can head over to the settings and configure some of the settings. You can create new users( **only with Plex Pass** ), adjust the transcoding settings set scheduled library updates and more. + +If you have a public IP assigned to your router by the ISP you can also enable Remote Access. This means that you can be traveling and still access your libraries at home, considering you have your Plex server running all the time. + +Now you are all set up and ready, but how do you access your media? Yes you can access through your browser but Plex has a presence in almost all platforms you can think of including Android Auto. + +### Accessing Your Media and Plex Pass + +You can access you media either by using the web browser (the same address you used earlier) or Plex’s suite of apps. The web browser experience is pretty good on computers and can be better on phones. + +Plex apps provide a much better experience. But, the iOS and Android apps need to be activated with a [Plex Pass][19]. Without activation you are limited to 1 minute of video playback and images are watermarked. + +Plex Pass is a premium subscription service which activates the mobile apps and enables more features. You can also individually activate your apps tied to a particular phone for a cheaper price. You can also create multiple users and set permissions with the Plex Pass which is a very handy feature. + +You can check out all the benefits of Plex Pass [here][19]. + +_Note: Plex Meida Player is free on all platforms other than Android and iOS App._ + +**Conclusion** + +That’s about all things you need to know for the first time configuration, go ahead and explore the Plex UI, it also gives you access to free online content like podcasts and music through Tidal. + +There are alternatives to Plex like [Jellyfin][20] which is free but native apps are in beta and on road to be published on the App stores.You can also use a NAS with any of the freely available media centers like Kodi, OpenELEC or even VLC media player. + +Here is an article listing the [best Linux media servers.][1] + +Let us know your experience with Plex and what you use for your media sharing needs. 
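
One last practical note: if your Plex server is a headless machine sitting behind a host firewall, remember to open the web interface port before trying to reach the setup page from another device. With ufw, for instance, and the default port 32400 mentioned earlier (adjust if your configuration differs):

```
sudo ufw allow 32400/tcp
```
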
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/install-plex-ubuntu + +作者:[Chinmay][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/chinmay/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/best-linux-media-server/ +[2]: https://itsfoss.com/request-tutorial/ +[3]: https://www.plex.tv/media-server-downloads/ +[4]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/downloads-plex.png?ssl=1 +[5]: https://itsfoss.com/install-deb-files-ubuntu/ +[6]: https://itsfoss.com/gdebi-default-ubuntu-software-center/ +[7]: https://www.lions-wing.net/lessons/servers/home-server.html +[8]: https://itsfoss.com/ubuntu-repositories/ +[9]: https://www.gnupg.org/ +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/Screenshot-from-2019-03-26-07-21-05-1.png?ssl=1 +[11]: https://itsfoss.com/apt-get-linux-guide/ +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/check-plex-service.png?ssl=1 +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/plex-home-page.png?ssl=1 +[14]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/Plex-server-setup.png?ssl=1 +[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/add-library.png?ssl=1 +[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/03/add-plex-library.png?ssl=1 +[17]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/add-plex-folder.png?ssl=1 +[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/Screenshot-from-2019-03-17-22-27-56.png?ssl=1 +[19]: https://www.plex.tv/plex-pass/ +[20]: https://jellyfin.readthedocs.io/en/latest/ diff --git a/sources/tech/20190402 Intel-s Agilex FPGA family targets data-intensive workloads.md b/sources/tech/20190402 Intel-s Agilex FPGA family targets data-intensive workloads.md new file mode 100644 index 0000000000..686a2be6a4 --- /dev/null +++ b/sources/tech/20190402 Intel-s Agilex FPGA family targets data-intensive workloads.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Intel's Agilex FPGA family targets data-intensive workloads) +[#]: via: (https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all) +[#]: author: (Marc Ferranti https://www.networkworld.com) + +Intel's Agilex FPGA family targets data-intensive workloads +====== +Agilex processors are the first Intel FPGAs to use 10nm manufacturing, achieving a performance boost for AI, financial and IoT workloads +![Intel][1] + +After teasing out details about the technology for a year and half under the code name Falcon Mesa, Intel has unveiled the Agilex family of FPGAs, aimed at data-center and network applications that are processing increasing amounts of data for AI, financial, database and IoT workloads. + +The Agilex family, expected to start appearing in devices in the third quarter, is part of a new wave of more easily programmable FPGAs that is beginning to take an increasingly central place in computing as data centers are called on to handle an explosion of data. 
+ +**Learn about edge networking** + + * [How edge networking and IoT will reshape data centers][2] + * [Edge computing best practices][3] + * [How edge computing can help secure the IoT][4] + + + +FPGAs, or field programmable gate arrays, are built around around a matrix of configurable logic blocks (CLBs) linked via programmable interconnects that can be programmed after manufacturing – and even reprogrammed after being deployed in devices – to run algorithms written for specific workloads. They can thus be more efficient on a performance-per-watt basis than general-purpose CPUs, even while driving higher performance. + +### Accelerated computing takes center stage + +CPUs can be packaged with FPGAs, offloading specific tasks to them and enhancing overall data-center and network efficiency. The concept, known as accelerated computing, is increasingly viewed by data-center and network managers as a cost-efficient way to handle increasing data and network traffic. + +"This data is creating what I call an innovation race across from the edge to the network to the cloud," said Dan McNamara, general manager of the Programmable Solutions Group (PSG) at Intel. "We believe that we’re in the largest adoption phase for FPGAs in our history." + +The Agilex family is the first line of FPGAs developed from the ground up in the wake of [Intel’s $16.7 billion 2015 acquisition of Altera.][5] It's the first FPGA line to be made with Intel's 10nm manufacturing process, which adds billions of transistors to the FPGAs compared to earlier generations. Along with Intel's second-generation HyperFlex architecture, it helps give Agilex 40 percent higher performance than the company's current high-end FPGA family, the Stratix 10 line, Intel says. + +HyperFlex architecture includes additional registers – places on a processor that temporarily hold data – called Hyper-Registers, located everywhere throughout the core fabric to enhance bandwidth as well as area and power efficiency. + +**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][6] ]** + +### Memory coherency is key + +Agilex FPGAs are also the first processors to support [Compute Express Link (CXL), a high-speed interconnect][7] designed to maintain memory coherency among CPUs like Intel's second-generation Xeon Scalable processors and purpose-built accelerators like FPGAs and GPUs. It ensures that different processors don't clash when trying to write to the same memory space, essentially allowing CPUs and accelerators to share memory. + +"By having this CXL bus you can actually write applications that will use all the real memory so what that does is it simplifies the programming model in large memory workloads," said Patrick Moorhead, founder and principal at Moor Insights & Strategy. + +The ability to integrate FPGAs, other accelerators and CPUs is key to Intel's accelerated computing strategy for the data center. Intel calls it "any to any" integration. + +### 'Any-to-any' integration is crucial for the data center + +The Agilex family uses embedded multi-die interconnect bridge (EMIB) packaging technology to integrate, for example, Xeon Scalable CPUs or ASICs – special-function processors that are not reprogammable – alongside FPGA fabric. Intel last year bought eASIC, a maker of structured ASICs, which the company describes as an intermediary technology between FPGAs and ASICs. 
The idea is to deliver products that offer a mix of functionality to achieve optimal cost and performance efficiency for data-intensive workloads. + +Intel underscored the importance of processor integration for the data center by unveiling Agilex on Tuesday at its Data Centric Innovation Day in San Francisco, when it also discussed plans for its second generation Xeon Scalable line. + +Traditionally, FPGAs were mainly used in embedded devices, communications equipment and in hyperscale data centers, and not sold directly to enterprises. But several products based on Intel Stratix 10 and Arria 10 FPGAs are now being sold to enterprises, including in Dell EMC and Fujitsu off-the-shelf servers. + +Making FPGAs easier to program is key to making them more mainstream. "What's really, really important is the software story," said Intel's McNamara. "None of this really matters if we can't generate more users and make it easier to program FPGA's." + +Intel's Quartus Prime design tool will be available for Agilex hardware developers but the real breakthrough for FPGA software development will be Intel's OneAPI concept, announced in December. + +"OneAPI is is an effort by Intel to be able to have programmers write to OneAPI and OneAPI determines the best piece of silicon to run it on," Moorhead said. "I lovingly refer to it as the magic API; this is the big play I always thought Intel was gonna be working on ever since it bought Altera. The first thing I expect to happen are the big enterprise developers like SAP and Oracle to write to Agilex, then smaller ISVs, then custom enterprise applications." + +![][8] + +Intel plans three different product lines in the Agilex family – from low to high end, the F-, I- and M-series – aimed at different applications and processing requirements. The Agilex family, depending on the series, supports PCIe (peripheral component interconnect express) Gen 5, and different types of memory including DDR5 RAM, HBM (high-bandwidth memory) and Optane DC persistent memory. It will offer up to 112G bps transceiver data rates and a greater mix of arithmetic precision for AI, including bfloat16 number format. + +In addition to accelerating server-based workloads like AI, genomics, financial and database applications, FPGAs play an important part in networking. Their cost-per-watt efficiency makes them suitable for edge networks, IoT devices as well as deep packet inspection. In addition, they can be used in 5G base stations; as 5G standards evolve, they can be reprogrammed. Once 5G standards are hardened, the "any to any" integration will allow processing to be offloaded to special-purpose ASICs for ultimate cost efficiency. + +### Agilex will compete with Xylinx's ACAPs + +Agilex will likely vie with Xylinx's upcoming [Versal product family][9], due out in devices in the second half of the year. Xylinx competed for years with Altera in the FPGA market, and with Versal has introduced what it says is [a new product category, the Adaptive Compute Acceleration Platform (ACAP)][10]. Versal ACAPs will be made using TSMC's 7nm manufacturing process technology, though because Intel achieves high transistor density, the number of transistors offered by Agilex and Versal chips will likely be equivalent, noted Moorhead. + +Though Agilex and Versal differ in details, the essential pitch is similar: the programmable processors offer a wider variety of programming options than prior generations of FPGA, work with CPUs to accelerate data-intensive workloads, and offer memory coherence. 
Rather than CXL, though, the Versal family uses the cache coherent interconnect for accelerators (CCIX) interconnect fabric. + +Neither Intel or Xylinx for the moment have announced OEM support for Agilex or Versal products that will be sold to the enterprise, but that should change as the year progresses. + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html#tk.rss_all + +作者:[Marc Ferranti][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/04/agilex-100792596-large.jpg +[2]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[3]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html +[4]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html +[5]: https://www.networkworld.com/article/2903454/intel-could-strengthen-its-server-product-stack-with-altera.html +[6]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture +[7]: https://www.networkworld.com/article/3359254/data-center-giants-announce-new-high-speed-interconnect.html +[8]: https://images.idgesg.net/images/article/2019/04/agilex-family-100792597-large.jpg +[9]: https://www.xilinx.com/news/press/2018/xilinx-unveils-versal-the-first-in-a-new-category-of-platforms-delivering-rapid-innovation-with-software-programmability-and-scalable-ai-inference.html +[10]: https://www.networkworld.com/article/3263436/fpga-maker-xilinx-aims-range-of-software-programmable-chips-at-data-centers.html +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190402 Manage your daily schedule with Git.md b/sources/tech/20190402 Manage your daily schedule with Git.md new file mode 100644 index 0000000000..8f5d7d89bb --- /dev/null +++ b/sources/tech/20190402 Manage your daily schedule with Git.md @@ -0,0 +1,240 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Manage your daily schedule with Git) +[#]: via: (https://opensource.com/article/19/4/calendar-git) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Manage your daily schedule with Git +====== +Treat time like source code and maintain your calendar with the help of +Git. +![website design image][1] + +[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at using Git to keep track of your calendar. 
+ +### Keep track of your schedule with Git + +What if time itself was but source code that could be managed and version controlled? While proving or disproving such a theory is probably beyond the scope of this article, it happens that you can treat time like source code and manage your daily schedule with the help of Git. + +The reigning champion for calendaring is the [CalDAV][3] protocol, which drives popular open source calendaring applications like [NextCloud][4] as well as popular closed source ones. There's nothing wrong with CalDAV (commenters, take heed). But it's not for everyone, and besides there's nothing less inspiring than a mono-culture. + +Because I have no interest in becoming invested in largely GUI-dependent CalDAV clients (although if you're looking for a good terminal CalDAV viewer, see [khal][5]), I started investigating text-based alternatives. Text-based calendaring has all the usual benefits of working in [plaintext][6]. It's lightweight, it's highly portable, and as long as it's structured, it's easy to parse and beautify (whatever _beauty_ means to you). + +And best of all, it's exactly what Git was designed to manage. + +### Org mode not in a scary way + +If you don't impose structure on your plaintext, it quickly falls into a pandemonium of off-the-cuff thoughts and devil-may-care notation. Luckily, a markup syntax exists for calendaring, and it's contained in the venerable productivity Emacs mode, [Org mode][7] (which, admit it, you've been meaning to start using anyway). + +The amazing thing about Org mode that many people don't realize is [you don't need to know or even use Emacs][8] to take advantage of conventions established by Org mode. You get a lot of great features if you _do_ use Emacs, but if Emacs intimidates you, then you can implement a Git-based Org-mode calendaring system without so much as installing Emacs. + +The only part of Org mode that you need to know is its syntax. Org-mode syntax is low-maintenance and fairly intuitive. The biggest difference in calendaring with Org mode instead of a GUI calendaring app is the workflow: instead of going to a calendar and finding the day you want to schedule a task, you create a list of tasks and then assign each one a day and time. + +Lists in Org mode use asterisks (*) as bullets. Here's my gaming task list: **** + +``` +* Gaming +** Build Stardrifter character +** Read Stardrifter rules +** Stardrifter playtest + +** Blue Planet @ Mike's + +** Run Rappan Athuk +*** Purchase hard copy +*** Skim Rappan Athuk +*** Build Rappan Athuk maps in maptool +*** Sort Rappan Athuk tokens +``` + +If you're familiar with [CommonMark][9] or Markdown, you'll notice that instead of using whitespace to create a subtask, Org mode favors the more explicit use of additional bullets. Whatever your background with lists, this is an intuitive and easy way to build a list, and it obviously is not inherently tied to Emacs (although using Emacs provides you with shortcuts so you can rearrange your list quickly). + +To turn your list into scheduled tasks or events in a calendar, go back through and add the keywords **SCHEDULED** and, optionally, **:CATEGORY:**. + +``` +* Gaming +:CATEGORY: Game +** Build Stardrifter character +SCHEDULED: <2019-03-22 18:00-19:00> +** Read Stardrifter rules +SCHEDULED: <2019-03-22 19:00-21:00> +** Stardrifter playtest +SCHEDULED: <2019-03-25 0900-1300> +** Blue Planet @ Mike's +SCHEDULED: <2019-03-18 18:00-23:00 +1w> + +and so on... 
+``` + +The **SCHEDULED** keyword marks the entry as an event that you expect to be notified about and the optional **:CATEGORY:** keyword is an arbitrary tagging system for your own use (and in Emacs, you can color-code entries according to category). + +For a repeating event, you can use notation such as **+1w** to create a weekly event or **+2w** for a fortnightly event, and so on. + +All the fancy markup available for Org mode is [documented][10], so don't hesitate to find more tricks to help it fit your needs. + +### Put it into Git + +Without Git, your Org-mode appointments are just a file on your local machine. It's the 21st century, though, so you at least need your calendar on your mobile phone, if not on all of your personal computers. You can use Git to publish your calendar for yourself and others. + +First, create a directory for your **.org** files. I store mine in **~/cal**. + +``` +$ mkdir ~/cal +``` + +Change into your directory and make it a Git repository: + +``` +$ cd cal +$ git init +``` + +Move your **.org** file to your local Git repo. In practice, I maintain one **.org** file per category. + +``` +$ mv ~/*.org ~/cal +$ ls +Game.org Meal.org Seth.org Work.org +``` + +Stage and commit your files: + +``` +$ git add *.org +$ git commit -m 'cal init' +``` + +### Create a Git remote + +To make your calendar available from anywhere, you must have a Git repository on the internet. Your calendar is plaintext, so any Git repository will do. You can put your calendar on [GitLab][11] or any other public Git hosting service (even proprietary ones), and as long as your host allows it, you can even mark the repository as private. If you don't want to post your calendar to a server you don't control, it's easy to host a Git repository yourself, either using a bare repository for a single user or using a frontend service like [Gitolite][12] or [Gitea][13]. + +In the interest of simplicity, I'll assume a self-hosted bare Git repository. You can create a bare remote repository on any server you have SSH access to with one Git command: +``` +$ ssh -p 22122 [seth@example.com][14] +[remote]$ mkdir cal.git +[remote]$ cd cal.git +[remote]$ git init --bare +[remote]$ exit +``` + +This bare repository can serve as your calendar's home on the internet. + +Set it as the remote source for your local (on your computer, not your server) Git repository: + +``` +$ git remote add origin seth@example.com:/home/seth/cal.git +``` + +And then push your calendar data to the server: + +``` +$ git push -u origin HEAD +``` + +With your calendar in a Git repository, it's available to you on any device running Git. That means you can make updates and changes to your schedule and push your changes upstream so it updates everywhere. + +I use this method to keep my calendar in sync between my work laptop and my home workstation. Since I use Emacs every day for most of the day, being able to view and edit my calendar in Emacs is a major convenience. The same is true for most people with a mobile device, so the next step is to set up an Org-mode calendaring system on a mobile. + +### Mobile Git + +Since your calendar data is in plaintext, strictly speaking, you can "use" it on any device that can read a text file. That's part of the beauty of this system; you're never without, at the very least, your raw data. But to integrate your calendar on a mobile device the way you'd expect a modern calendar to work, you need two components: a mobile Git client and a mobile Org-mode viewer. 

#### Git client for mobile

[MGit][15] is a good Git client for Android. There are Git clients for iOS, as well.

Once you've installed MGit (or a similar Git client), you must clone your calendar repository so your phone has a copy. To access your server from your mobile device, you must set up an SSH key for authentication. MGit can generate and store a key for you, which you must add to your server's **~/.ssh/authorized_keys** file or to your SSH keys in the settings of your hosted Git account.

You must do this manually. MGit does not have an interface to log into your server or hosted Git account. If you do not do this, your mobile device cannot access your server to access your calendar data.

I did it by copying the key file I generated in MGit to my laptop over [KDE Connect][16] (but you can do the same over Bluetooth, or with an SD card reader, or a USB cable, depending on your preferred method of accessing data on your phone). I copied the key (a file called **calkey**) to my server with this command:

```
$ cat calkey | ssh seth@example.com "cat >> /home/seth/.ssh/authorized_keys"
```

You may have a different way of doing it, but if you ever set your server up for passwordless login, this is exactly the same process. If you're using a hosted Git service like GitLab, you must copy and paste the contents of your key file into your user account's SSH Key panel.

![Adding key file data to GitLab][17]

Once that's done, your mobile device can authorize to your server, but it still needs to know where to go to find your calendar data. Different apps may use different notation, but MGit uses plain old Git-over-SSH. That means if you're using a non-standard SSH port, you must specify the SSH port to use:

```
$ git clone ssh://seth@example.com:22122//home/seth/git/cal.git
```

![Specifying SSH port in MGit][18]

If you use a different app, it may use a different syntax that allows you to provide a port in a special field or drop the **ssh://** prefix. Refer to the app documentation if you experience issues.

Clone the repository to your phone.

![Cloned repositories][19]

Few Git apps are set to automatically update the repository. There are a few apps you can use to automate pulls, or you can set up Git hooks to push updates from your server—but I won't get into that here. For now, after you make an update to your calendar, be sure to pull new changes manually in MGit (or if you change events on your phone, push the changes to your server).

![MGit push/pull settings][20]

#### Mobile calendar

There are a few different apps that provide frontends for Org mode on a mobile device. [Orgzly][21] is a great open source Android app that provides an interface for Org mode's greatest features, from the Agenda mode to the TODO lists. Install and launch it.

From the Main menu, choose Settings > Sync > Repositories and select the directory containing your calendar files (i.e., the Git repository you cloned from your server).

Give Orgzly a moment to import the data, then use Orgzly's [hamburger][22] menu to select the Agenda view.

![Orgzly's agenda view][23]

In Orgzly's Settings > Reminders menu, you can choose which event types trigger a notification on your phone. You can get notifications for **SCHEDULED** tasks, **DEADLINE** tasks, or anything with an event time assigned to it. If you use your phone as your taskmaster, you'll never miss an event with Org mode and Orgzly.

![Orgzly notification][24]

Orgzly isn't just a parser.
You can edit and update events, and even mark events **DONE**. + +![Orgzly to-do list][25] + +### Designed for and by you + +The important thing to understand about using Org mode and Git is that both applications are highly flexible, and it's expected that you'll customize how and what they do so they will adapt to your needs. If something in this article is an affront to how you organize your life or manage your weekly schedule, but you like other parts of what this proposal offers, then throw out the part you don't like. You can use Org mode in Emacs if you want, or you can just use it as calendar markup. You can set your phone to pull Git data right off your computer at the end of the day instead of a server on the internet, or you can configure your computer to sync calendars whenever your phone is plugged in, or you can manage it daily as you load up your phone with all the stuff you need for the workday. It's up to you, and that's the most significant thing about Git, about Org mode, and about open source. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/calendar-git + +作者:[Seth Kenlon (Red Hat, Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web-design-monitor-website.png?itok=yUK7_qR0 (website design image) +[2]: https://git-scm.com/ +[3]: https://tools.ietf.org/html/rfc4791 +[4]: http://nextcloud.com +[5]: https://github.com/pimutils/khal +[6]: https://plaintextproject.online/ +[7]: https://orgmode.org +[8]: https://opensource.com/article/19/1/productivity-tool-org-mode +[9]: https://commonmark.org/ +[10]: https://orgmode.org/manual/ +[11]: http://gitlab.com +[12]: http://gitolite.com/gitolite/index.html +[13]: https://gitea.io/en-us/ +[14]: mailto:seth@example.com +[15]: https://f-droid.org/en/packages/com.manichord.mgit +[16]: https://community.kde.org/KDEConnect +[17]: https://opensource.com/sites/default/files/uploads/gitlab-add-key.jpg (Adding key file data to GitLab) +[18]: https://opensource.com/sites/default/files/uploads/mgit-0.jpg (Specifying SSH port in MGit) +[19]: https://opensource.com/sites/default/files/uploads/mgit-1.jpg (Cloned repositories) +[20]: https://opensource.com/sites/default/files/uploads/mgit-2.jpg (MGit push/pull settings) +[21]: https://f-droid.org/en/packages/com.orgzly/ +[22]: https://en.wikipedia.org/wiki/Hamburger_button +[23]: https://opensource.com/sites/default/files/uploads/orgzly-agenda.jpg (Orgzly's agenda view) +[24]: https://opensource.com/sites/default/files/uploads/orgzly-cal-notify.jpg (Orgzly notification) +[25]: https://opensource.com/sites/default/files/uploads/orgzly-cal-todo.jpg (Orgzly to-do list) diff --git a/sources/tech/20190402 What are Ubuntu Repositories- How to enable or disable them.md b/sources/tech/20190402 What are Ubuntu Repositories- How to enable or disable them.md new file mode 100644 index 0000000000..dc0961a66d --- /dev/null +++ b/sources/tech/20190402 What are Ubuntu Repositories- How to enable or disable them.md @@ -0,0 +1,189 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What are Ubuntu Repositories? 
How to enable or disable them?)
[#]: via: (https://itsfoss.com/ubuntu-repositories)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

What are Ubuntu Repositories? How to enable or disable them?
======

_**This detailed article tells you about various repositories like universe, multiverse in Ubuntu and how to enable or disable them.**_

So, you are trying to follow a tutorial from the web and installing software using the apt-get command, and it throws you an error:

```
E: Unable to locate package xyz
```

You are surprised because, for others, the package seems to be available. You search on the internet and come across a solution: you have to enable the universe or multiverse repository to install that package.

**You can enable universe and multiverse repositories in Ubuntu using the commands below:**

```
sudo add-apt-repository universe multiverse
sudo apt update
```

You enabled the universe and multiverse repositories, but do you know what these repositories are? How do they play a role in installing packages? Why are there several repositories?

I'll explain all these questions in detail here.

### The concept of repositories in Ubuntu

Okay, so you already know that to [install software in Ubuntu][1], you can use the [apt command][2]. This is the same [APT package manager][3] that Ubuntu Software Center utilizes underneath. So all the software (except Snap packages) that you see in the Software Center is basically from APT.

Have you ever wondered where the apt program installs the programs from? How does it know which packages are available and which are not?

Apt basically works on repositories. A repository is nothing but a server that contains a set of software. Ubuntu provides a set of repositories so that you won't have to search on the internet for the installation files of the various software you need. This centralized way of providing software is one of the main strong points of using Linux.

The APT package manager gets the repository information from the /etc/apt/sources.list file and files listed in the /etc/apt/sources.list.d directory. Repository information is usually in the following format:

```
deb http://us.archive.ubuntu.com/ubuntu/ bionic main
```

In fact, you can [go to the above server address][4] and see how the repository is structured.

When you [update Ubuntu using the apt update command][5], the apt package manager gets the information about the available packages (and their version info) from the repositories and stores it in a local cache. You can see this in the /var/lib/apt/lists directory.

Keeping this information locally speeds up the search process because you don't have to go through the network and search the database of available packages just to check if a certain package is available or not.

Now that you know how repositories play an important role, let's see why there are several repositories provided by Ubuntu.

### Ubuntu Repositories: Main, Universe, Multiverse, Restricted and Partner

![][6]

Software in the Ubuntu repositories is divided into five categories: main, universe, multiverse, restricted and partner.

Why does Ubuntu do that? Why not put all the software into one single repository? To answer this question, let's see what these repositories are:

#### **Main**

When you install Ubuntu, this is the repository enabled by default. The main repository consists of only FOSS (free and open source software) that can be distributed freely without any restrictions.

Software in this repository is fully supported by the Ubuntu developers. This is the software that Ubuntu will provide security updates for until your system reaches end of life.

#### **Universe**

This repository also consists of free and open source software, but Ubuntu doesn't guarantee regular security updates to software in this category.

Software in this category is packaged and maintained by the community. The Universe repository has a vast amount of open source software and thus it enables you to have access to a huge amount of software via the apt package manager.

#### **Multiverse**

Multiverse contains the software that is not FOSS. Due to licensing and legal issues, Ubuntu cannot enable this repository by default and cannot provide fixes and updates.

It's up to you to decide if you want to use the Multiverse repository and check if you have the right to use the software.

#### **Restricted**

Ubuntu tries to provide only free and open source software, but that's not always possible, especially when it comes to supporting hardware.

The restricted repository consists of proprietary drivers.

#### **Partner**

This repository consists of proprietary software packaged by Ubuntu for their partners. Earlier, Ubuntu used to provide Skype through this repository.

#### Third party repositories and PPA (Not provided by Ubuntu)

The above five repositories are provided by Ubuntu. You can also add third party repositories (it's up to you if you want to do it) to access more software or to access a newer version of a software (as Ubuntu might provide an older version of the same software).

For example, if you add the repository provided by [VirtualBox][7], you can get the latest version of VirtualBox. It will add a new entry in your sources.list.

You can also install additional applications using a PPA (Personal Package Archive). I have written about [what a PPA is and how it works][8] in detail, so please read that article.

Tip

Try NOT to add anything other than Ubuntu's repositories to your sources.list file. You should keep this file in pristine condition because if you mess it up, you won't be able to update your system or (at times) even install new packages.

### Add universe, multiverse and other repositories

As I mentioned earlier, only the Main repository is enabled by default when you install Ubuntu. To access more software, you can add the additional repositories.

Let me show you how to do it on the command line first, and then I'll show you the GUI ways as well.

To enable the Universe repository, use:

```
sudo add-apt-repository universe
```

To enable the Restricted repository, use:

```
sudo add-apt-repository restricted
```

To enable the Multiverse repository, use this command:

```
sudo add-apt-repository multiverse
```

You must use the sudo apt update command after adding a repository so that your system creates the local cache with package information.

If you want to **remove a repository**, simply add -r, like **sudo add-apt-repository -r universe**.

Graphically, go to Software & Updates and you can enable the repositories here:

![Adding Universe, Restricted and Multiverse repositories][9]

You'll find the option to enable the Partner repository in the Other Software tab.

![Adding Partner repository][10]

To disable a repository, simply uncheck the box.

### Bonus Tip: How to know which repository a package belongs to?
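
Before turning to the web-based method described below, note that apt itself can usually answer this question from the command line. Here is a quick sketch using standard apt tools; the package name `tor` simply matches the example used later in this section, and the version number and mirror shown are illustrative, not exact values you should expect:

```
$ apt-cache policy tor
tor:
  Installed: (none)
  Candidate: 0.3.2.10-1
  Version table:
     0.3.2.10-1 500
        500 http://us.archive.ubuntu.com/ubuntu bionic/universe amd64 Packages
```

The component shown after the release name (here `universe` in `bionic/universe`) tells you which repository the package comes from on your system.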
+ +Ubuntu has a dedicated website that provides you with information about all the packages available in the Ubuntu archive. Go to Ubuntu Packages website. + +[Ubuntu Packages][11] + +You can search for a package name in the search field. You can select if you are looking for a particular Ubuntu release or a particular repository. I prefer using ‘any’ option in both fields. + +![][12] + +It will show you all the matching packages, Ubuntu releases and the repository information. + +![][13] + +As you can see above the package tor is available in the Universe repository for various Ubuntu releases. + +**Conclusion** + +I hope this article helped you in understanding the concept of repositories in Ubuntu. + +If you have any questions or suggestions, please feel free to leave a comment below. If you liked the article, please share it on social media sites like Reddit and Hacker News. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/ubuntu-repositories + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/remove-install-software-ubuntu/ +[2]: https://itsfoss.com/apt-command-guide/ +[3]: https://wiki.debian.org/Apt +[4]: http://us.archive.ubuntu.com/ubuntu/ +[5]: https://itsfoss.com/update-ubuntu/ +[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/ubuntu-repositories.png?resize=800%2C450&ssl=1 +[7]: https://itsfoss.com/install-virtualbox-ubuntu/ +[8]: https://itsfoss.com/ppa-guide/ +[9]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/enable-repositories-ubuntu.png?resize=800%2C490&ssl=1 +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/03/enable-partner-repository-ubuntu.png?resize=800%2C490&ssl=1 +[11]: https://packages.ubuntu.com +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/search-packages-ubuntu-archive.png?ssl=1 +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/03/search-packages-ubuntu-archive-1.png?resize=800%2C454&ssl=1 diff --git a/sources/tech/20190402 When Wi-Fi is mission-critical, a mixed-channel architecture is the best option.md b/sources/tech/20190402 When Wi-Fi is mission-critical, a mixed-channel architecture is the best option.md new file mode 100644 index 0000000000..29a73998d7 --- /dev/null +++ b/sources/tech/20190402 When Wi-Fi is mission-critical, a mixed-channel architecture is the best option.md @@ -0,0 +1,90 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (When Wi-Fi is mission-critical, a mixed-channel architecture is the best option) +[#]: via: (https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all) +[#]: author: (Zeus Kerravala https://www.networkworld.com/author/Zeus-Kerravala/) + +When Wi-Fi is mission-critical, a mixed-channel architecture is the best option +====== + +### Multi-channel is the norm for Wi-Fi today, but it’s not always the best choice. Single-channel and hybrid APs offer compelling alternatives when reliable Wi-Fi is a must. + +![Getty Images][1] + +I’ve worked with a number of companies that have implemented digital projects only to see them fail. 
The ideation was correct, the implementation was sound, and the market opportunity was there. The weak link? The Wi-Fi network. + +For example, a large hospital wanted to improve clinician response times to patient alarms by having telemetry information sent to mobile devices. Without the system, the only way a nurse would know about a patient alarm is from an audible alert. And with all the background noise, it’s often tough to discern where noises are coming from. The problem was the Wi-Fi network in the hospital had not been upgraded in years and caused messages to be significantly delayed in their delivery, often taking four to five minutes to deliver. The long delivery times caused a lack of confidence in the system, so many clinicians stopped using it and went back to manual alerting. As a result, the project was considered a failure. + +I’ve seen similar examples in manufacturing, K-12 education, entertainment, and other industries. Businesses are competing on the basis of customer experience, and that’s driven from the ever-expanding, ubiquitous wireless edge. Great Wi-Fi doesn’t necessarily mean market leadership, but bad Wi-Fi will have a negative impact on customers and employees. And in today’s competitive climate, that’s a recipe for disaster. + +**[ Read also:[Wi-Fi site-survey tips: How to avoid interference, dead spots][2] ]** + +## Wi-Fi performance historically inconsistent + +The problem with Wi-Fi is that it’s inherently flaky. I’m sure everyone reading this has experienced the typical flaws with failed downloads, dropped connections, inconsistent performance, and lengthy wait times to connect to public hot spots. + +Picture sitting in a conference prior to a keynote address and being able to tweet, send email, browse the web, and do other things with no problem. Then the keynote speaker comes on stage and the entire audiences start snapping pics, uploading those pictures, and streaming things – and the Wi-Fi stops working. I find this to be the norm more than the exception, underscoring the need for [no-compromise Wi-Fi][3]. + +The question for network professionals is how to get to a place where the Wi-Fi is rock solid 100% of the time. Some say that just beefing up the existing network will do that, and it might, but in some cases, the type of Wi-Fi might not be appropriate. + +The most commonly deployed type of Wi-Fi is multi-channel, also known as micro-cell, where each client connects to the access point (AP) using a radio channel. A high-quality experience is based on two things: good signal strength and minimal interference. Several things can cause interference, such as APs being too close, layout issues, or interference from other equipment. To minimize interference, businesses invest a significant amount of time and money in [site surveys to plan the optimal channel map][2], but even with that’s done well, Wi-Fi glitches can still happen. + +**[[Take this mobile device management course from PluralSight and learn how to secure devices in your company without degrading the user experience.][4] ]** + +## Multi-channel Wi-Fi not always the best choice + +For many carpeted offices, multi-channel Wi-Fi is likely to be solid, but there are some environments where external circumstances will impact performance. A good example of this is a multi-tenant building in which there are multiple Wi-Fi networks transmitting on the same channel and interfering with one another. Another example is a hospital where there are many campus workers moving between APs. 
The client will also try to connect to the best AP, causing the client to continually disconnect and reconnect resulting in dropped sessions. Then there are environments such as schools, airports, and conference facilities where there is a high number of transient devices and multi-channel can struggle to keep up. + +## Single channel Wi-Fi offers better reliability but with a performance hit + +What’s a network manager to do? Is inconsistent Wi-Fi just a fait accompli? Multi-channel is the norm, but it isn’t designed for dynamic physical environments or those where reliable connectivity is a must. + +Several years ago an alternative architecture was proposed that would solve these problems. As the name suggests, “single channel” Wi-Fi uses a single radio channel for all APs in the network. Think of this as being a single Wi-Fi fabric that operates on one channel. With this architecture, the placement of APs is irrelevant because they all utilize the same channel, so they won’t interfere with one another. This has an obvious simplicity advantage, such as if coverage is poor, there’s no reason to do another expensive site survey. Instead, just drop in APs where they are needed. + +One of the disadvantages of single-channel is that aggregate network throughput was lower than multi-channel because only one channel can be used. This might be fine in environments where reliability trumps performance, but many organizations want both. + +## Hybrid APs offer the best of both worlds + +There has been recent innovation from the manufacturers of single-channel systems that mix channel architectures, creating a “best of both worlds” deployment that offers the throughput of multi-channel with the reliability of single-channel. For example, Allied Telesis offers Hybrid APs that can operate in multi-channel and single-channel mode simultaneously. That means some web clients can be assigned to the multi-channel to have maximum throughput, while others can use single-channel for seamless roaming experience. + +A practical use-case of such a mix might be a logistics facility where the office staff uses multi-channel, but the fork-lift operators use single-channel for continuous connectivity as they move throughout the warehouse. + +Wi-Fi was once a network of convenience, but now it is perhaps the most mission-critical of all networks. A traditional multi-channel system might work, but due diligence should be done to see how it functions under a heavy load. IT leaders need to understand how important Wi-Fi is to digital transformation initiatives and do the proper testing to ensure it’s not the weak link in the infrastructure chain and choose the best technology for today’s environment. + +**Reviews: 4 free, open-source network monitoring tools:** + + * [Icinga: Enterprise-grade, open-source network-monitoring that scales][5] + * [Nagios Core: Network-monitoring software with lots of plugins, steep learning curve][6] + * [Observium open-source network monitoring tool: Won’t run on Windows but has a great user interface][7] + * [Zabbix delivers effective no-frills network monitoring][8] + + + +Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3386376/when-wi-fi-is-mission-critical-a-mixed-channel-architecture-is-the-best-option.html#tk.rss_all + +作者:[Zeus Kerravala][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Zeus-Kerravala/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/09/tablet_graph_wifi_analytics-100771638-large.jpg +[2]: https://www.networkworld.com/article/3315269/wi-fi-site-survey-tips-how-to-avoid-interference-dead-spots.html +[3]: https://www.alliedtelesis.com/blog/no-compromise-wi-fi +[4]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmobile-device-management-big-picture +[5]: https://www.networkworld.com/article/3273439/review-icinga-enterprise-grade-open-source-network-monitoring-that-scales.html?nsdr=true#nww-fsb +[6]: https://www.networkworld.com/article/3304307/nagios-core-monitoring-software-lots-of-plugins-steep-learning-curve.html +[7]: https://www.networkworld.com/article/3269279/review-observium-open-source-network-monitoring-won-t-run-on-windows-but-has-a-great-user-interface.html?nsdr=true#nww-fsb +[8]: https://www.networkworld.com/article/3304253/zabbix-delivers-effective-no-frills-network-monitoring.html +[9]: https://www.facebook.com/NetworkWorld/ +[10]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190402 Zero-trust- microsegmentation networking.md b/sources/tech/20190402 Zero-trust- microsegmentation networking.md new file mode 100644 index 0000000000..864bd8eea4 --- /dev/null +++ b/sources/tech/20190402 Zero-trust- microsegmentation networking.md @@ -0,0 +1,137 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Zero-trust: microsegmentation networking) +[#]: via: (https://www.networkworld.com/article/3384748/zero-trust-microsegmentation-networking.html#tk.rss_all) +[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/) + +Zero-trust: microsegmentation networking +====== + +### Microsegmentation gives administrators the control to set granular policies in order to protect the application environment. + +![Aaron Burson \(CC0\)][1] + +The transformation to the digital age has introduced significant changes to the cloud and data center environments. This has compelled the organizations to innovate more quickly than ever before. This, however, brings with it both – the advantages and disadvantages. + +The network and security need to keep up with this rapid pace of change. If you cannot match with the speed of the [digital age,][2] then ultimately bad actors will become a hazard. Therefore, the organizations must move to a [zero-trust environment][3]: default deny, with least privilege access. In today’s evolving digital world this is the primary key to success. + +Ideally, a comprehensive solution must provide protection across all platforms including legacy servers, VMs, services in public clouds, on-premise, off-premise, hosted, managed or self-managed. We are going to stay hybrid for a long time, therefore we need to equip our architecture with [zero-trust][4]. + +**[ Don’t miss[customer reviews of top remote access tools][5] and see [the most powerful IoT companies][6] . 
| Get daily insights by [signing up for Network World newsletters][7]. ]**

We need the ability to support all of these hybrid environments, with analysis at the process, flow-data, and infrastructure level. As a matter of fact, there is never just one element to analyze within a network in order to create an effective security posture.

To adequately secure such an environment requires a solution with key components such as appropriate visibility, microsegmentation, and breach detection. Let's learn more about one of these primary elements: zero-trust microsegmentation networking.

There are a variety of microsegmentation vendors, all with competing platforms. We have, for example, SDN-based, container-centric, and network-based appliance (be it physical or virtual) platforms, to name just a few.

## What is microsegmentation?

Microsegmentation is the ability to put a wrapper around the access control for each component of an application. Gone are the days when we could just impose a block on source/destination/port numbers or higher up in the stack on protocols such as HTTP or HTTPS.

As communication patterns become more complex, isolating the communication flows between entities, and hence following microsegmentation principles, has become a necessity.

## Why is microsegmentation important?

Microsegmentation gives administrators the control to set granular policies in order to protect the application environment. It defines the rules and policies as to how an application can communicate within its tier. The policies are granular (a lot more granular than what we had before) and restrict communication to only the hosts that are allowed to communicate.

Ultimately, this reduces the available attack surface and completely locks down the ability for the bad actors to move laterally within the application infrastructure. Why? Because it governs the application's activity at a granular level, thereby improving the entire security posture. The traditional zone-based networking no longer cuts it in today's [digital world][8].

## General networking

Let's start with the basics. We all know that with security, you are only as strong as your weakest link. As a result, enterprises have begun to further segment networks into microsegments. Some call them nanosegments.

But first, let's recap what we actually started with in the initial stage: nothing! We had IP addresses that were used for connectivity, but unfortunately they had no built-in authentication mechanism. Why? Because it wasn't a requirement back then.

Network connectivity based on network routing protocols was primarily used for sharing resources. A printer, 30 years ago, could cost the same as a house, so connectivity and the sharing of resources were important. The authentication of the communication endpoints was not considered significant.

## Broadcast domains

As networks grew in size, virtual LANs (VLANs) were introduced to divide the broadcast domains and improve network performance. A broadcast domain is a logical division of a computer network. All nodes can reach each other by sending a broadcast at the data link layer. When the broadcast domain swells, the network performance takes a hit.

Over time the role of the VLAN grew to include use as a security tool, but it was never meant to be in that space. VLANs were used to improve performance, not to isolate the resources. The problem with VLANs is that there is no intra-VLAN filtering.
They have a very broad level of access and trust. If bad actors gain access to one segment in the zone, they should not be allowed to try and compromise another device within that zone, but with VLANs, this is a strong possibility.

Hence, VLANs offer the bad actor a pretty large attack surface to play with and to move across laterally without inspection. Lateral movements are really hard to detect with traditional architectures.

Therefore, enterprises were forced to switch to microsegmentation. Microsegmentation further segments networks within the zone. However, the whole area of virtualization complicates the segmentation process. A virtualized server may only have a single physical network port but it supports numerous logical networks where services and applications reside across multiple security zones.

Thus, microsegmentation needs to work at both the physical network layer and within the virtualized networking layer. As you are aware, there has been a change in the traffic pattern. The good thing about microsegmentation is that it controls both the "north & south" and the "east & west" movement of traffic, further limiting the size of broadcast domains.

## Microsegmentation – a multi-stage process

Implementing microsegmentation is a multi-stage process. There are certain prerequisites that must be followed before the implementation. Firstly, you need to fully understand the communication patterns, map the flows and all the application dependencies.

Only once this is done can you enable microsegmentation in a platform-agnostic manner across all the environments. Segmenting your network appropriately creates a dark network until the administrator turns on the lights. Authentication is performed first, and then access is granted to the communicating entities, operating with zero-trust and least-privilege access.

Once you connect the entities, they need to run through a number of technologies in order to be fully connected. There is not a once-off check with microsegmentation. It's rather a continuous process to make sure that both entities are doing what they are supposed to do.

This ensures that everyone is doing what they are entitled to do. You want to reduce the unnecessary cross-talk to an absolute minimum and only allow communication that is a complete necessity.

## How do you implement microsegmentation?

Firstly, you need strong visibility not just at the traffic flow level but also at the process and data contextual level. Without granular application visibility, it's impossible to map and fully understand what normal traffic flows and irregular application communication patterns look like.

Visibility cannot be mapped out manually, as there could be hundreds of workloads. Therefore, an automatic approach must be taken. Manual mapping is more prone to errors and is inefficient. The visibility also needs to be in real-time. A static snapshot of the application architecture, even if it's down to a process level, will not tell you anything about the behaviors that are sanctioned or unsanctioned.

You also need to make sure that you are not under-segmenting, similar to what we had in the old days. Primarily, microsegmentation must manage communication workflows all the way up to Layer 7 of the Open Systems Interconnection (OSI) model. Layer 4 microsegmentation only focuses on the Transport layer.
If you are only segmenting the network at Layer 4, then you are widening your attack surface and leaving the network open to compromise.

Segmenting right up to the application layer means you are locking down the lateral movements, open ports, and protocols. It enables you to restrict access to the source and destination process rather than source and destination port numbers.

## Security issues with hybrid cloud

Since the [network perimeter][9] has been removed, it has become difficult to bolt on the traditional security tools. Traditionally, we could position a static perimeter around the network infrastructure. However, this is not an available option today, as we have a mixture of containerized applications and, for example, legacy database servers. We have legacy systems communicating with the containerized world.

Hybrid enables organizations to use different types of cloud architectures, including on-premise systems and new technologies, such as containers. We are going to have a hybrid cloud in the coming times, which will change the way we think about networking. Hybrid forces organizations to rethink their network architectures.

When you attach the microsegmentation policies to the workload itself, the policies go with the workload. Then it does not matter if the entity moves on-premise or to the cloud. If the workload auto-scales up and down or horizontally, the policy needs to go with the workload. Even if you go deeper than the workload, into the process level, you can set even more granular controls for microsegmentation.

## Identity

However, this is the point where identity becomes a challenge. If things are scaling and becoming dynamic, you can't tie policies to the IP addresses. Rather than using IP addresses as the base for microsegmentation, policies are based on the logical (not physical) attributes.

With microsegmentation, the workload identity is based on logical attributes, such as multi-factor authentication (MFA), a transport layer security (TLS) certificate, the application service, or the use of a logical label associated with the workload.

These are what are known as logical attributes. Ultimately the policies map to the IP addresses, but these are set by using the logical attributes, not the physical ones. As we progress in this technological era, the IP address is becoming less relevant. Named data networking is one of the perfect examples.

Other identity methods for microsegmentation are TLS certificates. If the traffic is encrypted with a different TLS certificate or from an invalid source, it automatically gets dropped, even if it comes from the right location. It will get blocked as it does not have the right identity.

You can even extend that further and look inside the actual payload. If an entity is supposed to do a hypertext transfer protocol (HTTP) POST to a record and it tries to perform any other operation, it will get blocked.

## Policy enforcement

Practically, all of these policies can be implemented and enforced in different places throughout the network. However, if you enforce in only one place, that point in the network can become compromised and become an entry door to the bad actor. You can, for example, enforce in 10 different network points; even if 2 of them are subverted, the other 8 will still protect you.

Zero-trust microsegmentation ensures that you can enforce in different points throughout the network and also with different mechanics.
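
To make the label-based policy idea concrete, here is a minimal sketch using a Kubernetes NetworkPolicy, which is just one of the container-centric enforcement options alluded to above, not something this article prescribes. The namespace, workload labels, and port are illustrative assumptions; the point is that the rule selects workloads by logical labels rather than by IP addresses, so it follows the workload wherever it runs:

```
$ kubectl apply -f - <<'EOF'
# Allow only the payments API pods to reach the payments database pods,
# and only on the database port; other ingress to these pods is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: payments-db-allow-api-only
  namespace: payments
spec:
  podSelector:
    matchLabels:
      app: payments-db          # workload selected by logical label, not IP
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: payments-api # only workloads carrying this label may connect
      ports:
        - protocol: TCP
          port: 5432
EOF
```

Once a pod is selected by an ingress policy like this, any inbound connection that does not match the allowed label and port is dropped, which is the default-deny, least-privilege behavior described throughout this article.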
+ +**This article is published as part of the IDG Contributor Network.[Want to Join?][10]** + +Join the Network World communities on [Facebook][11] and [LinkedIn][12] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3384748/zero-trust-microsegmentation-networking.html#tk.rss_all + +作者:[Matt Conran][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Matt-Conran/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/07/hive-structured_windows_architecture_connections_connectivity_network_lincoln_park_pavilion_chicago_by_aaron_burson_cc0_via_unsplash_1200x800-100765880-large.jpg +[2]: https://youtu.be/AnMQH_noNDo +[3]: https://network-insight.net/2018/10/zero-trust-networking-ztn-want-ghosted/ +[4]: https://network-insight.net/2018/09/embrace-zero-trust-networking/ +[5]: https://www.networkworld.com/article/3262145/lan-wan/customer-reviews-top-remote-access-tools.html#nww-fsb +[6]: https://www.networkworld.com/article/2287045/internet-of-things/wireless-153629-10-most-powerful-internet-of-things-companies.html#nww-fsb +[7]: https://www.networkworld.com/newsletters/signup.html#nww-fsb +[8]: https://network-insight.net/2017/10/internet-things-iot-dissolving-cloud/ +[9]: https://network-insight.net/2018/09/software-defined-perimeter-zero-trust/ +[10]: /contributor-network/signup.html +[11]: https://www.facebook.com/NetworkWorld/ +[12]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190403 Intel unveils an epic response to AMD-s server push.md b/sources/tech/20190403 Intel unveils an epic response to AMD-s server push.md new file mode 100644 index 0000000000..826cd9d413 --- /dev/null +++ b/sources/tech/20190403 Intel unveils an epic response to AMD-s server push.md @@ -0,0 +1,78 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Intel unveils an epic response to AMD’s server push) +[#]: via: (https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html#tk.rss_all) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Intel unveils an epic response to AMD’s server push +====== + +### Intel introduced more than 50 new Xeon Scalable Processors for servers that cover a variety of workloads. + +![Intel][1] + +Intel on Tuesday introduced its second-generation Xeon Scalable Processors for servers, developed under the codename Cascade Lake, and it’s clear AMD has lit a fire under a once complacent company. + +These new Xeon SP processors max out at 28 cores and 56 threads, a bit shy of AMD’s Epyc server processors with 32 cores and 64 threads, but independent benchmarks are still to come, which may show Intel having a lead at single core performance. + +And for absolute overkill, there is the Xeon SP Platinum 9200 Series, which sports 56 cores and 112 threads. It will also require up to 400W of power, more than twice what the high-end Xeons usually consume. + +**[ Now read:[What is quantum computing (and why enterprises should care)][2] ]** + +The new processors were unveiled at a big event at Intel’s headquarters in Santa Clara, California, and live-streamed on the web. 
[Newly minted CEO][3] Bob Swan kicked off the event, saying the new processors were the “first truly data-centric portfolio for our customers.” + +“For the last several years, we have embarked on a journey to transform from a PC-centric company to a data-centric computing company and build the silicon processors with our partners to help our customers prosper and grow in an increasingly data-centric world,” he added. + +He also said the move to a data-centric world isn’t just CPUs, but a suite of accelerant technologies, including the [Agilex FPGA processors][4], Optane memory, and more. + +This launch is the largest Xeon launch in the company’s history, with more than 50 processor designs across the Xeon 8200 and 9200 lines. While something like that can lead to confusion, many of these are specific to certain workloads instead of general-purpose processors. + +**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][5] ]** + +Cascade Lake chips are the replacement for the previous Skylake platform, and the mainstream Cascade Lake chips have the same architecture as the Purley motherboard used by Skylake. Like the current Xeon Scalable processors, they have up to 28 cores with up to 38.5 MB of L3 cache, but speeds and feeds have been bumped up. + +The Cascade Lake generation supports the new UPI (Ultra Path Interface) high-speed interconnect, up to six memory channels, AVX-512 support, and up to 48 PCIe lanes. Memory capacity has been doubled, from 768GB to 1.5TB of memory per socket. They work in the same socket as Purley motherboards and are built on a 14nm manufacturing process. + +Some of the new Xeons, however, can access up to 4.5TB of memory per processor: 1.5TB of memory and 3TB of Optane memory, the new persistent memory that sits between DRAM and NAND flash memory and acts as a massive cache for both. + +## Built-in fixes for Meltdown and Spectre vulnerabilities + +Most important, though, is that these new Xeons have built-in fixes for the Meltdown and Spectre vulnerabilities. There are existing fixes for the exploits, but they have the effect of reducing performance, which varies based on workload. Intel showed a slide at the event that shows the company is using a combination of firmware and software mitigation. + +New features also include Intel Deep Learning Boost (DL Boost), a technology developed to accelerate vector computing that Intel said makes this the first CPU with built-in inference acceleration for AI workloads. It works with the AVX-512 extension, which should make it ideal for machine learning scenarios. + +Most of the new Xeons are available now, except for the 9200 Platinum, which is coming in the next few months. Many Intel partners – Dell, Cray, Cisco, Supermicro – all have new products, with Supermicro launching more than 100 new products built around Cascade Lake. + +## Intel also rolls out Xeon D-1600 series processors + +In addition to its hot rod Xeons, Intel also rolled out the Xeon D-1600 series processors, a low power variant based on a completely different architecture. Xeon D-1600 series processors are designed for space and/or power constrained environments, such as edge network devices and base stations. + +Along with the new Xeons and FPGA chips, Intel also announced the Intel Ethernet 800 series adapter, which supports 25, 50 and 100 Gigabit transfer speeds. + +Thank you, AMD. This is what competition looks like. 
+ +Join the Network World communities on [Facebook][6] and [LinkedIn][7] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html#tk.rss_all + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/04/intel-xeon-family-1-100792811-large.jpg +[2]: https://www.networkworld.com/article/3275367/what-s-quantum-computing-and-why-enterprises-need-to-care.html +[3]: https://www.networkworld.com/article/3336921/intel-promotes-swan-to-ceo-bumps-off-itanium-and-eyes-mellanox.html +[4]: https://www.networkworld.com/article/3386158/intels-agilex-fpga-family-targets-data-intensive-workloads.html +[5]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11 +[6]: https://www.facebook.com/NetworkWorld/ +[7]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190403 Top Ten Reasons to Think Outside the Router -1- It-s Time for a Router Refresh.md b/sources/tech/20190403 Top Ten Reasons to Think Outside the Router -1- It-s Time for a Router Refresh.md new file mode 100644 index 0000000000..72d566a7d0 --- /dev/null +++ b/sources/tech/20190403 Top Ten Reasons to Think Outside the Router -1- It-s Time for a Router Refresh.md @@ -0,0 +1,101 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top Ten Reasons to Think Outside the Router #1: It’s Time for a Router Refresh) +[#]: via: (https://www.networkworld.com/article/3386116/top-ten-reasons-to-think-outside-the-router-1-it-s-time-for-a-router-refresh.html#tk.rss_all) +[#]: author: (Rami Rammaha https://www.networkworld.com/author/Rami-Rammaha/) + +Top Ten Reasons to Think Outside the Router #1: It’s Time for a Router Refresh +====== + +![istock][1] + +We’re now at the end of our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as [Silver Peak][2] counts down the _Top Ten Reasons to Think Outside the Router._ Click for the [#2][3], [#3][4], [#4][5], [#5][6], [#6][7], [#7][8], [#8][9], [#9][10] and [#10][11] reasons to retire traditional branch routers. + +_**The #1 reason it’s time to retire conventional routers at the branch: your branch routers are coming due for a refresh – the perfect time to evaluate new options.**_ + +Your WAN architecture is due for a branch router refresh! You’re under immense pressure to advance your organization’s digital transformation initiatives and deliver a high quality of experience to your users and customers. Your applications – at least SaaS apps – are all cloud-based. You know you need to move more quickly to keep pace with changing business requirements to realize the transformational promise of the cloud. And, you’re dealing with shifting traffic patterns and an insatiable appetite for more bandwidth at branch sites to support your users and applications. Finally, you know your IT budget for networking isn’t going to increase. + +_So, what’s next?_ You really only have three options when it comes to refreshing your WAN. 
You can continue to try and stretch your conventional router-centric model. You can choose a basic [SD-WAN][12] model that may or may not be good enough. Or you can take a new approach and deploy a business-driven SD-WAN edge platform. + +### **The pitfalls of a router-centric model** + +![][13] + +The router-centric approach worked well when enterprise applications were hosted in the data center; before the advent of the cloud. All traffic was routed directly from branch offices to the data center. With the emergence of the cloud, businesses were forced to conform to the constraints of the network when deploying new applications or making network changes. This is a bottoms-up device centric approach in which the network becomes a bottleneck to the business. + +A router-centric approach requires manual device-by-device configuration that results in endless hours of manual programming, making it extremely difficult for network administrators to scale without experiencing major challenges in configuration, outages and troubleshooting. Any changes that arise when deploying a new application or changing a QoS or security policy, once again requires manually programming every router at every branch across the network. Re-programming is time consuming and requires utilizing a complex, cumbersome CLI, further adding to the inefficiencies of the model. In short, the router-centric WAN has hit the wall. + +### **Basic SD-WAN, a step in the right direction** + +![][14] + +In this model, businesses realize the benefit of foundational features, but this model falls short of the goal of a fully automated, business-driven network. A basic SD-WAN approach is unable to provide what the business really needs, including the ability to deliver the best Quality of Experience for users. + +Some of the basic SD-WAN features include the ability to use multiple forms of transport, path selection, centralized management, zero-touch provisioning and encrypted VPN overlays. However, a basic SD-WAN lacks in many areas: + + * Limited end-to-end orchestration of WAN edge network functions + * Rudimentary path selection with traffic steering limited to pre-defined rules + * Long fail-over times in response to WAN transport outages + * Inability to use links when they experience brownouts due to link congestion or packet loss + * Fixed application definitions and manually scripted ACLs to control traffic steering across the internet + + + +### **The solution: shift to a business-first networking model** + +![][15] + +In this model, the network enables the business. The WAN is transformed into a business accelerant that is fully automated and continuous, giving every application the resources it truly needs while delivering 10x the bandwidth for the same budget – ultimately achieving the highest quality of experience to users and IT alike. With a business-first networking model, the network functions (SD-WAN, firewall, segmentation, routing, WAN optimization and application visibility and control) are unified in a single platform and are centrally orchestrated and managed. Top-down business intent is the driver, enabling businesses to unlock the full transformational promise of the cloud. + +The business-driven [Silver Peak® EdgeConnect™ SD-WAN][16] edge platform was built for the cloud, enabling enterprises to liberate their applications from the constraints of existing WAN approaches. EdgeConnect offers the following advanced capabilities: + +1\. 
Automates traffic steering and security policy enforcement based on business intent instead of TCP/IP addresses, delivering the highest Quality of Experience for users + +2\. Actively embraces broadband to increase application performance and availability while lowering costs + +3\. Securely and directly connect branch users to SaaS and IaaS cloud services + +4\. Increases operational efficiency while increasing business agility and time-to-market via centralized orchestration + +Silver Peak has more than 1,000 enterprise customer deployments across a range of vertical industries. Bentley Systems, [Nuffield Health][17] and [Solis Mammography][18] have all realized tangible business outcomes from their EdgeConnect deployments. + +![][19] + +Learn why the time is now to [think outside the router][20]! + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3386116/top-ten-reasons-to-think-outside-the-router-1-it-s-time-for-a-router-refresh.html#tk.rss_all + +作者:[Rami Rammaha][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Rami-Rammaha/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/04/istock-478729482-100792542-large.jpg +[2]: https://www.silver-peak.com/why-silver-peak +[3]: http://blog.silver-peak.com/think-outside-the-router-reason-2-simplify-and-consolidate-the-wan-edge +[4]: http://blog.silver-peak.com/think-outside-the-router-reason-3-mpls-contract-renewal +[5]: http://blog.silver-peak.com/top-ten-reasons-to-think-outside-the-router-4-broadband-is-used-only-for-failover +[6]: http://blog.silver-peak.com/think-outside-the-router-reason-5-manual-cli-based-configuration-and-management +[7]: http://blog.silver-peak.com/https-blog-silver-peak-com-think-outside-the-router-reason-6 +[8]: http://blog.silver-peak.com/think-outside-the-router-reason-7-exorbitant-router-support-and-maintenance-costs +[9]: http://blog.silver-peak.com/think-outside-the-router-reason-8-garbled-voip-pixelated-video +[10]: http://blog.silver-peak.com/think-outside-router-reason-9-sub-par-saas-performance +[11]: http://blog.silver-peak.com/think-outside-router-reason-10-its-getting-cloudy +[12]: https://www.silver-peak.com/sd-wan/sd-wan-explained +[13]: https://images.idgesg.net/images/article/2019/04/1_router-centric-vs-business-first-100792538-medium.jpg +[14]: https://images.idgesg.net/images/article/2019/04/2_basic-sd-wan-vs-business-first-100792539-medium.jpg +[15]: https://images.idgesg.net/images/article/2019/04/3_bus-first-networking-model-100792540-large.jpg +[16]: https://www.silver-peak.com/products/unity-edge-connect +[17]: https://www.silver-peak.com/resource-center/nuffield-health-deploys-uk-wide-sd-wan-silver-peak +[18]: https://www.silver-peak.com/resource-center/national-leader-mammography-services-accelerates-access-life-critical-scans +[19]: https://images.idgesg.net/images/article/2019/04/4_real-world-business-outcomes-100792541-large.jpg +[20]: https://www.silver-peak.com/think-outside-router diff --git a/sources/tech/20190403 Use Git as the backend for chat.md b/sources/tech/20190403 Use Git as the backend for chat.md new file mode 100644 index 0000000000..e564bbc6e7 --- /dev/null +++ b/sources/tech/20190403 Use Git as the backend for chat.md @@ -0,0 +1,141 @@ +[#]: collector: (lujun9972) +[#]: 
translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Use Git as the backend for chat) +[#]: via: (https://opensource.com/article/19/4/git-based-chat) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Use Git as the backend for chat +====== +GIC is a prototype chat application that showcases a novel way to use Git. +![Team communication, chat][1] + +[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at GIC, a Git-based chat application + +### Meet GIC + +While the authors of Git probably expected frontends to be created for Git, they undoubtedly never expected Git would become the backend for, say, a chat client. Yet, that's exactly what developer Ephi Gabay did with his experimental proof-of-concept [GIC][3]: a chat client written in [Node.js][4] using Git as its backend database. + +GIC is by no means intended for production use. It's purely a programming exercise, but it's one that demonstrates the flexibility of open source technology. What's astonishing is that the client consists of just 300 lines of code, excluding the Node libraries and Git itself. And that's one of the best things about the chat client and about open source; the ability to build upon existing work. Seeing is believing, so you should give GIC a look for yourself. + +### Get set up + +GIC uses Git as its engine, so you need an empty Git repository to serve as its chatroom and logger. The repository can be hosted anywhere, as long as you and anyone who needs access to the chat service has access to it. For instance, you can set up a Git repository on a free Git hosting service like GitLab and grant chat users contributor access to the Git repository. (They must be able to make commits to the repository, because each chat message is a literal commit.) + +If you're hosting it yourself, create a centrally located bare repository. Each user in the chat must have an account on the server where the bare repository is located. You can create accounts specific to Git with Git hosting software like [Gitolite][5] or [Gitea][6], or you can give them individual user accounts on your server, possibly using **git-shell** to restrict their access to Git. + +Performance is best on a self-hosted instance. Whether you host your own or you use a hosting service, the Git repository you create must have an active branch, or GIC won't be able to make commits as users chat because there is no Git HEAD. The easiest way to ensure that a branch is initialized and active is to commit a README or license file upon creation. If you don't do that, you can create and commit one after the fact: + +``` +$ echo "chat logs" > README +$ git add README +$ git commit -m 'just creating a HEAD ref' +$ git push -u origin HEAD +``` + +### Install GIC + +Since GIC is based on Git and written in Node.js, you must first install Git, Node.js, and the Node package manager, npm (which should be bundled with Node). 
The command to install these differs depending on your Linux or BSD distribution, but here's an example command on Fedora:

```
$ sudo dnf install git nodejs
```

If you're not running Linux or BSD, follow the installation instructions on [git-scm.com][7] and [nodejs.org][8].

There's no install process, as such, for GIC. Each user (Alice and Bob, in this example) must clone the repository to their hard drive:

```
$ git clone https://github.com/ephigabay/GIC GIC
```

Change directory into the GIC directory and install the Node.js dependencies with **npm**:

```
$ cd GIC
$ npm install
```

Wait for the Node modules to download and install.

### Configure GIC

The only configuration GIC requires is the location of your Git chat repository. Edit the **config.js** file:

```
module.exports = {
gitRepo: 'seth@example.com:/home/gitchat/chatdemo.git',
messageCheckInterval: 500,
branchesCheckInterval: 5000
};
```

Test your connection to the Git repository before trying GIC, just to make sure your configuration is sane:

```
$ git clone --quiet seth@example.com:/home/gitchat/chatdemo.git > /dev/null
```

Assuming you receive no errors, you're ready to start chatting.

### Chat with Git

From within the GIC directory, start the chat client:

```
$ npm start
```

When the client first launches, it must clone the chat repository. Since it's nearly an empty repository, it won't take long. Type your message and press Enter to send a message.

![GIC][10]

A Git-based chat client. What will they think of next?

As the greeting message says, a branch in Git serves as a chatroom or channel in GIC. There's no way to create a new branch from within the GIC UI, but if you create one in another terminal session or in a web UI, it shows up immediately in GIC. It wouldn't take much to patch some IRC-style commands into GIC.

After chatting for a while, take a look at your Git repository. Since the chat happens in Git, the repository itself is also a chat log:

```
$ git log --pretty=format:"%p %cn %s"
4387984 Seth Kenlon Hey Chani, did you submit a talk for All Things Open this year?
36369bb Chani No I didn't get a chance. Did you?
[...]
```

### Exit GIC

Not since Vim has there been an application as difficult to stop as GIC. You see, there is no way to stop GIC. It will continue to run until it is killed. When you're ready to stop GIC, open another terminal tab or window and issue this command:

```
$ kill `pgrep npm`
```

GIC is a novelty. It's a great example of how an open source ecosystem encourages and enables creativity and exploration and challenges us to look at applications from different angles. Try GIC out. Maybe it will give you ideas. At the very least, it's a great excuse to spend an afternoon with Git.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/git-based-chat + +作者:[Seth Kenlon (Red Hat, Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/talk_chat_team_mobile_desktop.png?itok=d7sRtKfQ (Team communication, chat) +[2]: https://git-scm.com/ +[3]: https://github.com/ephigabay/GIC +[4]: https://nodejs.org/en/ +[5]: http://gitolite.com +[6]: http://gitea.io +[7]: http://git-scm.com +[8]: http://nodejs.org +[9]: mailto:seth@example.com +[10]: https://opensource.com/sites/default/files/uploads/gic.jpg (GIC) diff --git a/sources/tech/20190404 9 features developers should know about Selenium IDE.md b/sources/tech/20190404 9 features developers should know about Selenium IDE.md new file mode 100644 index 0000000000..b099da68e2 --- /dev/null +++ b/sources/tech/20190404 9 features developers should know about Selenium IDE.md @@ -0,0 +1,158 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (9 features developers should know about Selenium IDE) +[#]: via: (https://opensource.com/article/19/4/features-selenium-ide) +[#]: author: (Al Sargent https://opensource.com/users/alsargent) + +9 features developers should know about Selenium IDE +====== +The new Selenium IDE brings the benefits of functional test automation +to many IT professionals—and to frontend developers specifically. +![magnifying glass on computer screen][1] + +There has long been a stigma associated with using record-and-playback tools for testing rather than scripted QA automation tools like [Selenium Webdriver][2], [Cypress][3], and [WebdriverIO][4]. + +Record-and-playbook tools are perceived to suffer from many issues, including a lack of cross-browser support, no way to run scripts in parallel or from CI build scripts, poor support for responsive web apps, and no way to quickly diagnose frontend bugs. + +Needless to say, it's been somewhat of a rough road for these tools, and after Selenium IDE [went end-of-life][5] in 2017, many thought the road for record and playback would end altogether. + +Well, it turns out this perception was wrong. Not long after the Selenium IDE project was discontinued, my colleagues at [Applitools approached the Selenium open source community][6] to see how they could help. + +Since then, much of Selenium IDE's code has been revamped. The code is now freely available on GitHub under an Apache 2.0 license, managed by the Selenium community, and supported by [two full-time engineers][7], one of whom literally wrote the book on [Selenium testing][8]. + +![Selenium IDE's GitHub repository][9] + +The new Selenium IDE brings the benefits of functional test automation to many IT professionals—and to frontend developers specifically. Here are nine things developers should know about the new Selenium IDE. + +### 1\. Selenium IDE is now cross-browser + +When the record-and-playback tool first came out in 2006, Firefox was the shiny new browser it hitched its wagon to, and it remained that way for a decade. No more! Selenium IDE is now available as a [Google Chrome Extension][10] and [Firefox Add-on][11]. 
+ +Even better, Selenium IDE can run its tests on Selenium WebDriver servers by using Selenium IDE's new command-line test runner, [SIDE Runner][12]. SIDE Runner blends elements of Selenium IDE and Selenium Webdriver. It takes a Selenium IDE script, saved as a [**.side** file][13], and runs it using browser drivers such as [ChromeDriver][14], [EdgeDriver][15], Firefox's [Geckodriver][16], [IEDriver][17], and [SafariDriver][18]. + +SIDE Runner and the other drivers above are available as [straightforward npm installs][12]. Here's what it looks like in action. + +![SIDE Runner][19] + +### 2\. No more brittle functional tests + +For years, brittle tests have been an issue for functional tests—whether you record them or code them by hand. Now that developers are releasing new features more frequently, their user interface (UI) code is constantly changing as well. When a UI changes, object locators often change, too. + +Selenium IDE fixes that by capturing multiple object locators when you record your script. During playback, if Selenium IDE can't find one locator, it tries each of the other locators until it finds one that works. Your test will fail only if none of the locators work. This doesn't guarantee scripts will always play back, but it does insulate scripts against numerous changes. As you can see below, Selenium IDE captures linkText, an xPath expression, and CSS-based locators. + +![Selenium IDE captures linkText, an xPath expression, and CSS-based locators][20] + +### 3\. Conditional logic to handle UI features + +When testing web apps, scripts have to handle intermittent UI elements that can randomly appear in your app. These come in the form of cookie notices, popups for special offers, quote requests, newsletter subscriptions, paywall notifications, adblocker requests, and more. + +Conditional logic is a great way to handle these intermittent UI features. Developers can easily insert conditional logic—also called control flow—into Selenium IDE scripts. [Here are details][21] and how it looks. + +![Selenium IDE's Conditional logic][22] + +### 4\. Support for embedded code + +As broad as the new [Selenium IDE API][23] is, it doesn't do everything. For this reason, Selenium IDE has **[**execute** **script**][24]** and **[execute async script][25]** commands that let your script call a JavaScript snippet. + +This provides developers with a tremendous amount of flexibility to take advantage of JavaScript's flexibility and wide range of libraries. To use it, click on the test step where you want JavaScript to run, choose **Insert New Command** , and enter **execute script** or **execute async script** in the command field, as shown below. + +![Selenium IDE's command line][26] + +### 5\. Selenium IDE runs from CI build scripts + +Because SIDE Runner is called from the command line, you can easily fit it into CI build scripts, so long as the CI server can call **selenium-ide-runner** and upload the **.side** file (the test script) as a build artifact. For example, here's how to upload an input file in [Jenkins][27], [Travis][28], and [CircleCI][29]. + +This means Selenium IDE can be better integrated into the software development technology stack. In addition, the scripts created by less-technical QA team members—including business analysts—can run with every build. This helps better align QA with the developer so fewer bugs escape into production. + +### 6\. 
Support for third-party plugins + +Imagine companies building plugins to have Selenium IDE do all kinds of things, like uploading scripts to a functional testing cloud, a load testing cloud, or a production application monitoring service. + +Plenty of companies have integrated Selenium Webdriver into their offerings, and I bet the same will happen with Selenium IDE. You can also [build your own Selenium IDE plugin][30]. + +### 7\. Visual UI testing + +Speaking of new plugins, Applitools introduced a new Selenium IDE plugin to add artificial intelligence-powered visual validations to the equation. Available through the [Chrome][31] and [Firefox][32] stores via a three-second install, just plug in the Applitools API key and go. + +Visual checkpoints are a great way to ensure a UI renders correctly. Rather than a bunch of assert statements on all the UI elements—which would be a pain to maintain—one visual checkpoint checks all your page elements. + +Best of all, visual AI looks at a web app the same way a human does, ignoring minor differences. This means fewer fake bugs to frustrate a development team. + +### 8\. Visually test responsive web apps + +When testing the visual layout of [responsive web apps][33], it's best to do it on a wide range of screen sizes (also called viewports) to ensure nothing appears out of whack. It's all too easy for responsive web bugs to creep in, and when they do, the problems can range from merely cosmetic to business stopping. + +When you use visual UI testing for Selenium IDE, you can visually test your webpages on the Applitools [Visual Grid][34], which has more than 100 combinations of browsers, emulated devices, and viewport sizes. + +Once tests run on the Visual Grid, developers can easily check the test results on all the various combinations. + +![Selenium IDE's Visual Grid][35] + +### 9\. Responsive web bugs have nowhere to hide + +Selenium IDE can help pinpoint the cause of frontend bugs. Every Selenium IDE script that's run with the Visual Grid can be analyzed with Applitools' [Root Cause Analysis][36]. It's no longer enough to find a bug—developers also need to fix it. + +When a visual bug is discovered, it can be clicked on and just the relevant (not all) Document Object Model (DOM) and CSS differences will be displayed. + +![Finding visual bugs][37] + +In summary, much like many emerging technologies in software development, Selenium IDE is part of a larger trend of making life easier and simpler for technical professionals and enabling them to spend more time and effort on creating code for even faster feedback. 
+ +* * * + +_This article is based on[16 reasons why to use Selenium IDE in 2019 (and 2 why not)][38] originally published on the Applitools blog._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/features-selenium-ide + +作者:[Al Sargent][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alsargent +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0 (magnifying glass on computer screen) +[2]: https://www.seleniumhq.org/projects/webdriver/ +[3]: https://www.cypress.io/ +[4]: https://webdriver.io/ +[5]: https://seleniumhq.wordpress.com/2017/08/09/firefox-55-and-selenium-ide/ +[6]: https://seleniumhq.wordpress.com/2018/08/06/selenium-ide-tng/ +[7]: https://github.com/SeleniumHQ/selenium-ide/graphs/contributors +[8]: http://davehaeffner.com/ +[9]: https://opensource.com/sites/default/files/uploads/selenium_ide_github_graphic_1.png (Selenium IDE's GitHub repository) +[10]: https://chrome.google.com/webstore/detail/selenium-ide/mooikfkahbdckldjjndioackbalphokd +[11]: https://addons.mozilla.org/en-US/firefox/addon/selenium-ide/ +[12]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/command-line-runner/ +[13]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/command-line-runner/#launching-the-runner +[14]: http://chromedriver.chromium.org/ +[15]: https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/ +[16]: https://github.com/mozilla/geckodriver +[17]: https://github.com/SeleniumHQ/selenium/wiki/InternetExplorerDriver +[18]: https://developer.apple.com/documentation/webkit/testing_with_webdriver_in_safari +[19]: https://opensource.com/sites/default/files/uploads/selenium_ide_side_runner_2.png (SIDE Runner) +[20]: https://opensource.com/sites/default/files/uploads/selenium_ide_linktext_3.png (Selenium IDE captures linkText, an xPath expression, and CSS-based locators) +[21]: https://www.seleniumhq.org/selenium-ide/docs/en/introduction/control-flow/ +[22]: https://opensource.com/sites/default/files/uploads/selenium_ide_conditional_logic_4.png (Selenium IDE's Conditional logic) +[23]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/ +[24]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/#execute-script +[25]: https://www.seleniumhq.org/selenium-ide/docs/en/api/commands/#execute-async-script +[26]: https://opensource.com/sites/default/files/uploads/selenium_ide_command_line_5.png (Selenium IDE's command line) +[27]: https://stackoverflow.com/questions/27491789/how-to-upload-a-generic-file-into-a-jenkins-job +[28]: https://docs.travis-ci.com/user/uploading-artifacts/ +[29]: https://circleci.com/docs/2.0/artifacts/ +[30]: https://www.seleniumhq.org/selenium-ide/docs/en/plugins/plugins-getting-started/ +[31]: https://chrome.google.com/webstore/detail/applitools-for-selenium-i/fbnkflkahhlmhdgkddaafgnnokifobik +[32]: https://addons.mozilla.org/en-GB/firefox/addon/applitools-for-selenium-ide/ +[33]: https://en.wikipedia.org/wiki/Responsive_web_design +[34]: https://applitools.com/visualgrid +[35]: https://opensource.com/sites/default/files/uploads/selenium_ide_visual_grid_6.png (Selenium IDE's Visual Grid) +[36]: https://applitools.com/root-cause-analysis +[37]: 
https://opensource.com/sites/default/files/uploads/seleniumice_rootcauseanalysis_7.png (Finding visual bugs) +[38]: https://applitools.com/blog/why-selenium-ide-2019 diff --git a/sources/tech/20190404 Edge Computing is Key to Meeting Digital Transformation Demands - and Partnerships Can Help Deliver Them.md b/sources/tech/20190404 Edge Computing is Key to Meeting Digital Transformation Demands - and Partnerships Can Help Deliver Them.md new file mode 100644 index 0000000000..b2f8a59ab4 --- /dev/null +++ b/sources/tech/20190404 Edge Computing is Key to Meeting Digital Transformation Demands - and Partnerships Can Help Deliver Them.md @@ -0,0 +1,72 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Edge Computing is Key to Meeting Digital Transformation Demands – and Partnerships Can Help Deliver Them) +[#]: via: (https://www.networkworld.com/article/3387140/edge-computing-is-key-to-meeting-digital-transformation-demands-and-partnerships-can-help-deliver-t.html#tk.rss_all) +[#]: author: (Rob McKernan https://www.networkworld.com/author/Rob-McKernan/) + +Edge Computing is Key to Meeting Digital Transformation Demands – and Partnerships Can Help Deliver Them +====== + +### Organizations in virtually every vertical industry are undergoing a digital transformation in an attempt to take advantage of edge computing technology + +![Getty Images][1] + +Organizations in virtually every vertical industry are undergoing a digital transformation in an attempt to take advantage of [edge computing][2] technology to make their businesses more efficient, innovative and profitable. In the process, they’re coming face to face with challenges ranging from time to market to reliability of IT infrastructure. + +It’s a complex problem, especially when you consider the scope of what digital transformation entails. “Digital transformation is not simply a list of IT projects, it involves completely rethinking how an organization uses technology to pursue new revenue streams, products, services, and business models,” as the [research firm IDC says][3]. + +Companies will be spending more than $650 billion per year on digital transformation efforts by 2024, a CAGR of more than 18.5% from 2018, according to the research firm [Market Research Engine][4]. + +The drivers behind all that spending include Internet of Things (IoT) technology, which involves collecting data from machines and sensors covering every aspect of the organization. That is contributing to Big Data – the treasure trove of data that companies mine to find the keys to efficiency, opportunity and more. Artificial intelligence and machine learning are crucial to that effort, helping companies make sense of the mountains of data they’re creating and consuming, and to find opportunities. + +**Requirements for Edge Computing** + +All of these trends are creating the need for more and more compute power and data storage. And much of it needs to be close to the source of the data, and to those employees who are working with it. In other words, it’s driving the need for companies to build edge data centers or edge computing sites. + +Physically, these edge computing sites bear little resemblance to large, centralized data centers, but they have many of the same requirements in terms of performance, reliability, efficiency and security. 
Given they are typically in locations with few if any IT personnel, the data centers must have a high degree of automation and remote management capabilities. And to meet business requirements, they must be built quickly. + +**Answering the Call at the Edge** + +These are complex requirements, but if companies are to meet time-to-market goals and deal with the lack of IT personnel at the edge, they demand simple solutions. + +One solution is integration. We’re seeing this already in the IT space, with vendors delivering hyper-converged infrastructure that combines servers, storage, networking and software that is tightly integrated and delivered in a single enclosure. This saves IT groups valuable time in terms of procuring and configuring equipment and makes it far easier to manage over the long term. + +Now we’re seeing the same strategy applied to edge data centers. Prefabricated, modular data centers are an ideal solution for delivering edge data center capacity quickly and reliably. All the required infrastructure – power, cooling, racks, UPSs – can be configured and installed in a factory and delivered as a single, modular unit to the data center site (or multiple modules, depending on requirements). + +Given they’re built in a factory under controlled conditions, modular data centers are more reliable over the long haul. They can be configured with management software built-in, enabling remote management capabilities and a high degree of automation. And they can be delivered in weeks or months, not years – and in whatever size is required, including small “micro” data centers. + +Few companies, however, have all the components required to deliver a complete, functional data center, not to mention the expertise required to install and configure it. So, it takes effective partnerships to deliver complete edge data center solutions. + +**Tech Data Partnership Delivers at the Edge ** + +APC by Schneider Electric has a long history of partnering to deliver complete solutions that address customer needs. Of the thousands of partnerships it has established over the years, the [25-year partnership][5] with [Tech Data][6] is particularly relevant for the digital transformation era. + +Tech Data is a $36.8 billion, Fortune 100 company that has established itself as the world’s leading end-to-end IT distributor. Power and physical infrastructure specialists from Tech Data team up with their counterparts from APC to deliver innovative solutions, including modular and [micro data centers][7]. Many of these solutions are pre-certified by major alliance partners, including IBM, HPE, Cisco, Nutanix, Dell EMC and others. + +To learn more, [access the full story][8] that explains how the Tech Data and APC partnership helps deliver [Certainty in a Connected World][9] and effective edge computing solutions that meet today’s time to market requirements. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3387140/edge-computing-is-key-to-meeting-digital-transformation-demands-and-partnerships-can-help-deliver-t.html#tk.rss_all + +作者:[Rob McKernan][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Rob-McKernan/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/04/gettyimages-494323751-942x445-100792905-large.jpg +[2]: https://www.apc.com/us/en/solutions/business-solutions/edge-computing.jsp +[3]: https://www.idc.com/getdoc.jsp?containerId=US43985717 +[4]: https://www.marketresearchengine.com/digital-transformation-market +[5]: https://www.apc.com/us/en/partners-alliances/partners/tech-data-and-apc-partnership-drives-edge-computing-success/full-resource.jsp +[6]: https://www.techdata.com/ +[7]: https://www.apc.com/us/en/solutions/business-solutions/micro-data-centers.jsp +[8]: https://www.apc.com/us/en/partners-alliances/partners/tech-data-and-apc-partnership-drives-edge-computing-success/index.jsp +[9]: https://www.apc.com/us/en/who-we-are/certainty-in-a-connected-world.jsp diff --git a/sources/tech/20190404 Intel formally launches Optane for data center memory caching.md b/sources/tech/20190404 Intel formally launches Optane for data center memory caching.md new file mode 100644 index 0000000000..3ec4b4600e --- /dev/null +++ b/sources/tech/20190404 Intel formally launches Optane for data center memory caching.md @@ -0,0 +1,73 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Intel formally launches Optane for data center memory caching) +[#]: via: (https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all) +[#]: author: (Andy Patrizio https://www.networkworld.com/author/Andy-Patrizio/) + +Intel formally launches Optane for data center memory caching +====== + +### Intel formally launched the Optane persistent memory product line, which includes 3D Xpoint memory technology. The Intel-only solution is meant to sit between DRAM and NAND and to speed up performance. + +![Intel][1] + +As part of its [massive data center event][2] on Tuesday, Intel formally launched the Optane persistent memory product line. It had been out for a while, but the current generation of Xeon server processors could not fully utilize it. The new Xeon 8200 and 9200 lines take full advantage of it. + +And since Optane is an Intel product (co-developed with Micron), that means AMD and Arm server processors are out of luck. + +As I have [stated in the past][3], Optane DC Persistent Memory uses 3D Xpoint memory technology that Intel developed with Micron Technology. 3D Xpoint is a non-volatile memory type that is much faster than solid-state drives (SSD), almost at the speed of DRAM, but it has the persistence of NAND flash. + +**[ Read also:[Why NVMe? Users weigh benefits of NVMe-accelerated flash storage][4] and [IDC’s top 10 data center predictions][5] | Get regularly scheduled insights [Sign up for Network World newsletters][6] ]** + +The first 3D Xpoint products were SSDs called Intel’s ["ruler,"][7] because they were designed in a long, thin format similar to the shape of a ruler. 
They were designed that way to fit in 1U server carriages. As part of Tuesday’s announcement, Intel introduced the new Intel SSD D5-P4326 'Ruler' SSD, using four-cell or QLC 3D NAND memory, with up to 1PB of storage in a 1U design.

Optane DC Persistent Memory will be available in DIMM capacities from 128GB up to 512GB initially. That’s two to four times what you can get with DRAM, said Navin Shenoy, executive vice president and general manager of Intel’s Data Center Group, who keynoted the event.

“We expect system capacity in a server system to scale to 4.5 terabytes per socket or 36 TB in an 8-socket system. That’s three times larger than what we were able to do with the first-generation of Xeon Scalable,” he said.

## Intel Optane memory uses and speed

Optane runs in two different modes: Memory Mode and App Direct Mode. Memory Mode is what I have been describing to you, where Optane memory exists “above” the DRAM and acts as a cache. In App Direct Mode, the DRAM and Optane DC Persistent Memory are pooled together to maximize the total capacity. Not every workload is ideal for this kind of configuration, so it should be used in applications that are not latency-sensitive. The primary use case for Optane, as Intel is promoting it, is Memory Mode.

**[[Get certified as an Apple Technical Coordinator with this seven-part online course from PluralSight.][8] ]**

When 3D Xpoint was initially announced a few years back, Intel claimed it was 1,000 times faster than NAND, with 1,000 times the endurance, and 10 times the density potential of DRAM. Well, that was a little exaggerated, but it does have some intriguing elements.

Optane memory, when used in 256B contiguous, four-cacheline accesses, can achieve read speeds of 8.3GB/sec and write speeds of 3.0GB/sec. Compare that with the read/write speed of 500 or so MB/sec for a SATA SSD, and you can see the performance gain. Optane, remember, is feeding memory, so it caches frequently accessed SSD content.

This is the key takeaway of Optane DC. It will keep very large data sets very close to memory, and hence the CPU, with low latency while at the same time minimizing the need to access the slower storage subsystem, whether it’s SSD or HDD. It now offers the possibility of putting multiple terabytes of data very close to the CPU for much faster access.

## One challenge with Optane memory

The only real challenge is that Optane goes into DIMM slots, which is where memory goes. Now some motherboards come with as many as 16 DIMM slots per CPU socket, but that’s still board real estate that the customer and OEM provider will need to balance out: Optane vs. memory. There are some Optane drives in PCI Express format, which alleviate the memory crowding on the motherboard.

3D Xpoint also offers higher endurance than traditional NAND flash memory due to the way it writes data. Intel promises a five-year warranty with its Optane, while a lot of SSDs offer only three years.

Join the Network World communities on [Facebook][9] and [LinkedIn][10] to comment on topics that are top of mind.
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3387117/intel-formally-launches-optane-for-data-center-memory-caching.html#tk.rss_all + +作者:[Andy Patrizio][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Andy-Patrizio/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/06/intel-optane-persistent-memory-100760427-large.jpg +[2]: https://www.networkworld.com/article/3386142/intel-unveils-an-epic-response-to-amds-server-push.html +[3]: https://www.networkworld.com/article/3279271/intel-launches-optane-the-go-between-for-memory-and-storage.html +[4]: https://www.networkworld.com/article/3290421/why-nvme-users-weigh-benefits-of-nvme-accelerated-flash-storage.html +[5]: https://www.networkworld.com/article/3242807/data-center/top-10-data-center-predictions-idc.html#nww-fsb +[6]: https://www.networkworld.com/newsletters/signup.html#nww-fsb +[7]: https://www.theregister.co.uk/2018/02/02/ruler_and_miniruler_ssd_formats_look_to_banish_diskstyle_drives/ +[8]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fpaths%2Fapple-certified-technical-trainer-10-11 +[9]: https://www.facebook.com/NetworkWorld/ +[10]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190404 Why blockchain (might be) coming to an IoT implementation near you.md b/sources/tech/20190404 Why blockchain (might be) coming to an IoT implementation near you.md new file mode 100644 index 0000000000..f5915aebe7 --- /dev/null +++ b/sources/tech/20190404 Why blockchain (might be) coming to an IoT implementation near you.md @@ -0,0 +1,79 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why blockchain (might be) coming to an IoT implementation near you) +[#]: via: (https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html#tk.rss_all) +[#]: author: (Jon Gold https://www.networkworld.com/author/Jon-Gold/) + +Why blockchain (might be) coming to an IoT implementation near you +====== + +![MF3D / Getty Images][1] + +Companies have found that IoT partners well with a host of other popular enterprise computing technologies of late, and blockchain – the innovative system of distributed trust most famous for underpinning cryptocurrencies – is no exception. Yet while the two phenomena can be complementary in certain circumstances, those expecting an explosion of blockchain-enabled IoT technologies probably shouldn’t hold their breath. + +Blockchain technology can be counter-intuitive to understand at a basic level, but it’s probably best thought of as a sort of distributed ledger keeping track of various transactions. Every “block” on the chain contains transactional records or other data to be secured against tampering, and is linked to the previous one by a cryptographic hash, which means that any tampering with the block will invalidate that connection. The nodes – which can be largely anything with a CPU in it – communicate via a decentralized, peer-to-peer network to share data and ensure the validity of the data in the chain. 
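To make the hash-linking idea concrete, here is a minimal sketch in Python. It only illustrates the property described above — each block carries the hash of the block before it, so quietly editing an old record breaks the chain that follows — and is not how any production blockchain (or the IoT products discussed below) is actually implemented; the field names, sensor readings, and helper functions are invented for the example:

```
import hashlib
import json

def make_block(data, previous_hash):
    """Bundle a payload with the hash of the block before it."""
    block = {"data": data, "previous_hash": previous_hash}
    # The block's own identity is a hash over its entire contents.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash and check each link to the previous block."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "previous_hash": block["previous_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False   # this block's contents were altered
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False   # the link back to the previous block is broken
    return True

# Build a tiny three-block chain of (hypothetical) sensor readings.
chain = [make_block({"temp_c": 4.1}, previous_hash="0" * 64)]
chain.append(make_block({"temp_c": 3.9}, previous_hash=chain[-1]["hash"]))
chain.append(make_block({"temp_c": 4.0}, previous_hash=chain[-1]["hash"]))

print(chain_is_valid(chain))        # True
chain[0]["data"]["temp_c"] = -40    # retroactively falsify the first reading
print(chain_is_valid(chain))        # False -- block 1's hash no longer matches
```

A real system layers much more on top — most importantly the decentralized network of nodes that must agree on the same chain, which is why falsifying a record means overpowering most of the network rather than editing one copy — but the chained hashes sketched here are what make the stored history tamper-evident.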
+ +**[ Also see[What is edge computing?][2] and [How edge networking and IoT will reshape data centers][3].]** + +The system works because all the blocks have to agree with each other on the specifics of the data that they’re safeguarding, according to Nir Kshetri, a professor of management at the University of North Carolina – Greensboro. If someone attempts to alter a previous transaction on a given node, the rest of the data on the network pushes back. “The old record of the data is still there,” said Kshetri. + +That’s a powerful security technique – absent a bad actor successfully controlling all of the nodes on a given blockchain (the [famous “51% attack][4]”), the data protected by that blockchain can’t be falsified or otherwise fiddled with. So it should be no surprise that the use of blockchain is an attractive option to companies in some corners of the IoT world. + +Part of the reason for that, over and above the bare fact of blockchain’s ability to securely distribute trusted information across a network, is its place in the technology stack, according to Jay Fallah, CTO and co-founder of NXMLabs, an IoT security startup. + +“Blockchain stands at a very interesting intersection. Computing has accelerated in the last 15 years [in terms of] storage, CPU, etc, but networking hasn’t changed that much until recently,” he said. “[Blockchain]’s not a network technology, it’s not a data technology, it’s both.” + +### Blockchain and IoT** + +** + +Where blockchain makes sense as a part of the IoT world depends on who you speak to and what they are selling, but the closest thing to a general summation may have come from Allison Clift-Jenning, CEO of enterprise blockchain vendor Filament. + +“Anywhere where you've got people who are kind of wanting to trust each other, and have very archaic ways of doing it, that is usually a good place to start with use cases,” she said. + +One example, culled directly from Filament’s own customer base, is used car sales. Filament’s working with “a major Detroit automaker” to create a trusted-vehicle history platform, based on a device that plugs into the diagnostic port of a used car, pulls information from there, and writes that data to a blockchain. Just like that, there’s an immutable record of a used car’s history, including whether its airbags have ever been deployed, whether it’s been flooded, and so on. No unscrupulous used car lot or duplicitous former owner could change the data, and even unplugging the device would mean that there’s a suspicious blank period in the records. + +Most of present-day blockchain IoT implementation is about trust and the validation of data, according to Elvira Wallis, senior vice president and global head of IoT at SAP. + +“Most of the use cases that we have come across are in the realm of tracking and tracing items,” she said, giving the example of a farm-to-fork tracking system for high-end foodstuffs, using blockchain nodes mounted on crates and trucks, allowing for the creation of an un-fudgeable record of an item’s passage through transport infrastructure. (e.g., how long has this steak been refrigerated at such-and-such a temperature, how far has it traveled today, and so on.) + +### **Is using blockchain with IoT a good idea?** + +Different vendors sell different blockchain-based products for different use cases, which use different implementations of blockchain technology, some of which don’t bear much resemblance to the classic, linear, mined-transaction blockchain used in cryptocurrency. 
+ +That means it’s a capability that you’d buy from a vendor for a specific use case, at this point. Few client organizations have the in-house expertise to implement a blockchain security system, according to 451 Research senior analyst Csilla Zsigri. + +The idea with any intelligent application of blockchain technology is to play to its strengths, she said, creating a trusted platform for critical information. + +“That’s where I see it really adding value, just in adding a layer of trust and validation,” said Zsigri. + +Yet while the basic idea of blockchain-enabled IoT applications is fairly well understood, it’s not applicable to every IoT use case, experts agree. Applying blockchain to non-transactional systems – although there are exceptions, including NXM Labs’ blockchain-based configuration product for IoT devices – isn’t usually the right move. + +If there isn’t a need to share data between two different parties – as opposed to simply moving data from sensor to back-end – blockchain doesn’t generally make sense, since it doesn’t really do anything for the key value-add present in most IoT implementations today: data analysis. + +“We’re still in kind of the early dial-up era of blockchain today,” said Clift-Jennings. “It’s slower than a typical database, it often isn't even readable, it often doesn't have a query engine tied to it. You don't really get privacy, by nature of it.” + +Join the Network World communities on [Facebook][5] and [LinkedIn][6] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3386881/why-blockchain-might-be-coming-to-an-iot-implementation-near-you.html#tk.rss_all + +作者:[Jon Gold][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Jon-Gold/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/02/chains_binary_data_blockchain_security_by_mf3d_gettyimages-941175690_2400x1600-100788434-large.jpg +[2]: https://www.networkworld.com/article/3224893/internet-of-things/what-is-edge-computing-and-how-it-s-changing-the-network.html +[3]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[4]: https://bitcoinist.com/51-percent-attack-hackers-steals-18-million-bitcoin-gold-btg-tokens/ +[5]: https://www.facebook.com/NetworkWorld/ +[6]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190405 5 open source tools for teaching young children to read.md b/sources/tech/20190405 5 open source tools for teaching young children to read.md new file mode 100644 index 0000000000..c3a1fe82c8 --- /dev/null +++ b/sources/tech/20190405 5 open source tools for teaching young children to read.md @@ -0,0 +1,97 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 open source tools for teaching young children to read) +[#]: via: (https://opensource.com/article/19/4/early-literacy-tools) +[#]: author: (Laura B. Janusek https://opensource.com/users/lbjanusek) + +5 open source tools for teaching young children to read +====== +Early literacy apps give kids a foundation in letter recognition, +alphabet sequencing, word finding, and more. 
+![][1] + +Anyone who sees a child using a tablet or smartphone observes their seemingly innate ability to scroll through apps and swipe through screens, flexing those "digital native" muscles. According to [Common Sense Media][2], the percentage of US households in which 0- to 8-year-olds have access to a smartphone has grown from 52% in 2011 to 98% in 2017. While the debates around age guidelines and screen time surge, it's hard to deny that children are developing familiarity and skills with technology at an unprecedented rate. + +This rise in early technical literacy may be astonishing, but what about _traditional_ literacy, the good old-fashioned ability to read? What does the intersection of early literacy development and early tech use look like? Let's explore some open source tools for early learners that may help develop both of these critical skill sets. + +### Balancing risks and rewards + +But first, a disclaimer: Guidelines for technology use, especially for young children, are [constantly changing][3]. Organizations like the American Academy of Pediatrics, Common Sense Media, Zero to Three, and PBS Kids are continually conducting research and publishing recommendations. One position that all of these and other organizations can agree on is that plopping a child in front of a screen with unmonitored content for an unlimited set of time is highly inadvisable. + +Even setting kids up with educational content or tools for extended periods of time may have risks. And on the flip side, research on the benefits of education technologies is often limited or unavailable. In short, there are many cases in which we don't know for certain if educational technology use at a young age is beneficial, detrimental, or simply neutral. + +But if screen time is available to your child or student, it's logical to infer that educational resources would be preferable over simpler pop-the-bubble or slice-the-fruit games or platforms that could house inappropriate content or online predators. While we may not be able to prove that education apps will make a child's test scores soar, we can at least take comfort in their generally being safer and more age-appropriate than the internet at large. + +That said, if you're open to exploring early-education technologies, there are many reasons to look to open source options. Open source technologies are not only free but open to collaborative improvement. In many cases, they are created by developers who are educators or parents themselves, and they're a great way to avoid in-app purchases, advertisements, and paid upgrades. Open source programs can often be downloaded and installed on your device and accessed without an internet connection. Plus, the idea of [open source in education][4] is a growing trend, and there are countless resources to [learn more][5] about the concept. + +But for now, let's check out some open source tools for early literacy in action! + +### Childsplay + +![Childsplay screenshot][6] + +Let's start simple. [Childsplay][7], licensed under the GPLv2, is the most basic of the resources on this list. It's a compilation of just over a dozen educational games for young learners, four of which are specific to letter recognition, including memory games and an activity where the learner identifies a spoken letter. + +### eduActiv8 + +![eduActiv8 screenshot][8] + +[eduActiv8][9] started in 2011 as a personal project for the developer's son, "whose thirst for learning and knowledge inspired the creation of this educational program." 
It includes activities for building basic math and early literacy skills, including a variety of spelling, matching, and listening activities. Games include filling in missing letters in the alphabet, unscrambling letters to form a word, matching words to images, and completing mazes by connecting letters in the correct order. eduActiv8 was written in [Python][10] and is available under the GPLv3. + +### GCompris + +![GCompris screenshot][11] + +[GCompris][12] is an open source behemoth (licensed under the GPLv3) of early educational activities. A French software engineer started it in 2000, and it now includes over 130 educational games in nearly 20 languages. Tailored for learners under age 10, it includes activities for letter recognition and drawing, alphabet sequencing, vocabulary building, and games like hangman to identify missing letters in words, plus activities for learning braille. It also includes games in math and music, plus classics from tic-tac-toe to chess. + +### Feed the Monster + +![Feed the Monster screenshot][13] + +The quality of the playful "monster" graphics in [Feed the Monster][14] definitely sets it apart from the others on this list, plus it supports nearly 40 languages! The app includes activities for sorting letters to form words, memory games to match words to images, and letter-tracing writing activities. The app is developed by Curious Learning, which states: "We create, localize, distribute, and optimize open source mobile software so every child can learn to read." While Feed the Monster's offerings are geared toward early readers, Curious Mind's roadmap suggests it's headed towards a more robust personalized literacy platform growing on a foundation of research with MIT, Tufts, and Georgia State University. + +### Syntax Untangler + +![Syntax Untangler screenshot][15] + +[Syntax Untangler][16] is the outlier of this group. Developed by a technologist at the University of Wisconsin–Madison under the GPLv2, the application is "particularly designed for training language learners to recognize and parse linguistic features." Examples show the software being used for foreign language learning, but anyone can use it to create language identification games, including games for early literacy activities like letter recognition. It could also be applied to later literacy skills, like identifying parts of speech in complex sentences or literary techniques in poetry or fiction. + +### Wrapping up + +Access to [literary environments][17] has been shown to impact literacy and attitudes towards reading. Why not strive to create a digital literary environment for our kids by filling our devices with educational technologies, just like our shelves are filled with books? + +Now it's your turn! What open source literacy tools have you used? Comment below to share. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/early-literacy-tools + +作者:[Laura B. 
Janusek][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lbjanusek +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_kid_education.png?itok=3lRp6gFa +[2]: https://www.commonsensemedia.org/research/the-common-sense-census-media-use-by-kids-age-zero-to-eight-2017?action +[3]: https://www.businessinsider.com/smartphone-use-young-kids-toddlers-limits-science-2018-3 +[4]: /article/18/1/best-open-education +[5]: https://opensource.com/resources/open-source-education +[6]: https://opensource.com/sites/default/files/uploads/cp_flashcards.gif (Childsplay screenshot) +[7]: http://www.childsplay.mobi/ +[8]: https://opensource.com/sites/default/files/uploads/eduactiv8.jpg (eduActiv8 screenshot) +[9]: https://www.eduactiv8.org/ +[10]: /article/17/11/5-approaches-learning-python +[11]: https://opensource.com/sites/default/files/uploads/gcompris2.png (GCompris screenshot) +[12]: https://gcompris.net/index-en.html +[13]: https://opensource.com/sites/default/files/uploads/feedthemonster.png (Feed the Monster screenshot) +[14]: https://www.curiouslearning.org/ +[15]: https://opensource.com/sites/default/files/uploads/syntaxuntangler.png (Syntax Untangler screenshot) +[16]: https://courses.dcs.wisc.edu/untangler/ +[17]: http://www.jstor.org/stable/41386459 diff --git a/sources/tech/20190405 File sharing with Git.md b/sources/tech/20190405 File sharing with Git.md new file mode 100644 index 0000000000..13f95b8287 --- /dev/null +++ b/sources/tech/20190405 File sharing with Git.md @@ -0,0 +1,234 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (File sharing with Git) +[#]: via: (https://opensource.com/article/19/4/file-sharing-git) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +File sharing with Git +====== +SparkleShare is an open source, Git-based, Dropbox-style file sharing +application. Learn more in our series about little-known uses of Git. +![][1] + +[Git][2] is one of those rare applications that has managed to encapsulate so much of modern computing into one program that it ends up serving as the computational engine for many other applications. While it's best-known for tracking source code changes in software development, it has many other uses that can make your life easier and more organized. In this series leading up to Git's 14th anniversary on April 7, we'll share seven little-known ways to use Git. Today, we'll look at SparkleShare, which uses Git as the backbone for file sharing. + +### Git for file sharing + +One of the nice things about Git is that it's inherently distributed. It's built to share. Even if you're sharing a repository just with other computers on your own network, Git brings transparency to the act of getting files from a shared location. + +As interfaces go, Git is pretty simple. It varies from user to user, but the common incantation when sitting down to get some work done is just **git pull** or maybe the slightly more complex **git pull && git checkout -b my-branch**. Still, for some people, the idea of _entering a command_ into their computer at all is confusing or bothersome. Computers are meant to make life easy, and computers are good at repetitious tasks, and so there are easier ways to share files with Git. 
+ +### SparkleShare + +The [SparkleShare][3] project is a cross-platform, open source, Dropbox-style file sharing application based on Git. It automates all Git commands, triggering the add, commit, push, and pull processes with the simple act of dragging-and-dropping a file into a specially designated SparkleShare directory. Because it is based on Git, you get fast, diff-based pushes and pulls, and you inherit all the benefits of Git version control and backend infrastructure (like Git hooks). It can be entirely self-hosted, or you can use it with Git hosting services like [GitLab][4], GitHub, Bitbucket, and others. Furthermore, because it's basically just a frontend to Git, you can access your SparkleShare files on devices that may not have a SparkleShare client but do have Git clients. + +Just as you get all the benefits of Git, you also get all the usual Git restrictions: It's impractical to use SparkleShare to store hundreds of photos and music and videos because Git is designed and optimized for text. Git certainly has the capability to store large files of binary data but it is designed to track history, so once a file is added to it, it's nearly impossible to completely remove it. This somewhat limits the usefulness of SparkleShare for some people, but it makes it ideal for many workflows, including [calendaring][5]. + +#### Installing SparkleShare + +SparkleShare is cross-platform, with installers for Windows and Mac available from its [website][6]. For Linux, there's a [Flatpak][7] in your software installer, or you can run these commands in a terminal: + + +``` +$ sudo flatpak remote-add flathub +$ sudo flatpak install flathub org.sparkleshare.SparkleShare +``` + +### Creating a Git repository + +SparkleShare isn't software-as-a-service (SaaS). You run SparkleShare on your computer to communicate with a Git repository—SparkleShare doesn't store your data. If you don't have a Git repository to sync a folder with yet, you must create one before launching SparkleShare. You have three options: hosted Git, self-hosted Git, or self-hosted SparkleShare. + +#### Git hosting + +SparkleShare can use any Git repository you can access for storage, so if you have or create an account with GitLab or any other hosting service, it can become the backend for your SparkleShare. For example, the open source [Notabug.org][8] service is a Git hosting service like GitHub and GitLab, but unique enough to prove SparkleShare's flexibility. Creating a new repository differs from host to host depending on the user interface, but all of the major ones follow the same general model. + +First, locate the button in your hosting service to create a new project or repository and click on it to begin. Then step through the repository creation process, providing a name for your repository, privacy level (repositories often default to being public), and whether or not to initialize the repository with a README file. Whether you need a README or not, enable an initial README file. Starting a repository with a file isn't strictly necessary, but it forces the Git host to instantiate a **master** branch in the repository, which helps ensure that frontend applications like SparkleShare have a branch to commit and push to. It's also useful for you to see a file, even if it's an almost empty README file, to confirm that you have connected. + +![Creating a Git repository][9] + +Once you've created a repository, obtain the URL it uses for SSH clones. 
You can get this URL the same way anyone gets any URL for a Git project: navigate to the page of the repository and look for the **Clone** button or field. + +![Cloning a URL on GitHub][10] + +Cloning a GitHub URL. + +![Cloning a URL on GitLab][11] + +Cloning a GitLab URL. + +This is the address SparkleShare uses to reach your data, so make note of it. Your Git repository is now configured. + +#### Self-hosted Git + +You can use SparkleShare to access a Git repository on any computer you have access to. No special setup is required, aside from a bare Git repository. However, if you want to give access to your Git repository to anyone else, then you should run a Git manager like [Gitolite][12] or SparkleShare's own Dazzle server to help you manage SSH keys and accounts. At the very least, create a user specific to Git so that users with access to your Git repository don't also automatically gain access to the rest of your server. + +Log into your server as the Git user (or yourself, if you're very good at managing user and group permissions) and create a repository: + + +``` +$ mkdir ~/sparkly.git +$ cd ~/sparkly.git +$ git init --bare . +``` + +Your Git repository is now configured. + +#### Dazzle + +SparkleShare's developers provide a Git management system called [Dazzle][13] to help you self-host Git repositories. + +On your server, download the Dazzle application to some location in your path: + + +``` +$ curl \ +\--output ~/bin/dazzle +$ chmod +x ~/bin/dazzle +``` + +Dazzle sets up a user specific to Git and SparkleShare and also implements access rights based on keys generated by the SparkleShare application. For now, just set up a project: + + +``` +`$ dazzle create sparkly` +``` + +Your server is now configured as a SparkleShare host. + +### Configuring SparkleShare + +When you launch SparkleShare for the first time, you are prompted to configure what server you want SparkleShare to use for storage. This process may feel like a first-run setup wizard, but it's actually the usual process for setting up a new shared location within SparkleShare. Unlike many shared drive applications, with SparkleShare you can have several locations configured at once. The first shared location you configure isn't any more significant than any shared location you may set up later, and you're not signing up with SparkleShare or any other service. You're just pointing SparkleShare at a Git repository so that it knows what to keep your first SparkleShare folder in sync with. + +On the first screen, identify yourself by whatever means you want on record in the Git commits that SparkleShare makes on your behalf. You can use anything, even fake information that resolves to nothing. It's purely for the commit messages, which you may never even see if you have no interest in reviewing the Git backend processes. + +The next screen prompts you to choose your hosting type. If you are using GitLab, GitHub, Planio, or Bitbucket, then select the appropriate one. For anything else, select **Own server**. + +![Choosing a Sparkleshare host][14] + +At the bottom of this screen, you must enter the SSH clone URL. If you're self-hosting, the address is something like **** and the remote path is the absolute path to the Git repository you created for this purpose. + +Based on my self-hosted examples above, the address to my imaginary server is **** (the **:22122** indicates a nonstandard SSH port) and the remote path is **/home/git/sparkly.git**. 
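
If SparkleShare cannot connect and you want to rule out basic networking problems, it can help to confirm that plain SSH and Git can reach the same address first. The host, port, and path below are placeholders; substitute whatever values you used above, and note that the SSH check assumes your own key is authorized on the server:

```
# Placeholders only; use your own host, port, and repository path.
ssh -p 22122 git@example.com                                     # should authenticate with your key
git ls-remote ssh://git@example.com:22122/home/git/sparkly.git   # exits cleanly if Git access works
```

If either command fails, fix the SSH side before troubleshooting SparkleShare itself.
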
+ +If I use my Notabug.org account instead, the address from the example above is **[git@notabug.org][15]** and the path is **seth/sparkly.git**. + +SparkleShare will fail the first time it attempts to connect to the host because you have not yet copied the SparkleShare client ID (an SSH key specific to the SparkleShare application) to the Git host. This is expected, so don't cancel the process. Leave the SparkleShare setup window open and obtain the client ID from the SparkleShare icon in your system tray. Then copy the client ID to your clipboard so you can add it to your Git host. + +![Getting the client ID from Sparkleshare][16] + +#### Adding your client ID to a hosted Git account + +Minor UI differences aside, adding an SSH key (which is all the client ID is) is basically the same process on any hosting service. In your Git host's web dashboard, navigate to your user settings and find the **SSH Keys** category. Click the **Add New Key** button (or similar) and paste the contents of your SparkleShare client ID. + +![Adding an SSH key][17] + +Save the key. If you want someone else, such as collaborators or family members, to be able to access this same repository, they must provide you with their SparkleShare client ID so you can add it to your account. + +#### Adding your client ID to a self-hosted Git account + +A SparkleShare client ID is just an SSH key, so copy and paste it into your Git user's **~/.ssh/authorized_keys** file. + +#### Adding your client ID with Dazzle + +If you are using Dazzle to manage your SparkleShare projects, add a client ID with this command: + + +``` +`$ dazzle link` +``` + +When Dazzle prompts you for the ID, paste in the client ID found in the SparkleShare menu. + +### Using SparkleShare + +Once you've added your client ID to your Git host, click the **Retry** button in the SparkleShare window to finish setup. When it's finished cloning your repository, you can close the SparkleShare setup window, and you'll find a new **SparkleShare** folder in your home directory. If you set up a Git repository with a hosting service and chose to include a README or license file, you can see them in your SparkleShare directory. + +![Sparkleshare file manager][18] + +Otherwise, there are some hidden directories, which you can see by revealing hidden directories in your file manager. + +![Showing hidden files in GNOME][19] + +You use SparkleShare the same way you use any directory on your computer: you put files into it. Anytime a file or directory is placed into a SparkleShare folder, it's copied in the background to your Git repository. + +#### Excluding certain files + +Since Git is designed to remember _everything_ , you may want to exclude specific file types from ever being recorded. There are a few reasons to manage excluded files. By defining files that are off limits for SparkleShare, you can avoid accidental copying of large files. You can also design a scheme for yourself that enables you to store files that logically belong together (MIDI files with their **.flac** exports, for instance) in one directory, but manually back up the large files yourself while letting SparkleShare back up the text-based files. + +If you can't see hidden files in your system's file manager, then reveal them. Navigate to your SparkleShare folder, then to the directory representing your repository, locate a file called **.gitignore** , and open it in a text editor. 
You can enter file extensions or file names, one per line, into **.gitignore** , and any file matching what you list will be (as the file name suggests) ignored. + + +``` +Thumbs.db +$RECYCLE.BIN/ +.DS_Store +._* +.fseventsd +.Spotlight-V100 +.Trashes +.directory +.Trash-* +*.wav +*.ogg +*.flac +*.mp3 +*.m4a +*.opus +*.jpg +*.png +*.mp4 +*.mov +*.mkv +*.avi +*.pdf +*.djvu +*.epub +*.od{s,t} +*.cbz +``` + +You know the types of files you encounter most often, so concentrate on the ones most likely to sneak their way into your SparkleShare directory. If you want to exercise a little overkill, you can find good collections of **.gitignore** files on Notabug.org and also on the internet at large. + +With those entries in your **.gitignore** file, you can place large files that you don't want sent to your Git host in your SparkleShare directory, and SparkleShare will ignore them entirely. Of course, that means it's up to you to make sure they get onto a backup or distributed to your SparkleShare collaborators through some other means. + +### Automation + +[Automation][20] is part of the silent agreement we have with computers: they do the repetitious, boring stuff that we humans either aren't very good at doing or aren't very good at remembering. SparkleShare is a nice, simple way to automate the routine distribution of data. It isn't right for every Git repository, by any means. It doesn't have an interface for advanced Git functions; it doesn't have a pause button or a manual override. And that's OK because its scope is intentionally limited. SparkleShare does what SparkleShare sets out to do, it does it well, and it's one Git repository you won't have to think about. + +If you have a use for that kind of steady, invisible automation, give SparkleShare a try. 
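
One last, optional check before you walk away: from inside the synchronized folder, you can ask Git which of the exclusion rules described earlier, if any, catches a given file. The file names here are only examples:

```
# Run inside the SparkleShare project folder; the file names are examples only.
git check-ignore -v concert.wav photo.jpg   # prints the matching .gitignore rule for each file
git status --ignored --short                # lists everything currently excluded from syncing
```
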
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/file-sharing-git + +作者:[Seth Kenlon (Red Hat, Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_cloud21x_cc.png?itok=5UwC92dO +[2]: https://git-scm.com/ +[3]: http://www.sparkleshare.org/ +[4]: http://gitlab.com +[5]: https://opensource.com/article/19/4/calendar-git +[6]: http://sparkleshare.org +[7]: /business/16/8/flatpak +[8]: http://notabug.org +[9]: https://opensource.com/sites/default/files/uploads/git-new-repo.jpg (Creating a Git repository) +[10]: https://opensource.com/sites/default/files/uploads/github-clone-url.jpg (Cloning a URL on GitHub) +[11]: https://opensource.com/sites/default/files/uploads/gitlab-clone-url.jpg (Cloning a URL on GitLab) +[12]: http://gitolite.org +[13]: https://github.com/hbons/Dazzle +[14]: https://opensource.com/sites/default/files/uploads/sparkleshare-host.jpg (Choosing a Sparkleshare host) +[15]: mailto:git@notabug.org +[16]: https://opensource.com/sites/default/files/uploads/sparkleshare-clientid.jpg (Getting the client ID from Sparkleshare) +[17]: https://opensource.com/sites/default/files/uploads/git-ssh-key.jpg (Adding an SSH key) +[18]: https://opensource.com/sites/default/files/uploads/sparkleshare-file-manager.jpg (Sparkleshare file manager) +[19]: https://opensource.com/sites/default/files/uploads/gnome-show-hidden-files.jpg (Showing hidden files in GNOME) +[20]: /downloads/ansible-quickstart diff --git a/sources/tech/20190405 How to Authenticate a Linux Desktop to Your OpenLDAP Server.md b/sources/tech/20190405 How to Authenticate a Linux Desktop to Your OpenLDAP Server.md new file mode 100644 index 0000000000..6ee1633f9d --- /dev/null +++ b/sources/tech/20190405 How to Authenticate a Linux Desktop to Your OpenLDAP Server.md @@ -0,0 +1,190 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Authenticate a Linux Desktop to Your OpenLDAP Server) +[#]: via: (https://www.linux.com/blog/how-authenticate-linux-desktop-your-openldap-server) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +How to Authenticate a Linux Desktop to Your OpenLDAP Server +====== + +![][1] + +[Creative Commons Zero][2] + +In this final part of our three-part series, we reach the conclusion everyone has been waiting for. The ultimate goal of using LDAP (in many cases) is enabling desktop authentication. With this setup, admins are better able to manage and control user accounts and logins. After all, Active Directory admins shouldn’t have all the fun, right? + +WIth OpenLDAP, you can manage your users on a centralized directory server and connect the authentication of every Linux desktop on your network to that server. And since you already have [OpenLDAP][3] and the [LDAP Authentication Manager][4] setup and running, the hard work is out of the way. At this point, there is just a few quick steps to enabling those Linux desktops to authentication with that server. + +I’m going to walk you through this process, using the Ubuntu Desktop 18.04 to demonstrate. 
If your desktop distribution is different, you’ll only have to modify the installation steps, as the configurations should be similar. + +**What You’ll Need** + +Obviously you’ll need the OpenLDAP server up and running. You’ll also need user accounts created on the LDAP directory tree, and a user account on the client machines with sudo privileges. With those pieces out of the way, let’s get those desktops authenticating. + +**Installation** + +The first thing we must do is install the necessary client software. This will be done on all the desktop machines that require authentication with the LDAP server. Open a terminal window on one of the desktop machines and issue the following command: + +``` +sudo apt-get install libnss-ldap libpam-ldap ldap-utils nscd -y +``` + +During the installation, you will be asked to enter the LDAP server URI ( **Figure 1** ). + +![][5] + +Figure 1: Configuring the LDAP server URI for the client. + +[Used with permission][6] + +The LDAP URI is the address of the OpenLDAP server, in the form ldap://SERVER_IP (Where SERVER_IP is the IP address of the OpenLDAP server). Type that address, tab to OK, and press Enter on your keyboard. + +In the next window ( **Figure 2)** , you are required to enter the Distinguished Name of the OpenLDAP server. This will be in the form dc=example,dc=com. + +![][7] + +Figure 2: Configuring the DN of your OpenLDAP server. + +[Used with permission][6] + +If you’re unsure of what your OpenLDAP DN is, log into the LDAP Account Manager, click Tree View, and you’ll see the DN listed in the left pane ( **Figure 3** ). + +![][8] + +Figure 3: Locating your OpenLDAP DN with LAM. + +[Used with permission][6] + +The next few configuration windows, will require the following information: + + * Specify LDAP version (select 3) + + * Make local root Database admin (select Yes) + + * Does the LDAP database require login (select No) + + * Specify LDAP admin account suffice (this will be in the form cn=admin,dc=example,dc=com) + + * Specify password for LDAP admin account (this will be the password for the LDAP admin user) + + + + +Once you’ve answered the above questions, the installation of the necessary bits is complete. + +**Configuring the LDAP Client** + +Now it’s time to configure the client to authenticate against the OpenLDAP server. This is not nearly as hard as you might think. + +First, we must configure nsswitch. Open the configuration file with the command: + +``` +sudo nano /etc/nsswitch.conf +``` + +In that file, add ldap at the end of the following line: + +``` +passwd: compat systemd + +group: compat systemd + +shadow: files +``` + +These configuration entries should now look like: + +``` +passwd: compat systemd ldap +group: compat systemd ldap +shadow: files ldap +``` + +At the end of this section, add the following line: + +``` +gshadow files +``` + +The entire section should now look like: + +``` +passwd: compat systemd ldap + +group: compat systemd ldap + +shadow: files ldap + +gshadow files +``` + +Save and close that file. + +Now we need to configure PAM for LDAP authentication. Issue the command: + +``` +sudo nano /etc/pam.d/common-password +``` + +Remove use_authtok from the following line: + +``` +password [success=1 user_unknown=ignore default=die] pam_ldap.so use_authtok try_first_pass +``` + +Save and close that file. + +There’s one more PAM configuration to take care of. 
Issue the command: + +``` +sudo nano /etc/pam.d/common-session +``` + +At the end of that file, add the following: + +``` +session optional pam_mkhomedir.so skel=/etc/skel umask=077 +``` + +The above line will create the default home directory (upon first login), on the Linux desktop, for any LDAP user that doesn’t have a local account on the machine. Save and close that file. + +**Logging In** + +Reboot the client machine. When the login is presented, attempt to log in with a user on your OpenLDAP server. The user account should authenticate and present you with a desktop. You are good to go. + +Make sure to configure every single Linux desktop on your network in the same fashion, so they too can authenticate against the OpenLDAP directory tree. By doing this, any user in the tree will be able to log into any configured Linux desktop machine on your network. + +You now have an OpenLDAP server running, with the LDAP Account Manager installed for easy account management, and your Linux clients authenticating against that LDAP server. + +And that, my friends, is all there is to it. + +We’re done. + +Keep using Linux. + +It’s been an honor. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/how-authenticate-linux-desktop-your-openldap-server + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/cyber-3400789_1280_0.jpg?itok=YiinDnTw +[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO +[3]: https://www.linux.com/blog/2019/3/how-install-openldap-ubuntu-server-1804 +[4]: https://www.linux.com/blog/learn/2019/3/how-install-ldap-account-manager-ubuntu-server-1804 +[5]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_1.jpg?itok=DgYT8iY1 +[6]: /LICENSES/CATEGORY/USED-PERMISSION +[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_2.jpg?itok=CXITs7_J +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ldapauth_3.jpg?itok=HmhiYj7J diff --git a/sources/tech/20190406 Run a server with Git.md b/sources/tech/20190406 Run a server with Git.md new file mode 100644 index 0000000000..2d7749a465 --- /dev/null +++ b/sources/tech/20190406 Run a server with Git.md @@ -0,0 +1,240 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Run a server with Git) +[#]: via: (https://opensource.com/article/19/4/server-administration-git) +[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/seth) + +Run a server with Git +====== +Thanks to Gitolite, you can manage a Git server with Git. Learn how in +our series about little-known Git uses. +![computer servers processing data][1] + +As I've tried to demonstrate in this series leading up to Git's 14th anniversary on April 7, [Git][2] can do a wide range of things beyond tracking source code. Believe it or not, Git can even manage your Git server, so you can, more or less, run a Git server with Git itself. + +Of course, this involves a lot of components beyond everyday Git, not the least of which is [Gitolite][3], the backend application managing the fiddly bits that you configure using Git. 
The great thing about Gitolite is that, because it uses Git as its frontend interface, it's easy to integrate Git server administration within the rest of your Git-based workflow. Gitolite provides precise control over who can access specific repositories on your server and what permissions they have. You can manage that sort of thing yourself with the usual Linux system tools, but it takes a lot of work if you have more than just one or two repos across a half-dozen users. + +Gitolite's developers have done the hard work to make it easy for you to provide many users with access to your Git server without giving them access to your entire environment—and you can do it all with Git. + +What Gitolite is _not_ is a GUI admin and user panel. That sort of experience is available with the excellent [Gitea][4] project, but this article focuses on the simple elegance and comforting familiarity of Gitolite. + +### Install Gitolite + +Assuming your Git server runs Linux, you can install Gitolite with your package manager ( **yum** on CentOS and RHEL, **apt** on Debian and Ubuntu, **zypper** on OpenSUSE, and so on). For example, on RHEL: + + +``` +`$ sudo yum install gitolite3` +``` + +Many repositories still have older versions of Gitolite for legacy support, but the current version is version 3. + +You must have passwordless SSH access to your server. You can use a password to log in if you prefer, but Gitolite relies on SSH keys, so you must configure the option to log in with keys. If you don't know how to configure a server for passwordless SSH access, go learn how to do that first (the [Setting up SSH key authentication][5] section of Steve Ovens's Ansible article explains it well). It's an essential part of secure server administration—as well as of running Gitolite. + +### Configure a Git user + +Without Gitolite, if a person requests access to a Git repository you host on a server, you have to provide that person with a user account. Git provides a special shell, the **git-shell** , which is an ultra-specific shell that performs only Git tasks. This lets you have users who can access your server only through the filter of a very limited shell environment. + +That solution works, but it usually means a user gains access to all repositories on your server unless you have a very good schema for group permissions and maintain those permissions strictly whenever a new repository is created. It also requires a lot of manual configuration at the system level, an area usually reserved for a specific tier of sysadmins and not necessarily the person usually in charge of Git repositories. + +Gitolite sidesteps this issue entirely by designating one username for every person who needs access to any repository. By default, the username is **git** , and because Gitolite's documentation assumes that's what is used, it's a good default to keep when you're learning the tool. It's also a well-known convention for anyone who's ever used GitLab or GitHub or any other Git hosting service. + +Gitolite calls this user the _hosting user_. Create an account on your server to act as the hosting user (I'll stick with **git** because that's the convention): + + +``` +` $ sudo adduser --create-home git` +``` + +For you to control the **git** user account, it must have a valid public SSH key that belongs to you. 
You should already have this set up, so **cp** your public key ( _not your private key_ ) to the **git** user's home directory: + + +``` +$ sudo cp ~/.ssh/id_ed25519.pub /home/git/ +$ sudo chown git:git /home/git/id_ed25519.pub +``` + +If your public key doesn't end with the extension **.pub** , Gitolite will not use it, so rename the file accordingly. Change to that user account to run Gitolite's setup: + + +``` +$ sudo su - git +$ gitolite setup --pubkey id_ed25519.pub +``` + +After the setup script runs, the **git** home's user directory will have a **repositories** directory, which (for now) contains the files **git-admin.git** and **testing.git**. That's all the setup the server requires, so log out. + +### Use Gitolite + +Managing Gitolite is a matter of editing text files in a Git repository, specifically **gitolite-admin.git**. You won't SSH into your server for Git administration, and Gitolite encourages you not to try. The repositories you and your users store on the Gitolite server are _bare_ repositories, so it's best to stay out of them. + + +``` +$ git clone [git@example.com][6]:gitolite-admin.git gitolite-admin.git +$ cd gitolite-admin.git +$ ls -1 +conf +keydir +``` + +The **conf** directory in this repository contains a file called **gitolite.conf**. Open it in a text editor or use **cat** to view its contents: + + +``` +repo gitolite-admin +RW+ = id_ed22519 + +repo testing +RW+ = @all +``` + +You may have an idea of what this configuration file does: **gitolite-admin** represents this repository, and the owner of the **id_ed25519** key has read, write, and Git administrative privileges. In other words, rather than mapping users to normal local Unix users (because all your users log in using the **git** hosting user identity), Gitolite maps users to SSH keys listed in the **keydir** directory. + +The **testing.git** repository gives full permissions to everyone with access to the server using special group notation. + +#### Add users + +If you want to add a user called **alice** to your Git server, the person Alice must send you her public SSH key. Gitolite uses whatever is to the left of the **.pub** extension as the identifier for your Git users. Rather than using the default key name values, give keys a name indicative of the key owner. If a user has more than one key (e.g., one for her laptop, one for her desktop), you can use subdirectories to avoid file name collisions. For instance, the key Alice uses from her laptop might come to you as the default **id_rsa.pub** , so rename it **alice.pub** or similar (or let the users name the key according to their local user accounts on their computers), and place it into the **gitolite-admin.git/keydir/work/laptop/** directory. If she sends you another key from her desktop, name it **alice.pub** (the same as the previous one) and add it to **keydir/work/desktop/**. Another key might go into **keydir/home/desktop/** , and so on. Gitolite recursively searches **keydir** for a **.pub** file matching a repository "user" and treats any match as the same identity. + +When you add keys to the **keydir** directory, you must commit them back to your server. This is such an easy thing to forget that there's a real argument here for using an automated Git application like [**Sparkleshare**][7] so any change is committed back to your Gitolite admin immediately. 
The first time you forget to commit and push—and waste three hours of your time and your user's time troubleshooting—you'll see that Gitolite is the perfect justification for using Sparkleshare. + + +``` +$ git add keydir +$ git commit -m 'added alice-laptop-0.pub' +$ git push origin HEAD +``` + +Alice, by default, gains access to the **testing.git** directory so she can test connectivity and functionality with that. + +#### Set permissions + +As with users, directory permissions and groups are abstracted away from the normal Unix tools you might be used to (or find information about online). Permissions to projects are granted in the **gitolite.conf** file in **gitolite-admin.git/conf** directory. There are four levels of permissions: + + * **R** allows read-only. A user with **R** permissions on a repository may clone it, and that's all. + * **RW** allows a user to perform a fast-forward push of a branch, create new branches, and create new tags. More or less, this one feels like a "normal" Git repository to most users. + * **RW+** allows Git actions that are potentially destructive. A user can perform normal fast-forward pushes, as well as rewind pushes, do rebases, and delete branches and tags. This may or may not be something you want to grant to all contributors on a project. + * **-** explicitly denies access to a repository. This is essentially the same as a user not being listed in the repository's configuration. + + + +Create a new repository or modify an existing repository's permissions by adjusting **gitolite.conf**. For instance, to give Alice permissions to administrate a new repository called **widgets.git** : + + +``` +repo gitolite-admin +RW+ = id_ed22519 + +repo testing +RW+ = @all + +repo widgets +RW+ = alice +``` + +Now Alice—and Alice alone—can clone the repo: + + +``` +[alice]$ git clone [git@example.com][6]:widgets.git +Cloning into 'widgets'... +warning: You appear to have cloned an empty repository. +``` + +On her initial push, Alice must use the **-u** option to send her branch to the empty repository (as she would have to do with any Git host). + +To make user management easier, you can define groups of repositories: + + +``` +@qtrepo = widgets +@qtrepo = games + +repo gitolite-admin +RW+ = id_ed22519 + +repo testing +RW+ = @all + +repo @qtrepo +RW+ = alice +``` + +Just as you can create group repositories, you can group users. One user group exists by default: **@all**. As you might expect, it includes all users, without exception. You can create your own: + + +``` +@qtrepo = widgets +@qtrepo = games + +@developers = alice bob + +repo gitolite-admin +RW+ = id_ed22519 + +repo testing +RW+ = @all + +repo @qtrepo +RW+ = @developers +``` + +As with adding or modifying key files, any change to the **gitolite.conf** file must be committed and pushed to take effect. + +### Create a repository + +By default, Gitolite assumes repository creation happens from the top down. For instance, a project manager with access to the Git server creates a project repository and, through the Gitolite administration repo, adds developers. + +In practice, you might prefer to grant users permission to create repositories. Gitolite calls these "wild repos" (I'm not sure whether that's commentary on how the repos come into being or a reference to the wildcard characters required by the configuration file to let it happen). 
Here's an example: + + +``` +@managers = alice bob + +repo foo/CREATOR/[a-z]..* +C = @managers +RW+ = CREATOR +RW = WRITERS +R = READERS +``` + +The first line defines a group of users: the group is called **@managers** and contains users **alice** and **bob**. The next line sets up a wildcard allowing repositories that do not yet exist to be created in a directory called **foo** followed by a subdirectory named for the user creating the repo. For example: + + +``` +[alice]$ git clone [git@example.com][6]:foo/alice/cool-app.git +Cloning into cool-app'... +Initialized empty Git repository in /home/git/repositories/foo/alice/cool-app.git +warning: You appear to have cloned an empty repository. +``` + +There are some mechanisms for the creator of a wild repo to define who can read and write to their repository, but they're limited in scope. For the most part, Gitolite assumes that a specific set of users governs project permission. One solution is to grant all users access to **gitolite-admin** using a Git hook to require manager approval to merge changes into the master branch. + +### Learn more + +Gitolite has many more features than what this introductory article covers, so try it out. The [documentation][8] is excellent, and once you read through it, you can customize your Gitolite server to provide your users whatever level of control you are comfortable with. Gitolite is a low-maintenance, simple system that you can install, set up, and then more or less forget about. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/server-administration-git + +作者:[Seth Kenlon (Red Hat, Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8 (computer servers processing data) +[2]: https://git-scm.com/ +[3]: http://gitolite.com +[4]: http://gitea.io +[5]: Setting%20up%20SSH%20key%20authentication +[6]: mailto:git@example.com +[7]: https://opensource.com/article/19/4/file-sharing-git +[8]: http://gitolite.com/gitolite/quick_install.html diff --git a/sources/tech/20190407 Manage multimedia files with Git.md b/sources/tech/20190407 Manage multimedia files with Git.md new file mode 100644 index 0000000000..340c356aa9 --- /dev/null +++ b/sources/tech/20190407 Manage multimedia files with Git.md @@ -0,0 +1,247 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Manage multimedia files with Git) +[#]: via: (https://opensource.com/article/19/4/manage-multimedia-files-git) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +Manage multimedia files with Git +====== +Learn how to use Git to track large multimedia files in your projects in +the final article in our series on little-known uses of Git. +![video editing dashboard][1] + +Git is very specifically designed for source code version control, so it's rarely embraced by projects and industries that don't primarily work in plaintext. 
However, the advantages of an asynchronous workflow are appealing, especially in the ever-growing number of industries that combine serious computing with seriously artistic ventures, including web design, visual effects, video games, publishing, currency design (yes, that's a real industry), education… the list goes on and on. + +In this series leading up to Git's 14th anniversary, we've shared six little-known ways to use Git. In this final article, we'll look at software that brings the advantages of Git to managing multimedia files. + +### The problem with managing multimedia files with Git + +It seems to be common knowledge that Git doesn't work well with non-text files, but it never hurts to challenge assumptions. Here's an example of copying a photo file using Git: + + +``` +$ du -hs +108K . +$ cp ~/photos/dandelion.tif . +$ git add dandelion.tif +$ git commit -m 'added a photo' +[master (root-commit) fa6caa7] two photos +1 file changed, 0 insertions(+), 0 deletions(-) +create mode 100644 dandelion.tif +$ du -hs +1.8M . +``` + +Nothing unusual so far; adding a 1.8MB photo to a directory results in a directory 1.8MB in size. So, let's try removing the file: + + +``` +$ git rm dandelion.tif +$ git commit -m 'deleted a photo' +$ du -hs +828K . +``` + +You can see the problem here: Removing a large file after it's been committed increases a repository's size roughly eight times its original, barren state (from 108K to 828K). You can perform tests to get a better average, but this simple demonstration is consistent with my experience. The cost of committing files that aren't text-based is minimal at first, but the longer a project stays active, the more changes people make to static content, and the more those fractions start to add up. When a Git repository becomes very large, the major cost is usually speed. The time to perform pulls and pushes goes from being how long it takes to take a sip of coffee to how long it takes to wonder if your computer got kicked off the network. + +The reason static content causes Git to grow in size is that formats based on text allow Git to pull out just the parts that have changed. Raster images and music files make as much sense to Git as they would to you if you looked at the binary data contained in a .png or .wav file. So Git just takes all the data and makes a new copy of it, even if only one pixel changes from one photo to the next. + +### Git-portal + +In practice, many multimedia projects don't need or want to track the media's history. The media part of a project tends to have a different lifecycle than the text or code part of a project. Media assets generally progress in one direction: a picture starts as a pencil sketch, proceeds toward its destination as a digital painting, and, even if the text is rolled back to an earlier version, the art continues its forward progress. It's rare for media to be bound to a specific version of a project. The exceptions are usually graphics that reflect datasets—usually tables or graphs or charts—that can be done in text-based formats such as SVG. + +So, on many projects that involve both media and text (whether it's narrative prose or code), Git is an acceptable solution to file management, as long as there's a playground outside the version control cycle for artists to play in. 
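
That playground can be sketched by hand before looking at the tool that automates it: park the heavy file outside the repository and commit only a small symlink in its place. The paths below are illustrative:

```
# Illustrative paths; the point is to version a symlink instead of the binary itself.
mkdir -p ~/assets                       # storage area outside version control
mv dandelion.tif ~/assets/              # park the heavy binary there
ln -s ~/assets/dandelion.tif .          # leave a tiny symlink behind in the repository
git add dandelion.tif                   # Git records the link target path, not the pixels
git commit -m 'track dandelion.tif as a symlink'
```

The application described next wraps exactly this pattern in a friendlier workflow.
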
+ +![Graphic showing relationship between art assets and Git][2] + +A simple way to enable that is [Git-portal][3], a Bash script armed with Git hooks that moves your asset files to a directory outside Git's purview and replaces them with symlinks. Git commits the symlinks (sometimes called aliases or shortcuts), which are trivially small, so all you commit are your text files and whatever symlinks represent your media assets. Because the replacement files are symlinks, your project continues to function as expected because your local machine follows the symlinks to their "real" counterparts. Git-portal maintains a project's directory structure when it swaps out a file with a symlink, so it's easy to reverse the process, should you decide that Git-portal isn't right for your project or you need to build a version of your project without symlinks (for distribution, for instance). + +Git-portal also allows remote synchronization of assets over rsync, so you can set up a remote storage location as a centralized source of authority. + +Git-portal is ideal for multimedia projects, including video game and tabletop game design, virtual reality projects with big 3D model renders and textures, [books][4] with graphics and .odt exports, collaborative [blog websites][5], music projects, and much more. It's not uncommon for an artist to perform versioning in their application—in the form of layers (in the graphics world) and tracks (in the music world)—so Git adds nothing to multimedia project files themselves. The power of Git is leveraged for other parts of artistic projects (prose and narrative, project management, subtitle files, credits, marketing copy, documentation, and so on), and the power of structured remote backups is leveraged by the artists. + +#### Install Git-portal + +There are RPM packages for Git-portal located at , which you can download and install. + +Alternately, you can install Git-portal manually from its home on GitLab. It's just a Bash script and some Git hooks (which are also Bash scripts), but it requires a quick build process so that it knows where to install itself: + + +``` +$ git clone git-portal.clone +$ cd git-portal.clone +$ ./configure +$ make +$ sudo make install +``` + +#### Use Git-portal + +Git-portal is used alongside Git. This means, as with all large-file extensions to Git, there are some added steps to remember. But you only need Git-portal when dealing with your media assets, so it's pretty easy to remember unless you've acclimated yourself to treating large files the same as text files (which is rare for Git users). There's one setup step you must do to use Git-portal in a project: + + +``` +$ mkdir bigproject.git +$ cd !$ +$ git init +$ git-portal init +``` + +Git-portal's **init** function creates a **_portal** directory in your Git repository and adds it to your .gitignore file. + +Using Git-portal in a daily routine integrates smoothly with Git. A good example is a MIDI-based music project: the project files produced by the music workstation are text-based, but the MIDI files are binary data: + + +``` +$ ls -1 +_portal +song.1.qtr +song.qtr +song-Track_1-1.mid +song-Track_1-3.mid +song-Track_2-1.mid +$ git add song*qtr +$ git-portal song-Track*mid +$ git add song-Track*mid +``` + +If you look into the **_portal** directory, you'll find the original MIDI files. The files in their place are symlinks to **_portal** , which keeps the music workstation working as expected: + + +``` +$ ls -lG +[...] _portal/ +[...] song.1.qtr +[...] song.qtr +[...] 
song-Track_1-1.mid -> _portal/song-Track_1-1.mid* +[...] song-Track_1-3.mid -> _portal/song-Track_1-3.mid* +[...] song-Track_2-1.mid -> _portal/song-Track_2-1.mid* +``` + +As with Git, you can also add a directory of files: + + +``` +$ cp -r ~/synth-presets/yoshimi . +$ git-portal add yoshimi +Directories cannot go through the portal. Sending files instead. +$ ls -lG _portal/yoshimi +[...] yoshimi.stat -> ../_portal/yoshimi/yoshimi.stat* +``` + +Removal works as expected, but when removing something in **_portal** , you should use **git-portal rm** instead of **git rm**. Using Git-portal ensures that the file is removed from **_portal** : + + +``` +$ ls +_portal/ song.qtr song-Track_1-3.mid@ yoshimi/ +song.1.qtr song-Track_1-1.mid@ song-Track_2-1.mid@ +$ git-portal rm song-Track_1-3.mid +rm 'song-Track_1-3.mid' +$ ls _portal/ +song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/ +``` + +If you forget to use Git-portal, then you have to remove the portal file manually: + + +``` +$ git-portal rm song-Track_1-1.mid +rm 'song-Track_1-1.mid' +$ ls _portal/ +song-Track_1-1.mid* song-Track_2-1.mid* yoshimi/ +$ trash _portal/song-Track_1-1.mid +``` + +Git-portal's only other function is to list all current symlinks and find any that may have become broken, which can sometimes happen if files move around in a project directory: + + +``` +$ mkdir foo +$ mv yoshimi foo +$ git-portal status +bigproject.git/song-Track_2-1.mid: symbolic link to _portal/song-Track_2-1.mid +bigproject.git/foo/yoshimi/yoshimi.stat: broken symbolic link to ../_portal/yoshimi/yoshimi.stat +``` + +If you're using Git-portal for a personal project and maintaining your own backups, this is technically all you need to know about Git-portal. If you want to add in collaborators or you want Git-portal to manage backups the way (more or less) Git does, you can a remote. + +#### Add Git-portal remotes + +Adding a remote location for Git-portal is done through Git's existing remote function. Git-portal implements Git hooks, scripts hidden in your repository's .git directory, to look at your remotes for any that begin with **_portal**. If it finds one, it attempts to **rsync** to the remote location and synchronize files. Git-portal performs this action anytime you do a Git push or a Git merge (or pull, which is really just a fetch and an automatic merge). + +If you've only cloned Git repositories, then you may never have added a remote yourself. It's a standard Git procedure: + + +``` +$ git remote add origin [git@gitdawg.com][6]:seth/bigproject.git +$ git remote -v +origin [git@gitdawg.com][6]:seth/bigproject.git (fetch) +origin [git@gitdawg.com][6]:seth/bigproject.git (push) +``` + +The name **origin** is a popular convention for your main Git repository, so it makes sense to use it for your Git data. Your Git-portal data, however, is stored separately, so you must create a second remote to tell Git-portal where to push to and pull from. Depending on your Git host, you may need a separate server because gigabytes of media assets are unlikely to be accepted by a Git host with limited space. 
Or maybe you're on a server that permits you to access only your Git repository and not external storage directories: + + +``` +$ git remote add _portal [seth@example.com][7]:/home/seth/git/bigproject_portal +$ git remote -v +origin [git@gitdawg.com][6]:seth/bigproject.git (fetch) +origin [git@gitdawg.com][6]:seth/bigproject.git (push) +_portal [seth@example.com][7]:/home/seth/git/bigproject_portal (fetch) +_portal [seth@example.com][7]:/home/seth/git/bigproject_portal (push) +``` + +You may not want to give all of your users individual accounts on your server, and you don't have to. To provide access to the server hosting a repository's large file assets, you can run a Git frontend like **[Gitolite][8]** , or you can use **rrsync** (i.e., restricted rsync). + +Now you can push your Git data to your remote Git repository and your Git-portal data to your remote portal: + + +``` +$ git push origin HEAD +master destination detected +Syncing _portal content... +sending incremental file list +sent 9,305 bytes received 18 bytes 1,695.09 bytes/sec +total size is 60,358,015 speedup is 6,474.10 +Syncing _portal content to example.com:/home/seth/git/bigproject_portal +``` + +If you have Git-portal installed and a **_portal** remote configured, your **_portal** directory will be synchronized, getting new content from the server and sending fresh content with every push. While you don't have to do a Git commit and push to sync with the server (a user could just use rsync directly), I find it useful to require commits for artistic changes. It integrates artists and their digital assets into the rest of the workflow, and it provides useful metadata about project progress and velocity. + +### Other options + +If Git-portal is too simple for you, there are other options for managing large files with Git. [Git Large File Storage][9] (LFS) is a fork of a defunct project called git-media and is maintained and supported by GitHub. It requires special commands (like **git lfs track** to protect large files from being tracked by Git) and requires the user to manage a .gitattributes file to update which files in the repository are tracked by LFS. It supports _only_ HTTP and HTTPS remotes for large files, so your LFS server must be configured so users can authenticate over HTTP rather than SSH or rsync. + +A more flexible option than LFS is [git-annex][10], which you can learn more about in my article about [managing binary blobs in Git][11] (ignore the parts about the deprecated git-media, as its former flexibility doesn't apply to its successor, Git LFS). Git-annex is a flexible and elegant solution with a detailed system for adding, removing, and moving large files within a repository. Because it's flexible and powerful, there are lots of new commands and rules to learn, so take a look at its [documentation][12]. + +If, however, your needs are simple and you like a solution that utilizes existing technology to do simple and obvious tasks, Git-portal might be the tool for the job. 
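
For comparison, here is roughly what the Git LFS flow mentioned above looks like. Treat it as a sketch: LFS needs matching support on the hosting side, and the file pattern is only an example:

```
# Sketch of the Git LFS workflow; requires LFS support on the Git host.
git lfs install                  # enable the LFS filter and hooks for your user
git lfs track '*.tif'            # record the pattern in .gitattributes
git add .gitattributes dandelion.tif
git commit -m 'store TIFF files through LFS'
git push origin master           # large content goes to the LFS endpoint over HTTP(S)
```
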
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/manage-multimedia-files-git + +作者:[Seth Kenlon (Red Hat, Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard) +[2]: https://opensource.com/sites/default/files/uploads/git-velocity.jpg (Graphic showing relationship between art assets and Git) +[3]: http://gitlab.com/slackermedia/git-portal.git +[4]: https://www.apress.com/gp/book/9781484241691 +[5]: http://mixedsignals.ml +[6]: mailto:git@gitdawg.com +[7]: mailto:seth@example.com +[8]: https://opensource.com/article/19/4/file-sharing-git +[9]: https://git-lfs.github.com/ +[10]: https://git-annex.branchable.com/ +[11]: https://opensource.com/life/16/8/how-manage-binary-blobs-git-part-7 +[12]: https://git-annex.branchable.com/walkthrough/ diff --git a/sources/tech/20190407 What it means to be Cloud-Native approach - the CNCF way.md b/sources/tech/20190407 What it means to be Cloud-Native approach - the CNCF way.md new file mode 100644 index 0000000000..10e073a029 --- /dev/null +++ b/sources/tech/20190407 What it means to be Cloud-Native approach - the CNCF way.md @@ -0,0 +1,123 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What it means to be Cloud-Native approach — the CNCF way) +[#]: via: (https://medium.com/@sonujose993/what-it-means-to-be-cloud-native-approach-the-cncf-way-9e8ab99d4923) +[#]: author: (Sonu Jose https://medium.com/@sonujose993) + +What it means to be Cloud-Native approach — the CNCF way +====== + +![](https://cdn-images-1.medium.com/max/2400/0*YknjM7T_Pxwz9deR) + +While discussing on Digital Transformation and modern application development Cloud-Native is a term which frequently comes in. But what does it actually means to be cloud-native? This blog is all about giving a good understanding of the cloud-native approach and the ways to achieve it in the CNCF way. + +Michael Dell once said that “the cloud isn’t a place, it’s a way of doing IT”. He was right, and the same can be said of cloud-native. + +Cloud-native is an approach to building and running applications that exploit the advantages of the cloud computing delivery model. Cloud-native is about how applications are created and deployed, not where. … It’s appropriate for both public and private clouds. + +Cloud native architectures take full advantage of on-demand delivery, global deployment, elasticity, and higher-level services. They enable huge improvements in developer productivity, business agility, scalability, availability, utilization, and cost savings. + +### CNCF (Cloud native computing foundation) + +Google has been using containers for many years and they led the Kubernetes project which is a leading container orchestration platform. But alone they can’t really change the broad perspective in the industry around modern applications. So there was a huge need for industry leaders to come together and solve the major problems facing the modern approach. 
In order to achieve this broader vision, Google donated kubernetes to the Cloud Native foundation and this lead to the birth of CNCF in 2015. + +![](https://cdn-images-1.medium.com/max/1200/1*S1V9R_C_rjLVlH3M8dyF-g.png) + +Cloud Native computing foundation is created in the Linux foundation for building and managing platforms and solutions for modern application development. It really is a home for amazing projects that enable modern application development. CNCF defines cloud-native as “scalable applications” running in “modern dynamic environments” that use technologies such as containers, microservices, and declarative APIs. Kubernetes is the world’s most popular container-orchestration platform and the first CNCF project. + +### The approach… + +CNCF created a trail map to better understand the concept of Cloud native approach. In this article, we will be discussed based on this landscape. The newer version is available at https://landscape.cncf.io/ + +The Cloud Native Trail Map is CNCF’s recommended path through the cloud-native landscape. This doesn’t define a specific path with which we can approach digital transformation rather there are many possible paths you can follow to align with this concept based on your business scenario. This is just a trail to simplify the journey to cloud-native. + + +Let's start discussing the steps defined in this trail map. + +### 1. CONTAINERIZATION + +![][1] + +You can’t do cloud-native without containerizing your application. It doesn’t matter what size the application is any type of application will do. **A container is a standard unit of software that packages up the code and all its dependencies** so the application runs quickly and reliably from one computing environment to another. Docker is the most preferred platform for containerization. A **Docker container** image is a lightweight, standalone, executable package of software that includes everything needed to run an application. + +### 2. CI/CD + +![][2] + +Setup Continuous Integration/Continuous Delivery (CI/CD) so that changes to your source code automatically result in a new container being built, tested, and deployed to staging and eventually, perhaps, to production. Next thing we need to setup is automated rollouts, rollbacks as well as testing. There are a lot of platforms for CI/CD: **Jenkins, VSTS, Azure DevOps** , TeamCity, JFRog, Spinnaker, etc.. + +### 3. ORCHESTRATION + +![][3] + +Container orchestration is all about managing the lifecycles of containers, especially in large, dynamic environments. Software teams use container orchestration to control and automate many tasks. **Kubernetes** is the market-leading orchestration solution. There are other orchestrators like Docker swarm, Mesos, etc.. **Helm Charts** help you define, install, and upgrade even the most complex Kubernetes application. + +### 4. OBSERVABILITY & ANALYSIS + +Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster. Kubernetes provides detailed information about an application’s resource usage at each of these levels. This information allows you to evaluate your application’s performance and where bottlenecks can be removed to improve overall performance. + +![][4] + +Pick solutions for monitoring, logging, and tracing. Consider CNCF projects Prometheus for monitoring, Fluentd for logging and Jaeger for TracingFor tracing, look for an OpenTracing-compatible implementation like Jaeger. + +### 5. 
SERVICE MESH

As the name suggests, a service mesh handles the connections between services: **service discovery**, **health checking**, **routing**, and **monitoring ingress** traffic from the internet. A service mesh also often has more complex operational requirements, like A/B testing, canary rollouts, rate limiting, access control, and end-to-end authentication.

![][5]

**Istio** provides behavioral insights and operational control over the service mesh as a whole, offering a complete solution to satisfy the diverse requirements of microservice applications. **CoreDNS** is a fast and flexible tool that is useful for service discovery. **Envoy** and **Linkerd** each enable service mesh architectures.

### 6. NETWORKING AND POLICY

It is really important to enable a more flexible networking layer. To do so, use a CNI-compliant network project like Calico, Flannel, or Weave Net. Open Policy Agent (OPA) is a general-purpose policy engine with uses ranging from authorization and admission control to data filtering.

### 7. DISTRIBUTED DATABASE

A distributed database is a database in which not all storage devices are attached to a common processor. It may be stored on multiple computers in the same physical location, or it may be dispersed over a network of interconnected machines.

![][6]

When you need more resiliency and scalability than you can get from a single database, **Vitess** is a good option for running MySQL at scale through sharding. Rook is a storage orchestrator that integrates a diverse set of storage solutions into Kubernetes. Serving as the “brain” of Kubernetes, etcd provides a reliable way to store data across a cluster of machines.

### 8. MESSAGING

When you need higher performance than JSON-REST, consider using gRPC or NATS. gRPC is a universal RPC framework. NATS is a multi-modal messaging system that supports request/reply, pub/sub, and load-balanced queues, which also makes it a good fit for newer use cases such as IoT.

### 9. CONTAINER REGISTRY & RUNTIMES

A container registry is a single place for your team to manage Docker images, perform vulnerability analysis, and decide who can access what with fine-grained access control. There are many container registries on the market: Docker Hub, Azure Container Registry, Harbor, Nexus Registry, Amazon Elastic Container Registry, and more.

![][7]

The container runtime **containerd** is available as a daemon for Linux and Windows. It manages the complete container lifecycle of its host system, from image transfer and storage to container execution and supervision to low-level storage and network attachments.

### 10. SOFTWARE DISTRIBUTION

If you need to do secure software distribution, evaluate Notary, an implementation of The Update Framework (TUF).

TUF provides a framework (a set of libraries, file formats, and utilities) that can be used to secure new and existing software update systems. The framework is designed to protect applications from all known attacks on the software update process. It is not concerned with exposing information about what software is being updated (and thus what software the client may be running) or the contents of updates.
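
As a small, concrete illustration of this last step, Docker's content trust feature (which uses Notary under the hood) can be switched on per shell session. The registry and image names below are placeholders:

```
# Placeholder registry and image names; requires a registry that supports Notary/content trust.
export DOCKER_CONTENT_TRUST=1                      # require signed images in this shell
docker pull alpine:3.9                             # the pull succeeds only if the signature verifies
docker push registry.example.com/team/app:1.0      # the first signed push prompts for signing keys
```
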
+ +-------------------------------------------------------------------------------- + +via: https://medium.com/@sonujose993/what-it-means-to-be-cloud-native-approach-the-cncf-way-9e8ab99d4923 + +作者:[Sonu Jose][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://medium.com/@sonujose993 +[b]: https://github.com/lujun9972 +[1]: https://cdn-images-1.medium.com/max/1200/1*glD7bNJG3SlO0_xNmSGPcQ.png +[2]: https://cdn-images-1.medium.com/max/1600/1*qOno8YNzmwimlaL9j2fSbA.png +[3]: https://cdn-images-1.medium.com/max/1200/1*fw8YJnfF32dWsX_beQpWOw.png +[4]: https://cdn-images-1.medium.com/max/1600/1*sbjPYNq76s9lR7D_FK4ltg.png +[5]: https://cdn-images-1.medium.com/max/1600/1*kUFBuGfjZSS-n-32CCjtwQ.png +[6]: https://cdn-images-1.medium.com/max/1600/1*4OGiB3HHQZBFsALjaRb9pA.jpeg +[7]: https://cdn-images-1.medium.com/max/1600/1*VMCJN41mGZs4p2lQHD0nDw.png diff --git a/sources/tech/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md b/sources/tech/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md new file mode 100644 index 0000000000..2110c17606 --- /dev/null +++ b/sources/tech/20190408 A beginner-s guide to building DevOps pipelines with open source tools.md @@ -0,0 +1,352 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A beginner's guide to building DevOps pipelines with open source tools) +[#]: via: (https://opensource.com/article/19/4/devops-pipeline) +[#]: author: (Bryant Son https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter) + +A beginner's guide to building DevOps pipelines with open source tools +====== +If you're new to DevOps, check out this five-step process for building +your first pipeline. +![Shaking hands, networking][1] + +DevOps has become the default answer to fixing software development processes that are slow, siloed, or otherwise dysfunctional. But that doesn't mean very much when you're new to DevOps and aren't sure where to begin. This article explores what a DevOps pipeline is and offers a five-step process to create one. While this tutorial is not comprehensive, it should give you a foundation to start on and expand later. But first, a story. + +### My DevOps journey + +I used to work for the cloud team at Citi Group, developing an Infrastructure-as-a-Service (IaaS) web application to manage Citi's cloud infrastructure, but I was always interested in figuring out ways to make the development pipeline more efficient and bring positive cultural change to the development team. I found my answer in a book recommended by Greg Lavender, who was the CTO of Citi's cloud architecture and infrastructure engineering, called _[The Phoenix Project][2]_. The book reads like a novel while it explains DevOps principles. + +A table at the back of the book shows how often different companies deploy to the release environment: + +Company | Deployment Frequency +---|--- +Amazon | 23,000 per day +Google | 5,500 per day +Netflix | 500 per day +Facebook | 1 per day +Twitter | 3 per week +Typical enterprise | 1 every 9 months + +How are the frequency rates of Amazon, Google, and Netflix even possible? It's because these companies have figured out how to make a nearly perfect DevOps pipeline. + +This definitely wasn't the case before we implemented DevOps at Citi. 
Back then, my team had different staged environments, but deployments to the development server were very manual. All developers had access to just one development server based on IBM WebSphere Application Server Community Edition. The problem was the server went down whenever multiple users simultaneously tried to make deployments, so the developers had to let each other know whenever they were about to make a deployment, which was quite a pain. In addition, there were problems with low code test coverages, cumbersome manual deployment processes, and no way to track code deployments with a defined task or a user story. + +I realized something had to be done, and I found a colleague who felt the same way. We decided to collaborate to build an initial DevOps pipeline—he set up a virtual machine and a Tomcat application server while I worked on Jenkins, integrating with Atlassian Jira and BitBucket, and code testing coverages. This side project was hugely successful: we almost fully automated the development pipeline, we achieved nearly 100% uptime on our development server, we could track and improve code testing coverage, and the Git branch could be associated with the deployment and Jira task. And most of the tools we used to construct our DevOps pipeline were open source. + +I now realize how rudimentary our DevOps pipeline was, as we didn't take advantage of advanced configurations like Jenkins files or Ansible. However, this simple process worked well, maybe due to the [Pareto][3] principle (also known as the 80/20 rule). + +### A brief introduction to DevOps and the CI/CD pipeline + +If you ask several people, "What is DevOps? you'll probably get several different answers. DevOps, like agile, has evolved to encompass many different disciplines, but most people will agree on a few things: DevOps is a software development practice or a software development lifecycle (SDLC) and its central tenet is cultural change, where developers and non-developers all breathe in an environment where formerly manual things are automated; everyone does what they are best at; the number of deployments per period increases; throughput increases; and flexibility improves. + +While having the right software tools is not the only thing you need to achieve a DevOps environment, some tools are necessary. A key one is continuous integration and continuous deployment (CI/CD). This pipeline is where the environments have different stages (e.g., DEV, INT, TST, QA, UAT, STG, PROD), manual things are automated, and developers can achieve high-quality code, flexibility, and numerous deployments. + +This article describes a five-step approach to creating a DevOps pipeline, like the one in the following diagram, using open source tools. + +![Complete DevOps pipeline][4] + +Without further ado, let's get started. + +### Step 1: CI/CD framework + +The first thing you need is a CI/CD tool. Jenkins, an open source, Java-based CI/CD tool based on the MIT License, is the tool that popularized the DevOps movement and has become the de facto standard. + +So, what is Jenkins? Imagine it as some sort of a magical universal remote control that can talk to many many different services and tools and orchestrate them. On its own, a CI/CD tool like Jenkins is useless, but it becomes more powerful as it plugs into different tools and services. + +Jenkins is just one of many open source CI/CD tools that you can leverage to build a DevOps pipeline. 
+ +Name | License +---|--- +[Jenkins][5] | Creative Commons and MIT +[Travis CI][6] | MIT +[CruiseControl][7] | BSD +[Buildbot][8] | GPL +[Apache Gump][9] | Apache 2.0 +[Cabie][10] | GNU + +Here's what a DevOps process looks like with a CI/CD tool. + +![CI/CD tool][11] + +You have a CI/CD tool running in your localhost, but there is not much you can do at the moment. Let's follow the next step of DevOps journey. + +### Step 2: Source control management + +The best (and probably the easiest) way to verify that your CI/CD tool can perform some magic is by integrating with a source control management (SCM) tool. Why do you need source control? Suppose you are developing an application. Whenever you build an application, you are programming—whether you are using Java, Python, C++, Go, Ruby, JavaScript, or any of the gazillion programming languages out there. The programming codes you write are called source codes. In the beginning, especially when you are working alone, it's probably OK to put everything in your local directory. But when the project gets bigger and you invite others to collaborate, you need a way to avoid merge conflicts while effectively sharing the code modifications. You also need a way to recover a previous version—and the process of making a backup and copying-and-pasting gets old. You (and your teammates) want something better. + +This is where SCM becomes almost a necessity. A SCM tool helps by storing your code in repositories, versioning your code, and coordinating among project members. + +Although there are many SCM tools out there, Git is the standard and rightly so. I highly recommend using Git, but there are other open source options if you prefer. + +Name | License +---|--- +[Git][12] | GPLv2 & LGPL v2.1 +[Subversion][13] | Apache 2.0 +[Concurrent Versions System][14] (CVS) | GNU +[Vesta][15] | LGPL +[Mercurial][16] | GNU GPL v2+ + +Here's what the DevOps pipeline looks like with the addition of SCM. + +![Source control management][17] + +The CI/CD tool can automate the tasks of checking in and checking out source code and collaborating across members. Not bad? But how can you make this into a working application so billions of people can use and appreciate it? + +### Step 3: Build automation tool + +Excellent! You can check out the code and commit your changes to the source control, and you can invite your friends to collaborate on the source control development. But you haven't yet built an application. To make it a web application, it has to be compiled and put into a deployable package format or run as an executable. (Note that an interpreted programming language like JavaScript or PHP doesn't need to be compiled.) + +Enter the build automation tool. No matter which build tool you decide to use, all build automation tools have a shared goal: to build the source code into some desired format and to automate the task of cleaning, compiling, testing, and deploying to a certain location. The build tools will differ depending on your programming language, but here are some common open source options to consider. 
+ +Name | License | Programming Language +---|---|--- +[Maven][18] | Apache 2.0 | Java +[Ant][19] | Apache 2.0 | Java +[Gradle][20] | Apache 2.0 | Java +[Bazel][21] | Apache 2.0 | Java +[Make][22] | GNU | N/A +[Grunt][23] | MIT | JavaScript +[Gulp][24] | MIT | JavaScript +[Buildr][25] | Apache | Ruby +[Rake][26] | MIT | Ruby +[A-A-P][27] | GNU | Python +[SCons][28] | MIT | Python +[BitBake][29] | GPLv2 | Python +[Cake][30] | MIT | C# +[ASDF][31] | Expat (MIT) | LISP +[Cabal][32] | BSD | Haskell + +Awesome! You can put your build automation tool configuration files into your source control management and let your CI/CD tool build it. + +![Build automation tool][33] + +Everything is good, right? But where can you deploy it? + +### Step 4: Web application server + +So far, you have a packaged file that might be executable or deployable. For any application to be truly useful, it has to provide some kind of a service or an interface, but you need a vessel to host your application. + +For a web application, a web application server is that vessel. An application server offers an environment where the programming logic inside the deployable package can be detected, render the interface, and offer the web services by opening sockets to the outside world. You need an HTTP server as well as some other environment (like a virtual machine) to install your application server. For now, let's assume you will learn about this along the way (although I will discuss containers below). + +There are a number of open source web application servers available. + +Name | License | Programming Language +---|---|--- +[Tomcat][34] | Apache 2.0 | Java +[Jetty][35] | Apache 2.0 | Java +[WildFly][36] | GNU Lesser Public | Java +[GlassFish][37] | CDDL & GNU Less Public | Java +[Django][38] | 3-Clause BSD | Python +[Tornado][39] | Apache 2.0 | Python +[Gunicorn][40] | MIT | Python +[Python Paste][41] | MIT | Python +[Rails][42] | MIT | Ruby +[Node.js][43] | MIT | Javascript + +Now the DevOps pipeline is almost usable. Good job! + +![Web application server][44] + +Although it's possible to stop here and integrate further on your own, code quality is an important thing for an application developer to be concerned about. + +### Step 5: Code testing coverage + +Implementing code test pieces can be another cumbersome requirement, but developers need to catch any errors in an application early on and improve the code quality to ensure end users are satisfied. Luckily, there are many open source tools available to test your code and suggest ways to improve its quality. Even better, most CI/CD tools can plug into these tools and automate the process. + +There are two parts to code testing: _code testing frameworks_ that help write and run the tests, and _code quality suggestion tools_ that help improve code quality. 
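In practice, both parts are usually wired into the pipeline through the build tool, so the CI/CD server can run them on every commit. As a hedged example, in a Maven-based Java project that already has JUnit tests and the JaCoCo plugin configured in its pom.xml, a single command runs the tests and produces a coverage report (the project layout here is an assumption, not a prescription):

```
# Compile, run the unit tests, and (with the JaCoCo plugin configured)
# write an HTML coverage report under target/site/jacoco/
mvn clean verify
```

The same idea applies to the other stacks listed below: pytest with Coverage.py for Python, or Karma with a coverage reporter for JavaScript, so the CI/CD tool only ever needs to invoke one build command per project.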
+ +#### Code test frameworks + +Name | License | Programming Language +---|---|--- +[JUnit][45] | Eclipse Public License | Java +[EasyMock][46] | Apache | Java +[Mockito][47] | MIT | Java +[PowerMock][48] | Apache 2.0 | Java +[Pytest][49] | MIT | Python +[Hypothesis][50] | Mozilla | Python +[Tox][51] | MIT | Python + +#### Code quality suggestion tools + +Name | License | Programming Language +---|---|--- +[Cobertura][52] | GNU | Java +[CodeCover][53] | Eclipse Public (EPL) | Java +[Coverage.py][54] | Apache 2.0 | Python +[Emma][55] | Common Public License | Java +[JaCoCo][56] | Eclipse Public License | Java +[Hypothesis][50] | Mozilla | Python +[Tox][51] | MIT | Python +[Jasmine][57] | MIT | JavaScript +[Karma][58] | MIT | JavaScript +[Mocha][59] | MIT | JavaScript +[Jest][60] | MIT | JavaScript + +Note that most of the tools and frameworks mentioned above are written for Java, Python, and JavaScript, since C++ and C# are proprietary programming languages (although GCC is open source). + +Now that you've implemented code testing coverage tools, your DevOps pipeline should resemble the DevOps pipeline diagram shown at the beginning of this tutorial. + +### Optional steps + +#### Containers + +As I mentioned above, you can host your application server on a virtual machine or a server, but containers are a popular solution. + +[What are][61] [containers][61]? The short explanation is that a VM needs the huge footprint of an operating system, which overwhelms the application size, while a container just needs a few libraries and configurations to run the application. There are clearly still important uses for a VM, but a container is a lightweight solution for hosting an application, including an application server. + +Although there are other options for containers, Docker and Kubernetes are the most popular. + +Name | License +---|--- +[Docker][62] | Apache 2.0 +[Kubernetes][63] | Apache 2.0 + +To learn more, check out these other [Opensource.com][64] articles about Docker and Kubernetes: + + * [What Is Docker?][65] + * [An introduction to Docker][66] + * [What is Kubernetes?][67] + * [From 0 to Kubernetes][68] + + + +#### Middleware automation tools + +Our DevOps pipeline mostly focused on collaboratively building and deploying an application, but there are many other things you can do with DevOps tools. One of them is leveraging Infrastructure as Code (IaC) tools, which are also known as middleware automation tools. These tools help automate the installation, management, and other tasks for middleware software. For example, an automation tool can pull applications, like a web application server, database, and monitoring tool, with the right configurations and deploy them to the application server. + +Here are several open source middleware automation tools to consider: + +Name | License +---|--- +[Ansible][69] | GNU Public +[SaltStack][70] | Apache 2.0 +[Chef][71] | Apache 2.0 +[Puppet][72] | Apache or GPL + +For more on middleware automation tools, check out these other [Opensource.com][64] articles: + + * [A quickstart guide to Ansible][73] + * [Automating deployment strategies with Ansible][74] + * [Top 5 configuration management tools][75] + + + +### Where can you go from here? + +This is just the tip of the iceberg for what a complete DevOps pipeline can look like. Start with a CI/CD tool and explore what else you can automate to make your team's job easier. Also, look into [open source communication tools][76] that can help your team work better together. 
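If you would rather experiment before committing to a design, the container route mentioned above is also the quickest way to try the individual pieces. For instance, the official Jenkins LTS image can be started locally with nothing more than Docker installed (the ports and volume name here are just the commonly documented defaults, so adjust them to taste):

```
# Run Jenkins LTS in a container, persisting its home directory in a named volume
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```

Once it is up, browse to http://localhost:8080 and follow the unlock prompt to finish the setup.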
+ +For more insight, here are some very good introductory articles about DevOps: + + * [What is DevOps][77] + * [5 things to master to be a DevOps engineer][78] + * [DevOps is for everyone][79] + * [Getting started with predictive analytics in DevOps][80] + + + +Integrating DevOps with open source agile tools is also a good idea: + + * [What is agile?][81] + * [4 steps to becoming an awesome agile developer][82] + + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/devops-pipeline + +作者:[Bryant Son (Red Hat, Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/brson/users/milindsingh/users/milindsingh/users/dscripter +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/network_team_career_hand.png?itok=_ztl2lk_ (Shaking hands, networking) +[2]: https://www.amazon.com/dp/B078Y98RG8/ +[3]: https://en.wikipedia.org/wiki/Pareto_principle +[4]: https://opensource.com/sites/default/files/uploads/1_finaldevopspipeline.jpg (Complete DevOps pipeline) +[5]: https://github.com/jenkinsci/jenkins +[6]: https://github.com/travis-ci/travis-ci +[7]: http://cruisecontrol.sourceforge.net +[8]: https://github.com/buildbot/buildbot +[9]: https://gump.apache.org +[10]: http://cabie.tigris.org +[11]: https://opensource.com/sites/default/files/uploads/2_runningjenkins.jpg (CI/CD tool) +[12]: https://git-scm.com +[13]: https://subversion.apache.org +[14]: http://savannah.nongnu.org/projects/cvs +[15]: http://www.vestasys.org +[16]: https://www.mercurial-scm.org +[17]: https://opensource.com/sites/default/files/uploads/3_sourcecontrolmanagement.jpg (Source control management) +[18]: https://maven.apache.org +[19]: https://ant.apache.org +[20]: https://gradle.org/ +[21]: https://bazel.build +[22]: https://www.gnu.org/software/make +[23]: https://gruntjs.com +[24]: https://gulpjs.com +[25]: http://buildr.apache.org +[26]: https://github.com/ruby/rake +[27]: http://www.a-a-p.org +[28]: https://www.scons.org +[29]: https://www.yoctoproject.org/software-item/bitbake +[30]: https://github.com/cake-build/cake +[31]: https://common-lisp.net/project/asdf +[32]: https://www.haskell.org/cabal +[33]: https://opensource.com/sites/default/files/uploads/4_buildtools.jpg (Build automation tool) +[34]: https://tomcat.apache.org +[35]: https://www.eclipse.org/jetty/ +[36]: http://wildfly.org +[37]: https://javaee.github.io/glassfish +[38]: https://www.djangoproject.com/ +[39]: http://www.tornadoweb.org/en/stable +[40]: https://gunicorn.org +[41]: https://github.com/cdent/paste +[42]: https://rubyonrails.org +[43]: https://nodejs.org/en +[44]: https://opensource.com/sites/default/files/uploads/5_applicationserver.jpg (Web application server) +[45]: https://junit.org/junit5 +[46]: http://easymock.org +[47]: https://site.mockito.org +[48]: https://github.com/powermock/powermock +[49]: https://docs.pytest.org +[50]: https://hypothesis.works +[51]: https://github.com/tox-dev/tox +[52]: http://cobertura.github.io/cobertura +[53]: http://codecover.org/ +[54]: https://github.com/nedbat/coveragepy +[55]: http://emma.sourceforge.net +[56]: https://github.com/jacoco/jacoco +[57]: https://jasmine.github.io +[58]: https://github.com/karma-runner/karma +[59]: https://github.com/mochajs/mocha +[60]: 
https://jestjs.io +[61]: /resources/what-are-linux-containers +[62]: https://www.docker.com +[63]: https://kubernetes.io +[64]: http://Opensource.com +[65]: https://opensource.com/resources/what-docker +[66]: https://opensource.com/business/15/1/introduction-docker +[67]: https://opensource.com/resources/what-is-kubernetes +[68]: https://opensource.com/article/17/11/kubernetes-lightning-talk +[69]: https://www.ansible.com +[70]: https://www.saltstack.com +[71]: https://www.chef.io +[72]: https://puppet.com +[73]: https://opensource.com/article/19/2/quickstart-guide-ansible +[74]: https://opensource.com/article/19/1/automating-deployment-strategies-ansible +[75]: https://opensource.com/article/18/12/configuration-management-tools +[76]: https://opensource.com/alternatives/slack +[77]: https://opensource.com/resources/devops +[78]: https://opensource.com/article/19/2/master-devops-engineer +[79]: https://opensource.com/article/18/11/how-non-engineer-got-devops +[80]: https://opensource.com/article/19/1/getting-started-predictive-analytics-devops +[81]: https://opensource.com/article/18/10/what-agile +[82]: https://opensource.com/article/19/2/steps-agile-developer diff --git a/sources/tech/20190408 Beyond SD-WAN- VMware-s vision for the network edge.md b/sources/tech/20190408 Beyond SD-WAN- VMware-s vision for the network edge.md new file mode 100644 index 0000000000..4ec5b372e0 --- /dev/null +++ b/sources/tech/20190408 Beyond SD-WAN- VMware-s vision for the network edge.md @@ -0,0 +1,114 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Beyond SD-WAN: VMware’s vision for the network edge) +[#]: via: (https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html#tk.rss_all) +[#]: author: (Linda Musthaler https://www.networkworld.com/author/Linda-Musthaler/) + +Beyond SD-WAN: VMware’s vision for the network edge +====== +Under the ownership of VMware, the VeloCloud Business Unit is greatly expanding its vision of what an SD-WAN should be. VMware calls the strategy “the network edge.” +![istock][1] + +VeloCloud is now a Business Unit within VMware since being acquired in December 2017. The two companies have had sufficient time to integrate their operations and fit their technologies together to build a cohesive offering. In January, Neal Weinberg provided [an overview of where VMware is headed with its reinvention][2]. Now let’s look at it from the VeloCloud [SD-WAN][3] perspective. + +I recently talked to Sanjay Uppal, vice president and general manager of the VeloCloud Business Unit. He shared with me where VeloCloud is heading, adding that it’s all possible because of the complementary products that VMware brings to VeloCloud’s table. + +**[ Read also:[Edge computing is the place to address a host of IoT security concerns][4] ]** + +It all starts with this architecture chart that shows the VMware vision for the network edge. + +![][5] + +The left side of the chart shows that in the branch office, you can put an edge device that can be either a VeloCloud hardware appliance or VeloCloud software running on some third-party hardware. Then the right side of the chart shows where the workloads are — the traditional data center, the public cloud, and SaaS applications. You can put one or more edge devices there and then you have the classic hub-and-spoke model with the VeloCloud SD-WAN on running on top. 
+ +In the middle of the diagram are the gateways, which are a differentiator and a unique benefit of VeloCloud. + +“If you have applications in the public cloud or SaaS, then you can use our gateways instead of spinning up individual edges at each of the applications,” Uppal said. “Those gateways really perform a multi-tenanted edge function. So, instead of locating an individual edge at every termination point at the cloud, you basically go from an edge in the branch to a gateway in the cloud, and then from that gateway you go to your final destination. We've engineered it so that the gateways are close to where the end applications are — typically within five milliseconds.” + +Going back to the architecture diagram, there are two clouds in the middle of the chart. The left-hand cloud is the over-the-top (OTT) service run by VeloCloud. It uses 800 gateways deployed over 30 points of presence (PoPs) around the world. The right-hand cloud is the telco cloud, which deploys gateways as network-based services. VeloCloud has several telco partners that take the same VeloCloud gateways and deploy them in their cloud. + +“Between a telco service, a cloud service, and hub and spoke on premise, we essentially have covered all the bases in terms of how enterprises would want to consume software-defined WAN. This flexibility is part of the reason why we've been successful in this market,” Uppal said. + +Where is VeloCloud going with this strategy? Again, looking at the architecture chart, the “vision” pieces are labeled 1 through 5. Let’s look at each of those areas. + +### Edge compute + +Starting with number 1 on the left-hand side of the diagram, there is the expansion from the edge itself going deeper into the branch by crossing over a LAN or a Wi-Fi boundary to get to where the individual users and IoT “things” are. This approach uses the same VeloCloud platform to spin up [compute at the edge][6], which can be either a container or a virtual machine (VM). + +“Of course, VMwareis very strong in compute in the data center. Our CEO recently articulated the VMware edge story, which is compute edge and device edge. When you combine it with the network edge, which is VeloCloud, then you have a full edge solution,” Uppal explained. “So, this first piece that you see is our foray into getting deeper into the branch all the way up to the individual users and things and combining compute functions on to the VeloCloud solution. There's been a lot of talk about edge compute and we do know that the pendulum is swinging back, but one of the major challenges is how to manage it all. VMware has strong technology in the data center space that we are bringing to bear out there at the edge.” + +### 5G underlay intelligence + +The next piece, number 2 on the diagram, is [5G][7]. At the Mobile World Congress, VMware and AT&T announced they are bringing SD-WAN out running on 5G. The idea here is that 5G should give you a low-latency connection and you get on-demand control, so you can tell 5G on the fly that you want this type of connection. Once that is done, the right network slices would be put in place and then you can get a connection according to the specifications that you asked for. + +“We as VeloCloud would measure the underlay continuously. It's like a speed test on steroids. We would measure bandwidth, packet loss, jitter and latency continuously with low overhead because we piggyback on real user traffic. 
And then on the basis of that measurement, we would steer the traffic one way or another,” Uppal said. “For example, your real-time voice is important, so let's pick the best performing network at that instant of time, which might change in the next instant, so that's why we have to make that decision on a per-packet basis.” + +Uppal continued, “What 5G allows us to do is to look at that underlay as not just being one underlay, but it could be several different underlays, and it's programmable so you could ask it for a type of underlay. That is actually pretty revolutionary — that we would run an overlay with the intelligence of SD-WAN counting on the underlay intelligence of 5G. + +“We are working pretty closely with our partner AT&T in this space. We are talking about the business aspect of 5G being used as a transport mechanism for enterprise data, rather than consumer phones having 5G on them. This is available from AT&T today in a handful of cities. So as 5G becomes more ubiquitous, you'll begin to see it deployed more and more. Then we will do an Ethernet or Wi-Fi handoff to the hotspot, and from then on, we'll jump onto the 5G network for the SD-WAN. Then the next phase of that will be 5G natively on our devices, which is what we are working on today.” + +### Gateway federation + +The third part of the vision is gateway federation, some of which is available today. The left-hand cloud in the diagram, which is the OTT service, should be able to interoperate gateway to gateway with the cloud on the right-hand side, which is the network-based service. For example, if you have a telco cloud of gateways but those gateways don't reach out into areas where the telco doesn’t have a presence, then you can reuse VeloCloud gateways that are sitting in other locations. A gateway would federate with another gateway, so it would extend the telco’s network beyond the facilities that they own. That's the first step of gateway federation, which is available from VeloCloud today. + +Uppal said the next step is a telco-to telco-federation. “There's a lot of interest from folks in the industry on how to get that federation done. We're working with the Metro Ethernet Forum (MEF) on that,” he said. + +### SD-WAN as a platform + +The next piece of the vision is SD-WAN as a platform. VeloCloud already incorporates security services into its SD-WAN platform in the form of [virtual network functions][8] (VNFs) from Palo Alto, Check Point Software, and other partners. Deploying a service as a VNF eliminates having separate hardware on the network. Now the company is starting to bring more services onto its platform. + +“Analytics is the area we are bringing in next,” Uppal said. “We partnered with SevOne and Plixer so that they can take analytics that we are providing, correlate them with other analytics that they have and then come up with inferences on whether things worked correctly or not, or to check for anomalous behavior.” + +Two additional areas that VeloCloud is working on are unified communications as a service (UCaaS) and universal customer premises equipment (uCPE). + +“We announced that we are working with RingCentral in the UCaaS space, and with ADVA and Telco Systems for uCPE. We have our own uCPE offering today but with a limited number of VNFs, so ADVA and Telco Systems will help us expand those capabilities,” Uppal explained. 
“With SD-WAN becoming a platform for on-premise deployments, you can virtualize functions and manage them from the same place, whether they're VNF-type of functions or compute-type of functions. This is an important direction that we are moving towards.” + +### Hybrid and multi-cloud integration + +The final piece of the strategy is hybrid and multi-cloud integration. Since its inception, VeloCloud has had gateways to facilitate access to specific applications running in the cloud. These gateways provide a secure end-to-end connection and an ROI advantage. + +Recognizing that workloads have expanded to multi-cloud and hybrid cloud, VeloCloud is broadening this approach utilizing VMware’s relationships with Microsoft, Amazon, and Google and offerings on Azure, Amazon Web Services, and Google Cloud, respectively. From a networking standpoint, you can get the same consistency of access using VeloCloud because you can decide from the gateway whichever direction you want to go. That direction will be chosen — and services added — based on your business policy. + +“We think this is the next hurdle in terms of deployment of SD-WAN, and once that is solved, people are going to deploy a lot more for hybrid and multi-cloud,” said Uppal. “We want to be the first ones out of the gate to get that done.” + +Uppal further said, “These five areas are where we see our SD-WAN headed, and we call this a network edge because it's beyond just the traditional SD-WAN functions. It includes edge computing, SD-WAN becoming a broader platform, integrating with hybrid multi cloud — these are all aspects of features that go way beyond just the narrower definition of SD-WAN.” + +**More about edge networking:** + + * [How edge networking and IoT will reshape data centers][9] + * [Edge computing best practices][10] + * [How edge computing can help secure the IoT][11] + + + +Join the Network World communities on [Facebook][12] and [LinkedIn][13] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3387641/beyond-sd-wan-vmwares-vision-for-the-network-edge.html#tk.rss_all + +作者:[Linda Musthaler][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Linda-Musthaler/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/01/istock-864405678-100747484-large.jpg +[2]: https://www.networkworld.com/article/3340259/vmware-s-transformation-takes-hold.html +[3]: https://www.networkworld.com/article/3031279/sd-wan-what-it-is-and-why-you-ll-use-it-one-day.html +[4]: https://www.networkworld.com/article/3307859/edge-computing-helps-a-lot-of-iot-security-problems-by-getting-it-involved.html +[5]: https://images.idgesg.net/images/article/2019/04/vmware-vision-for-network-edge-100793086-large.jpg +[6]: https://www.networkworld.com/article/3224893/what-is-edge-computing-and-how-it-s-changing-the-network.html +[7]: https://www.networkworld.com/article/3203489/what-is-5g-how-is-it-better-than-4g.html +[8]: https://www.networkworld.com/article/3206709/what-s-the-difference-between-sdn-and-nfv.html +[9]: https://www.networkworld.com/article/3291790/data-center/how-edge-networking-and-iot-will-reshape-data-centers.html +[10]: https://www.networkworld.com/article/3331978/lan-wan/edge-computing-best-practices.html +[11]: https://www.networkworld.com/article/3331905/internet-of-things/how-edge-computing-can-help-secure-the-iot.html +[12]: https://www.facebook.com/NetworkWorld/ +[13]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190408 InitRAMFS, Dracut, and the Dracut Emergency Shell.md b/sources/tech/20190408 InitRAMFS, Dracut, and the Dracut Emergency Shell.md new file mode 100644 index 0000000000..b0e1948ff4 --- /dev/null +++ b/sources/tech/20190408 InitRAMFS, Dracut, and the Dracut Emergency Shell.md @@ -0,0 +1,135 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (InitRAMFS, Dracut, and the Dracut Emergency Shell) +[#]: via: (https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/) +[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/) + +InitRAMFS, Dracut, and the Dracut Emergency Shell +====== + +![][1] + +The [Linux startup process][2] goes through several stages before reaching the final [graphical or multi-user target][3]. The initramfs stage occurs just before the root file system is mounted. Dracut is a tool that is used to manage the initramfs. The dracut emergency shell is an interactive mode that can be initiated while the initramfs is loaded. + +This article will show how to use the dracut command to modify the initramfs. Some basic troubleshooting commands that can be run from the dracut emergency shell will also be demonstrated. + +### The InitRAMFS + +[Initramfs][4] stands for Initial Random-Access Memory File System. On modern Linux systems, it is typically stored in a file under the /boot directory. The kernel version for which it was built will be included in the file name. A new initramfs is generated every time a new kernel is installed. + +![A Linux Boot Directory][5] + +By default, Fedora keeps the previous two versions of the kernel and its associated initramfs. 
This default can be changed by modifying the value of the _installonly_limit_ setting the /etc/dnf/dnf.conf file. + +You can use the _lsinitrd_ command to list the contents of your initramfs archive: + +![The LsInitRD Command][6] + +The above screenshot shows that my initramfs archive contains the _nouveau_ GPU driver. The _modinfo_ command tells me that the nouveau driver supports several models of NVIDIA video cards. The _lspci_ command shows that there is an NVIDIA GeForce video card in my computer’s PCI slot. There are also several basic Unix commands included in the archive such as _cat_ and _cp_. + +By default, the initramfs archive only includes the drivers that are needed for your specific computer. This allows the archive to be smaller and decreases the time that it takes for your computer to boot. + +### The Dracut Command + +The _dracut_ command can be used to modify the contents of your initramfs. For example, if you are going to move your hard drive to a new computer, you might want to temporarily include all drivers in the initramfs to be sure that the operating system can load on the new computer. To do so, you would run the following command: + +``` +# dracut --force --no-hostonly +``` + +The _force_ parameter tells dracut that it is OK to overwrite the existing initramfs archive. The _no-hostonly_ parameter overrides the default behavior of including only drivers that are germane to the currently-running computer and causes dracut to instead include all drivers in the initramfs. + +By default dracut operates on the initramfs for the currently-running kernel. You can use the _uname_ command to display which version of the Linux kernel you are currently running: + +``` +$ uname -r +5.0.5-200.fc29.x86_64 +``` + +Once you have your hard drive installed and running in your new computer, you can re-run the dracut command to regenerate the initramfs with only the drivers that are needed for the new computer: + +``` +# dracut --force +``` + +There are also parameters to add arbitrary drivers, dracut modules, and files to the initramfs archive. You can also create configuration files for dracut and save them under the /etc/dracut.conf.d directory so that your customizations will be automatically applied to all new initramfs archives that are generated when new kernels are installed. As always, check the man page for the details that are specific to the version of dracut you have installed on your computer: + +``` +$ man dracut +``` + +### The Dracut Emergency Shell + +![The Dracut Emergency Shell][7] + +Sometimes something goes wrong during the initramfs stage of your computer’s boot process. When this happens, you will see “Entering emergency mode” printed to the screen followed by a shell prompt. This gives you a chance to try and fix things up manually and continue the boot process. + +As a somewhat contrived example, let’s suppose that I accidentally deleted an important kernel parameter in my boot loader configuration: + +``` +# sed -i 's/ rd.lvm.lv=fedora\/root / /' /boot/grub2/grub.cfg +``` + +The next time I reboot my computer, it will seem to hang for several minutes while it is trying to find the root partition and eventually give up and drop to an emergency shell. + +From the emergency shell, I can enter _journalctl_ and then use the **Space** key to page down though the startup logs. Near the end of the log I see a warning that reads “/dev/mapper/fedora-root does not exist”. 
I can then use the _ls_ command to find out what does exist: + +``` +# ls /dev/mapper +control fedora-swap +``` + +Hmm, the fedora-root LVM volume appears to be missing. Let’s see what I can find with the lvm command: + +``` +# lvm lvscan +ACTIVE '/dev/fedora/swap' [3.85 GiB] inherit +inactive '/dev/fedora/home' [22.85 GiB] inherit +inactive '/dev/fedora/root' [46.80 GiB] inherit +``` + +Ah ha! There’s my root partition. It’s just inactive. All I need to do is activate it and exit the emergency shell to continue the boot process: + +``` +# lvm lvchange -a y fedora/root +# exit +``` + +![The Fedora Login Screen][8] + +The above example only demonstrates the basic concept. You can check the [troubleshooting section][9] of the [dracut guide][10] for a few more examples. + +It is possible to access the dracut emergency shell manually by adding the _rd.break_ parameter to your kernel command line. This can be useful if you need to access your files before any system services have been started. + +Check the _dracut.kernel_ man page for details about what kernel options your version of dracut supports: + +``` +$ man dracut.kernel +``` + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/ + +作者:[Gregory Bartholomew][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/glb/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/dracut-816x345.png +[2]: https://en.wikipedia.org/wiki/Linux_startup_process +[3]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/sect-managing_services_with_systemd-targets +[4]: https://en.wikipedia.org/wiki/Initial_ramdisk +[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/boot.jpg +[6]: https://fedoramagazine.org/wp-content/uploads/2019/04/lsinitrd.jpg +[7]: https://fedoramagazine.org/wp-content/uploads/2019/04/dracut-shell.jpg +[8]: https://fedoramagazine.org/wp-content/uploads/2019/04/fedora-login-1024x768.jpg +[9]: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut.html#_troubleshooting +[10]: http://www.kernel.org/pub/linux/utils/boot/dracut/dracut.html diff --git a/sources/tech/20190408 Linux Server Hardening Using Idempotency with Ansible- Part 1.md b/sources/tech/20190408 Linux Server Hardening Using Idempotency with Ansible- Part 1.md new file mode 100644 index 0000000000..ca0d81d89a --- /dev/null +++ b/sources/tech/20190408 Linux Server Hardening Using Idempotency with Ansible- Part 1.md @@ -0,0 +1,94 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 1) +[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-1) +[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie) + +Linux Server Hardening Using Idempotency with Ansible: Part 1 +====== + +![][1] + +[Creative Commons Zero][2] + +I think it’s safe to say that the need to frequently update the packages on our machines has been firmly drilled into us. To ensure the use of latest features and also keep security bugs to a minimum, skilled engineers and even desktop users are well-versed in the need to update their software. 
+ +Hardware, software and SaaS (Software as a Service) vendors have also firmly embedded the word “firewall” into our vocabulary for both domestic and industrial uses to protect our computers. In my experience, however, even within potentially more sensitive commercial environments, few engineers actively tweak the operating system (OS) they’re working on, to any great extent at least, to bolster security. + +Standard fare on Linux systems, for example, might mean looking at configuring a larger swap file to cope with your hungry application’s demands. Or, maybe adding a separate volume to your server for extra disk space, specifying a more performant CPU at launch time, installing a few of your favorite DevOps tools, or chucking a couple of certificates onto the filesystem for each new server you build. This isn’t quite the same thing. + +### Improve your Security Posture + +What I am specifically referring to is a mixture of compliance and security, I suppose. In short, there’s a surprisingly large number of areas in which a default OS can improve its security posture. We can agree that tweaking certain aspects of an OS are a little riskier than others. Consider your network stack, for example. Imagine that, completely out of the blue, your server’s networking suddenly does something unexpected and causes you troubleshooting headaches or even some downtime. This might happen because a new application or updated package suddenly expects routing to behave in a less-common way or needs a specific protocol enabled to function correctly. + +However, there are many changes that you can make to your servers without suffering any sleepless nights. The version and flavor of an OS helps determine which changes and to what extent you might want to comfortably make. Most importantly though what’s good for the goose is rarely good for the gander. In other words every single server estate has different, both broad and subtle, requirements which makes each use case unique. And, don’t forget that a database server also has very different needs to a web server so you can have a number of differing needs even within one small cluster of servers. + +Over the last few years I’ve introduced these hardening and compliance tweaks more than a handful of times across varying server estates in my DevSecOps roles. The OSs have included: Debian, Red Hat Enterprise Linux (RHEL) and their respective derivatives (including what I suspect will be the increasingly popular RHEL derivative, Amazon Linux). There have been times that, admittedly including a multitude of relatively tiny tweaks, the number of changes to a standard server build was into the hundreds. It all depended on the time permitted for the work, the appetite for any risks and the generic or specific nature of the OS tweaks. + +In this article, we’ll discuss the theory around something called idempotency which, in hand with an automation tool such as Ansible, can provide the ongoing improvements to your server estate’s security posture. For good measure we’ll also look at a number of Ansible playbook examples and additionally refer to online resources so that you can introduce idempotency to a server estate near you. + +### Say What? + +In simple terms the word “idempotent” just means returning something back to how it was prior to a change. It can also mean that lots of things you wanted to be the same, for consistency, are exactly the same, too. 
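To make that concrete before we get to the server estate, here is a deliberately small, hedged sketch of what idempotency looks like from the command line with an Ansible ad-hoc task (the inventory file and package name are placeholders):

```
# First run: installs auditd wherever it is missing, so those hosts report "changed"
ansible all -i inventory.ini -b -m package -a "name=auditd state=present"

# Second run: the desired state already holds, so the same command reports "ok"
# and changes nothing - that is the idempotent behaviour we want to lean on
ansible all -i inventory.ini -b -m package -a "name=auditd state=present"
```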
+ +Picture that in action for a moment on a server estate; we’ll use AWS (Amazon Web Services) as our example. You create a new server image (Amazon Machine Images == AMIs) precisely how you want it with compliance and hardening introduced, custom packages, the removal of unwanted packages, SSH keys, user accounts etc and then spin up twenty servers using that AMI. + +You know for certain that all the servers, at least at the time that they are launched, are absolutely identical. Trust me when I say that this is a “good thing” ™. The lack of what’s known as “config drift” means that if one package on a server needs updated for security reasons then all the servers need that package updated too. Or if there’s a typo in a config file that’s breaking an application then it affects all servers equally. There’s less administrative overhead, less security risk and greater levels of predictability in terms of achieving better uptime. + +What about config drift from a security perspective? As you’ve guessed it’s definitely not welcome. That’s because engineers making manual changes to a “base OS build” can only lead to heartache and stress. The predictability of how a system is working suffers greatly as a result and servers running unique config become less reliable. These server systems are known as “snowflakes” as they’re unique but far less beautiful than actual snow. + +Equally an attacker might have managed to breach one aspect, component or service on a server but not all of its facets. By rewriting our base config again and again we’re able to, with 100% certainty (if it’s set up correctly), predict exactly what a server will look like and therefore how it will perform. Using various tools you can also trigger alarms if changes are detected to request that a pair of human eyes have a look to see if it’s a serious issue and then adjust the base config if needed. + +To make our machines idempotent we might overwrite our config changes every 20 or 30 minutes, for example. When it comes to running servers, that in essence, is what is meant by idempotency. + +### Central Station + +My mechanism of choice for repeatedly writing config across a large number of servers is running Ansible playbooks. It’s relatively easy to implement and removes the all-too-painful additional logic required when using shell scripts. Of the popular configuration management tools I’ve seen in action is Puppet, used successfully on a large government estate in an idempotent manner, but I prefer Ansible due to its more logical syntax (to my mind at least) and its readily available documentation. + +Before we look at some simple Ansible examples of hardening an OS with idempotency in mind we should explore how to trigger our Ansible playbooks. + +This is a larger area for debate than you might first imagine. Say, for example, you have nicely segmented server estate with production servers being carefully locked away from development servers, sitting behind a production-grade firewall. Consider the other servers on the estate, belonging to staging (pre-production) or other development environments, intentionally having different access permissions for security reasons. + +If you’re going to run a centralized server that has superuser permissions (which are required to make privileged changes to your core system files) then that server will need to have high-level access permissions potentially across your entire server estate. It must therefore be guarded very closely. 
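One way to keep that risk contained, sketched here with entirely made-up paths and key names, is to give each environment its own inventory and its own dedicated, low-privilege SSH key, so that a run against staging can never accidentally touch production:

```
# Hardening run against staging only, using a key that exists solely for staging
ansible-playbook -i inventories/staging/hosts.ini harden.yml \
  --private-key ~/.ssh/ansible_staging_ed25519 --become

# The production equivalent lives on the separately guarded production controller
ansible-playbook -i inventories/production/hosts.ini harden.yml \
  --private-key ~/.ssh/ansible_production_ed25519 --become
```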
+ +You will also want to test your playbooks against development environments (in plural) to test their efficacy which means you’ll probably need two all-powerful centralised Ansible servers, one for production and one for the multiple development environments. + +The actual approach of how to achieve other logistical issues is up for debate and I’ve heard it discussed a few times. Bear in mind that Ansible runs using plain, old SSH keys (a feature that something other configuration management tools have started to copy over time) but ideally you want a mechanism for keeping non-privileged keys on your centralised servers so you’re not logging in as the “root” user across the estate every twenty minutes or thirty minutes. + +From a network perspective I like the idea of having firewalling in place to enforce one-way traffic only into the environment that you’re affecting. This protects your centralised host so that a compromised server can’t attack that main Ansible host easily and then as a result gain access to precious SSH keys in order to damage the whole estate. + +Speaking of which, are servers actually needed for a task like this? What about using AWS Lambda () to execute your playbooks? A serverless approach stills needs to be secured carefully but unquestionably helps to limit the attack surface and also potentially reduces administrative responsibilities. + +I suspect how this all-powerful server is architected and deployed is always going to be contentious and there will never be a one-size-fits-all approach but instead a unique, bespoke solution will be required for every server estate. + +### How Now, Brown Cow + +It’s important to think about how often you run your Ansible and also how to prepare for your first execution of the playbook. Let’s get the frequency of execution out of the way first as it’s the easiest to change in the future. + +My preference would be three times an hour or instead every thirty minutes. If we include enough detail in our configuration then our playbooks might prevent an attacker gaining a foothold on a system as the original configuration overwrites any altered config. Twenty minutes seems more appropriate to my mind. + +Again, this is an aspect you need to have a think about. You might be dumping small config databases locally onto a filesystem every sixty minutes for example and that scheduled job might add an extra little bit of undesirable load to your server meaning you have to schedule around it. + +Next time, we’ll take a look at some specific changes that can be made to various systems. + +_Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. 
You can find out more about DevSecOps, containers and Linux security on his website:[https://www.devsecops.cc][3]_ + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-1 + +作者:[Chris Binnie][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/chrisbinnie +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/geometric-1732847_1280.jpg?itok=YRux0Tua +[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO +[3]: https://www.devsecops.cc/ diff --git a/sources/tech/20190408 Performance-Based Routing (PBR) - The gold rush for SD-WAN.md b/sources/tech/20190408 Performance-Based Routing (PBR) - The gold rush for SD-WAN.md new file mode 100644 index 0000000000..9844c3d3bf --- /dev/null +++ b/sources/tech/20190408 Performance-Based Routing (PBR) - The gold rush for SD-WAN.md @@ -0,0 +1,129 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Performance-Based Routing (PBR) – The gold rush for SD-WAN) +[#]: via: (https://www.networkworld.com/article/3387152/performance-based-routing-pbr-the-gold-rush-for-sd-wan.html#tk.rss_all) +[#]: author: (Matt Conran https://www.networkworld.com/author/Matt-Conran/) + +Performance-Based Routing (PBR) – The gold rush for SD-WAN +====== +The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off. +![Getty Images][1] + +BGP (Border Gateway Protocol) is considered the glue of the internet. If we view through the lens of farsightedness, however, there’s a question that still remains unanswered for the future. Will BGP have the ability to route on the best path versus the shortest path? + +There are vendors offering performance-based solutions for BGP-based networks. They have adopted various practices, such as, sending out pings to monitor the network and then modifying the BGP attributes, such as the AS prepending to make BGP do the performance-based routing (PBR). However, this falls short in a number of ways. + +The problem with BGP is that it's not capacity or performance aware and therefore its decisions can sink the application’s performance. The attributes that BGP relies upon for path selection are, for example, AS-Path length and multi-exit discriminators (MEDs), which do not always correlate with the network’s performance. + +[The time of 5G is almost here][2] + +Also, BGP changes paths only in reaction to changes in the policy or the set of available routes. It traditionally permits the use of only one path to reach a destination. Hence, traditional routing falls short as it doesn't always look for the best path which may not be the shortest path. + +### Blackout and brownouts + +As a matter of fact, we live in a world where we have more brownouts than blackouts. However, BGP was originally designed to detect only the blackouts i.e. the events wherein a link fails to reroute the traffic to another link. In a world where brownouts can last from 10 milliseconds to 10 seconds, you ought to be able to detect the failure in sub-seconds and re-route to a better path. + +This triggered my curiosity to dig out some of the real yet significant reasons why [SD-WAN][3] was introduced. 
We all know it saves cost and does many other things but were the inefficiencies in routing one of the main reasons? I decided to sit down with [Sorell][4] to discuss the need for policy-based routing (PBR). + +### SD-WAN is taking off + +The inefficiency factor in the case of traditional routing is one of the main reasons why SD-WAN is really taking off. SD-WAN vendors are adding proprietary mechanisms to their routing in order to select the best path, not the shortest path. + +Originally, we didn't have real-time traffic, such as, voice and video, which is latency and jitter sensitive. Besides, we also assumed that all links were equal. But in today's world, we witness more of a mix and match, for example, 100Gig and slower long-term evolution (LTE) links. The assumption that the shortest path is the best no longer holds true. + +### Introduction of new protocols + +To overcome the drawbacks of traditional routing, we have had the onset of new protocols, such as, [IPv6 segment routing][5] and named data networking along with specific SD-WAN vendor mechanisms that improve routing. + +For optimum routing, effective packet steering is a must. And SD-WAN overlays provide this by utilizing encapsulation which could be a combination of GRE, UDP, Ethernet, MPLS, [VxLAN][6] and IPsec. IPv6 segment routing implements a stack of segments (IPv6 address list) inserted in every packet and the named data networking can be distributed with routing protocols. + +Another critical requirement is the hop-by-hop payload encryption. You should be able to encrypt payloads for sessions that do not have transport layer encryption. Re-encrypting data can be expensive; it fragments the packets and further complicates the networks. Therefore, avoiding double encryption is also a must. + +The SD-WAN overlays furnish an all or nothing approach with [IPsec][7]. IPv6 segment routing requires application layer security that is provided by [IPsec][8] and named data network can offer since it’s object-based. + +### The various SD-WAN solutions + +The above are some of the new protocols available and some of the technologies that the SD-WAN vendors offer. Different vendors will have different mechanisms to implement PBR. Different vendors term PBR with different names, such as, “application-aware routing.” + +SD-WAN vendors are using many factors to influence the routing decision. They are not just making routing decisions on the number of hops or links the way traditional routing does by default. They monitor how the link is performing and do not just evaluate if the link is up or down. + +They are using a variety of mechanisms to perform PBR. For example, some are adding timestamps to every packet. Whereas, others are adding sequence numbers to the packets over and above what you would get in a transmission control protocol (TCP) sequence number. + +Another option is the use of the domain name system (DNS) and [transport layer security][9] (TLS) certificates to automatically identify the application and then based on the identity of the application; they have default classes for it. However, others use timestamps by adding a proprietary label. This is the same as adding a sequence number to the packets, but the sequence number is at Layer 3 instead of Layer 4. + +I can tie all my applications and sequence numbers and then use the network time protocol (NTP) to identify latency, jitter and dropped packets. Running NTP on both ends enables the identification of end-to-end vs hop-by-hop performance. 

Some vendors use the internet control message protocol (ICMP) or bidirectional forwarding detection (BFD). Hence, instead of adding a label to every packet, which can introduce overhead, they sample every quarter or half a second.

Realistically, it is yet to be determined which technology is the best to use, but what is consistent is that these mechanisms are examining elements such as the latency, dropped packets, and jitter on the links. Essentially, different vendors are using different technologies to choose the best path, but the end result is still the same.

With these approaches, one can, for example, identify a WebEx session and, since a WebEx session has voice and video, mark it as a high-priority session. All packets associated with the WebEx session get placed in a high-value queue.

The rules are set to say, “I want my WebEx session to go over the multiprotocol label switching (MPLS) link instead of a slow LTE link.” Hence, if your MPLS link faces latency or jitter problems, it automatically reroutes the flow to a better alternate path.

### Problems with TCP

One critical problem that surfaces today due to the transmission control protocol (TCP) and adaptive codecs is called waves. Let’s say you have 30 file transfers across a link. To carry out the file transfers, the TCP window size will grow to a point where the link gets maxed out. The router will start to drop packets, and the TCP window size will shrink in response. As a result, the usable bandwidth shrinks; then, while packets are not being dropped, the window size grows again until it hits the threshold and packets start getting dropped once more.

This can be a continuous process, happening again and again. With all these waves obstructing efficiency, we need products, like wide area network (WAN) optimization, to manage multiple TCP flows. Why? Because TCP is only aware of the single flow that it controls; it is not aware of the other flows moving across the path. Primarily, the TCP window size is only aware of one single file transfer.

### Problems with adaptive codecs

An adaptive codec will use upward of 6 megabytes of video if the link is clean, but as soon as the link starts to drop packets, the adaptive codec will send more packets for forward error control. Therefore, it makes the problem even worse before it backs off to change the frame rate and resolution.

An adaptive codec is the opposite of a fixed codec, which will always send out a fixed packet size. Adaptive codecs are the standard used in WebRTC and can vary the jitter buffer size and the frequency of packets based on the network conditions.

Adaptive codecs work better over Internet connections that have higher loss and jitter rates than, for example, more stable links such as MPLS. This is the reason why real-time voice and video do not use TCP: if a packet gets dropped, there is no point in sending a new one. Logically, having the additional headers of TCP does not buy you anything.

QUIC, on the other hand, can take a single flow and run it across multiple network flows. This helps video applications with rebuffering and improves throughput. In addition, it helps in boosting the response for bandwidth-intensive applications.
+ +### The introduction of new technologies + +With the introduction of [edge computing][10], augmented reality (AR), virtual reality (VR), real-time driving applications, [IoT sensors][11] on critical systems and other hypersensitive latency applications, PBR becomes a necessity. + +With AR you want the computing to be accomplished between 5 to 10 milliseconds of the endpoint. In the world of brownouts and path congestion, you need to pick a better path much more quickly. Also, service providers (SP) are rolling out 5G networks and announcing the use of different routing protocols that are being used as PBR. So the future looks bright for PBR. + +As voice and video, edge and virtual reality gain more existence in the market, PBR will become more popular. Even Facebook and Google are putting PBR inside their internal networks. Over time it will have a role in all the networks, specifically, the Internet Exchange points, both private and public. + +### Internet exchange points + +Back in the early 90s, there were only 4 internet exchange points in the US and 9 across the world overall. Now we have more than 3,000 where different providers have come together, and they exchange Internet traffic. + +When BGP was first rolled out in the mid-‘90s, because the internet exchange points were located far apart, the concept of shortest path held true more than today, where you have an internet that is highly distributed. + +The internet architecture will get changed as different service providers move to software-defined networking and update the routing protocols that they use. As far as the foreseeable future is concerned, however, the core internet exchanges will still use BGP. + +**This article is published as part of the IDG Contributor Network.[Want to Join?][12]** + +Join the Network World communities on [Facebook][13] and [LinkedIn][14] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3387152/performance-based-routing-pbr-the-gold-rush-for-sd-wan.html#tk.rss_all + +作者:[Matt Conran][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Matt-Conran/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/10/smart-city_iot_digital-transformation_networking_wireless_city-scape_skyline-100777499-large.jpg +[2]: https://www.networkworld.com/article/3354477/mobile-world-congress-the-time-of-5g-is-almost-here.html +[3]: https://network-insight.net/2017/08/sd-wan-networks-scalpel/ +[4]: https://techvisionresearch.com/ +[5]: https://network-insight.net/2015/07/segment-routing-introduction/ +[6]: https://youtu.be/5XtkCSfRy3c +[7]: https://network-insight.net/2015/01/design-guide-ipsec-fault-tolerance/ +[8]: https://network-insight.net/2015/01/ipsec-virtual-private-network-vpn-overview/ +[9]: https://network-insight.net/2015/10/back-to-basics-ssl-security/ +[10]: https://youtu.be/5mbPiKd_TFc +[11]: https://network-insight.net/2016/11/internet-of-things-iot-networking/ +[12]: /contributor-network/signup.html +[13]: https://www.facebook.com/NetworkWorld/ +[14]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190409 AI Ops- Let the data talk.md b/sources/tech/20190409 AI Ops- Let the data talk.md new file mode 100644 index 0000000000..2b3d57ef17 --- /dev/null +++ b/sources/tech/20190409 AI Ops- Let the data talk.md @@ -0,0 +1,66 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (AI Ops: Let the data talk) +[#]: via: (https://www.networkworld.com/article/3388217/ai-ops-let-the-data-talk.html#tk.rss_all) +[#]: author: (Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena ) + +AI Ops: Let the data talk +====== +The catalysts and ROI of AI-powered network analytics for automated operations were the focus of discussion for service providers at the recent FutureNet conference in London. Blue Planet’s Marie Fiala details the conversation. +![metamorworks][1] + +![Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena][2] + +_The catalysts and ROI of AI-powered network analytics for automated operations were the focus of discussion for service providers at the recent FutureNet conference in London. Blue Planet’s Marie Fiala details the conversation._ + +Do we need perfect data? Or is ‘good enough’ data good enough? Certainly, there is a need to find a pragmatic approach or else one could get stalled in analysis-paralysis. Is closed-loop automation the end goal? Or is human-guided open loop automation desired? If the quality of data defines the quality of the process, then for closed-loop automation of critical business processes, one needs near-perfect data. Is that achievable? + +These issues were discussed and debated at the recent FutureNet conference in London, where the show focused on solving network operators’ toughest challenges. Industry presenters and panelists stayed true to the themes of AI and automation, all touting the necessity of these interlinked software technologies, yet there were varied opinions on approaches. 
Network and service providers such as BT, Colt, Deutsche Telekom, KPN, Orange, Telecom Italia, Telefonica, Telenor, Telia, Telus, Turk Telkom, and Vodafone weighed in on the discussion. + +**Catalysts for AI-powered analytics** + +On one point, most service providers were in agreement: there is a need to identify a specific business use case with measurable ROI, as an initial validation point when introducing AI-powered analytics into operations. + +Host operator, Vodafone, positioned 5G as the catalyst. With the advent of 5G technology supporting 100x connections, 10Gbps super-bandwidth, and ultra-low <10ms latency, the volume, velocity and variety of data is exploding. It’s a virtuous cycle – 5G technologies generate a plethora of data, and conversely, a 5G network requires data-driven automation to function accurately and optimally (how else can virtualized network functions be managed in real-time?). + +![5G as catalyst for digitalisation][3] + +Another operator stated that the ‘AI gateway for telecom’ is the customer experience domain, citing how agents can use analytics to better serve the customer base. For another operator, capacity planning is the killer use case: first leverage AI to understand what’s going on in your network, then use predictive AI for planning so that you can make smarter investment decisions. Another point of view was that service assurance is the area where the most benefits from AI will be realized. There was even mention of ‘AI as a business’ by enabling the creation of new services, such as home assistants. At the broadest level, it was noted that AI allows network operators to remain relevant in the eyes of customers. + +**The human side of AI and automation** + +When it comes to implementation, the significant human impact of AI and automation was not overlooked. Across the board, service providers acknowledged that a new skillset is needed in network operations centers. Network engineers have to upskill to become data scientists and DevOps developers in order to best leverage the new AI-driven software tools. + +Furthermore, it is a challenge to recruit specialist AI experts, especially since web-scale providers are also vying for the same talent. On the flip side of the dire need for new skills, there is also a shortage of qualified experts in legacy technologies. Operators need automated, zero-touch management before the workforce retires! + +![FutureNet panelists discuss how automated AI can be leveraged as a competitive differentiator][4] + +**The ROI of AI** + +In many cases, the approach to AI has been a technology-driven ‘Field of Dreams’: build it and they will come. A strategic decision was made to hire experts, build data lakes, collect data, and then the business case that yielded positive returns was discovered. In other cases, the business use case came first. But no matter what the approach, the ROI was significant. + +These positive results are spurring determination for continued research to uncover ever more areas where AI can deliver tangible benefits. This is however no easy task – one operator highlighted that data collection takes 80% of the effort, with the remaining 20% spent on development of algorithms. For AI to really proliferate throughout all aspects of operations, that trend needs to be reversed. It needs to be relatively easy and quick to collect massive amounts of heterogeneous data, aggregate it, and correlate it. 
This would allow investment to be overwhelmingly applied to the development of predictive and prescriptive analytics tailored to specific use cases, and to enacting intelligent closed-loop automation. Only then will data be able to truly talk – and tell us what we haven’t even thought of yet. + +[Discover Intelligent Automation at Blue Planet][5] + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3388217/ai-ops-let-the-data-talk.html#tk.rss_all + +作者:[Marie Fiala, Director of Portfolio Marketing for Blue Planet at Ciena][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/04/istock-957627892-100793278-large.jpg +[2]: https://images.idgesg.net/images/article/2019/04/marla-100793273-small.jpg +[3]: https://images.idgesg.net/images/article/2019/04/ciena-post-5-image-1-100793275-large.jpg +[4]: https://images.idgesg.net/images/article/2019/04/ciena-post-5-image-2-100793276-large.jpg +[5]: https://www.blueplanet.com/resources/Intelligent-Automation-Driving-Digital-Automation-for-Service-Providers.html?utm_campaign=X1058319&utm_source=NWW&utm_term=BPVision&utm_medium=newsletter diff --git a/sources/tech/20190409 Juniper opens SD-WAN service for the cloud.md b/sources/tech/20190409 Juniper opens SD-WAN service for the cloud.md new file mode 100644 index 0000000000..7ed701ec14 --- /dev/null +++ b/sources/tech/20190409 Juniper opens SD-WAN service for the cloud.md @@ -0,0 +1,80 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Juniper opens SD-WAN service for the cloud) +[#]: via: (https://www.networkworld.com/article/3388030/juniper-opens-sd-wan-service-for-the-cloud.html#tk.rss_all) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +Juniper opens SD-WAN service for the cloud +====== +Juniper rolls out its Contrail SD-WAN cloud offering. +![Thinkstock][1] + +Juniper has taken the wraps off a cloud-based SD-WAN service it says will ease the management and bolster the security of wired and wireless-connected branch office networks. + +The Contrail SD-WAN cloud offering expands on the company’s existing on-premise ([SRX][2]-based) and virtual ([NFX][3]-based) SD-WAN offerings to include greater expansion possibilities – up to 10,000 spoke-attached sites and support for more variants of passive redundant hybrid WAN links – and topologies such as hub and spoke, partial, and dynamic full mesh, Juniper stated. + +**More about SD-WAN** + + * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][4] + * [How to pick an off-site data-backup method][5] + * [SD-Branch: What it is and why you’ll need it][6] + * [What are the options for security SD-WAN?][7] + + + +The service brings with it Juniper’s Contrail Service Orchestration package, which secures, automates, and runs the service life cycle across [NFX Series][3] Network Services Platforms, [EX Series][8] Ethernet Switches, [SRX Series][2] next-generation firewalls, and [MX Series][9] 5G Universal Routing Platforms. Ultimately it lets customers manage and set up SD-WANs all from a single portal. 
+ +The package is also a service orchestrator for the [vSRX][10] Virtual Firewall and [vMX][11] Virtual Router, available in public cloud marketplaces such as Amazon Web Services (AWS) and Microsoft Azure, Juniper said. The SD-WAN offering also includes integration with cloud security provider ZScaler. + +Contrail Service Orchestration offers organizations visibility across SD-WAN, as well as branch wired and now wireless infrastructure. Monitoring and intelligent analytics offer real-time insight into network operations, allowing administrators to preempt looming threats and degradations, as well as pinpoint issues for faster recovery. + +The new service also includes support for Juniper’s [recently acquired][12] Mist Systems wireless technology, which lets the service access and manage Mist’s wireless access points, allowing customers to meld wireless and wired networks. + +Juniper recently closed the agreement to buy innovative wireless-gear-maker Mist for $405 million. Mist touts itself as having developed an artificial-intelligence-based wireless platform that makes Wi-Fi more predictable, reliable, and measurable. + +With Contrail, administrators can control a growing mix of legacy and modern scale-out architectures while automating their operational workflows using software that provides smarter, easier-to-use automation, orchestration and infrastructure visibility, wrote Juniper CTO [Bikash Koley][13] in a [blog about the SD-WAN announcement][14]. + +“Management complexity and policy enforcement are traditional network administrator fears, while both data and network security are growing in importance for organizations of all sizes,” Koley stated. ** **“Cloud-delivered SD-WAN removes the complexity of software operations, arguably the most difficult part of Software Defined Networking.” + +Analysts said the Juniper announcement could help the company compete in a super-competitive, rapidly evolving SD-WAN world. + +“The announcement is more a ‘me too’ than a particular technological breakthrough,” said Lee Doyle, principal analyst with Doyle Research. “The Mist integration is what’s interesting here, and that could help them, but there are 15 to 20 other vendors that have the same technology, bigger partners, and bigger sales channels than Juniper does.” + +Indeed the SD-WAN arena is a crowded one with Cisco, VMware, Silver Peak, Riverbed, Aryaka, Nokia, and Versa among the players. + +The cloud-based Contrail SD-WAN offering is available as an annual or multi-year subscription. + +Join the Network World communities on [Facebook][15] and [LinkedIn][16] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3388030/juniper-opens-sd-wan-service-for-the-cloud.html#tk.rss_all + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/01/cloud_network_blockchain_bitcoin_storage-100745950-large.jpg +[2]: https://www.juniper.net/us/en/products-services/security/srx-series/ +[3]: https://www.juniper.net/us/en/products-services/sdn/nfx-series/ +[4]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html +[5]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html +[6]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html +[7]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true +[8]: https://www.juniper.net/us/en/products-services/switching/ex-series/ +[9]: https://www.juniper.net/us/en/products-services/routing/mx-series/ +[10]: https://www.juniper.net/us/en/products-services/security/srx-series/vsrx/ +[11]: https://www.juniper.net/us/en/products-services/routing/mx-series/vmx/ +[12]: https://www.networkworld.com/article/3353042/juniper-grabs-mist-for-wireless-ai-cloud-service-delivery-technology.html +[13]: https://www.networkworld.com/article/3324374/juniper-cto-talks-cloud-intent-computing-revolution-high-speed-networking-and-open-source-growth.html?nsdr=true +[14]: https://forums.juniper.net/t5/Engineering-Simplicity/Cloud-Delivered-Branch-Simplicity-Now-Surpasses-SD-WAN/ba-p/461188 +[15]: https://www.facebook.com/NetworkWorld/ +[16]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190409 UP Shell Script - Quickly Navigate To A Specific Parent Directory In Linux.md b/sources/tech/20190409 UP Shell Script - Quickly Navigate To A Specific Parent Directory In Linux.md new file mode 100644 index 0000000000..2bb20bc8a0 --- /dev/null +++ b/sources/tech/20190409 UP Shell Script - Quickly Navigate To A Specific Parent Directory In Linux.md @@ -0,0 +1,149 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (UP Shell Script – Quickly Navigate To A Specific Parent Directory In Linux) +[#]: via: (https://www.2daygeek.com/up-shell-script-quickly-go-back-to-a-specific-parent-directory-in-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +UP Shell Script – Quickly Navigate To A Specific Parent Directory In Linux +====== + +Recently we had written an article about **[bd command][1]** , which help us to **[quickly go back to the specific parent directory][1]**. + +Even, the [up shell script][2] allow us to perform the same but has different approach so, we would like to explore it. + +This will allow us to quickly navigate to a specific parent directory with mentioning the directory name. + +Instead we can give the directory number. I mean to say that number of times you’d have to go back. + +Stop typing `cd ../../..` endlessly and navigate easily to a specific parent directory by using up shell script. 
It supports tab completion, which makes it even more convenient.

The `up.sh` script registers the up function and some completion functions via your `.bashrc` or `.zshrc` file.

It is written entirely in shell script and supports the zsh and fish shells as well.

We had written an article about **[autocd][3]**, a builtin shell variable that helps us to **[navigate into a directory without the cd command][3]**.

### How To Install up In Linux?

It is not distribution-specific; you install it based on the shell you use.

Simply run the following commands to enable the up script on the `bash` shell.

```
$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh

$ echo 'source ~/.config/up/up.sh' >> ~/.bashrc
```

Run the following command for the changes to take effect.

```
$ source ~/.bashrc
```

Simply run the following commands to enable the up script on the `zsh` shell.

```
$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh

$ echo 'source ~/.config/up/up.sh' >> ~/.zshrc
```

Run the following command for the changes to take effect.

```
$ source ~/.zshrc
```

Simply run the following commands to enable the up script on the `fish` shell.

```
$ curl --create-dirs -o ~/.config/up/up.fish https://raw.githubusercontent.com/shannonmoeller/up/master/up.fish

$ source ~/.config/up/up.fish
```

### How To Use This In Linux?

We have successfully installed and configured the up script on the system. It's time to test it.

I'm going to use the directory path below for this testing.

Run the `pwd` command or the `dirs` command to see your current location.

```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ pwd
or
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ dirs

/usr/share/icons/Adwaita/256x256/apps
```

How do you go up one level? I'm currently in `/usr/share/icons/Adwaita/256x256/apps`, and to go up one directory, to `256x256`, I simply type the following command.

```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ up

daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256$ pwd
/usr/share/icons/Adwaita/256x256
```

How do you go up multiple levels? I'm currently in `/usr/share/icons/Adwaita/256x256/apps`, and to go to the `share` directory quickly, I simply type the following command.

```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ up 4

daygeek@Ubuntu18:/usr/share$ pwd
/usr/share
```

How do you go up by full name? Quickly go back to a given directory by name instead of by number.

```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ up icons

daygeek@Ubuntu18:/usr/share/icons$ pwd
/usr/share/icons
```

How do you go up by partial name? Quickly go back to a given directory using part of its name instead of a number.

```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ up Ad

daygeek@Ubuntu18:/usr/share/icons/Adwaita$ pwd
/usr/share/icons/Adwaita
```

As I said at the beginning of the article, it supports tab completion.

```
daygeek@Ubuntu18:/usr/share/icons/Adwaita/256x256/apps$ up
256x256/ Adwaita/ icons/ share/ usr/
```

This tutorial lets you quickly go back to a specific parent directory, but there is no option to move forward quickly.

We have another solution for that and will come up with it shortly. Please stay tuned.
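As a footnote for the curious: the core mechanism behind helpers like this is small enough to sketch in plain Bash. The function below is not the actual `up.sh` code, only a simplified illustration of the idea (climb N levels, or climb until an ancestor's name starts with the given string); the real script adds the completion functions and the zsh/fish support on top:

```
up() {
    local target=""
    if [[ "$1" =~ ^[0-9]+$ ]]; then
        # Numeric argument: climb that many levels
        local i
        for ((i = 0; i < $1; i++)); do
            target="../$target"
        done
        cd "${target:-..}" || return
    elif [ -n "$1" ]; then
        # Name (or partial name) argument: walk up until an ancestor matches
        target="$PWD"
        while [[ "$(basename "$target")" != "$1"* && "$target" != "/" ]]; do
            target="$(dirname "$target")"
        done
        cd "$target" || return
    else
        cd ..
    fi
}
```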
+ +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/up-shell-script-quickly-go-back-to-a-specific-parent-directory-in-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/bd-quickly-go-back-to-a-specific-parent-directory-in-linux/ +[2]: https://github.com/shannonmoeller/up +[3]: https://www.2daygeek.com/navigate-switch-directory-without-using-cd-command-in-linux/ diff --git a/sources/tech/20190409 What it takes to become a blockchain developer.md b/sources/tech/20190409 What it takes to become a blockchain developer.md new file mode 100644 index 0000000000..668824b99a --- /dev/null +++ b/sources/tech/20190409 What it takes to become a blockchain developer.md @@ -0,0 +1,204 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What it takes to become a blockchain developer) +[#]: via: (https://opensource.com/article/19/4/blockchain-career-developer) +[#]: author: (Joseph Mugo https://opensource.com/users/mugo) + +What it takes to become a blockchain developer +====== +If you’ve been considering a career in blockchain development, the time +to get your foot in the door is now. Here's how to get started. +![][1] + +The past decade has been an interesting time for the development of decentralized technologies. Before 2009, the progress was slow and without any clear direction until Satoshi Nakamoto created and deployed Bitcoin. That brought blockchain, the record-keeping technology behind Bitcoin, into the limelight. + +Since then, we've seen blockchain revolutionize various concepts that we used to take for granted, such as monitoring supply chains, [creating digital identities,][2] [tracking jewelry][3], and [managing shipping systems.][4] Companies such as IBM and Samsung are at the forefront of blockchain as the underlying infrastructure for the next wave of tech innovation. There is no doubt that blockchain's role will grow in the years to come. + +Thus, it's no surprise that there's a high demand for blockchain developers. LinkedIn put "blockchain developers" at the top of its 2018 [emerging jobs report][5] with an expected 33-fold growth. The freelancing site Upwork also released a report showing that blockchain was one of the [fastest growing skills][6] out of more than 5,000 in its index. + +Describing the internet in 2003, [Jeff Bezos said][7], "we are at the 1908 Hurley washing machine stage." The same can be said about blockchain today. The industry is busy building its foundation. If you've been considering a career as a blockchain developer, the time to get your foot in the door is now. + +However, you may not know where to start. It can be frustrating to go through countless blog posts and white papers or messy Slack channels when trying to find your footing. This article is a report on what I learned when contemplating whether I should become a blockchain developer. I'll approach it from the basics, with resources for each topic you need to master to be industry-ready. + +### Technical fundamentals + +Although you're won't be expected to build a blockchain from scratch, you need to be skilled enough to handle the duties of blockchain development. 
A bachelor's degree in computer science or information security is required. You also need to have some fundamentals in data structures, cryptography, and networking and distributed systems. + +#### Data structures + +The complexity of blockchain requires a solid understanding of data structures. At the core, a distributed ledger is like a network of replicated databases, only it stores information in blocks rather than tables. The blocks are also cryptographically secured to ensure their integrity every time a block is added. + +For this reason, you have to know how common data structures, such as binary search trees, hash maps, graphs, and linked lists, work. It's even better if you can build them from scratch. + +This [GitHub repository][8] contains all information newbies need to learn data structures and algorithms. Common languages such as Python, Java, Scala, C, C-Sharp, and C++ are featured. + +#### Cryptography + +Cryptography is the foundation of blockchain; it is what makes cryptocurrencies work. The Bitcoin blockchain employs public-key cryptography to create digital signatures and hash functions. You might be discouraged if you don't have a strong math background, but Stanford offers [a free course][9] that's perfect for newbies. You'll learn about authenticated encryption, message integrity, and block ciphers. + +You should also study [RSA][10], which doesn't require a strong background in mathematics, and look at [ECDSA][11] (elliptic curve cryptography). + +And don't forget [cryptographic hash functions][12]. They are the equations that enable most forms of encryptions on the internet. They keep payments secure on e-commerce sites and are the core mechanism behind the HTTPS protocol. There's extensive use of cryptographic hash functions in blockchain. + +#### Networking and distributed systems + +Build a good foundation in understanding how distributed ledgers work. Also understand how peer-to-peer networks work, which translates to a good foundation in computer networks, from networking topologies to routing. + +In blockchain, the processing power is harnessed from connected computers. For seamless recording and interchange of information between these devices, you need to understand about [Byzantine fault-tolerant consensus][13], which is a key security feature in blockchain. You don't need to know everything; an understanding of how distributed systems work is good enough. + +Stanford has a free, self-paced [course on computer networking][14] if you need to start from scratch. You can also consult this list of [awesome material on distributed systems][15]. + +### Cryptonomics + +We've covered some of the most important technical bits. It's time to talk about the economics of this industry. Although cryptocurrencies don't have central banks to monitor the money supply or keep crypto companies in check, it's essential to understand the economic structures woven around them. + +You'll need to understand game theory, the ideal mathematical framework for modeling scenarios in which conflicts of interest exist among involved parties. Take a look at Michael Karnjanaprakorn's [Beginner's Guide to Game Theory][16]. It's lucid and well explained. + +You also need to understand what affects currency valuation and the various monetary policies that affect cryptocurrencies. 
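Before you move on to the reading list, it is worth making the cryptographic-hash point above concrete, because every Linux system already ships the primitives blockchains rely on. This is only an illustration of the hashing behavior (the avalanche effect), not blockchain code:

```
# Hash a short message, then change one character and hash it again.
# The two digests look completely unrelated, which is what makes
# tampering with previously recorded data easy to detect.
echo -n "transfer 10 coins to alice" | sha256sum
echo -n "transfer 90 coins to alice" | sha256sum

# The same tool fingerprints whole files, the way block contents
# can be referenced by their hash.
sha256sum /etc/hostname
```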
Here are some books you can refer to: + + * _[The Business Blockchain: Promise, Practice, and Application of the Next Internet Technology][17]_ by William Mougayar + * _[Blockchain: Blueprint for the New Economy][18]_ by Melanie Swan + * _[Blockchain: The Blockchain For Beginners Guide to Blockchain Technology and Leveraging Blockchain Programming][19]_ by Josh Thompsons + + + +Depending on how skilled you are, you won't need to go through all those materials. But once you're done, you'll understand the fundamentals of blockchain. Then you can dive into the good stuff. + +### Smart contracts + +A [smart contract][20] is a program that runs on the blockchain once a transaction is complete to enhance blockchain's capabilities. + +Unlike traditional judicial systems, smart contracts are enforced automatically and impartially. There are also no middlemen, so you don't need a lawyer to oversee a transaction. + +As smart contracts get more complex, they become harder to secure. You need to be aware of every possible way a smart contract can be executed and ensure that it does what is expected. At the moment, not many developers can properly optimize and audit smart contracts. + +### Decentralized applications + +Decentralized applications (DApps) are software built on blockchains. As a blockchain developer, there are several platforms where you can build a DApp. Here are some of them: + +#### Ethereum + +Ethereum is Vitalik Buterin's brainchild. It went live in 2015 and is one of the most popular development platforms. Ether is the cryptocurrency that fuels the Ethereum. + +It has its own language called Solidity, which is similar to C++ and JavaScript. If you've got any experience with either, you'll pick it up easily. + +One thing that makes Solidity unique is that it is smart-contract oriented. + +#### NEO + +Originally known as Antshares, NEO was founded by Erik Zhang and Da Hongfei in 2014. It became NEO in 2017. Unlike Ethereum, it's not limited to one language. You can use different programming languages to build your DApps on NEO, including C# and Java. Experienced users can easily start building DApps on NEO. It's focused on providing platforms for future digital businesses. + +Consider NEO if you have applications that will need to process lots of transactions per second. However, it works closely with the Chinese government and follows Chinese business regulations. + +#### EOS + +EOS blockchain aims to be a decentralized operating system that can support industrial-scale applications. It's basically like Ethereum, but with faster transaction speeds and more scalable. + +#### Hyperledger + +Hyperledger is an open source collaborative platform that was created to develop cross-industry blockchain technologies. The Linux Foundation hosts Hyperledger as a hub for open industrial blockchain development. + +### Learning resources + +Here are some courses and other resources that'll help make you an industry-ready blockchain developer. + + * The University of Buffalo and The State University of New York have a [blockchain specialization course][21] that also teaches smart contracts. You can complete it in two months if you put in 10 hours per week. You'll learn about designing and implementing smart contracts and various methods for developing decentralized applications on blockchain. + * [DApps for Beginners][22] offers tutorials and other information to get you started on creating decentralized apps on the Ethereum blockchain. 
You'll need to know JavaScript, and knowledge of C++ is an added advantage. + * IBM also offers [Blockchain for Developers][23], where you'll work with IBM's private blockchain and build smart contracts using the [Hyperledger Fabric][24]. + * For $3,500 you can enroll in MIT's online [Blockchain Technologies: Business Innovation and Application][25] program, which examines blockchain from an economic perspective. You need deep pockets for this one; it's meant for executives who want to know how blockchain can be used in their organizations. + * If you're willing to commit 10 hours per week, Udacity's [Blockchain Developer Nanodegree][26] can prepare you to become an industry-ready blockchain developer in six months. Before enrolling, you should have some experience in object-oriented programming. You should also have developed the frontend and backend of a web application with JavaScript. And you're required to have used a remote API to create and consume data. You'll work with Bitcoin and Ethereum protocols to build projects for real-world applications. + * If you need to shore up your foundations, you may be interested in the Open Source Society University's wildly popular and [free computer science curriculum][27]. + * You can read a variety of articles about [blockchain in open source][28] on [Opensource.com][29]. + + + +### Types of blockchain development + +What does a blockchain developer really do? It doesn't involve building a blockchain from scratch. Depending on the organization you work for, here are some of the categories that blockchain developers fall under. + +#### Backend developers + +In this case, the developer is responsible for: + + * Designing and developing APIs for blockchain integration + * Doing performance testing and deployment + * Gathering requirements and working side-by-side with other developers and designers to design software + * Providing technical support + + + +#### Blockchain-specific + +Blockchain developers and project managers fall under this category. Their main roles include: + + * Developing and maintaining decentralized applications + * Supervising and planning blockchain projects + * Advising companies on how to structure initial coin offerings (ICOs) + * Understanding what a company needs and creating apps that address those needs + * For project managers, organizing training for employees + + + +#### Smart-contract engineers + +This type of developer is required to know a smart-contract language like Solidity, Python, or Go. Their main roles include: + + * Auditing and developing smart contracts + * Meeting with users and buyers + * Understanding business flow and security to ensure there are no loopholes in smart contracts + * Doing end-to-end business process testing + + + +### The state of the industry + +There's a wide base of knowledge to help you become a blockchain developer. If you're interested in joining the field, it's an opportunity for you to make a difference by pioneering the next wave of tech innovations. It pays very well and is in high demand. There's also a wide community you can join to help you gain entry as an actual developer, including [Ethereum Stack Exchange][30] and meetup events around the world. + +The banking sector, the insurance industry, governments, and retail industries are some of the sectors where blockchain developers can work. If you're willing to work for it, being a blockchain developer is an excellent career choice. Currently, the need outpaces available talent by far. 
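If you would rather get your hands dirty while working through the resources above, a local Ethereum toolchain is a low-stakes place to start. The package and command names below are common community tools at the time of writing, chosen here as an example rather than prescribed by any platform:

```
# Node.js tooling for a local Ethereum playground
npm install -g truffle ganache-cli

# Scaffold an empty project, compile it, and run its (initially empty) test suite
mkdir hello-dapp && cd hello-dapp
truffle init
truffle compile
truffle test

# In another terminal: a throwaway local chain to deploy and experiment against
ganache-cli
```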
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/blockchain-career-developer + +作者:[Joseph Mugo][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mugo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_UnspokenBlockers_1110_A.png?itok=x8A9mqVA +[2]: https://www.fool.com/investing/2018/02/16/this-is-really-happening-microsoft-is-developing-b.aspx +[3]: https://www.engadget.com/2018/04/26/ibm-blockchain-jewelry-provenance/ +[4]: https://www.engadget.com/2018/04/16/samsung-blockchain-based-global-shipping-system/ +[5]: https://economicgraph.linkedin.com/research/linkedin-2018-emerging-jobs-report +[6]: https://www.upwork.com/blog/2018/05/fastest-growing-skills-upwork-q1-2018/ +[7]: https://www.wsj.com/articles/SB104690855395981400 +[8]: https://github.com/TheAlgorithms +[9]: https://www.coursera.org/learn/crypto +[10]: https://en.wikipedia.org/wiki/RSA_(cryptosystem) +[11]: https://en.wikipedia.org/wiki/Elliptic_Curve_Digital_Signature_Algorithm +[12]: https://komodoplatform.com/cryptographic-hash-function/ +[13]: https://en.wikipedia.org/wiki/Byzantine_fault +[14]: https://lagunita.stanford.edu/courses/Engineering/Networking-SP/SelfPaced/about +[15]: https://github.com/theanalyst/awesome-distributed-systems +[16]: https://hackernoon.com/beginners-guide-to-game-theory-31e3e6adcec9 +[17]: https://www.amazon.com/dp/B01EIGP8HG/ +[18]: https://www.amazon.com/Blockchain-Blueprint-Economy-Melanie-Swan/dp/1491920491 +[19]: https://www.amazon.com/Blockchain-Beginners-Technology-Leveraging-Programming-ebook/dp/B0711RN8KJ +[20]: https://lifeinpaces.com/2019/03/04/ethereum-smart-contracts-how-do-they-work/ +[21]: https://www.coursera.org/specializations/blockchain?aid=true +[22]: https://dappsforbeginners.wordpress.com/ +[23]: https://developer.ibm.com/tutorials/cl-ibm-blockchain-101-quick-start-guide-for-developers-bluemix-trs/#start +[24]: https://www.hyperledger.org/projects/fabric +[25]: https://executive.mit.edu/openenrollment/program/blockchain-technologies-business-innovation-and-application-self-paced-online/#.XJSk-CgzbRY +[26]: https://www.udacity.com/course/blockchain-developer-nanodegree--nd1309 +[27]: https://github.com/ossu/computer-science +[28]: https://opensource.com/tags/blockchain +[29]: http://Opensource.com +[30]: https://ethereum.stackexchange.com/ diff --git a/sources/tech/20190409 Working with variables on Linux.md b/sources/tech/20190409 Working with variables on Linux.md new file mode 100644 index 0000000000..da4fec5ea9 --- /dev/null +++ b/sources/tech/20190409 Working with variables on Linux.md @@ -0,0 +1,267 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Working with variables on Linux) +[#]: via: (https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +Working with variables on Linux +====== +Variables often look like $var, but they also look like $1, $*, $? and $$. Let's take a look at what all these $ values can tell you. 
+![Mike Lawrence \(CC BY 2.0\)][1] + +A lot of important values are stored on Linux systems in what we call “variables,” but there are actually several types of variables and some interesting commands that can help you work with them. In a previous post, we looked at [environment variables][2] and where they are defined. In this post, we're going to look at variables that are used on the command line and within scripts. + +### User variables + +While it's quite easy to set up a variable on the command line, there are a few interesting tricks. To set up a variable, all you need to do is something like this: + +``` +$ myvar=11 +$ myvar2="eleven" +``` + +To display the values, you simply do this: + +``` +$ echo $myvar +11 +$ echo $myvar2 +eleven +``` + +You can also work with your variables. For example, to increment a numeric variable, you could use any of these commands: + +``` +$ myvar=$((myvar+1)) +$ echo $myvar +12 +$ ((myvar=myvar+1)) +$ echo $myvar +13 +$ ((myvar+=1)) +$ echo $myvar +14 +$ ((myvar++)) +$ echo $myvar +15 +$ let "myvar=myvar+1" +$ echo $myvar +16 +$ let "myvar+=1" +$ echo $myvar +17 +$ let "myvar++" +$ echo $myvar +18 +``` + +With some of these, you can add more than 1 to a variable's value. For example: + +``` +$ myvar0=0 +$ ((myvar0++)) +$ echo $myvar0 +1 +$ ((myvar0+=10)) +$ echo $myvar0 +11 +``` + +With all these choices, you'll probably find at least one that is easy to remember and convenient to use. + +You can also _unset_ a variable — basically undefining it. + +``` +$ unset myvar +$ echo $myvar +``` + +Another interesting option is that you can set up a variable and make it **read-only**. In other words, once set to read-only, its value cannot be changed (at least not without some very tricky command line wizardry). That means you can't unset it either. + +``` +$ readonly myvar3=1 +$ echo $myvar3 +1 +$ ((myvar3++)) +-bash: myvar3: readonly variable +$ unset myvar3 +-bash: unset: myvar3: cannot unset: readonly variable +``` + +You can use any of those setting and incrementing options for assigning and manipulating variables within scripts, but there are also some very useful _internal variables_ for working within scripts. Note that you can't reassign their values or increment them. + +### Internal variables + +There are quite a few variables that can be used within scripts to evaluate arguments and display information about the script itself. + + * $1, $2, $3 etc. represent the first, second, third, etc. arguments to the script. + * $# represents the number of arguments. + * $* represents the string of arguments. + * $0 represents the name of the script itself. + * $? represents the return code of the previously run command (0=success). + * $$ shows the process ID for the script. + * $PPID shows the process ID for your shell (the parent process for the script). + + + +Some of these variables also work on the command line but show related information: + + * $0 shows the name of the shell you're using (e.g., -bash). + * $$ shows the process ID for your shell. + * $PPID shows the process ID for your shell's parent process (for me, this is sshd). + + + +If we throw all of these variables into a script just to see the results, we might do this: + +``` +#!/bin/bash + +echo $0 +echo $1 +echo $2 +echo $# +echo $* +echo $? 
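# $? above is the exit status of the previous command (the echo of $*)
# $$ and $PPID below are this script's process ID and its parent shell's PID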
+echo $$ +echo $PPID +``` + +When we call this script, we'll see something like this: + +``` +$ tryme one two three +/home/shs/bin/tryme <== script name +one <== first argument +two <== second argument +3 <== number of arguments +one two three <== all arguments +0 <== return code from previous echo command +10410 <== script's process ID +10109 <== parent process's ID +``` + +If we check the process ID of the shell once the script is done running, we can see that it matches the PPID displayed within the script: + +``` +$ echo $$ +10109 <== shell's process ID +``` + +Of course, we're more likely to use these variables in considerably more useful ways than simply displaying their values. Let's check out some ways we might do this. + +Checking to see if arguments have been provided: + +``` +if [ $# == 0 ]; then + echo "$0 filename" + exit 1 +fi +``` + +Checking to see if a particular process is running: + +``` +ps -ef | grep apache2 > /dev/null +if [ $? != 0 ]; then + echo Apache is not running + exit +fi +``` + +Verifying that a file exists before trying to access it: + +``` +if [ $# -lt 2 ]; then + echo "Usage: $0 lines filename" + exit 1 +fi + +if [ ! -f $2 ]; then + echo "Error: File $2 not found" + exit 2 +else + head -$1 $2 +fi +``` + +And in this little script, we check if the correct number of arguments have been provided, if the first argument is numeric, and if the second argument is an existing file. + +``` +#!/bin/bash + +if [ $# -lt 2 ]; then + echo "Usage: $0 lines filename" + exit 1 +fi + +if [[ $1 != [0-9]* ]]; then + echo "Error: $1 is not numeric" + exit 2 +fi + +if [ ! -f $2 ]; then + echo "Error: File $2 not found" + exit 3 +else + echo top of file + head -$1 $2 +fi +``` + +### Renaming variables + +When writing a complicated script, it's often useful to assign names to the script's arguments rather than continuing to refer to them as $1, $2, and so on. By the 35th line, someone reading your script might have forgotten what $2 represents. It will be a lot easier on that person if you assign an important parameter's value to $filename or $numlines. + +``` +#!/bin/bash + +if [ $# -lt 2 ]; then + echo "Usage: $0 lines filename" + exit 1 +else + numlines=$1 + filename=$2 +fi + +if [[ $numlines != [0-9]* ]]; then + echo "Error: $numlines is not numeric" + exit 2 +fi + +if [ ! -f $ filename]; then + echo "Error: File $filename not found" + exit 3 +else + echo top of file + head -$numlines $filename +fi +``` + +Of course, this example script does nothing more than run the head command to show the top X lines in a file, but it is meant to show how internal parameters can be used within scripts to help ensure the script runs well or fails with at least some clarity. + +**[ Watch Sandra Henry-Stocker's Two-Minute Linux Tips[to learn how to master a host of Linux commands][3] ]** + +Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3387154/working-with-variables-on-linux.html#tk.rss_all + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2019/04/variable-key-keyboard-100793080-large.jpg +[2]: https://www.networkworld.com/article/3385516/how-to-manage-your-linux-environment.html +[3]: https://www.youtube.com/playlist?list=PL7D2RMSmRO9J8OTpjFECi8DJiTQdd4hua +[4]: https://www.facebook.com/NetworkWorld/ +[5]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190409 topgrade - Upgrade-Update Everything In Single Command On Linux.md b/sources/tech/20190409 topgrade - Upgrade-Update Everything In Single Command On Linux.md new file mode 100644 index 0000000000..48edeaec20 --- /dev/null +++ b/sources/tech/20190409 topgrade - Upgrade-Update Everything In Single Command On Linux.md @@ -0,0 +1,207 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (topgrade – Upgrade/Update Everything In Single Command On Linux?) +[#]: via: (https://www.2daygeek.com/topgrade-upgrade-update-everything-in-single-command-on-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +topgrade – Upgrade/Update Everything In Single Command On Linux? +====== + +As a Linux administrator, you have to keep your system up-to-date to get ride out from some unexpected issues. + +We have to keep the system with latest patches as part of best practices. + +To do so, you need to perform the patching activity at least once in a month. + +Most of the time you have to reboot the server after patching to activate the latest kernel. + +It’s good to reboot the server at least 90-120 days once that will fix some outstanding issue which we already having. + +If you have a single system then we can directly login to the system and do perform the patching that is not a big deal. + +Even, if you have few of servers with the same flavor then you can perform the patching with help of shell script. + +If you have high number of servers then i would advise you to go with any of the parallel utility, which will help us to perform the patching in parallel. + +It will save a lot’s of time compared with shell script as this go with sequential order. + +how to patch all togeter if you have servers with multiple flavors? What will be the solution ? + +I recently came to know the utility called `topgrade` that can fulfill your requirement. + +Also, your distribution package manager doesn’t upgrade the packages which was installed with other package managers such as pip, npm, snap, etc,. but topgrade can fix this issue as well. + +### What Is topgrade? + +[topgrade][1] is a new tool that will upgrade all the installed packages on your system to latest available version by detecting and running the appropriate package managers. + +### How To Install topgrade In Linux? + +There is no separate package manager for distributions wise. Hence, you need to install topgrade with help of cargo package manager. + +The topgrade is available in AUR. So, use one of the **[AUR helper][2]** to install it on Arch-based systems. 
I prefer to go with **[Yay helper][3]** program. + +``` +$ yay -S topgrade +``` + +Once you have installed the **[cargo package manager][4]** , use the following command to install it. + +``` +$ cargo install topgrade +``` + +Once topgrade is initiated, it will perform the following tasks one by one. + + * Try to self-upgrade if any updated is available for topgrade. + * Arch: Run yay or fall back to pacman + * CentOS/RHEL: Run yum upgrade + * Fedora: Run dnf upgrade + * Debian/Ubuntu: Run apt update && apt dist-upgrade + * openSUSE: Run zypper refresh && zypper dist-upgrade + * Upgrade Vim/Neovim packages. + * Run npm update -g if NPM is installed + * Upgrade Atom packages + * Linux: Update Flatpak packages + * Linux: Update snap packages + * Linux: Run fwupdmgr to show firmware upgrade. + * Finally it will run needrestart to bounce all the services. + + + +Now, we have successfully installed `topgrade` so, run the topgrade alone to upgrade everything on your system. I have tested the utility on Ubuntu 18.04 LTS and the results are below. + +``` +$ topgrade + +―― System update ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― +[sudo] password for daygeek: +Hit:1 http://in.archive.ubuntu.com/ubuntu bionic InRelease +Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB] +Get:3 http://in.archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB] +Get:4 http://in.archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB] +. +Get:16 http://security.ubuntu.com/ubuntu bionic-security/universe DEP-11 64x64 Icons [45.2 kB] +Get:17 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 DEP-11 Metadata [2,460 B] +Fetched 1,565 kB in 13s (117 kB/s) +Reading package lists... Done +Building dependency tree +Reading state information... Done +119 packages can be upgraded. Run 'apt list --upgradable' to see them. +Reading package lists... Done +Building dependency tree +Reading state information... Done +Calculating upgrade... Done +The following packages were automatically installed and are no longer required: + libopts25 linux-headers-4.15.0-45 linux-headers-4.15.0-45-generic linux-image-4.15.0-45-generic + linux-modules-4.15.0-29-generic linux-modules-4.15.0-45-generic linux-modules-extra-4.15.0-45-generic sntp +Use 'sudo apt autoremove' to remove them. +The following packages will be upgraded: + apport apport-gtk apt apt-utils cups cups-bsd cups-client cups-common cups-core-drivers cups-daemon cups-ipp-utils + cups-ppdc cups-server-common distro-info-data fwupdate fwupdate-signed gir1.2-dbusmenu-glib-0.4 gir1.2-gtk-3.0 + gir1.2-packagekitglib-1.0 gir1.2-snapd-1 gnome-settings-daemon gnome-settings-daemon-schemas grub-common grub-pc + python3-httplib2 python3-problem-report samba-libs systemd systemd-sysv ubuntu-drivers-common udev ufw + unattended-upgrades xdg-desktop-portal xdg-desktop-portal-gtk +119 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. +Need to get 38.5 MB of archives. +After this operation, 475 kB of additional disk space will be used. +Do you want to continue? [Y/n] +. +. +Setting up grub-pc (2.02-2ubuntu8.13) ... +Installing for i386-pc platform. +Installation finished. No error reported. +Sourcing file `/etc/default/grub' +Generating grub configuration file ... +Found memtest86+ image: /boot/memtest86+.elf +Found memtest86+ image: /boot/memtest86+.bin +done +Setting up mesa-vdpau-drivers:amd64 (18.2.8-0ubuntu0~18.04.2) ... +Updating PPD files for cups ... +Setting up apport-gtk (2.20.9-0ubuntu7.6) ... 
+Setting up pulseaudio-module-bluetooth (1:11.1-1ubuntu7.2) ... +Processing triggers for libc-bin (2.27-3ubuntu1) ... +Processing triggers for initramfs-tools (0.130ubuntu3.7) ... +update-initramfs: Generating /boot/initrd.img-4.15.0-47-generic +``` + +It will run the self-updates once the distribution official packages update done. + +``` +―― rustup ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― +info: checking for self-updates +info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu' +info: checking for self-updates + + stable-x86_64-unknown-linux-gnu unchanged - rustc 1.33.0 (2aa4c46cf 2019-02-28) +``` + +Then it will try to update the packages that has installed with other package managers. + +``` +―― Flatpak User Packages ―――――――――――――――――――――――――――――――――――――――――――――――――――――――― +Looking for updates... +Looking for updates... +Updating in system: +org.gnome.Platform/x86_64/3.30 flathub 862e6b8ec2b5 +org.gnome.Platform.Locale/x86_64/3.30 flathub 5e66e981ae00 +org.freedesktop.Platform.html5-codecs/x86_64/18.08 flathub 282fd2c4ef33 +com.github.muriloventuroso.easyssh/x86_64/stable flathub c6bc3a3e72fb + new permissions: ssh-auth +com.github.muriloventuroso.easyssh.Locale/x86_64/stable flathub b705864b8d78 +Updating: org.gnome.Platform/x86_64/3.30 from flathub +[####################] 16 delta parts, 10 loose fetched; 65539 KiB transferred in 63 seconds +Error: Failed to update org.gnome.Platform/x86_64/3.30: Flatpak system operation Deploy not allowed for user + +Skipping org.gnome.Platform.Locale/x86_64/3.30 due to previous error + +Skipping org.freedesktop.Platform.html5-codecs/x86_64/18.08 due to previous error +Updating: com.github.muriloventuroso.easyssh/x86_64/stable from flathub +[####################] 2 delta parts, 3 loose fetched; 1532 KiB transferred in 5 seconds +Error: Failed to update com.github.muriloventuroso.easyssh/x86_64/stable: Flatpak system operation Deploy not allowed for user + +Skipping com.github.muriloventuroso.easyssh.Locale/x86_64/stable due to previous error +error: There were one or more errors + +Retry? [y/N] +``` + +Then it will run the firmwre upgrade. + +``` +―― Firmware upgrades ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― +Fetching metadata https://cdn.fwupd.org/downloads/firmware.xml.gz +Downloading… [***************************************] +Fetching signature https://cdn.fwupd.org/downloads/firmware.xml.gz.asc +``` + +Finally, it shows the summary about the patching has done. 
+ +``` +―― Summary ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― +System update: OK +rustup: OK +Flatpak User Packages: FAILED +Firmware upgrade: OK +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/topgrade-upgrade-update-everything-in-single-command-on-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/r-darwish/topgrade +[2]: https://www.2daygeek.com/category/aur-helper/ +[3]: https://www.2daygeek.com/install-yay-yet-another-yogurt-aur-helper-on-arch-linux/ +[4]: https://www.2daygeek.com/how-to-install-rust-programming-language-in-linux/ diff --git a/sources/tech/20190410 How to enable serverless computing in Kubernetes.md b/sources/tech/20190410 How to enable serverless computing in Kubernetes.md new file mode 100644 index 0000000000..75e5a5868d --- /dev/null +++ b/sources/tech/20190410 How to enable serverless computing in Kubernetes.md @@ -0,0 +1,136 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to enable serverless computing in Kubernetes) +[#]: via: (https://opensource.com/article/19/4/enabling-serverless-kubernetes) +[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh/users/daniel-oh) + +How to enable serverless computing in Kubernetes +====== +Knative is a faster, easier way to develop serverless applications on +Kubernetes platforms. +![Kubernetes][1] + +In the first two articles in this series about using serverless on an open source platform, I described [how to get started with serverless platforms][2] and [how to write functions][3] in popular languages and build components using containers on Apache OpenWhisk. + +Here in the third article, I'll walk you through enabling serverless in your [Kubernetes][4] environment. Kubernetes is the most popular platform to manage serverless workloads and microservice application containers and uses a finely grained deployment model to process workloads more quickly and easily. + +Keep in mind that serverless not only helps you reduce infrastructure management while utilizing a consumption model for actual service use but also provides many capabilities of what the cloud platform serves. There are many serverless or FaaS (Function as a Service) platforms, but Kuberenetes is the first-class citizen for building a serverless platform because there are more than [13 serverless or FaaS open source projects][5] based on Kubernetes. + +However, Kubernetes won't allow you to build, serve, and manage app containers for your serverless workloads in a native way. For example, if you want to build a [CI/CD pipeline][6] on Kubernetes to build, test, and deploy cloud-native apps from source code, you need to use your own release management tool and integrate it with Kubernetes. + +Likewise, it's difficult to use Kubernetes in combination with serverless computing unless you use an independent serverless or FaaS platform built on Kubernetes, such as [Apache OpenWhisk][7], [Riff][8], or [Kubeless][9]. More importantly, the Kubernetes environment is still difficult for developers to learn the features of how it deals with serverless workloads from cloud-native apps. 

### Knative

[Knative][10] was born for developers to create serverless experiences natively without depending on extra serverless or FaaS frameworks and many custom tools. Knative has three primary components—[Build][11], [Serving][12], and [Eventing][13]—for addressing common patterns and best practices for developing serverless applications on Kubernetes platforms.

To learn more, let's go through the usual development process for using Knative to increase productivity and solve Kubernetes' difficulties from the developer's point of view.

**Step 1:** Generate your cloud-native application from scratch using [Spring Initializr][14] or [Thorntail Project Generator][15]. Begin implementing your business logic using the [12-factor app methodology][16], and you might also do assembly testing to see if the function works correctly in many local testing tools.

![Spring Initializr screenshot][17] | ![Thorntail Project Generator screenshot][18]
---|---

**Step 2:** Build container images from your source code repositories via the Knative Build component. You can define multiple steps, such as installing dependencies, running integration testing, and pushing container images to your secured image registry for using existing Kubernetes primitives. More importantly, Knative Build makes developers' daily work easier and simpler—"boring but difficult." Here's an example of the Build YAML:

```
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: docker-build
spec:
  serviceAccountName: build-bot
  source:
    git:
      revision: master
      url:
  steps:
  - args:
    - --context=/workspace/java/springboot
    - --dockerfile=/workspace/java/springboot/Dockerfile
    - --destination=docker.io/demo/event-greeter:0.0.1
    env:
    - name: DOCKER_CONFIG
      value: /builder/home/.docker
    image: gcr.io/kaniko-project/executor
    name: docker-push
```

**Step 3:** Deploy and serve your container applications as serverless workloads via the Knative Serving component. This step shows the beauty of Knative in terms of automatically scaling up your serverless containers on Kubernetes then scaling them down to zero if there is no request to the containers for a specific period (e.g., two minutes). More importantly, [Istio][19] will automatically address ingress and egress networking traffic of serverless workloads in multiple, secure ways. Here's an example of the Serving YAML:

```
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeter
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: dev.local/rhdevelopers/greeter:0.0.1
```

**Step 4:** Bind running serverless containers to a variety of eventing platforms, such as SaaS, FaaS, and Kubernetes, via Knative's Eventing component. In this step, you can define event channels and subscriptions, which are delivered to your services via a messaging platform such as [Apache Kafka][20] or [NATS streaming][21]. Here's an example of the Event sourcing YAML:

```
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: CronJobSource
metadata:
  name: test-cronjob-source
spec:
  schedule: "* * * * *"
  data: '{"message": "Event sourcing!!!!"}'
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: ch-event-greeter
```

### Conclusion

Developing with Knative will save a lot of time in building serverless applications in the Kubernetes environment.
It can also make developers' jobs easier by focusing on developing serverless applications, functions, or cloud-native containers. + +* * * + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/enabling-serverless-kubernetes + +作者:[Daniel Oh (Red Hat, Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/daniel-oh/users/daniel-oh +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes.png?itok=PqDGb6W7 (Kubernetes) +[2]: https://opensource.com/article/18/11/open-source-serverless-platforms +[3]: https://opensource.com/article/18/11/developing-functions-service-apache-openwhisk +[4]: https://kubernetes.io/ +[5]: https://landscape.cncf.io/format=serverless +[6]: https://opensource.com/article/18/8/what-cicd +[7]: https://openwhisk.apache.org/ +[8]: https://projectriff.io/ +[9]: https://kubeless.io/ +[10]: https://cloud.google.com/knative/ +[11]: https://github.com/knative/build +[12]: https://github.com/knative/serving +[13]: https://github.com/knative/eventing +[14]: https://start.spring.io/ +[15]: https://thorntail.io/generator/ +[16]: https://12factor.net/ +[17]: https://opensource.com/sites/default/files/uploads/spring_300.png (Spring Initializr screenshot) +[18]: https://opensource.com/sites/default/files/uploads/springboot_300.png (Thorntail Project Generator screenshot) +[19]: https://istio.io/ +[20]: https://kafka.apache.org/ +[21]: https://nats.io/ diff --git a/sources/tech/20190411 How do you contribute to open source without code.md b/sources/tech/20190411 How do you contribute to open source without code.md new file mode 100644 index 0000000000..659fd9064e --- /dev/null +++ b/sources/tech/20190411 How do you contribute to open source without code.md @@ -0,0 +1,73 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How do you contribute to open source without code?) +[#]: via: (https://opensource.com/article/19/4/contribute-without-code) +[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen/users/don-watkins/users/greg-p/users/petercheer) + +How do you contribute to open source without code? +====== + +![Dandelion held out over water][1] + +My earliest open source contributions date back to the mid-1980s when our organization first connected to [UseNet][2] where we discovered the contributed code and the opportunities to share in its development and support. + +Today there are endless contribution opportunities, from contributing code to making how-to videos. + +I'm going to step right over the whole issue of contributing code, other than pointing out that many of us who write code but don't consider ourselves developers can still [contribute code][3]. Instead, I'd like to remind everyone that there are lots of [non-code ways to contribute to open source][4] and talk about three alternatives. + +### Filing bug reports + +One important and concrete kind of contribution could best be described as "not being afraid to file a decent bug report" and [all the consequences related to that][5]. Sometimes it's quite challenging to [file a decent bug report][6]. For example: + + * A bug may be difficult to record or describe. 
A long and complicated message with all sorts of unrecognizable codes may flash by as the computer is booting, or there may just be some "odd behavior" on the screen with no error messages produced. + * A bug may be difficult to reproduce. It may occur only on certain hardware/software configurations, or it may be rarely triggered, or the precise problem area may not be apparent. + * A bug may be linked to a very specific development environment configuration that is too big, messy, and complicated to share, requiring laborious creation of a stripped-down example. + * When reporting a bug to a distro, the maintainers may suggest filing the bug upstream instead, which can sometimes lead to a lot of work when the version supported by the distro is not the primary version of interest to the upstream community. (This can happen when the version provided in the distro lags the officially supported release and development version.) + + + +Nevertheless, I exhort would-be bug reporters (including me) to press on and try to get bugs fully recorded and acknowledged. + +One way to get started is to use your favorite search tool to look for similar bug reports, see how they are described, where they are filed, and so on. Another important thing to know is the formal mechanism defined for bug reporting by your distro (for example, [Fedora's is here][7]; [openSUSE's is here][8]; [Ubuntu's is here][9]) or software package ([LibreOffice's is here][10]; [Mozilla's seems to be here][11]). + +### Answering user's questions + +I lurk and occasionally participate in various mailing lists and forums, such as the [Ubuntu quality control team][12] and [forums][13], [LinuxQuestions.org][14], and the [ALSA users' mailing list][15]. Here, the contributions may relate less to bugs and more to documenting complex use cases. It's a great feeling for everyone to see someone jumping in to help a person sort out their trouble with a particular issue. + +### Writing about open source + +Finally, another area where I really enjoy contributing is [_writing_][16] about using open source software; whether it's a how-to guide, a comparative evaluation of different solutions to a particular problem, or just generally exploring an area of interest (in my case, using open source music-playing software to enjoy music). A similar option is making an instructional video; it's easy to [record the desktop][17] while demonstrating some fiendishly difficult desktop maneuver, such as creating a splashy logo with GIMP. And those of you who are bi- or multi-lingual can also consider translating existing how-to articles or videos to another language. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/contribute-without-code + +作者:[Chris Hermansen (Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen/users/don-watkins/users/greg-p/users/petercheer +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dandelion_blue_water_hand.jpg?itok=QggW8Wnw (Dandelion held out over water) +[2]: https://en.wikipedia.org/wiki/Usenet +[3]: https://opensource.com/article/19/2/open-science-git +[4]: https://opensource.com/life/16/1/8-ways-contribute-open-source-without-writing-code +[5]: https://producingoss.com/en/bug-tracker.html +[6]: https://opensource.com/article/19/3/bug-reporting +[7]: https://docs.fedoraproject.org/en-US/quick-docs/howto-file-a-bug/ +[8]: https://en.opensuse.org/openSUSE:Submitting_bug_reports +[9]: https://help.ubuntu.com/stable/ubuntu-help/report-ubuntu-bug.html.en +[10]: https://wiki.documentfoundation.org/QA/BugReport +[11]: https://developer.mozilla.org/en-US/docs/Mozilla/QA/Bug_writing_guidelines +[12]: https://wiki.ubuntu.com/QATeam +[13]: https://ubuntuforums.org/ +[14]: https://www.linuxquestions.org/ +[15]: https://www.alsa-project.org/wiki/Mailing-lists +[16]: https://opensource.com/users/clhermansen +[17]: https://opensource.com/education/16/10/simplescreenrecorder-and-kazam diff --git a/sources/tech/20190411 Managed, enabled, empowered- 3 dimensions of leadership in an open organization.md b/sources/tech/20190411 Managed, enabled, empowered- 3 dimensions of leadership in an open organization.md new file mode 100644 index 0000000000..890b934ef1 --- /dev/null +++ b/sources/tech/20190411 Managed, enabled, empowered- 3 dimensions of leadership in an open organization.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Managed, enabled, empowered: 3 dimensions of leadership in an open organization) +[#]: via: (https://opensource.com/open-organization/19/4/managed-enabled-empowered) +[#]: author: (Heidi Hess von Ludewig https://opensource.com/users/heidi-hess-von-ludewig/users/amatlack) + +Managed, enabled, empowered: 3 dimensions of leadership in an open organization +====== +Different types of work call for different types of engagement. Should +open leaders always aim for empowerment? +![][1] + +"Empowerment" seems to be the latest people management [buzzword][2]. And it's an important consideration for open organizations, too. After all, we like to think these open organizations thrive when the people inside them are equipped to take initiative to do their best work as they see fit. Shouldn't an open leader's goal be complete and total empowerment of everyone, in all parts of the organization, doing all types of work? + +Not necessarily. + +Before we jump on the employee [empowerment bandwagon][3], we should explore the important connections between empowerment and innovation. That requires placing empowerment in context. + +As Allison Matlack has already demonstrated, employee investment in an organization's mission and activities—and employee _autonomy_ relative to those things—[can take several forms][4], from "managed" to "enabled" to "empowered." 
Sometimes, complete and total empowerment _isn't_ the most desirable type of investment an open leader would like to activate in a contributor. Projects are always changing. New challenges are always arising. As a result, the _type_ or _degree_ of involvement leaders can expect in different situations is always shifting. "Managed," "enabled," and "empowered," contributors exist simultaneously and dynamically, depending on the work they're performing (and that work's desired outcomes). + +So before we head down to the community center to win a game of buzzword bingo, let's examine the different types of work, how they function, and how they contribute to the overall innovation of a company. Let's refine what we mean by "managed," "enabled," and "empowered" work, and discuss why we need all three. + +### Managed, enabled, empowered + +First, let's consider and define each type of work activity. + +"Managed" work involves tasks that are coordinated using guidance, supervision, and direction in order to achieve specific outcomes. When someone works to coordinate _every_ part of _every_ task, we colloquially call that behavior "micro-managing." "Enabled" associates have the ability to direct themselves while working within boundaries (guidance), and they have access to the materials and resources (information, people, technologies, etc.) they require to problem-solve as they see fit. Lastly, "empowered" individuals _direct themselves_ within organizational limits, have access materials and resources, and also have the authority to represent their team or organization and make decisions about work on behalf using their best judgement, based on the former elements. + +Most important here is the idea that these concepts are _nested_ (see Figure 1). Because each level builds on the one before it, one cannot have the full benefit of "empowered" associates without also having clear guidance and direction ("managed"), and transparency of information and resources ("enabled"). What changes from level to level is the amount of managed or enabled activity that comes before it. + +Let's dive more deeply into the nature of those activities and discuss the roles leaders should play in each. + +#### Managed work + +"Managed" work is just that: work activity supervised and directed to some degree. The amount of management occurring in a situation is dynamic and depends on the activity itself. For instance, in the manufacturing economy, managed work is prominent. I'll call this "widget" work, the point of which is producing a widget the same way, every time. People need to perform this work according to consistent processes with consistent, standardized outcomes. + +Before we jump on the employee empowerment bandwagon, we should explore the important connections between empowerment and innovation. That requires placing empowerment in context. + +Because this work requires consistency, it typically proceeds via explicit guidelines and policies (rules about cost, schedule, quality, quantity, process, and so on—characteristics applicable to all work to a greater or lesser degree). We can find examples of it in a variety of roles across many industries. Quite often, _any_ role in _any_ industry requires _some_ amount of this type of work. Examples include manufacturing precision machine parts, answering a customer support case within a specified timeframe for contractual reasons and with a friendly greeting, etc. 
In the software industry, a role that's _entirely_ like this would be a rarity, yet even these roles require some work of the "managed" type. For instance, consider the way a support engineer must respond to a case using a set of quality standards (friendliness, perhaps with a professional written tone, a branded signature line, adherence to a particular contractual agreement, usually responding within a particular time frame, etc.).

"Management" is the best strategy when _work requirements include adhering to a consistent schedule, process, and quality._

#### Enabled work

As the amount of creativity a role requires _increases_, the amount of directed and "managed" work we find in that role _decreases_. Guidelines get broader, processes looser, schedules lengthened (I wish!). This is because what's required to "be creative" involves other types of work (and new degrees of transparency and authority along with them). Ron McFarland explains this in [his article on adaptive leadership][5]: Many challenges are ambiguous, as opposed to technical, and therefore require specific kinds of leadership.

To take this idea one step further, we might say open leaders need to be _adaptive_ to how they view and implement the different kinds of work on their teams or in their organizations. "Enabling" associates means growing their skills and knowledge so they can manage themselves. The foundation for this type of activity is information—access to it, sharing it, and opportunities to independently use it to complete work activity. This is the kind of work Peter Drucker was referring to when he coined the term "knowledge work."

Enabled work liberates associates from the constraints of managed work, though it still involves leaders providing considerable direction and guidance. Outcomes of this work might be familiar and normalized, but the _paths to achieving them_ are more open-ended than in managed work. Methods are more flexible and inclusive of individual preference and capability.

"Enablement" is the best strategy when _objectives are well-defined and the outcomes are aligned with past outcomes and results_.

#### Empowered work

In "[Beyond Engagement][4]," Allison describes empowerment as a state in which employees have "access to all the information, training, tools, and connections to people and other teams that they need to do their best work, as well as a safe environment in which to do that work so they feel comfortable making their own decisions." In other words, empowerment is enablement with the opportunity for associates to _act using their own best judgment as it relates to shared understanding of team and organizational guidelines and objectives._

"Empowerment" is the best strategy when _objectives and methods for achieving them are unclear and creative flexibility is necessary for defining them._ Often this work is focused on activities where problem definition and possible solutions (i.e. investigation, planning, and execution) are not well-defined.

Any role in any organization involves these three types of work occurring at various moments and in various situations. No job requires just one.

### Supporting innovation through managed, enabled, and empowered work

The labels "managed," "enabled," and "empowered" apply to different work at different times, and _all three_ are embedded in work activity at different times and in different tasks.
That means leaders should be paying more attention to the work contributors are doing: the kind of work, its purpose, and its desired outcomes. We're now in a position to consider how _innovation_ factors into this equation. + +Frequently, people discuss the different modes of work by way of _contrast_. Most language about them connotes negativity: managed work is "the worst," while empowered work is "the best." The goal of any leadership practice should be to "move people along the scale"—to create empowered contributors. + +However, just as types of work are located on a continuum that doesn't include this element of negation, so too should our understanding of the types of work. Rather than seeing work as, for example " _always empowered"_ or _"always managed_ ," we should recognize that any role is a function of _of all three types of work at the same time_ , each to a varying degree. Think of the equation this way: + +> _Work = managed (x) + enabled (x) + empowered (x)_ + +Note here that the more enabled and empowered the work is, the more potential there is for creativity when doing that work. This is because creativity (and the creative individual) requires information—consistently updated and "fresh" sources of information—used in conjunction with individual judgment and capacity for interpreting how to _use_ and _combine_ that information to define problems, ideate, and solve problems. Enabled and empowered work can increase inclusivity—that is, draw more closely on an individual's unique skills, perspectives, and talents because, by definition, those kinds of work are less managed and more guided. Open leadership clearly supports hiring for diversity exactly for the reason that it makes inclusivity so much richer. The ambiguity that's characteristic of the challenges we face in modern workplaces means that the work we do is ripe with potential for innovation—if we embrace risk and adapt our leadership styles to liberate it. + +In other words: + +> _Innovation = enabled (x) + empowered (x) / managed (x)_ +> +> _The more enabled and empowered the work is, the more potential for innovation._ + +Focusing on the importance of enabled work and empowered work is not to devalue managed work in any way. I would say that managed work creates a stable foundation on which creative (enabled and empowered) work can blossom. Imagine if all the work we did was empowered; our organizations would be completely chaotic, undefined, and ambiguous. Organizations need a degree of managed work in order to ensure some direction, some understanding of priorities, and some definition of "quality." + +Any role in any organization involves these three types of work occurring at various moments and in various situations. No job requires just one. As open leaders, we must recognize that work isn't an all-or-nothing, one-type-of-work-alone equation. We have to get better at understanding work in _these three different ways_ and using each one to the organization's advantage, depending on the situation. 
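
As a toy reading of the informal equations above (the percentage splits here are invented purely for illustration and are not data from any study), treat the three kinds of work as shares of a role's total time:

> _Role A (20% managed, 50% enabled, 30% empowered): (0.50 + 0.30) / 0.20 = 4.0_
>
> _Role B (60% managed, 25% enabled, 15% empowered): (0.25 + 0.15) / 0.60 ≈ 0.67_

The exact figures matter far less than the direction they point: the heavier the managed share, the less headroom a role leaves for the creative work that feeds innovation.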
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/19/4/managed-enabled-empowered + +作者:[Heidi Hess von Ludewig (Red Hat)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/heidi-hess-von-ludewig/users/amatlack +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_ControlNotDesirable.png?itok=nrXwSkv7 +[2]: https://www.entrepreneur.com/article/288340 +[3]: https://www.forbes.com/sites/lisaquast/2011/02/28/6-ways-to-empower-others-to-succeed/#5c860b365c62 +[4]: https://opensource.com/open-organization/18/10/understanding-engagement-and-empowerment +[5]: https://opensource.com/open-organization/19/3/adaptive-leadership-review diff --git a/sources/tech/20190411 Testing Small Scale Scrum in the real world.md b/sources/tech/20190411 Testing Small Scale Scrum in the real world.md new file mode 100644 index 0000000000..0e4016435e --- /dev/null +++ b/sources/tech/20190411 Testing Small Scale Scrum in the real world.md @@ -0,0 +1,57 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Testing Small Scale Scrum in the real world) +[#]: via: (https://opensource.com/article/19/4/next-steps-small-scale-scrum) +[#]: author: (Agnieszka Gancarczyk Leigh Griffin https://opensource.com/users/agagancarczyk/users/lgriffin/users/agagancarczyk/users/lgriffin) + +Testing Small Scale Scrum in the real world +====== +We plan to test the Small Scale Scrum framework in real-world projects +involving small teams. +![Green graph of measurements][1] + +Scrum is built on the three pillars of inspection, adaptation, and transparency. Our empirical research is really the starting point in bringing scrum, one of the most popular agile implementations, to smaller teams. As presented in the diagram below, we are now taking time to inspect this framework and principles by testing them in real-world projects. + +![small-scale-scrum-inspection.png][2] + +Progress in empirical process control + +We plan to implement Small Scale Scrum in several upcoming projects. Our test candidates are customers with real projects where teams of one to three people will undertake short-lived projects (ranging from a few weeks to three months) with an emphasis on quality and outputs. Individual projects, such as final-year projects (over 24 weeks) that are a capstone project after four years in a degree program, are almost exclusively completed by a single person. In projects of this nature, there is an emphasis on the project plan and structure and on maximizing the outputs that a single person can achieve. + +We plan to metricize and publish the results of these projects and hold several retrospectives with the teams involved. We are particularly interested in metrics centered around quality, with a particular emphasis on quality in a software engineering context and management, both project management through the lifecycle with a customer and management of the day-to-day team activities and the delivery, release, handover, and signoff process. + +Ultimately, we will retrospectively analyze the overall framework and principles and see if the Manifesto we envisioned holds up to the reality of executing a project with small numbers. 
From this data, we will produce the second version of Small Scale Scrum and begin a cyclic pattern of inspecting the model in new projects and adapting it again. + +We want to do all of this transparently. This series of articles is one window into the data, the insights, the experiences, and the reality of running scrum for small teams whose everyday challenges include context switching, communication, and the need for a quality delivery. A follow-up series of articles is planned to examine the outputs and help develop the second edition of Small Scale Scrum entirely in the community. + +We also plan to attend conferences and share our knowledge with the Agile community. Our first conference will be Agile 2019 where the evolution of Small Scale Scrum will be further explored as an Experience Report. We are advising colleges and sharing our structure and approach to managing and executing final-year projects. All our outputs will be freely available in the open source way. + +Given the changes to recommended team sizes in the Scrum Guide, our long-term goal and vision is to have the Scrum Guide reflect that teams of one or more people occupying one or more roles within a project are capable of following scrum. + +* * * + +_Leigh Griffin will present Small Scale Scrum at Agile 2019 in Washington, August 5-9, 2019 as an Experience Report. An expanded paper will be published on[Agile Alliance][3] to accompany this._ + +* * * + +* * * + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/next-steps-small-scale-scrum + +作者:[Agnieszka Gancarczyk (Red Hat)Leigh Griffin (Red Hat)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/agagancarczyk/users/lgriffin/users/agagancarczyk/users/lgriffin +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_lead-steps-measure.png?itok=DG7rFZPk (Green graph of measurements) +[2]: https://opensource.com/sites/default/files/small-scale-scrum-inspection.png (small-scale-scrum-inspection.png) +[3]: https://www.agilealliance.org/ diff --git a/sources/tech/20190412 Designing posters with Krita, Scribus, and Inkscape.md b/sources/tech/20190412 Designing posters with Krita, Scribus, and Inkscape.md new file mode 100644 index 0000000000..3136ed60a0 --- /dev/null +++ b/sources/tech/20190412 Designing posters with Krita, Scribus, and Inkscape.md @@ -0,0 +1,131 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Designing posters with Krita, Scribus, and Inkscape) +[#]: via: (https://opensource.com/article/19/4/design-posters) +[#]: author: (Raghavendra Kamath https://opensource.com/users/raghukamath/users/seilarashel/users/raghukamath/users/raghukamath/users/greg-p/users/raghukamath) + +Designing posters with Krita, Scribus, and Inkscape +====== +Graphic designers can do professional work with free and open source +tools. +![Hand drawing out the word "code"][1] + +A few months ago, I was asked to design some posters for a local [Free Software Foundation][2] (FSF) event. Richard M. Stallman was [visiting][3] our country, and my friend [Abhas Abhinav][4] wanted to put up some posters and banners to promote his visit. I designed two posters for RMS's talk in Bangalore. 
+ +I create my artwork with F/LOSS (free/libre open source software) tools. Although many artists successfully use free software to create artwork, I repeatedly encounter comments in discussion forums claiming that free software is not made for creative work. This article is my effort to detail the process I typically use to create my artwork and to spread awareness that one can do professional work with the help of F/LOSS tools. + +### Sketching some concepts + +After understanding Abhas' initial requirements, I sat down to visualize some concepts. I am not that great of a copywriter, so I started reading the FSF website to get some copy material. I needed to finish the project in two days time, while simultaneously working on other projects. I started sketching some rough layouts. From five layouts, I liked three. I scanned them using [Skanlite][5]; although these sketches were very rough and would need proper layout and design, they were a good base for me to work from. + +![Skanlite][6] + +![Poster sketches][7] + +![Poster sketch][8] + +I had three concepts: + + * On the [FSF's website][2], I read about taking free software to new frontiers, which made me think about the idea of "conquering a summit." Free software work is also filled with adventures, in my opinion, and sometimes a task may seem like scaling a summit. So, I thought showing some mountaineers would resonate well. + * I also wanted to ask people to donate to FSF, so I sketched a hand giving a heart. I didn't feel any excitement in executing this idea, nevertheless, I kept it for backup in case I fell short of time. + * The FSF website has a hashtag for a donation program called #thankGNU, so I thought about using this as the basis of my design. Repurposing my hand visual, I replaced the heart with a bouquet of flowers that has a heart-shaped card saying #thankGNU! + + + +I know these are somewhat quick and safe concepts, but given the little time I had for the project, I went ahead with them. + +My design process mostly depends on the kind of look I need in the final image. I choose my software and process according to my needs. I may use one software from start to finish or combine various software packages to accomplish what I need. For this project, I used [Krita][9] and [Scribus][10], with some minimal use of [Inkscape][11]. + +### Krita: Making the illustrations + +I imported my sketches into [Krita][12] and started adding more defined lines and shapes. + +For the first image, which has some mountaineers climbing, I used [vector layers][13] in Krita to add basic shapes and then used [Alpha Inheritance][14], which is similar to what is called Clipping Masks in Photoshop, to add texture and gradients inside the shapes. This helped me change the underlying base shape (in this case, the shape of the mountain in the first poster) anytime during the process. Krita also has a nice feature called the Reference Image tool, which lets you pin some references around your canvas (this helps a lot and saves many Alt+Tabs). Once I got the mountain how I wanted, according to the layout, I started painting the mountaineers and added more details for the ice and other features. I like grungy brushes and brushes that have a texture akin to chalks and sponges. Krita has a wide range of brushes as well as a brush engine, which makes replicating a traditional medium easier. After about 3.5 hours of painting, this image was ready for further processing. + +I wanted the second poster to have the feel of an old-style book illustration. 
So, I created the illustration with inked lines, somewhat similar to what we see in textbooks or novels. Inking in Krita is really a time saver; since it has stabilizer options, your wavy, hand-drawn lines will be smooth and crisp. I added a textured background and some minimal colors beneath the lines. It took me about three hours to do this illustration as well. + +![Poster][15] + +![Poster][16] + +### Scribus: Adding layout and typography + +Once my illustrations were ready, it was time to move on to the next part: adding text and other things to the layout. For this, I used Scribus. Both Scribus and Krita have CMYK support. In both applications, you can soft-proof your artwork and make changes according to the color profile you get from the printer. I mostly do my work in RGB and then, if required, I convert it to CMYK. Since most printers nowadays will do the color conversion, I don't think CMYK is support required, however, it's good to be able to work in CMYK with free software tools. + +I use open source fonts for my design work unless a client has licensed a closed font for use. A good way to browse for suitable fonts is [Google Fonts repository][17]. (I have the entire repository cloned.) Occasionally, I also browse fonts on [Font Library][18], as it also has a nice collection. I decided to use Montserrat by Julieta Ulanovsky for the posters. Placing text was very quick in Scribus; once you create a style, you can apply it to any number of paragraphs or titles. This helped me place text in both designs quickly since I didn't have to re-create the text properties. + +![Poster in Scribus][19] + +I keep two layers in Scribus. One is for the illustrations, which are linked to the original files so if I change an illustration, it will update in Scribus. The other is for text and it's layered on top of the illustration layer. + +### Inkscape: QR codes + +I used Inkscape to generate a QR code that points to the Membership page on FSF's website. To generate a QR code in Scribus, go to **Extensions > Render > Barcode > QR Code** in Inkscape's menu. The logos are also vector; because Scribus supports vector images, you can directly paste things from Inkscape into Scribus. In a way, this helps in designing CMYK-based vector graphics. + +![Final poster design][20] + +![Final poster design][21] + +With the designs ready, I exported them to layered PDF and sent to them to Abhas for feedback. He asked me to add FSF India's logo, which I did and sent a new PDF to him. + +### Printing the posters + +From here, Abhas took over the printing part of the process. His local printer in Bangalore printed the posters in A2 size. He was kind enough to send me some pictures of them. The prints came out well, considering I didn't even convert them to CMYK nor do any color corrections or soft proofing, as I usually do when I get the color profile from my printer. My opinion is that 100% accurate CMYK printing is just a myth; there are too many factors to consider. If I really want perfect color reproduction, I leave this job to the printer, as they know their printer well and can do the conversion. + +![Final poster design][22] + +![Final poster design][23] + +### Accessing the source files + +When we discussed the requirements for these posters, Abhas told me to release the artwork under a Creative Commons license so others can re-use, modify, and share it. I am really glad he mentioned it. Anyone who wants to poke at the files can [download them from my Nextcloud drive][24]. 
If you have any improvements to make, please go ahead—and do remember to share your work with everybody. + +Let me know what you think about this article by [emailing me][25]. + +* * * + +_[This article][26] originally appeared on [Raghukamath.com][27] and is republished with the author's permission._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/design-posters + +作者:[Raghavendra Kamath][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/raghukamath/users/seilarashel/users/raghukamath/users/raghukamath/users/greg-p/users/raghukamath +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_hand_draw.png?itok=dpAf--Db (Hand drawing out the word "code") +[2]: https://www.fsf.org/ +[3]: https://rms-tour.gnu.org.in/ +[4]: https://abhas.io/ +[5]: https://kde.org/applications/graphics/skanlite/ +[6]: https://opensource.com/sites/default/files/uploads/skanlite.png (Skanlite) +[7]: https://opensource.com/sites/default/files/uploads/sketch-01.png (Poster sketches) +[8]: https://opensource.com/sites/default/files/uploads/sketch-02.png (Poster sketch) +[9]: https://krita.org/ +[10]: https://www.scribus.net/ +[11]: https://inkscape.org/ +[12]: /life/16/4/nick-hamilton-linuxfest-northwest-2016-krita +[13]: https://docs.krita.org/en/user_manual/vector_graphics.html#vector-graphics +[14]: https://docs.krita.org/en/tutorials/clipping_masks_and_alpha_inheritance.html +[15]: https://opensource.com/sites/default/files/uploads/poster-illo-01.jpg (Poster) +[16]: https://opensource.com/sites/default/files/uploads/poster-illo-02.jpg (Poster) +[17]: https://fonts.google.com/ +[18]: https://fontlibrary.org/ +[19]: https://opensource.com/sites/default/files/uploads/poster-in-scribus.png (Poster in Scribus) +[20]: https://opensource.com/sites/default/files/uploads/final-01.png (Final poster design) +[21]: https://opensource.com/sites/default/files/uploads/final-02.png (Final poster design) +[22]: https://opensource.com/sites/default/files/uploads/posters-in-action-01.jpg (Final poster design) +[23]: https://opensource.com/sites/default/files/uploads/posters-in-action-02.jpg (Final poster design) +[24]: https://box.raghukamath.com/cloud/index.php/s/97KPnTBP4QL4iCx +[25]: mailto:raghu@raghukamath.com?Subject=designing-posters-with-free-software +[26]: https://raghukamath.com/journal/designing-posters-with-free-software/ +[27]: https://raghukamath.com/ diff --git a/sources/tech/20190412 How libraries are adopting open source.md b/sources/tech/20190412 How libraries are adopting open source.md new file mode 100644 index 0000000000..2a8c8806e5 --- /dev/null +++ b/sources/tech/20190412 How libraries are adopting open source.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How libraries are adopting open source) +[#]: via: (https://opensource.com/article/19/4/software-libraries) +[#]: author: (Don Watkins https://opensource.com/users/don-watkins) + +How libraries are adopting open source +====== +Over the past decade, ByWater Solutions has expanded its business by +advocating for open source software. 
+![][1] + +Four years ago, I [interviewed Nathan Currulla][2], co-founder of ByWater Solutions, a major services and solutions provider for [Koha][3], a popular open source integrated library system (ILS). Since then, I've benefitted directly from his company's work, as my local [Chautauqua–Cattaraugus Library System][4] in western New York migrated from a proprietary software system to a [ByWater Systems][5]' Koha implementation. + +When I learned that ByWater is celebrating its 10th anniversary in 2019, I decided to reach out to Nathan to learn how the company has grown over the last decade. (Our remarks have been edited slightly for grammar and clarity.) + +**Don Watkins** : How has ByWater grown in the last 10 years? + +**Nathan Currulla** : Over the last 10 years, ByWater has grown by leaps and bounds. By the end of 2009, we supported five libraries with five contracts. That number shot up to 117 libraries made up of 46 contracts by the end of 2010. We now support over 1,500 libraries and 450+ contracts. We also went from having two team members to 25 in the past 10 years. The service-focused processes we have developed for migrating new libraries have been adopted by other library companies, and we have become a real market disruptor, putting pressure on other companies to provide better support and lower software subscription fees for libraries using their products. This was our goal from the outset, to change the way libraries work with the technology companies who support them, whomever they may be. + +Since the beginning, we have been rooted in the future, while legacy systems are still rooted in the past. Ten years ago, it was a real struggle for us to overcome the barriers presented by the fear of change in libraries and the outdated perceptions of open source in general. Now, although we still have to deal with change aversion, there are enough users to disprove any misinformation that exists regarding Koha and open source. The conversation is easier now than it ever was. That said, despite the fact that the ideals and morals held by open source are directly aligned with those of libraries, we still have a long way to go until open source technologies are the norm in this marketplace. + +**DW** : What kinds of libraries do you support? + +**NC** : Our partners are made up of a diverse set of library types. About 35% of our partners are public libraries, 35% are academic, and the remaining 30% are made up of museum, corporate, law, school, and other special library types. Because of Koha's flexibility and diverse feature set, we can successfully provide services to a variety of library types despite the current trend of consolidation in the library technology marketplace. + +**DW** : How does ByWater work with and help the Koha community? + +**NC** : We are working with the rest of the Koha community to streamline workflows and further improve the process of submitting and accepting new features into Koha. The vast majority of the community is made up of volunteers; by providing paid positions within the community, we can dedicate more time to the quality assurance and sign-off processes needed to stay competitive with other systems, both open source and proprietary. The number of new features submitted to the Koha community for each release is staggering. The more resources we have to get those features out to our users, the faster Koha can evolve and further shape the library-technology marketplace. 
+ +**DW** : When we talked in 2015, ByWater had recently partnered with library solutions provider [EBSCO][6]. What initiatives are you working on now with EBSCO? + +**NC** : Originally, Catalyst IT of New Zealand worked with EBSCO to create the EBSCO Discovery Service (EDS) plugin that is used by many of our customers. Unlike most discovery systems that sit on top of a library's online public access catalog (OPAC), Koha's integration with EDS uses the Koha OPAC as the frontend, with EDS feeding data into the Koha interface. This allows libraries to choose which interface they prefer (EDS or Koha as the frontend) and provides a unified library service platform (LSP). EBSCO has always been a great partner and has always shown a strong willingness to contribute to the open source initiative. They understand the importance of having fewer barriers between the ILS and the libraries' other content to provide a seamless interface to the end user. + +Outside of Koha, ByWater is working closely with EBSCO to provide implementation, training, and support services for its [Folio LSP][7]. Folio is an open source LSP for academic libraries with the intent to provide even more seamless integration with other content providers using an extensible, open app marketplace. ByWater is developing a separate department for the implementation and ongoing support of Folio, with EBSCO providing hosting services to our mutual customers. The fact that EBSCO is investing millions in the creation of an open source platform lends further credence to the importance and validity of open source technologies in the library market. + +**DW** : What other projects are you supporting? How do they complement Koha? + +**NC** : ByWater also supports Libki, an open source, web-based kiosk and print management solution; Coral, an open source electronic resource management (ERM) solution; and Folio. Libki and Coral seamlessly integrate with Koha to provide a unified LSP. Folio may work in cooperation with Koha on some functionality, but it is too early to tell what that will specifically look like. + +ByWater also offers Koha Klassmates, a program that provides free installations of Koha to over 40 library schools in the US to familiarize the next generation of librarians with open source and the tools they will use daily in the workforce. We are also rolling out a program called Koha University, which will mentor computer science students in writing and submitting code to Koha, one of the largest open source projects in the world. This will give them experience in working in such an environment and provide the opportunity for their names to be listed as official Koha contributors. + +**DW** : What is ByWater's strategic focus over the next five years? + +**NC** : ByWater will continue offering top-rated support to our ever-growing customer base while leveraging new open source opportunities to disprove misinformation surrounding the use of open source solutions in libraries. We will focus on making open source the norm and educating libraries that could be taking advantage of these technologies but do not because of outdated information and perceptions. + +Additionally, our research and development efforts will be focused on analyzing machine learning for advanced education and support services. We also want to work closely with our partners on advancing the marketing efforts (through software) for small and large libraries to help cement their roles as community centers by marketing inventory, programs, and library events. 
We want to be community builders on different levels, both for our partner libraries and with the open source communities that we are involved in. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/software-libraries + +作者:[Don Watkins (Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDUCATION_opencardcatalog.png?itok=f9PyJEe- +[2]: https://opensource.com/business/15/5/bywater-solutions-empowering-library-tech +[3]: http://www.koha.org/ +[4]: https://catalog.cclsny.org/ +[5]: https://bywatersolutions.com/ +[6]: https://www.ebsco.com/ +[7]: https://www.ebsco.com/products/ebsco-folio-library-services diff --git a/sources/tech/20190412 Joe Doss- How Do You Fedora.md b/sources/tech/20190412 Joe Doss- How Do You Fedora.md new file mode 100644 index 0000000000..bc642fb1d6 --- /dev/null +++ b/sources/tech/20190412 Joe Doss- How Do You Fedora.md @@ -0,0 +1,122 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Joe Doss: How Do You Fedora?) +[#]: via: (https://fedoramagazine.org/joe-doss-how-do-you-fedora/) +[#]: author: (Charles Profitt https://fedoramagazine.org/author/cprofitt/) + +Joe Doss: How Do You Fedora? +====== + +![Joe Doss][1] + +We recently interviewed Joe Doss on how he uses Fedora. This is part of a [series][2] on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the [feedback form][3] to express your interest in becoming a interviewee. + +### Who is Joe Doss? + +Joe Doss lives in Chicago, Illinois USA and his favorite food is pizza. He is the Director of Engineering Operations and Kenna Security, Inc. Doss describes his employer this way: “Kenna uses data science to help enterprises combine their infrastructure and application vulnerability data with exploit intelligence to measure risk, predict attacks and prioritize remediation.” + +His first Linux distribution was Red Hat Linux 5. A friend of his showed him a computer that wasn’t running Windows. Doss thought it was just a program to install on Windows when his friend gave him a Red Hat Linux 5 install disk. “I proceeded to install this Linux ‘program’ on my Father’s PC,” he says. Luckily for Doss, his father supported his interest in computers. “I ended up totally wiping out the Windows 95 install as a result and this was how I got my first computer.” + +At Kenna, Doss’ group makes use of Fedora and [Ansible][4]: “We run Fedora Cloud in multiple VPC deployments in AWS and Google Compute with over 200 virtual machines. We use Ansible to automate everything we do with Fedora.” + +Doss brews beer at home and contributes to open source in his free time. He also has a cat named Tibby. “I rescued Tibby off the street the Hyde Park neighborhood of Chicago when she was 7 months old. 
She is not very smart, but she makes up for that with cuteness.” His favorite place to visit is his childhood home of Michigan, but Doss says, “anywhere with a warm beach, a cool drink, and the ocean is pretty nice too.”

![Tibby the cute cat!][5]

### The Fedora community

Doss became involved with Fedora and the Fedora community through his job at Kenna Security. When he first joined the company they were using Ubuntu and Chef in production. There was a desire to make the infrastructure more reproducible and reliable, and he says, “I was able to greenfield our deployments with Fedora Cloud and Ansible.” This project got him involved in the Fedora Cloud release.

When asked about his first impression of the Fedora community, Doss said, “Overwhelming to be honest. There is so much going on and it is hard to figure out who are the stakeholders of each part of Fedora.” Once he figured out who he needed to talk to, he found the community very welcoming and super supportive.

One of the ideas he had to improve the community was to unite the various projects and teams under one bug tracking tool and community resource. “Pagure, Bugzilla, Github, Fedora Forums, Discourse Forums, Mailing lists… it is all over the place and hard to navigate at first.” Despite the initial complexity of becoming familiar with the Fedora Project, Doss feels it is amazingly rewarding to be involved. “It feels awesome to be a part of a Linux distro that impacts so many people in very positive ways. You can make a difference.”

Doss called out Dusty Mabe at Red Hat for helping him become involved, saying Dusty “has been an amazing mentor and resource for enabling me to contribute back to Fedora.”

Doss has an interesting way of explaining to non-technical friends what he does. “Imagine changing the tires on a very large bus while it is going down the highway at 70 MPH and sometimes you need to get involved with the tire manufacturer to help make this process work well.” This metaphor helps people understand what it is like to replace 200-plus VMs across more than five production VPCs in AWS and Google Compute with every Fedora release.

Doss drew my attention to one specific incident with Fedora 29 and Vagrant. “Recently we encountered an issue where Vagrant wouldn’t set the hostname on a fresh Fedora 29 Beta VM. This was due to Fedora 29 Cloud no longer shipping the network service stub in favor of NetworkManager. This led to me working with a colleague at Kenna Security to send a patch upstream to the Vagrant project to help their developers produce a fix for Fedora 29. Vagrant usage with Fedora is a very large part of our development cycle at Kenna, and having this broken before the Fedora 29 release would have impacted us a lot.” As Doss said, “Sometimes you need to help make the tires before they go on the bus.”

Doss is the [COPR][6] Fedora, RHEL, and CentOS package maintainer for [WireGuard VPN][7]. “The CentOS repo just went over 60 thousand downloads last month which is pretty awesome.”

### What Hardware?

Doss uses Fedora 29 Cloud in over five VPC deployments in AWS and Google Compute. At home he has a SuperMicro SYS-5019A-FTN4 1U Server that runs Fedora 29 Server with Openshift OKD installed on it. His laptops are all Lenovo. “For Laptops I use a ThinkPad T460s for work and a ThinkPad 25 at home. Both have Fedora 29 installed. ThinkPads are the best with Fedora.”

### What Software?

Doss uses GNOME 3 as his preferred desktop on Fedora Workstation.
“I use Sublime Text 3 for my text editor on the desktop or vim on servers.” For development and testing he uses Vagrant. “Ansible is what I use for any kind of automation with Fedora. I maintain an [Ansible playbook][8] for setting up my workstation.”

### Ansible

I asked Doss if he had advice for people trying to learn Ansible.

“Start small. Automate the stuff that makes your life easier, but don’t over complicate it. [Ansible Galaxy][9] is a great resource to get things done quickly, but if you truly want to learn how to use Ansible, writing your own roles and playbooks is the path I would take.

“I have helped a lot of my coworkers that have joined my Operations team at Kenna get up to speed on using Ansible by buying them a copy of [Ansible for Devops][10] by Jeff Geerling. This book will give anyone new to Ansible the foundation they need to start using it everyday. #ansible on Freenode is a great resource as well along with the [official Ansible docs][11].”

Doss also said, “Knowing what to automate is most likely the most difficult thing to master without over complicating things. Debugging complex playbooks and roles is a close second.”

### Home lab

He recommended setting up a home lab. “At Kenna and at home I use [Vagrant][12] with the [Vagrant-libvirt plugin][13] for developing Ansible roles and playbooks. You can iterate quickly to build your roles and playbooks on your laptop with your favorite editor and run _vagrant provision_ to run your playbook. Quick feedback loop and the ability to burn down your Vagrant VM and start over quickly is an amazing workflow. Below is a sample Vagrant file that I keep handy to spin up a Fedora VM to test my playbooks.”

```
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|
  config.vm.provision "shell", inline: "dnf install nfs-utils rpcbind @development-tools @ansible-node redhat-rpm-config gcc-c++ -y"
  config.ssh.forward_agent = true

  config.vm.define "f29", autostart: false do |f29|
    f29.vm.box = "fedora/29-cloud-base"
    f29.vm.hostname = "f29.example.com"
    f29.vm.provider "libvirt" do |vm|
      vm.memory = 2048
      vm.cpus = 2
      vm.driver = "kvm"
      vm.nic_model_type = "e1000"
    end

    config.vm.synced_folder '.', '/vagrant', disabled: true

    config.vm.provision "ansible" do |ansible|
      ansible.groups = {
      }
      ansible.playbook = "playbooks/main.yml"
      ansible.inventory_path = "inventory/development"
      ansible.extra_vars = {
        ansible_python_interpreter: "/usr/bin/python3"
      }
      # ansible.verbose = 'vvv'
    end
  end
end
```

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/joe-doss-how-do-you-fedora/

作者:[Charles Profitt][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://fedoramagazine.org/author/cprofitt/
[b]: https://github.com/lujun9972
[1]: https://fedoramagazine.org/wp-content/uploads/2019/03/IMG_20181029_121944-816x345.jpg
[2]: https://fedoramagazine.org/tag/how-do-you-fedora/
[3]: https://fedoramagazine.org/submit-an-idea-or-tip/
[4]: https://ansible.com
[5]: https://fedoramagazine.org/wp-content/uploads/2019/04/IMG_20181231_110920_fixed.jpg
[6]: https://copr.fedorainfracloud.org/coprs/jdoss/wireguard/
[7]: https://www.wireguard.com/install/
[8]: https://github.com/jdoss/fedora-workstation
[9]: https://galaxy.ansible.com/
[10]: https://www.ansiblefordevops.com/
+[11]: https://docs.ansible.com/ansible/latest/index.html +[12]: http://www.vagrantup.com/ +[13]: https://github.com/vagrant-libvirt/vagrant-libvirt%20plugin diff --git a/sources/tech/20190412 Linux Server Hardening Using Idempotency with Ansible- Part 2.md b/sources/tech/20190412 Linux Server Hardening Using Idempotency with Ansible- Part 2.md new file mode 100644 index 0000000000..1e1b451500 --- /dev/null +++ b/sources/tech/20190412 Linux Server Hardening Using Idempotency with Ansible- Part 2.md @@ -0,0 +1,116 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 2) +[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-2) +[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie) + +Linux Server Hardening Using Idempotency with Ansible: Part 2 +====== + +![][1] + +[Creative Commons Zero][2] + +In the first part of this series, we introduced something called idempotency, which can provide the ongoing improvements to your server estate’s security posture. In this article, we’ll get a little more hands-on with a look at some specific Ansible examples. + +### Shopping List + +You will need some Ansible experience before being able to make use of the information that follows. Rather than run through the installation and operation of Ansible let’s instead look at some of the idempotency playbook’s content. + +As mentioned earlier there might be hundreds of individual system tweaks to make on just one type of host so we’ll only explore a few suggested Ansible tasks and how I like to structure the Ansible role responsible for the compliance and hardening. You have hopefully picked up on the fact that the devil is in the detail and you should absolutely, unequivocally, understand to as high a level of detail as possible, about the permutations of making changes to your server OS. + +Be aware that I will mix and match between OSs in the Ansible examples that follow. Many examples are OS agnostic but as ever you should pay close attention to the detail. Obvious changes like “apt” to “yum” for the package manager is a given. + +Inside a “tasks” file under our Ansible “hardening” role, or whatever you decide to name it, these named tasks represent the areas of a system with some example code to offer food for thought. In other words, each section that follows will probably be a single YAML file, such as “accounts.yml”, and each will have with varying lengths and complexity. + +Let’s look at some examples with ideas about what should go into each file to get you started. The contents of each file that follow are just the very beginning of a checklist and the following suggestions are far from exhaustive. + +#### SSH Server + +This is the application that almost all engineers immediately look to harden when asked to secure a server. It makes sense as SSH (the OpenSSH package in many cases) is usually only one of a few ports intentionally prised open and of course allows direct access to the command line. The level of hardening that you should adopt is debatable. I believe in tightening the daemon as much as possible without disruption and would usually make around fifteen changes to the standard OpenSSH server config file, “sshd_config”. 
These changes would include pulling in a MOTD banner (Message Of The Day) for legal compliance (warning of unauthorised access and prosecution), enforcing the permissions on the main SSHD files (so they can’t be tampered with by lesser-privileged users), ensuring the “root” user can’t log in directly, setting an idle session timeout and so on.

Here’s a very simple Ansible example that you can repeat within other YAML files later on, focusing on enforcing file permissions on our main, critical OpenSSH server config file. Note that you should carefully check every single file that you hard-reset permissions on before doing so. This is because there are horrifyingly subtle differences between Linux distributions. Believe me when I say that it’s worth checking first.

```
- name: Hard reset permissions on sshd server file
  file: owner=root group=root mode=0600 path=/etc/ssh/sshd_config
```

To check existing file permissions I prefer this natty little command for the job:

```
$ stat -c "%a %n" /etc/ssh/sshd_config

644 /etc/ssh/sshd_config
```

As our “stat” command shows, our Ansible snippet would be an improvement on the current permissions, because 0600 means only the “root” user can read and write to that file. Other users or groups can’t even read that file, which is of benefit because if we’ve made any mistakes in securing SSH’s config they can’t be discovered as easily by less-privileged users.

#### System Accounts

At a simple level this file might define how many users should be on a standard server. Usually a number of users who are admins have home directories with public keys copied into them. However this file might also include performing simple checks that the root user is the only system user with the all-powerful superuser UID 0, in case an attacker has altered user accounts on the system for example.

#### Kernel

Here’s a file that can grow arms and legs. Typically I might affect between fifteen and twenty sysctl changes on an OS which I’m satisfied won’t be disruptive to current and, all going well, any future uses of a system. These changes are again at your discretion and, at my last count (as there are between five hundred and a thousand configurable kernel options using sysctl on a Debian/Ubuntu box), you might opt to split these many changes up into different categories.

Such categories might include network stack tuning, stopping core dumps from filling up disk space, disabling IPv6 entirely and so on. Here’s an Ansible example of logging network packets that shouldn’t be routed out onto the Internet, namely those packets using spoofed private IP addresses, called “martians”.

```
- name: Keep track of traffic that should not be routed onto the Internet
  lineinfile: dest="/etc/sysctl.conf" line="{{ item.network }}" state=present
  with_items:
    - { network: 'net.ipv4.conf.all.log_martians = 1' }
    - { network: 'net.ipv4.conf.default.log_martians = 1' }
```

Pay close attention that you probably don’t want to use the file “/etc/sysctl.conf” but create a custom file under the directory “/etc/sysctl.d/” or similar. Again, check your OS’s preference, usually in the comments of the pertinent files. If you’ve not seen martian packets being logged before then type “dmesg” (sometimes only as the “root” user) to view kernel messages, and after a week or two of logging being in place you’ll probably see some traffic polluting your logs. It’s much better to know how attackers are probing your servers than not.
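If you do decide to keep these settings out of “/etc/sysctl.conf”, the stock “sysctl” module can write to a drop-in file for you and reload the values in one step. The snippet below is only a sketch of that idea; the file name “60-hardening.conf” is my own arbitrary choice rather than anything this article prescribes, so rename it to suit your conventions and confirm that your distribution actually reads “/etc/sysctl.d/” before relying on it:

```
- name: Log martian packets via a drop-in file under /etc/sysctl.d/
  sysctl:
    name: "{{ item }}"
    value: "1"
    sysctl_file: /etc/sysctl.d/60-hardening.conf
    sysctl_set: yes
    state: present
    reload: yes
  with_items:
    - net.ipv4.conf.all.log_martians
    - net.ipv4.conf.default.log_martians
```

The advantage of a dedicated drop-in file is that every hardening run rewrites a file which is clearly owned by your playbook, rather than nibbling away at a config file that the OS vendor also maintains.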
A few log entries kept for reference can only be of value. When it comes to looking after servers, ignorance is certainly not bliss.

#### Network

As mentioned, you might want to include hardening the network stack within your kernel.yml file, depending on whether there are many entries or not, or simply for greater clarity. For your network.yml file, have a think about stopping old-school broadcast attacks flooding your LAN and, in addition, ICMP oddities from changing your routing.

#### Services

Usually I would stop or start miscellaneous system services (and potentially applications) within this Ansible file. If there weren’t many services then, rather than also using a “cron.yml” file specifically for “cron” hardening, I’d include those here too.

There’s a bundle of changes you can make around cron’s file permissions etc. If you haven’t come across it, on some OSs there’s a “cron.deny” file for example which blacklists certain users from accessing the “crontab” command. Additionally you also have a multitude of cron directories under the “/etc” directory which need permissions enforced and improved, indeed along with the file “/etc/crontab” itself. Once again, check with your OS’s current settings before altering these or “bad things” ™ might happen to your uptime.

In terms of miscellaneous services being purposefully stopped, and certain services, such as system logging, which are imperative to a healthy and secure system, have a quick look at the Ansible below which I might put in place for syslog as an example.

```
- name: Insist syslog is definitely installed (so we can receive upstream logs)
  apt: name=rsyslog state=present

- name: Make sure that syslog starts after a reboot
  service: name=rsyslog state=started enabled=yes
```

#### IPtables

The venerable Netfilter, which from within the Linux kernel offers the IPtables software firewall the ability to filter network packets in an exceptionally sophisticated manner, is a must if you can enable it sensibly. If you’re confident that each of your varying flavours of servers (whether it’s a webserver, database server and so on) can use the same IPtables config, then copy a file onto the filesystem via Ansible and make sure it’s always loaded up using this YAML file.

Next time, we’ll wrap up our look at specific system suggestions and talk a little more about how the playbook might be used.

Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks.
You can find out more about DevSecOps, containers and Linux security on his website: [https://www.devsecops.cc][3] + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-2 + +作者:[Chris Binnie][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/chrisbinnie +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/artificial-intelligence-3382507_1280.jpg?itok=PHazitpd +[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO +[3]: https://www.devsecops.cc/ diff --git a/sources/tech/20190414 Working with Microsoft Exchange from your Linux Desktop.md b/sources/tech/20190414 Working with Microsoft Exchange from your Linux Desktop.md new file mode 100644 index 0000000000..657464affb --- /dev/null +++ b/sources/tech/20190414 Working with Microsoft Exchange from your Linux Desktop.md @@ -0,0 +1,100 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Working with Microsoft Exchange from your Linux Desktop) +[#]: via: (https://itsfoss.com/microsoft-exchange-linux-desktop/) +[#]: author: (It's FOSS Community https://itsfoss.com/author/itsfoss/) + +Working with Microsoft Exchange from your Linux Desktop +====== + +Recently I had to do some research (and even magic) to be able to work on my Ubuntu Desktop with Exchange Mail Server from my current employer. I am going to share my experience with you. + +### Microsoft Exchange on Linux desktop + +I guess many readers might feel confused, I mean, it shouldn’t be that hard if you simply use [Thunderbird][1] or any other [Linux email client][2] with your Office365 Exchange Account, right? Well, for better or for worse it was not this case for me. + +Here’s my ordeal and what I did to make Microsoft Exchange work on my Linux desktop. + +![][3] + +#### The initial problem, no Office365 + +The first problem encountered in my situation was that we don’t currently use Office365 like probably majority of current people does for hosting their Exchange accounts, we currently use an on premises Exchange server and a very old version of it. + +So, this means I didn’t have the luxury of using automatic configuration that comes in majority of email clients to simply connect to Office365. + +#### Webmail is always an option… right? + +Short answer is yes, however, as I mentioned we are using Exchange 2010, so the webmail interface is not only outdated, it even won’t allow you to have a decent email signature as it has a limit of characters in webmail configuration, so I needed to use an email client if I really wanted to be able to use the email the way I needed. + +#### Another problem, I am picky for my email client + +I am a regular Google user, I have been using GMail for the past 14 years as my personal email, so I really like how it looks and works. I actually use the webmail as I don’t like to be tied to my email client or even my computer device, if something happens and I need to switch to a newer device I don’t want to have to copy things over, I just want things to be there waiting for me to use them. + +This leads me not liking Thunderbird, K-9 or Evolution Mail clients. 
All of these are capable of being connected to Exchange servers (one way or another), but again, they don’t meet the standard of a clean, easy and modern GUI that I wanted. On top of that, they couldn’t even manage my Exchange calendar well, which was a real deal breaker for me.

#### Found some options as email clients!

After some more research I found there were a couple of options for email clients that I could use and that actually would work the way I expected.

These were: [Hiri][4], which has a very modern and innovative user interface and Exchange Server capabilities, and [Mailspring][5], which is a fork of an old foe ([Nylas Mail][6]) and which was my real favorite.

However, Mailspring couldn’t connect directly to an Exchange server (using Exchange’s protocol) unless you use Office365. It requires [IMAP][7] (another luxury!), and the IT department at my office was reluctant to activate IMAP for “security reasons”.

Hiri is a good option but it’s not free.

#### No IMAP, no Office365, game over? Not yet!

I have to confess, I was really ready to give up and simply use the old webmail and learn to live with it. However, I gave my research skills one last shot and found a possible solution: what if I had a way to put a “man in the middle”? What if I was able to make IMAP run locally on my computer while my computer simply pulled the emails via the Exchange protocol? It was a long shot, but it could work…

So I started looking here and there and found [DavMail][8], which works as a gateway to “talk” to an Exchange server and then locally provide you with whatever you need in order to use it. Basically it acts like a “translator” between my computer and the Exchange server, providing me with whatever service I needed.

![DavMail Settings][9]

So basically I only had to give DavMail my Exchange server’s URL (even the OWA URL) and set whatever ports I wanted on my local computer to be the new ports where my email client could connect.

This way I was free to use basically ANY client I wanted; at least, any client capable of using the IMAP protocol would work, as long as I configured it against the same local ports I had set up.

![Mailspring working with my office’s on-premises Exchange. Information has been blurred due to a non-disclosure agreement at my office.][10]

And that was it! I was able to use Mailspring (which is my preferred choice of email client) under my unfavorable conditions.

#### Bonus point: this is a multi-platform solution!

What’s best is that this solution will work on any platform! So if you have the same problem while using Windows or macOS, DavMail has a version for all tastes!

![avatar][11]

### Helder Martins

Systems Engineer, technology evangelist, Ubuntu user, Linux enthusiast, father and husband.
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/microsoft-exchange-linux-desktop/ + +作者:[It's FOSS Community][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/itsfoss/ +[b]: https://github.com/lujun9972 +[1]: https://www.thunderbird.net/en-US/ +[2]: https://itsfoss.com/best-email-clients-linux/ +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/microsoft-exchange-linux-desktop.png?resize=800%2C450&ssl=1 +[4]: https://www.hiri.com/ +[5]: https://getmailspring.com/ +[6]: https://itsfoss.com/n1-open-source-email-client/ +[7]: https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol +[8]: http://davmail.sourceforge.net/ +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/davmail-exchange-settings.png?resize=800%2C597&ssl=1 +[10]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/davmail-exchange-settings-1.jpg?ssl=1 +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/helder-martins-1.jpeg?ssl=1 diff --git a/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md b/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md new file mode 100644 index 0000000000..f33d614f86 --- /dev/null +++ b/sources/tech/20190415 Blender short film, new license for Chef, ethics in open source, and more news.md @@ -0,0 +1,75 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blender short film, new license for Chef, ethics in open source, and more news) +[#]: via: (https://opensource.com/article/15/4/news-april-15) +[#]: author: (Joshua Allen Holm https://opensource.com/users/holmja) + +Blender short film, new license for Chef, ethics in open source, and more news +====== +Here are some of the biggest headlines in open source in the last two +weeks +![][1] + +In this edition of our open source news roundup, we take a look at the 12th Blender short film, Chef shifts away from open core toward a 100% open source license, SuperTuxKart's latest release candidate with online multiplayer support, and more. + +### Blender Animation Studio releases Spring + +[Spring][2], the latest short film from [Blender Animation Studio][3], premiered on April 4th. The [press release on Blender.org][4] describes _Spring_ as "the story of a shepherd girl and her dog, who face ancient spirits in order to continue the cycle of life." The development version of Blender 2.80, as well as other open source tools, were used to create this animated short film. The character and asset files for the film are available from [Blender Cloud][5], and tutorials, walkthroughs, and other instructional material are coming soon. + +### The importance of ethics in open source + +Reuven M. Lerner, writing for [Linux Journal][6], shares his thoughts about need for teaching programmers about ethics in an article titled [Open Source Is Winning, and Now It's Time for People to Win Too][7]. Part retrospective looking back at the history of open source and part call to action for moving forward, Lerner's article discusses many issues relevant to open source beyond just coding. 
He argues that when we teach kids about open source "[w]e also need to inform them of the societal parts of their work, and the huge influence and power that today's programmers have." He continues by stating "It's sometimes okay—and even preferable—for a company to make less money deliberately, when the alternative would be to do things that are inappropriate or illegal." Overall a very thought-provoking piece, Lerner makes a solid case for making sure to remember that the open source movement is about more than free code. + +### Chef transitions from open core to open source + +Chef, the company behind the well-known DevOps automation tool, [announced][8] that they will be release 100% of their software as open source under an Apache 2.0 license. This move marks a departure from their current [open core model][9]. Given a tendency for companies to try to move in the opposite direction, Chef's move is a big one. By operating under a fully open source model Chef builds a better, stronger relationship with the community, and the community benefits from full access to all the source code. Even developers of competing projects (and the commercial projects based on those products) benefit from being able to learn from Chef's code, as Chef can do from its open source competitors, which is one of the greatest advantages of open source; the best ideas get to win and business relationships are built around trust and quality of service, not proprietary secrets. For a more detailed look at this development, read Steven J. Vaughan-Nichols's [article for ZDNet][10]. + +### SuperTuxKart releases version 0.10 RC1 for testing + +SuperTuxKart, the open source Mario Kart clone featuring open source mascots, is getting very close to releasing a version that supports online multi-player. On April 5th, the SuperTuxKart blog announced the release of [SuperTuxKart 0.10 Release Candidate 1][11], which needs testing before the final release. Users who want to help test the online and LAN multiplayer options can [download the game from SourceForge][12]. In addition to the new online and LAN features, SuperTuxKart 0.10 features a couple new tracks to race on; Ravenbridge Mansion replaces the old Mansion track, and Black Forest, which was an add-on track in earlier versions, is now part of the official track set. 
+ +#### In other news + + * [My code is your code: Embracing the power of open sourcing][13] + * [FOSS means kids can have a big impact][14] + * [Open-source textbooks lighten students’ financial load][15] + * [Developing the ultimate open source radio control transmitter][16] + * [How does open source tech transform Government?][17] + + + +_Thanks, as always, to Opensource.com staff members and moderators for their help this week._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/15/4/news-april-15 + +作者:[Joshua Allen Holm (Community Moderator)][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/holmja +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/weekly_news_roundup_tv.png?itok=B6PM4S1i +[2]: https://www.youtube.com/watch?v=WhWc3b3KhnY (Spring) +[3]: https://blender.studio/ (Blender Animation Studio) +[4]: https://www.blender.org/press/spring-open-movie/ (Spring Open Movie) +[5]: https://cloud.blender.org/p/spring/ (Spring on Blender Cloud) +[6]: https://www.linuxjournal.com/ (Linux Journal) +[7]: https://www.linuxjournal.com/content/open-source-winning-and-now-its-time-people-win-too (Open Source Is Winning, and Now It's Time for People to Win Too) +[8]: https://blog.chef.io/2019/04/02/chef-software-announces-the-enterprise-automation-stack/ (Introducing the New Chef: 100% Open, Always) +[9]: https://en.wikipedia.org/wiki/Open-core_model (Wikipedia: Open-core model) +[10]: https://www.zdnet.com/article/leading-devops-program-chef-goes-all-in-with-open-source/ (Leading DevOps program Chef goes all in with open source) +[11]: http://blog.supertuxkart.net/2019/04/supertuxkart-010-release-candidate-1.html (SuperTuxKart 0.10 Release Candidate 1 Released) +[12]: https://sourceforge.net/projects/supertuxkart/files/SuperTuxKart/0.10-rc1/ (SourceForge: SuperTuxKart) +[13]: https://www.forbes.com/sites/forbestechcouncil/2019/04/10/my-code-is-your-code-embracing-the-power-of-open-sourcing/ (My code is your code: Embracing the power of open sourcing) +[14]: https://www.linuxjournal.com/content/foss-means-kids-can-have-big-impact (FOSS means kids can have a big impact) +[15]: https://www.schoolnewsnetwork.org/2019/04/09/open-source-textbooks-lighten-students-financial-load/ (Open-source textbooks lighten students’ financial load) +[16]: https://hackaday.com/2019/04/03/developing-the-ultimate-open-source-radio-control-transmitter/ (Developing the ultimate open source radio control transmitter) +[17]: https://www.openaccessgovernment.org/open-source-tech-transform/62059/ (How does open source tech transform Government?) 
diff --git a/sources/tech/20190416 Linux Server Hardening Using Idempotency with Ansible- Part 3.md b/sources/tech/20190416 Linux Server Hardening Using Idempotency with Ansible- Part 3.md new file mode 100644 index 0000000000..50f4981c08 --- /dev/null +++ b/sources/tech/20190416 Linux Server Hardening Using Idempotency with Ansible- Part 3.md @@ -0,0 +1,118 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux Server Hardening Using Idempotency with Ansible: Part 3) +[#]: via: (https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-3) +[#]: author: (Chris Binnie https://www.linux.com/users/chrisbinnie) + +Linux Server Hardening Using Idempotency with Ansible: Part 3 +====== + +![][1] + +[Creative Commons Zero][2] + +In the previous articles, we introduced idempotency as a way to approach your server’s security posture and looked at some specific Ansible examples, including the kernel, system accounts, and IPtables. In this final article of the series, we’ll look at a few more server-hardening examples and talk a little more about how the idempotency playbook might be used. + +#### **Time** + +Due to its reduced functionality, and therefore attack surface, the preference amongst a number of OSs has been to introduce “chronyd” over “ntpd”. If you’re new to “chrony” then fret not. It’s still using the NTP (Network Time Protocol) that we all know and love but in a more secure fashion. + +The first thing I do with Ansible within the “chrony.conf” file is alter the “bind address” and if my memory serves there’s also a “command port” option. These config options allow Chrony to only listen on the localhost. In other words you are still syncing as usual with other upstream time servers (just as NTP does) but no remote servers can query your time services; only your local machine has access. + +There’s more information on the “bindcmdaddress 127.0.0.1” and “cmdport 0” on this Chrony page () under “2.5. How can I make chronyd more secure?” which you should read for clarity. This premise behind the comment on that page is a good idea: “you can disable the internet command sockets completely by adding cmdport 0 to the configuration file”. + +Additionally I would also focus on securing the file permissions for Chrony and insist that the service starts as expected just like the syslog config above. Otherwise make sure that your time sources are sane, have a degree of redundancy with multiple sources set up and then copy the whole config file over using Ansible. + +#### **Logging** + +You can clearly affect the level of detail included in the logs from a number pieces of software on a server. Thinking back to what we’ve looked at in relation to syslog already you can also tweak that application’s config using Ansible to your needs and then use the example Ansible above in addition. + +#### **PAM** + +Apparently PAM (Pluggable Authentication Modules) has been a part of Linux since 1997. It is undeniably useful (a common use is that you can force SSH to use it for password logins, as per the SSH YAML file above). It is extensible, sophisticated and can perform useful functions such as preventing brute force attacks on password logins using a clever rate limiting system. The syntax varies a little between OSes but if you have the time then getting PAM working well (even if you’re only using SSH keys and not passwords for your logins) is a worthwhile effort. 
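Even before you tackle rate limiting, it is worth making sure that the PAM stack itself can’t be quietly edited by anybody other than the “root” user, in the same spirit as the sshd_config permissions task from the previous article in this series. The two tasks below are only a sketch of that idea; the 0644 mode matches common distribution defaults, but check what your own OS ships with first, exactly as advised elsewhere in this series:

```
- name: Collect the PAM configuration files present on the host
  find:
    paths: /etc/pam.d
    file_type: file
  register: pam_files

- name: Make sure PAM configuration files are owned by root and not writable by other users
  file:
    path: "{{ item.path }}"
    owner: root
    group: root
    mode: '0644'
  with_items: "{{ pam_files.files }}"
```

Because the “find” task queries the managed host itself, the loop stays accurate even when different distributions ship different sets of files under “/etc/pam.d”.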
Attackers like to hide their own users on a system among lots of legitimate usernames; something innocuous such as “webadmin” or similar might be easy to miss on a server, and PAM can help you out in this respect.

#### **Auditd**

We’ve looked at logging a little already, but what about capturing every “system call” that a kernel makes? The Linux kernel is a super-busy component of any system, and logging almost every single thing that a system does is an excellent way of providing post-event forensics. This article will hopefully shed some light on where to begin: . Note the comments in that article about performance; there’s little point in paying extra for compute and disk IO resource because you’ve misconfigured your logging, so spending some time getting it correct would be my advice.

For concerns over disk space I will usually change a few lines in the file “/etc/audit/auditd.conf” in order to prevent, firstly, too many log files being created and, secondly, logs that grow very large without being rotated. This is also on the proviso that logs are being ingested upstream via another mechanism too. Clearly the file permissions and the service starting are also the basics you need to cover here too. Generally file permissions for auditd are tight as it’s a “root” oriented service, so there are fewer changes needed here generally.

#### **Filesystems**

With a little reading you can discover which filesystems are made available to your OS by default. You should disable these (at the “modprobe.d” file level) with Ansible to prevent weird and wonderful things being attached unwittingly to your servers. You are reducing the attack surface with this approach. The Ansible might look something like this below for example.

```
- name: Make sure filesystems which are not needed are forced as off
  lineinfile: dest="/etc/modprobe.d/harden.conf" line='install squashfs /bin/true' state=present
```

#### **SELinux**

The old, but sometimes avoided due to complexity, security favourite, SELinux, should be set to “enforcing” mode. Or, at the very least, set to log sensibly using “permissive” mode. Permissive mode will at least fill your auditd logs up with any correct rule matches nicely. In terms of what the Ansible looks like, it’s simple and is along these lines:

```
- name: Configure SELinux to be running in permissive mode
  replace: path="/etc/selinux/config" regexp='SELINUX=disabled' replace='SELINUX=permissive'
```

#### **Packages**

Needless to say, the compliance hardening playbook is also a good place to upgrade all the packages (with some selective exclusions) on the system. Pay attention to the section relating to reboots and idempotency in a moment however. With other mechanisms in place you might not want to update packages here but instead as per the Automation Documents article mentioned in a moment.

### **Idempotency**

Now we’ve run through some of the aspects you would want to look at when hardening a server, let’s think a little more about how the playbook might be used.

When it comes to cloud platforms, most of my professional work has been on AWS and therefore, more often than not, a fresh AMI is launched and then a playbook is run over the top of it. There’s a mountain of detail in one way of doing that in this article () which you may be pleased to discover accommodates a mechanism to spawn a script or playbook.
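Sticking with that idea of running a playbook over a freshly launched image, it helps to see what “safe to re-run” looks like in practice. One common way to express “only do the heavy lifting on the very first pass” is a marker file checked with the “stat” module. The sketch below is purely illustrative; the script path and the marker file name are my own inventions rather than anything prescribed by this series, so substitute whatever bootstrap step and naming convention you actually use:

```
- name: Check whether the one-off hardening bootstrap has already been recorded
  stat:
    path: /var/local/.hardening-bootstrap-done
  register: bootstrap_marker

- name: Run the one-off bootstrap only when no marker file is present (hypothetical script path)
  command: /usr/local/sbin/bootstrap-hardening.sh
  when: not bootstrap_marker.stat.exists

- name: Write the marker file so that subsequent runs skip the bootstrap
  copy:
    dest: /var/local/.hardening-bootstrap-done
    content: "hardening bootstrap completed\n"
  when: not bootstrap_marker.stat.exists
```

Every other task in the playbook should simply describe the desired end state so that Ansible’s own modules keep repeat runs harmless.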
+ +It is important to note, when it comes to idempotency, that it may take a little more effort initially to get your head around the logic involved in being able to re-run Ansible repeatedly without disturbing the required status quo of your server estate. + +One thing to be absolutely certain of however (barring rare edge cases) is that after you apply your hardening for the very first time, on a new AMI or server build, you will require a reboot. This is an important element due to a number of system facets not being altered correctly without a reboot. These include applying kernel changes so alterations become live, writing auditd rules as immutable config and also starting or stopping services to improve the security posture. + +Note though that you’re probably not going to want to execute all plays in a playbook every twenty or thirty minutes, such as updating all packages and stopping and restarting key customer-facing services. As a result you should factor the logic into your Ansible so that some tasks only run once initially and then maybe write a “completed” placeholder file to the filesystem afterwards for referencing. There’s a million different ways of achieving a status checker. + +The nice thing about Ansible is that the logic for rerunning playbooks is implicit and unlike shell scripts which for this type of task can be arduous to code the logic into. Sometimes, such as updating the GRUB bootloader for example, trying to guess the many permutations of a system change can be painful. + +### **Bedtime Reading** + +I still think that you can’t beat trial and error when it comes to computing. Experience is valued for good reason. + +Be warned that you’ll find contradictory advice sometimes from the vast array of online resources in this area. Advice differs probably because of the different use cases. The only way to harden the varying flavours of OS to my mind is via a bespoke approach. This is thanks to the environments that servers are used within and the requirements of the security framework or standard that an organisation needs to meet. + +For OS hardening details you can check with resources such as the NSA ([https://www.nsa.gov][3]), the Cloud Security Alliance (), proprietary training organisations such as GIAC ([https://www.giac.org][4]) who offer resources (), the diverse CIS Benchmarks ([https://www.cisecurity.org][5]) for industry consensus-based benchmarking, the SANS Institute (), NIST’s Computer Security Research ([https://csrc.nist.gov][6]) and of course print media too. + +### **Conclusion** + +Hopefully, you can see how powerful an idempotent server infrastructure is and are tempted to try it for yourself. + +The ever-present threat of APT (Advanced Persistent Threat) attacks on infrastructure, where a successful attacker will sit silently monitoring events and then when it’s opportune infiltrate deeper into an estate, makes this type of configuration highly valuable. + +The amount of detail that goes into the tests and configuration changes is key to the value that such an approach will bring to an organisation. Like the tests in a CI/CD pipeline they’re only as ever as good as their coverage. + +Chris Binnie’s latest book, Linux Server Security: Hack and Defend, shows you how to make your servers invisible and perform a variety of attacks. 
You can find out more about DevSecOps, containers and Linux security on his website: [https://www.devsecops.cc][7] + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/linux-server-hardening-using-idempotency-ansible-part-3 + +作者:[Chris Binnie][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/chrisbinnie +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/tech-1495181_1280.jpg?itok=5WcwApNN +[2]: /LICENSES/CATEGORY/CREATIVE-COMMONS-ZERO +[3]: https://www.nsa.gov/ +[4]: https://www.giac.org/ +[5]: https://www.cisecurity.org/ +[6]: https://csrc.nist.gov/ +[7]: https://www.devsecops.cc/ diff --git a/sources/tech/20190417 How to use Ansible to document procedures.md b/sources/tech/20190417 How to use Ansible to document procedures.md new file mode 100644 index 0000000000..51eddfe92c --- /dev/null +++ b/sources/tech/20190417 How to use Ansible to document procedures.md @@ -0,0 +1,132 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to use Ansible to document procedures) +[#]: via: (https://opensource.com/article/19/4/ansible-procedures) +[#]: author: (Marco Bravo https://opensource.com/users/marcobravo/users/shawnhcorey/users/marcobravo) + +How to use Ansible to document procedures +====== +In Ansible, the documentation is the playbook, so the documentation +naturally evolves alongside the code +![][1] + +> "Documentation is a love letter that you write to your future self." —[Damian Conway][2] + +I use [Ansible][3] as my personal notebook for documenting coding procedures—both the ones I use often and the ones I rarely use. This process facilitates my work and reduces the time it takes to do repetitive tasks, the ones where specific commands in a certain sequence are executed to accomplish a specific result. + +By documenting with Ansible, I don't need to memorize all the parameters for each command or all the steps involved with a specific procedure, and it's easy to share the details with my teammates. + +Traditional approaches for documentation, like wikis or shared drives, are useful for general documents, but inevitably they become outdated and can't keep pace with the rapid changes in infrastructure and environments. For specific procedures, it's better to document directly into the code using a tool like Ansible. + +### Ansible's advantages + +Before we begin, let's recap some basic Ansible concepts: a _playbook_ is a high-level organization of procedures using plays; _plays_ are specific procedures for a group of hosts; _tasks_ are specific actions, _modules_ are units of code, and _inventory_ is a list of managed nodes. + +Ansible's great advantage is that the documentation is the playbook itself, so it evolves with and is contained inside the code. This is not only useful; it's also practical because, more than just documenting solutions with Ansible, you're also coding a playbook that permits you to write your procedures and commands, reproduce them, and automate them. This way, you can look back in six months and be able to quickly understand and execute them again. + +It's true that this way of resolving problems could take more time at first, but it will definitely save a lot of time in the long term. 
By being courageous and disciplined to adopt these new habits, you will improve your skills in each iteration. + +Following are some other important elements and support tools that will facilitate your process. + +### Use source code control + +> "First do it, then do it right, then do it better." —[Addy Osmani][4] + +When working with Ansible playbooks, it's very important to implement a playbook-as-code strategy. A good way to accomplish this is to use a source code control repository that will permit to you start with a simple solution and iterate to improve it. + +A source code control repository provides many advantages as you collaborate with other developers, restore previous versions, and back up your work. But in creating documentation, its main advantages are that you get traceability about what are you doing and can iterate around small changes to improve your work. + +The most popular source control system is [Git][5], but there are [others][6] like [Subversion][7], [Bazaar][8], [BitKeeper][9], and [Mercurial][10]. + +### Keep idempotency in mind + +In infrastructure automation, idempotency means to reach a specific end state that remains the same, no matter how many times the process is executed. So when you are preparing to automate your procedures, keep the desired result in mind and write scripts and commands that will achieve them consistently. + +This concept exists in most Ansible modules because after you specify the desired final state, Ansible will accomplish it. For instance, there are modules for creating filesystems, modifying iptables, and managing cron entries. All of these modules are idempotent by default, so you should give them preference. + +If you are using some of the lower-level modules, like command or shell, or developing your own modules, be careful to write code that will be idempotent and safe to repeat many times to get the same result. + +The idempotency concept is important when you prepare procedures for automation because it permits you to evaluate several scenarios and incorporate the ones that will make your code safer and create an abstraction level that points to the desired result. + +### Test it! + +Testing your deployment workflow creates fewer surprises when your code arrives in production. Ansible's belief that you shouldn't need another framework to validate basic things in your infrastructure is true. But your focus should be on application testing, not infrastructure testing. + +Ansible's documentation offers several [testing strategies for your procedures][11]. For testing Ansible playbooks, you can use [Molecule][12], which is designed to aid in the development and testing of Ansible roles. Molecule supports testing with multiple instances, operating systems/distributions, virtualization providers, test frameworks, and testing scenarios. This means Molecule will run through all the testing steps: linting verifications, checking playbook syntax, building Docker environments, running playbooks against Docker environments, running the playbook again to verify idempotence, and cleaning everything up afterward. [Testing Ansible roles with Molecule][13] is a good introduction to Molecule. + +### Run it! + +Running Ansible playbooks can create logs that are formatted in an unfriendly and difficult-to-read way. In those cases, the Ansible Run Analysis (ARA) is a great complementary tool for running Ansible playbooks, as it provides an intuitive interface to browse them. Read [Analyzing Ansible runs using ARA][14] for more information. 
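If you want to try ARA quickly, the rough outline below shows the general shape of the setup: install it, point Ansible at the callback plugin it ships, run a playbook as normal and then browse the recorded results. The exact package name, the callback wiring and the reporting command all differ between ARA releases, and the playbook name here is just a placeholder, so treat this purely as a sketch and follow the ARA documentation for the version you install:

```
# Sketch only: commands vary between ARA releases, check the ARA docs for your version.
$ pip install --user ara
$ export ANSIBLE_CALLBACK_PLUGINS="$(python3 -m ara.setup.callback_plugins)"
$ ansible-playbook site.yml      # recorded by the ARA callback as it runs
$ ara-manage runserver           # then browse the recorded runs in a local web interface
```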
+ +Remember to protect your passwords and other sensitive information with [Ansible Vault][15]. Vault can encrypt binary files, **group_vars** , **host_vars** , **include_vars** , and **var_files**. But this encrypted data is exposed when you run a playbook in **-v** (verbose) mode, so it's a good idea to combine it with the keyword **no_log** set to **true** to hide any task's information, as it indicates that the value of the argument should not be logged or displayed. + +### A basic example + +Do you need to connect to a server to produce a report file and copy the file to another server? Or do you need a lot of specific parameters to connect? Maybe you're not sure where to store the parameters. Or are your procedures are taking a long time because you need to collect all the parameters from several sources? + +Suppose you have a network topology with some restrictions and you need to copy a file from a server that you can access ( **server1** ) to another server that is managed by a third party ( **server2** ). The parameters to connect are: + + +``` +Source server: server1 +Target server: server2 +Port: 2202 +User: transfers +SSH Key: transfers_key +File to copy: file.log +Remote directory: /logs/server1/ +``` + +In this scenario, you need to connect to **server1** and copy the file using these parameters. You can accomplish this using a one-line command: + + +``` +`ssh server1 "scp -P 2202 -oUser=transfers -i ~/.ssh/transfers_key file.log server2:/logs/server1/"` +``` + +Now your playbook can do the procedure. + +### Useful combinations + +If you produce a lot of Ansible playbooks, you can organize all your procedures with other tools like [AWX][16] (Ansible Works Project), which provides a web-based user interface, a REST API, and a task engine built on top of Ansible so that users can better control their Ansible project use in IT environments. + +Other interesting combinations are Ansible with [Rundeck][17], which provides procedures as self-service jobs, and [Jenkins][18] for continuous integration and continuous delivery processes. + +### Conclusion + +I hope that these tips for using Ansible will help you improve your automation processes, coding, and documentation. If you have more interest, dive in and learn more. And I would like to hear your ideas or questions, so please share them in the comments below. 
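As a closing footnote to the basic example from earlier: once the one-line command works, wrapping it in a tiny playbook is what turns it into living, repeatable documentation. The sketch that follows simply reuses the connection parameters from that example; the play name, the inventory entry for server1 and the choice of the command module are my own illustrative decisions rather than the only way to write it:

```
---
- name: Documented procedure for copying the report file from server1 to server2
  hosts: server1
  gather_facts: false
  tasks:
    - name: Push file.log to server2 using the restricted transfer account
      command: >
        scp -P 2202 -oUser=transfers -i ~/.ssh/transfers_key
        file.log server2:/logs/server1/
```

From here, all of the parameters live in one reviewable place, and the playbook can later be improved with idempotent modules, Vault-protected variables or a proper inventory group without changing how your colleagues invoke it.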
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/ansible-procedures + +作者:[Marco Bravo][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/marcobravo/users/shawnhcorey/users/marcobravo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/document_free_access_cut_security.png?itok=ocvCv8G2 +[2]: https://en.wikipedia.org/wiki/Damian_Conway +[3]: https://www.ansible.com/ +[4]: https://addyosmani.com/ +[5]: https://git-scm.com/ +[6]: https://en.wikipedia.org/wiki/Comparison_of_version_control_software +[7]: https://subversion.apache.org/ +[8]: https://bazaar.canonical.com/en/ +[9]: https://www.bitkeeper.org/ +[10]: https://www.mercurial-scm.org/ +[11]: https://docs.ansible.com/ansible/latest/reference_appendices/test_strategies.html +[12]: https://molecule.readthedocs.io/en/latest/ +[13]: https://opensource.com/article/18/12/testing-ansible-roles-molecule +[14]: https://opensource.com/article/18/5/analyzing-ansible-runs-using-ara +[15]: https://docs.ansible.com/ansible/latest/user_guide/vault.html +[16]: https://github.com/ansible/awx +[17]: https://www.rundeck.com/ansible +[18]: https://www.redhat.com/en/blog/integrating-ansible-jenkins-cicd-process diff --git a/sources/tech/20190418 Electronics designed in 5 different countries with open hardware.md b/sources/tech/20190418 Electronics designed in 5 different countries with open hardware.md new file mode 100644 index 0000000000..5c81f2d8bc --- /dev/null +++ b/sources/tech/20190418 Electronics designed in 5 different countries with open hardware.md @@ -0,0 +1,119 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Electronics designed in 5 different countries with open hardware) +[#]: via: (https://opensource.com/article/19/4/hardware-international) +[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg) + +Electronics designed in 5 different countries with open hardware +====== +This month's open source hardware column looks at certified open +hardware from five countries that may surprise you. +![Gadgets and open hardware][1] + +The Open Source Hardware Association's [Hardware Registry][2] lists hardware from 29 different countries on five continents, demonstrating the broad, international footprint of certified open source hardware. + +![Open source hardware map][3] + +In some ways, this international reach shouldn't be a surprise. Like many other open source communities, the open source hardware community is built on top of the internet, not grounded in any specific geographical location. The focus on documentation, sharing, and openness makes it easy for people in different places with different backgrounds to connect and work together to develop new hardware. Even the community-developed open source hardware [definition][4] has been translated into 11 languages from the original English. + +Even if you're familiar with the international nature of open source hardware, it can still be refreshing to step back and remember what it means in practice. 
While it may not surprise you that there are many certifications from the United States, Germany, and India, some of the other countries boasting certifications might be a bit less expected. Let's look at six such projects from five of those countries. + +### Bulgaria + +Bulgaria may have the highest per-capita open source hardware certification rate of any country on earth. That distinction is mostly due to the work of two companies: [ANAVI Technology][5] and [Olimex][6]. + +ANAVI focuses mostly on IoT projects built on top of the Raspberry Pi and ESP8266. The concept of "creator contribution" means that these projects can be certified open source even though they are built upon non-open bases. That is because all of ANAVI's work to develop the hardware on top of these platforms (ANAVI's "creator contribution") has been open sourced in compliance with the certification requirements. + +The [ANAVI Light pHAT][7] was the first piece of Bulgarian hardware to be certified by OSHWA. The Light pHAT makes it easy to add a 12V RGB LED strip to a Raspberry Pi. + +![ANAVI-Light-pHAT][8] + +[ANAVI-Light-pHAT][9] + +Olimex's first OSHWA certification was for the [ESP32-PRO][10], a highly connectable IoT board built around an ESP32 microcontroller. + +![Olimex ESP32-PRO][11] + +[Olimex ESP32-PRO][12] + +### China + +While most people know China is a hotbed for hardware development, fewer realize that it is also the home to a thriving _open source_ hardware culture. One of the reasons is the tireless advocacy of Naomi Wu (also known as [SexyCyborg][13]). It is fitting that the first piece of certified hardware from China is one she helped develop: the [sino:bit][14]. The sino:bit is designed to help introduce students to programming and includes China-specific features like a LED matrix big enough to represent Chinese characters. + +![sino:bit][15] + +[ sino:bit][16] + +### Mexico + +Mexico has also produced a range of certified open source hardware. A recent certification is the [Meow Meow][17], a capacitive touch interface from [Electronic Cats][18]. Meow Meow makes it easy to use a wide range of objects—bananas are always a favorite—as controllers for your computer. + +![Meow Meow][19] + +[Meow Meow][20] + +### Saudi Arabia + +Saudi Arabia jumped into open source hardware earlier this year with the [M1 Rover][21]. The robot is an unmanned vehicle that you can build (and build upon). It is compatible with a number of different packages designed for specific purposes, so you can customize it for a wide range of applications. + +![M1-Rover ][22] + +[M1-Rover][23] + +### Sri Lanka + +This project from Sri Lanka is part of a larger effort to improve traffic flow in urban areas. The team behind the [Traffic Wave Disruptor][24] read research about how many traffic jams are caused by drivers slamming on their brakes when they drive too close to the car in front of them, producing a ripple of rapid breaking on the road behind them. This stop/start effect can be avoided if cars maintain a consistent, optimal distance from one another. If you reduce the stop/start pattern, you also reduce the number of traffic jams. + +![Traffic Wave Disruptor][25] + +[Traffic Wave Disruptor][26] + +But how can drivers know if they are keeping an optimal distance? The prototype Traffic Wave Disruptor aims to give drivers feedback when they fail to keep optimal spacing. Wider adoption could help increase traffic flow without building new highways nor reducing the number of cars using them. 
+ +* * * + +You may have noticed that all the hardware featured here is based on electronics. In next month's open source hardware column, we will take a look at open source hardware for the outdoors, away from batteries and plugs. Until then, [certify][27] your open source hardware project (especially if your country is not yet on the registry). It might be featured in a future column. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/hardware-international + +作者:[Michael Weinberg][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mweinberg +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/openhardwaretools_0.png?itok=NUIvc-R1 (Gadgets and open hardware) +[2]: https://certification.oshwa.org/list.html +[3]: https://opensource.com/sites/default/files/uploads/opensourcehardwaremap.jpg (Open source hardware map) +[4]: https://www.oshwa.org/definition/ +[5]: http://anavi.technology/ +[6]: https://www.olimex.com/ +[7]: https://certification.oshwa.org/bg000001.html +[8]: https://opensource.com/sites/default/files/uploads/anavi-light-phat.png (ANAVI-Light-pHAT) +[9]: http://anavi.technology/#products +[10]: https://certification.oshwa.org/bg000010.html +[11]: https://opensource.com/sites/default/files/uploads/olimex-esp32-pro.png (Olimex ESP32-PRO) +[12]: https://www.olimex.com/Products/IoT/ESP32/ESP32-PRO/open-source-hardware +[13]: https://www.youtube.com/channel/UCh_ugKacslKhsGGdXP0cRRA +[14]: https://certification.oshwa.org/cn000001.html +[15]: https://opensource.com/sites/default/files/uploads/sinobit.png (sino:bit) +[16]: https://github.com/sinobitorg/hardware +[17]: https://certification.oshwa.org/mx000003.html +[18]: https://electroniccats.com/ +[19]: https://opensource.com/sites/default/files/uploads/meowmeow.png (Meow Meow) +[20]: https://electroniccats.com/producto/meowmeow/ +[21]: https://certification.oshwa.org/sa000001.html +[22]: https://opensource.com/sites/default/files/uploads/m1-rover.png (M1-Rover ) +[23]: https://www.hackster.io/AhmedAzouz/m1-rover-362c05 +[24]: https://certification.oshwa.org/lk000001.html +[25]: https://opensource.com/sites/default/files/uploads/traffic-wave-disruptor.png (Traffic Wave Disruptor) +[26]: https://github.com/Aightm8/Traffic-wave-disruptor +[27]: https://certification.oshwa.org/ diff --git a/sources/tech/20190418 How to organize with Calculist- Ideas, events, and more.md b/sources/tech/20190418 How to organize with Calculist- Ideas, events, and more.md new file mode 100644 index 0000000000..7c9d844315 --- /dev/null +++ b/sources/tech/20190418 How to organize with Calculist- Ideas, events, and more.md @@ -0,0 +1,120 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to organize with Calculist: Ideas, events, and more) +[#]: via: (https://opensource.com/article/19/4/organize-calculist) +[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) + +How to organize with Calculist: Ideas, events, and more +====== +Give structure to your ideas and plans with Calculist, an open source +web app for creating outlines. +![Team checklist][1] + +Thoughts. Ideas. Plans. We all have a few of them. Often, more than a few. 
And all of us want to make some or all of them a reality. + +Far too often, however, those thoughts and ideas and plans are a jumble inside our heads. They refuse to take a discernable shape, preferring instead to rattle around here, there, and everywhere in our brains. + +One solution to that problem is to put everything into [an outline][2]. An outline can be a great way to organize what you need to organize and give it the shape you need to take it to the next step. + +A number of people I know rely on a popular web-based tool called WorkFlowy for their outlining needs. If you prefer your applications (including web ones) to be open source, you'll want to take a look at [Calculist][3]. + +The brainchild of [Dan Allison][4], Calculist is billed as _the thinking tool for problem solvers_. It does much of what WorkFlowy does, and it has a few features that its rival is missing. + +Let's take a look at using Calculist to organize your ideas (and more). + +### Getting started + +If you have a server, you can try to [install Calculist][5] on it. If, like me, you don't have server or just don't have the technical chops, you can turn to the [hosted version][6] of Calculist. + +[Sign up][7] for a no-cost account, then log in. Once you've done that, you're ready to go. + +### Creating a basic outline + +What you use Calculist for really depends on your needs. I use Calculist to create outlines for articles and essays, to create lists of various sorts, and to plan projects. Regardless of what I'm doing, every outline I create follows the same pattern. + +To get started, click the **New List** button. This creates a blank outline (which Calculist calls a _list_ ). + +![Create a new list in Calculist][8] + +The outline is a blank slate waiting for you to fill it up. Give the outline a name, then press Enter. When you do that, Calculist adds the first blank line for your outline. Use that as your starting point. + +![A new outline in Calculist][9] + +Add a new line by pressing Enter. To indent a line, press the Tab key while on that line. If you need to create a hierarchy, you can indent lines as far as you need to indent them. Press Shift+Tab to outdent a line. + +Keep adding lines until you have a completed outline. Calculist saves your work every few seconds, so you don't need to worry about that. + +![Calculist outline][10] + +### Editing an outline + +Outlines are fluid. They morph. They grow and shrink. Individual items in an outline change. Calculist makes it easy for you to adapt and make those changes. + +You already know how to add an item to an outline. If you don't, go back a few paragraphs for a refresher. To edit text, click on an item and start typing. Don't double-click (more on this in a few moments). If you accidentally double-click on an item, press Esc on your keyboard and all will be well. + +Sometimes you need to move an item somewhere else in the outline. Do that by clicking and holding the bullet for that item. Drag the item and drop it wherever you want it. Anything indented below the item moves with it. + +At the moment, Calculist doesn't support adding notes or comments to an item in an outline. A simple workaround I use is to add a line indented one level deeper than the item where I want to add the note. That's not the most elegant solution, but it works. + +### Let your keyboard do the walking + +Not everyone likes to use their mouse to perform actions in an application. Like a good desktop application, you're not at the mercy of your mouse when you use Calculist. 
It has many keyboard shortcuts that you can use to move around your outlines and manipulate them. + +The keyboard shortcuts I mentioned a few paragraphs ago are just the beginning. There are a couple of dozen keyboard shortcuts that you can use. + +For example, you can focus on a single portion of an outline by pressing Ctrl+Right Arrow key. To get back to the full outline, press Ctrl+Left Arrow key. There are also shortcuts for moving up and down in your outline, expanding and collapsing lists, and deleting items. + +You can view the list of shortcuts by clicking on your user name in the upper-right corner of the Calculist window and clicking **Preferences**. You can also find a list of [keyboard shortcuts][11] in the Calculist GitHub repository. + +If you need or want to, you can change the shortcuts on the **Preferences** page. Click on the shortcut you want to change—you can, for example, change the shortcut for zooming in on an item to Ctrl+0. + +### The power of commands + +Calculist's keyboard shortcuts are useful, but they're only the beginning. The application has command mode that enables you to perform basic actions and do some interesting and complex tasks. + +To use a command, double-click an item in your outline or press Ctrl+Enter while on it. The item turns black. Type a letter or two, and a list of commands displays. Scroll down to find the command you want to use, then press Enter. There's also a [list of commands][12] in the Calculist GitHub repository. + +![Calclulist commands][13] + +The commands are quite comprehensive. While in command mode, you can, for example, delete an item in an outline or delete an entire outline. You can import or export outlines, sort and group items in an outline, or change the application's theme or font. + +### Final thoughts + +I've found that Calculist is a quick, easy, and flexible way to create and view outlines. It works equally well on my laptop and my phone, and it packs not only the features I regularly use but many others (including support for [LaTeX math expressions][14] and a [table/spreadsheet mode][15]) that more advanced users will find useful. + +That said, Calculist isn't for everyone. If you prefer your outlines on the desktop, then check out [TreeLine][16], [Leo][17], or [Emacs org-mode][18]. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/organize-calculist + +作者:[Scott Nesbitt ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_todo_clock_time_team.png?itok=1z528Q0y (Team checklist) +[2]: https://en.wikipedia.org/wiki/Outline_(list) +[3]: https://calculist.io/ +[4]: https://danallison.github.io/ +[5]: https://github.com/calculist/calculist-web +[6]: https://app.calculist.io/ +[7]: https://app.calculist.io/join +[8]: https://opensource.com/sites/default/files/uploads/calculist-new-list.png (Create a new list in Calculist) +[9]: https://opensource.com/sites/default/files/uploads/calculist-getting-started.png (A new outline in Calculist) +[10]: https://opensource.com/sites/default/files/uploads/calculist-outline.png (Calculist outline) +[11]: https://github.com/calculist/calculist/wiki/Keyboard-Shortcuts +[12]: https://github.com/calculist/calculist/wiki/Command-Mode +[13]: https://opensource.com/sites/default/files/uploads/calculist-commands.png (Calculist commands) +[14]: https://github.com/calculist/calculist/wiki/LaTeX-Expressions +[15]: https://github.com/calculist/calculist/issues/32 +[16]: https://opensource.com/article/18/1/creating-outlines-treeline +[17]: http://www.leoeditor.com/ +[18]: https://orgmode.org/ diff --git a/sources/tech/20190418 Level up command-line playgrounds with WebAssembly.md b/sources/tech/20190418 Level up command-line playgrounds with WebAssembly.md new file mode 100644 index 0000000000..411adc44fa --- /dev/null +++ b/sources/tech/20190418 Level up command-line playgrounds with WebAssembly.md @@ -0,0 +1,196 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Level up command-line playgrounds with WebAssembly) +[#]: via: (https://opensource.com/article/19/4/command-line-playgrounds-webassembly) +[#]: author: (Robert Aboukhalil https://opensource.com/users/robertaboukhalil) + +Level up command-line playgrounds with WebAssembly +====== +WebAssembly is a powerful tool for bringing command line utilities to +the web and giving people the chance to tinker with tools. +![Various programming languages in use][1] + +[WebAssembly][2] (Wasm) is a new low-level language designed with the web in mind. Its main goal is to enable developers to compile code written in other languages—such as C, C++, and Rust—into WebAssembly and run that code in the browser. In an environment where JavaScript has traditionally been the only option, WebAssembly is an appealing counterpart, and it enables portability along with the promise for near-native runtimes. WebAssembly has also already been used to port lots of tools to the web, including [desktop applications][3], [games][4], and even [data science tools written in Python][5]! + +Another application of WebAssembly is command line playgrounds, where users are free to play with a simulated version of a command line tool. In this article, we'll explore a concrete example of leveraging WebAssembly for this purpose, specifically to port the tool **[jq][6]** —which is normally confined to the command line—to run directly in the browser. 
+ +If you haven't heard, jq is a very powerful command line tool for querying, modifying, and wrangling JSON objects on the command line. + +### Why WebAssembly? + +Aside from WebAssembly, there are two other approaches we can take to build a jq playground: + + 1. **Set up a sandboxed environment** on your server that executes queries and returns the result to the user via API calls. Although this means your users get to play with the real thing, the thought of hosting, securing, and sanitizing user inputs for such an application is worrisome. Aside from security, the other concern is responsiveness; the additional round trips to the server can introduce noticeable latencies and negatively impact the user experience. + 2. **Simulate the command line environment using JavaScript** , where you define a series of steps that the user can take. Although this approach is more secure than option 1, it involves _a lot_ more work, as you need to rewrite the logic of the tool in JavaScript. This method is also limiting: when I'm learning a new tool, I'm not just interested in the "happy path"; I want to break things! + + + +These two solutions are not ideal because we have to choose between security and a meaningful learning experience. Ideally, we could simply run the command line tool directly in the browser, with no servers and no simulations. Lucky for us, WebAssembly is just the solution we need to achieve that. + +### Set up your environment + +In this article, we'll use the [Emscripten tool][7] to port jq from C to WebAssembly. Conveniently, it provides us with drop-in replacements for the most common C/C++ build tools, including gcc, make, and configure. + +Instead of [installing Emscripten from scratch][8] (the build process can take a long time), we'll use a Docker image I put together that comes prepackaged with everything you'll need for this article (and beyond!). + +Let's start by pulling the image and creating a container from it: + + +``` +# Fetch docker image containing Emscripten +docker pull robertaboukhalil/emsdk:1.38.26 + +# Create container from that image +docker run -dt --name wasm robertaboukhalil/emsdk:1.38.26 + +# Enter the container +docker exec -it wasm bash + +# Make sure we can run emcc, Emscripten's wrapper around gcc +emcc --version +``` + +If you see the Emscripten version on the screen, you're good to go! + +### Porting jq to WebAssembly + +Next, let's clone the jq repository: + + +``` +git clone +cd jq +git checkout 9fa2e51 +``` + +Note that we're checking out a specific commit, just in case the jq code changes significantly after this article is published. + +Before we compile jq to WebAssembly, let's first consider how we would normally compile jq to binary for use on the command line. 
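To make that concrete, the goal is to let a visitor type a jq filter into a web page and run it against a JSON document entirely client-side, exactly the kind of one-liner jq users normally run in a terminal. A minimal illustration (the JSON document and filter here are invented for the example):

```
# Pull a single field out of a JSON document with jq
echo '{"name": "jq", "written_in": "C"}' | jq '.name'
# prints: "jq"
```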
+ +From the [README file][9], here is what we need to build jq to binary (don't type this in yet): + + +``` +# Fetch jq dependencies +git submodule update --init + +# Generate ./configure file +autoreconf -fi + +# Run ./configure +./configure \ +\--with-oniguruma=builtin \ +\--disable-maintainer-mode + +# Build jq executable +make LDFLAGS=-all-static +``` + +Instead, to compile jq to WebAssembly, we'll leverage Emscripten's drop-in replacements for the configure and make build tools (note the differences here from the previous entry: **emconfigure** and **emmake** in the Run and Build statements, respectively): + + +``` +# Fetch jq dependencies +git submodule update --init + +# Generate ./configure file +autoreconf -fi + +# Run ./configure +emconfigure ./configure \ +\--with-oniguruma=builtin \ +\--disable-maintainer-mode + +# Build jq executable +emmake make LDFLAGS=-all-static +``` + +If you type the commands above inside the Wasm container we created earlier, you'll notice that emconfigure and emmake will make sure jq is compiled using emcc instead of gcc (Emscripten also has a g++ replacement called em++). + +So far, this was surprisingly easy: we just prepended a handful of commands with Emscripten tools and ported a codebase—comprising tens of thousands of lines—from C to WebAssembly. Note that it won't always be this easy, especially for more complex codebases and graphical applications, but that's for [another article][10]. + +Another advantage of Emscripten is that it can generate some JavaScript glue code for us that handles initializing the WebAssembly module, calling C functions from JavaScript, and even providing a [virtual filesystem][11]. + +Let's generate that glue code from the executable file jq that emmake outputs: + + +``` +# But first, rename the jq executable to a .o file; otherwise, +# emcc complains that the "file has an unknown suffix" +mv jq jq.o + +# Generate .js and .wasm files from jq.o +# Disable errors on undefined symbols to avoid warnings about llvm_fma_f64 +emcc jq.o -o jq.js \ +-s ERROR_ON_UNDEFINED_SYMBOLS=0 +``` + +To make sure it works, let's try an example from the [jq tutorial][12] directly on the command line: + + +``` +# Output the description of the latest commit on the jq repo +$ curl -s "" | \ +node jq.js '.[0].commit.message' +"Restore cfunction arity in builtins/0\n\nCount arguments up-front at definition/invocation instead of doing it at\nbind time, which comes after generating builtins/0 since e843a4f" +``` + +And just like that, we are now ready to run jq in the browser! + +### The result + +Using the output of emcc above, we can put together a user interface that calls jq on a JSON blob the user provides. This is the approach I took to build [jqkungfu][13] (source code [available on GitHub][14]): + +![jqkungfu screenshot][15] + +jqkungfu, a playground built by compiling jq to WebAssembly + +Although there are similar web apps that let you execute arbitrary jq queries in the browser, they are generally implemented as server-side applications that execute user queries in a sandbox (option #1 above). + +Instead, by compiling jq from C to WebAssembly, we get the best of both worlds: the flexibility of the server and the security of the browser. Specifically, the benefits are: + + 1. **Flexibility** : Users can "choose their own adventure" and use the app with fewer limitations + 2. **Speed** : Once the Wasm module is loaded, executing queries is extremely fast because all the magic happens in the browser + 3. 
**Security** : No backend means we don't have to worry about our servers being compromised or used to mine Bitcoins + 4. **Convenience** : Since we don't need a backend, jqkungfu is simply hosted as static files on a cloud storage platform + + + +### Conclusion + +WebAssembly is a powerful tool for bringing existing command line utilities to the web. When included as part of a tutorial, such playgrounds can become powerful teaching tools. They can even allow your users to test-drive your tool before they bother installing it. + +If you want to dive further into WebAssembly and learn how to build applications like jqkungfu (or games like Pacman!), check out my book [_Level up with WebAssembly_][16]. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/command-line-playgrounds-webassembly + +作者:[Robert Aboukhalil][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/robertaboukhalil +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_language_c.png?itok=mPwqDAD9 (Various programming languages in use) +[2]: https://webassembly.org/ +[3]: https://www.figma.com/blog/webassembly-cut-figmas-load-time-by-3x/ +[4]: http://www.continuation-labs.com/projects/d3wasm/ +[5]: https://hacks.mozilla.org/2019/03/iodide-an-experimental-tool-for-scientific-communicatiodide-for-scientific-communication-exploration-on-the-web/ +[6]: https://stedolan.github.io/jq/ +[7]: https://emscripten.org/ +[8]: https://emscripten.org/docs/getting_started/downloads.html +[9]: https://github.com/stedolan/jq/blob/9fa2e51099c55af56e3e541dc4b399f11de74abe/README.md +[10]: https://medium.com/@robaboukhalil/porting-games-to-the-web-with-webassembly-70d598e1a3ec?sk=20c835664031227eae5690b8a12514f0 +[11]: https://emscripten.org/docs/porting/files/file_systems_overview.html +[12]: https://stedolan.github.io/jq/tutorial/ +[13]: http://jqkungfu.com +[14]: https://github.com/robertaboukhalil/jqkungfu/ +[15]: https://opensource.com/sites/default/files/uploads/jqkungfu.gif (jqkungfu screenshot) +[16]: http://levelupwasm.com/ diff --git a/sources/tech/20190418 Simplifying organizational change- A guide for the perplexed.md b/sources/tech/20190418 Simplifying organizational change- A guide for the perplexed.md new file mode 100644 index 0000000000..e9fa0cb7fd --- /dev/null +++ b/sources/tech/20190418 Simplifying organizational change- A guide for the perplexed.md @@ -0,0 +1,167 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Simplifying organizational change: A guide for the perplexed) +[#]: via: (https://opensource.com/open-organization/19/4/simplifying-change) +[#]: author: (Jen Kelchner https://opensource.com/users/jenkelchner) + +Simplifying organizational change: A guide for the perplexed +====== +Here's a 4-step, open process for making change easier—both for you and +your organization. +![][1] + +Most organizational leaders have encountered a certain paralysis around efforts to implement culture change—perhaps because of perceived difficulty or the time necessary for realizing our work. But change is only as difficult as we choose to make it. 
In order to lead successful change efforts, we must simplify our understanding of change and our approach to it.
+ +### A useful matrix to guide culture change + +So far, we've identified three "interactions" every person, team, or department will experience with change: "engaging," "navigating," and "adopting." When we examine the work of _implementing_ change in the broader context of an organization (any kind), we can also identify _three relationships_ that drive the success of each interaction: "people," "capacity," and "information." + +Here's a brief list of considerations you should make—at every moment and with every relationship—to help you build roadmaps thoughtfully. + +#### **Engaging—People** + +Organizational success comes from the overlap of awareness and action of the "I" and the "We." + + * _Individuals (I)_ are aware of and engage based on their [natural response strength][5]. + * _Teams (We)_ are aware of and balance their responsibilities based on the Individual strengths by initiative. + * _Leaders (I/We) l_ everage insight based on knowing their (I) and the collective (We). + + + +#### **Engaging—Capacity** + +"Capacity" applies to skills, processes, and culture that is clearly structured, documented, and accessible with your organization. It is the “space” within which you operate and achieve solutions. + + * _Current state_ awareness allows you to use what and who you have available and accessible through your known operational capacity. + * _Future state_ needs will show you what is required of you to learn, _or stretch_ , in order to bridge any gaps; essentially, you will design the recoding of your organization. + + + +#### **Engaging—Information** + + * _Access to information_ is readily available to all based on appropriate needs within protocols. + * _Communication flows_ easily and is reciprocated at all levels. + * _Communication flow_ is timely and transparent. + + + +#### **Navigating—People** + + * Balance responses from both individuals and the collective will impact your outcomes. + * Balance the _I_ with the _We_. This allows for responses to co-exist in a seamless, collaborative way—which fuels every project. + + + +#### **Navigating—Capacity** + + * _Skills_ : Assuring a continuous state of assessment and learning through various modalities allows you to navigate with ease as each person graduates their understanding in preparation for the next iteration of change. + * _Culture:_ Be clear on goals and mission with a supported ecosystem in which your teams can operate by contributing their best efforts when working together. + * _Processes:_ Review existing processes and let go of anything that prevents you from evolving. Open practices and methodologies do allow for a higher rate of adaptability and decision making. + * _Utilize Talent:_ Discover who is already in your organization and how you can leverage their talent in new ways. Go beyond your known teams and seek out sources of new perspectives. + + + +#### **Navigating—Information** + + * Be clear on your mission. + * Be very clear on your desired endgame so everyone knows what you are navigating toward (without clearly defined and posted directions, it's easy to waste time, money and efforts resulting in missed targets). + + + +#### **Adopting—People** + + * _Behaviors_ have a critical impact on influence and adoption. + * For _internal adoption_ , consider the [pendulum of thought][3] swung by the committed few. + + + +#### **Adopting—Capacity** + + * _Sustainability:_ Leverage people who are more routine and legacy-oriented to help stabilize and embed your new initiatives. 
+ * Allows your innovators and co-creators to move into the next phase of development and begin solving problems while other team members can perform follow-through efforts. + + + +#### **Adopting—Information** + + * Be open and transparent with your external communication. + * Lead the way in _what_ you do and _how_ you do it to create a tidal wave of change. + * Remember that mass adoption has a tipping point of 10%. + + + +[**Download a one-page guide to this model on GitHub.**][6] +--- + +### Four steps to simplify change + +You now understand what change is and how you are processing it. You've seen how you and your organization can reframe various interactions with it. Now, let's examine the four steps to simplify how you interact with and implement change as an individual, team leader, or organizational executive. + +#### **1\. Understand change** + +Change is receiving and processing new information and determining how to respond and participate with it (think personal or organizational operating system). Change is a _reciprocal_ action between yourself and incoming new information (think system interface). Change is an evolutionary process that happens in layers and stages in a continuous cycle (think data processing, bug fixes, and program iterations). + +#### **2\. Know your people** + +Change is personal and responses vary by context. People's responses to change are not indicators of the speed of adoption. Knowing how your people and your teams interact with change allows you to balance and optimize your efforts to solving problems, building solutions and sustaining implementations. Are they change makers, fast followers, innovators, stabilizers? When you know how you, _or others_ , process change, you can leverage your risk mitigators to sweep for potential pitfalls; and, your routine minded folks to be responsible for implementation follow through. + +Only a small set of members in your organization will be truly comfortable with adopting change. But that committed and confident minority can spread the fire of change and help you grow some innovative ideas within your organization. + +#### **3\. Know your capacity** + +Your capacity to implement widespread change will depend on your culture, your processes, and decision-making models. Get familiar with your operational capacity and guardrails (process and policy). + +#### **4\. Prepare for Interaction** + +Each interaction uses your people, capacity (operational), and information flow. Working with the stages of change is not always a linear process and may overlap at certain points along the way. Understand that [_people_ feed all engagement, navigation, and adoption actions][7]. + +Humans are built for adaptation to our environments. Yes, any kind of change can be scary at first. But it need not involve some major new implementation with a large, looming deadline that throws you off. Knowing that you can take a simplified approach to change, hopefully, you're able to engage new information with ease. Using this approach over time—and integrating it as habit—allows for both the _I_ and the _We_ to experience continuous cycles of change without the tensions of old. 
+ +_Want to learn more about simplifying change?[View additional resources on GitHub][8]._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/19/4/simplifying-change + +作者:[Jen Kelchner][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jenkelchner +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/GOV_2dot0.png?itok=bKJ41T85 +[2]: http://www.simplifyinginterfaces.com/2008/08/01/95-percent-of-brain-activity-is-beyond-our-conscious-awareness/ +[3]: http://www.leeds.ac.uk/news/article/397/sheep_in_human_clothing__scientists_reveal_our_flock_mentality +[4]: https://news.rpi.edu/luwakkey/2902 +[5]: https://opensource.com/open-organization/18/7/transformation-beyond-digital-2 +[6]: https://github.com/jenkelchner/simplifying-change/blob/master/Visual_%20Simplifying%20Change%20(1).pdf +[7]: https://opensource.com/open-organization/17/7/digital-transformation-people-1 +[8]: https://github.com/jenkelchner/simplifying-change diff --git a/sources/tech/20190422 9 ways to save the planet.md b/sources/tech/20190422 9 ways to save the planet.md new file mode 100644 index 0000000000..d3301006cc --- /dev/null +++ b/sources/tech/20190422 9 ways to save the planet.md @@ -0,0 +1,96 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (9 ways to save the planet) +[#]: via: (https://opensource.com/article/19/4/save-planet) +[#]: author: (Jen Wike Huger https://opensource.com/users/jen-wike/users/alanfdoss/users/jmpearce) + +9 ways to save the planet +====== +These ideas have an open source twist. +![][1] + +What can be done to help save the planet? The question can seem depressing at a time when it feels like an individual's contribution isn't enough. But, who are we Earth dwellers if not for a collection of individuals? So, I asked our writer community to share ways that open source software or hardware can be used to make a difference. Here's what I heard back. + +### 9 ways to save the planet with an open source twist + +**1.** **Disable the blinking cursor in your terminal.** + +It might sound silly, but the trivial, blinking cursor can cause up to [2 watts per hour of extra power consumption][2]. To disable it, go to Terminal Settings: Edit > Preferences > Cursor > Cursor blinking > Disabled. + +_Recommended by Mars Toktonaliev_ + +**2\. Reduce your consumption of animal products and processed foods.** + +One way to do this is to add these open source apps to your phone: Daily Dozen, OpenFoodFacts, OpenVegeMap, and Food Restrictions. These apps will help you eat a healthy, plant-based diet, find vegan- and vegetarian-friendly restaurants, and communicate your dietary needs to others, even if they do not speak the same language. To learn more about these apps read [_4 open source apps to support eating a plant-based diet_][3]. + +_Recommendation by Joshua Allen Holm_ + +**3\. Recycle old computers.** + +How? With Linux, of course. Pay it forward by giving creating a new computer for someone who can't one and keep a computer out of the landfill. Here's how we do it at [The Asian Penguins][4]. + +_Recommendation by Stu Keroff_ + +**4\. 
Turn off devices when you're not using them.** + +Use "smart power strips" that have a "master" outlet and several "controlled" outlets. Plug your PC into the master outlet, and when you turn on the computer, your monitor, printer, and anything else plugged into the controlled outlets turns on too. A simpler, low-tech solution is a power strip with a timer. That's what I use at home. You can use switches on the timer to set a handy schedule to turn the power on and off at specific times. Automatically turn off your network printer when no one is at home. Or for my six-year-old laptop, extend the life of the battery with a schedule to alternate when it's running from wall power (outlet is on) and when it's running from the battery (outlet is off). + +_Recommended by Jim Hall_ + +**5\. Reduce the use of your HVAC system.** + +Sunlight shining through windows adds a lot of heat to your home during the summer. Use Home Assistant to [automatically adjust][5] window blinds and awnings [based on the time of day][6], or even based on the angle of the sun. + +_Recommended by Michael Hrivnak_ + +**6\. Turn your thermostat off or to a lower setting while you're away.** + +If your home thermostat has an "Away" feature, activating it on your way out the door is easy to forget. With a touch of automation, any connected thermostat can begin automatically saving energy while you're not home. [Stataway][7] is one such project that uses your phone's GPS coordinates to determine when it should set your thermostat to "Home" or "Away". + +_Recommended by Michael Hrivnak_ + +**7\. Save computing power for later.** + +I have an idea: Create a script that can read the power output from an alternative energy array (wind and solar) and begin turning on servers (taking them from a power-saving sleep mode to an active mode) in a computing cluster until the overload power is used (whatever excess is produced beyond what can be stored/buffered for later use). Then use the overload power during high-production times for compute-intensive projects like rendering. This process would be essentially free of cost because the power can't be buffered for other uses. I'm sure the monitoring, power management, and server array tools must exist to do this. Then, it's just an integration problem, making it all work together. + +_Recommended by Terry Hancock_ + +**8\. Turn off exterior lights.** + +Light pollution affects more than 80% of the world's population, according to the [World Atlas of Artificial Night Sky Brightness][8], published (Creative Commons Attribution-NonCommercial 4.0) in 2016 in the open access journal _Science Advances_. Turning off exterior lights is a quick way to benefit wildlife, human health, our ability to enjoy the night sky, and of course energy consumption. Visit [darksky.org][9] for more ideas on how to reduce the impact of your exterior lighting. + +_Recommended by Michael Hrivnak_ + +**9\. Reduce your CPU count.** + +For me, I remember I used to have a whole bunch of computers running in my basement as my IT playground/lab. I've become more conscious now of power consumption and so have really drastically reduced my CPU count. I like to take advantage of VMs, zones, containers... that type of technology a lot more these days. Also, I'm really glad that small form factor and SoC computers, such as the Raspberry Pi, exist because I can do a lot with one, such as run a DNS or Web server, without heating the room and running up my electricity bill. + +P.S. 
All of these computers are running Linux, FreeBSD, or Raspbian! + +_Recommended by Alan Formy-Duvall_ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/4/save-planet + +作者:[Jen Wike Huger ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jen-wike/users/alanfdoss/users/jmpearce +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pixelated-world.png?itok=fHjM6m53 +[2]: https://www.redhat.com/archives/fedora-devel-list/2009-January/msg02406.html +[3]: https://opensource.com/article/19/4/apps-plant-based-diets +[4]: https://opensource.com/article/19/2/asian-penguins-close-digital-divide +[5]: https://www.home-assistant.io/docs/automation/trigger/#sun-trigger +[6]: https://www.home-assistant.io/components/cover/ +[7]: https://github.com/mhrivnak/stataway +[8]: http://advances.sciencemag.org/content/2/6/e1600377 +[9]: http://darksky.org/ diff --git a/sources/tech/20190422 Strawberry- A Fork of Clementine Music Player.md b/sources/tech/20190422 Strawberry- A Fork of Clementine Music Player.md new file mode 100644 index 0000000000..66b0345586 --- /dev/null +++ b/sources/tech/20190422 Strawberry- A Fork of Clementine Music Player.md @@ -0,0 +1,132 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Strawberry: A Fork of Clementine Music Player) +[#]: via: (https://itsfoss.com/strawberry-music-player/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Strawberry: A Fork of Clementine Music Player +====== + +In this age of streaming music and cloud services, there are still people who need an application to collect and play their music. If you are such a person, this article should interest you. + +We have earlier covered [Sayonara music player][1]. Today, we will be taking a look at the Strawberry Music Player. + +### Strawberry Music Player: A fork of Clementine + +The [Strawberry Music Player][2] is, quite simply, an application to manage and play your music. + +![Strawberry media library][3] + +Strawberry contains the following list of features: + + * Play and organize music + * Supports WAV, FLAC, WavPack, DSF, DSDIFF, Ogg Vorbis, Speex, MPC, TrueAudio, AIFF, MP4, MP3, ASF and Monkey’s Audio Audio CD playback + * Native desktop notifications + * Support for playlists in multiple formats + * Advanced audio output and device configuration for bit-perfect playback on Linux + * Edit tags on music files + * Fetch tags from [MusicBrainz Picard][4] + * Album cover art from [Last.fm][5], MusicBrainz and Discogs + * Song lyrics from [AudD][6] + * Support for multiple backends + * Audio analyzer + * Audio equalizer + * Transfer music to iPod, iPhone, MTP or mass-storage USB player + * Streaming support for Tidal + * Scrobbler with support for Last.fm, Libre.fm and ListenBrainz + + + +If you take a look at the screenshots, they probably look familiar. That is because Strawberry is a fork of the [Clementine Music Player][7]. Clementine has not been updated since 2016, while the most recent version of Strawberry (0.5.3) was released early April 2019. + +Trivia + +You might think that Strawberry music player is named after the fruit. 
However, its [creator][8] says he named the project after the band [Strawbs][9].
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/strawberry-music-player/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/sayonara-music-player/ +[2]: https://strawbs.org/ +[3]: https://itsfoss.com/wp-content/uploads/2019/04/strawberry1-800x471.png +[4]: https://itsfoss.com/musicbrainz-picard/ +[5]: https://www.last.fm/ +[6]: https://audd.io/ +[7]: https://www.clementine-player.org/ +[8]: https://github.com/jonaski +[9]: https://en.wikipedia.org/wiki/Strawbs +[10]: https://snapcraft.io/strawberry +[11]: https://github.com/jonaski/strawberry/releases +[12]: https://itsfoss.com/install-deb-files-ubuntu/ +[13]: https://itsfoss.com/ubuntu-repositories/ +[14]: https://repology.org/project/strawberry/versions +[15]: https://download.opensuse.org/repositories/home:/jonaski:/audio/ +[16]: https://itsfoss.com/wp-content/uploads/2019/04/strawberry3-800x471.png +[17]: https://en.wikipedia.org/wiki/Golden_Age_of_Radio +[18]: https://zootradio.com/ +[19]: https://itsfoss.com/cozy-audiobook-player/ +[20]: https://wiki.gnome.org/Apps/EasyTAG +[21]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20190425 Debian has a New Project Leader.md b/sources/tech/20190425 Debian has a New Project Leader.md new file mode 100644 index 0000000000..00f114b907 --- /dev/null +++ b/sources/tech/20190425 Debian has a New Project Leader.md @@ -0,0 +1,106 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Debian has a New Project Leader) +[#]: via: (https://itsfoss.com/debian-project-leader-election/) +[#]: author: (Shirish https://itsfoss.com/author/shirish/) + +Debian has a New Project Leader +====== + +Like each year, the Debian Secretary announced a call for nominations for the post of Debian Project Leader (commonly known as DPL) in early March. Soon 5 candidates shared their nomination. One of the DPL candidates backed out due to personal reasons and we had [four candidates][1] as can be seen in the Nomination section of the Vote page. + +### Sam Hartman, the new Debian Project Leader + +![][2] + +While I will not go much into details as Sam already outlined his position on his [platform][3], it is good to see that most Debian developers recognize that it’s no longer just the technical excellence which need to be looked at. I do hope he is able to create more teams which would leave some more time in DPL’s hands and less stress going forward. + +As he has shared, he would be looking into also helping the other DPL candidates, all of which presented initiatives to make Debian better. + +Apart from this, there had been some excellent suggestions, for example modernizing debian-installer, making lists.debian.org have a [Mailman 3][4] instance, modernizing Debian packaging and many more. + +While probably a year is too short a time for any of the deliverables that Debian people are thinking, some sort of push or start should enable Debian to reach greater heights than today. + +### A brief history of DPL elections + +In the beginning, Debian was similar to many distributions which have a [BDFL][5], although from the very start Debian had a sort of rolling leadership. 
While I will not go through the whole history, the idea of a Debian Constitution [germinated][6] in October 1998.
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/debian-project-leader-election/ + +作者:[Shirish][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/shirish/ +[b]: https://github.com/lujun9972 +[1]: https://www.debian.org/vote/2019/vote_001 +[2]: https://itsfoss.com/wp-content/uploads/2019/04/Debian-Project-Leader-election-800x450.png +[3]: https://www.debian.org/vote/2019/platforms/hartmans +[4]: http://docs.mailman3.org/en/latest/ +[5]: https://en.wikipedia.org/wiki/Benevolent_dictator_for_life +[6]: https://lists.debian.org/debian-devel/1998/09/msg00506.html +[7]: https://www.debian.org/devel/constitution.1.0 +[8]: https://www.debian.org/vote/2017/platforms/lamby +[9]: https://itsfoss.com/semicode-os-linux/ +[10]: https://itsfoss.com/wp-content/uploads/2019/04/leadership-800x450.jpg +[11]: https://lists.debian.org/debian-vote/2019/03/msg00023.html +[12]: https://itsfoss.com/bodhi-linux-5/ +[13]: https://itsfoss.com/reasons-why-i-love-debian/ diff --git a/sources/tech/20190426 NomadBSD, a BSD for the Road.md b/sources/tech/20190426 NomadBSD, a BSD for the Road.md new file mode 100644 index 0000000000..d31f9b4a90 --- /dev/null +++ b/sources/tech/20190426 NomadBSD, a BSD for the Road.md @@ -0,0 +1,125 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (NomadBSD, a BSD for the Road) +[#]: via: (https://itsfoss.com/nomadbsd/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +NomadBSD, a BSD for the Road +====== + +As regular It’s FOSS readers should know, I like diving into the world of BSDs. Recently, I came across an interesting BSD that is designed to live on a thumb drive. Let’s take a look at NomadBSD. + +### What is NomadBSD? + +![Nomadbsd Desktop][1] + +[NomadBSD][2] is different than most available BSDs. NomadBSD is a live system based on FreeBSD. It comes with automatic hardware detection and an initial config tool. NomadBSD is designed to “be used as a desktop system that works out of the box, but can also be used for data recovery, for educational purposes, or to test FreeBSD’s hardware compatibility.” + +This German BSD comes with an [OpenBox][3]-based desktop with the Plank application dock. NomadBSD makes use of the [DSB project][4]. DSB stands for “Desktop Suite (for) (Free)BSD” and consists of a collection of programs designed to create a simple and working environment without needing a ton of dependencies to use one tool. DSB is created by [Marcel Kaiser][5] one of the lead devs of NomadBSD. + +Just like the original BSD projects, you can contact the NomadBSD developers via a [mailing list][6]. + +[][7] + +Suggested read Enjoy Netflix? 
You Should Thank FreeBSD + +#### Included Applications + +NomadBSD comes with the following software installed: + + * Thunar file manager + * Asunder CD ripper + * Bash 5.0 + * Filezilla FTP client + * Firefox web browser + * Fish Command line + * Gimp + * Qpdfview + * Git + + + * Hexchat IRC client + * Leafpad text editor + * Midnight Commander file manager + * PaleMoon web browser + * PCManFM file manager + * Pidgin messaging client + * Transmission BitTorrent client + + + * Redshift + * Sakura terminal emulator + * Slim login manager + * Thunderbird email client + * VLC media player + * Plank application dock + * Z Shell + + + +You can see a complete of the pre-installed applications in the [MANIFEST file][8]. + +![Nomadbsd Openbox Menu][9] + +#### Version 1.2 Released + +NomadBSD recently released version 1.2 on April 21, 2019. This means that NomadBSD is now based on FreeBSD 12.0-p3. TRIM is now enabled by default. One of the biggest changes is that the initial command-line setup was replaced with a Qt graphical interface. They also added a Qt5 tool to install NomadBSD to your hard drive. A number of fixes were included to improve graphics support. They also added support for creating 32-bit images. + +[][10] + +Suggested read 6 Reasons Why Linux Users Switch to BSD + +### Installing NomadBSD + +Since NomadBSD is designed to be a live system, we will need to add the BSD to a USB drive. First, you will need to [download it][11]. There are several options to choose from: 64-bit, 32-bit, or 64-bit Mac. + +You will be a USB drive that has at least 4GB. The system that you are installing to should have a 1.2 GHz processor and 1GB of RAM to run NomadBSD comfortably. Both BIOS and UEFI are supported. + +All of the images available for download are compressed as a `.lzma` file. So, once you have downloaded the file, you will need to extract the `.img` file. On Linux, you can use either of these commands: `lzma -d nomadbsd-x.y.z.img.lzma` or `xzcat nomadbsd-x.y.z.img.lzma`. (Be sure to replace x.y.z with the correct file name you just downloaded.) + +Before we proceed, we need to find out the id of your USB drive. (Hopefully, you have inserted it by now.) I use the `lsblk` command to find my USB drive, which in my case is `sdb`. To write the image file, use this command `sudo dd if=nomadbsd-x.y.z.img of=/dev/sdb bs=1M conv=sync`. (Again, don’t forget to correct the file name.) If you are uncomfortable using `dd`, you can use [Etcher][12]. If you have Windows, you will need to use [7-zip][13] to extract the image file and Etcher or [Rufus][14] to write the image to the USB drive. + +When you boot from the USB drive, you will encounter a simple config tool. Once you answer the required questions, you will be greeted with a simple Openbox desktop. + +### Thoughts on NomadBSD + +I first discovered NomadBSD back in January when they released 1.2-RC1. At the time, I had been unable to install [Project Trident][15] on my laptop and was very frustrated with BSDs. I downloaded NomadBSD and tried it out. I initially ran into issues reaching the desktop, but RC2 fixed that issue. However, I was unable to get on the internet, even though I had an Ethernet cable plugged in. Luckily, I found the wifi manager in the menu and was able to connect to my wifi. + +Overall, my experience with NomadBSD was pleasant. Once I figured out a few things, I was good to go. I hope that NomadBSD is the first of a new generation of BSDs that focus on mobility and ease of use. 
BSD has conquered the server world; it’s about time it figured out how to be more user-friendly.
+ +#### At the command line + +The simples way to invoke _awk_ is at the command line. You can search a text file for a particular pattern, and if found, print out the line(s) of the file that match the pattern anywhere. As an example, use _cat_ to take a look at the command history file in your home director: + +``` +$ cat ~/.bash_history +``` + +There are probably many lines scrolling by right now. + +_Awk_ helps with this type of file quite easily. Instead of printing the entire file out to the terminal like _cat_ , you can use _awk_ to find something of specific interest. For this example, type the following at the command line if you’re running a standard Fedora edition: + +``` +$ awk '/dnf/' ~/.bash_history +``` + +If you’re running Silverblue, try this instead: + +``` +$ awk '/rpm-ostree/' ~/.bash_history +``` + +In both cases, more data likely appears than what you really want. That’s no problem for _awk_ since it can accept regular expressions. Using the previous example, you can change the pattern to more closely match search requirements of wanting to know about installs only. Try changing the search pattern to one of these: + +``` +$ awk '/rpm-ostree install/' ~/.bash_history +$ awk '/dnf install/' ~/.bash_history +``` + +All the entries of your bash command line history appear that have the pattern specified at any position along the line. Awk works on one line of a data file at a time. It matches pattern, then performs an action, then moves to next line until the end of file (EOF) is reached. + +#### From an _awk_ program + +Using awk at the command line as above is not much different than piping output to _grep_ , like this: + +``` +$ cat .bash_history | grep 'dnf install' +``` + +The end result of printing to standard output ( _stdout_ ) is the same with both methods. + +Awk is a programming language, and the command _awk_ is an interpreter of that language. The real power and flexibility of _awk_ is you can make programs with it, and combine them with shell scripts to create even more powerful programs. For more feature rich development with _awk_ , you can also incorporate C or C++ code using [Dynamic-Extensions][3]. + +Next, to show the power of _awk_ , let’s make a couple of program files to print the header and draw five numbers for the first row of a bingo card. To do this we’ll create two awk program files. + +The first file prints out the header of the bingo card. For this example it is called _bingo-title.awk_. Use your favorite editor to save this text as that file name: +``` + +``` + +BEGIN { +print "B\tI\tN\tG\tO" +} +``` + +``` + +Now the title program is ready. You could try it out with this command: + +``` +$ awk -f bingo-title.awk +``` + +The program prints the word BINGO, with a tab space ( _\t_ ) between the characters. For the number selection, let’s use one of awk’s builtin numeric functions called _rand()_ and use two of the control statements, _for_ and _switch._ (Except the editor changed my program, so no switch statement used this time). + +The title of the second awk program is _bingo-num.awk_. Enter the following into your favorite editor and save with that file name: +``` + +``` + +@include "bingo-title.awk" +BEGIN { +for (i = 1; i < = 5; i++) { +b = int(rand() * 15) + (15*(i-1)) +printf "%s\t", b +} +print +} +``` + +``` + +The _@include_ statement in the file tells the interpreter to process the included file first. In this case the interpreter processs the _bingo-title.awk_ file so the title prints out first. 
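Whichever way you invoke it, an awk program always has the same basic shape: one or more pattern-action rules that awk applies to every line of input, with each line automatically split into fields. A quick, generic illustration (the file name and its columns are placeholders, not something shipped with Fedora):

```
# Print the record number, first field, and last field of every line
# whose third field is greater than 100.
# NR is the current record (line) number; $1 is the first field, $NF the last.
awk '$3 > 100 { print NR, $1, $NF }' data.txt
```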
+ +#### Running the test program + +Now enter the command to pick a row of bingo numbers: + +``` +$ awk -f bingo-num.awk +``` + +Output appears similar to the following. Note that the _rand()_ function in _awk_ is not ideal for truly random numbers. It’s used here only as for example purposes. +``` + +``` + +$ awk -f bingo-num.awk +B I N G O +13 23 34 53 71 +``` + +``` + +In the example, we created two programs with only beginning sections that used actions to manipulate data generated from within the awk program. In order to satisfy the rules of Bingo, more work is needed to achieve the desirable results. The reader is encouraged to fix the programs so they can reliably pick bingo numbers, maybe look at the awk function _srand()_ for answers on how that could be done. + +### Final examples + +_Awk_ can be useful even for mundane daily search tasks that you encounter, like listing all _flatpak’s_ on the _Flathub_ repository from _org.gnome_ (providing you have the Flathub repository setup). The command to do that would be: + +``` +$ flatpak remote-ls flathub --system | awk /org.gnome/ +``` + +A listing appears that shows all output from _remote-ls_ that matches the _org.gnome_ pattern. To see flatpaks already installed from org.gnome, enter this command: + +``` +$ flatpak list --system | awk /org.gnome/ +``` + +Awk is a powerful and flexible programming language that fills a niche with text file manipulation exceedingly well. + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/awk-utility-in-fedora/ + +作者:[Stephen Snow][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/jakfrost/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/awk-816x345.jpg +[2]: https://www.gnu.org/software/gawk/manual/gawk.html#Foreword3 +[3]: https://www.gnu.org/software/gawk/manual/gawk.html#Dynamic-Extensions diff --git a/sources/tech/20190429 How To Turn On And Shutdown The Raspberry Pi -Absolute Beginner Tip.md b/sources/tech/20190429 How To Turn On And Shutdown The Raspberry Pi -Absolute Beginner Tip.md new file mode 100644 index 0000000000..ce667a1dff --- /dev/null +++ b/sources/tech/20190429 How To Turn On And Shutdown The Raspberry Pi -Absolute Beginner Tip.md @@ -0,0 +1,111 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip]) +[#]: via: (https://itsfoss.com/turn-on-raspberry-pi/) +[#]: author: (Chinmay https://itsfoss.com/author/chinmay/) + +How To Turn On And Shutdown The Raspberry Pi [Absolute Beginner Tip] +====== + +_**Brief: This quick tip teaches you how to turn on Raspberry Pi and how to shut it down properly afterwards.**_ + +The [Raspberry Pi][1] is one of the [most popular SBC (Single-Board-Computer)][2]. If you are interested in this topic, I believe that you’ve finally got a Pi device. I also advise to get all the [additional Raspberry Pi accessories][3] to get started with your device. + +You’re ready to turn it on and start to tinker around with it. It has it’s own similarities and differences compared to traditional computers like desktops and laptops. 
+ +Today, let’s go ahead and learn how to turn on and shutdown a Raspberry Pi as it doesn’t really feature a ‘power button’ of sorts. + +For this article I’m using a Raspberry Pi 3B+, but it’s the same for all the Raspberry Pi variants. + +Bestseller No. 1 + +[][4] + +[CanaKit Raspberry Pi 3 B+ (B Plus) Starter Kit (32 GB EVO+ Edition, Premium Black Case)][4] + +CanaKit - Personal Computers + +$79.99 [][5] + +Bestseller No. 2 + +[][6] + +[CanaKit Raspberry Pi 3 B+ (B Plus) with Premium Clear Case and 2.5A Power Supply][6] + +CanaKit - Personal Computers + +$54.99 [][5] + +### Turn on Raspberry Pi + +![Micro USB port for Power][7] + +The micro USB port powers the Raspberry Pi, the way you turn it on is by plugging in the power cable into the micro USB port. But, before you do that you should make sure that you have done the following things. + + * Preparing the micro SD card with Raspbian according to the official [guide][8] and inserting into the micro SD card slot. + * Plugging in the HDMI cable, USB keyboard and a Mouse. + * Plugging in the Ethernet Cable(Optional). + + + +Once you have done the above, plug in the power cable. This turns on the Raspberry Pi and the display will light up and load the Operating System. + +Bestseller No. 1 + +[][4] + +[CanaKit Raspberry Pi 3 B+ (B Plus) Starter Kit (32 GB EVO+ Edition, Premium Black Case)][4] + +CanaKit - Personal Computers + +$79.99 [][5] + +### Shutting Down the Pi + +Shutting down the Pi is pretty straight forward, click the menu button and choose shutdown. + +![Turn off Raspberry Pi graphically][9] + +Alternatively, you can use the [shutdown command][10] in the terminal: + +``` +sudo shutdown now +``` + +Once the shutdown process has started **wait** till it completely finishes and then you can cut the power to it. Once the Pi shuts down, there is no real way to turn the Pi back on without turning off and turning on the power. You could the GPIO’s to turn on the Pi from the shutdown state but it’ll require additional modding. + +[][2] + +Suggested read 12 Single Board Computers: Alternative to Raspberry Pi + +_Note: Micro USB ports tend to be fragile, hence turn-off/on the power at source instead of frequently unplugging and plugging into the micro USB port._ + +Well, that’s about all you should know about turning on and shutting down the Pi, what do you plan to use it for? Let me know in the comments. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/turn-on-raspberry-pi/ + +作者:[Chinmay][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/chinmay/ +[b]: https://github.com/lujun9972 +[1]: https://www.raspberrypi.org/ +[2]: https://itsfoss.com/raspberry-pi-alternatives/ +[3]: https://itsfoss.com/things-you-need-to-get-your-raspberry-pi-working/ +[4]: https://www.amazon.com/CanaKit-Raspberry-Starter-Premium-Black/dp/B07BCC8PK7?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BCC8PK7&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) Starter Kit (32 GB EVO+ Edition, Premium Black Case)) +[5]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime) +[6]: https://www.amazon.com/CanaKit-Raspberry-Premium-Clear-Supply/dp/B07BC7BMHY?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B07BC7BMHY&keywords=raspberry%20pi%20kit (CanaKit Raspberry Pi 3 B+ (B Plus) with Premium Clear Case and 2.5A Power Supply) +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/raspberry-pi-3-microusb.png?fit=800%2C532&ssl=1 +[8]: https://www.raspberrypi.org/documentation/installation/installing-images/README.md +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/Raspbian-ui-menu.jpg?fit=800%2C492&ssl=1 +[10]: https://linuxhandbook.com/linux-shutdown-command/ diff --git a/sources/tech/20190501 Monitor and Manage Docker Containers with Portainer.io (GUI tool) - Part-1.md b/sources/tech/20190501 Monitor and Manage Docker Containers with Portainer.io (GUI tool) - Part-1.md new file mode 100644 index 0000000000..27bf04eb05 --- /dev/null +++ b/sources/tech/20190501 Monitor and Manage Docker Containers with Portainer.io (GUI tool) - Part-1.md @@ -0,0 +1,247 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-1) +[#]: via: (https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-part1/) +[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/) + +Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-1 +====== + +As **Docker** usage and adoption is growing faster and faster, monitoring **Docker container** images is becoming more challenging. As multiple Docker container images are getting created day-by-day, monitoring them is very important. There are already some in built tools and technologies, but configuring them is little complex. As micro-services based architecture is becoming the de-facto standard in coming days, learning such tool adds one more arsenal to your tool-set. + +Based on the above scenarios, there was in need of one light weight and robust tool requirement was growing. So Portainer.io addressed this. “ **Portainer.io** “,(Latest version is 1.20.2) the tool is very light weight(with 2-3 commands only one can configure it) and has become popular among Docker users. 
+ +**This tool has advantages over other tools; some of these are as below** , + + * Light weight (requires only 2-3 commands to be required to run to install this tool) {Also installation image is only around 26-30MB of size) + * Robust and easy to use + * Can be used for Docker monitor and Build + * This tool provides us a detailed overview of your Docker environments + * This tool allows us to manage your containers, images, networks and volumes. + * Portainer is simple to deploy – this requires just one Docker command (can be run from anywhere.) + * Complete Docker-container environment can be monitored easily + + + +**Portainer is also equipped with** , + + * Community support + * Enterprise support + * Has professional services available(along with partner OEM services) + + + +**Functionality and features of Portainer tool are,** + + 1. It comes-up with nice Dashboard, easy to use and monitor. + 2. Many in-built templates for ease of operation and creation + 3. Support of services (OEM, Enterprise level) + 4. Monitoring of Containers, Images, Networks, Volume and configuration at almost real-time. + 5. Also includes Docker-Swarm monitoring + 6. User management with many fancy capabilities + + + +**Read Also :[How to Install Docker CE on Ubuntu 16.04 / 18.04 LTS System][1]** + +### How to install and configure Portainer.io on Ubuntu Linux / RHEL / CentOS + +**Note:** This installation is done on Ubuntu 18.04 but the installation on RHEL & CentOS would be same. We are assuming Docker CE is already installed on your system. + +``` +root@linuxtechi:~$ lsb_release -a +No LSB modules are available. +Distributor ID: Ubuntu +Description: Ubuntu 18.04 LTS +Release: 18.04 +Codename: bionic +root@linuxtechi:~$ +``` + +Create the Volume for portainer + +``` +root@linuxtechi:~$ sudo docker volume create portainer_data +portainer_data +root@linuxtechi:~$ +``` + +Launch and start Portainer Container using the beneath docker command, + +``` +root@linuxtechi:~$ sudo docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer +Unable to find image 'portainer/portainer:latest' locally +latest: Pulling from portainer/portainer +d1e017099d17: Pull complete +0b1e707a06d2: Pull complete +Digest: sha256:d6cc2c20c0af38d8d557ab994c419c799a10fe825e4aa57fea2e2e507a13747d +Status: Downloaded newer image for portainer/portainer:latest +35286de9f2e21d197309575bb52b5599fec24d4f373cc27210d98abc60244107 +root@linuxtechi:~$ +``` + +Once the complete installation is done, use the ip of host or Docker using port 9000 of the Docker engine where portainer is running using your browser. + +**Note:** If OS firewall is enabled on your Docker host then make sure 9000 port is allowed else its GUI will not come up. + +In my case, IP address of my Docker Host / Engine is “192.168.1.16” so URL will be, + + + +[![Portainer-Login-User-Name-Password][2]][3] + +Please make sure that you enter 8-character passwords. Let the admin be the user as it is and then click “Create user”. + +Now the following screen appears, in this select “Local” rectangle box. + +[![Connect-Portainer-Local-Docker][4]][5] + +Click on “Connect” + +Nice GUI with admin as user home screen appears as below, + +[![Portainer-io-Docker-Monitor-Dashboard][6]][7] + +Now Portainer is ready to launch and manage your Docker containers and it can also be used for containers monitoring. 
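
In case the login page does not come up, it is worth confirming that the Portainer container itself is running and then checking its logs. These are standard Docker commands, not part of the original steps; replace the placeholder with whatever container name or ID Docker assigned on your system:

```
root@linuxtechi:~$ sudo docker ps | grep portainer/portainer
root@linuxtechi:~$ sudo docker logs <portainer-container-name-or-id>
```

If the container is up and the logs look clean, the usual culprit is the OS firewall blocking port 9000, as mentioned above.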
+ +### Bring-up container image on Portainer tool + +[![Portainer-Endpoints][8]][9] + +Now check the present status, there are two container images are already running, if you create one more that appears instantly. + +From your command line kick-start one or two containers as below, + +``` +root@linuxtechi:~$ sudo docker run --name test -it debian +Unable to find image 'debian:latest' locally +latest: Pulling from library/debian +e79bb959ec00: Pull complete +Digest: sha256:724b0fbbda7fda6372ffed586670573c59e07a48c86d606bab05db118abe0ef5 +Status: Downloaded newer image for debian:latest +root@linuxtechi:/# +``` + +Now click Refresh button (Are you sure message appears, click “continue” on this) in Portainer GUI, you will now see 3 container images as highlighted below, + +[![Portainer-io-new-container-image][10]][11] + +Click on the “ **containers** ” (in which it is red circled above), next window appears with “ **Dashboard Endpoint summary** ” + +[![Portainer-io-Docker-Container-Dash][12]][13] + +In this page, click on “ **Containers** ” as highlighted in red color. Now you are ready to monitor your container image. + +### Simple Docker container image monitoring + +From the above step, it appears that a fancy and nice looking “Container List” page appears as below, + +[![Portainer-Container-List][14]][15] + +All the container images can be controlled from here (stop, start, etc) + +**1)** Now from this page, stop the earlier started {“test” container (this was the debian image that we started earlier)} + +To do this select the check box in front of this image and click stop button from above, + +[![Stop-Container-Portainer-io-dashboard][16]][17] + +From the command line option, you will see that this image has been stopped or exited now, + +``` +root@linuxtechi:~$ sudo docker container ls -a +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +d45902e717c0 debian "bash" 21 minutes ago Exited (0) 49 seconds ago test +08b96eddbae9 centos:7 "/bin/bash" About an hour ago Exited (137) 9 minutes ago mycontainer2 +35286de9f2e2 portainer/portainer "/portainer" 2 hours ago Up About an hour 0.0.0.0:9000->9000/tcp compassionate_benz +root@linuxtechi:~$ +``` + +**2)** Now start the stopped containers (test & mycontainer2) from Portainer GUI, + +Select the check box in front of stopped containers, and the click on Start + +[![Start-Containers-Portainer-GUI][18]][19] + +You will get a quick window saying, “ **Container successfully started** ” and with running state + +[![Conatiner-Started-successfully-Portainer-GUI][20]][21] + +### Various other options and features are explored as below step-by-step + +**1)** Click on “ **Images** ” which is highlighted, you will get the below window, + +[![Docker-Container-Images-Portainer-GUI][22]][23] + +This is the list of container images that are available but some may not running. These images can be imported, exported or uploaded to various locations, below screen shot shows the same, + +[![Upload-Docker-Container-Image-Portainer-GUI][24]][25] + +**2)** Click on “ **volumes”** which is highlighted, you will get the below window, + +[![Volume-list-Portainer-io-gui][26]][27] + +**3)** Volumes can be added easily with following option, click on add volume button, below window appears, + +Provide the name as “ **myvol** ” in the name box and click on “ **create the volume** ” button. 
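
For reference, the same volume can also be created and verified from the Docker CLI. These are standard commands included only as a cross-check; the GUI steps above are all that is required:

```
root@linuxtechi:~$ sudo docker volume create myvol
myvol
root@linuxtechi:~$ sudo docker volume ls | grep myvol
```

Either way, the result shows up in Portainer's volume list, as in the screenshots that follow.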
+ +[![Volume-Creation-Portainer-io-gui][28]][29] + +The newly created volume appears as below, (with unused state) + +[![Volume-unused-Portainer-io-gui][30]][31] + +#### Conclusion: + +As from the above installation steps, configuration and playing around with various options you can see how easy and fancy looking is Portainer.io tool is. This provides multiple features and options to explore on building, monitoring docker container. As explained this is very light weight tool, so doesn’t add any overload to host system. Next set-of options will be explored in part-2 of this series. + +Read Also: **[Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-2][32]** + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-part1/ + +作者:[Shashidhar Soppin][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/shashidhar/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/how-to-setup-docker-on-ubuntu-server-16-04/ +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Login-User-Name-Password-1024x681.jpg +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Login-User-Name-Password.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Connect-Portainer-Local-Docker-1024x538.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Connect-Portainer-Local-Docker.jpg +[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Monitor-Dashboard-1024x544.jpg +[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Monitor-Dashboard.jpg +[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Endpoints-1024x252.jpg +[9]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Endpoints.jpg +[10]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-new-container-image-1024x544.jpg +[11]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-new-container-image.jpg +[12]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Container-Dash-1024x544.jpg +[13]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-io-Docker-Container-Dash.jpg +[14]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Container-List-1024x538.jpg +[15]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-Container-List.jpg +[16]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Stop-Container-Portainer-io-dashboard-1024x447.jpg +[17]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Stop-Container-Portainer-io-dashboard.jpg +[18]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Start-Containers-Portainer-GUI-1024x449.jpg +[19]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Start-Containers-Portainer-GUI.jpg +[20]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Conatiner-Started-successfully-Portainer-GUI-1024x538.jpg +[21]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Conatiner-Started-successfully-Portainer-GUI.jpg +[22]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker-Container-Images-Portainer-GUI-1024x544.jpg +[23]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker-Container-Images-Portainer-GUI.jpg +[24]: 
https://www.linuxtechi.com/wp-content/uploads/2019/05/Upload-Docker-Container-Image-Portainer-GUI-1024x544.jpg +[25]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Upload-Docker-Container-Image-Portainer-GUI.jpg +[26]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-list-Portainer-io-gui-1024x544.jpg +[27]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-list-Portainer-io-gui.jpg +[28]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Creation-Portainer-io-gui-1024x544.jpg +[29]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-Creation-Portainer-io-gui.jpg +[30]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-unused-Portainer-io-gui-1024x544.jpg +[31]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Volume-unused-Portainer-io-gui.jpg +[32]: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/ diff --git a/sources/tech/20190502 Crowdsourcing license compliance with ClearlyDefined.md b/sources/tech/20190502 Crowdsourcing license compliance with ClearlyDefined.md new file mode 100644 index 0000000000..fe36e37b9c --- /dev/null +++ b/sources/tech/20190502 Crowdsourcing license compliance with ClearlyDefined.md @@ -0,0 +1,99 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Crowdsourcing license compliance with ClearlyDefined) +[#]: via: (https://opensource.com/article/19/5/license-compliance-clearlydefined) +[#]: author: (Jeff McAffer https://opensource.com/users/jeffmcaffer) + +Crowdsourcing license compliance with ClearlyDefined +====== +Licensing is what holds open source together, and ClearlyDefined takes +the mystery out of projects' licenses, copyright, and source location. +![][1] + +Open source use continues to skyrocket, not just in use cases and scenarios but also in volume. It is trivial for a developer to depend on a 1,000 JavaScript packages from a single run of `npm install` or have thousands of packages in a [Docker][2] image. At the same time, there is increased interest in ensuring license compliance. + +Without the right license you may not be able to legally use a software component in the way you intend or may have obligations that run counter to your business model. For instance, a JavaScript package could be marked as [MIT license][3], which allows commercial reuse, while one of its dependencies is licensed has a [copyleft license][4] that requires you give your software away under the same license. Complying means finding the applicable license(s), and assessing and adhering to the terms, which is not too bad for individual components adn can be daunting for large initiatives. + +Fortunately, this open source challenge has an open source solution: [ClearlyDefined][5]. ClearlyDefined is a crowdsourced, open source, [Open Source Initiative][6] (OSI) effort to gather, curate, and upstream/normalize data about open source components, such as license, copyright, and source location. This data is the cornerstone of reducing the friction in open source license compliance. + +The premise behind ClearlyDefined is simple: we are all struggling to find and understand key information related to the open source we use—whether it is finding the license, knowing who to attribute, or identifying the source that goes with a particular package. Rather than struggling independently, ClearlyDefined allows us to collaborate and share the compliance effort. 
Moreover, the ClearlyDefined community seeks to upstream any corrections so future releases are more clearly defined and make conventions more explicit to improve community understanding of project intent. + +### How it works + +![ClearlyDefined's harvest, curate, upstream process][7] + +ClearlyDefined monitors the open source ecosystem and automatically harvests relevant data from open source components using a host of open source tools such as [ScanCode][8], [FOSSology][9], and [Licensee][10]. The results are summarized and aggregated to create a _definition_ , which is then surfaced to users via an API and a UI. Each definition includes: + + * Declared license of the component + * Licenses and copyrights discovered across all files + * Exact source code location to the commit level + * Release date + * List of embedded components + + + +Coincidentally (well, not really), this is exactly the data you need to do license compliance. + +### Curating + +Any given definition may have gaps or imperfections due to tool issues or the data being missing or incorrect at the origin. ClearlyDefined enables users to curate the results by refining the values and filling in the gaps. These contributions are reviewed and merged, as with any open source project. The result is an improved dataset for all to use. + +### Getting ahead + +To a certain degree, this process is still chasing the problem—analyzing and curating after the packages have already been published. To get ahead of the game, the ClearlyDefined community also feeds merged curations back to the originating projects as pull requests (e.g., adding a license file, clarifying a copyright). This increases the clarity of future release and sets up a virtuous cycle. + +### Adapting, not mandating + +In doing the analysis, we've found quite a number of approaches to expressing license-related data. Different communities put LICENSE files in different places or have different practices around attribution. The ClearlyDefined philosophy is to discover these conventions and adapt to them rather than asking the communities to do something different. A side benefit of this is that implicit conventions can be made more explicit, improving clarity for all. + +Related to this, ClearlyDefined is careful to not look too hard for this interesting data. If we have to be too smart and infer too much to find the data, then there's a good chance the origin is not all that clear. Instead, we prefer to work with the community to better understand and clarify the conventions being used. From there, we can update the tools accordingly and make it easier to be "clearly defined." + +#### NOTICE files + +As an added bonus for users, we set up an API and UI for generating NOTICE files, making it trivial for you to comply with the attribution requirements found in most open source licenses. You can give ClearlyDefined a list of components (e.g., _drag and drop an npm package-lock.json file on the UI_ ) and get back a fully formed NOTICE file rendered by one of several renderers (e.g., text, HTML, Handlebars.js template). This is a snap, given that we already have all the compliance data. Big shout out to the [OSS Attribution Builder project][11] for making a simple and pluggable NOTICE renderer we could just pop into the ClearlyDefined service. 
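
If you want a quick feel for the data behind all of this before getting involved, the definitions are also reachable over plain HTTP. The sketch below is illustrative only: the host name, the path layout, and the example npm coordinates are assumptions to double-check against the current ClearlyDefined API documentation rather than details taken from this article:

```
# Illustrative only: fetch the definition for one npm component by its coordinates
curl -s https://api.clearlydefined.io/definitions/npm/npmjs/-/lodash/4.17.11
```

If the call succeeds, the returned JSON should carry the same fields described earlier: the declared license, the discovered licenses and copyrights, and the exact source location.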
+ +### Getting involved + +You can get involved with ClearlyDefined in several ways: + + * Become an active user, contributing to your compliance workflow + * Review other people's curations using the interface + * Get involved in [the code][12] (Node and React) + * Ask and answer questions on [our mailing list][13] or [Discord channel][14] + * Contribute money to the OSI targeted to ClearlyDefined. We'll use that to fund development and curation. + + + +We are excited to continue to grow our community of contributors so that licensing can continue to become an understable part of any team's open source adoption. For more information, check out [https://clearlydefined.io][15]. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/license-compliance-clearlydefined + +作者:[Jeff McAffer][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jeffmcaffer +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Crowdfunding_520x292_9597717_0612CM.png?itok=lxSKyFXU +[2]: https://opensource.com/resources/what-docker +[3]: /article/19/4/history-mit-license +[4]: /resources/what-is-copyleft +[5]: https://clearlydefined.io +[6]: https://opensource.org +[7]: https://opensource.com/sites/default/files/uploads/clearlydefined.png (ClearlyDefined's harvest, curate, upstream process) +[8]: https://github.com/nexB/scancode-toolkit +[9]: https://www.fossology.org/ +[10]: https://github.com/licensee/licensee +[11]: https://github.com/amzn/oss-attribution-builder +[12]: https://github.com/clearlydefined +[13]: mailto:clearlydefined@googlegroups.com +[14]: %C2%A0https://clearlydefined.io/discord) +[15]: https://clearlydefined.io/ diff --git a/sources/tech/20190502 The making of the Breaking the Code electronic book.md b/sources/tech/20190502 The making of the Breaking the Code electronic book.md new file mode 100644 index 0000000000..6786df8549 --- /dev/null +++ b/sources/tech/20190502 The making of the Breaking the Code electronic book.md @@ -0,0 +1,62 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The making of the Breaking the Code electronic book) +[#]: via: (https://opensource.com/article/19/5/code-book) +[#]: author: (Alicia Gibb https://opensource.com/users/aliciagibb/users/don-watkins) + +The making of the Breaking the Code electronic book +====== +Offering a safe space for middle school girls to learn technology speaks +volumes about who should be sitting around the tech table. +![Open hardware electronic book][1] + +I like a good challenge. The [Open Source Stories team][2] came to me with a great one: Create a hardware project where students could create their own thing that would be put together as a larger thing. The students would be middle school girls. My job was to figure out the hardware and make this thing make sense. + +After days of sketching out concepts, I was wandering through my local public library, and it dawned on me that the perfect piece of hardware where everyone could design their own part to create something whole is a book! 
The idea of a book using paper electronics was exciting, simple enough to be taught in a day, and fit the criteria of needing no special equipment, like soldering irons. + +!["Breaking the Code" book cover][3] + +I designed two parts to the electronics within the book. Half the circuits were developed with copper tape, LEDs, and DIY buttons, and half were developed with LilyPad Arduino microcontrollers, sensors, LEDs, and DIY buttons. Using the electronics in the book, the girls could make pages light up, buzz, or play music using various inputs such as button presses, page turns, or tilting the book. + +!['Breaking the Code' interior pages][4] + +We worked with young adult author [Lauren Sabel][5] to come up with the story, which features two girls who get locked in the basement of their school and have to solve puzzles to get out. Setting the scene in the basement gave us lots of opportunities to use lights! Along with the story, we received illustrations that the girls enhanced with electronics. The girls got creative, for example, using lights as the skeleton's eyes, not just for the obvious light bulb in the room. + +Creating a curriculum that was flexible enough to empower each girl to build her own successfully functioning circuit was a vital piece of the user experience. We chose components so the circuit wouldn't need to be over-engineered. We also used breakout boards and LEDs with built-in resistors so that the circuits allowed flexibility and functioned with only basic knowledge of circuit design—without getting too muddled in the deep end. + +!['Breaking the Code' interior pages][6] + +The project curriculum gave girls the confidence and skills to understand electronics by building two circuits, in the process learning circuit layout, directional aspects, cause-and-effect through inputs and outputs, and how to identify various components. Controlling electrons by pushing them through a circuit feels a bit like you're controlling a tiny part of the universe. And seeing the girls' faces light up is like seeing a universe of opportunities open in front of them. + +!['Breaking the Code' interior pages][7] + +The girls were ecstatic to see their work as a completed book, taking pride in their pages and showing others what they had built. + +![About 'Breaking the Code'][8] + +Teaching them my little corner of the world for the day was a truly empowering experience for me. As a woman in tech, I think this is the right approach for companies trying to change the gender inequalities we see in tech. Offering a safe space to learn—with lots of people in the room who look like you as mentors—speaks volumes about who should be sitting around the tech table. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/code-book + +作者:[Alicia Gibb][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/aliciagibb/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/code_book_electronics_hardware.jpg?itok=zb-zaiwz (Open hardware electronic book) +[2]: https://www.redhat.com/en/open-source-stories +[3]: https://opensource.com/sites/default/files/uploads/codebook_cover.jpg ("Breaking the Code" book cover) +[4]: https://opensource.com/sites/default/files/uploads/codebook_38-39.jpg ('Breaking the Code' interior pages) +[5]: https://www.amazon.com/Lauren-Sabel/e/B01M0FW223 +[6]: https://opensource.com/sites/default/files/uploads/codebook_lightbulb.jpg ('Breaking the Code' interior pages) +[7]: https://opensource.com/sites/default/files/uploads/codebook_10-11.jpg ('Breaking the Code' interior pages) +[8]: https://opensource.com/sites/default/files/uploads/codebook_pg1.jpg (About 'Breaking the Code') diff --git a/sources/tech/20190503 Mirror your System Drive using Software RAID.md b/sources/tech/20190503 Mirror your System Drive using Software RAID.md new file mode 100644 index 0000000000..1b5936dfa0 --- /dev/null +++ b/sources/tech/20190503 Mirror your System Drive using Software RAID.md @@ -0,0 +1,306 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Mirror your System Drive using Software RAID) +[#]: via: (https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/) +[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/) + +Mirror your System Drive using Software RAID +====== + +![][1] + +Nothing lasts forever. When it comes to the hardware in your PC, most of it can easily be replaced. There is, however, one special-case hardware component in your PC that is not as easy to replace as the rest — your hard disk drive. + +### Drive Mirroring + +Your hard drive stores your personal data. Some of your data can be backed up automatically by scheduled backup jobs. But those jobs scan the files to be backed up for changes and trying to scan an entire drive would be very resource intensive. Also, anything that you’ve changed since your last backup will be lost if your drive fails. [Drive mirroring][2] is a better way to maintain a secondary copy of your entire hard drive. With drive mirroring, a secondary copy of _all the data_ on your hard drive is maintained _in real time_. + +An added benefit of live mirroring your hard drive to a secondary hard drive is that it can [increase your computer’s performance][3]. Because disk I/O is one of your computer’s main performance [bottlenecks][4], the performance improvement can be quite significant. + +Note that a mirror is not a backup. It only protects your data from being lost if one of your physical drives fail. Types of failures that drive mirroring, by itself, does not protect against include: + + * [File System Corruption][5] + * [Bit Rot][6] + * Accidental File Deletion + * Simultaneous Failure of all Mirrored Drives (highly unlikely) + + + +Some of the above can be addressed by other file system features that can be used in conjunction with drive mirroring. 
File system features that address the above types of failures include: + + * Using a [Journaling][7] or [Log-Structured][8] file system + * Using [Checksums][9] ([ZFS][10] , for example, does this automatically and transparently) + * Using [Snapshots][11] + * Using [BCVs][12] + + + +This guide will demonstrate one method of mirroring your system drive using the Multiple Disk and Device Administration (mdadm) toolset. Just for fun, this guide will show how to do the conversion without using any extra boot media (CDs, USB drives, etc). For more about the concepts and terminology related to the multiple device driver, you can skim the _md_ man page: + +``` +$ man md +``` + +### The Procedure + + 1. **Use** [**sgdisk**][13] **to (re)partition the _extra_ drive that you have added to your computer** : + +``` + $ sudo -i +# MY_DISK_1=/dev/sdb +# sgdisk --zap-all $MY_DISK_1 +# test -d /sys/firmware/efi/efivars || sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub_1 $MY_DISK_1 +# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot_1 $MY_DISK_1 +# sgdisk -n 0:0:+4GiB -t 0:fd00 -c 0:swap_1 $MY_DISK_1 +# sgdisk -n 0:0:0 -t 0:fd00 -c 0:root_1 $MY_DISK_1 +``` + +– If the drive that you will be using for the second half of the mirror in step 12 is smaller than this drive, then you will need to adjust down the size of the last partition so that the total size of all the partitions is not greater than the size of your second drive. +– A few of the commands in this guide are prefixed with a test for the existence of an _efivars_ directory. This is necessary because those commands are slightly different depending on whether your computer is BIOS-based or UEFI-based. + + 2. **Use** [**mdadm**][14] **to create RAID devices that use the new partitions to store their data** : + +``` + # mdadm --create /dev/md/boot --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/boot_1 missing +# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/swap_1 missing +# mdadm --create /dev/md/root --homehost=any --metadata=1.0 --level=1 --raid-devices=2 /dev/disk/by-partlabel/root_1 missing + +# cat << END > /etc/mdadm.conf +MAILADDR root +AUTO +all +DEVICE partitions +END + +# mdadm --detail --scan >> /etc/mdadm.conf +``` + +– The _missing_ parameter tells mdadm to create an array with a missing member. You will add the other half of the mirror in step 14. +– You should configure [sendmail][15] so you will be notified if a drive fails. +– You can configure [Evolution][16] to [monitor a local mail spool][17]. + + 3. **Use** [**dracut**][18] **to update the initramfs** : + +``` +# dracut -f --add mdraid --add-drivers xfs +``` + +– Dracut will include the /etc/mdadm.conf file you created in the previous section in your initramfs _unless_ you build your initramfs with the _hostonly_ option set to _no_. If you build your initramfs with the hostonly option set to no, then you should either manually include the /etc/mdadm.conf file, manually specify the UUID’s of the RAID arrays to assemble at boot time with the _rd.md.uuid_ kernel parameter, or specify the _rd.auto_ kernel parameter to have all RAID arrays automatically assembled and started at boot time. This guide will demonstrate the _rd.auto_ option since it is the most generic. + + 4. 
**Format the RAID devices** : + +``` + # mkfs -t vfat /dev/md/boot +# mkswap /dev/md/swap +# mkfs -t xfs /dev/md/root +``` + +– The new [Boot Loader Specification][19] states “if the OS is installed on a disk with GPT disk label, and no ESP partition exists yet, a new suitably sized (let’s say 500MB) ESP should be created and should be used as $BOOT” and “$BOOT must be a VFAT (16 or 32) file system”. + + 5. **Reboot and set the _rd.auto_ , _rd.break_ and _single_ kernel parameters** : + +``` +# reboot +``` + +– You may need to [set your root password][20] before rebooting so that you can get into _single-user mode_ in step 7. +– See “[Making Temporary Changes to a GRUB 2 Menu][21]” for directions on how to set kernel parameters on compters that use the GRUB 2 boot loader. + + 6. **Use** [**the dracut shell**][18] **to copy the root file system** : + +``` + # mkdir /newroot +# mount /dev/md/root /newroot +# shopt -s dotglob +# cp -ax /sysroot/* /newroot +# rm -rf /newroot/boot/* +# umount /newroot +# exit +``` + +– The _dotglob_ flag is set for this bash session so that the [wildcard character][22] will match hidden files. +– Files are removed from the _boot_ directory because they will be copied to a separate partition in the next step. +– This copy operation is being done from the dracut shell to insure that no processes are accessing the files while they are being copied. + + 7. **Use _single-user mode_ to copy the non-root file systems** : + +``` + # mkdir /newroot +# mount /dev/md/root /newroot +# mount /dev/md/boot /newroot/boot +# shopt -s dotglob +# cp -Lr /boot/* /newroot/boot +# test -d /newroot/boot/efi/EFI && mv /newroot/boot/efi/EFI/* /newroot/boot/efi && rmdir /newroot/boot/efi/EFI +# test -d /sys/firmware/efi/efivars && ln -sfr /newroot/boot/efi/fedora/grub.cfg /newroot/etc/grub2-efi.cfg +# cp -ax /home/* /newroot/home +# exit +``` + +– It is OK to run these commands in the dracut shell shown in the previous section instead of doing it from single-user mode. I’ve demonstrated using single-user mode to avoid having to explain how to mount the non-root partitions from the dracut shell. +– The parameters being past to the _cp_ command for the _boot_ directory are a little different because the VFAT file system doesn’t support symbolic links or Unix-style file permissions. +– In rare cases, the _rd.auto_ parameter is known to cause LVM to fail to assemble due to a [race condition][23]. If you see errors about your _swap_ or _home_ partition failing to mount when entering single-user mode, simply try again by repeating step 5 but omiting the _rd.break_ paramenter so that you will go directly to single-user mode. + + 8. **Update _fstab_ on the new drive** : + +``` + # cat << END > /newroot/etc/fstab +/dev/md/root / xfs defaults 0 0 +/dev/md/boot /boot vfat defaults 0 0 +/dev/md/swap swap swap defaults 0 0 +END +``` + + 9. **Configure the boot loader on the new drive** : + +``` + # NEW_GRUB_CMDLINE_LINUX=$(cat /etc/default/grub | sed -n 's/^GRUB_CMDLINE_LINUX="\(.*\)"/\1/ p') +# NEW_GRUB_CMDLINE_LINUX=${NEW_GRUB_CMDLINE_LINUX//rd.lvm.*([^ ])} +# NEW_GRUB_CMDLINE_LINUX=${NEW_GRUB_CMDLINE_LINUX//resume=*([^ ])} +# NEW_GRUB_CMDLINE_LINUX+=" selinux=0 rd.auto" +# sed -i "/^GRUB_CMDLINE_LINUX=/s/=.*/=\"$NEW_GRUB_CMDLINE_LINUX\"/" /newroot/etc/default/grub +``` + +– You can re-enable selinux after this procedure is complete. But you will have to [relabel your file system][24] first. + + 10. 
**Install the boot loader on the new drive** : + +``` + # sed -i '/^GRUB_DISABLE_OS_PROBER=.*/d' /newroot/etc/default/grub +# echo "GRUB_DISABLE_OS_PROBER=true" >> /newroot/etc/default/grub +# MY_DISK_1=$(mdadm --detail /dev/md/boot | grep active | grep -m 1 -o "/dev/sd.") +# for i in dev dev/pts proc sys run; do mount -o bind /$i /newroot/$i; done +# chroot /newroot env MY_DISK_1=$MY_DISK_1 bash --login +# test -d /sys/firmware/efi/efivars || MY_GRUB_DIR=/boot/grub2 +# test -d /sys/firmware/efi/efivars && MY_GRUB_DIR=$(find /boot/efi -type d -name 'fedora' -print -quit) +# test -e /usr/sbin/grub2-switch-to-blscfg && grub2-switch-to-blscfg --grub-directory=$MY_GRUB_DIR +# grub2-mkconfig -o $MY_GRUB_DIR/grub.cfg \; +# test -d /sys/firmware/efi/efivars && test /boot/grub2/grubenv -nt $MY_GRUB_DIR/grubenv && cp /boot/grub2/grubenv $MY_GRUB_DIR/grubenv +# test -d /sys/firmware/efi/efivars || grub2-install "$MY_DISK_1" +# logout +# for i in run sys proc dev/pts dev; do umount /newroot/$i; done +# test -d /sys/firmware/efi/efivars && efibootmgr -c -d "$MY_DISK_1" -p 1 -l "$(find /newroot/boot -name shimx64.efi -printf '/%P\n' -quit | sed 's!/!\\!g')" -L "Fedora RAID Disk 1" +``` + +– The _grub2-switch-to-blscfg_ command is optional. It is only supported on Fedora 29+. +– The _cp_ command above should not be necessary, but there appears to be a bug in the current version of grub which causes it to write to $BOOT/grub2/grubenv instead of $BOOT/efi/fedora/grubenv on UEFI systems. +– You can use the following command to verify the contents of the _grub.cfg_ file right after running the _grub2-mkconfig_ command above: + +``` +# sed -n '/BEGIN .*10_linux/,/END .*10_linux/ p' $MY_GRUB_DIR/grub.cfg +``` + +– You should see references to _mdraid_ and _mduuid_ in the output from the above command if the RAID array was detected properly. + + 11. **Boot off of the new drive** : + +``` +# reboot +``` + +– How to select the new drive is system-dependent. It usually requires pressing one of the **F12** , **F10** , **Esc** or **Del** keys when you hear the [System OK BIOS beep code][25]. +– On UEFI systems the boot loader on the new drive should be labeled “Fedora RAID Disk 1”. + + 12. **Remove all the volume groups and partitions from your old drive** : + +``` + # MY_DISK_2=/dev/sda +# MY_VOLUMES=$(pvs | grep $MY_DISK_2 | awk '{print $2}' | tr "\n" " ") +# test -n "$MY_VOLUMES" && vgremove $MY_VOLUMES +# sgdisk --zap-all $MY_DISK_2 +``` + +– **WARNING** : You want to make certain that everything is working properly on your new drive before you do this. A good way to verify that your old drive is no longer being used is to try booting your computer once without the old drive connected. +– You can add another new drive to your computer instead of erasing your old one if you prefer. + + 13. **Create new partitions on your old drive to match the ones on your new drive** : + +``` + # test -d /sys/firmware/efi/efivars || sgdisk -n 0:0:+1MiB -t 0:ef02 -c 0:grub_2 $MY_DISK_2 +# sgdisk -n 0:0:+1GiB -t 0:ea00 -c 0:boot_2 $MY_DISK_2 +# sgdisk -n 0:0:+4GiB -t 0:fd00 -c 0:swap_2 $MY_DISK_2 +# sgdisk -n 0:0:0 -t 0:fd00 -c 0:root_2 $MY_DISK_2 +``` + +– It is important that the partitions match in size and type. I prefer to use the _parted_ command to display the partition table because it supports setting the display unit: + +``` + # parted /dev/sda unit MiB print +# parted /dev/sdb unit MiB print +``` + + 14. 
**Use mdadm to add the new partitions to the RAID devices** : + +``` + # mdadm --manage /dev/md/boot --add /dev/disk/by-partlabel/boot_2 +# mdadm --manage /dev/md/swap --add /dev/disk/by-partlabel/swap_2 +# mdadm --manage /dev/md/root --add /dev/disk/by-partlabel/root_2 +``` + + 15. **Install the boot loader on your old drive** : + +``` + # test -d /sys/firmware/efi/efivars || grub2-install "$MY_DISK_2" +# test -d /sys/firmware/efi/efivars && efibootmgr -c -d "$MY_DISK_2" -p 1 -l "$(find /boot -name shimx64.efi -printf "/%P\n" -quit | sed 's!/!\\!g')" -L "Fedora RAID Disk 2" +``` + + 16. **Use mdadm to test that email notifications are working** : + +``` +# mdadm --monitor --scan --oneshot --test +``` + + + + +As soon as your drives have finished synchronizing, you should be able to select either drive when restarting your computer and you will receive the same live-mirrored operating system. If either drive fails, mdmonitor will send an email notification. Recovering from a drive failure is now simply a matter of swapping out the bad drive with a new one and running a few _sgdisk_ and _mdadm_ commands to re-create the mirrors (steps 13 through 15). You will no longer have to worry about losing any data if a drive fails! + +### Video Demonstrations + +Converting a UEFI PC to RAID1 + +Converting a BIOS PC to RAID1 + + * TIP: Set the the quality to 720p on the above videos for best viewing. + + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/mirror-your-system-drive-using-software-raid/ + +作者:[Gregory Bartholomew][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/glb/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/raid_mirroring-816x345.jpg +[2]: https://en.wikipedia.org/wiki/Disk_mirroring +[3]: https://en.wikipedia.org/wiki/Disk_mirroring#Additional_benefits +[4]: https://en.wikipedia.org/wiki/Bottleneck_(software) +[5]: https://en.wikipedia.org/wiki/Data_corruption +[6]: https://en.wikipedia.org/wiki/Data_degradation +[7]: https://en.wikipedia.org/wiki/Journaling_file_system +[8]: https://www.quora.com/What-is-the-difference-between-a-journaling-vs-a-log-structured-file-system +[9]: https://en.wikipedia.org/wiki/File_verification +[10]: https://en.wikipedia.org/wiki/ZFS#Summary_of_key_differentiating_features +[11]: https://en.wikipedia.org/wiki/Snapshot_(computer_storage)#File_systems +[12]: https://en.wikipedia.org/wiki/Business_continuance_volume +[13]: https://fedoramagazine.org/managing-partitions-with-sgdisk/ +[14]: https://fedoramagazine.org/managing-raid-arrays-with-mdadm/ +[15]: https://fedoraproject.org/wiki/QA:Testcase_Sendmail +[16]: https://en.wikipedia.org/wiki/Evolution_(software) +[17]: https://dotancohen.com/howto/root_email.html +[18]: https://fedoramagazine.org/initramfs-dracut-and-the-dracut-emergency-shell/ +[19]: https://systemd.io/BOOT_LOADER_SPECIFICATION#technical-details +[20]: https://docs.fedoraproject.org/en-US/Fedora/26/html/System_Administrators_Guide/sec-Changing_and_Resetting_the_Root_Password.html +[21]: https://docs.fedoraproject.org/en-US/fedora/rawhide/system-administrators-guide/kernel-module-driver-configuration/Working_with_the_GRUB_2_Boot_Loader/#sec-Making_Temporary_Changes_to_a_GRUB_2_Menu +[22]: 
https://en.wikipedia.org/wiki/Wildcard_character#File_and_directory_patterns +[23]: https://en.wikipedia.org/wiki/Race_condition +[24]: https://wiki.centos.org/HowTos/SELinux#head-867ca18a09f3103705cdb04b7d2581b69cd74c55 +[25]: https://en.wikipedia.org/wiki/Power-on_self-test#Original_IBM_POST_beep_codes diff --git a/sources/tech/20190503 SuiteCRM- An Open Source CRM Takes Aim At Salesforce.md b/sources/tech/20190503 SuiteCRM- An Open Source CRM Takes Aim At Salesforce.md new file mode 100644 index 0000000000..63802d4976 --- /dev/null +++ b/sources/tech/20190503 SuiteCRM- An Open Source CRM Takes Aim At Salesforce.md @@ -0,0 +1,105 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (SuiteCRM: An Open Source CRM Takes Aim At Salesforce) +[#]: via: (https://itsfoss.com/suitecrm-ondemand/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +SuiteCRM: An Open Source CRM Takes Aim At Salesforce +====== + +SuiteCRM is one of the most popular open source CRM (Customer Relationship Management) software available. With its unique-priced managed CRM hosting service, SuiteCRM is aiming to challenge enterprise CRMs like Salesforce. + +### SuiteCRM: An Open Source CRM Software + +CRM stands for Customer Relationship Management. It is used by businesses to manage the interaction with customers, keep track of services, supplies and other things that help the business manage their customers. + +![][1] + +[SuiteCRM][2] came into existence after the hugely popular [SugarCRM][3] decided to stop developing its open source version. The open source version of SugarCRM was then forked into SuiteCRM by UK-based [SalesAgility][4] team. + +In just a couple of years, SuiteCRM became immensely popular and started to be considered the best open source CRM software out there. You can gauge its popularity from the fact that it’s nearing a million download and it has over 100,000 community members. There are around 4 million SuiteCRM users worldwide (a CRM software usually has more than one user) and it is available in several languages. It’s even used by National Health Service ([NHS][5]) in UK. + +Since SuiteCRM is a free and open source software, you are free to download it and deploy it on your cloud server such as [UpCloud][6] (we at It’s FOSS use it), [DigitalOcean][7], [AWS][8] or any Linux server of our own. + +But configuring the software, deploying it and managing it a tiresome job and requires certain skill level or a the services of a sysadmin. This is why business oriented open source software provide a hosted version of their software. + +This enables you to enjoy the open source software without the additional headache and the team behind the software has a way to generate revenue and continue the development of their software. + +### Suite:OnDemand – Cost effective managed hosting of SuiteCRM + +So, recently, [SalesAgility][4] – the creators/maintainers of SuiteCRM, decided to challenge [Salesforce][9] and other enterprise CRMs by introducing [Suite:OnDemand][10] , a hosted version of SuiteCRM. + +[][11] + +Suggested read Papyrus: An Open Source Note Manager + +Normally, you will observe pricing plans on the basis of number of users. But, with SuiteCRM’s OnDemand cloud hosting plans, they are trying to give businesses an affordable solution on a “per-server” basis instead of paying for every user you add. + +In other words, they want you to pay extra only for advanced features, not for more users. 

Here’s what SalesAgility mentioned in their [press release][12]:

> Unlike Salesforce and other enterprise CRM vendors, the practice of pricing per user has been abandoned in favour of per-server hosting packages all of which will support unlimited users. In addition, there’s no increase in cost for access to advanced features. With Suite:OnDemand every feature and benefit is available with each hosting package.

Of course, unlimited users does not mean that you should abuse the term. So, there’s a recommended number of users for every hosting plan you opt for.

![Suitecrm Hosting][13]

The CEO of SalesAgility also described their goals for this step:

“ _We want SuiteCRM to be available to all businesses and to all users within a business,_ ” said **Dale Murray, CEO** of **SalesAgility**.

In addition to that, they also mentioned that they want to revolutionize the way enterprise-class CRM is currently offered in order to make it more accessible to businesses and organizations:

> “Many organisations do not have the experience to run and support our product on-premise or it is not part of their technology strategy to do so. With Suite:OnDemand we are providing our customers with a quick and easy solution to access all the features of SuiteCRM without a per user cost. We’re also saying to Salesforce that enterprise-class CRM can be delivered, enhanced, maintained and supported without charging mouth-wateringly expensive monthly fees. Our aim is to transform the CRM market to enable users to make CRM pervasive within their organisations.”
>
> Dale Murray, CEO of SalesAgility

### Why is this a big deal?

This is a huge relief for small business owners and startups because other CRMs like Salesforce and SugarCRM charge $30-$40 per month per user. If you have 10 members in your team, this will increase the cost to $300-$400 per month.

[][14]

Suggested read Winds Beautifully Combines Feed Reader and Podcast Player in One Single App

This is also good news for the open source community, as we will have an affordable alternative to Salesforce.

In addition to this, SuiteCRM is fully open source, meaning there are no license fees or vendor lock-in – as they mention. You are always free to use it on your own.

It is interesting to see the different strategies and solutions being applied by an open source CRM to take aim at Salesforce directly.

What do you think? Let us know your thoughts in the comments below.
+ +_With inputs from Abhishek Prakash._ + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/suitecrm-ondemand/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/wp-content/uploads/2019/05/suite-crm-800x450.png +[2]: https://suitecrm.com/ +[3]: https://www.sugarcrm.com/ +[4]: https://salesagility.com/ +[5]: https://www.nhs.uk/ +[6]: https://www.upcloud.com/register/?promo=itsfoss +[7]: https://m.do.co/c/d58840562553 +[8]: https://aws.amazon.com/ +[9]: https://www.salesforce.com +[10]: https://suitecrm.com/suiteondemand/ +[11]: https://itsfoss.com/papyrus-open-source-note-manager/ +[12]: https://suitecrm.com/sod-pr/ +[13]: https://itsfoss.com/wp-content/uploads/2019/05/suitecrm-hosting-800x457.jpg +[14]: https://itsfoss.com/winds-podcast-feedreader/ diff --git a/sources/tech/20190503 Tutanota Launches New Encrypted Tool to Support Press Freedom.md b/sources/tech/20190503 Tutanota Launches New Encrypted Tool to Support Press Freedom.md new file mode 100644 index 0000000000..692b4ecba8 --- /dev/null +++ b/sources/tech/20190503 Tutanota Launches New Encrypted Tool to Support Press Freedom.md @@ -0,0 +1,85 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Tutanota Launches New Encrypted Tool to Support Press Freedom) +[#]: via: (https://itsfoss.com/tutanota-secure-connect/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Tutanota Launches New Encrypted Tool to Support Press Freedom +====== + +A secure email provider has announced the release of a new product designed to help whistleblowers get their information to the media. The tool is free for journalists. + +### Tutanota helps you protect your privacy + +![][1] + +[Tutanota][2] is a German-based company that provides “world’s most secure email service, easy to use and private by design.” They offer end-to-end encryption for their [secure email service][3]. Recently Tutanota announced a [desktop app for their email service][4]. + +They also make use of two-factor authentication and [open source the code][5] that they use. + +While you can get an account for free, you don’t have to worry about your information being sold or seeing ads. Tutanota makes money by charging for extra features and storage. They also offer solutions for non-profit organizations. + +Tutanota has launched a new service to further help journalists, social activists and whistleblowers in communicating securely. + +[][6] + +Suggested read Purism's New Offering is a Dream Come True for Privacy Concerned People + +### Secure Connect: An encrypted form for websites + +![][7] + +Tutanota has released a new piece of software named Secure Connect. Secure Connect is “an open source encrypted contact form for news sites”. The goal of the project is to create a way so that “whistleblowers can get in touch with journalists securely”. Tutanota picked the right day because May 3rd is the [Day of Press Freedom][8]. + +According to Tutanota, Secure Connect is designed to be easily added to websites, but can also work on any blog to ensure access by smaller news agencies. 
A whistleblower would access the Secure Connect app on a news site, preferably using Tor, and type in any information that they want to bring to light. The whistleblower would also be able to upload files. Once they submit the information, Secure Connect will assign a random address and password, “which lets the whistleblower re-access his sent message at a later stage and check for replies from the news site.”

![Secure Connect Encrypted Contact Form][9]

While Tutanota will be offering Secure Connect to journalists for free, they know that someone will have to foot the bill. They plan to pay for further development of the project by selling it to businesses, such as “lawyers, financial institutions, medical institutions, educational institutions, and the authorities”. Non-journalists would have to pay €24 per month.

You can see a demo of Secure Connect by clicking [here][10]. If you are a journalist interested in adding Secure Connect to your website or blog, you can contact them at [[email protected]][11]. Be sure to include a link to your website.

### Final Thoughts on Secure Connect

I have read repeatedly about whistleblowers whose identities were accidentally exposed, either by themselves or by others. Tutanota’s project looks like it would remove that possibility by making it impossible for others to discover their identity. It also gives both parties an easy way to exchange information without having to worry about encryption or PGP keys.

I understand that it’s not the same as [Firefox Send][13], the encrypted file sharing program from Mozilla. The only question I have is: whose servers will the whistleblowers’ information be sitting on?

Do you think that Tutanota’s Secure Connect will be a boon for whistleblowers and activists? Please let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][14].
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/tutanota-secure-connect/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/wp-content/uploads/2018/02/tutanota-featured-800x450.png +[2]: https://tutanota.com/ +[3]: https://itsfoss.com/tutanota-review/ +[4]: https://itsfoss.com/tutanota-desktop/ +[5]: https://tutanota.com/blog/posts/open-source-email +[6]: https://itsfoss.com/librem-one/ +[7]: https://itsfoss.com/wp-content/uploads/2019/05/secure-communication.jpg +[8]: https://en.wikipedia.org/wiki/World_Press_Freedom_Day +[9]: https://itsfoss.com/wp-content/uploads/2019/05/secure-connect-encrypted-contact-form.png +[10]: https://secureconnect.tutao.de/contactform/demo +[11]: /cdn-cgi/l/email-protection +[12]: https://itsfoss.com/privacy-search-engines/ +[13]: https://itsfoss.com/firefox-send/ +[14]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20190504 Fedora 30 Workstation Installation Guide with Screenshots.md b/sources/tech/20190504 Fedora 30 Workstation Installation Guide with Screenshots.md new file mode 100644 index 0000000000..9e0ebd4381 --- /dev/null +++ b/sources/tech/20190504 Fedora 30 Workstation Installation Guide with Screenshots.md @@ -0,0 +1,207 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Fedora 30 Workstation Installation Guide with Screenshots) +[#]: via: (https://www.linuxtechi.com/fedora-30-workstation-installation-guide/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +Fedora 30 Workstation Installation Guide with Screenshots +====== + +If you are a **Fedora distribution** lover and always try the things at Fedora Workstation and Servers, then it is good news for you as Fedora has released its latest OS edition as **Fedora 30** for the Workstation and Server. One of the important updates in Fedora 30 from its previous release is that it has introduced **Fedora CoreOS** as a replacement of Fedora Atomic host. + +Some other noticeable updates in Fedora 30 are listed beneath: + + * Updated Desktop Gnome 3.32 + * New Linux Kernel 5.0.9 + * Updated Bash Version 5.0, PHP 7.3 & GCC 9 + * Updated Python 3.7.3, JDK12, Ruby 2.6 Mesa 19.0.2 and Golang 1.12 + * Improved DNF (Default Package Manager) + + + +In this article we will walk through the Fedora 30 workstation Installation steps for laptop or desktop. + +**Following are the minimum system requirement for Fedora 30 workstation,** + + * 1GHz Processor (Recommended 2 GHz Dual Core processor) + * 2 GB RAM + * 15 GB unallocated Hard Disk + * Bootable Media (USB / DVD) + * nternet Connection (Optional) + + + +Let’s Jump into Installation steps, + +### Step:1) Download Fedora 30 Workstation ISO File + +Download the Fedora 30 Workstation ISO file on your system from its Official Web Site + + + +Once the ISO file is downloaded, then burn it either in USB drive or DVD and make it bootable. + +### Step:2) Boot Your Target System with Bootable media (USB Drive or DVD) + +Reboot your target machine (i.e. machine where you want to install Fedora 30), Set the boot medium as USB or DVD from Bios settings so system boots up with bootable media. 
+ +### Step:3) Choose Start Fedora-Workstation-30 Live + +When the system boots up with bootable media then we will get the following screen, to begin with installation on your system’s hard disk, choose “ **Start Fedora-Workstation-30 Live** “, + + + +### Step:4) Select Install to Hard Drive Option + +Select “ **Install to Hard Drive** ” option to install Fedora 30 on your system’s hard disk, you can also try Fedora on your system without installing it, for that select “ **Try Fedora** ” Option + + + +### Step:5) Choose appropriate language for your Fedora 30 Installation + +In this step choose your language which will be used during Fedora 30 Installation, + + + +Click on Continue + +### Step:6) Choose Installation destination and partition Scheme + +In the next window we will be present the following screen, here we will choose our installation destination, means on which hard disk we will do installation + + + +In the next screen we will see the local available hard disk, select the disk that suits your installation and then choose how you want to create partitions on it from storage configuration tab. + +If you choose “ **Automatic** ” partition scheme, then installer will create the necessary partition for your system automatically but if you want to create your own customize partition scheme then choose “ **Custom** ” option, + + + +Click on Done + +In this article I will demonstrate how to create [**LVM**][1] based custom partitions, in my case I have around 40 GB unallocated hard drive, so I will be creating following partitions on it, + + * /boot = 2 GB (ext4 file system) + * /home = 15 GB (ext4 file system) + * /var = 10 GB (ext4 file system) + * / = 10 GB (ext4 file system) + * Swap = 2 GB + + + + + +Select “ **LVM** ” as partitioning scheme and then click on plus (+) symbol, + +Specify the mount point as /boot and partition size as 2 GB and then click on “Add mount point” + + + + + +Now create next partition as /home of size 15 GB, Click on + symbol + + + +Click on “ **Add mount point** ” + + + +If you might have noticed, /home partition is created as LVM partition under default Volume Group, if you wish to change default Volume Group name then click on “ **Modify** ” option from Volume Group Tab, + +Mention the Volume Group name you want to set and then click on Save. Now onward all the LVM partition will be part of fedora30 volume group. + + + +Similarly create the next two partitions **/var** and **/** of size 10 GB respectively, + +**/var partition:** + + + +**/ (slash) partition:** + + + +Now create the last partition as swap of size 2 GB, + + + +In the next window, click on Done + + + +In the next screen, choose “ **Accept Changes** ” + + + +Now we will get Installation Summary window, here you can also change the time zone that suits to your installation and then click on “ **Begin Installation** ” + + + +### Step:7) Fedora 30 Installation started + +In this step we can see Fedora 30 Installation has been started and it is in progress, + + + +Once the Installation is completed, you will be prompted to restart your system + + + +Click on Quit and reboot your system. + +Don’t forget the Change boot medium from Bios settings so your system boots up with hard disk. 
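
Once the system has rebooted from the hard disk and you have finished the initial setup described in the next step, it is a good idea to bring the freshly installed system up to date from a terminal. This is a generic sketch; the `htop` package at the end is only an example of installing software with DNF.

```
# Refresh the repository metadata and apply all pending updates
sudo dnf upgrade --refresh -y

# Install an example package from the Fedora repositories
sudo dnf install htop
```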
+ +### Step:8) Welcome message and login Screen after reboot + +When we first time reboot Fedora 30 system after the successful installation, we will get below welcome screen, + + + +Click on Next + +In the next screen you can Sync your online accounts or else you can skip, + + + +In the next window you will be required to specify the local account (user name) and its password, later this account will be used to login to the system + + + + + +Click on Next + +And finally, we will get below screen which confirms that we are ready to use Fedora 30, + + + +Click on “ **Start Using Fedora** ” + + + +Above Gnome Desktop Screen confirms that we have successfully installed Fedora 30 Workstation, now explore it and have fun 😊 + +In Fedora 30 workstation, if you want to install any packages or software from command line use DNF command. + +Read More On: **[26 DNF Command Examples for Package Management in Fedora Linux][2]** + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/fedora-30-workstation-installation-guide/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/lvm-good-way-to-utilize-disks-space/ +[2]: https://www.linuxtechi.com/dnf-command-examples-rpm-management-fedora-linux/ diff --git a/sources/tech/20190504 May the fourth be with you- How Star Wars (and Star Trek) inspired real life tech.md b/sources/tech/20190504 May the fourth be with you- How Star Wars (and Star Trek) inspired real life tech.md new file mode 100644 index 0000000000..a05f9a6b4f --- /dev/null +++ b/sources/tech/20190504 May the fourth be with you- How Star Wars (and Star Trek) inspired real life tech.md @@ -0,0 +1,93 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (May the fourth be with you: How Star Wars (and Star Trek) inspired real life tech) +[#]: via: (https://opensource.com/article/19/5/may-the-fourth-star-wars-trek) +[#]: author: (Jeff Macharyas https://opensource.com/users/jeffmacharyas) + +May the fourth be with you: How Star Wars (and Star Trek) inspired real life tech +====== +The technologies may have been fictional, but these two acclaimed sci-fi +series have inspired open source tech. +![Triangulum galaxy, NASA][1] + +Conventional wisdom says you can either be a fan of _Star Trek_ or of _Star Wars_ , but mixing the two is like mixing matter and anti-matter. I'm not sure that's true, but even if the laws of physics cannot be changed, these two acclaimed sci-fi series have influenced the open source universe and created their own open source multi-verses. + +For example, fans have used the original _Star Trek_ as "source code" to create fan-made films, cartoons, and games. One of the more notable fan creations was the web series _Star Trek Continues_ , which faithfully adapted Gene Roddenberry's universe and redistributed it to the world. + +"Eventually we realized that there is no more profound way in which people could express what _Star Trek_ has meant to them than by creating their own very personal _Star Trek_ things," [Roddenberry said][2]. However, due to copyright restrictions, this "open source" channel [has since been curtailed][3]. 
+ +_Star Wars_ has a different approach to open sourcing its universe. [Jess Paguaga writes][4] on FanSided: "With a variety [of] fan film awards dating back to 2002, the _Star Wars_ brand has always supported and encouraged the creation of short films that help expand the universe of a galaxy far, far away." + +But, _Star Wars_ is not without its own copyright prime directives. In one case, a Darth Vader film by a YouTuber called Star Wars Theory has drawn a copyright claim from Disney. The claim does not stop production of the film, but diverts monetary gains from it, [reports James Richards][5] on FanSided. + +This could be one of the [Ferengi Rules of Acquisition][6], perhaps. + +But if you can't watch your favorite fan film, you can still get your [_Star Wars_ fix right in the Linux terminal][7] by entering: + + +``` +`telnet towel.blinkenlights.nl` +``` + +And _Star Trek_ fans can also interact with the Federation with the original text-based video game from 1971. While a high-school senior, Mike Mayfield ported the game from punch cards to HP BASIC. If you'd like to go old school and battle Klingons, the source code is available at the [Code Project][8]. + +### Real-life star tech + +Both _Star Wars_ and _Star Trek_ have inspired real-life technologies. Although those technologies were fictional, many have become the practical, open technology we use today. Some of them inspired technologies that are still in development now. + +In the early 1970s, Motorola engineer Martin Cooper was trying to beat AT&T at the car-phone game. He says he was watching Captain Kirk use a "communicator" on an episode of _Star Trek_ and had a eureka moment. His team went on to create the first portable cellular 800MHz phone prototype in 90 days. + +In _Star Wars_ , scout stormtroopers of the Galactic Empire rode the Aratech 74-Z Speeder Bike, and a real-life counterpart is the [Aero-X][9] being developed by California's Aerofex. + +Perhaps the most visible _Star Wars_ tech to enter our lives is droids. We first encountered R2-D2 back in the 1970s, but now we have droids vacuuming our carpets and mowing our lawns, from Roombas to the [Worx Landroid][10] lawnmower. + +And, in _Star Wars_ , Princess Leia appeared to Obi-Wan Kenobi as a hologram, and in Star Trek: Voyager, the ship's chief medical officer was an interactive hologram that could diagnose and treat patients. The technology to bring characters like these to "life" is still a ways off, but there are some interesting open source developments that hint of things to come. [OpenHolo][11], "an open source library containing algorithms and software implementations for holograms in various fields," is one such project. + +### Where's the beef? + +> "She handled… real meat… touched it, and cut it?" —Keiko O'Brien, Star Trek: The Next Generation + +In the _Star Trek_ universe, crew members get their meals by simply ordering a replicator to produce whatever food they desire. That could one day become a reality thanks to a concept created by two German students for an open source "meat-printer" they call the [Cultivator][12]. It would use bio-printing to produce something that appears to be meat; the user could even select its mineral and fat content. Perhaps with more collaboration and development, the Cultivator could become the replicator in tomorrow's kitchen! + +### The 501st + +Cosplayers, people from all walks of life who dress as their favorite characters, are the "open source embodiment" of their favorite universes. 
The [501st][13] [Legion][13] is an all-volunteer _Star Wars_ fan organization "formed for the express purpose of bringing together costume enthusiasts under a collective identity within which to operate," according to its charter. + +Jon Stallard, a member of Garrison Tyranus, the Central Virginia chapter of the 501st Legion says, "Everybody wanted to be something else when they were a kid, right? Whether it was Neil Armstrong, Batman, or the Six Million Dollar Man. Every backyard playdate was some kind of make-believe. The 501st lets us participate in our fan communities while contributing to the community at large." + +Are cosplayers really "open source characters"? Well, that depends. The copyright laws around cosplay and using unique props, costumes, and more are very complex, [writes Meredith Filak Rose][14] for _Public Knowledge_. "We're lucky to be living in a time where fandom generally enjoys a positive relationship with the creators whose work it admires," Rose concludes. + +So, it is safe to say that stormtroopers, Ferengi, Vulcans, and Yoda are all here to stay for a long, long time, near, and far, far away. + +Live long and prosper, you shall. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/may-the-fourth-star-wars-trek + +作者:[Jeff Macharyas ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jeffmacharyas +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/triangulum_galaxy_nasa_stars.jpg?itok=NdS19A7m +[2]: https://fanlore.org/wiki/Gene_Roddenberry#His_Views_Regarding_Fanworks +[3]: https://trekmovie.com/2016/06/23/cbs-and-paramount-release-fan-film-guidelines/ +[4]: https://dorksideoftheforce.com/2019/01/17/star-wars-fan-films/ +[5]: https://dorksideoftheforce.com/2019/01/16/disney-claims-copyright-star-wars-theory/ +[6]: https://en.wikipedia.org/wiki/Rules_of_Acquisition +[7]: https://itsfoss.com/star-wars-linux/ +[8]: https://www.codeproject.com/Articles/28228/Star-Trek-1971-Text-Game +[9]: https://www.livescience.com/58943-real-life-star-wars-technology.html +[10]: https://www.digitaltrends.com/cool-tech/best-robot-lawnmowers/ +[11]: http://openholo.org/ +[12]: https://www.pastemagazine.com/articles/2016/05/the-future-is-vegan-according-to-star-trek.html +[13]: https://www.501st.com/ +[14]: https://www.publicknowledge.org/news-blog/blogs/copyright-and-cosplay-working-with-an-awkward-fit diff --git a/sources/tech/20190505 -Review- Void Linux, a Linux BSD Hybrid.md b/sources/tech/20190505 -Review- Void Linux, a Linux BSD Hybrid.md new file mode 100644 index 0000000000..cc8a660252 --- /dev/null +++ b/sources/tech/20190505 -Review- Void Linux, a Linux BSD Hybrid.md @@ -0,0 +1,136 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: ([Review] Void Linux, a Linux BSD Hybrid) +[#]: via: (https://itsfoss.com/void-linux/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +[Review] Void Linux, a Linux BSD Hybrid +====== + +There are distros that follow the crowd and there are others that try to make their own path through the tall weed. Today, we’ll be looking at a small distro that looks to challenge how a distro should work. We’ll be looking at Void Linux. 
+ +### What is Void Linux? + +[Void Linux][1] is a “general purpose operating system, based on the monolithic Linux kernel. Its package system allows you to quickly install, update and remove software; software is provided in binary packages or can be built directly from sources with the help of the XBPS source packages collection.” + +![Void Linux Neofetch][2] + +Like Solus, Void Linux is written from scratch and does not depend on any other operating system. It is a rolling release. Unlike the majority of Linux distros, Void does not use [systemd][3]. Instead, it uses [runit][4]. Another thing that separates Void from the rest of Linux distros is the fact that they use LibreSSL instead of OpenSSL. Void also offers support for the [musl C library][5]. In fact, when you download a .iso file, you can choose between `glibc` and `musl`. + +The homegrown package manager that Void uses is named X Binary Package System (or xbps). According to the [Void wiki][6], xbps has the following features: + + * Supports multiple local and remote repositories (HTTP/HTTPS/FTP). + * RSA signed remote repositories + * SHA256 hashes for package metadata, files, and binary packages + * Supports package states (ala dpkg) to mitigate broken package * installs/updates + * Ability to resume partial package install/updates + * Ability to unpack only files that have been modified in * package updates + * Ability to use virtual packages + * Ability to check for incompatible shared libraries in reverse dependencies + * Ability to replace packages + * Ability to put packages on hold (to never update them) + * Ability to preserve/update configuration files + * Ability to force reinstallation of any installed package + * Ability to downgrade any installed package + * Ability to execute pre/post install/remove/update scriptlets + * Ability to check package integrity: missing files, hashes, missing or unresolved (reverse)dependencies, dangling or modified symlinks, etc. + + + +#### System Requirements + +According to the [Void Linux download page][7], the system requirements differ based on the architecture you choose. 64-bit images require “EM64T CPU, 96MB RAM, 350MB disk, Ethernet/WiFi for network installation”. 32-bit images require “Pentium 4 CPU (SSE2), 96MB RAM, 350MB disk, Ethernet / WiFi for network installation”. The [Void Linux handbook][8] recommends 700 MB for storage and also notes that “Flavor installations require more resources. How much more depends on the flavor.” + +Void also supports ARM devices. You can download [ready to boot images][9] for Raspberry Pi and several other [Raspberry Pi alternatives][10]. + +[][11] + +Suggested read NomadBSD, a BSD for the Road + +### Void Linux Installation + +NOTE: you can either install [Void Linux download page][7] via a live image or use a net installer. I used a live image. + +I was able to successfully install Void Linux on my Dell Latitude D630. This laptop has an Intel Centrino Duo Core processor running at 2.00 GHz, NVIDIA Quadro NVS 135M graphics chip, and 4 GB of RAM. + +![Void Linux Mate][12] + +After I `dd`ed the 800 MB Void Linux MATE image to my thumb drive and inserted it, I booted my computer. I was very quickly presented with a vanilla MATE desktop. To start installing Void, I opened up a terminal and typed `sudo void-installer`. After using the default password `voidlinux`, the installer started. The installer reminded me a little bit of the terminal Debian installer, but it was laid out more like FreeBSD. 
It was divided into keyboard, network, source, hostname, locale, timezone, root password, user account, bootloader, partition, and filesystems sections.

Most of the sections were self-explanatory. In the source section, you could choose whether to install the packages from the local image or grab them from the web. I chose local because I did not want to eat up bandwidth or take longer than I had to. The partition and filesystems sections are usually handled automatically by most installers, but not on Void. In this case, the first section allows you to use `cfdisk` to create partitions and the second allows you to specify what filesystems will be used in those partitions. I followed the partition layout on [this page][13].

If you install Void Linux from the local image, you definitely need to update your system. The [Void wiki][14] recommends running `xbps-install -Suv` until there are no more updates to install. It would probably be a good idea to reboot between batches of updates.

### Experience with Void Linux

So far in my Linux journey, Void Linux has been by far the most difficult. It feels more like I’m [using a BSD than a Linux distro][15]. (I guess that should not be surprising since Void was created by a former [NetBSD][16] developer who wanted to experiment with his own package manager.) The steps in the command line installer are closer to those of [FreeBSD][17] than Debian.

Once Void was installed and updated, I went to work installing apps. Unfortunately, I ran into an issue with missing applications. Most of these applications come preinstalled on other distros. I had to install wget, unzip, git, nano, and LibreOffice, to name just a few.

Void does not come with a graphical package manager. There are three unofficial frontends for the xbps package manager and [one is based on Qt][18]. I ran into issues getting one of the Bash-based tools to work. It hadn’t been updated in 4-5 years.

![Octoxbps][19]

The xbps package manager is kinda interesting. It downloads the package and its signature to verify it. You can see the [terminal print out][20] from when I installed Mcomix. Xbps does not use the naming convention used in most package managers (i.e. `apt install` or `pacman -R`); instead, it uses `xbps-install`, `xbps-query`, and `xbps-remove`. Luckily, the Void wiki has a [page][21] that shows which xbps command corresponds to which apt or dnf command.

The main repo for Void is located in Germany, so I decided to switch to a more local server to ease the burden on that server and to download packages quicker. Switching to a local mirror took a couple of tries because the documentation was not very clear. Documentation for Void is located in two different places: the [wiki][23] and the [handbook][24]. For me, the wiki’s [explanation][25] was confusing and I ran into issues. So, I searched for an answer on DuckDuckGo. From there I stumbled upon the [handbook’s instructions][26], which were much clearer. (The handbook is not linked on the Void Linux website and I had to stumble across it via search.)

One of the nice things about Void is the speed of the system once everything was installed. It had the quickest boot time I have ever encountered. Overall, the system was very responsive. I did not run into any system crashes.

### Final Thoughts

Void Linux took more work to get to a usable state than any other distro I have tried.
Even the BSDs I tried felt more polished than Void. I think the tagline “General purpose Linux” is misleading. It should be “Linux with hackers and tinkerers in mind”. Personally, I prefer using distros that are ready for me to use after installing. While it is an interesting combination of Linux and BSD ideas, I don’t think I’ll add Void to my short list of go-to distros. + +If you like tinkering with your Linux system or like building it from scratch, give [Void Linux][7] a try. + +Have you ever used Void Linux? What is your favorite Debian-based distro? Please let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][27]. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/void-linux/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://voidlinux.org/ +[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/04/Void-Linux-Neofetch.png?resize=800%2C562&ssl=1 +[3]: https://en.wikipedia.org/wiki/Systemd +[4]: http://smarden.org/runit/ +[5]: https://www.musl-libc.org/ +[6]: https://wiki.voidlinux.org/XBPS +[7]: https://voidlinux.org/download/ +[8]: https://docs.voidlinux.org/installation/base-requirements.html +[9]: https://voidlinux.org/download/#download-ready-to-boot-images-for-arm +[10]: https://itsfoss.com/raspberry-pi-alternatives/ +[11]: https://itsfoss.com/nomadbsd/ +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/04/Void-Linux-Mate.png?resize=800%2C640&ssl=1 +[13]: https://wiki.voidlinux.org/Disks#Filesystems +[14]: https://wiki.voidlinux.org/Post_Installation#Updates +[15]: https://itsfoss.com/why-use-bsd/ +[16]: https://itsfoss.com/netbsd-8-release/ +[17]: https://www.freebsd.org/ +[18]: https://github.com/aarnt/octoxbps +[19]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/04/OctoXBPS.jpg?resize=800%2C534&ssl=1 +[20]: https://pastebin.com/g31n1bFT +[21]: https://wiki.voidlinux.org/Rosetta_stone +[22]: https://itsfoss.com/solve-error-partition-grub-rescue-ubuntu-linux/ +[23]: https://wiki.voidlinux.org/ +[24]: https://docs.voidlinux.org/ +[25]: https://wiki.voidlinux.org/XBPS#Official_Repositories +[26]: https://docs.voidlinux.org/xbps/repositories/mirrors/changing.html +[27]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20190505 Blockchain 2.0 - An Introduction To Hyperledger Project (HLP) -Part 8.md b/sources/tech/20190505 Blockchain 2.0 - An Introduction To Hyperledger Project (HLP) -Part 8.md new file mode 100644 index 0000000000..bb1d187ea4 --- /dev/null +++ b/sources/tech/20190505 Blockchain 2.0 - An Introduction To Hyperledger Project (HLP) -Part 8.md @@ -0,0 +1,88 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – An Introduction To Hyperledger Project (HLP) [Part 8]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/) +[#]: author: (editor https://www.ostechnix.com/author/editor/) + +Blockchain 2.0 – An Introduction To Hyperledger Project (HLP) [Part 8] +====== + +![Introduction To Hyperledger Project][1] + +Once a new technology platform reaches a threshold level of popularity in terms of active 
development and commercial interests, major global companies and smaller start-ups alike rush to catch a slice of the pie. **Linux** was one such platform back in the day. Once the ubiquity of its applications was realized, individuals, firms, and institutions started showing their interest in it, and by 2000 the **Linux Foundation** was formed.

The Linux Foundation aims to standardize and develop Linux as a platform by sponsoring its development team. The Linux Foundation is a non-profit organization that is supported by software and IT behemoths such as Microsoft, Oracle, Samsung, Cisco, IBM, and Intel, among others[1]. This is excluding the hundreds of individual developers who offer their services for the betterment of the platform. Over the years the Linux Foundation has taken many projects under its roof. The **Hyperledger Project** is its fastest growing one to date.

Such consortium-led development has a lot of advantages when it comes to turning new technology into usable, useful forms. Developing the standards, libraries, and back-end protocols for large-scale projects is expensive and resource intensive, without a shred of income being generated from it. Hence, it makes sense for companies to pool their resources to develop the common “boring” parts by supporting such organizations, and later, once work on these standard parts is complete, to simply plug in and customize their own products.

Other major innovations that were once, or are currently being, developed following this consortium model include standards for Wi-Fi (the Wi-Fi Alliance), mobile telephony, etc.

### Introduction to Hyperledger Project (HLP)

The Hyperledger Project was launched in December 2015 by the Linux Foundation and is currently among the fastest growing projects it has incubated. It is an umbrella organization for collaborative efforts into developing and advancing tools and standards for [**blockchain**][2]-based distributed ledger technologies (DLT). Major industry players supporting the project include **IBM** , **Intel** and **SAP Ariba** among [**others**][3]. The HLP aims to create frameworks with which individuals and companies can build shared as well as closed blockchains, as required by their own needs. The design principles include a strong tilt toward developing a globally deployable, scalable, robust platform with a focus on privacy and future auditability[2]. It is also important to note that most of the blockchains proposed under the HLP, and the frameworks built around them, are permissioned (private) platforms by design.

### Development goals and structure: Making it plug & play

Although enterprise-facing platforms exist from the likes of the Ethereum alliance, HLP is by definition business facing and is supported by industry behemoths who contribute to and further development of the many modules that come under the HLP banner. The HLP incubates projects after their induction into the cause and, after work on them is finished and the knick-knacks are ironed out, rolls them out for the public. Members of the Hyperledger Project contribute their own work, as IBM did when it contributed its Fabric platform for collaborative development. The codebase is absorbed and developed in-house by the project group and rolled out for all members equally to use.

Such processes make the modules in HLP highly flexible plug-in frameworks that support rapid development and roll-outs in enterprise settings. Furthermore, other comparable platforms are open **permission-less blockchains** , or rather **public chains** , by default; even though it is possible to adapt them to specific applications, HLP modules support permissioned operation natively.

The differences and use cases of public and private blockchains are covered in more detail in [**this comparative primer**][4].

The Hyperledger Project’s mission is four-fold, according to **Brian Behlendorf** , the executive director of the project.

They are:

 1. To create an enterprise grade DLT framework and standards which anyone can port to suit their specific industrial or personal needs.
 2. To give rise to a robust open source community to aid the ecosystem.
 3. To promote and increase the participation of industry members in the said ecosystem, such as member firms.
 4. To host a neutral, unbiased infrastructure for the HLP community to gather and share updates and developments.



The original document can be accessed [**here**][5].

### Structure of the HLP

The **HLP consists of 12 projects** that are classified as independent modules, each usually structured and working independently to develop its module. These are first studied for their capabilities and viability before being incubated. Proposals for additions can be made by any member of the organization. After a project is incubated, active development ensues, after which it is rolled out. Interoperability between these modules is given a high priority, so regular communication between these groups is maintained by the community. Currently 4 of these projects are categorized as active. The active tag implies these are ready for use but not ready for a major release yet. These 4 are arguably the most significant, or rather fundamental, modules for furthering the blockchain revolution. We’ll look at the individual modules and their functionalities in detail at a later time. However, a brief description of the Hyperledger Fabric platform, arguably the most popular among them, follows.

### Hyperledger Fabric

**Hyperledger Fabric** [2] is a fully open-source, permissioned (non-public) blockchain-based DLT platform that is designed with enterprise uses in mind. The platform provides features and is structured to fit the enterprise environment. It is highly modular, allowing its developers to choose from different consensus protocols, **chaincode protocols ([smart contracts][6])** , or identity management systems, etc., as they go along. **It is a permissioned blockchain-based platform** that makes use of an identity management system, meaning participants will be aware of each other’s identities, which is required in an enterprise setting. Fabric allows for smart contract ( _ **“chaincode” is the term that the Hyperledger team uses**_ ) development in a variety of mainstream programming languages including **Java** , **JavaScript** , and **Go**. This allows institutions and enterprises to make use of their existing talent in the area without hiring or re-training developers to write their smart contracts. Fabric also uses an execute-order-validate system to handle smart contracts, for better reliability compared to the standard order-validate system that is used by other platforms providing smart contract functionality.
Pluggable performance, identity management systems, DBMS, Consensus platforms etc. are other features of Fabric that keeps it miles ahead of its competition. + +### Conclusion + +Projects such as the Hyperledger Fabric platforms enable a faster rate of adoption of blockchain technology in mainstream use-cases. The Hyperledger community structure itself supports open governance principles and since all the projects are led as open source platforms, this improves the security and accountability that the teams exhibit in pushing out commitments. + +Since major applications of such projects involve working with enterprises to further development of platforms and standards, the Hyperledger project is currently at a great position with respect to comparable projects by others. + +**References:** + + * **[1][Samsung takes a seat with Intel and IBM at the Linux Foundation | TheINQUIRER][7]** + * **[2] E. Androulaki et al., “Hyperledger Fabric: A Distributed Operating System for Permissioned Blockchains,” 2018.** + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Introduction-To-Hyperledger-Project-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[3]: https://www.hyperledger.org/members +[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/ +[5]: http://www.hitachi.com/rev/archive/2017/r2017_01/expert/index.html +[6]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[7]: https://www.theinquirer.net/inquirer/news/2182438/samsung-takes-seat-intel-ibm-linux-foundation diff --git a/sources/tech/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md b/sources/tech/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md new file mode 100644 index 0000000000..a4669a2eb0 --- /dev/null +++ b/sources/tech/20190505 Blockchain 2.0 - What Is Ethereum -Part 9.md @@ -0,0 +1,83 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – What Is Ethereum [Part 9]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/) +[#]: author: (editor https://www.ostechnix.com/author/editor/) + +Blockchain 2.0 – What Is Ethereum [Part 9] +====== + +![Ethereum][1] + +In the previous guide of this series, we discussed about [**Hyperledger Project (HLP)**][2], a fastest growing product developed by **Linux Foundation**. In this guide, we are going to discuss about what is **Ethereum** and its features in detail. Many researchers opine that the future of the internet will be based on principles of decentralized computing. Decentralized computing was in fact among one of the broader objectives of having the internet in the first place. However, the internet took another turn owing to differences in computing capabilities available. While modern server capabilities make the case for server-side processing and execution, lack of decent mobile networks in large parts of the world make the case for the same on the client side. 
Modern smartphones now have **SoCs** (system on a chip) capable of handling many such operations on the client side itself; however, limitations around retrieving and storing data securely still push developers toward server-side computing and data management. Hence, a bottleneck in data transfer capabilities is currently observed.

All of that might soon change because of advancements in distributed data storage and program execution platforms. [**The blockchain**][3], for the first time in the history of the internet, basically allows for secure data management and program execution on a distributed network of users as opposed to central servers.

**Ethereum** is one such blockchain platform that gives developers access to frameworks and tools used to build and run applications on such a decentralized network. Though more popularly known for its cryptocurrency, Ethereum is more than just **ethers** (the cryptocurrency). It features a full **Turing-complete programming language** designed to develop and deploy **DApps** , or **Distributed Applications** [1]. We’ll look at DApps in more detail in one of the upcoming posts.

Ethereum is open source, supports a public (non-permissioned) blockchain by default, and features an extensive smart contract platform ( **Solidity** ) underneath. Ethereum provides a virtual computing environment called the **Ethereum virtual machine** to run applications and [**smart contracts**][4] as well[2]. The Ethereum virtual machine runs on thousands of participating nodes all over the world, meaning the application data, while being secure, is almost impossible to tamper with or lose.

### Getting behind Ethereum: What sets it apart

In 2017, a group of 30-plus companies from the who’s who of the tech and financial world got together to leverage the Ethereum blockchain’s capabilities. Thus, the **Enterprise Ethereum Alliance (EEA)** was formed by a long list of supporting members including _Microsoft_ , _JP Morgan_ , _Cisco Systems_ , _Deloitte_ , and _Accenture_. JP Morgan already has **Quorum** , a decentralized computing platform for financial services based on Ethereum, currently in operation, while Microsoft has Ethereum-based cloud services it markets through its Azure cloud business[3].

### What is ether and how is it related to Ethereum

Ethereum creator **Vitalik Buterin** understood the true value of a decentralized processing platform and the underlying blockchain tech that powered Bitcoin. However, he failed to gain majority agreement for his proposal that Bitcoin should be developed to support running distributed applications (DApps) and programs (now referred to as smart contracts).

Hence in 2013, he proposed the idea of Ethereum in a white paper he published. The original white paper is still maintained and available for readers **[here][5]**. The idea was to develop a blockchain-based platform to run smart contracts and applications designed to run on nodes and user devices instead of servers.

The Ethereum system is often mistaken to mean just the cryptocurrency ether; however, it has to be reiterated that Ethereum is a full-stack platform for developing and executing applications, and has been since inception, whereas Bitcoin isn’t. **Ether is currently the second biggest cryptocurrency** by market capitalization and trades at an average of $170 per ether at the time of writing this article[4].

### Features and technicalities of the platform[5]

 * As we’ve already mentioned, the cryptocurrency called ether is simply one of the things the platform features. The purpose of the system is more than taking care of financial transactions. In fact, the key difference between the Ethereum platform and Bitcoin is in their scripting capabilities. Ethereum is built around a Turing-complete programming language, which means it has scripting and application capabilities similar to other major programming languages. Developers require this feature to create DApps and complex smart contracts on the platform, a feature that Bitcoin misses out on.
 * The “mining” process of ether is more stringent and complex. While specialized ASICs may be used to mine bitcoin, the basic hashing algorithm used by Ethereum ( **Ethash** ) reduces the advantage that ASICs have in this regard.
 * The transaction fee itself, paid as an incentive to miners and node operators for running the network, is calculated using a computational token called **Gas**. Gas improves the system’s resilience and resistance to external hacks and attacks by requiring the initiator of a transaction to pay ethers proportionate to the computational resources required to carry out that transaction. This is in contrast to other platforms such as Bitcoin, where the transaction fee is measured in tandem with the transaction size. As such, average transaction costs in Ethereum are radically lower than in Bitcoin. This also implies that applications running on the Ethereum virtual machine incur a fee that depends directly on the computational problems that the application is meant to solve. Basically, the more complex an execution, the higher the fee.
 * The block time for Ethereum is estimated to be around _**10-15 seconds**_. The block time is the average time that is required to timestamp and create a block on the blockchain network. Compared to the 10+ minutes the same transaction will take on the Bitcoin network, it becomes apparent that _**Ethereum is much faster**_ with respect to transactions and verification of blocks.
 * _It is also interesting to note that there is no hard cap on the amount of ether that can be mined or the rate at which ether can be mined, leading to a less radical system design than Bitcoin’s._



### Conclusion

While Ethereum is comparable to, and in places far outpaces, similar platforms, it lacked a definite path for development until the Enterprise Ethereum Alliance started pushing it. While the definite push for enterprise development is made by the Ethereum platform, it has to be noted that Ethereum also caters to small-time developers and individuals. As such, developing the platform for both end users and enterprises leaves a lot of specific functionality out of the loop for Ethereum. Also, the blockchain model proposed and developed by the Ethereum foundation is a public model, whereas the one proposed by projects such as the Hyperledger Project is private and permissioned.

While only time can tell which platform among the ones put forward by Ethereum, Hyperledger, and R3 Corda, among others, will find the most fans in real-world use cases, such systems do prove the validity of the claim of a blockchain-powered future.

**References:**

 * [1] [**Gabriel Nicholas, “Ethereum Is Coding’s New Wild West | WIRED,” Wired , 2017**][6].
 * [2] [**What is Ethereum? — Ethereum Homestead 0.1 documentation**][7].
+ * [3] [**Ethereum, a Virtual Currency, Enables Transactions That Rival Bitcoin’s – The New York Times**][8]. + * [4] [**Cryptocurrency Market Capitalizations | CoinMarketCap**][9]. + * [5] [**Introduction — Ethereum Homestead 0.1 documentation**][10]. + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/04/Ethereum-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/ +[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[4]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[5]: https://github.com/ethereum/wiki/wiki/White-Paper +[6]: https://www.wired.com/story/ethereum-is-codings-new-wild-west/ +[7]: http://www.ethdocs.org/en/latest/introduction/what-is-ethereum.html#ethereum-virtual-machine +[8]: https://www.nytimes.com/2016/03/28/business/dealbook/ethereum-a-virtual-currency-enables-transactions-that-rival-bitcoins.html +[9]: https://coinmarketcap.com/ +[10]: http://www.ethdocs.org/en/latest/introduction/index.html diff --git a/sources/tech/20190505 Five Methods To Check Your Current Runlevel In Linux.md b/sources/tech/20190505 Five Methods To Check Your Current Runlevel In Linux.md new file mode 100644 index 0000000000..2169f04e51 --- /dev/null +++ b/sources/tech/20190505 Five Methods To Check Your Current Runlevel In Linux.md @@ -0,0 +1,183 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Five Methods To Check Your Current Runlevel In Linux?) +[#]: via: (https://www.2daygeek.com/check-current-runlevel-in-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +Five Methods To Check Your Current Runlevel In Linux? +====== + +A run level is an operating system state on Linux system. + +There are seven runlevels exist, numbered from zero to six. + +A system can be booted into any of the given runlevel. Run levels are identified by numbers. + +Each runlevel designates a different system configuration and allows access to a different combination of processes. + +By default Linux boots either to runlevel 3 or to runlevel 5. + +Only one runlevel is executed at a time on startup. It doesn’t execute one after another. + +The default runlevel for a system is specified in the /etc/inittab file for SysVinit system. + +But systemd systems doesn’t read this file and it uses the following file `/etc/systemd/system/default.target` to get default runlevel information. + +We can check the Linux system current runlevel using the below five methods. + + * **`runlevel Command:`** runlevel prints the previous and current runlevel of the system. + * **`who Command:`** Print information about users who are currently logged in. It will print the runlevel information with “-r” option. + * **`systemctl Command:`** It controls the systemd system and service manager. + * **`Using /etc/inittab File:`** The default runlevel for a system is specified in the /etc/inittab file for SysVinit System. 
+ * **`Using /etc/systemd/system/default.target File:`** The default runlevel for a system is specified in the /etc/systemd/system/default.target file for systemd System. + + + +Detailed runlevels information is described in the below table. + +**Runlevel** | **SysVinit System** | **systemd System** +---|---|--- +0 | Shutdown or Halt the system | shutdown.target +1 | Single user mode | rescue.target +2 | Multiuser, without NFS | multi-user.target +3 | Full multiuser mode | multi-user.target +4 | unused | multi-user.target +5 | X11 (Graphical User Interface) | graphical.target +6 | reboot the system | reboot.target + +The system will execute the programs/service based on the runlevel. + +For SysVinit system, it will be execute from the following location. + + * Run level 0 – /etc/rc.d/rc0.d/ + * Run level 1 – /etc/rc.d/rc1.d/ + * Run level 2 – /etc/rc.d/rc2.d/ + * Run level 3 – /etc/rc.d/rc3.d/ + * Run level 4 – /etc/rc.d/rc4.d/ + * Run level 5 – /etc/rc.d/rc5.d/ + * Run level 6 – /etc/rc.d/rc6.d/ + + + +For systemd system, it will be execute from the following location. + + * runlevel1.target – /etc/systemd/system/rescue.target + * runlevel2.target – /etc/systemd/system/multi-user.target.wants + * runlevel3.target – /etc/systemd/system/multi-user.target.wants + * runlevel4.target – /etc/systemd/system/multi-user.target.wants + * runlevel5.target – /etc/systemd/system/graphical.target.wants + + + +### 1) How To Check Your Current Runlevel In Linux Using runlevel Command? + +runlevel prints the previous and current runlevel of the system. + +``` +$ runlevel +N 5 +``` + + * **`N:`** “N” indicates that the runlevel has not been changed since the system was booted. + * **`5:`** “5” indicates the current runlevel of the system. + + + +### 2) How To Check Your Current Runlevel In Linux Using who Command? + +Print information about users who are currently logged in. It will print the runlevel information with `-r` option. + +``` +$ who -r + run-level 5 2019-04-22 09:32 +``` + +### 3) How To Check Your Current Runlevel In Linux Using systemctl Command? + +systemctl is used to controls the systemd system and service manager. systemd is system and service manager for Unix like operating systems. + +It can work as a drop-in replacement for sysvinit system. systemd is the first process get started by kernel and holding PID 1. + +systemd uses `.service` files Instead of bash scripts (SysVinit uses). systemd sorts all daemons into their own Linux cgroups and you can see the system hierarchy by exploring `/cgroup/systemd` file. + +``` +$ systemctl get-default +graphical.target +``` + +### 4) How To Check Your Current Runlevel In Linux Using /etc/inittab File? + +The default runlevel for a system is specified in the /etc/inittab file for SysVinit System but systemd systemd doesn’t read the files. + +So, it will work only on SysVinit system and not in systemd system. + +``` +$ cat /etc/inittab +# inittab is only used by upstart for the default runlevel. +# +# ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM. +# +# System initialization is started by /etc/init/rcS.conf +# +# Individual runlevels are started by /etc/init/rc.conf +# +# Ctrl-Alt-Delete is handled by /etc/init/control-alt-delete.conf +# +# Terminal gettys are handled by /etc/init/tty.conf and /etc/init/serial.conf, +# with configuration in /etc/sysconfig/init. +# +# For information on how to write upstart event handlers, or how +# upstart works, see init(5), init(8), and initctl(8). +# +# Default runlevel. 
The runlevels used are: +# 0 - halt (Do NOT set initdefault to this) +# 1 - Single user mode +# 2 - Multiuser, without NFS (The same as 3, if you do not have networking) +# 3 - Full multiuser mode +# 4 - unused +# 5 - X11 +# 6 - reboot (Do NOT set initdefault to this) +# +id:5:initdefault: +``` + +### 5) How To Check Your Current Runlevel In Linux Using /etc/systemd/system/default.target File? + +The default runlevel for a system is specified in the /etc/systemd/system/default.target file for systemd System. + +It doesn’t work on SysVinit system. + +``` +$ cat /etc/systemd/system/default.target +# This file is part of systemd. +# +# systemd is free software; you can redistribute it and/or modify it +# under the terms of the GNU Lesser General Public License as published by +# the Free Software Foundation; either version 2.1 of the License, or +# (at your option) any later version. + +[Unit] +Description=Graphical Interface +Documentation=man:systemd.special(7) +Requires=multi-user.target +Wants=display-manager.service +Conflicts=rescue.service rescue.target +After=multi-user.target rescue.service rescue.target display-manager.service +AllowIsolate=yes +``` +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/check-current-runlevel-in-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20190505 How To Navigate Directories Faster In Linux.md b/sources/tech/20190505 How To Navigate Directories Faster In Linux.md new file mode 100644 index 0000000000..e0979b3915 --- /dev/null +++ b/sources/tech/20190505 How To Navigate Directories Faster In Linux.md @@ -0,0 +1,350 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Navigate Directories Faster In Linux) +[#]: via: (https://www.ostechnix.com/navigate-directories-faster-linux/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +How To Navigate Directories Faster In Linux +====== + +![Navigate Directories Faster In Linux][1] + +Today we are going to learn some command line productivity hacks. As you already know, we use “cd” command to move between a stack of directories in Unix-like operating systems. In this guide I am going to teach you how to navigate directories faster without having to use “cd” command often. There could be many ways, but I only know the following five methods right now! I will keep updating this guide when I came across any methods or utilities to achieve this task in the days to come. + +### Five Different Methods To Navigate Directories Faster In Linux + +##### Method 1: Using “Pushd”, “Popd” And “Dirs” Commands + +This is the most frequent method that I use everyday to navigate between a stack of directories. The “Pushd”, “Popd”, and “Dirs” commands comes pre-installed in most Linux distributions, so don’t bother with installation. These trio commands are quite useful when you’re working in a deep directory structure and scripts. For more details, check our guide in the link given below. 
+ + * **[How To Use Pushd, Popd And Dirs Commands For Faster CLI Navigation][2]** + + + +##### Method 2: Using “bd” utility + +The “bd” utility also helps you to quickly go back to a specific parent directory without having to repeatedly typing “cd ../../.” on your Bash. + +Bd is also available in the [**Debian extra**][3] and [**Ubuntu universe**][4] repositories. So, you can install it using “apt-get” package manager in Debian, Ubuntu and other DEB based systems as shown below: + +``` +$ sudo apt-get update + +$ sudo apt-get install bd +``` + +For other distributions, you can install as shown below. + +``` +$ sudo wget --no-check-certificate -O /usr/local/bin/bd https://raw.github.com/vigneshwaranr/bd/master/bd + +$ sudo chmod +rx /usr/local/bin/bd + +$ echo 'alias bd=". bd -si"' >> ~/.bashrc + +$ source ~/.bashrc +``` + +To enable auto completion, run: + +``` +$ sudo wget -O /etc/bash_completion.d/bd https://raw.github.com/vigneshwaranr/bd/master/bash_completion.d/bd + +$ source /etc/bash_completion.d/bd +``` + +The Bd utility has now been installed. Let us see few examples to understand how to quickly move through stack of directories using this tool. + +Create some directories. + +``` +$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10 +``` + +The above command will create a hierarchy of directories. Let us check [**directory structure**][5] using command: + +``` +$ tree dir1/ +dir1/ +└── dir2 + └── dir3 + └── dir4 + └── dir5 + └── dir6 + └── dir7 + └── dir8 + └── dir9 + └── dir10 + +9 directories, 0 files +``` + +Alright, we have now 10 directories. Let us say you’re currently in 7th directory i.e dir7. + +``` +$ pwd +/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7 +``` + +You want to move to dir3. Normally you would type: + +``` +$ cd /home/sk/dir1/dir2/dir3 +``` + +Right? yes! But it not necessary though! To go back to dir3, just type: + +``` +$ bd dir3 +``` + +Now you will be in dir3. + +![][6] + +Navigate Directories Faster In Linux Using “bd” Utility + +Easy, isn’t it? It supports auto complete, so you can just type the partial name of a directory and hit the tab key to auto complete the full path. + +To check the contents of a specific parent directory, you don’t need to inside that particular directory. Instead, just type: + +``` +$ ls `bd dir1` +``` + +The above command will display the contents of dir1 from your current working directory. + +For more details, check out the following GitHub page. + + * [**bd GitHub repository**][7] + + + +##### Method 3: Using “Up” Shell script + +The “Up” is a shell script allows you to move quickly to your parent directory. It works well on many popular shells such as Bash, Fish, and Zsh etc. Installation is absolutely easy too! + +To install “Up” on **Bash** , run the following commands one bye: + +``` +$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh + +$ echo 'source ~/.config/up/up.sh' >> ~/.bashrc +``` + +The up script registers the “up” function and some completion functions via your “.bashrc” file. + +Update the changes using command: + +``` +$ source ~/.bashrc +``` + +On **zsh** : + +``` +$ curl --create-dirs -o ~/.config/up/up.sh https://raw.githubusercontent.com/shannonmoeller/up/master/up.sh + +$ echo 'source ~/.config/up/up.sh' >> ~/.zshrc +``` + +The up script registers the “up” function and some completion functions via your “.zshrc” file. 
+ +Update the changes using command: + +``` +$ source ~/.zshrc +``` + +On **fish** : + +``` +$ curl --create-dirs -o ~/.config/up/up.fish https://raw.githubusercontent.com/shannonmoeller/up/master/up.fish + +$ source ~/.config/up/up.fish +``` + +The up script registers the “up” function and some completion functions via “funcsave”. + +Now it is time to see some examples. + +Let us create some directories. + +``` +$ mkdir -p dir1/dir2/dir3/dir4/dir5/dir6/dir7/dir8/dir9/dir10 +``` + +Let us say you’re in 7th directory i.e dir7. + +``` +$ pwd +/home/sk/dir1/dir2/dir3/dir4/dir5/dir6/dir7 +``` + +You want to move to dir3. Using “cd” command, we can do this by typing the following command: + +``` +$ cd /home/sk/dir1/dir2/dir3 +``` + +But it is really easy to go back to dir3 using “up” script: + +``` +$ up dir3 +``` + +That’s it. Now you will be in dir3. To go one directory up, just type: + +``` +$ up 1 +``` + +To go back two directory type: + +``` +$ up 2 +``` + +It’s that simple. Did I type the full path? Nope. Also it supports tab completion. So just type the partial directory name and hit the tab to complete the full path. + +For more details, check out the GitHub page. + + * [**Up GitHub Repository**][8] + + + +Please be mindful that “bd” and “up” tools can only help you to go backward i.e to the parent directory of the current working directory. You can’t move forward. If you want to switch to dir10 from dir5, you can’t! Instead, you need to use “cd” command to switch to dir10. These two utilities are meant for quickly moving you to the parent directory! + +##### Method 4: Using “Shortcut” tool + +This is yet another handy method to switch between different directories quickly and easily. This is somewhat similar to [**alias**][9] command. In this method, we create shortcuts to frequently used directories and use the shortcut name to go to that respective directory without having to type the path. If you’re working in deep directory structure and stack of directories, this method will greatly save some time. You can learn how it works in the guide given below. + + * [**Create Shortcuts To The Frequently Used Directories In Your Shell**][10] + + + +##### Method 5: Using “CDPATH” Environment variable + +This method doesn’t require any installation. **CDPATH** is an environment variable. It is somewhat similar to **PATH** variable which contains many different paths concatenated using **‘:’** (colon). The main difference between PATH and CDPATH variables is the PATH variable is usable with all commands whereas CDPATH works only for **cd** command. + +I have the following directory structure. + +![][11] + +Directory structure + +As you see, there are four child directories under a parent directory named “ostechnix”. + +Now add this parent directory to CDPATH using command: + +``` +$ export CDPATH=~/ostechnix +``` + +You now can instantly cd to the sub-directories of the parent directory (i.e **~/ostechnix** in our case) from anywhere in the filesystem. + +For instance, currently I am in **/var/mail/** location. + +![][12] + +To cd into **~/ostechnix/Linux/** directory, we don’t have to use the full path of the directory as shown below: + +``` +$ cd ~/ostechnix/Linux +``` + +Instead, just mention the name of the sub-directory you want to switch to: + +``` +$ cd Linux +``` + +It will automatically cd to **~/ostechnix/Linux** directory instantly. + +![][13] + +As you can see in the above output, I didn’t use “cd ”. Instead, I just used “cd ” command. 
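Here is the same sequence condensed into a single console session for quick reference. The paths below simply mirror the example layout used in this article (a parent directory named ~/ostechnix under user "sk"), so substitute your own directories:

```
$ export CDPATH=~/ostechnix    # register the parent directory
$ cd /var/mail                 # start from anywhere in the filesystem
$ cd Linux                     # only the sub-directory name is needed
$ pwd
/home/sk/ostechnix/Linux
```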
+ +Please note that CDPATH will allow you to quickly navigate to only one child directory of the parent directory set in CDPATH variable. It doesn’t much help for navigating a stack of directories (directories inside sub-directories, of course). + +To find the values of CDPATH variable, run: + +``` +$ echo $CDPATH +``` + +Sample output would be: + +``` +/home/sk/ostechnix +``` + +**Set multiple values to CDPATH** + +Similar to PATH variable, we can also set multiple values (more than one directory) to CDPATH separated by colon (:). + +``` +$ export CDPATH=.:~/ostechnix:/etc:/var:/opt +``` + +**Make the changes persistent** + +As you already know, the above command (export) will only keep the values of CDPATH until next reboot. To permanently set the values of CDPATH, just add them to your **~/.bashrc** or **~/.bash_profile** files. + +``` +$ vi ~/.bash_profile +``` + +Add the values: + +``` +export CDPATH=.:~/ostechnix:/etc:/var:/opt +``` + +Hit **ESC** key and type **:wq** to save and exit. + +Apply the changes using command: + +``` +$ source ~/.bash_profile +``` + +**Clear CDPATH** + +To clear the values of CDPATH, use **export CDPATH=””**. Or, simply delete the entire line from **~/.bashrc** or **~/.bash_profile** files. + +In this article, you have learned the different ways to navigate directory stack faster and easier in Linux. As you can see, it’s not that difficult to browse a pile of directories faster. Now stop typing “cd ../../..” endlessly by using these tools. If you know any other worth trying tool or method to navigate directories faster, feel free to let us know in the comment section below. I will review and add them in this guide. + +And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned! + +Cheers! + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/navigate-directories-faster-linux/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2017/12/Navigate-Directories-Faster-In-Linux-720x340.png +[2]: https://www.ostechnix.com/use-pushd-popd-dirs-commands-faster-cli-navigation/ +[3]: https://tracker.debian.org/pkg/bd +[4]: https://launchpad.net/ubuntu/+source/bd +[5]: https://www.ostechnix.com/view-directory-tree-structure-linux/ +[6]: http://www.ostechnix.com/wp-content/uploads/2017/12/Navigate-Directories-Faster-1.png +[7]: https://github.com/vigneshwaranr/bd +[8]: https://github.com/shannonmoeller/up +[9]: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/ +[10]: https://www.ostechnix.com/create-shortcuts-frequently-used-directories-shell/ +[11]: http://www.ostechnix.com/wp-content/uploads/2018/12/tree-command-output.png +[12]: http://www.ostechnix.com/wp-content/uploads/2018/12/pwd-command.png +[13]: http://www.ostechnix.com/wp-content/uploads/2018/12/cdpath.png diff --git a/sources/tech/20190506 Use udica to build SELinux policy for containers.md b/sources/tech/20190506 Use udica to build SELinux policy for containers.md new file mode 100644 index 0000000000..4e31288a43 --- /dev/null +++ b/sources/tech/20190506 Use udica to build SELinux policy for containers.md @@ -0,0 +1,199 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: 
url: ( ) +[#]: subject: (Use udica to build SELinux policy for containers) +[#]: via: (https://fedoramagazine.org/use-udica-to-build-selinux-policy-for-containers/) +[#]: author: (Lukas Vrabec https://fedoramagazine.org/author/lvrabec/) + +Use udica to build SELinux policy for containers +====== + +![][1] + +While modern IT environments move towards Linux containers, the need to secure these environments is as relevant as ever. Containers are a process isolation technology. While containers can be a defense mechanism, they only excel when combined with SELinux. + +Fedora SELinux engineering built a new standalone tool, **udica** , to generate SELinux policy profiles for containers by automatically inspecting them. This article focuses on why _udica_ is needed in the container world, and how it makes SELinux and containers work better together. You’ll find examples of SELinux separation for containers that let you avoid turning protection off because the generic SELinux type _container_t_ is too tight. With _udica_ you can easily customize the policy with limited SELinux policy writing skills. + +### SELinux technology + +SELinux is a security technology that brings proactive security to Linux systems. It’s a labeling system that assigns a label to all _subjects_ (processes and users) and _objects_ (files, directories, sockets, etc.). These labels are then used in a security policy that controls access throughout the system. It’s important to mention that what’s not allowed in an SELinux security policy is denied by default. The policy rules are enforced by the kernel. This security technology has been in use on Fedora for several years. A real example of such a rule is: + +``` +allow httpd_t httpd_log_t: file { append create getattr ioctl lock open read setattr }; +``` + +The rule allows any process labeled as _httpd_t_ ****to create, append, read and lock files labeled as _httpd_log_t_. Using the _ps_ command, you can list all processes with their labels: + +``` +$ ps -efZ | grep httpd +system_u:system_r:httpd_t:s0 root 13911 1 0 Apr14 ? 00:05:14 /usr/sbin/httpd -DFOREGROUND +... +``` + +To see which objects are labeled as httpd_log_t, use _semanage_ : + +``` +# semanage fcontext -l | grep httpd_log_t +/var/log/httpd(/.)? all files system_u:object_r:httpd_log_t:s0 +/var/log/nginx(/.)? all files system_u:object_r:httpd_log_t:s0 +... +``` + +The SELinux security policy for Fedora is shipped in the _selinux-policy_ RPM package. + +### SELinux vs. containers + +In Fedora, the _container-selinux_ RPM package provides a generic SELinux policy for all containers started by engines like _podman_ or _docker_. Its main purposes are to protect the host system against a container process, and to separate containers from each other. For instance, containers confined by SELinux with the process type _container_t_ can only read/execute files in _/usr_ and write to _container_file_t_ ****files type on host file system. To prevent attacks by containers on each other, Multi-Category Security (MCS) is used. + +Using only one generic policy for containers is problematic, because of the huge variety of container usage. On one hand, the default container type ( _container_t_ ) is often too strict. 
For example: + + * [Fedora SilverBlue][2] needs containers to read/write a user’s home directory + * [Fluentd][3] project needs containers to be able to read logs in the _/var/log_ directory + + + +On the other hand, the default container type could be too loose for certain use cases: + + * It has no SELinux network controls — all container processes can bind to any network port + * It has no SELinux control on [Linux capabilities][4] — all container processes can use all capabilities + + + +There is one solution to handle both use cases: write a custom SELinux security policy for the container. This can be tricky, because SELinux expertise is required. For this purpose, the _udica_ tool was created. + +### Introducing udica + +Udica generates SELinux security profiles for containers. Its concept is based on the “block inheritance” feature inside the [common intermediate language][5] (CIL) supported by SELinux userspace. The tool creates a policy that combines: + + * Rules inherited from specified CIL blocks (templates), and + * Rules discovered by inspection of container JSON file, which contains mountpoints and ports definitions + + + +You can load the final policy immediately, or move it to another system to load into the kernel. Here’s an example, using a container that: + + * Mounts _/home_ as read only + * Mounts _/var/spool_ as read/write + * Exposes port _tcp/21_ + + + +The container starts with this command: + +``` +# podman run -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash +``` + +The default container type ( _container_t_ ) doesn’t allow any of these three actions. To prove it, you could use the _sesearch_ tool to query that the _allow_ rules are present on system: + +``` +# sesearch -A -s container_t -t home_root_t -c dir -p read +``` + +There’s no _allow_ rule present that lets a process labeled as _container_t_ access a directory labeled _home_root_t_ (like the _/home_ directory). The same situation occurs with _/var/spool_ , which is labeled _var_spool_t:_ + +``` +# sesearch -A -s container_t -t var_spool_t -c dir -p read +``` + +On the other hand, the default policy completely allows network access. + +``` +# sesearch -A -s container_t -t port_type -c tcp_socket +allow container_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg }; +allow sandbox_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg }; +``` + +### Securing the container + +It would be great to restrict this access and allow the container to bind just on TCP port _21_ or with the same label. Imagine you find an example container using _podman ps_ whose ID is _37a3635afb8f_ : + +``` +# podman ps -q +37a3635afb8f +``` + +You can now inspect the container and pass the inspection file to the _udica_ tool. The name for the new policy is _my_container_. + +``` +# podman inspect 37a3635afb8f > container.json +# udica -j container.json my_container +Policy my_container with container id 37a3635afb8f created! + +Please load these modules using: + # semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil} + +Restart the container with: "--security-opt label=type:my_container.process" parameter +``` + +That’s it! You just created a custom SELinux security policy for the example container. Now you can load this policy into the kernel and make it active. 
The _udica_ output above even tells you the command to use: + +``` +# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil} +``` + +Now you must restart the container to allow the container engine to use the new custom policy: + +``` +# podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash +``` + +The example container is now running in the newly created _my_container.process_ SELinux process type: + +``` +# ps -efZ | grep my_container.process +unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 root 2275 434 1 13:49 pts/1 00:00:00 podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash +system_u:system_r:my_container.process:s0:c270,c963 root 2317 2305 0 13:49 pts/0 00:00:00 bash +``` + +### Seeing the results + +The command _sesearch_ now shows _allow_ rules for accessing _/home_ and _/var/spool:_ + +``` +# sesearch -A -s my_container.process -t home_root_t -c dir -p read +allow my_container.process home_root_t:dir { getattr ioctl lock open read search }; +# sesearch -A -s my_container.process -t var_spool_t -c dir -p read +allow my_container.process var_spool_t:dir { add_name getattr ioctl lock open read remove_name search write } +``` + +The new custom SELinux policy also allows _my_container.process_ to bind only to TCP/UDP ports labeled the same as TCP port 21: + +``` +# semanage port -l | grep 21 | grep ftp + ftp_port_t tcp 21, 989, 990 +# sesearch -A -s my_container.process -c tcp_socket -p name_bind + allow my_container.process ftp_port_t:tcp_socket name_bind; +``` + +### Conclusion + +The _udica_ tool helps you create SELinux policies for containers based on an inspection file without any SELinux expertise required. Now you can increase the security of containerized environments. Sources are available on [GitHub][6], and an RPM package is available in Fedora repositories for Fedora 28 and later. 
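On Fedora, installation should be as simple as running the following, since the package is published under the upstream project name, udica:

```
# dnf install udica
```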
+ +* * * + +*Photo by _[_Samuel Zeller_][7]_ on *[ _Unsplash_.][8] + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/use-udica-to-build-selinux-policy-for-containers/ + +作者:[Lukas Vrabec][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/lvrabec/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/udica-816x345.jpg +[2]: https://silverblue.fedoraproject.org +[3]: https://www.fluentd.org +[4]: http://man7.org/linux/man-pages/man7/capabilities.7.html +[5]: https://en.wikipedia.org/wiki/Common_Intermediate_Language +[6]: https://github.com/containers/udica +[7]: https://unsplash.com/photos/KVG-XMOs6tw?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[8]: https://unsplash.com/search/photos/lockers?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText diff --git a/sources/tech/20190507 Prefer table driven tests.md b/sources/tech/20190507 Prefer table driven tests.md new file mode 100644 index 0000000000..0a41860207 --- /dev/null +++ b/sources/tech/20190507 Prefer table driven tests.md @@ -0,0 +1,521 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Prefer table driven tests) +[#]: via: (https://dave.cheney.net/2019/05/07/prefer-table-driven-tests) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +Prefer table driven tests +====== + +I’m a big fan of testing, specifically [unit testing][1] and TDD ([done correctly][2], of course). A practice that has grown around Go projects is the idea of a table driven test. This post explores the how and why of writing a table driven test. + +Let’s say we have a function that splits strings: + +``` +// Split slices s into all substrings separated by sep and +// returns a slice of the substrings between those separators. +func Split(s, sep string) []string { + var result []string + i := strings.Index(s, sep) + for i > -1 { + result = append(result, s[:i]) + s = s[i+len(sep):] + i = strings.Index(s, sep) + } + return append(result, s) +} +``` + +In Go, unit tests are just regular Go functions (with a few rules) so we write a unit test for this function starting with a file in the same directory, with the same package name, `strings`. + +``` +package split + +import ( + "reflect" + "testing" +) + +func TestSplit(t *testing.T) { + got := Split("a/b/c", "/") + want := []string{"a", "b", "c"} + if !reflect.DeepEqual(want, got) { + t.Fatalf("expected: %v, got: %v", want, got) + } +} +``` + +Tests are just regular Go functions with a few rules: + + 1. The name of the test function must start with `Test`. + 2. The test function must take one argument of type `*testing.T`. A `*testing.T` is a type injected by the testing package itself, to provide ways to print, skip, and fail the test. + + + +In our test we call `Split` with some inputs, then compare it to the result we expected. + +### Code coverage + +The next question is, what is the coverage of this package? Luckily the go tool has a built in branch coverage. 
We can invoke it like this: + +``` +% go test -coverprofile=c.out +PASS +coverage: 100.0% of statements +ok split 0.010s +``` + +Which tells us we have 100% branch coverage, which isn’t really surprising, there’s only one branch in this code. + +If we want to dig in to the coverage report the go tool has several options to print the coverage report. We can use `go tool cover -func` to break down the coverage per function: + +``` +% go tool cover -func=c.out +split/split.go:8: Split 100.0% +total: (statements) 100.0% +``` + +Which isn’t that exciting as we only have one function in this package, but I’m sure you’ll find more exciting packages to test. + +#### Spray some .bashrc on that + +This pair of commands is so useful for me I have a shell alias which runs the test coverage and the report in one command: + +``` +cover () { + local t=$(mktemp -t cover) + go test $COVERFLAGS -coverprofile=$t $@ \ + && go tool cover -func=$t \ + && unlink $t +} +``` + +### Going beyond 100% coverage + +So, we wrote one test case, got 100% coverage, but this isn’t really the end of the story. We have good branch coverage but we probably need to test some of the boundary conditions. For example, what happens if we try to split it on comma? + +``` +func TestSplitWrongSep(t *testing.T) { + got := Split("a/b/c", ",") + want := []string{"a/b/c"} + if !reflect.DeepEqual(want, got) { + t.Fatalf("expected: %v, got: %v", want, got) + } +} +``` + +Or, what happens if there are no separators in the source string? + +``` +func TestSplitNoSep(t *testing.T) { + got := Split("abc", "/") + want := []string{"abc"} + if !reflect.DeepEqual(want, got) { + t.Fatalf("expected: %v, got: %v", want, got) + } +} +``` + +We’re starting build a set of test cases that exercise boundary conditions. This is good. + +### Introducing table driven tests + +However the there is a lot of duplication in our tests. For each test case only the input, the expected output, and name of the test case change. Everything else is boilerplate. What we’d like to to set up all the inputs and expected outputs and feel them to a single test harness. This is a great time to introduce table driven testing. + +``` +func TestSplit(t *testing.T) { + type test struct { + input string + sep string + want []string + } + + tests := []test{ + {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}}, + {input: "a/b/c", sep: ",", want: []string{"a/b/c"}}, + {input: "abc", sep: "/", want: []string{"abc"}}, + } + + for _, tc := range tests { + got := Split(tc.input, tc.sep) + if !reflect.DeepEqual(tc.want, got) { + t.Fatalf("expected: %v, got: %v", tc.want, got) + } + } +} +``` + +We declare a structure to hold our test inputs and expected outputs. This is our table. The `tests` structure is usually a local declaration because we want to reuse this name for other tests in this package. + +In fact, we don’t even need to give the type a name, we can use an anonymous struct literal to reduce the boilerplate like this: + +``` +func TestSplit(t *testing.T) { + tests := []struct { + input string + sep string + want []string + }{ + {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}}, + {input: "a/b/c", sep: ",", want: []string{"a/b/c"}}, + {input: "abc", sep: "/", want: []string{"abc"}}, + } + + for _, tc := range tests { + got := Split(tc.input, tc.sep) + if !reflect.DeepEqual(tc.want, got) { + t.Fatalf("expected: %v, got: %v", tc.want, got) + } + } +} +``` + +Now, adding a new test is a straight forward matter; simply add another line the `tests` structure. 
For example, what will happen if our input string has a trailing separator? + +``` +{input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}}, +{input: "a/b/c", sep: ",", want: []string{"a/b/c"}}, +{input: "abc", sep: "/", want: []string{"abc"}}, +{input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}}, // trailing sep +``` + +But, when we run `go test`, we get + +``` +% go test +--- FAIL: TestSplit (0.00s) + split_test.go:24: expected: [a b c], got: [a b c ] +``` + +Putting aside the test failure, there are a few problems to talk about. + +The first is by rewriting each test from a function to a row in a table we’ve lost the name of the failing test. We added a comment in the test file to call out this case, but we don’t have access to that comment in the `go test` output. + +There are a few ways to resolve this. You’ll see a mix of styles in use in Go code bases because the table testing idiom is evolving as people continue to experiment with the form. + +### Enumerating test cases + +As tests are stored in a slice we can print out the index of the test case in the failure message: + +``` +func TestSplit(t *testing.T) { + tests := []struct { + input string + sep . string + want []string + }{ + {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}}, + {input: "a/b/c", sep: ",", want: []string{"a/b/c"}}, + {input: "abc", sep: "/", want: []string{"abc"}}, + {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}}, + } + + for i, tc := range tests { + got := Split(tc.input, tc.sep) + if !reflect.DeepEqual(tc.want, got) { + t.Fatalf("test %d: expected: %v, got: %v", i+1, tc.want, got) + } + } +} +``` + +Now when we run `go test` we get this + +``` +% go test +--- FAIL: TestSplit (0.00s) + split_test.go:24: test 4: expected: [a b c], got: [a b c ] +``` + +Which is a little better. Now we know that the fourth test is failing, although we have to do a little bit of fudging because slice indexing—​and range iteration—​is zero based. This requires consistency across your test cases; if some use zero base reporting and others use one based, it’s going to be confusing. And, if the list of test cases is long, it could be difficult to count braces to figure out exactly which fixture constitutes test case number four. + +### Give your test cases names + +Another common pattern is to include a name field in the test fixture. + +``` +func TestSplit(t *testing.T) { + tests := []struct { + name string + input string + sep string + want []string + }{ + {name: "simple", input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}}, + {name: "wrong sep", input: "a/b/c", sep: ",", want: []string{"a/b/c"}}, + {name: "no sep", input: "abc", sep: "/", want: []string{"abc"}}, + {name: "trailing sep", input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}}, + } + + for _, tc := range tests { + got := Split(tc.input, tc.sep) + if !reflect.DeepEqual(tc.want, got) { + t.Fatalf("%s: expected: %v, got: %v", tc.name, tc.want, got) + } + } +} +``` + +Now when the test fails we have a descriptive name for what the test was doing. We no longer have to try to figure it out from the output—​also, now have a string we can search on. 
+ +``` +% go test +--- FAIL: TestSplit (0.00s) + split_test.go:25: trailing sep: expected: [a b c], got: [a b c ] +``` + +We can dry this up even more using a map literal syntax: + +``` +func TestSplit(t *testing.T) { + tests := map[string]struct { + input string + sep string + want []string + }{ + "simple": {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}}, + "wrong sep": {input: "a/b/c", sep: ",", want: []string{"a/b/c"}}, + "no sep": {input: "abc", sep: "/", want: []string{"abc"}}, + "trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}}, + } + + for name, tc := range tests { + got := Split(tc.input, tc.sep) + if !reflect.DeepEqual(tc.want, got) { + t.Fatalf("%s: expected: %v, got: %v", name, tc.want, got) + } + } +} +``` + +Using a map literal syntax we define our test cases not as a slice of structs, but as map of test names to test fixtures. There’s also a side benefit of using a map that is going to potentially improve the utility of our tests. + +Map iteration order is _undefined_ 1 This means each time we run `go test`, our tests are going to be potentially run in a different order. + +This is super useful for spotting conditions where test pass when run in statement order, but not otherwise. If you find that happens you probably have some global state that is being mutated by one test with subsequent tests depending on that modification. + +### Introducing sub tests + +Before we fix the failing test there are a few other issues to address in our table driven test harness. + +The first is we’re calling `t.Fatalf` when one of the test cases fails. This means after the first failing test case we stop testing the other cases. Because test cases are run in an undefined order, if there is a test failure, it would be nice to know if it was the only failure or just the first. + +The testing package would do this for us if we go to the effort to write out each test case as its own function, but that’s quite verbose. The good news is since Go 1.7 a new feature was added that lets us do this easily for table driven tests. They’re called [sub tests][3]. + +``` +func TestSplit(t *testing.T) { + tests := map[string]struct { + input string + sep string + want []string + }{ + "simple": {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}}, + "wrong sep": {input: "a/b/c", sep: ",", want: []string{"a/b/c"}}, + "no sep": {input: "abc", sep: "/", want: []string{"abc"}}, + "trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}}, + } + + for name, tc := range tests { + t.Run(name, func(t *testing.T) { + got := Split(tc.input, tc.sep) + if !reflect.DeepEqual(tc.want, got) { + t.Fatalf("expected: %v, got: %v", tc.want, got) + } + }) + } +} +``` + +As each sub test now has a name we get that name automatically printed out in any test runs. + +``` +% go test +--- FAIL: TestSplit (0.00s) + --- FAIL: TestSplit/trailing_sep (0.00s) + split_test.go:25: expected: [a b c], got: [a b c ] +``` + +Each subtest is its own anonymous function, therefore we can use `t.Fatalf`, `t.Skipf`, and all the other `testing.T`helpers, while retaining the compactness of a table driven test. + +#### Individual sub test cases can be executed directly + +Because sub tests have a name, you can run a selection of sub tests by name using the `go test -run` flag. 
+ +``` +% go test -run=.*/trailing -v +=== RUN TestSplit +=== RUN TestSplit/trailing_sep +--- FAIL: TestSplit (0.00s) + --- FAIL: TestSplit/trailing_sep (0.00s) + split_test.go:25: expected: [a b c], got: [a b c ] +``` + +### Comparing what we got with what we wanted + +Now we’re ready to fix the test case. Let’s look at the error. + +``` +--- FAIL: TestSplit (0.00s) + --- FAIL: TestSplit/trailing_sep (0.00s) + split_test.go:25: expected: [a b c], got: [a b c ] +``` + +Can you spot the problem? Clearly the slices are different, that’s what `reflect.DeepEqual` is upset about. But spotting the actual difference isn’t easy, you have to spot that extra space after `c`. This might look simple in this simple example, but it is any thing but when you’re comparing two complicated deeply nested gRPC structures. + +We can improve the output if we switch to the `%#v` syntax to view the value as a Go(ish) declaration: + +``` +got := Split(tc.input, tc.sep) +if !reflect.DeepEqual(tc.want, got) { + t.Fatalf("expected: %#v, got: %#v", tc.want, got) +} +``` + +Now when we run our test it’s clear that the problem is there is an extra blank element in the slice. + +``` +% go test +--- FAIL: TestSplit (0.00s) + --- FAIL: TestSplit/trailing_sep (0.00s) + split_test.go:25: expected: []string{"a", "b", "c"}, got: []string{"a", "b", "c", ""} +``` + +But before we go to fix our test failure I want to talk a little bit more about choosing the right way to present test failures. Our `Split` function is simple, it takes a primitive string and returns a slice of strings, but what if it worked with structs, or worse, pointers to structs? + +Here is an example where `%#v` does not work as well: + +``` +func main() { + type T struct { + I int + } + x := []*T{{1}, {2}, {3}} + y := []*T{{1}, {2}, {4}} + fmt.Printf("%v %v\n", x, y) + fmt.Printf("%#v %#v\n", x, y) +} +``` + +The first `fmt.Printf`prints the unhelpful, but expected slice of addresses; `[0xc000096000 0xc000096008 0xc000096010] [0xc000096018 0xc000096020 0xc000096028]`. However our `%#v` version doesn’t fare any better, printing a slice of addresses cast to `*main.T`;`[]*main.T{(*main.T)(0xc000096000), (*main.T)(0xc000096008), (*main.T)(0xc000096010)} []*main.T{(*main.T)(0xc000096018), (*main.T)(0xc000096020), (*main.T)(0xc000096028)}` + +Because of the limitations in using any `fmt.Printf` verb, I want to introduce the [go-cmp][4] library from Google. + +The goal of the cmp library is it is specifically to compare two values. This is similar to `reflect.DeepEqual`, but it has more capabilities. Using the cmp pacakge you can, of course, write: + +``` +func main() { + type T struct { + I int + } + x := []*T{{1}, {2}, {3}} + y := []*T{{1}, {2}, {4}} + fmt.Println(cmp.Equal(x, y)) // false +} +``` + +But far more useful for us with our test function is the `cmp.Diff` function which will produce a textual description of what is different between the two values, recursively. + +``` +func main() { + type T struct { + I int + } + x := []*T{{1}, {2}, {3}} + y := []*T{{1}, {2}, {4}} + diff := cmp.Diff(x, y) + fmt.Printf(diff) +} +``` + +Which instead produces: + +``` +% go run +{[]*main.T}[2].I: + -: 3 + +: 4 +``` + +Telling us that at element 2 of the slice of `T`s the `I`field was expected to be 3, but was actually 4. 
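One practical note if you want to run these cmp-based snippets yourself: cmp is not part of the standard library, so fetch it first, for example with:

```
% go get github.com/google/go-cmp/cmp
```

and import it in your test file as `github.com/google/go-cmp/cmp`.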
+ +Putting this all together we have our table driven go-cmp test + +``` +func TestSplit(t *testing.T) { + tests := map[string]struct { + input string + sep string + want []string + }{ + "simple": {input: "a/b/c", sep: "/", want: []string{"a", "b", "c"}}, + "wrong sep": {input: "a/b/c", sep: ",", want: []string{"a/b/c"}}, + "no sep": {input: "abc", sep: "/", want: []string{"abc"}}, + "trailing sep": {input: "a/b/c/", sep: "/", want: []string{"a", "b", "c"}}, + } + + for name, tc := range tests { + t.Run(name, func(t *testing.T) { + got := Split(tc.input, tc.sep) + diff := cmp.Diff(tc.want, got) + if diff != "" { + t.Fatalf(diff) + } + }) + } +} +``` + +Running this we get + +``` +% go test +--- FAIL: TestSplit (0.00s) + --- FAIL: TestSplit/trailing_sep (0.00s) + split_test.go:27: {[]string}[?->3]: + -: + +: "" +FAIL +exit status 1 +FAIL split 0.006s +``` + +Using `cmp.Diff` our test harness isn’t just telling us that what we got and what we wanted were different. Our test is telling us that the strings are different lengths, the third index in the fixture shouldn’t exist, but the actual output we got an empty string, “”. From here fixing the test failure is straight forward. + + 1. Please don’t email me to argue that map iteration order is _random_. [It’s not][5]. + + + +#### Related posts: + + 1. [Writing table driven tests in Go][6] + 2. [Internets of Interest #7: Ian Cooper on Test Driven Development][7] + 3. [Automatically run your package’s tests with inotifywait][8] + 4. [How to write benchmarks in Go][9] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://dave.cheney.net/2019/04/03/absolute-unit-test +[2]: https://www.youtube.com/watch?v=EZ05e7EMOLM +[3]: https://blog.golang.org/subtests +[4]: https://github.com/google/go-cmp +[5]: https://golang.org/ref/spec#For_statements +[6]: https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go (Writing table driven tests in Go) +[7]: https://dave.cheney.net/2018/10/15/internets-of-interest-7-ian-cooper-on-test-driven-development (Internets of Interest #7: Ian Cooper on Test Driven Development) +[8]: https://dave.cheney.net/2016/06/21/automatically-run-your-packages-tests-with-inotifywait (Automatically run your package’s tests with inotifywait) +[9]: https://dave.cheney.net/2013/06/30/how-to-write-benchmarks-in-go (How to write benchmarks in Go) diff --git a/sources/tech/20190508 Innovations on the Linux desktop- A look at Fedora 30-s new features.md b/sources/tech/20190508 Innovations on the Linux desktop- A look at Fedora 30-s new features.md new file mode 100644 index 0000000000..083a8c9768 --- /dev/null +++ b/sources/tech/20190508 Innovations on the Linux desktop- A look at Fedora 30-s new features.md @@ -0,0 +1,140 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Innovations on the Linux desktop: A look at Fedora 30's new features) +[#]: via: (https://opensource.com/article/19/5/fedora-30-features) +[#]: author: (Anderson Silva https://opensource.com/users/ansilva/users/marcobravo/users/alanfdoss/users/ansilva) + +Innovations on the Linux 
desktop: A look at Fedora 30's new features +====== +Learn about some of the highlights in the latest version of Fedora +Linux. +![Fedora Linux distro on laptop][1] + +The latest version of Fedora Linux was released at the end of April. As a full-time Fedora user since its original release back in 2003 and an active contributor since 2007, I always find it satisfying to see new features and advancements in the community. + +If you want a TL;DR version of what's has changed in [Fedora 30][2], feel free to ignore this article and jump straight to Fedora's [ChangeSet][3] wiki page. Otherwise, keep on reading to learn about some of the highlights in the new version. + +### Upgrade vs. fresh install + +I upgraded my Lenovo ThinkPad T series from Fedora 29 to 30 using the [DNF system upgrade instructions][4], and so far it is working great! + +I also had the chance to do a fresh install on another ThinkPad, and it was a nice surprise to see a new boot screen on Fedora 30—it even picked up the Lenovo logo. I did not see this new and improved boot screen on the upgrade above; it was only on the fresh install. + +![Fedora 30 boot screen][5] + +### Desktop changes + +If you are a GNOME user, you'll be happy to know that Fedora 30 comes with the latest version, [GNOME 3.32][6]. It has an improved on-screen keyboard (handy for touch-screen laptops), brand new icons for core applications, and a new "Applications" panel under Settings that allows users to gain a bit more control on GNOME default handlers, access permissions, and notifications. Version 3.32 also improves Google Drive performance so that Google files and calendar appointments will be integrated with GNOME. + +![Applications panel in GNOME Settings][7] + +The new Applications panel in GNOME Settings + +Fedora 30 also introduces two new Desktop environments: Pantheon and Deepin. Pantheon is [ElementaryOS][8]'s default desktop environment and can be installed with a simple: + + +``` +`$ sudo dnf groupinstall "Pantheon Desktop"` +``` + +I haven't used Pantheon yet, but I do use [Deepin][9]. Installation is simple; just run: + + +``` +`$ sudo dnf install deepin-desktop` +``` + +then log out of GNOME and log back in, choosing "Deepin" by clicking on the gear icon on the login screen. + +![Deepin desktop on Fedora 30][10] + +Deepin desktop on Fedora 30 + +Deepin appears as a very polished, user-friendly desktop environment that allows you to control many aspects of your environment with a click of a button. So far, the only issue I've had is that it can take a few extra seconds to complete login and return control to your mouse pointer. Other than that, it is brilliant! It is the first desktop environment I've used that seems to do high dots per inch (HiDPI) properly—or at least close to correctly. + +### Command line + +Fedora 30 upgrades the Bourne Again Shell (aka Bash) to version 5.0.x. If you want to find out about every change since its last stable version (4.4), read this [description][11]. I do want to mention that three new environments have been introduced in Bash 5: + + +``` +$ echo $EPOCHSECONDS +1556636959 +$ echo $EPOCHREALTIME +1556636968.012369 +$ echo $BASH_ARGV0 +bash +``` + +Fedora 30 also updates the [Fish shell][12], a colorful shell with auto-suggestion, which can be very helpful for beginners. Fedora 30 comes with [Fish version 3][13], and you can even [try it out in a browser][14] without having to install it on your machine. 
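If you would rather try it locally, it is a quick install from the standard repositories, where the package is simply named fish:

```
$ sudo dnf install fish
```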
+ +(Note that Fish shell is not the same as guestfish for mounting virtual machine images, which comes with the libguestfs-tools package.) + +### Development + +Fedora 30 brings updates to the following languages: [C][15], [Boost (C++)][16], [Erlang][17], [Go][18], [Haskell][19], [Python][20], [Ruby][21], and [PHP][22]. + +Regarding these updates, the most important thing to know is that Python 2 is deprecated in Fedora 30. The community and Fedora leadership are requesting that all package maintainers that still depend on Python 2 port their packages to Python 3 as soon as possible, as the plan is to remove virtually all Python 2 packages in Fedora 31. + +### Containers + +If you would like to run Fedora as an immutable OS for a container, kiosk, or appliance-like environment, check out [Fedora Silverblue][23]. It brings you all of Fedora's technology managed by [rpm-ostree][24], which is a hybrid image/package system that allows automatic updates and easy rollbacks for developers. It is a great option for anyone who wants to learn more and play around with [Flatpak deployments][25]. + +Fedora Atomic is no longer available under Fedora 30, but you can still [download it][26]. If your jam is containers, don't despair: even though Fedora Atomic is gone, a brand new [Fedora CoreOS][27] is under development and should be going live soon! + +### What else is new? + +As of Fedora 30, **/usr/bin/gpg** points to [GnuPG][28] v2 by default, and [NFS][29] server configuration is now located at **/etc/nfs.conf** instead of **/etc/sysconfig/nfs**. + +There have also been a [few changes][30] for installation and boot time. + +Last but not least, check out [Fedora Spins][31] for a spin of Fedora that defaults to your favorite Window manager and [Fedora Labs][32] for functionally curated software bundles built on Fedora 30 (i.e. astronomy, security, and gaming). 
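As a closing practical note: if you prefer upgrading an existing system to doing a fresh install, the DNF system upgrade path mentioned at the top of this article boils down to roughly the commands below; treat them as a sketch and follow the linked instructions for the authoritative steps:

```
$ sudo dnf upgrade --refresh                        # bring the current release fully up to date
$ sudo dnf install dnf-plugin-system-upgrade        # install the upgrade plugin
$ sudo dnf system-upgrade download --releasever=30  # fetch the Fedora 30 package set
$ sudo dnf system-upgrade reboot                    # reboot into the upgrade process
```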
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/fedora-30-features + +作者:[Anderson Silva ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ansilva/users/marcobravo/users/alanfdoss/users/ansilva +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/fedora_on_laptop_lead.jpg?itok=XMc5wo_e (Fedora Linux distro on laptop) +[2]: https://getfedora.org/ +[3]: https://fedoraproject.org/wiki/Releases/30/ChangeSet +[4]: https://fedoraproject.org/wiki/DNF_system_upgrade#How_do_I_use_it.3F +[5]: https://opensource.com/sites/default/files/uploads/fedora30_fresh-boot.jpg (Fedora 30 boot screen) +[6]: https://help.gnome.org/misc/release-notes/3.32/ +[7]: https://opensource.com/sites/default/files/uploads/fedora10_gnome.png (Applications panel in GNOME Settings) +[8]: https://elementary.io/ +[9]: https://www.deepin.org/en/dde/ +[10]: https://opensource.com/sites/default/files/uploads/fedora10_deepin.png (Deepin desktop on Fedora 30) +[11]: https://git.savannah.gnu.org/cgit/bash.git/tree/NEWS +[12]: https://fishshell.com/ +[13]: https://fishshell.com/release_notes.html +[14]: https://rootnroll.com/d/fish-shell/ +[15]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_C/ +[16]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Boost/ +[17]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Erlang/ +[18]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Go/ +[19]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Haskell/ +[20]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Python/ +[21]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Ruby/ +[22]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/developers/Development_Web/ +[23]: https://silverblue.fedoraproject.org/ +[24]: https://rpm-ostree.readthedocs.io/en/latest/ +[25]: https://flatpak.org/setup/Fedora/ +[26]: https://getfedora.org/en/atomic/ +[27]: https://coreos.fedoraproject.org/ +[28]: https://gnupg.org/index.html +[29]: https://en.wikipedia.org/wiki/Network_File_System +[30]: https://docs.fedoraproject.org/en-US/fedora/f30/release-notes/sysadmin/Installation/ +[31]: https://spins.fedoraproject.org +[32]: https://labs.fedoraproject.org/ diff --git a/sources/tech/20190509 A day in the life of an open source performance engineering team.md b/sources/tech/20190509 A day in the life of an open source performance engineering team.md new file mode 100644 index 0000000000..373983e8bc --- /dev/null +++ b/sources/tech/20190509 A day in the life of an open source performance engineering team.md @@ -0,0 +1,138 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A day in the life of an open source performance engineering team) +[#]: via: (https://opensource.com/article/19/5/life-performance-engineer) +[#]: author: (Aakarsh Gopi https://opensource.com/users/aakarsh/users/portante/users/anaga/users/gameloid) + +A day in the life of an open source performance engineering team +====== +Collaborating with 
the community enables performance engineering to +address the confusion and complexity that come with working on a broad +spectrum of products. +![Team checklist and to dos][1] + +In today's world, open source software solutions are a collaborative effort of the community. Can a performance engineering team operate the same way, by collaborating with the community to address the confusion and complexity that come with working on a broad spectrum of products? + +To answer that question, we need to explore some basic questions: + + * What does a performance engineering team do? + * How does a performance engineering team fulfill its responsibilities? + * How are open source tools developed or leveraged for performance analysis? + + + +The term "performance engineering" has different meanings, which causes difficulty in figuring out a performance engineering team's responsibilities. Adding to the confusion, a team may be charged with working on a broad spectrum of products, ranging from an operating system like RHEL, whose performance can be significantly impacted by hardware components (CPU caches, network interface controllers, disk technologies, etc.), to something much higher up in the stack like Kubernetes, which comes with the added challenges of operating at scale without compromising on performance. + +Performance engineering has progressed a lot since the days of running manual A/B testing and single-system benchmarks. Now, these teams test cloud infrastructures and add machine learning classifiers as a component in the CI/CD pipeline for identifying performance regression in releases of products. + +### What does a performance engineering team do? + +A performance engineering team is generally responsible for the following (among other things): + + * Identifying potential performance issues + * Identifying any scale issues that could occur + * Developing tuning guides and/or tools that would enable the user to achieve the most out of a product + * Developing guides and/or working with customers to help with capacity planning + * Providing customers with performance expectations for different use cases of the product + + + +The mission of our specific team is to: + + * Establish performance and scale leadership of the Red Hat portfolio; the scope includes component level, system, and solution analysis + * Collaborate with engineering, product management, product marketing, and Customer Experience and Engagement (CEE), as well as hardware and software partners + * Deliver public-facing guidance, internal enablement, and continuous integration tests + + + +Our team fulfills our mission in the following ways: + + * We work with product teams to set performance goals and develop performance tests to run against those products deployed to see how they measure up to those goals. + * We also work to re-run performance tests to ensure there are no regressions in behaviors. + * We develop open source tooling to achieve our product performance goals, making them available to the communities where the products are derived to re-create what we do. + * We work to be transparent and open about how we do performance engineering; sharing these methods and approaches benefits communities, allowing them to reuse our work, and benefits us by leveraging the work they contribute with these tools. + + + +### How does a performance engineering team fulfill its responsibilities? 
+ +Meeting these responsibilities requires collaboration with other teams, such as product management, development, QA/QE, documentation, and consulting, and with the communities. + +_Collaboration_ allows a team to be successful by pulling together team members' diverse knowledge and experience. A performance engineering team builds tools to share their knowledge both within the team and with the community, furthering the value of collaboration. + +Our performance engineering team achieves success through: + + * **Collaboration:** _Intra_ -team collaboration is as important as _inter_ -team collaboration for our performance engineering team + * Most performance engineers tend to create a niche for themselves in one or more sub-disciplines of performance engineering via tooling, performance analysis, systems knowledge, systems configuration, and such. Our team is composed of engineers with knowledge of setting up/configuring systems across the product stack, those who know how a configuration option would affect the system's performance, and so on. Our team's success is heavily reliant on effective collaboration between performance engineers on the team. + * Our team works closely with other organizations at various levels within Red Hat and the communities where our products are derived. + * **Knowledge:** To understand the performance implications of configuration and/or system changes, deep knowledge of the product alone is not sufficient. + * Our team has the knowledge to cover performance across all levels of the stack: + * Hardware setup and configuration + * Networking and scale considerations + * Operating system setup and configuration (Linux kernel, userspace stack) + * Storage sub-systems (Ceph) + * Cloud infrastructure (OpenStack, RHV) + * Cloud deployments (OpenShift/Kubernetes) + * Product architectures + * Software technologies (databases like Postgres; software-defined networking and storage) + * Product interactions with the underlying hardware + * Tooling to monitor and accomplish repeatable benchmarking + * **Tooling:** The differentiator for our performance engineering team is the data collected through its tools to help tackle performance analysis complexity in the environments where our products are deployed. + + + +### How are open source tools developed or leveraged for performance analysis? + +Tooling is no longer a luxury but a need for today's performance engineering teams. With today's product solutions being so complex (and increasing in complexity as more solutions are composed to solve ever-larger problems), we need tools to help us run performance test suites in a repeatable manner, collect data about those runs, and help us distill that data so it becomes understandable and usable. + +Yet, no performance engineering team is judged on how performance analysis is done, but rather on the results achieved from this analysis. + +This tension can be resolved by collaboratively developing tools. A performance engineering team can't spend all its time developing tools, since that would prevent it from effectively collecting data. By developing its tools in a collaborative manner, a team can leverage work from the community to make further progress while still generating the result by which they will be measured. + +Tooling is the backbone of our performance engineering team, and we strive to use the tools already available upstream. 
When no tools are available in the community that fit our needs, we've built tools that help us achieve our goals and made them available to the community. Open sourcing our tools has helped us immensely because we receive contributions from our competitors and partners, allowing us to solve problems collectively through collaboration. + +![Performance Engineering Tools][2] + +Following are some of the tools our team has contributed to and rely upon for our work: + + * **[Perf-c2c][3]:** Is your performance impacted by false sharing in CPU caches? The perf-c2c tool can help you tackle this problem by helping you inspect the cache lines where false sharing is detected and understand the readers/writers accessing those cache lines along with the offsets where those accesses occurred. You can read more about this tool on [Joe Mario's blog][4]. + * **[Pbench][5]:** Do you repeat the same steps when collecting data about performance, but fail to do it consistently? Or do you find it difficult to compare results with others because you're collecting different configuration data? Pbench is a tool that attempts to standardize the way data is collected for performance so comparisons and historical reviews are much easier. Pbench is at the heart of our tooling efforts, as most of the other tools consume it in some form. Pbench is a Swiss Army Knife, as it allows the user to run benchmarks such as fio, uperf, or custom, user-defined tests while gathering metrics through tools such as sar, iostat, and pidstat, standardizing the methods of collecting configuration data about the environment. Pbench provides a dashboard UI to help review and analyze the data collected. + * **[Browbeat][6]:** Do you want to monitor a complex environment such as an OpenStack cluster while running tests? Browbeat is the solution, and its power lies in its ability to collect comprehensive data, ranging from logs to system metrics, about an OpenStack cluster while it orchestrates workloads. Browbeat can also monitor the OpenStack cluster while users run test/workloads of their choice either manually or through their own automation. + * **[Ripsaw][7]:** Do you want to compare the performance of different Kubernetes distros against the same platform? Do you want to compare the performance of the same Kubernetes distros deployed on different platforms? Ripsaw is a relatively new tool created to run workloads through Kubernetes native calls using the Ansible operator framework to provide solutions to the above questions. Ripsaw's unique selling point is that it can run against any kind of Kubernetes distribution, thus it would run the same against a Kubernetes cluster, on Minikube, or on an OpenShift cluster deployed on OpenStack or bare metal. + * **[ClusterLoader][8]:** Ever wondered how an OpenShift component would perform under different cluster states? If you are looking for an answer that can stress the cluster, ClusterLoader will help. The team has generalized the tool so it can be used with any Kubernetes distro. It is currently hosted in the [perf-tests repository][9]. + + + +### Bottom line + +Given the scale at which products are evolving rapidly, performance engineering teams need to build tooling to help them keep up with products' evolution and diversification. + +Open source-based software solutions are a collaborative effort of the community. 
Our performance engineering team operates in the same way, collaborating with the community to address the confusion and complexity that comes with working on a broad spectrum of products. By developing our tools in a collaborative manner and using tools from the community, we are leveraging the community's work to make progress, while still generating the results we are measured on. + +_Collaboration_ is our key to accomplish our goals and ensure the success of our team. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/life-performance-engineer + +作者:[Aakarsh Gopi ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/aakarsh/users/portante/users/anaga/users/gameloid +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/todo_checklist_team_metrics_report.png?itok=oB5uQbzf (Team checklist and to dos) +[2]: https://opensource.com/sites/default/files/uploads/performanceengineeringtools.png (Performance Engineering Tools) +[3]: http://man7.org/linux/man-pages/man1/perf-c2c.1.html +[4]: https://joemario.github.io/blog/2016/09/01/c2c-blog/ +[5]: https://github.com/distributed-system-analysis/pbench +[6]: https://github.com/openstack/browbeat +[7]: https://github.com/cloud-bulldozer/ripsaw +[8]: https://github.com/openshift/origin/tree/master/test/extended/cluster +[9]: https://github.com/kubernetes/perf-tests/tree/master/clusterloader diff --git a/sources/tech/20190509 Query freely available exchange rate data with ExchangeRate-API.md b/sources/tech/20190509 Query freely available exchange rate data with ExchangeRate-API.md new file mode 100644 index 0000000000..3db9708c81 --- /dev/null +++ b/sources/tech/20190509 Query freely available exchange rate data with ExchangeRate-API.md @@ -0,0 +1,162 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Query freely available exchange rate data with ExchangeRate-API) +[#]: via: (https://opensource.com/article/19/5/exchange-rate-data) +[#]: author: (Chris Hermansen https://opensource.com/users/clhermansen) + +Query freely available exchange rate data with ExchangeRate-API +====== +In this interview, ExchangeRate-API's founder explains why exchange rate +data should be freely accessible to developers who want to build useful +stuff. +![scientific calculator][1] + +Last year, [I wrote about][2] using the Groovy programming language to access foreign exchange rate data from an API to simplify my expense records. I showed how two exchange rate sites, [fixer.io][3] and apilayer.net (now [apilayer.com][4]), could provide the data I needed, allowing me to convert between Indian rupees (INR) and Canadian dollars (CAD) using the former, and Chilean pesos (CLP) and Canadian dollars using the latter. + +Recently, David over at [ExchangeRate-API.com][5] reached out to me to say, "the free API you mentioned (Fixer) has been bought by CurrencyLayer and had its no-signup/unlimited access deprecated." He also told me, "I run a free API called ExchangeRate-API.com that has the same JSON format as the original Fixer, doesn't require any signup, and allows unlimited requests." + +After exchanging a few emails, we decided to turn our conversation into an interview. 
Below the interview, you can find scripts and usage instructions. (The interview has been edited slightly for clarity.) + +### About ExchangeRate-API + +_**Chris:** How is ExchangeRate-API different from other online exchange-rate services? What motivates you to provide this service?_ + +**David:** When I started ExchangeRate-API with a friend in 2010, we built and released it for free because we really needed this service for another project and couldn't find one despite extensive googling. There are now around 20 such APIs offering quite a few different approaches. Over the years, I've tried a number of different approaches, but offering quality data for free has always proven the most popular. I'm also motivated by the thought that this data should be freely accessible to developers who want to build useful stuff even if they don't have a budget. + +Thus, the main difference with our currency conversion API is that it's unlimited and requires no signup. This also makes starting to use it really fast—you literally just copy the endpoint URL and you're good to go. + +There are one or two other free and unlimited APIs, but these typically just serve the daily reference rates provided by the European Central Bank. ExchangeRate-API collects the public reference rates from a number of central banks and then blends them to reduce the risk of outlying values. It also does acceptance checking to ensure the rates aren't wildly wrong (for instance an inverted data capture recording US dollars to CLP instead of CLP to USD) and weights different sources based on their historical accuracy. This makes the service quite reliable. I'm currently working on a transparency project to compare and show the accuracy of this public reference rate blend against a proprietary data source so potential users can make more informed decisions on what type of currency data service is right for them. + +_**Chris:** I'm delighted that you've included Canadian dollars and Indian rupees, as that is one problem I need to solve. I'm sad to see that you don't have Chilean pesos (another problem I need to solve). Can you tell us how you select the list of currencies? Do you anticipate adding other currencies to your list?_ + +**David:** Since my main aim for this service is to offer stable and reliable exchange rate data, I only include currencies when there is more than one data source for that currency code. For instance, after you mentioned that you're looking for CLP data, I added the daily reference rates published by the Central Bank of Chile to our system. If I can find another source that includes CLP, it would be included in our list of supported currencies, but until then, unfortunately not. The goal is to support as many currencies as possible. + +One thing to note is that, for some currencies, the service has the minimum two sources, but a few currency pairs (for instance USD/EUR) are included in almost every set of public reference rates. The transparent accuracy project I mentioned will hopefully make this difference clear so that users can understand why our USD/EUR rate might be more accurate than less common pairs like CLP/INR and also the degree of variance in accuracy between the pairs. It will take some work to make showing this information quick and easy to understand. + +### The API's architecture + +_**Chris:** Can you tell us a bit about your API's architecture? Do you use open source components to deliver your service?_ + +**David:** I exclusively use open source software to run ExchangeRate-API. 
I'm definitely an open source enthusiast and am always getting friends to switch to open source, explaining licenses, and donating when I can to the projects I use most. I also try to email maintainers of projects I use to say thanks, but I don't do this enough. + +The stack is currently Ubuntu LTS, MariaDB, Nginx, PHP 7, and Memcached. I also use Bootstrap and Picnic open source CSS frameworks. I use Let's Encrypt for HTTPS certificates via the Electronic Frontier Foundation's open source ACME client, [Certbot][6]. The service makes extensive use of classic tools like UFW/iptables, cURL, OpenSSH, and Git. + +My approach is typically to keep everything as simple as possible while using the tried-and-tested open source building blocks. For a project that aims to _always_ be available for users to convert currencies, this feels like the best route to reliability. I love reading about innovative new projects that could be useful for a project like this (for example, CockroachDB), but I wouldn't use them until they are considered really bulletproof. Obviously, things like [Heartbleed][7] show that there are risks with "boring" projects too—but I think these are easier to manage than the potential for unknown risks with newer, cutting-edge projects. + +In terms of the infrastructure setup, I've steadily built and improved the system over the last nine years, and it now comprises roughly three tiers. The main cluster runs on Amazon Web Services (AWS) and consists of Ubuntu EC2 servers and a high-availability MariaDB relational database service (RDS) instance. The EC2 instances are spread across multiple AWS Availability Zones and fronted by the managed AWS Elastic Load Balancing (ELB) service. Between the RDS database instance with automated cross-zone failover and the ELB-fronted EC2 instances spread across availability zones, this setup is exceptionally available. It is, however, only in one locale. So I've set up a second tier of virtual private server (VPS) instances in different geographic locations to reduce latency and distribute the load away from the more expensive AWS infrastructure. These are currently with Linode, but I have also used DigitalOcean and Vultr recently. + +Finally, this is all protected behind Cloudflare. With a free service, it's inevitable that some users will choose to abuse the system, and Cloudflare is an amazing product that's vital to ExchangeRate-API. Our servers can be protected and our users get low-latency, in-region caches. Cloudflare is set up with both the load balancing and traffic steering products to reduce latency and instantly shift traffic from unhealthy parts of the infrastructure to available origins. + +With this very redundant approach, there hasn't been downtime as a result of infrastructure problems or user load for around three years. The few periods of degraded service experienced in this time are all due to issues with code, deployment strategy, or config mistakes. The setup currently handles hundreds of millions of requests per month with low load levels and manageable costs, so there's plenty of room for growth. + +The actual application code is PHP with heavy use of Memcached. Memcached is an amazing open source project started by Brad Fitzpatrick in 2003. It's not particularly glamorous, but it is an incredibly reliable and performant distributed in-memory key value store. + +### Engaging with the open source community + +_**Chris:** There is an impressive amount of open source in your configuration. 
How do you engage with the broader community of users in these projects?_ + +**David:** I really struggle with the best way to be a good open source citizen while running a side project SaaS. I've considered building an open source library of some sort and releasing it, but I haven't thought of something that hasn't already been done and that I would be able to make the time commitment to reliably maintain. I'd only start a project like this if I could be confident I'd have the time to ensure users who choose the project wouldn't suddenly find themselves depending on abandonware. I've also looked into contributing to the projects that ExchangeRate-API depends on, but since I only use the biggest, most established options, I lack the expertise to make a meaningful contribution to such serious projects. + +I'm currently working on a new "Pro" plan for the service and I'm going to set a percentage of this income to donate to my open source dependencies. This still feels like a bandage though—answering this question makes me realize I need to put more time into starting an open source project that calls ExchangeRate-API home! + +### Looking ahead + +_**Chris:** We can only query the latest exchange rate, but it appears that you may be offering historical rates sometime later this year. Can you tell us more about the technical challenges with serving up historical data?_ + +**David:** There is a dataset of historical rates blended using our same algorithm from multiple central bank reference sets. However, I stopped new signups for it due to some issues with the data quality. The dataset reaches back to 1990, and there were a few earlier periods that need better data validation. As such, I'm building a better system for checking and comparing the data as it's ingested as well as adding an additional data source. The plan is to have a clean and more comprehensively verified-as-accurate dataset available later this year. + +In terms of the technical side of things, historical data is slightly more complex than live data. Compared to the live dataset (which is just a few bytes) the historical data is millions of database rows. This data was originally served from the database infrastructure with a long time-to-live (TTL) intermediary-caching layer. This was largely performant but struggled in situations where users wanted to dump the entire dataset as fast as the network could handle it. If the cache was sufficiently warm, this was fine, but if reboots, new server deployments, etc. had taken place recently, these big request sets would "miss" enough on the cache that the database would have problematic load spikes. + +Obviously, the goal is an infrastructure that can handle even aggressive use cases with normal performance, so the new historical rates dataset will be accompanied by a preemptive in-memory cache rather than a request-driven one. Thankfully, RAM is cheap these days, and putting a couple hundred megabytes of data entirely into RAM is a plausible approach even for a small project like ExchangeRate-API.com. + +_**Chris:** It sounds like you've been through quite a few iterations of this service to get to where it is today! Where do you see it going in the next few years?_ + +**David:** I'd aim for it to have reached coverage of every world currency so that anyone looking for this sort of software can easily and programmatically get the exchange rates they need for free. + +I'd also definitely like to have an affordable Pro plan that really resonates with users. 
Getting this right would mean better infrastructure and lower latency for free users as well. + +Finally, I'd like to have some sort of useful open source library under the ExchangeRate-API banner. Starting a small project that finds an enthusiastic community would be really rewarding. It's great to run something that's free-as-in-beer, but it would be even better if part of it was free-as-in-speech, as well. + +### How to use the service + +It's easy enough to test out the service using **wget** , as follows: + + +``` +clh@marseille:~$ wget -O - +\--2019-04-26 13:48:23-- +Resolving api.exchangerate-api.com (api.exchangerate-api.com)... 2606:4700:20::681a:c80, 2606:4700:20::681a:d80, 104.26.13.128, ... +Connecting to api.exchangerate-api.com (api.exchangerate-api.com)|2606:4700:20::681a:c80|:443... connected. +HTTP request sent, awaiting response... 200 OK +Length: unspecified [application/json] +Saving to: ‘STDOUT’ + +\- [<=> +] 0 --.-KB/s {"base":"INR","date":"2019-04-26","time_last_updated":1556236800,"rates":{"INR":1,"AUD":0.020343,"BRL":0.056786,"CAD":0.019248,"CHF":0.014554,"CNY":0.096099,"CZK":0.329222,"DKK":0.095497,"EUR":0.012789,"GBP":0.011052,"HKD":0.111898,"HUF":4.118615,"IDR":199.61769,"ILS":0.051749,"ISK":1.741659,"JPY":1.595527,"KRW":16.553091,"MXN":0.272383,"MYR":0.058964,"NOK":0.123365,"NZD":0.02161,"PEN":0.047497,"PHP":0.744974,"PLN":0.054927,"RON":0.060923,"RUB":0.921808,"SAR":0.053562,"SEK":0.135226,"SGD":0.019442,"THB":0.457501,"TRY":0- [ <=> ] 579 --.-KB/s in 0s + +2019-04-26 13:48:23 (15.5 MB/s) - written to stdout [579] + +clh@marseille:~$ +``` + +The result is returned as a JSON payload, giving conversion rates from Indian rupees (the currency I requested in the URL) to all the currencies handled by ExchangeRate-API. + +The Groovy shell can access the API: + + +``` +clh@marseille:~$ groovysh +Groovy Shell (2.5.3, JVM: 1.8.0_212) +Type ':help' or ':h' for help. +\---------------------------------------------------------------------------------------------------------------------------------- +groovy:000> import groovy.json.JsonSlurper +===> groovy.json.JsonSlurper +groovy:000> result = (new JsonSlurper()).parse( +groovy:001> new InputStreamReader((new URL(')) +groovy:002> ) +===> [base:INR, date:2019-04-26, time_last_updated:1556236800, rates:[INR:1, AUD:0.020343, BRL:0.056786, CAD:0.019248, CHF:0.014554, CNY:0.096099, CZK:0.329222, DKK:0.095497, EUR:0.012789, GBP:0.011052, HKD:0.111898, HUF:4.118615, IDR:199.61769, ILS:0.051749, ISK:1.741659, JPY:1.595527, KRW:16.553091, MXN:0.272383, MYR:0.058964, NOK:0.123365, NZD:0.02161, PEN:0.047497, PHP:0.744974, PLN:0.054927, RON:0.060923, RUB:0.921808, SAR:0.053562, SEK:0.135226, SGD:0.019442, THB:0.457501, TRY:0.084362, TWD:0.441385, USD:0.014255, ZAR:0.206271]] +groovy:000> +``` + +The same JSON payload is returned as a result of the Groovy JSON slurper operating on the URL. Of course, since this is Groovy, the JSON is converted into a Map, so you can do stuff like this: + + +``` +groovy:000> println result.base +INR +===> null +groovy:000> println result.date +2019-04-26 +===> null +groovy:000> println result.rates.CAD +0.019248 +===> null +``` + +And that's it! + +Do you use ExchangeRate-API or a similar service? Share how you use exchange rate data in the comments. 
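And if Groovy isn't your tool of choice, the same JSON payload parses just as easily with Python's standard library. The snippet below is only a sketch: the `API_URL` value is a deliberate placeholder (the real endpoint isn't reproduced here), so substitute the URL from the service's documentation before running it. The fields it reads (`base`, `date`, `rates`) are the ones shown in the responses above.

```
# Hypothetical sketch -- API_URL is a placeholder, not the service's real endpoint
import json
from urllib.request import urlopen

API_URL = "https://api.example.com/latest/INR"  # placeholder URL

with urlopen(API_URL) as response:
    data = json.load(response)          # parse the JSON payload into a dict

print(data["base"])                     # e.g. "INR"
print(data["date"])                     # e.g. "2019-04-26"
print(data["rates"]["CAD"])             # e.g. 0.019248
```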
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/exchange-rate-data + +作者:[Chris Hermansen ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calculator_money_currency_financial_tool.jpg?itok=2QMa1y8c (scientific calculator) +[2]: https://opensource.com/article/18/3/groovy-calculate-foreign-exchange +[3]: https://fixer.io/ +[4]: https://apilayer.com/ +[5]: https://www.exchangerate-api.com/ +[6]: https://certbot.eff.org/ +[7]: https://en.wikipedia.org/wiki/Heartbleed diff --git a/sources/tech/20190509 Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots.md b/sources/tech/20190509 Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots.md new file mode 100644 index 0000000000..0b2d9e55c6 --- /dev/null +++ b/sources/tech/20190509 Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots.md @@ -0,0 +1,256 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots) +[#]: via: (https://www.linuxtechi.com/rhel-8-installation-steps-screenshots/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +Red Hat Enterprise Linux (RHEL) 8 Installation Steps with Screenshots +====== + +Red Hat has released its most awaited OS **RHEL 8** on 7th May 2019. RHEL 8 is based on **Fedora 28** distribution and Linux **kernel version 4.18**. One of the important key features in RHEL 8 is that it has introduced “ **Application Streams** ” which allows developers tools, frameworks and languages to be updated frequently without impacting the core resources of base OS. In other words, application streams will help to segregate the users space packages from OS Kernel Space. + +Apart from this, there are many new features which are noticed in RHEL 8 like: + + * XFS File system supports copy-on-write of file extents + * Introduction of Stratis filesystem, Buildah, Podman, and Skopeo + * Yum utility is based on DNF + * Chrony replace NTP. + * Cockpit is the default Web Console tool for Server management. + * OpenSSL 1.1.1 & TLS 1.3 support + * PHP 7.2 + * iptables replaced by nftables + + + +### Minimum System Requirements for RHEL 8: + + * 4 GB RAM + * 20 GB unallocated disk space + * 64-bit x86 or ARM System + + + +**Note:** RHEL 8 supports the following architectures: + + * AMD or Intel x86 64-bit + * 64-bit ARM + * IBM Power Systems, Little Endian & IBM Z + + + +In this article we will demonstrate how to install RHEL 8 step by step with screenshots. + +### RHEL 8 Installation Steps with Screenshots + +### Step:1) Download RHEL 8.0 ISO file + +Download RHEL 8 iso file from its official web site, + + + +I am assuming you have the active subscription if not then register yourself for evaluation and then download ISO file + +### Step:2) Create Installation bootable media (USB or DVD) + +Once you have downloaded RHEL 8 ISO file, make it bootable by burning it either into a USB drive or DVD. 
Reboot the target system where you want to install RHEL 8 and then go to its bios settings and set the boot medium as USB or DVD, + +### Step:3) Choose “Install Red Hat Enterprise Linux 8.0” option + +When the system boots up with installation media (USB or DVD), we will get the following screen, choose “ **Install Red Hat Enterprise Linux 8.0** ” and hit enter, + + + +### Step:4) Choose your preferred language for RHEL 8 installation + +In this step, you need to choose a language that you want to use for RHEL 8 installation, so make a selection that suits to your setup. + + + +Click on Continue + +### Step:5) Preparing RHEL 8 Installation + +In this step we will decide the installation destination for RHEL 8, apart from this we can configure the followings: + + * Time Zone + * Kdump (enabled/disabled) + * Software Selection (Packages) + * Networking and Hostname + * Security Policies & System purpose + + + + + +By default, installer will automatically pick time zone and will enable the **kdump** , if wish to change the time zone then click on “ **Time & Date**” option and set your preferred time zone and then click on Done. + + + +To configure IP address and Hostname click on “ **Network & Hostname**” option from installation summary screen, + +If your system is connected to any switch or modem, then it will try to get IP from DHCP server otherwise we can configure IP manually. + +Mention the hostname that you want to set and then click on “ **Apply”**. Once you are done with IP address and hostname configuration click on “Done” + + + +To define the installation disk and partition scheme for RHEL 8, click on “ **Installation Destination** ” option, + + + +Click on Done + +As we can see I have around 60 GB free disk space on sda drive, I will be creating following customize lvm based partitions on this disk, + + * /boot = 2GB (xfs file system) + * / = 20 GB (xfs file system) + * /var = 10 GB (xfs file system) + * /home = 15 GB (xfs file system) + * /tmp = 5 GB (xfs file system) + * Swap = 2 GB (xfs file system) + + + +**Note:** If you don’t want to create manual partitions then select “ **Automatic** ” option from Storage Configuration Tab + + + +Let’s create our first partition as /boot of size 2 GB, Select LVM as mount point partitioning scheme and then click on + “plus” symbol, + + + +Click on “ **Add mount point** ” + + + +To create next partition as / of size 20 GB, click on + symbol and specify the details as shown below, + + + +Click on “Add mount point” + + + +As we can see installer has created the Volume group as “ **rhel_rhel8** “, if you want to change this name then click on Modify option and specify the desired name and then click on Save + + + +Now onward all partitions will be part of Volume Group “ **VolGrp** ” + +Similarly create next three partitions **/home** , **/var** and **/tmp** of size 15GB, 10 GB and 5 GB respectively + +**/home partition:** + + + +**/var partition:** + + + +**/tmp partition:** + + + +Now finally create last partition as swap of size of 2 GB, + + + +Click on “Add mount point” + +Once you are done with partition creations, click on Done on Next screen, example is shown below + + + +In the next window, choose “ **Accept Changes** ” + + + +### Step:6) Select Software Packages and Choose Security Policy and System purpose + +After accepting the changes in above step, we will be redirected to installation summary window. 
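Before moving on, one aside: if you ever need to reproduce the manual partition layout from Step 5 without clicking through the GUI, an equivalent could be described in a kickstart file along these lines. This fragment is purely illustrative (sizes are in MiB and mirror the values used above, with the volume group name taken from the installer):

```
# Illustrative kickstart fragment only -- mirrors the layout chosen in Step 5 (sizes in MiB)
zerombr
clearpart --all --initlabel
part /boot --fstype=xfs --size=2048
part pv.01 --size=1 --grow
volgroup rhel_rhel8 pv.01
logvol /     --vgname=rhel_rhel8 --name=root --fstype=xfs --size=20480
logvol /var  --vgname=rhel_rhel8 --name=var  --fstype=xfs --size=10240
logvol /home --vgname=rhel_rhel8 --name=home --fstype=xfs --size=15360
logvol /tmp  --vgname=rhel_rhel8 --name=tmp  --fstype=xfs --size=5120
logvol swap  --vgname=rhel_rhel8 --name=swap --size=2048
```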
+ +By default, installer will select “ **Server with GUI”** as software packages and if you want to change it then click on “ **Software Selection** ” option and choose your preferred “ **Basic Environment** ” + + + +Click on Done + +If you want to set the security policies during the installation, the choose the required profile from Security polices option else you can leave as it is. + +From “ **System Purpose** ” option specify the Role, Red Hat Service Level Agreement and Usage. Though You can leave this option as it is. + + + +Click on Done to proceed further. + +### Step:7) Choose “Begin Installation” option to start installation + +From the Installation summary window click on “Begin Installation” option to start the installation, + + + +As we can see below RHEL 8 Installation is started & is in progress + + + +Set the root password, + + + +Specify the local user details like its Full Name, user name and its password, + + + +Once the installation is completed, installer will prompt us to reboot the system, + + + +Click on “Reboot” to restart your system and don’t forget to change boot medium from bios settings so that system boots up with hard disk. + +### Step:8) Initial Setup after installation + +When the system is rebooted first time after the successful installation then we will get below window there we need to accept the license (EULA), + + + +Click on Done, + +In the next Screen click on “ **Finish Configuration** ” + + + +### Step:8) Login Screen of RHEL 8 Server after Installation + +As we have installed RHEL 8 Server with GUI, so we will get below login screen, use the same user name and password that we created during the installation + + + +After the login we will get couple of Welcome Screen and follow the screen instructions and then finally we will get the following screen, + + + +Click on “Start Using Red Hat Enterprise Linux” + + + +This confirms that we have successfully installed RHEL 8, that’s all from this article. We will be writing articles on RHEL 8 in the coming future till then please do share your feedback and comments on this article. 
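If you want a quick sanity check from a terminal after logging in, a couple of standard commands (shown purely as an illustration) will report the installed release and kernel:

```
$ cat /etc/redhat-release    # prints the Red Hat Enterprise Linux release string
$ uname -r                   # prints the running kernel version (4.18-based on RHEL 8)
```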
+ +Read Also :** [How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File][1]** + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/rhel-8-installation-steps-screenshots/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/ diff --git a/sources/tech/20190510 5 open source hardware products for the great outdoors.md b/sources/tech/20190510 5 open source hardware products for the great outdoors.md new file mode 100644 index 0000000000..357fbfdcb8 --- /dev/null +++ b/sources/tech/20190510 5 open source hardware products for the great outdoors.md @@ -0,0 +1,96 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 open source hardware products for the great outdoors) +[#]: via: (https://opensource.com/article/19/5/hardware-outdoors) +[#]: author: (Michael Weinberg https://opensource.com/users/mweinberg/users/aliciagibb) + +5 open source hardware products for the great outdoors +====== +Here's some equipment you can buy or make yourself for hitting the great +outdoors, no generators or batteries required. +![Tree clouds][1] + +When people think about open source hardware, they often think about the general category of electronics that can be soldered and needs batteries. While there are [many][2] fantastic open source pieces of electronics, the overall category of open source hardware is much broader. This month we take a look at open source hardware that you can take out into the world, no power outlet or batteries required. + +### Hummingbird Hammocks + +[Hummingbird Hammocks][3] offers an entire line of open source camping gear. You can set up an open source [rain tarp][4]... + +![An open source rain tarp from Hummingbird Hammocks][5] + +...with open source [friction adjusters][6] + +![Open source friction adjusters from Hummingbird Hammocks.][7] + +Open source friction adjusters from Hummingbird Hammocks. + +...over your open source [hammock][8] + +![An open source hammock from Hummingbird Hammocks.][9] + +An open source hammock from Hummingbird Hammocks. + +...hung with open source [tree straps][10]. + +![Open source tree straps from Hummingbird Hammocks.][11] + +Open source tree straps from Hummingbird Hammocks. + +The design for each of these items is fully documented, so you can even use them as a starting point for making your own outdoor gear (if you are willing to trust friction adjusters you design yourself). + +### Openfoil + +[Openfoil][12] is an open source hydrofoil for kitesurfing. Hydrofoils are attached to the bottom of kiteboards and allow the rider to rise out of the water. This aspect of the design makes riding in low wind situations and with smaller kites easier. It can also reduce the amount of noise the board makes on the water, making for a quieter experience. Because this hydrofoil is open source you can customize it to your needs and adventure tolerance. + +![Openfoil, an open source hydrofoil for kitesurfing.][13] + +Openfoil, an open source hydrofoil for kitesurfing. 
+ +### Solar water heater + +If you prefer your outdoors-ing a bit closer to home, you could build this open source [solar water heater][14] created by the [Anisa Foundation][15]. This appliance focuses energy from the sun to heat water that can then be used in your home, letting you reduce your carbon footprint without having to give up long, hot showers. Of course, you can also [monitor its temperature ][16]over the internet if you need to feel connected. + +![An open source solar water heater from the Anisa Foundation.][17] + +An open source solar water heater from the Anisa Foundation. + +## Wrapping up + +As these projects make clear, open source hardware is more than just electronics. You can take it with you to the woods, to the beach, or just to your roof. Next month we’ll talk about open source instruments and musical gear. Until then, [certify][18] your open source hardware! + +Learn how and why you may want to start using the Open Source Hardware Certification logo on an... + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/hardware-outdoors + +作者:[Michael Weinberg][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mweinberg/users/aliciagibb +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/life_tree_clouds.png?itok=b_ftihhP (Tree clouds) +[2]: https://certification.oshwa.org/list.html +[3]: https://hummingbirdhammocks.com/ +[4]: https://certification.oshwa.org/us000102.html +[5]: https://opensource.com/sites/default/files/uploads/01-hummingbird_hammocks_rain_tarp.png (An open source rain tarp from Hummingbird Hammocks) +[6]: https://certification.oshwa.org/us000105.html +[7]: https://opensource.com/sites/default/files/uploads/02-hummingbird_hammocks_friction_adjusters_400_px.png (Open source friction adjusters from Hummingbird Hammocks.) +[8]: https://certification.oshwa.org/us000095.html +[9]: https://opensource.com/sites/default/files/uploads/03-hummingbird_hammocks_hammock_400_px.png (An open source hammock from Hummingbird Hammocks.) +[10]: https://certification.oshwa.org/us000098.html +[11]: https://opensource.com/sites/default/files/uploads/04-hummingbird_hammocks_tree_straps_400_px_0.png (Open source tree straps from Hummingbird Hammocks.) +[12]: https://certification.oshwa.org/fr000004.html +[13]: https://opensource.com/sites/default/files/uploads/05-openfoil-original_size.png (Openfoil, an open source hydrofoil for kitesurfing.) +[14]: https://certification.oshwa.org/mx000002.html +[15]: http://www.fundacionanisa.org/index.php?lang=en +[16]: https://thingspeak.com/channels/72565 +[17]: https://opensource.com/sites/default/files/uploads/06-solar_water_heater_500_px.png (An open source solar water heater from the Anisa Foundation.) 
+[18]: https://certification.oshwa.org/ diff --git a/sources/tech/20190510 Check storage performance with dd.md b/sources/tech/20190510 Check storage performance with dd.md new file mode 100644 index 0000000000..8cdea81f69 --- /dev/null +++ b/sources/tech/20190510 Check storage performance with dd.md @@ -0,0 +1,432 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Check storage performance with dd) +[#]: via: (https://fedoramagazine.org/check-storage-performance-with-dd/) +[#]: author: (Gregory Bartholomew https://fedoramagazine.org/author/glb/) + +Check storage performance with dd +====== + +![][1] + +This article includes some example commands to show you how to get a _rough_ estimate of hard drive and RAID array performance using the _dd_ command. Accurate measurements would have to take into account things like [write amplification][2] and [system call overhead][3], which this guide does not. For a tool that might give more accurate results, you might want to consider using [hdparm][4]. + +To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. **WARNING** : The _write_ tests will destroy any data on the block devices against which they are run. **Do not run them against any device that contains data you want to keep!** + +### Four tests + +Below are four example dd commands that can be used to test the performance of a block device: + + 1. One process reading from $MY_DISK: + +``` +# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache +``` + + 2. One process writing to $MY_DISK: + +``` +# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct +``` + + 3. Two processes reading concurrently from $MY_DISK: + +``` +# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &) +``` + + 4. Two processes writing concurrently to $MY_DISK: + +``` +# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &) +``` + + + + +– The _iflag=nocache_ and _oflag=direct_ parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from [RAM][5] rather than the hard drive. + +– The values for the _bs_ and _count_ parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware. + +– The _null_ and _zero_ devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests. + +– The _skip=200_ parameter on the second dd command in the concurrent read and write tests is to ensure that the two copies of dd are operating on different areas of the hard drive. + +### 16 examples + +Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices: + + 1. MY_DISK=/dev/sda2 (used in examples 1-X) + 2. MY_DISK=/dev/sdb2 (used in examples 2-X) + 3. MY_DISK=/dev/md/stripped (used in examples 3-X) + 4. 
MY_DISK=/dev/md/mirrored (used in examples 4-X) + + + +A video demonstration of the these tests being run on a PC is provided at the end of this guide. + +Begin by putting your computer into _rescue_ mode to reduce the chances that disk I/O from background services might randomly affect your test results. **WARNING** : This will shutdown all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your _root_ password to get into rescue mode. The _passwd_ command, when run as the root user, will prompt you to (re)set your root account password. + +``` +$ sudo -i +# passwd +# setenforce 0 +# systemctl rescue +``` + +You might also want to temporarily disable logging to disk: + +``` +# sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf +# systemctl restart systemd-journald.service +``` + +If you have a swap device, it can be temporarily disabled and used to perform the following tests: + +``` +# swapoff -a +# MY_DEVS=$(mdadm --detail /dev/md/swap | grep active | grep -o "/dev/sd.*") +# mdadm --stop /dev/md/swap +# mdadm --zero-superblock $MY_DEVS +``` + +#### Example 1-1 (reading from sda) + +``` +# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1) +# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s +``` + +#### Example 1-2 (writing to sda) + +``` +# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1) +# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s +``` + +#### Example 1-3 (reading concurrently from sda) + +``` +# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1) +# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &) +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 3.42875 s, 61.2 MB/s +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s +``` + +#### Example 1-4 (writing concurrently to sda) + +``` +# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1) +# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &) +``` + +``` +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 3.2435 s, 64.7 MB/s +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s +``` + +#### Example 2-1 (reading from sdb) + +``` +# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2) +# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s +``` + +#### Example 2-2 (writing to sdb) + +``` +# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2) +# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s +``` + +#### Example 2-3 (reading concurrently from sdb) + +``` +# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2) +# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &) +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 3.52808 s, 59.4 MB/s +200+0 records in +200+0 records out +209715200 bytes (210 
MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s +``` + +#### Example 2-4 (writing concurrently to sdb) + +``` +# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2) +# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &) +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 3.7841 s, 55.4 MB/s +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s +``` + +#### Example 3-1 (reading from RAID0) + +``` +# mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS +# MY_DISK=/dev/md/stripped +# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s +``` + +#### Example 3-2 (writing to RAID0) + +``` +# MY_DISK=/dev/md/stripped +# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 0.823648 s, 255 MB/s +``` + +#### Example 3-3 (reading concurrently from RAID0) + +``` +# MY_DISK=/dev/md/stripped +# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &) +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.31025 s, 160 MB/s +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s +``` + +#### Example 3-4 (writing concurrently to RAID0) + +``` +# MY_DISK=/dev/md/stripped +# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &) +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.65026 s, 127 MB/s +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s +``` + +#### Example 4-1 (reading from RAID1) + +``` +# mdadm --stop /dev/md/stripped +# mdadm --create /dev/md/mirrored --homehost=any --metadata=1.0 --level=1 --raid-devices=2 --assume-clean $MY_DEVS +# MY_DISK=/dev/md/mirrored +# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s +``` + +#### Example 4-2 (writing to RAID1) + +``` +# MY_DISK=/dev/md/mirrored +# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s +``` + +#### Example 4-3 (reading concurrently from RAID1) + +``` +# MY_DISK=/dev/md/mirrored +# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &) +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.67171 s, 125 MB/s +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s +``` + +#### Example 4-4 (writing concurrently to RAID1) + +``` +# MY_DISK=/dev/md/mirrored +# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &) +``` + +``` +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 4.09666 s, 51.2 MB/s +200+0 records in +200+0 records out +209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s 
+``` + +#### Restore your swap device and journald configuration + +``` +# mdadm --stop /dev/md/stripped /dev/md/mirrored +# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 $MY_DEVS +# mkswap /dev/md/swap +# swapon -a +# mv /etc/systemd/journald.conf.bak /etc/systemd/journald.conf +# systemctl restart systemd-journald.service +# reboot +``` + +### Interpreting the results + +Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s. + +Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets at about half the drive’s bandwidth (60 MB/s). + +The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data stripping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive is able to perform on its own. The trade-off is that you are twice as likely to lose everything because each drive only contains half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal) but it would be thrice as likely to suffer a [catastrophic failure][6]. + +The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk except for the case where multiple processes are concurrently reading (example 4-3). In the case of multiple processes reading, the performance of the RAID1 array is similar to that of the RAID0 array. This means that you will see a performance benefit with RAID1, but only when processes are reading concurrently. For example, if a process tries to access a large number of files in the background while you are trying to use a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost [if a drive fails][7]. + +### Video demo + +Testing storage throughput using dd + +### Troubleshooting + +If the above tests aren’t performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in Self-Monitoring, Analysis and Reporting Technology ([SMART][8]). If your drive supports it, the _smartctl_ command can be used to query your hard drive for its internal statistics: + +``` +# smartctl --health /dev/sda +# smartctl --log=error /dev/sda +# smartctl -x /dev/sda +``` + +Another way that you might be able to tune your PC for better performance is by changing your [I/O scheduler][9]. Linux systems support several I/O schedulers and the current default for Fedora systems is the [multiqueue][10] variant of the [deadline][11] scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions. + +To view which I/O scheduler your drives are using, issue the following command: + +``` +$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done +``` + +You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block//queue/scheduler file: + +``` +# echo bfq > /sys/block/sda/queue/scheduler +``` + +You can make your changes permanent by creating a [udev rule][12] for your drive. 
The following example shows how to create a udev rule that will set all [rotational drives][13] to use the [BFQ][14] I/O scheduler: + +``` +# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules +ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq" +END +``` + +Here is another example that sets all [solid-state drives][15] to use the [NOOP][16] I/O scheduler: + +``` +# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules +ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none" +END +``` + +Changing your I/O scheduler won’t affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering. + +* * * + +_Photo by _[ _James Donovan_][17]_ on _[_Unsplash_][18]_._ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/check-storage-performance-with-dd/ + +作者:[Gregory Bartholomew][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/glb/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/04/dd-performance-816x345.jpg +[2]: https://www.ibm.com/developerworks/community/blogs/ibmnas/entry/misalignment_can_be_twice_the_cost1?lang=en +[3]: https://eklitzke.org/efficient-file-copying-on-linux +[4]: https://en.wikipedia.org/wiki/Hdparm +[5]: https://en.wikipedia.org/wiki/Random-access_memory +[6]: https://blog.elcomsoft.com/2019/01/why-ssds-die-a-sudden-death-and-how-to-deal-with-it/ +[7]: https://www.computerworld.com/article/2484998/ssds-do-die--as-linus-torvalds-just-discovered.html +[8]: https://en.wikipedia.org/wiki/S.M.A.R.T. +[9]: https://en.wikipedia.org/wiki/I/O_scheduling +[10]: https://lwn.net/Articles/552904/ +[11]: https://en.wikipedia.org/wiki/Deadline_scheduler +[12]: http://www.reactivated.net/writing_udev_rules.html +[13]: https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics +[14]: http://algo.ing.unimo.it/people/paolo/disk_sched/ +[15]: https://en.wikipedia.org/wiki/Solid-state_drive +[16]: https://en.wikipedia.org/wiki/Noop_scheduler +[17]: https://unsplash.com/photos/0ZBRKEG_5no?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[18]: https://unsplash.com/search/photos/speed?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText diff --git a/sources/tech/20190510 Keeping an open source project alive when people leave.md b/sources/tech/20190510 Keeping an open source project alive when people leave.md new file mode 100644 index 0000000000..31a0ab7412 --- /dev/null +++ b/sources/tech/20190510 Keeping an open source project alive when people leave.md @@ -0,0 +1,180 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Keeping an open source project alive when people leave) +[#]: via: (https://opensource.com/article/19/5/code-missing-community-management) +[#]: author: (Rodrigo Duarte Sousa https://opensource.com/users/rodrigods/users/tellesnobrega) + +Keeping an open source project alive when people leave +====== +How to find out what's done, what's not, and what's missing. 
+![][1] + +Suppose you wake up one day and decide to finally use that recipe video you keep watching all over social media. You get the ingredients, organize the necessary utensils, and start to follow the recipe steps. You cut this, cut that, then start heating the oven at the same time you put butter and onions in a pan. Then, your phone reminds you: you have a dinner appointment with your boss, and you're already late! You turn off everything and leave immediately, stopping the cooking process somewhere near the end. + +Some minutes later, your roommate arrives at home ready to have dinner and finds only the _ongoing work_ in the kitchen. They have the following options: + + 1. Clean up the mess and start cooking something from scratch. + 2. Order dinner and don’t bother to cook and/or fix the mess you left. + 3. Start cooking “around” the mess you left, which will probably take more time since most of the utensils are dirty and there isn’t much space left in the kitchen. + + + +If you left the printed version of the recipe somewhere, your roommate also has a fourth option. They could finish what you started! The problem is that they have no idea what's missing. It is not like you crossed out each completed step. Their best bet is either to call you or to examine all of your _changes_ to infer what is missing. + +In this example, the kitchen is like a software project, the utensils are the code, and the recipe is a new feature being implemented. Leaving something behind is not usually doable in a company's private project since you're accountable for your work and—in a scenario where you need to leave—it's almost certain that there is someone tracking/following the project, so they avoid having a "single point of failure." With open source projects, though, this continuity rarely happens. So how can we in the open source community deal with legacy, unfinished code, or code that is completed but no one dares touch it? + +### Knowledge legacy in open source projects + +We have always felt that open source is one of the best ways for an inexperienced software engineer to improve her skills. For many, open source projects offer their first hands-on experience with particular tools. [Version control systems][2], [unit][3] and [integration][4] tests, [continuous delivery][5], [code reviews][6], [features planning][7], [bug reporting/fixing][8], and more. + +In addition to learning opportunities, we can also view open source projects as a career opportunity—many senior engineers in the community get paid to be there, and you can add your contributions to your resume. That’s pretty cool. There's nothing like learning while improving your resume and getting potential employers' attention so you can pay your rent. + +Is this whole situation an infinite loop where everyone wins? The answer is obviously no. This post focuses on one of the main issues that arise in any project: the [bus/truck factor][9]. In the open source context, specifically, when people experience major changes such as a new job or other more personal factors, they tend to leave the community. We will first describe the problems that can arise from people leaving their _recipes_ unfinished by using [OpenStack][10] as an example. Then, we'll try to discuss some ideas to try to mitigate the issues. + +### Common problems + +In the past few years, we've seen a lot of changes in the [OpenStack][11] community, where some projects lost some portion of their active contributors team. 
These losses led to incomplete work and even finished modules without clear maintainers. Below are other examples of what happens when people suddenly leave. While this article uses OpenStack terms, such as “specs,” these issues easily apply to software development in general: + + * **Broken documentation:** A new API or setting either wasn't documented, or it was documented but not implemented. + * **Hard to resolve knowledge deficits:** For example, a new requirement and/or feature requires part of the code to be refactored but no one has the necessary expertise. + * **Incomplete features:** What are the missing tasks required for each feature? Which tasks were completed? + * **Debugging drama:** If the person who wrote the code isn't there, meaning that it takes a lot of engineering hours just to decrypt—so to speak—the code path that needs to be fixed. + + + +To illustrate, we will use the [Project Tree Deletion][12] feature. Project Tree Deletion is a tiny feature that one of us proposed more than three years ago and couldn’t complete. Basically, the main goal was to enable an OpenStack user/operator to erase a whole branch of projects without having to manually disable/delete every single of them starting from the leaves. Very straightforward, right? The PTD spec has been merged and has the following _work items_ : + + * Update API spec documentation. + * Add new rules to the file **policy.json**. + * Add new endpoints to mirror the new features. + * Implement the new deletion/disabling behavior for the project’s hierarchy. + + + +What about the sequence of steps (roadmap) to get these work items done? How do we know where to start and when what to tackle next? Are there any logical dependencies between the work items? How do we know where to start, and with what? + +Also, how do we know which work has been completed (if any)? One of the things that we do is look in the [blueprint][13] and/or the new [bug tracker][14], for example: + + * Recursive deletion and project disabling: (merged) + * API changes for Reseller: (merged) + * Add parent_id to GET /projects: (merged) + * Manager support for project cascade update: (merged) + * API support for cascade update: (abandoned) + * Manager support for project delete cascade: (merged) + * API support for project cascade delete: (abandoned) + * Add backend support for deleting a projects list: (merged) + * Test list project hierarchy is correct for a large tree: (merged) + * Fix cascade operations documentation: (merged) + * Revert “Fix cascade operations documentation”: (merged) + * Remove the APIs from the doc that aren't supported yet: (merged) + + + +Here we can see a lot of merged patches, but also that some were abandoned, and that some include the words Revert and Remove in their titles. Now we have strong evidence that this work is not completed, but at least some work was started to clean it up and avoid exposing something incomplete in the service API. Let’s dig a little bit deeper and look at the [_current_ delete project code][15]. + +There, we can see an added **cascade** argument (“cascade” resembles deleting related things together, so this argument must be somehow related to the proposed feature), and that it has a special block to treat the cases for the possible values of **cascade** : + + +``` +`def _delete_project(self, project, initiator=None, cascade=False):`[/code] [code] + +if cascade: +# Getting reversed project's subtrees list, i.e. from the leaves +# to the root, so we do not break parent_id FK. 
+subtree_list = self.list_projects_in_subtree(project_id) +subtree_list.reverse() +if not self._check_whole_subtree_is_disabled( +project_id, subtree_list=subtree_list): +raise exception.ForbiddenNotSecurity( +_('Cannot delete project %(project_id)s since its subtree ' +'contains enabled projects.') +% {'project_id': project_id}) + +project_list = subtree_list + [project] +projects_ids = [x['id'] for x in project_list] + +ret = self.driver.delete_projects_from_ids(projects_ids) +for prj in project_list: +self._post_delete_cleanup_project(prj['id'], prj, initiator) +else: +ret = self.driver.delete_project(project_id) +self._post_delete_cleanup_project(project_id, project, initiator) +``` + +What about the callers of this function? Do they use **cascade** at all? If we search for it, we only find occurrences in the backend tests: + + +``` +$ git grep "delete_project" | grep "cascade" | grep -v "def" +keystone/tests/unit/resource/test_backends.py: PROVIDERS.resource_api.delete_project(root_project['id'], cascade=True) +keystone/tests/unit/resource/test_backends.py: PROVIDERS.resource_api.delete_project(p1['id'], cascade=True) +``` + +We can also confirm this finding by looking at the [delete projects API implementation][16]. + +So it seems that we have a problem here, something simple that I started was left behind a very long time ago. How could the community or I have prevented this from happening? + +From the example above, one of the most apparent problems is the lack of a clear roadmap and list of completed tasks somewhere. To follow the actual implementation status, we had to dig into the blueprint/bug comments and the code. + +Based on this issue, we can sketch an idea: for each new feature, we need a roadmap stored somewhere to reflect the implementation status. Once the roadmap is defined within a spec, we can track each step as a [Launchpad][17] entry, for example, and have a better view of the progress status of that spec. + +Of course, these steps won’t prevent unfinished projects and they add a little bit of process, but following them can give a better view of what's missing so someone else from the community could finish or even revert what's there. + +### That’s not all + +What about other aspects of the project besides feature completion? We shouldn’t expect that every person on the core team is an expert in every single project module. This issue highlights another very important aspect of any open source community: mentoring. + +New people come to the community all the time and many have an incentive to continuing coming back as we discussed earlier. However, are our current community members willing to mentor them? How many times have you participated as a mentor in a program such as [Outreachy ][18]or [Google Summer of Code][19], or taken time to answer questions in the project’s chat? + +We also know that people eventually move on to other open source communities, so we have the chance of not leaving what we learned behind. We can always transmit that knowledge directly to those who are currently interested and actively asking questions, or indirectly, by writing documentation, blog posts, giving talks, and so forth. + +In order to have a healthy open source community, knowledge can’t be dominated by few people. We need to make an effort to have as many people capable of moving the project forward as possible. Also, a key aspect of mentoring is not only related to coding, but also to leadership skills. 
Preparing people to take roles like Project Team Lead, joining the Technical Committee, and so on is crucial if we intend to see the community grow even when we're not around anymore. + +Needless to say, mentoring is also an important skill for climbing the engineering ladder in most companies. Consider that another motivation. + +### To conclude + +Open source should not be treated as only the means to an end. Collaboration is a crucial part of these projects, and alongside mentoring, should always be treated as a first citizen in any open source community. And, of course, we will fix the unfinished spec used as this article's example. + +If you are part of an open source community, it is your responsibility to be focusing on sharing your knowledge while you are still around. Chances are that no one is going to tell you to do so, it should be part of the routine of any open source collaborator. + +What are other ways of sharing knowledge? What are your thoughts and ideas about the issue? + +_This original article was posted on[rodrigods][20]._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/code-missing-community-management + +作者:[Rodrigo Duarte Sousa][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rodrigods/users/tellesnobrega +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_question_B.png?itok=f88cyt00 +[2]: https://en.wikipedia.org/wiki/Version_control +[3]: https://en.wikipedia.org/wiki/Unit_testing +[4]: https://en.wikipedia.org/wiki/Integration_testing +[5]: https://en.wikipedia.org/wiki/Continuous_delivery +[6]: https://en.wikipedia.org/wiki/Code_review +[7]: https://www.agilealliance.org/glossary/sprint-planning/ +[8]: https://www.softwaretestinghelp.com/how-to-write-good-bug-report/ +[9]: https://en.wikipedia.org/wiki/Bus_factor +[10]: https://www.openstack.org/ +[11]: /resources/what-is-openstack +[12]: https://review.opendev.org/#/c/148730/35 +[13]: https://blueprints.launchpad.net/keystone/+spec/project-tree-deletion +[14]: https://bugs.launchpad.net/keystone/+bug/1816105 +[15]: https://github.com/openstack/keystone/blob/master/keystone/resource/core.py#L475-L519 +[16]: https://github.com/openstack/keystone/blob/master/keystone/api/projects.py#L202-L214 +[17]: https://launchpad.net +[18]: https://www.outreachy.org/ +[19]: https://summerofcode.withgoogle.com/ +[20]: https://blog.rodrigods.com/knowledge-legacy-the-issue-of-passing-the-baton/ diff --git a/sources/tech/20190510 Learn to change history with git rebase.md b/sources/tech/20190510 Learn to change history with git rebase.md new file mode 100644 index 0000000000..4d46fef81f --- /dev/null +++ b/sources/tech/20190510 Learn to change history with git rebase.md @@ -0,0 +1,597 @@ +Translating by Scoutydren.... + + +[#]: collector: (lujun9972) +[#]: translator: (Scoutydren) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Learn to change history with git rebase!) +[#]: via: (https://git-rebase.io/) +[#]: author: (git-rebase https://git-rebase.io/) + +Learn to change history with git rebase! +====== +One of Git 's core value-adds is the ability to edit history. 
Unlike version control systems that treat the history as a sacred record, in git we can change history to suit our needs. This gives us a lot of powerful tools and allows us to curate a good commit history in the same way we use refactoring to uphold good software design practices. These tools can be a little bit intimidating to the novice or even intermediate git user, but this guide will help to demystify the powerful git-rebase . + +``` +A word of caution : changing the history of public, shared, or stable branches is generally advised against. Editing the history of feature branches and personal forks is fine, and editing commits that you haven't pushed yet is always okay. Use git push -f to force push your changes to a personal fork or feature branch after editing your commits. +``` + +Despite the scary warning, it's worth mentioning that everything mentioned in this guide is a non-destructive operation. It's actually pretty difficult to permanently lose data in git. Fixing things when you make mistakes is covered at the end of this guide. + +### Setting up a sandbox + +We don't want to mess up any of your actual repositories, so throughout this guide we'll be working with a sandbox repo. Run these commands to get started: + +``` +git init /tmp/rebase-sandbox +cd /tmp/rebase-sandbox +git commit --allow-empty -m"Initial commit" +``` + +If you run into trouble, just run rm -rf /tmp/rebase-sandbox and run these steps again to start over. Each step of this guide can be run on a fresh sandbox, so it's not necessary to re-do every task. + + +### Amending your last commit + +Let's start with something simple: fixing your most recent commit. Let's add a file to our sandbox - and make a mistake: + +``` +echo "Hello wrold!" >greeting.txt + git add greeting.txt + git commit -m"Add greeting.txt" +``` + +Fixing this mistake is pretty easy. We can just edit the file and commit with `--amend`, like so: + +``` +echo "Hello world!" >greeting.txt + git commit -a --amend +``` + +Specifying `-a` automatically stages (i.e. `git add`'s) all files that git already knows about, and `--amend` will squash the changes into the most recent commit. Save and quit your editor (you have a chance to change the commit message now if you'd like). You can see the fixed commit by running `git show`: + +``` +commit f5f19fbf6d35b2db37dcac3a55289ff9602e4d00 (HEAD -> master) +Author: Drew DeVault +Date: Sun Apr 28 11:09:47 2019 -0400 + + Add greeting.txt + +diff --git a/greeting.txt b/greeting.txt +new file mode 100644 +index 0000000..cd08755 +--- /dev/null ++++ b/greeting.txt +@@ -0,0 +1 @@ ++Hello world! +``` + +### Fixing up older commits + +Amending only works for the most recent commit. What happens if you need to correct an older commit? Let's start by setting up our sandbox accordingly: + +``` +echo "Hello!" >greeting.txt +git add greeting.txt +git commit -m"Add greeting.txt" + +echo "Goodbye world!" >farewell.txt +git add farewell.txt +git commit -m"Add farewell.txt" +``` + +Looks like `greeting.txt` is missing "world". Let's write a commit normally which fixes that: + +``` +echo "Hello world!" >greeting.txt +git commit -a -m"fixup greeting.txt" +``` + +So now the files look correct, but our history could be better - let's use the new commit to "fixup" the last one. For this, we need to introduce a new tool: the interactive rebase. We're going to edit the last three commits this way, so we'll run `git rebase -i HEAD~3` (`-i` for interactive). 
This'll open your text editor with something like this: + +``` +pick 8d3fc77 Add greeting.txt +pick 2a73a77 Add farewell.txt +pick 0b9d0bb fixup greeting.txt + +# Rebase f5f19fb..0b9d0bb onto f5f19fb (3 commands) +# +# Commands: +# p, pick = use commit +# f, fixup = like "squash", but discard this commit's log message +``` + +This is the rebase plan, and by editing this file you can instruct git on how to edit history. I've trimmed the summary to just the details relevant to this part of the rebase guide, but feel free to skim the full summary in your text editor. + +When we save and close our editor, git is going to remove all of these commits from its history, then execute each line one at a time. By default, it's going to pick each commit, summoning it from the heap and adding it to the branch. If we don't edit this file at all, we'll end up right back where we started, picking every commit as-is. We're going to use one of my favorite features now: fixup. Edit the third line to change the operation from "pick" to "fixup" and move it to immediately after the commit we want to "fix up": + +``` +pick 8d3fc77 Add greeting.txt +fixup 0b9d0bb fixup greeting.txt +pick 2a73a77 Add farewell.txt +``` + +**Tip** : We can also abbreviate this with just "f" to speed things up next time. + +Save and quit your editor - git will run these commands. We can check the log to verify the result: + +``` +$ git log -2 --oneline +fcff6ae (HEAD -> master) Add farewell.txt +a479e94 Add greeting.txt +``` + +### Squashing several commits into one + +As you work, you may find it useful to write lots of commits as you reach small milestones or fix bugs in previous commits. However, it may be useful to "squash" these commits together, to make a cleaner history before merging your work into master. For this, we'll use the "squash" operation. Let's start by writing a bunch of commits - just copy and paste this if you want to speed it up: + +``` +git checkout -b squash +for c in H e l l o , ' ' w o r l d; do + echo "$c" >>squash.txt + git add squash.txt + git commit -m"Add '$c' to squash.txt" +done +``` + +That's a lot of commits to make a file that says "Hello, world"! Let's start another interactive rebase to squash them together. Note that we checked out a branch to try this on, first. Because of that, we can quickly rebase all of the commits since we branched by using `git rebase -i master`. The result: + +``` +pick 1e85199 Add 'H' to squash.txt +pick fff6631 Add 'e' to squash.txt +pick b354c74 Add 'l' to squash.txt +pick 04aaf74 Add 'l' to squash.txt +pick 9b0f720 Add 'o' to squash.txt +pick 66b114d Add ',' to squash.txt +pick dc158cd Add ' ' to squash.txt +pick dfcf9d6 Add 'w' to squash.txt +pick 7a85f34 Add 'o' to squash.txt +pick c275c27 Add 'r' to squash.txt +pick a513fd1 Add 'l' to squash.txt +pick 6b608ae Add 'd' to squash.txt + +# Rebase 1af1b46..6b608ae onto 1af1b46 (12 commands) +# +# Commands: +# p, pick = use commit +# s, squash = use commit, but meld into previous commit +``` + +**Tip** : your local master branch evolves independently of the remote master branch, and git stores the remote branch as `origin/master`. Combined with this trick, `git rebase -i origin/master` is often a very convenient way to rebase all of the commits which haven't been merged upstream yet! + +We're going to squash all of these changes into the first commit. 
To do this, change every "pick" operation to "squash", except for the first line, like so: + +``` +pick 1e85199 Add 'H' to squash.txt +squash fff6631 Add 'e' to squash.txt +squash b354c74 Add 'l' to squash.txt +squash 04aaf74 Add 'l' to squash.txt +squash 9b0f720 Add 'o' to squash.txt +squash 66b114d Add ',' to squash.txt +squash dc158cd Add ' ' to squash.txt +squash dfcf9d6 Add 'w' to squash.txt +squash 7a85f34 Add 'o' to squash.txt +squash c275c27 Add 'r' to squash.txt +squash a513fd1 Add 'l' to squash.txt +squash 6b608ae Add 'd' to squash.txt +``` + +When you save and close your editor, git will think about this for a moment, then open your editor again to revise the final commit message. You'll see something like this: + +``` +# This is a combination of 12 commits. +# This is the 1st commit message: + +Add 'H' to squash.txt + +# This is the commit message #2: + +Add 'e' to squash.txt + +# This is the commit message #3: + +Add 'l' to squash.txt + +# This is the commit message #4: + +Add 'l' to squash.txt + +# This is the commit message #5: + +Add 'o' to squash.txt + +# This is the commit message #6: + +Add ',' to squash.txt + +# This is the commit message #7: + +Add ' ' to squash.txt + +# This is the commit message #8: + +Add 'w' to squash.txt + +# This is the commit message #9: + +Add 'o' to squash.txt + +# This is the commit message #10: + +Add 'r' to squash.txt + +# This is the commit message #11: + +Add 'l' to squash.txt + +# This is the commit message #12: + +Add 'd' to squash.txt + +# Please enter the commit message for your changes. Lines starting +# with '#' will be ignored, and an empty message aborts the commit. +# +# Date: Sun Apr 28 14:21:56 2019 -0400 +# +# interactive rebase in progress; onto 1af1b46 +# Last commands done (12 commands done): +# squash a513fd1 Add 'l' to squash.txt +# squash 6b608ae Add 'd' to squash.txt +# No commands remaining. +# You are currently rebasing branch 'squash' on '1af1b46'. +# +# Changes to be committed: +# new file: squash.txt +# +``` + +This defaults to a combination of all of the commit messages which were squashed, but leaving it like this is almost always not what you want. The old commit messages may be useful for reference when writing the new one, though. + +**Tip** : the "fixup" command you learned about in the previous section can be used for this purpose, too - but it discards the messages of the squashed commits. + +Let's delete everything and replace it with a better commit message, like this: + +``` +Add squash.txt with contents "Hello, world" + +# Please enter the commit message for your changes. Lines starting +# with '#' will be ignored, and an empty message aborts the commit. +# +# Date: Sun Apr 28 14:21:56 2019 -0400 +# +# interactive rebase in progress; onto 1af1b46 +# Last commands done (12 commands done): +# squash a513fd1 Add 'l' to squash.txt +# squash 6b608ae Add 'd' to squash.txt +# No commands remaining. +# You are currently rebasing branch 'squash' on '1af1b46'. +# +# Changes to be committed: +# new file: squash.txt +# +``` + +Save and quit your editor, then examine your git log - success! + +``` +commit c785f476c7dff76f21ce2cad7c51cf2af00a44b6 (HEAD -> squash) +Author: Drew DeVault +Date: Sun Apr 28 14:21:56 2019 -0400 + + Add squash.txt with contents "Hello, world" +``` + +Before we move on, let's pull our changes into the master branch and get rid of this scratch one. 
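If you want to reassure yourself before folding the branch in, a quick range log should show exactly one commit sitting on top of master (this check is optional):

```
git log --oneline master..squash
```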
We can use `git rebase` like we use `git merge`, but it avoids making a merge commit: + +``` +git checkout master +git rebase squash +git branch -D squash +``` + +We generally prefer to avoid using git merge unless we're actually merging unrelated histories. If you have two divergent branches, a git merge is useful to have a record of when they were... merged. In the course of your normal work, rebase is often more appropriate. + +### Splitting one commit into several + +Sometimes the opposite problem happens - one commit is just too big. Let's look into splitting it up. This time, let's write some actual code. Start with a simple C program2 (you can still copy+paste this snippet into your shell to do this quickly): + +``` +cat <main.c +int main(int argc, char *argv[]) { + return 0; +} +EOF +``` + +We'll commit this first. + +``` +git add main.c +git commit -m"Add C program skeleton" +``` + +Next, let's extend the program a bit: + +``` +cat <main.c +#include <stdio.h> + +const char *get_name() { + static char buf[128]; + scanf("%s", buf); + return buf; +} + +int main(int argc, char *argv[]) { + printf("What's your name? "); + const char *name = get_name(); + printf("Hello, %s!\n", name); + return 0; +} +EOF +``` + +After we commit this, we'll be ready to learn how to split it up. + +``` +git commit -a -m"Flesh out C program" +``` + +The first step is to start an interactive rebase. Let's rebase both commits with `git rebase -i HEAD~2`, giving us this rebase plan: + +``` +pick 237b246 Add C program skeleton +pick b3f188b Flesh out C program + +# Rebase c785f47..b3f188b onto c785f47 (2 commands) +# +# Commands: +# p, pick = use commit +# e, edit = use commit, but stop for amending +``` + +Change the second commit's command from "pick" to "edit", then save and close your editor. Git will think about this for a second, then present you with this: + +``` +Stopped at b3f188b... Flesh out C program +You can amend the commit now, with + + git commit --amend + +Once you are satisfied with your changes, run + + git rebase --continue +``` + +We could follow these instructions to add new changes to the commit, but instead let's do a "soft reset"3 by running `git reset HEAD^`. If you run `git status` after this, you'll see that it un-commits the latest commit and adds its changes to the working tree: + +``` +Last commands done (2 commands done): + pick 237b246 Add C program skeleton + edit b3f188b Flesh out C program +No commands remaining. +You are currently splitting a commit while rebasing branch 'master' on 'c785f47'. + (Once your working directory is clean, run "git rebase --continue") + +Changes not staged for commit: + (use "git add ..." to update what will be committed) + (use "git checkout -- ..." to discard changes in working directory) + + modified: main.c + +no changes added to commit (use "git add" and/or "git commit -a") +``` + +To split this up, we're going to do an interactive commit. This allows us to selectively commit only specific changes from the working tree. Run `git commit -p` to start this process, and you'll be presented with the following prompt: + +``` +diff --git a/main.c b/main.c +index b1d9c2c..3463610 100644 +--- a/main.c ++++ b/main.c +@@ -1,3 +1,14 @@ ++#include <stdio.h> ++ ++const char *get_name() { ++ static char buf[128]; ++ scanf("%s", buf); ++ return buf; ++} ++ + int main(int argc, char *argv[]) { ++ printf("What's your name? "); ++ const char *name = get_name(); ++ printf("Hello, %s!\n", name); + return 0; + } +Stage this hunk [y,n,q,a,d,s,e,?]? 
+``` + +Git has presented you with just one "hunk" (i.e. a single change) to consider committing. This one is too big, though - let's use the "s" command to "split" up the hunk into smaller parts. + +``` +Split into 2 hunks. +@@ -1 +1,9 @@ ++#include ++ ++const char *get_name() { ++ static char buf[128]; ++ scanf("%s", buf); ++ return buf; ++} ++ + int main(int argc, char *argv[]) { +Stage this hunk [y,n,q,a,d,j,J,g,/,e,?]? +``` + +**Tip** : If you're curious about the other options, press "?" to summarize them. + +This hunk looks better - a single, self-contained change. Let's hit "y" to answer the question (and stage that "hunk"), then "q" to "quit" the interactive session and proceed with the commit. Your editor will pop up to ask you to enter a suitable commit message. + +``` +Add get_name function to C program + +# Please enter the commit message for your changes. Lines starting +# with '#' will be ignored, and an empty message aborts the commit. +# +# interactive rebase in progress; onto c785f47 +# Last commands done (2 commands done): +# pick 237b246 Add C program skeleton +# edit b3f188b Flesh out C program +# No commands remaining. +# You are currently splitting a commit while rebasing branch 'master' on 'c785f47'. +# +# Changes to be committed: +# modified: main.c +# +# Changes not staged for commit: +# modified: main.c +# +``` + +Save and close your editor, then we'll make the second commit. We could do another interactive commit, but since we just want to include the rest of the changes in this commit we'll just do this: + +``` +git commit -a -m"Prompt user for their name" +git rebase --continue +``` + +That last command tells git that we're done editing this commit, and to continue to the next rebase command. That's it! Run `git log` to see the fruits of your labor: + +``` +$ git log -3 --oneline +fe19cc3 (HEAD -> master) Prompt user for their name +659a489 Add get_name function to C program +237b246 Add C program skeleton +``` + +### Reordering commits + +This one is pretty easy. Let's start by setting up our sandbox: + +``` +echo "Goodbye now!" >farewell.txt +git add farewell.txt +git commit -m"Add farewell.txt" + +echo "Hello there!" >greeting.txt +git add greeting.txt +git commit -m"Add greeting.txt" + +echo "How're you doing?" >inquiry.txt +git add inquiry.txt +git commit -m"Add inquiry.txt" +``` + +The git log should now look like this: + +``` +f03baa5 (HEAD -> master) Add inquiry.txt +a4cebf7 Add greeting.txt +90bb015 Add farewell.txt +``` + +Clearly, this is all out of order. Let's do an interactive rebase of the past 3 commits to resolve this. Run `git rebase -i HEAD~3` and this rebase plan will appear: + +``` +pick 90bb015 Add farewell.txt +pick a4cebf7 Add greeting.txt +pick f03baa5 Add inquiry.txt + +# Rebase fe19cc3..f03baa5 onto fe19cc3 (3 commands) +# +# Commands: +# p, pick = use commit +# +# These lines can be re-ordered; they are executed from top to bottom. +``` + +The fix is now straightforward: just reorder these lines in the order you wish for the commits to appear. Should look something like this: + +``` +pick a4cebf7 Add greeting.txt +pick f03baa5 Add inquiry.txt +pick 90bb015 Add farewell.txt +``` + +Save and close your editor and git will do the rest for you. Note that it's possible to end up with conflicts when you do this in practice - click here for help resolving conflicts. + +### git pull --rebase + +If you've been writing some commits on a branch which has been updated upstream, normally `git pull` will create a merge commit. 
In this respect, `git pull`'s behavior by default is equivalent to:

```
git fetch origin
git merge origin/master
```

There's another option, which is often more useful and leads to a much cleaner history: `git pull --rebase`. Unlike the merge approach, this is equivalent to the following:

```
git fetch origin
git rebase origin/master
```

The merge approach is simpler and easier to understand, but the rebase approach is almost always what you want to do if you understand how to use git rebase. If you like, you can set it as the default behavior like so:

```
git config --global pull.rebase true
```

When you do this, technically you're applying the procedure we discuss in the next section... so let's explain what it means to do that deliberately, too.

### Using git rebase to... rebase

Ironically, the feature of git rebase that I use the least is the one it's named for: rebasing branches. Say you have the following branches:

```
o--o--o--o--> master
        \--o--o--> feature-1
                \--o--> feature-2
```

It turns out feature-2 doesn't depend on any of the changes in feature-1, so you can just base it off of master. The fix is thus:

```
git checkout feature-2
git rebase master
```

The non-interactive rebase does the default operation for all implicated commits ("pick"), which simply rolls your history back to the last common ancestor and replays the commits from both branches. Your history now looks like this:

```
o--o--o--o--> master
        | \--o--> feature-2
        \--o--o--> feature-1
```

### Resolving conflicts

The details on resolving merge conflicts are beyond the scope of this guide - keep your eye out for another guide for this in the future. Assuming you're familiar with resolving conflicts in general, here are the specifics that apply to rebasing.

Sometimes you'll get a merge conflict when doing a rebase, which you can handle just like any other merge conflict. Git will set up the conflict markers in the affected files, `git status` will show you what you need to resolve, and you can mark files as resolved with `git add` or `git rm`. However, in the context of a git rebase, there are two options you should be aware of.

The first is how you complete the conflict resolution. Rather than `git commit` like you'll use when addressing conflicts that arise from `git merge`, the appropriate command for rebasing is `git rebase --continue`. However, there's another option available to you: `git rebase --skip`. This will skip the commit you're working on, and it won't be included in the rebase. This is most common when doing a non-interactive rebase, when git doesn't realize that a commit it's pulled from the "other" branch is an updated version of the commit that it conflicts with on "our" branch.

### Help! I broke it!

No doubt about it - rebasing can be hard sometimes. If you've made a mistake and in so doing lost commits which you needed, then `git reflog` is here to save the day. Running this command will show you every operation which changed a ref, or reference - that is, branches and tags. Each line shows you what the old reference pointed to, and you can `git cherry-pick`, `git checkout`, `git show`, or use any other operation on git commits once thought lost.
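To make that concrete, here is roughly what a rescue session looks like. The `HEAD@{5}` entry is only an example; use whichever reflog entry points at the state you want back:

```
# show every recent operation that moved a ref, newest first
git reflog

# inspect the entry from just before the bad rebase...
git show HEAD@{5}

# ...then rescue it, either onto a new branch
git branch rescue-it HEAD@{5}

# or by cherry-picking the lost commit onto your current branch
git cherry-pick HEAD@{5}
```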
+ + +-------------------------------------------------------------------------------- + +via: https://git-rebase.io/ + +作者:[git-rebase][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://git-rebase.io/ +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20190512 How to Setup Local Yum-DNF Repository on RHEL 8 Server Using DVD or ISO File.md b/sources/tech/20190512 How to Setup Local Yum-DNF Repository on RHEL 8 Server Using DVD or ISO File.md new file mode 100644 index 0000000000..c136a83deb --- /dev/null +++ b/sources/tech/20190512 How to Setup Local Yum-DNF Repository on RHEL 8 Server Using DVD or ISO File.md @@ -0,0 +1,164 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File) +[#]: via: (https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +How to Setup Local Yum/DNF Repository on RHEL 8 Server Using DVD or ISO File +====== + +Recently Red Hat has released its most awaited operating system “ **RHEL 8** “, in case you have installed RHEL 8 Server on your system and wondering how to setup local yum or dnf repository using installation DVD or ISO file then refer below steps and procedure. + + + +In RHEL 8, we have two package repositories: + + * BaseOS + * Application Stream + + + +BaseOS repository have all underlying OS packages where as Application Stream repository have all application related packages, developer tools and databases etc. Using Application stream repository, we can have multiple of versions of same application and Database. 
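Once the local repositories are configured by following the steps below, you can see these multiple application streams for yourself with the dnf module commands. The exact modules and stream versions listed will depend on your RHEL 8 media, and the `php` module here is only an example:

```
[root@linuxtechi ~]# dnf module list
[root@linuxtechi ~]# dnf module list php
```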
+ +### Step:1) Mount RHEL 8 ISO file / Installation DVD + +To mount RHEL 8 ISO file inside your RHEL 8 server use the beneath mount command, + +``` +[root@linuxtechi ~]# mount -o loop rhel-8.0-x86_64-dvd.iso /opt/ +``` + +**Note:** I am assuming you have already copied RHEL 8 ISO file inside your system, + +In case you have RHEL 8 installation DVD, then use below mount command to mount it, + +``` +[root@linuxtechi ~]# mount /dev/sr0 /opt +``` + +### Step:2) Copy media.repo file from mounted directory to /etc/yum.repos.d/ + +In our case RHEL 8 Installation DVD or ISO file is mounted under /opt folder, use cp command to copy media.repo file to /etc/yum.repos.d/ directory, + +``` +[root@linuxtechi ~]# cp -v /opt/media.repo /etc/yum.repos.d/rhel8.repo +'/opt/media.repo' -> '/etc/yum.repos.d/rhel8.repo' +[root@linuxtechi ~]# +``` + +Set “644” permission on “ **/etc/yum.repos.d/rhel8.repo** ” + +``` +[root@linuxtechi ~]# chmod 644 /etc/yum.repos.d/rhel8.repo +[root@linuxtechi ~]# +``` + +### Step:3) Add repository entries in “/etc/yum.repos.d/rhel8.repo” file + +By default, **rhel8.repo** file will have following content, + + + +Edit rhel8.repo file and add the following contents, + +``` +[root@linuxtechi ~]# vi /etc/yum.repos.d/rhel8.repo +[InstallMedia-BaseOS] +name=Red Hat Enterprise Linux 8 - BaseOS +metadata_expire=-1 +gpgcheck=1 +enabled=1 +baseurl=file:///opt/BaseOS/ +gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release + +[InstallMedia-AppStream] +name=Red Hat Enterprise Linux 8 - AppStream +metadata_expire=-1 +gpgcheck=1 +enabled=1 +baseurl=file:///opt/AppStream/ +gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release +``` + +rhel8.repo should look like above once we add the content, In case you have mounted the Installation DVD or ISO on different folder then change the location and folder name in base url line for both repositories and rest of parameter leave as it is. + +### Step:4) Clean Yum / DNF and Subscription Manager Cache + +Use the following command to clear yum or dnf and subscription manager cache, + +``` +root@linuxtechi ~]# dnf clean all +[root@linuxtechi ~]# subscription-manager clean +All local data removed +[root@linuxtechi ~]# +``` + +### Step:5) Verify whether Yum / DNF is getting packages from Local Repo + +Use dnf or yum repolist command to verify whether these commands are getting packages from Local repositories or not. + +``` +[root@linuxtechi ~]# dnf repolist +Updating Subscription Management repositories. +Unable to read consumer identity +This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register. +Last metadata expiration check: 1:32:44 ago on Sat 11 May 2019 08:48:24 AM BST. +repo id repo name status +InstallMedia-AppStream Red Hat Enterprise Linux 8 - AppStream 4,672 +InstallMedia-BaseOS Red Hat Enterprise Linux 8 - BaseOS 1,658 +[root@linuxtechi ~]# +``` + +**Note :** You can use either dnf or yum command, if you use yum command then its request is redirecting to DNF itself because in RHEL 8 yum is based on DNF command. + +If you have noticed the above command output carefully, we are getting warning message “ **This system is not registered to Red Hat Subscription Management**. 
**You can use subscription-manager to register”** , if you want to suppress or prevent this message while running dnf / yum command then edit the file “/etc/yum/pluginconf.d/subscription-manager.conf”, changed the parameter “enabled=1” to “enabled=0” + +``` +[root@linuxtechi ~]# vi /etc/yum/pluginconf.d/subscription-manager.conf +[main] +enabled=0 +``` + +save and exit the file. + +### Step:6) Installing packages using DNF / Yum + +Let’s assume we want to install nginx web server then run below dnf command, + +``` +[root@linuxtechi ~]# dnf install nginx +``` + +![][1] + +Similarly if you want to install **LEMP** stack on your RHEL 8 system use the following dnf command, + +``` +[root@linuxtechi ~]# dnf install nginx mariadb php -y +``` + +[![][2]][3] + +This confirms that we have successfully configured Local yum / dnf repository on our RHEL 8 server using Installation DVD or ISO file. + +In case these steps help you technically, please do share your feedback and comments. + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/setup-local-yum-dnf-repository-rhel-8/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/wp-content/uploads/2019/05/dnf-install-nginx-rhel8-1024x376.jpg +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/LEMP-Stack-Install-RHEL8-1024x540.jpg +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/LEMP-Stack-Install-RHEL8.jpg diff --git a/sources/tech/20190513 Blockchain 2.0 - Introduction To Hyperledger Fabric -Part 10.md b/sources/tech/20190513 Blockchain 2.0 - Introduction To Hyperledger Fabric -Part 10.md new file mode 100644 index 0000000000..c685af487c --- /dev/null +++ b/sources/tech/20190513 Blockchain 2.0 - Introduction To Hyperledger Fabric -Part 10.md @@ -0,0 +1,81 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – Introduction To Hyperledger Fabric [Part 10]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +Blockchain 2.0 – Introduction To Hyperledger Fabric [Part 10] +====== + +![Hyperledger Fabric][1] + +### Hyperledger Fabric + +The [**Hyperledger project**][2] is an umbrella organization of sorts featuring many different modules and systems under development. Among the most popular among these individual sub-projects is the **Hyperledger Fabric**. This post will explore the features that would make the Fabric almost indispensable in the near future once blockchain systems start proliferating into main stream use. Towards the end we will also take a quick look at what developers and enthusiasts need to know regarding the technicalities of the Hyperledger Fabric. + +### Inception + +In the usual fashion for the Hyperledger project, Fabric was “donated” to the organization by one of its core members, **IBM** , who was previously the principle developer of the same. The technology platform shared by IBM was put to joint development at the Hyperledger project with contributions from over a 100 member companies and institutions. 
+ +Currently running on **v1.4** of the LTS version, Fabric has come a long way and is currently seen as the go to enterprise solution for managing business data. The core vision that surrounds the Hyperledger project inevitably permeates into the Fabric as well. The Hyperledger Fabric system carries forward all the enterprise ready and scalable features that are hard coded into all projects under the Hyperledger organization. + +### Highlights Of Hyperledger Fabric + +Hyperledger Fabric offers a wide variety of features and standards that are built around the mission of supporting fast development and modular architectures. Furthermore, compared to its competitors (primarily **Ripple** and [**Ethereum**][3]), Fabric takes an explicit stance toward closed and [**permissioned blockchains**][4]. Their core objective here is to develop a set of tools which will aid blockchain developers in creating customized solutions and not to create a standalone ecosystem or a product. + +Some of the highlights of the Hyperledger Fabric are given below: + + * **Permissioned blockchain systems** + + + +This is a category where other platforms such as Ethereum and Ripple differ quite a lot with Hyperledger Fabric. The Fabric by default is a tool designed to implement a private permissioned blockchain. Such blockchains cannot be accessed by everyone and the nodes working to offer consensus or to verify transactions are chosen by a central authority. This might be important for some applications such as banking and insurance, where transactions have to be verified by the central authority rather than participants. + + * **Confidential and controlled information flow** + + + +The Fabric has built in permission systems that will restrict information flow within a specific group or certain individuals as the case may be. Unlike a public blockchain where anyone and everyone who runs a node will have a copy and selective access to data stored in the blockchain, the admin of the system can choose how to and who to share access to the information. There are also subsystems which will encrypt the stored data at better security standards compared to existing competition. + + * **Plug and play architecture** + + + +Hyperledger Fabric has a plug and play type architecture. Individual components of the system may be chosen to be implemented and components of the system that developers don’t see a use for maybe discarded. The Fabric takes a highly modular and customizable route to development rather than a one size fits all approach taken by its competitors. This is especially attractive for firms and companies looking to build a lean system fast. This combined with the interoperability of the Fabric with other Hyperledger components implies that developers and designers now have access to a diverse set of standardized tools instead of having to pull code from different sources and integrate them afterwards. It also presents a rather fail-safe way to build robust modular systems. + + * **Smart contracts and chaincode** + + + +A distributed application running on a blockchain is called a [**Smart contract**][5]. While the smart contract term is more or less associated with the Ethereum platform, chaincode is the name given to the same in the Hyperledger camp. Apart from possessing all the benefits of **DApps** being present in chaincode applications, what sets Hyperledger apart is the fact that the code for the same may be written in multiple high-level programming language. 
It supports [**Go**][6] and **JavaScript** out of the box and supports many other after integration with appropriate compiler modules as well. Though this fact might not mean much at this point, the fact remains that if existing talent can be used for ongoing projects involving blockchain that has the potential to save companies billions of dollars in personnel training and management in the long run. Developers can code in languages they’re comfortable in to start building applications on the Hyperledger Fabric and need not learn nor train in platform specific languages and syntax. This presents flexibility which current competitors of the Hyperledger Fabric do not offer. + + * The Hyperledger Fabric is a back-end driver platform and is mainly aimed at integration projects where a blockchain or another distributed ledger technology is required. As such it does not provide any user facing services except for minor scripting capabilities. (Think of it to be more like a scripting language.) + * Hyperledger Fabric supports building sidechains for specific use-cases. In case, the developer wishes to isolate a set of users or participants to a specific part or functionality of the application, they may do so by implementing side-chains. Side-chains are blockchains that derive from a main parent, but form a different chain after their initial block. This block which gives rise to the new chain will stay immune to further changes in the new chain and the new chain remains immutable even if new information is added to the original chain. This functionality will aid in scaling the platform being developed and usher in user specific and case specific processing capabilities. + * The previous feature also means that not all users will have an “exact” copy of all the data in the blockchain as is expected usually from public chains. Participating nodes will have a copy of data that is only relevant to them. For instance, consider an application similar to PayTM in India. The app has wallet functionality as well as an e-commerce end. However, not all its wallet users use PayTM to shop online. In this scenario, only active shoppers will have the corresponding chain of transactions on the PayTM e-commerce site, whereas the wallet users will just have a copy of the chain that stores wallet transactions. This flexible architecture for data storage and retrieval is important while scaling, since massive singular blockchains have been shown to increase lead times for processing transactions. The chain can be kept lean and well categorised this way. + + + +We will look at other modules under the Hyperledger Project in detail in upcoming posts. 
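In the meantime, if you would like a rough feel for how the chaincode described above is handled in practice, the sketch below shows the general shape of installing, instantiating and querying a chaincode package with the Fabric 1.x `peer` command line tool. The channel name, chaincode name, path and arguments are placeholders, and the exact flags differ between Fabric releases, so treat it as an illustration rather than a copy-and-paste recipe:

```
# install the chaincode on a peer (name, version and path are examples)
peer chaincode install -n mycc -v 1.0 -l golang -p github.com/chaincode/example/go/

# instantiate it on a channel, calling its init function
peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n mycc -v 1.0 \
    -c '{"Args":["init","a","100","b","200"]}'

# invoke a transaction, then query the resulting state
peer chaincode invoke -o orderer.example.com:7050 -C mychannel -n mycc \
    -c '{"Args":["invoke","a","b","10"]}'
peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}'
```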
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/ +[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ +[4]: https://www.ostechnix.com/blockchain-2-0-public-vs-private-blockchain-comparison/ +[5]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[6]: https://www.ostechnix.com/install-go-language-linux/ diff --git a/sources/tech/20190513 How To Set Password Complexity On Linux.md b/sources/tech/20190513 How To Set Password Complexity On Linux.md new file mode 100644 index 0000000000..e9a3171c6b --- /dev/null +++ b/sources/tech/20190513 How To Set Password Complexity On Linux.md @@ -0,0 +1,243 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Set Password Complexity On Linux?) +[#]: via: (https://www.2daygeek.com/how-to-set-password-complexity-policy-on-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +How To Set Password Complexity On Linux? +====== + +User management is one of the important task of Linux system administration. + +There are many aspect is involved in this and implementing the strong password policy is one of them. + +Navigate to the following URL, if you would like to **[generate a strong password on Linux][1]**. + +It will Restrict unauthorized access to systems. + +By default Linux is secure that everybody know. however, we need to make necessary tweak on this to make it more secure. + +Insecure password will leads to breach security. So, take additional care on this. + +Navigate to the following URL, if you would like to see the **[password strength and score][2]** of the generated strong password. + +In this article, we will teach you, how to implement the best security policy on Linux. + +We can use PAM (the “pluggable authentication module”) to enforce password policy On most Linux systems. + +The file can be found in the following location. + +For Redhat based systems @ `/etc/pam.d/system-auth` and Debian based systems @ `/etc/pam.d/common-password`. + +The default password aging details can be found in the `/etc/login.defs` file. + +I have trimmed this file for better understanding. + +``` +# vi /etc/login.defs + +PASS_MAX_DAYS 99999 +PASS_MIN_DAYS 0 +PASS_MIN_LEN 5 +PASS_WARN_AGE 7 +``` + +**Details:** + + * **`PASS_MAX_DAYS:`**` ` Maximum number of days a password may be used. + * **`PASS_MIN_DAYS:`**` ` Minimum number of days allowed between password changes. + * **`PASS_MIN_LEN:`**` ` Minimum acceptable password length. + * **`PASS_WARN_AGE:`**` ` Number of days warning given before a password expires. + + + +We will show you, how to implement the below eleven password policies in Linux. 
+ + * Password Max days + * Password Min days + * Password warning days + * Password history or Deny Re-Used Passwords + * Password minimum length + * Minimum upper case characters + * Minimum lower case characters + * Minimum digits in password + * Minimum other characters (Symbols) + * Account lock – retries + * Account unlock time + + + +### What Is Password Max days? + +This parameter limits the maximum number of days a password can be used. It’s mandatory for user to change his/her account password before expiry. + +If they forget to change, they are not allowed to login into the system. They need to work with admin team to get rid of it. + +It can be set in `/etc/login.defs` file. I’m going to set `90 days`. + +``` +# vi /etc/login.defs + +PASS_MAX_DAYS 90 +``` + +### What Is Password Min days? + +This parameter limits the minimum number of days after password can be changed. + +Say for example, if this parameter is set to 15 and user changed password today. Then he won’t be able to change the password again before 15 days from now. + +It can be set in `/etc/login.defs` file. I’m going to set `15 days`. + +``` +# vi /etc/login.defs + +PASS_MIN_DAYS 15 +``` + +### What Is Password Warning Days? + +This parameter controls the password warning days and it will warn the user when the password is going to expires. + +A warning will be given to the user regularly until the warning days ends. This can helps user to change their password before expiry. Otherwise we need to work with admin team for unlock the password. + +It can be set in `/etc/login.defs` file. I’m going to set `10 days`. + +``` +# vi /etc/login.defs + +PASS_WARN_AGE 10 +``` + +**Note:** All the above parameters only applicable for new accounts and not for existing accounts. + +### What Is Password History Or Deny Re-Used Passwords? + +This parameter keep controls of the password history. Keep history of passwords used (the number of previous passwords which cannot be reused). + +When the users try to set a new password, it will check the password history and warn the user when they set the same old password. + +It can be set in `/etc/pam.d/system-auth` file. I’m going to set `5` for history of password. + +``` +# vi /etc/pam.d/system-auth + +password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok remember=5 +``` + +### What Is Password Minimum Length? + +This parameter keeps the minimum password length. When the users set a new password, it will check against this parameter and warn the user if they try to set the password length less than that. + +It can be set in `/etc/pam.d/system-auth` file. I’m going to set `12` character for minimum password length. + +``` +# vi /etc/pam.d/system-auth + +password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 +``` + +**try_first_pass retry=3** : Allow users to set a good password before the passwd command aborts. + +### Set Minimum Upper Case Characters? + +This parameter keeps, how many upper case characters should be added in the password. These are password strengthening parameters ,which increase the password strength. + +When the users set a new password, it will check against this parameter and warn the user if they are not including any upper case characters in the password. + +It can be set in `/etc/pam.d/system-auth` file. I’m going to set `1` character for minimum password length. + +``` +# vi /etc/pam.d/system-auth + +password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 ucredit=-1 +``` + +### Set Minimum Lower Case Characters? 
+ +This parameter keeps, how many lower case characters should be added in the password. These are password strengthening parameters ,which increase the password strength. + +When the users set a new password, it will check against this parameter and warn the user if they are not including any lower case characters in the password. + +It can be set in `/etc/pam.d/system-auth` file. I’m going to set `1` character. + +``` +# vi /etc/pam.d/system-auth + +password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 lcredit=-1 +``` + +### Set Minimum Digits In Password? + +This parameter keeps, how many digits should be added in the password. These are password strengthening parameters ,which increase the password strength. + +When the users set a new password, it will check against this parameter and warn the user if they are not including any digits in the password. + +It can be set in `/etc/pam.d/system-auth` file. I’m going to set `1` character. + +``` +# vi /etc/pam.d/system-auth + +password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 dcredit=-1 +``` + +### Set Minimum Other Characters (Symbols) In Password? + +This parameter keeps, how many Symbols should be added in the password. These are password strengthening parameters ,which increase the password strength. + +When the users set a new password, it will check against this parameter and warn the user if they are not including any Symbol in the password. + +It can be set in `/etc/pam.d/system-auth` file. I’m going to set `1` character. + +``` +# vi /etc/pam.d/system-auth + +password requisite pam_cracklib.so try_first_pass retry=3 minlen=12 ocredit=-1 +``` + +### Set Account Lock? + +This parameter controls users failed attempts. It locks user account after reaches the given number of failed login attempts. + +It can be set in `/etc/pam.d/system-auth` file. + +``` +# vi /etc/pam.d/system-auth + +auth required pam_tally2.so onerr=fail audit silent deny=5 +account required pam_tally2.so +``` + +### Set Account Unlock Time? + +This parameter keeps users unlock time. If the user account is locked after consecutive failed authentications. + +It’s unlock the locked user account after reaches the given time. Sets the time (900 seconds = 15 minutes) for which the account should remain locked. + +It can be set in `/etc/pam.d/system-auth` file. 
+ +``` +# vi /etc/pam.d/system-auth + +auth required pam_tally2.so onerr=fail audit silent deny=5 unlock_time=900 +account required pam_tally2.so +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-set-password-complexity-policy-on-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/5-ways-to-generate-a-random-strong-password-in-linux-terminal/ +[2]: https://www.2daygeek.com/how-to-check-password-complexity-strength-and-score-in-linux/ diff --git a/sources/tech/20190513 Manage business documents with OpenAS2 on Fedora.md b/sources/tech/20190513 Manage business documents with OpenAS2 on Fedora.md new file mode 100644 index 0000000000..c8e82151ef --- /dev/null +++ b/sources/tech/20190513 Manage business documents with OpenAS2 on Fedora.md @@ -0,0 +1,153 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Manage business documents with OpenAS2 on Fedora) +[#]: via: (https://fedoramagazine.org/manage-business-documents-with-openas2-on-fedora/) +[#]: author: (Stuart D Gathman https://fedoramagazine.org/author/sdgathman/) + +Manage business documents with OpenAS2 on Fedora +====== + +![][1] + +Business documents often require special handling. Enter Electronic Document Interchange, or **EDI**. EDI is more than simply transferring files using email or http (or ftp), because these are documents like orders and invoices. When you send an invoice, you want to be sure that: + +1\. It goes to the right destination, and is not intercepted by competitors. +2\. Your invoice cannot be forged by a 3rd party. +3\. Your customer can’t claim in court that they never got the invoice. + +The first two goals can be accomplished by HTTPS or email with S/MIME, and in some situations, a simple HTTPS POST to a web API is sufficient. What EDI adds is the last part. + +This article does not cover the messy topic of formats for the files exchanged. Even when using a standardized format like ANSI or EDIFACT, it is ultimately up to the business partners. It is not uncommon for business partners to use an ad-hoc CSV file format. This article shows you how to configure Fedora to send and receive in an EDI setup. + +### Centralized EDI + +The traditional solution is to use a Value Added Network, or **VAN**. The VAN is a central hub that transfers documents between their customers. Most importantly, it keeps a secure record of the documents exchanged that can be used as evidence in disputes. The VAN can use different transfer protocols for each of its customers + +### AS Protocols and MDN + +The AS protocols are a specification for adding a digital signature with optional encryption to an electronic document. What it adds over HTTPS or S/MIME is the Message Disposition Notification, or **MDN**. The MDN is a signed and dated response that says, in essence, “We got your invoice.” It uses a secure hash to identify the specific document received. This addresses point #3 without involving a third party. + +The [AS2 protocol][2] uses HTTP or HTTPS for transport. Other AS protocols target [FTP][3] and [SMTP][4]. AS2 is used by companies big and small to avoid depending on (and paying) a VAN. 
+ +### OpenAS2 + +OpenAS2 is an open source Java implemention of the AS2 protocol. It is available in Fedora since 28, and installed with: + +``` +$ sudo dnf install openas2 +$ cd /etc/openas2 +``` + +Configuration is done with a text editor, and the config files are in XML. The first order of business before starting OpenAS2 is to change the factory passwords. + +Edit _/etc/openas2/config.xml_ and search for _ChangeMe_. Change those passwords. The default password on the certificate store is _testas2_ , but that doesn’t matter much as anyone who can read the certificate store can read _config.xml_ and get the password. + +### What to share with AS2 partners + +There are 3 things you will exchange with an AS2 peer. + +#### AS2 ID + +Don’t bother looking up the official AS2 standard for legal AS2 IDs. While OpenAS2 implements the standard, your partners will likely be using a proprietary product which doesn’t. While AS2 allows much longer IDs, many implementations break with more than 16 characters. Using otherwise legal AS2 ID chars like ‘:’ that can appear as path separators on a proprietary OS is also a problem. Restrict your AS2 ID to upper and lower case alpha, digits, and ‘_’ with no more than 16 characters. + +#### SSL certificate + +For real use, you will want to generate a certificate with SHA256 and RSA. OpenAS2 ships with two factory certs to play with. Don’t use these for anything real, obviously. The certificate file is in PKCS12 format. Java ships with _keytool_ which can maintain your PKCS12 “keystore,” as Java calls it. This article skips using _openssl_ to generate keys and certificates. Simply note that _sudo keytool -list -keystore as2_certs.p12_ will list the two factory practice certs. + +#### AS2 URL + +This is an HTTP URL that will access your OpenAS2 instance. HTTPS is also supported, but is redundant. To use it you have to uncomment the https module configuration in _config.xml_ , and supply a certificate signed by a public CA. This requires another article and is entirely unnecessary here. + +By default, OpenAS2 listens on 10080 for HTTP and 10443 for HTTPS. OpenAS2 can talk to itself, so it ships with two partnerships using __ as the AS2 URL. If you don’t find this a convincing demo, and can install a second instance (on a VM, for instance), you can use private IPs for the AS2 URLs. Or install [Cjdns][5] to get IPv6 mesh addresses that can be used anywhere, resulting in AS2 URLs like _http://[fcbf:fc54:e597:7354:8250:2b2e:95e6:d6ba]:10080_. + +Most businesses will also want a list of IPs to add to their firewall. This is actually [bad practice][6]. An AS2 server has the same security risk as a web server, meaning you should isolate it in a VM or container. Also, the difficulty of keeping mutual lists of IPs up to date grows with the list of partners. The AS2 server rejects requests not signed by a configured partner. + +### OpenAS2 Partners + +With that in mind, open _partnerships.xml_ in your editor. At the top is a list of “partners.” Each partner has a name (referenced by the partnerships below as “sender” or “receiver”), AS2 ID, certificate, and email. You need a partner definition for yourself and those you exchange documents with. You can define multiple partners for yourself. OpenAS2 ships with two partners, OpenAS2A and OpenAS2B, which you’ll use to send a test document. + +### OpenAS2 Partnerships + +Next is a list of “partnerships,” one for each direction. 
Each partnership configuration includes the sender, receiver, and the AS2 URL used to send the documents. By default, partnerships use synchronous MDN. The MDN is returned on the same HTTP transaction. You could uncomment the _as2_receipt_option_ for asynchronous MDN, which is sent some time later. Use synchronous MDN whenever possible, as tracking pending MDNs adds complexity to your application. + +The other partnership options select encryption, signature hash, and other protocol options. A fully implemented AS2 receiver can handle any combination of options, but AS2 partners may have incomplete implementations or policy requirements. For example, DES3 is a comparatively weak encryption algorithm, and may not be acceptable. It is the default because it is almost universally implemented. + +If you went to the trouble to set up a second physical or virtual machine for this test, designate one as OpenAS2A and the other as OpenAS2B. Modify the _as2_url_ on the OpenAS2A-to-OpenAS2B partnership to use the IP (or hostname) of OpenAS2B, and vice versa for the OpenAS2B-to-OpenAS2A partnership. Unless they are using the FedoraWorkstation firewall profile, on both machines you’ll need: + +``` +# sudo firewall-cmd --zone=public --add-port=10080/tcp +``` + +Now start the _openas2_ service (on both machines if needed): + +``` +# sudo systemctl start openas2 +``` + +### Resetting the MDN password + +This initializes the MDN log database with the factory password, not the one you changed it to. This is a packaging bug to be fixed in the next release. To avoid frustration, here’s how to change the h2 database password: + +``` +$ sudo systemctl stop openas2 +$ cat >h2passwd <<'DONE' +#!/bin/bash +AS2DIR="/var/lib/openas2" +java -cp "$AS2DIR"/lib/h2* org.h2.tools.Shell \ + -url jdbc:h2:"$AS2DIR"/db/openas2 \ + -user sa -password "$1" <testdoc <<'DONE' +This is not a real EDI format, but is nevertheless a document. +DONE +$ sudo chown openas2 testdoc +$ sudo mv testdoc /var/spool/openas2/toOpenAS2B +$ sudo journalctl -f -u openas2 +... log output of sending file, Control-C to stop following log +^C +``` + +OpenAS2 does not send a document until it is writable by the _openas2_ user or group. As a consequence, your actual business application will copy, or generate in place, the document. Then it changes the group or permissions to send it on its way, to avoid sending a partial document. + +Now, on the OpenAS2B machine, _/var/spool/openas2/OpenAS2A_OID-OpenAS2B_OID/inbox_ shows the message received. That should get you started! 
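
Before wiring in a real application, it may be worth scripting the hand-off described above so a partially written file can never be picked up. This is a minimal sketch, assuming the same toOpenAS2B outbox used in the test and a wrapper that runs as root; the script name is made up:

```
#!/bin/bash
# as2send.sh (hypothetical helper): stage a document fully, then
# release it to OpenAS2 in a single final step.
# Usage: ./as2send.sh /path/to/document
set -e
OUTBOX=/var/spool/openas2/toOpenAS2B

# Copy into the outbox owned by root with mode 0600. As noted above,
# OpenAS2 leaves files alone until the openas2 user can write them,
# so a half-copied document is never sent.
install -o root -m 0600 "$1" "$OUTBOX/$(basename "$1")"

# Flip ownership to openas2 as the single "go" step.
chown openas2 "$OUTBOX/$(basename "$1")"
```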
+ +* * * + +_Photo by _[ _Beatriz Pérez Moya_][7]_ on _[_Unsplash_][8]_._ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/manage-business-documents-with-openas2-on-fedora/ + +作者:[Stuart D Gathman][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/sdgathman/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/openas2-816x345.jpg +[2]: https://en.wikipedia.org/wiki/AS2 +[3]: https://en.wikipedia.org/wiki/AS3_(networking) +[4]: https://en.wikipedia.org/wiki/AS1_(networking) +[5]: https://fedoramagazine.org/decentralize-common-fedora-apps-cjdns/ +[6]: https://www.ld.com/as2-part-2-best-practices/ +[7]: https://unsplash.com/photos/XN4T2PVUUgk?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText +[8]: https://unsplash.com/search/photos/documents?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText diff --git a/sources/tech/20190514 Why bother writing tests at all.md b/sources/tech/20190514 Why bother writing tests at all.md new file mode 100644 index 0000000000..2b80dbaf40 --- /dev/null +++ b/sources/tech/20190514 Why bother writing tests at all.md @@ -0,0 +1,93 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why bother writing tests at all?) +[#]: via: (https://dave.cheney.net/2019/05/14/why-bother-writing-tests-at-all) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +Why bother writing tests at all? +====== + +In previous posts and presentations I talked about [how to test][1], and [when to test][2]. To conclude this series of I’m going to ask the question, _why test at all?_ + +### Even if you don’t, someone _will_ test your software + +I’m sure no-one reading this post thinks that software should be delivered without being tested first. Even if that were true, your customers are going to test it, or at least use it. If nothing else, it would be good to discover any issues with the code before your customers do. If not for the reputation of your company, at least for your professional pride. + +So, if we agree that software should be tested, the question becomes: _who_ should do that testing? + +### The majority of testing should be performed by development teams + +I argue that the majority of the testing should be done by development groups. Moreover, testing should be automated, and thus the majority of these tests should be unit style tests. + +To be clear, I am _not_ saying you shouldn’t write integration, functional, or end to end tests. I’m also _not_ saying that you shouldn’t have a QA group, or integration test engineers. However at a recent software conference, in a room of over 1,000 engineers, nobody raised their hand when I asked if they considered themselves in a pure quality assurance role. + +You might argue that the audience was self selecting, that QA engineers did not feel a software conference was relevant–or welcoming–to them. However, I think this proves my point, the days of [one developer to one test engineer][3] are gone and not coming back. + +If development teams aren’t writing the majority of tests, who is? 
+ +### Manual testing should not be the majority of your testing because manual testing is O(n) + +Thus, if individual contributors are expected to test the software they write, why do we need to automate it? Why is a manual testing plan not good enough? + +Manual testing of software or manual verification of a defect is not sufficient because it does not scale. As the number of manual tests grows, engineers are tempted to skip them or only execute the scenarios they _think_ are could be affected. Manual testing is expensive in terms of time, thus dollars, and it is boring. 99.9% of the tests that passed last time are _expected_ to pass again. Manual testing is looking for a needle in a haystack, except you don’t stop when you find the first needle. + +This means that your first response when given a bug to fix or a feature to implement should be to write a failing test. This doesn’t need to be a unit test, but it should be an automated test. Once you’ve fixed the bug, or added the feature, now have the test case to prove it worked–and you can check them in together. + +### Tests are the critical component that ensure you can always ship your master branch + +As a development team, you are judged on your ability to deliver working software to the business. No, seriously, the business could care less about OOP vs FP, CI/CD, table tennis or limited run La Croix. + +Your super power is, at any time, anyone on the team should be confident that the master branch of your code is shippable. This means at any time they can deliver a release of your software to the business and the business can recoup its investment in your development R&D. + +I cannot emphasise this enough. If you want the non technical parts of the business to believe you are heros, you must never create a situation where you say “well, we can’t release right now because we’re in the middle of an important refactoring. It’ll be a few weeks. We hope.” + +Again, I’m not saying you cannot refactor, but at every stage your product must be shippable. Your tests have to pass. It may not have all the desired features, but the features that are there should work as described on the tin. + +### Tests lock in behaviour + +Your tests are the contract about what your software does and does not do. Unit tests should lock in the behaviour of the package’s API. Integration tests do the same for complex interactions. Tests describe, in code, what the program promises to do. + +If there is a unit test for each input permutation, you have defined the contract for what the code will do _in code_ , not documentation. This is a contract anyone on your team can assert by simply running the tests. At any stage you _know_ with a high degree of confidence that the behaviour people relied on before your change continues to function after your change. + +### Tests give you confidence to change someone else’s code + +Lastly, and this is the biggest one, for programmers working on a piece of code that has been through many hands. Tests give you the confidence to make changes. + +Even though we’ve never met, something I know about you, the reader, is you will eventually leave your current employer. Maybe you’ll be moving on to a new role, or perhaps a promotion, perhaps you’ll move cities, or follow your partner overseas. Whatever the reason, the succession of the maintenance of programs you write is key. + +If people cannot maintain our code then as you and I move from job to job we’ll leave behind programs which cannot be maintained. 
This goes beyond advocacy for a language or tool. Programs which cannot be changed, programs which are too hard to onboard new developers, or programs which feel like career digression to work on them will reach only one end state–they are a dead end. They represent a balance sheet loss for the business. They will be replaced. + +If you worry about who will maintain your code after you’re gone, write good tests. + +#### Related posts: + + 1. [Writing table driven tests in Go][4] + 2. [Prefer table driven tests][5] + 3. [Automatically run your package’s tests with inotifywait][6] + 4. [The value of TDD][7] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2019/05/14/why-bother-writing-tests-at-all + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests +[2]: https://dave.cheney.net/paste/absolute-unit-test-london-gophers.pdf +[3]: https://docs.microsoft.com/en-us/azure/devops/learn/devops-at-microsoft/evolving-test-practices-microsoft +[4]: https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go (Writing table driven tests in Go) +[5]: https://dave.cheney.net/2019/05/07/prefer-table-driven-tests (Prefer table driven tests) +[6]: https://dave.cheney.net/2016/06/21/automatically-run-your-packages-tests-with-inotifywait (Automatically run your package’s tests with inotifywait) +[7]: https://dave.cheney.net/2016/04/11/the-value-of-tdd (The value of TDD) diff --git a/sources/tech/20190515 How to manage access control lists with Ansible.md b/sources/tech/20190515 How to manage access control lists with Ansible.md new file mode 100644 index 0000000000..692dd70599 --- /dev/null +++ b/sources/tech/20190515 How to manage access control lists with Ansible.md @@ -0,0 +1,139 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to manage access control lists with Ansible) +[#]: via: (https://opensource.com/article/19/5/manage-access-control-lists-ansible) +[#]: author: (Taz Brown https://opensource.com/users/heronthecli) + +How to manage access control lists with Ansible +====== +Automating ACL management with Ansible's ACL module is a smart way to +strengthen your security strategy. +![Data container block with hexagons][1] + +Imagine you're a new DevOps engineer in a growing agile environment, and recently your company has experienced phenomenal growth. To support expansion, the company increased hiring by 25% over the last quarter and added 5,000 more servers and network devices to its infrastructure. The company now has over 13,000 users, and you need a tool to scale the existing infrastructure and manage your large number of users and their thousands of files and directories. The company decided to adopt [Ansible][2] company-wide to manage [access control lists (ACLs)][3] and answer the call of effectively managing files and directories and permissions. + +Ansible can be used for a multitude of administration and maintenance tasks and, as a DevOps engineer or administrator, it's likely you've been tasked with using it to manage ACLs. 
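
Before handing ACL changes to Ansible, it is worth confirming that the control node can actually reach that fleet. Here is a minimal sketch, assuming SSH access is already set up; the host and group names are invented for illustration:

```
# Build a small throwaway inventory for a couple of the new hosts.
cat > inventory.ini <<'EOF'
[webservers]
web01.example.com
web02.example.com
EOF

# Confirm Ansible can reach them before running any ACL playbooks.
ansible all -i inventory.ini -m ping
```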
+ +### About managing ACLs + +ACLs allow regular users to share their files and directories selectively with other users and groups. With ACLs, a user can grant others the ability to read, write, and execute files and directories without leaving those filesystem elements open. + +ACLs are set and removed at the command line using the **setfacl** utility. The command is usually followed by the name of a file or directory. To set permissions, you would use the Linux command **setfacl -m d ⭕rx ** (e.g., **setfacl -m d ⭕rx Music/**). To view the current permissions on a directory, you would use the command **getfacl ** (e.g., **getfacl Music/** ). To remove an ACL from a file or directory, you would type the command, **# setfacl -x ** (to remove only the specified ACL from the file/directory) or **# setfacl -b ** (to remove all ACLs from the file/directory). + +Only the owner assigned to the file or directory can set ACLs. (It's important to understand this before you, as the admin, take on Ansible to manage your ACLs.) There are also default ACLs, which control directory access; if a file inside a directory has no ACL, then the default ACL is applied. + + +``` +sudo setfacl -m d⭕rx Music +getfacl Music/ +# file: Music/ +# owner: root +# group: root +user::rwx +group::--- +other::--- +default:user::rwx +default:group::--- +default:other::r-x +``` + +### Enter Ansible + +So how can Ansible, in all its wisdom, tackle the task of applying permissions to users, files, directories, and more? Ansible can play nicely with ACLs, just as it does with a lot of features, utilities, APIs, etc. Ansible has an out-of-the-box [ACL module][3] that allows you to create playbooks/roles around granting a user access to a file, removing ACLs for users on a specific file, setting default ACLs for users on files, or obtaining ACLs on particular files. + +Anytime you are administering ACLs, you should use the best practice of "least privilege," meaning you should give a user access only to what they need to perform their role or execute a task, and no more. Restraint and minimizing the attack surface are critical. The more access extended, the higher the risk of unauthorized access to company assets. + +Here's an example Ansible playbook: + +![Ansible playbook][4] + +As an admin, automating ACL management demands that your Ansible playbooks can scale across your infrastructure to increase speed, improve efficiency, and reduce the time it takes to achieve your goals. There will be times when you need to determine the ACL for a specific file. This is essentially the same as using **getfacl ** in Linux. 
If you want to determine the ACLs of many, specific files, start with a playbook that looks like this: + + +``` +\--- +\- hosts: all +tasks: +\- name: obtain the acl for a specific file +acl: +path: /etc/logrotate.d +user_nfsv4_acls: true +register: acl_info +``` + +You can use the following playbook to set permissions on files/directories: + +![Ansible playbook][5] + +This playbook grants user access to a file: + + +``` +\- hosts: +become: yes +gather_facts: no +tasks: +\- name: Grant user Shirley read access to a file +acl: +path: /etc/foo.conf +entity: shirley +etype: user +permissions: r +state: present +``` + +And this playbook grants user access to a directory: + + +``` +\--- +\- hosts: all +become: yes +gather_facts: no +tasks: +\- name: setting permissions on directory and user +acl: +path: /path/to/scripts/directory +entity: "{{ item }}" +etype: user +permissions: rwx +state: present +loop: +\- www-data +\- root +``` + +### Security realized? + +Applying ACLs to files and users is a practice you should take seriously in your role as a DevOps engineer. Security best practices and formal compliance often get little or no attention. When you allow access to files with sensitive data, you are always risking that the data will be tampered with, stolen, or deleted. Therefore, data protection must be a focal point in your security strategy. Ansible can be part of your security automation strategy, as demonstrated here, and your ACL application is as good a place to start as any. + +Automating your security practices will, of course, go beyond just managing ACLs; it might also involve [SELinux][6] configuration, cryptography, security, and compliance. Remember that Ansible also allows you to define your systems for security, whether it's locking down users and groups (e.g., managing ACLs), setting firewall rules, or applying custom security policies. + +Your security strategy should start with a baseline plan. As a DevOps engineer or admin, you should examine the current security strategy (or the lack thereof), then chart your plan for automating security in your environment. + +### Conclusion + +Using Ansible to manage your ACLs as part of your overall security automation strategy depends on the size of both the company you work for and the infrastructure you manage. Permissions, users, and files can quickly get out of control, potentially placing your security in peril and putting the company in a position you definitely don't want to it be. 
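
As a final practical note, whichever of the playbooks above you adapt, verify the result on a target host rather than trusting the play recap alone. This is a minimal sketch, assuming an inventory like the one shown near the top of this article and a hypothetical file name for the "grant Shirley read access" playbook:

```
# Run the playbook that grants user shirley read access to /etc/foo.conf.
# (acl-grant.yml is an assumed name for the example playbook above.)
ansible-playbook -i inventory.ini acl-grant.yml

# Spot-check the ACL on a managed host; expect a "user:shirley:r--" entry.
ansible webservers -i inventory.ini -b -m command -a 'getfacl /etc/foo.conf'
```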
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/manage-access-control-lists-ansible + +作者:[Taz Brown ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/heronthecli +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_container_block.png?itok=S8MbXEYw (Data container block with hexagons) +[2]: https://opensource.com/article/19/2/quickstart-guide-ansible +[3]: https://docs.ansible.com/ansible/latest/modules/acl_module.html +[4]: https://opensource.com/sites/default/files/images/acl.yml_.png (Ansible playbook) +[5]: https://opensource.com/sites/default/files/images/set_filedir_permissions.png (Ansible playbook) +[6]: https://opensource.com/article/18/8/cheat-sheet-selinux diff --git a/sources/tech/20190516 Create flexible web content with a headless management system.md b/sources/tech/20190516 Create flexible web content with a headless management system.md new file mode 100644 index 0000000000..df58e96d0d --- /dev/null +++ b/sources/tech/20190516 Create flexible web content with a headless management system.md @@ -0,0 +1,113 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create flexible web content with a headless management system) +[#]: via: (https://opensource.com/article/19/5/headless-cms) +[#]: author: (Sam Bocetta https://opensource.com/users/sambocetta) + +Create flexible web content with a headless management system +====== +Get the versatility and freedom to deliver content however you think is +best. +![Browser of things][1] + +In recent years, we’ve witnessed an explosion in the number of technological devices that deliver web-based content to users. Smartphones, tablets, smartwatches, and more—all with progressively advancing technical capabilities and support for an ever-widening list of operating systems and web browsers—swarm anew onto the market each year. + +What does this trend have to do with web development and headless versus traditional Content Management Systems (CMS)? Quite a lot. + +### CMS creates the internet + +A CMS is an application or set of computer programs used to manage digital content like images, videos, blog posts—essentially anything you would post on a website. An obvious example of a CMS is [WordPress][2]. + +The word "manage" is used broadly here. It can refer to creating, editing, or updating any kind of digital content on a website, as well as indexing the site to make it easily searchable. + +So, a CMS essentially separates the content displayed on a website from how that content is displayed. It also allows you, the website administrator, to set permissions on who can access, edit, modify, or otherwise manage that content. + +Suppose you want to post a new blog entry, update or correct something in an old post, write on your Facebook page, share a social media link to a video or article, or embed a video, music file, or pre-written set of text into a page on your website. If you have ever done anything like this, you have made use of CMS features. + +### Traditional CMS architecture: Benefits and flaws + +There are two major components that make up a CMS: the Content Management Application (CMA) and the Content Delivery Application (CDA). 
The CMA pertains to the front-end portion of the website. This is what allows authors or other content managers to edit and create content without help from a web developer. The CDA pertains to the back end portion of a website. By organizing and compiling content to make website content updates possible, it automates the function of a website administrator. + +Traditionally, these two pieces are joined into a single unit as a "coupled" CMS architecture. A **coupled CMS** uses a specific front-end delivery system (CMA) built into the application itself. The term "coupled" comes from the fact that the front-end framework—the templates and layout of the pages and how those pages respond to being opened in certain browsers—is coupled to the website’s content. In other words, in a coupled CMS architecture the Content Management Application (CMA) and Content Delivery Application (CDA) are inseparably merged. + +#### Benefits of the traditional CMS + +Coupled architecture does offer advantages, mainly in simplicity and ease of use for those who are not technically sophisticated. This fact explains why a platform like WordPress, which retains a traditional CMS setup, [remains so popular][3] for those who create websites or blogs. + +Further simplifying the web development process [are website builder applications][4], such as [Wix][5] and [Squarespace][6], which allow you to build drag-and-drop websites. The most popular of these builders use open source libraries but are themselves closed source. These sites allow almost anyone who can find the internet to put a website together without wading through the relatively short weeds of a CMS environment. While builder applications were [the object of derision][7] not so long ago amongst many in the open source community—mainly because they tended to give websites a generic and pre-packaged look and feel—they have grown increasingly functional and variegated. + +#### Security is an issue + +However, for all but the simplest web apps, a traditional CMS architecture results in inflexible technology. Modifying a static website or web app with a traditional CMS requires tremendous time and effort to produce updates, patches, and installations, preventing developers from keeping up with the growing number of devices and browsers. + +Furthermore, coupled CMSs have two built-in security flaws: + +**Risk #1** : Since content management and delivery are bound together, hackers who breach your website through the front end automatically gain access to the back-end database. This lack of separation between data and its presentation increases the likelihood that data will be stolen. Depending on the kind of user data stored on your website’s servers, a large-scale theft could be catastrophic. + +**Risk #2** : The risk of successful [Distributed Denial of Service][8] (DDoS) attacks increases without a separate system for delivering content to your website. DDoS attacks flood content delivery networks with so many traffic requests that they become overwhelmed and go offline. If your content delivery network is separated from your actual web servers, attackers will be less able to bring down your site. + +To avoid these problems, developers have introduced headless and decoupled CMSs. + +### Comparing headless and decoupled CMSs + +The "head" of a CMS is a catch-all term for the Content Delivery Application. Therefore, a CMS without one—and so with no way of delivering content to a user—is called "headless." 
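
In practice, "no head" means the CMS only hands you raw content over an API, and every decision about presentation is yours. The sketch below is generic (the endpoint, token variable, and field names are invented, and every headless product has its own API shape), but it shows the entirety of what such a system delivers out of the box:

```
# Ask a hypothetical headless CMS for one post as raw JSON, and pick
# out a couple of fields with jq.
curl -s -H "Authorization: Bearer $CMS_TOKEN" \
  https://cms.example.com/api/posts/42 | jq '.title, .body'

# That is all you get: no template, no theme, no HTML. Rendering it
# for a browser, a phone app, or a kiosk is entirely up to you.
```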
+ +This lack of an established delivery method gives headless CMSs enormous versatility. Without a CDA there is no pre-established delivery method, so developers can design separate frameworks as the need arises. The problem of constantly patching your website, web apps, and other code to guarantee compatibility disappears. + +Another option, a **decoupled CMS** , includes many of the same features and benefits as a headless CMS, but there is one crucial difference. Where a headless CMS leaves it entirely to the developer to deliver and present content to their users, a decoupled CMS offers pre-established delivery tools that developers can either take or leave. Decoupled CMSs thus offer both the simplicity of the traditional CMS and the versatility of the headless ones. + +In short, a decoupled CMS is sometimes called a **hybrid CMS ****since it's a hybrid of the coupled and headless designs. Decoupled CMSs are not a new concept. As far back as 2015, PHP core repository developer David Buchmann was [calling on devs][9] to decouple their CMSs to meet a wider set of challenges. + +### Security improvements with a headless CMS + +Perhaps the most important point to make about headless versus decoupled content management architectures, and how they both differ from traditional architecture, is the added security benefit. In both the headless and decoupled designs, content and user data are located on a separate back-end system protected by a firewall. The user can’t access the content management application itself. + +However, it's important to keep in mind that the major consequence of this change in architectures is that since the architecture is fragmented, developers have to fill in the gaps and design content delivery and presentation mechanisms on their own. This means that whether you opt to go headless or decoupled, your developer needs to understand security. While separating content management and content delivery gives hackers one fewer vector through which to attack, this isn’t a security benefit in itself. The burden will be on your devs to properly secure your resulting CDA. + +A firewall protecting the back end provides a [crucial layer of security][10]. Headless and decoupled architectures can distribute your content among multiple databases, so if you take advantage of this possibility you can lower the chance of successful DDoS attacks even further. Open source headless CMS can also benefit from the installation of a [Linux VPN][11] or Linux kernel firewall management tool like [iptables][12]. All of these options combine to provide the added security developers need to create no matter what kind of CDA or back end setup they choose. + +Benefits aside, keep in mind that headless CMS platforms are a fairly new tech. Before making the switch to headless or decoupled, consider whether the host you’re using can support your added security so that you can host your application behind network security systems to block attempts at unauthorized access. If they cannot, a host change might be in order. When evaluating new hosts, also consider any existing contracts or security and compliance restrictions in place (GDPR, CCPA, etc.) which could cause migration troubles. + +### Open source options + +As you can see, headless architecture offers designers the versatility and freedom to deliver content however they think best. 
This spirit of freedom fits naturally with the open source paradigm in software design, in which all source code is available to public view and may be taken and modified by anyone for any reason. + +There are a number of open source headless CMS platforms that allow developers to do just that: [Mura,][13] [dotCMS][14], and [Cockpit CMS][15] to name a few. For a deeper dive into the world of open source headless CMS platforms, [check out this article][16]. + +### Final thoughts + +For web designers and developers, the idea of a headless CMS marks a significant rethinking of how sites are built and delivered. Moving to this architecture is a great way to future-proof your website against changing preferences and whatever tricks future hackers may cook up, while at the same time creating a seamless user experience no matter what device or browser is used. You might also take a look at [this guide][17] for UX tips on designing your website in a way that meshes with headless and decoupled architectures. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/headless-cms + +作者:[Sam Bocetta][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/sambocetta +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things) +[2]: https://wordpress.org/ +[3]: https://kinsta.com/wordpress-market-share/ +[4]: https://hostingcanada.org/website-builders/ +[5]: https://www.wix.com/ +[6]: https://www.squarespace.com +[7]: https://arstechnica.com/information-technology/2016/11/wordpress-and-wix-trade-shots-over-alleged-theft-of-open-source-code/ +[8]: https://www.cloudflare.com/learning/ddos/what-is-a-ddos-attack/ +[9]: https://opensource.com/business/15/3/decoupling-your-cms +[10]: https://www.hostpapa.com/blog/security/why-your-small-business-needs-a-firewall/ +[11]: https://surfshark.com/download/linux +[12]: https://www.linode.com/docs/security/firewalls/control-network-traffic-with-iptables/ +[13]: https://www.getmura.com/ +[14]: https://dotcms.com/ +[15]: https://getcockpit.com/ +[16]: https://www.cmswire.com/web-cms/13-headless-cmss-to-put-on-your-radar/ +[17]: https://medium.com/@mat_walker/tips-for-content-modelling-with-the-headless-cms-contentful-7e886a911962 diff --git a/sources/tech/20190516 Monitor and Manage Docker Containers with Portainer.io (GUI tool) - Part-2.md b/sources/tech/20190516 Monitor and Manage Docker Containers with Portainer.io (GUI tool) - Part-2.md new file mode 100644 index 0000000000..58a9459a41 --- /dev/null +++ b/sources/tech/20190516 Monitor and Manage Docker Containers with Portainer.io (GUI tool) - Part-2.md @@ -0,0 +1,244 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-2) +[#]: via: (https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/) +[#]: author: (Shashidhar Soppin https://www.linuxtechi.com/author/shashidhar/) + +Monitor and Manage Docker Containers with Portainer.io (GUI tool) – Part-2 +====== + +As a continuation of Part-1, this part-2 has remaining features of Portainer covered and as explained below. 
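
If you are joining at Part-2, the walkthrough below assumes Portainer is already up and reachable on port 9000, as set up in Part-1. A typical way to launch it (assuming the usual bind mount of the Docker socket; adjust to match your own Part-1 setup) looks like this:

```
# Run Portainer itself as a container, publishing the web UI on port
# 9000 and giving it access to the local Docker engine via its socket.
docker run -d -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer
```

The docker ps output in the next section shows exactly such a container, published on 0.0.0.0:9000.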
+ +### Monitoring docker container images + +``` +root@linuxtechi ~}$ docker ps -a +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +9ab9aa72f015 ubuntu "/bin/bash" 14 seconds ago Exited (0) 12 seconds ago suspicious_shannon +305369d3b2bb centos "/bin/bash" 24 seconds ago Exited (0) 22 seconds ago admiring_mestorf +9a669f3dc4f6 portainer/portainer "/portainer" 7 minutes ago Up 7 minutes 0.0.0.0:9000->9000/tcp trusting_keller +``` + +Including the portainer(which is a docker container image), all the exited and present running docker images are displayed. Below screenshot from Portainer GUI displays the same. + +[![Docker_status][1]][2] + +### Monitoring events + +Click on the “Events” option from the portainer webpage as shown below. + +Various events that are generated and created based on docker-container activity, are captured and displayed in this page + +[![Container-Events-Poratiner-GUI][3]][4] + +Now to check and validate how the “ **Events** ” section works. Create a new docker-container image redis as explained below, check the docker ps –a status at docker command-line. + +``` +root@linuxtechi ~}$ docker ps -a +CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES +cdbfbef59c31 redis "docker-entrypoint.s…" About a minute ago Up About a minute 6379/tcp angry_varahamihira +9ab9aa72f015 ubuntu "/bin/bash" 10 minutes ago Exited (0) 10 minutes ago suspicious_shannon +305369d3b2bb centos "/bin/bash" 11 minutes ago Exited (0) 11 minutes ago admiring_mestorf +9a669f3dc4f6 portainer/portainer "/portainer" 17 minutes ago Up 17 minutes 0.0.0.0:9000->9000/tcp trusting_keller +``` + +Click the “Event List” on the top to refresh the events list, + +[![events_updated][5]][6] + +Now the event’s page also updated with this change, + +### Host status + +Below is the screenshot of the portainer displaying the host status. This is a simple window showing-up. This shows the basic info like “CPU”, “hostname”, “OS info” etc of the host linux machine. Instead of logging- into the host command-line, this page provides very useful info on for quick glance. + +[![Host-names-Portainer][7]][8] + +### Dashboard in Portainer + +Until now we have seen various features of portainer based under “ **Local”** section. Now jump on to the “ **Dashboard** ” section of the selected Docker Container image. + +When “ **EndPoint** ” option is clicked in the GUI of Portainer, the following window appears, + +[![End_Point_Settings][9]][10] + +This Dashboard has many statuses and options, for a host container image. + +**1) Stacks:** Clicking on this option, provides status of any stacks if any. Since there are no stacks, this displays zero. + +**2) Images:** Clicking on this option provides host of container images that are available. This option will display all the live and exited container images + +[![Docker-Container-Images-Portainer][11]][12] + +For example create one more “ **Nginx”** container and refresh this list to see the updates. + +``` +root@linuxtechi ~}$ sudo docker run nginx +Unable to find image 'nginx:latest' locally +latest: Pulling from library/nginx +27833a3ba0a5: Pull complete +ea005e36e544: Pull complete +d172c7f0578d: Pull complete +Digest: sha256:e71b1bf4281f25533cf15e6e5f9be4dac74d2328152edf7ecde23abc54e16c1c +Status: Downloaded newer image for nginx:latest +``` + +The following is the image after refresh, + +[![Nginx_Image_creation][13]][14] + +Once the Nginx image is stopped/killed and docker container image will be moved to unused status. 
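
To watch that change happen, stop and remove the container from a second terminal and then refresh the Images page. A small sketch (the filter simply picks whichever container was started from the nginx image):

```
# Find the container that was started from the nginx image.
docker ps --filter ancestor=nginx

# Stop it, then remove it, refreshing the Portainer view after each step.
docker stop $(docker ps -q --filter ancestor=nginx)
docker rm $(docker ps -aq --filter ancestor=nginx)
```

Once the container is gone, the nginx image is counted among the unused images, matching the behaviour described above.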
+ +**Note** :-One can see all the image details here are very clear with memory usage, creation date and time. As compared to command-line option, maintaining and monitoring containers from here it will be very easy. + +**3) Networks:** this option is used for network operations. Like assigning IP address, creating subnets, providing IP address range, access control (admin and normal user) . The following window provides the details of various options possible. Based on your need these options can be explored further. + +[![Conatiner-Network-Portainer][15]][16] + +Once all the various networking parameters are entered, “ **create network** ” button is clicked for creating the network. + +**4) Container:** (click on container) This option will provide the container status. This list will provide details on live and not running container statuses. This output is similar to docker ps command option. + +[![Containers-Status-Portainer][17]][18] + +From this window only the containers can be stopped and started as need arises by checking the check box and selecting the above buttons. One example is provided as below, + +Example, Both “CentOS” and “Ubuntu” containers which are in stopped state, they are started now by selecting check boxes and hitting “Start” button. + +[![start_containers1][19]][20] + +[![start_containers2][21]][22] + +**Note:** Since both are Linux container images, they will not be started. Portainer tries to start and stops later. Try “Nginx” instead and you can see it coming to “running”status. + +[![start_containers3][23]][24] + +**5) Volume:** Described in Part-I of Portainer Article + +### Setting option in Portainer + +Until now we have seen various features of portainer based under “ **Local”** section. Now jump on to the “ **Setting”** section of the selected Docker Container image. + +When “Settings” option is clicked in the GUI of Portainer, the following further configuration options are available, + +**1) Extensions** : This is a simple Portainer CE subscription process. The details and uses can be seen from the attached window. This is mainly used for maintaining the license and subscription of the respective version. + +[![Extensions][25]][26] + +**2) Users:** This option is used for adding “users” with or without administrative privileges. Following example provides the same. + +Enter the selected user name “shashi” in this case and your choice of password and hit “ **Create User** ” button below. + +[![create_user_portainer][27]][28] + +[![create_user2_portainer][29]][30] + +[![Internal-user-Portainer][31]][32] + +Similarly the just now created user “shashi” can be removed by selecting the check box and hitting remove button. + +[![user_remove_portainer][33]][34] + +**3) Endpoints:** this option is used for Endpoint management. Endpoints can be added and removed as shown in the attached windows. + +[![Endpoint-Portainer-GUI][35]][36] + +The new endpoint “shashi” is created using the various default parameters as shown below, + +[![Endpoint2-Portainer-GUI][37]][38] + +Similarly this endpoint can be removed by clicking the check box and hitting remove button. + +**4) Registries:** this option is used for registry management. As docker hub has registry of various images, this feature can be used for similar purposes. + +[![Registry-Portainer-GUI][39]][40] + +With the default options the “shashi-registry” can be created. + +[![Registry2-Portainer-GUI][41]][42] + +Similarly this can be removed if not required. 
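
If you would like something concrete to register besides Docker Hub, a throwaway local registry is enough for experimenting. This sketch uses the stock registry image with its usual defaults; the container name is made up:

```
# Start a local registry container listening on port 5000.
docker run -d -p 5000:5000 --name local-registry registry:2

# Tag and push an image into it so the registry has some content.
docker tag nginx localhost:5000/nginx
docker push localhost:5000/nginx
```

The resulting URL (localhost:5000) is the kind of value you would supply when defining a custom registry in the window above.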
+ +**5) Settings:** This option is used for the following various options, + + * Setting-up snapshot interval + * For using custom logo + * To create external templates + * Security features like- Disable and enable bin mounts for non-admins, Disable/enable privileges for non-admins, Enabling host management features + + + +Following screenshot shows some options enabled and disabled for demonstration purposes. Once all done hit on “Save Settings” button to save all these options. + +[![Portainer-GUI-Settings][43]][44] + +Now one more option pops-up on “Authentication settings” for LDAP, Internal or OAuth extension as shown below” + +[![Authentication-Portainer-GUI-Settings][45]][46] + +Based on what level of security features we want for our environment, respective option is chosen. + +That’s all from this article, I hope these Portainer GUI articles helps you to manage and monitor containers more efficiently. Please do share your feedback and comments. + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/monitor-manage-docker-containers-portainer-io-part-2/ + +作者:[Shashidhar Soppin][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/shashidhar/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker_status-1024x423.jpg +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Docker_status.jpg +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Events-1024x404.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Events.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/events_updated-1024x414.jpg +[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/events_updated.jpg +[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Host_names-1024x408.jpg +[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Host_names.jpg +[9]: https://www.linuxtechi.com/wp-content/uploads/2019/05/End_Point_Settings-1024x471.jpg +[10]: https://www.linuxtechi.com/wp-content/uploads/2019/05/End_Point_Settings.jpg +[11]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Images-1024x398.jpg +[12]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Images.jpg +[13]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Nginx_Image_creation-1024x439.jpg +[14]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Nginx_Image_creation.jpg +[15]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Network-1024x463.jpg +[16]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Network.jpg +[17]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Containers-1024x364.jpg +[18]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Containers.jpg +[19]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers1-1024x432.jpg +[20]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers1.jpg +[21]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers2-1024x307.jpg +[22]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers2.jpg +[23]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers3-1024x435.jpg +[24]: https://www.linuxtechi.com/wp-content/uploads/2019/05/start_containers3.jpg +[25]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Extensions-1024x421.jpg +[26]: 
https://www.linuxtechi.com/wp-content/uploads/2019/05/Extensions.jpg +[27]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user-1024x350.jpg +[28]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user.jpg +[29]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user2-1024x372.jpg +[30]: https://www.linuxtechi.com/wp-content/uploads/2019/05/create_user2.jpg +[31]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Internal-user-Portainer-1024x257.jpg +[32]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Internal-user-Portainer.jpg +[33]: https://www.linuxtechi.com/wp-content/uploads/2019/05/user_remove-1024x318.jpg +[34]: https://www.linuxtechi.com/wp-content/uploads/2019/05/user_remove.jpg +[35]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint-1024x349.jpg +[36]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint.jpg +[37]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint2-1024x379.jpg +[38]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Endpoint2.jpg +[39]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry-1024x420.jpg +[40]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry.jpg +[41]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry2-1024x409.jpg +[42]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Registry2.jpg +[43]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-GUI-Settings-1024x418.jpg +[44]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Portainer-GUI-Settings.jpg +[45]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Authentication-Portainer-GUI-Settings-1024x344.jpg +[46]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Authentication-Portainer-GUI-Settings.jpg diff --git a/sources/tech/20190516 System76-s secret sauce for success.md b/sources/tech/20190516 System76-s secret sauce for success.md new file mode 100644 index 0000000000..9409de535f --- /dev/null +++ b/sources/tech/20190516 System76-s secret sauce for success.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (System76's secret sauce for success) +[#]: via: (https://opensource.com/article/19/5/system76-secret-sauce) +[#]: author: (Don Watkins https://opensource.com/users/don-watkins/users/don-watkins) + +System76's secret sauce for success +====== +Linux computer maker's approach to community-informed software and +hardware development embodies the open source way. +![][1] + +In [_The Open Organization_][2], Jim Whitehurst says, "show passion for the purpose of your organization and constantly drive interest in it. People are drawn to and generally, want to follow passionate people." Carl Richell, the founder and CEO of Linux hardware maker [System76][3], pours that secret sauce to propel his company in the world of open hardware, Linux, and open source. + +Carl demonstrates quiet confidence and engages the team at System76 in a way that empowers their creative synergy. During a recent visit to System76's Denver factory, I could immediately tell that the employees love what they do, what they produce, and their interaction with each other and their customers, and Carl sets that example. They are as they [describe themselves][4]: a diverse team of creators, makers, and builders; a small company innovating the next big things; and a group of extremely hard-core nerds. 
+ +### A revolutionary approach + +In 2005, Carl had a vision, which began as talk over some beers, to produce desktop and laptop computers that come installed with Linux. He's transformed that idea into a highly successful company founded on the [belief][5] that "the computer and operating system are the most powerful and versatile tools ever created." And by producing the best tools, System76 can inspire the curious to make their greatest discovery or complete their greatest project. + +![System 76 founder and CEO Carl Richell][6] + +Carl Richell's enthusiasm was obvious at System 76's [Thelio launch event][7]. + +System76 lives up to its name, which was inspired by the American Revolution of 1776. The company views itself as a leader in the open source revolution, granting people freedom and independence from proprietary hardware and software. + +But the revolution does not end there; it continues with the company's business practices and diverse environment that aims to close the gender gap in technology leadership. Eight of the company's 28 employees are women, including vice president of marketing Louisa Bisio, creative manager Kate Hazen, purchasing manager May Liu, head of technical support Emma Marshall, and manufacturing control and logistics manager Sarah Zinger. + +### Community-informed design + +The staff members' passion and ingenuity for making the Linux experience enjoyable for customers creates an outstanding culture. Because the company believes the Linux desktop deserves a dedicated PC manufacturer, in 2018, it brought manufacturing in-house. This allows System76's engineers to make design changes more quickly, based on their frequent interactions with Linux users to learn about their needs and wants. It also opens up its parts and process to the public, including publishing design files under GPL on [GitHub][8], consistent with its commitment to openness and open source. + +For example, when System76 decided to create its own version of Linux, [Pop!_OS][9], it hosted online meetings to discuss and learn what features and software its customers wanted. This decision to work closely with the community has been instrumental in making Pop!_OS successful. + +System76 again turned to the community when it began developing [Thelio][10], its new line of desktop computers. Marketing VP Louisa Bisio says, "Taking a similar approach to open hardware has been great. We started in-house design in 2016, prototyping different desktop designs. Then we moved from prototyping acrylic to sheet metal. Then the first few prototypes of Thelio were presented to our [Superfan][11] attendees in 2017, and their feedback was really important in adjusting the desktop designs and progressing Thelio iterations forward." + +Thelio is the product of research and development focusing on high-quality components and design. It features a unique cabling layout, innovative airflow within the computer case, and the Thelio Io open hardware SATA controller. Many of System76's customers use platforms like [CUDA][12] to do their work; to support them, System76 works backward and pulls out proprietary functionality, piece by piece, until everything is open. + +### Open roads ahead + +Manufacturing open laptops are on the long-range roadmap, but the company is actively working on an open motherboard and maintaining Pop!_OS and System76 drivers, which are open. This commitment to openness, customer-driven design, and culture give System 76 a unique place in computer manufacturing. 
All of this stems from founder Carl Richell and his philosophy "that technology should be open and accessible to everyone." [As Carl says][13], "open hardware benefits all of us. It's how we further advance technology and make it more available to everyone." + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/system76-secret-sauce + +作者:[Don Watkins ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bubblehands_fromRHT_520_0612LL.png?itok=_iQ2dO3S +[2]: https://www.amazon.com/Open-Organization-Igniting-Passion-Performance/dp/1511392460 +[3]: https://system76.com/ +[4]: https://system76.com/about +[5]: https://system76.com/pop +[6]: https://opensource.com/sites/default/files/uploads/carl_richell.jpg (System 76 founder and CEO Carl Richell) +[7]: https://trevgstudios.smugmug.com/System76/121418-Thelio-Press-Event/i-w6XNmKS +[8]: https://github.com/system76 +[9]: https://opensource.com/article/18/1/behind-scenes-popos-linux +[10]: https://system76.com/desktops +[11]: https://system76.com/superfan +[12]: https://en.wikipedia.org/wiki/CUDA +[13]: https://opensource.com/article/19/4/system76-hardware diff --git a/sources/tech/20190517 Announcing Enarx for running sensitive workloads.md b/sources/tech/20190517 Announcing Enarx for running sensitive workloads.md new file mode 100644 index 0000000000..81d021f7d7 --- /dev/null +++ b/sources/tech/20190517 Announcing Enarx for running sensitive workloads.md @@ -0,0 +1,83 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Announcing Enarx for running sensitive workloads) +[#]: via: (https://opensource.com/article/19/5/enarx-security) +[#]: author: (Mike Bursell https://opensource.com/users/mikecamel/users/wgarry155) + +Announcing Enarx for running sensitive workloads +====== +Enarx leverages the capabilities of a TEE to change the trust model for +your application. +![cubes coming together to create a larger cube][1] + +Running software is something that most of us do without thinking about it. We run in "on premises"—our own machines—or we run it in the cloud - on somebody else's machines. We don't always think about what those differences mean, or about what assumptions we're making about the securtiy of the data that's being processed, or even of the software that's doing that processing. Specifically, when you run software (a "workload") on a system (a "host") on the cloud or on your own premises, there are lots and lots of layers. You often don't see those layers, but they're there. + +Here's an example of the layers that you might see in a standard cloud virtualisation architecture. The different colours represent different entities that "own" different layers or sets of layers. + +![Layers in a standard cloud virtualisation architecture][2] + +Here's a similar diagram depicting a standard cloud container architecture. As before, each different colour represents a different "owner" of a layer or set of layers. 
+ +![Standard cloud container architecture][3] + +These owners may be of very different types, from hardware vendors to OEMs to cloud service providers (CSPs) to middleware vendors to operating system vendors to application vendors to you, the workload owner. And for each workload that you run, on each host, the exact list of layers is likely to be different. And even when they're the same, the versions of the layers instances may be different, whether it's a different BIOS version, a different bootloader, a different kernel version, or whatever else. + +Now, in many contexts, you might not worry about this, and your CSP goes out of its way to abstract these layers and their version details away from you. But this is a security article, for security people, and that means that anybody who's reading this probably does care. + +The reason we care is not just the different versions and the different layers, but the number of different things—and different entities—that we need to trust if we're going to be happy running any sort of sensitive workload on these types of stacks. I need to trust every single layer, and the owner of every single layer, not only to do what they say they will do, but also not to be compromised. This is a _big_ stretch when it comes to running my sensitive workloads. + +### What's Enarx? + +Enarx is a new project that is trying to address this problem of having to trust all of those layers. A few of us at Red Hat have been working on it for a few months now. My colleague Nathaniel McCallum demoed an early incarnation of it at [Red Hat Summit 2019][4] in Boston, and we're ready to start announcing it to the world. We have code, we have a demo, we have a GitHub repository, we have a logo: what more could a project want? Well, people—but we'll get to that. + +![Enarx logo][5] + +With Enarx, we made the decision that we wanted to allow people running workloads to be able to reduce the number of layers—and owners—that they need to trust to the absolute minimum. We plan to use trusted execution environments ("TEEs"—see "[Oh, how I love my TEE (or do I?)][6]") to provide an architecture that looks a little more like this: + +![Enarx architecture][7] + +In a world like this, you have to trust the CPU and firmware, and you need to trust some middleware—of which Enarx is part—but you don't need to trust all of the other layers, because we will leverage the capabilities of the TEE to ensure the integrity and confidentiality of your application. The Enarx project will provide attestation of the TEE, so that you know you're running on a true and trusted TEE, and will provide open source, auditable code to help you trust the layer directly beneath your application. + +The initial code is out there—working on AMD's SEV TEE at the momen—and enough of it works now that we're ready to tell you about it. + +Making sure that your application meets your own security requirements is down to you. :-) + +### How do I find out more? + +The easiest way to learn more is to visit the [Enarx GitHub][8]. + +We'll be adding more information there—it's currently just code—but bear with us: there are only a few of us on the project at the moment. A blog is on the list of things we'd like to have, but we wanted to get things started. + +We'd love to have people in the community getting involved in the project. It's currently quite low-level and requires quite a lot of knowledge to get running, but we'll work on that. You will need some specific hardware to make it work, of course. 
Oh, and if you're an early boot or a low-level KVM hacker, we're _particularly_ interested in hearing from you. + +I will, of course, respond to comments on this article. + +* * * + +_This article was originally published on[Alice, Eve, and Bob][9] and is reprinted with the author's permission._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/enarx-security + +作者:[Mike Bursell ][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mikecamel/users/wgarry155 +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube) +[2]: https://opensource.com/sites/default/files/uploads/classic-cloud-virt-arch-1.png (Layers in a standard cloud virtualisation architecture) +[3]: https://opensource.com/sites/default/files/uploads/cloud-container-arch.png (Standard cloud container architecture) +[4]: https://www.redhat.com/en/summit/2019 +[5]: https://opensource.com/sites/default/files/uploads/enarx.png (Enarx logo) +[6]: https://aliceevebob.com/2019/02/26/oh-how-i-love-my-tee-or-do-i/ +[7]: https://opensource.com/sites/default/files/uploads/reduced-arch.png (Enarx architecture) +[8]: https://github.com/enarx +[9]: https://aliceevebob.com/2019/05/07/announcing-enarx/ diff --git a/sources/tech/20190518 Best Linux Distributions for Beginners.md b/sources/tech/20190518 Best Linux Distributions for Beginners.md new file mode 100644 index 0000000000..1630f1bc85 --- /dev/null +++ b/sources/tech/20190518 Best Linux Distributions for Beginners.md @@ -0,0 +1,206 @@ +[#]: collector: (lujun9972) +[#]: translator: () +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Best Linux Distributions for Beginners) +[#]: via: (https://itsfoss.com/best-linux-beginners/) +[#]: author: (Aquil Roshan https://itsfoss.com/author/aquil/) + +Best Linux Distributions for Beginners +====== + +_**Brief** : In this article, we will see the **best Linux distro for beginners**_. This will help new Linux users to pick their first distribution. + +Let’s face it, [Linux][1] can pose an overwhelming complexity to new users. But then, it’s not Linux itself that brings this complexity. Rather, it’s the “newness” factor that causes this. Not getting nostalgic, but remembering my first time with Linux, I didn’t even know what to expect. I liked it. But it was an upstream swim for me initially. + +Not knowing where to start can be a downer. Especially for someone who does not have the concept of something else running on their PC in place of Windows. + +Linux is more than an OS. It’s an idea where everybody grows together and there’s something for everybody. We have already covered: + + * [Best Linux distributions for Windows users][2] + * [Best lightweight Linux distros][3] + * [Best Linux distributions for hacking][4] + * [Best Linux distributions for gaming][5] + * [Best Linux distributions for privacy and anonymity][6] + * [Best Linux distributions that look like MacOS][7] + + + +In addition to that, there are distributions that cater to the needs of newcomers especially. So here are a few such Linux distros for beginners. 
You can watch it in a video and [subscribe to our YouTube channel][8] for more Linux related videos. + +### Best Linux Distros for Beginners + +Please remember that this list is no particular order. The main criteria for compiling this list is ease of installation, out of the box hardware software, ease of use and availability of software packages. + +#### 1\. Ubuntu + +If you’ve researched Linux on the internet, it’s highly probable that you have come across Ubuntu. Ubuntu is one of the leading Linux distributions. It is also the perfect path to begin your Linux journey. + +![][9] + +Ubuntu has been tagged as Linux for human beings. Now, this is because Ubuntu has put in a lot of effort on universal usability. Ubuntu does not require you to be technically sound for you to use it. It breaks the notion of Linux=Command line hassle. This is one of the major plus points that rocketed Ubuntu to where it is today. + +Ubuntu offers a very convenient installation procedure. The installer speaks plain English (or any major language you want). You can even try out Ubuntu before actually going through the installation procedure. The installer provides simple options to: + + * Install Ubuntu removing the older OS + * [Install Ubuntu alongside Windows][10] or any other existing OS (A choice is given at every startup to select the OS to boot). + * Configure partitions for users who know what they are doing. + + + +_Beginner tip: Select the second option if you are not sure about what to do._ + +Ubuntu’s user interface is called GNOME. It is as simple as well as productive as it gets. You can search anything from applications to files by pressing the Windows key. Is there any way you can make this simpler? + +There are no driver installation issues as Ubuntu comes with a hardware detector which detects, downloads and installs optimal drivers for your PC. Also, the installation comes with all the basic software like a music player, video player, an office suite and games for some time killing. + +Ubuntu has a great documentation and community support. [Ubuntu forums ][11]and [Ask Ubuntu][12] provide an appreciable quality support in almost all aspects regarding Ubuntu. It’s highly probable that any question you might have will already be answered. And the answers are beginner friendly. + +Do check out and download [Ubuntu][13] at the [official site.][13] + +#### 2\. Linux Mint Cinnamon + +For years, Linux Mint has been the **number one** Linux distribution on [Distrowatch][14]. Well deserved throne I must say. Linux mint is one of my personal favorites. It is elegant, graceful and provides a superior computing experience (out of the box). + +![][15] + +Linux Mint features the Cinnamon desktop environment. New Linux users who are still in the process of familiarizing themselves with Linux software will find Cinnamon very useful. All the software are very accessibly grouped under categories. Although this is nothing of a mind-blowing feature, to new users who do not know the names of Linux software, this is a huge bonus. + +[][16] + +Suggested read Installing Microsoft Visual Studio Code on Linux + +Linux Mint is fast. Runs fine on older computers. Linux Mint is built upon the rock-solid Ubuntu base. It uses the same software repository as Ubuntu. About the Ubuntu software repository, Ubuntu pushes software for general only use after extensive testing. 
This means users will not have to deal with unexpected crashes and glitches that some new software are prone to, which can be a real no-no for new Linux users. + +![][17] + +Windows 7 lovers who are really not into where Microsoft if heading with Windows 10 will find Linux Mint lovable. Linux Mint desktop is pretty similar to Windows 7 desktop. Similar toolbar, similar menu, similar tray icons are all set to make Windows users feel absolutely at home. + +Personally, I’m more likely to suggest Linux Mint to someone who is new to Linux world as Linux Mint does impress users enough for them to accept it. To me, Linux Mint should be the first among the list of Linux for beginners. + +Do check out [Linux Mint here][18]. Go for the Cinnamon version. + +#### 3\. Zorin OS + +A majority of computer users are Windows users. And when a [Windows user gets a Linux][2], there’s a fair amount of ‘unlearning process’ that user must go through. A huge amount of operations have been fixed in our muscle memory. For example, the mouse reaching to the lower left corner of the screen (Start) everytime you want to launch an application. So if we could find something that eases these issues on Linux, it’s half a battle won. Enter Zorin OS. + +![][19] + +Zorin OS is an Ubuntu-based, highly polished Linux distribution, entirely made for Windows refugees. Although pretty much every Linux distro is usable by everybody, some people might tend to be reluctant when the desktop looks too alien. Zorin OS dodges past this obstacle because of its similarities with Windows appearance wise. + +Package managers are something of a new concept to Linux newcomers. That’s why Zorin OS comes with a huge (I mean really huge) list of pre-installed software. Anything you need, there’s good chance it’s already installed on Zorin OS. As if that was not enough, [Wine and PlayOnLinux][20] come pre-installed so you can run your loved Windows software and [games][21] here too. + +![][22] + +Zorin OS comes with an amazing theme engine called the ‘Zorin look changer’. It offers some heavy customization options with presets to make your OS look like Windows 7, XP, 2000 or even a Mac for that matter. You’re going to feel home. + +![][23] + +These features make Zorin OS the _**best Linux distro for beginners**_ , isn’t it? Do check out the [Zorin OS website][24] to know more and download the OS. + +#### 4\. Elementary OS + +Since we have taken a look at Linux distros for Windows users, let’s swing by something for MacOS users too. Elementary OS very quickly rose to fame and now is always included in the list of top distros, all thanks to its aesthetic essence. Inspired by MacOS looks, Elementary OS is one of the most beautiful Linux distros. + +![][25] + +Elementary OS is another Ubuntu-based operating system which means the operating system itself is unquestionably stable. Elementary OS features the Pantheon desktop environment. You can immediately notice the resemblance to MacOS desktop. This is an advantage to MacOS users switching to Linux as they will much comfortable with the desktop and this really eases the process of coping to this change. + +![][26] + +The menu is simple and customizable according to user preferences. The operating system is zero intrusive so you can really focus on your work. It comes with a very small number of pre-installed software. So, any new user will not be repulsed by huge bloat. But hey, it’s got everything you need out of the box. For more software, Elementary OS provides a neat AppCenter. 
It is highly accessible and simple. Everything at one place. You can get all the software you want and perform upgrades in clicks. + +[][27] + +Suggested read How to Install and Use Slack in Linux + +Experience wise, [Elementary OS][28] is really a great piece of software. Definitely give [it a try.][28] + +#### 5\. Linux Mint Mate + +A good number of people who come to Linux are looking to revive older computers. With Windows 10, many computers that had decent specs just some years ago have become incompetent. A quick google will suggest you install Linux on such computers. In that way, you can keep them running up to the mark for the near future. Linux Mint Mate is a great Linux distro if you are looking for something to run your older computers. + +![][29] + +Linux Mint Mate is very light, resource efficient but still a polished distro. It can run smoothly on computers with less muscle power. The desktop environment does not come with bells and jingles. But in no way is it functionally inferior to any other desktop environments. The operating system is non-intrusive and allows you to have a productive computing experience without getting in your way. + +Again, the Linux Mint Mate is based on Ubuntu and has the advantage of huge base solid Ubuntu software repository. It comes with a minimum number of necessities pre-installed. Easy driver installation and setting management are made available. + +You can run Linux Mint Mate even if you have 512 MB RAM and 9 GB hard disk space (the more the merrier). + +The Mate desktop environment is really simple to use with no twists in the tale. This is really a huge plus point for Linux beginners. All the more reason to [try out Linux Mint Mate][30]. + +#### 6\. Manjaro Linux + +Ok. Any long time Linux user will say guiding a newcomer even in the general direction of Arch Linux is a sin. But hear me out. + +Arch is considered experts-only Linux because of it’s highly complex installation procedure. Manajro and Arch Linux have a common origin. But they differ extensively in everything else. + +![][31] + +Manajro Linux has an extremely beginner friendly installation procedure. A lot of things are automated like driver installation using ‘Hardware detection’. Manjaro hugely negates the hardware driver hassles that torments a lot of other Linux distros. And even if you face any issues, Manjaro has an amazing community support. + +Manjaro has its own software repository which maintains the latest of software. While providing up to date software to users is a priority, guaranteed stability is not at all compromised. This is one of the prime differences between Arch and Manjaro. Manjaro delays package releases to make sure they are absolutely stable and no regression will be caused. You can also access the Arch User Repository on Manjaro, so anything and everything you need, is always available. + +If you want to know more about Manjaro features, do read my colleague [John’s experience with Manjaro Linux and why he is hooked][32] to it. + +![][33] + +Manjaro Linux comes in XFCE, KDE, Gnome, Cinnamon and a host of more desktop environments. Do check out the [official website][34]. + +To install any of the above 6 operating systems, you need to create a bootable USB stick. If you are currently using Windows [use this guide to do so][35]. Mac OS users may [follow this guide][36]. + +**Your choice for the best Linux distro for beginners?** + +Linux might come with a learning curve, but that’s not something anybody ever regretted. 
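One small, practical habit worth forming before you write anything to a USB stick: verify the ISO you downloaded. Every distribution above publishes checksums alongside its images; the file names in this sketch are placeholders, so substitute whatever you actually downloaded:

```bash
# Compare the computed checksum against the value published on the distro's download page
sha256sum ubuntu-18.04-desktop-amd64.iso

# Or, if the project ships a SHA256SUMS file, let the tool do the comparison for you
sha256sum -c SHA256SUMS 2>/dev/null | grep ubuntu-18.04-desktop-amd64.iso
```

A mismatch usually just means a broken download, but it only takes a few seconds to rule out.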
Go ahead get an ISO and check out Linux. If you are already a Linux user, do share this article and help someone fall in love with Linux in this season of love. Cheers. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-linux-beginners/ + +作者:[Aquil Roshan][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/aquil/ +[b]: https://github.com/lujun9972 +[1]: https://www.linux.com/what-is-linux +[2]: https://itsfoss.com/windows-like-linux-distributions/ +[3]: https://itsfoss.com/lightweight-linux-beginners/ +[4]: https://itsfoss.com/linux-hacking-penetration-testing/ +[5]: https://itsfoss.com/linux-gaming-distributions/ +[6]: https://itsfoss.com/privacy-focused-linux-distributions/ +[7]: https://itsfoss.com/macos-like-linux-distros/ +[8]: https://www.youtube.com/c/itsfoss +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/06/ubuntu-18-04-desktop.jpeg?resize=800%2C450&ssl=1 +[10]: https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/ +[11]: https://ubuntuforums.org/ +[12]: http://askubuntu.com/ +[13]: https://www.ubuntu.com/ +[14]: https://distrowatch.com/ +[15]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/02/LM_Home.jpg?ssl=1 +[16]: https://itsfoss.com/install-visual-studio-code-ubuntu/ +[17]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/02/LM_SS.jpg?ssl=1 +[18]: https://linuxmint.com/ +[19]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/02/Zorin.jpg?ssl=1 +[20]: https://itsfoss.com/use-windows-applications-linux/ +[21]: https://itsfoss.com/linux-gaming-guide/ +[22]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/02/Zorin-office.jpg?ssl=1 +[23]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/02/OSX.jpg?ssl=1 +[24]: https://zorinos.com/ +[25]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/02/Pantheon-Desktop.jpg?resize=800%2C500&ssl=1 +[26]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/02/Application-Menu.jpg?ssl=1 +[27]: https://itsfoss.com/slack-use-linux/ +[28]: https://elementary.io/ +[29]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/02/mate.jpg?ssl=1 +[30]: http://blog.linuxmint.com/?p=3182 +[31]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2017/02/manajro.jpg?ssl=1 +[32]: https://itsfoss.com/why-use-manjaro-linux/ +[33]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2017/02/manjaro-kde.jpg?ssl=1 +[34]: https://manjaro.org/ +[35]: https://www.ubuntu.com/download/desktop/create-a-usb-stick-on-windows +[36]: https://www.ubuntu.com/download/desktop/create-a-usb-stick-on-macos diff --git a/sources/tech/20190519 The three Rs of remote work.md b/sources/tech/20190519 The three Rs of remote work.md new file mode 100644 index 0000000000..f40f8b652e --- /dev/null +++ b/sources/tech/20190519 The three Rs of remote work.md @@ -0,0 +1,65 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The three Rs of remote work) +[#]: via: (https://dave.cheney.net/2019/05/19/the-three-rs-of-remote-work) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +The three Rs of remote work +====== + +I started working remotely in 2012. 
Since then I’ve worked for big companies and small, organisations with outstanding remote working cultures, and others that probably would have difficulty spelling the word without predictive text. I broadly classify my experiences into three tiers; + +### Little r remote + +The first kind of remote work I call _little r_ remote. + +Your company has an office, but it’s not convenient or you don’t want to work from there. It could be the commute is too long, or its in the next town over, or perhaps a short plane flight away. Sometimes you might go into the office for a day or two a week, and should something serious arise you could join your co-workers onsite for an extended period of time. + +If you often hear people say they are going to work from home to get some work done, that’s little r remote. + +### Big R remote + +The next category I call _Big R_ remote. Big R remote differs mainly from little r remote by the tyranny of distance. It’s not impossible to visit your co-workers in person, but it is inconvenient. Meeting face to face requires a day’s flying. Passports and boarder crossings are frequently involved. The expense and distance necessitates week long sprints and commensurate periods of jetlag recuperation. + +Because of timezone differences meetings must be prearranged and periods of overlap closely guarded. Communication becomes less spontaneous and care must be taken to avoid committing to unsustainable working hours. + +### Gothic ℜ remote + +The final category is basically Big R remote working on hard mode. Everything that was hard about Big R remote, timezone, travel schedules, public holidays, daylight savings, video call latency, cultural and language barriers is multiplied for each remote worker. + +In person meetings are so rare that without a focus on written asynchronous communication progress can repeatedly stall for days, if not weeks, as miscommunication leads to disillusionment and loss of trust. + +In my experience, for knowledge workers, little r remote work offers many benefits over [the open office hell scape][1] du jour. Big R remote takes a serious commitment by all parties and if you are the first employee in that category you will bare most of the cost to making Big R remote work for you. + +Gothic ℜ remote working should probably be avoided unless all those involved have many years of working in that style _and_ the employer is committed to restructuring the company as a remote first organisation. It is not possible to succeed in a Gothic ℜ remote role without a culture of written communication and asynchronous decision making mandated, _and consistently enforced,_ by the leaders of the company. + +#### Related posts: + + 1. [How to dial remote SSL/TLS services in Go][2] + 2. [How does the go build command work ?][3] + 3. [Why Slack is inappropriate for open source communications][4] + 4. 
[The office coffee model of concurrent garbage collection][5] + + + +-------------------------------------------------------------------------------- + +via: https://dave.cheney.net/2019/05/19/the-three-rs-of-remote-work + +作者:[Dave Cheney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://dave.cheney.net/author/davecheney +[b]: https://github.com/lujun9972 +[1]: https://twitter.com/davecheney/status/761693088666357760 +[2]: https://dave.cheney.net/2010/10/05/how-to-dial-remote-ssltls-services-in-go (How to dial remote SSL/TLS services in Go) +[3]: https://dave.cheney.net/2013/10/15/how-does-the-go-build-command-work (How does the go build command work ?) +[4]: https://dave.cheney.net/2017/04/11/why-slack-is-inappropriate-for-open-source-communications (Why Slack is inappropriate for open source communications) +[5]: https://dave.cheney.net/2018/12/28/the-office-coffee-model-of-concurrent-garbage-collection (The office coffee model of concurrent garbage collection) diff --git a/sources/tech/20190520 Blockchain 2.0 - Explaining Distributed Computing And Distributed Applications -Part 11.md b/sources/tech/20190520 Blockchain 2.0 - Explaining Distributed Computing And Distributed Applications -Part 11.md new file mode 100644 index 0000000000..c34effe6be --- /dev/null +++ b/sources/tech/20190520 Blockchain 2.0 - Explaining Distributed Computing And Distributed Applications -Part 11.md @@ -0,0 +1,88 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – Explaining Distributed Computing And Distributed Applications [Part 11]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/) +[#]: author: (editor https://www.ostechnix.com/author/editor/) + +Blockchain 2.0 – Explaining Distributed Computing And Distributed Applications [Part 11] +====== + +![Explaining Distributed Computing And Distributed Applications][1] + +### How DApps serve the purpose of [Blockchain 2.0][2] + +**Blockchain 1.0** was about introducing the “blockchain” into the list of modern buzzwords along with the advent of **bitcoin**. Multiple white papers detailing bitcoin’s underlying blockchain network specified the use of the blockchain for other uses as well. Although most of the said uses was around the basic concept of using the blockchain as a **decentralized medium** for storage, a use that stems from this property is utilizing it for carrying out **Distributed computing** on top of this layer. + +**DApps** or **Distributed Applications** are computer programs that are stored and run on a distributed storage system such as the [**Ethereum**][3] blockchain for instance. To understand how DApps function and how they’re different from traditional applications on your desktop or phone, we’ll need to delve into what distributed computing is. This post will explore some fundamental concepts of distributed computing and the role of blockchains in executing the said objective. Furthermore, well also look at a few applications or DApps, in blockchain lingo, to get a hang of things. + +### What is Distributed Computing? + +We’re assuming many readers are familiar with multi-threaded applications and multi-threading in general. 
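For anyone who wants a concrete refresher, the core idea — split a job into independent chunks and let them run at the same time — can be sketched in a few lines of shell. This is only an illustration of the concept, not anything blockchain-specific:

```bash
#!/bin/bash
# Toy example: "process" four independent chunks of work in parallel,
# then wait for all of them to finish before combining the results.
for chunk in part1 part2 part3 part4; do
    (
        # stand-in for real work, e.g. rendering one segment of a video
        sleep 1
        echo "finished $chunk"
    ) &
done
wait
echo "all chunks done"
```

Run sequentially, the four pieces of work would take roughly four times as long; run in parallel they finish in about the time of the slowest one, which is the whole appeal.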
Multi-threading is the reason why processor manufacturers are forever hell bent on increasing the core count on their products. Fundamentally speaking, some applications such as video rendering software suites are capable of dividing their work (in this case rendering effects and video styles) into multiple chunks and parallelly get them processed from a supporting computing system. This reduces the lead time on getting the work done and is generally more efficient in terms of time, money and energy usage. Applications such as some games however, cannot make use of this system since processing and responses need to be obtained real time based on user inputs rather than via planned execution. Nonetheless, the fact that more processing power may be exploited from existing hardware using these computing methods remains true and significant. + +Even supercomputers are basically a bunch of powerful CPUs all tied up together in a circuit to enable faster processing as mentioned above. The average core count on flagship CPUs from the lead manufacturers AMD and Intel have in fact gone up in the last few years, because increasing core count has recently been the only method to claim better processing and claim upgrades to their product lines. This information notwithstanding, the fact remains that distributed computing and related concepts of parallel computing are the only legitimate ways to improve processing capabilities in the near future. There are minor differences between distributed and parallel computing models as well, however that is beyond the scope off this post. + +Another method to get many computers executing programs simultaneously is to connect them through the internet and have a cloud-based program to be implemented in parts by all of the participating systems. This is the basic fundamental behind distributed applications. + +For a more detailed account and primer regarding what and how parallel computing works, interested readers may visit [this][4] webpage. For a more detailed study of the topic, for people who have a background in computer science, you may refer to [this][5] website and the accompanying book. + +### What are DApps or Distributed Applications + +Application that can make use of the capabilities offered by a distributed computing system is called a **distributed application**. The execution and structure of such an application’s back end needs to be carefully designed in order to be compatible with the system. + +The blockchain presents an opportunity to store data in a distributed system of participating nodes. Stepping up from this opportunity we can logically build systems and applications running on such a network (think about how you used to download files via the Torrent protocol). + +Such decentralized applications present a lot of benefits over conventional applications that typically run from a central server. Some highlights are: + + * DApps run on a network of such participating nodes and any user request is parsed through such network nodes to provide the user with the requested functionality. _**Program is executed on the network instead of a single computer or a server**_. + * DApps will have codified methods of filtering through requests and executing them so as to always be fair and transparent when users interact with it. To create a new block of data in the chain, the same has to be approved via a **consensus algorithm** by the participating nodes. This fundamental idea of peer to peer approval applies for DApps as well. 
This essentially means that DApps cannot by extension of this principle provide different outputs to the same query or input. All users will be given the same priority unless it is explicitly mentioned and all users will receive similar results from the DApp as well. This will prove to be important in developing better industry practices for insurance and finance companies for instance. A DApp that specializes in microlending, for instance, cannot differentiate and offer different interest rates for different borrowers other than their credit history. This also means that all users will eventually end up paying for their required operations uniformly depending on the computational complexity of the task they passed on to the application. For instance, combing through 10000 entries of data will cost proportionately more than combing through say 100. The payment or incentivisation system might be different for different applications and blockchain protocols though. + * Most DApps are by default redundant and fail safe. If you’re using a service which is run on a central server, a failure from the server end will freeze the application. Think of a service such as PayPal for instance. If the PayPal server in your immediate region fails due to some reason and somehow the central server cannot re route your request, your payment will not go through. However, even in case multiple participating nodes in the blockchain dies, you will still find the application live and running provided at least one node is live. This presents a use case for applications which are by definition supposed to be live all the time. Emergency services, insurance, communications etc., are some key areas where investors hope such DApps will bring in much needed reliability. + * DApps are usually cost-effective owing to them not requiring a central server to be maintained for their functionality. Once they become mainstream, the mean computing cost of running tasks on the same is also supposed to decrease. + * DApps will as mentioned exist till eternity at least until one participant is live on the chain. This essentially means that DApps cannot be censored or hacked into bowing and shutting down. + + + +The above list of features seems very few, however, combine that with all the other capabilities of the blockchain, the advancement of wireless network access, and, the increasing capabilities of millions of smartphones and here we have in our hands nothing less than a paradigm shift in how the apps that we rely on work. + +We will look deeper into how DApps function and how you can make your own DApps on the Ethereum blockchain in a proceeding post. To give you an idea of the DApp environment right now, we present 4 carefully chosen examples that are fairly advanced and popular. + +##### 1\. BITCOIN (or any Cryptocurrency) + +We’re very sure that readers did not expect BITCOIN to be one among a list of applications in this post. The point we’re trying to make here however, is that any cryptocurrency currently running on a blockchain backbone can be termed as a DApp. Cryptocurrencies are in fact the most popular DApp format out there and a revolutionary one at that too. + +##### 2\. [MELON][6] + +We’ve talked about how asset management can be an easier task utilizing blockchain and [**smart contracts**][7]. **Melon** is a company that aims to provide its users with usable relevant tools to manage and maximize their returns from the assets they own. 
They specialize in cryptographic assets as of now with plans to turn to real digitized assets in the future. + +##### 3\. [Request][8] + +**Request** is primarily a ledger system that handles financial transactions, invoicing, and taxation among other things. Working with other compatible databases and systems it is also capable of verifying payer data and statistics. Large corporations which typically have a significant number of defaulting customers will find it easier to handle their operations with a system such as this. + +##### 4\. [CryptoKitties][9] + +Known the world over as the video game that broke the Ethereum blockchain, **CryptoKitties** is a video game that runs on the Ethereum blockchain. The video game identifies each user individually by building your own digital profiles and gives you unique **virtual cats** in return. The game went viral and due to the sheer number of users it actually managed to slow down the Ethereum blockchain and its transaction capabilities. Transactions took longer than usual with users having to pay significantly extra money for simple transactions even. Concerns regarding scalability of the Ethereum blockchain have been raised by several stakeholders since then. + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Distributed-Computing-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[3]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ +[4]: https://www.techopedia.com/definition/7/distributed-computing-system +[5]: https://www.distributed-systems.net/index.php/books/distributed-systems-3rd-edition-2017/ +[6]: https://melonport.com/ +[7]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[8]: https://request.network/en/use-cases/ +[9]: https://www.cryptokitties.co/ diff --git a/sources/tech/20190520 How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux.md b/sources/tech/20190520 How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux.md new file mode 100644 index 0000000000..4e9df8a0ff --- /dev/null +++ b/sources/tech/20190520 How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux.md @@ -0,0 +1,229 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux?) +[#]: via: (https://www.2daygeek.com/shell-script-map-oracle-asm-disks-physical-disk-lun-in-linux/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +How To Map Oracle ASM Disk Against Physical Disk And LUNs In Linux? +====== + +You might already know about ASM, Device Mapper Multipathing (DM-Multipathing) if you are working quit long time as a Linux administrator. + +There are multiple ways to check these information. However, you will be getting part of the information when you use the default commands. + +It doesn’t show you all together in the single output. 
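For instance, each of the stock commands below (the same ones this article builds on later) answers only one part of the question — which ASM disks exist, which device numbers they sit on, and which multipath devices map to which LUNs — and you are left to correlate the outputs by hand:

```bash
# List the ASM disk labels that have been created
oracleasm listdisks

# Show the major/minor numbers behind each ASM disk
ls -lh /dev/oracleasm/disks

# Show the multipath (DM) devices and the LUNs behind them
multipath -ll
```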
+ +If you want to check all together in the single output then we need to write a small shell script to achieve this. + +We have added two shell script to get those information and you can use which one is suitable for you. + +Major and Minor numbers can be used to match the physical devices in Linux system. + +This tutorial helps you to find which ASM disk maps to which Linux partition or DM Device. + +If you want to **[manage Oracle ASM disks][1]** (such as start, enable, stop, list, query and etc) then navigate to following URL. + +### What Is ASMLib? + +ASMLib is an optional support library for the Automatic Storage Management feature of the Oracle Database. + +Automatic Storage Management (ASM) simplifies database administration and greatly reduces kernel resource usage (e.g. the number of open file descriptors). + +It eliminates the need for the DBA to directly manage potentially thousands of Oracle database files, requiring only the management of groups of disks allocated to the Oracle Database. + +ASMLib allows an Oracle Database using ASM more efficient and capable access to the disk groups it is using. + +### What Is Device Mapper Multipathing (DM-Multipathing)? + +Device Mapper Multipathing or DM-multipathing is a Linux host-side native multipath tool, which allows us to configure multiple I/O paths between server nodes and storage arrays into a single device by utilizing device-mapper. + +### Method-1 : Shell Script To Map ASM Disks To Physical Devices? + +In this shell script we are using for loop to achieve the results. + +Also, we are not using any ASM related commands. + +``` +# vi asm_disk_mapping.sh + +#!/bin/bash + +ls -lh /dev/oracleasm/disks > /tmp/asmdisks1.txt + +for ASMdisk in `cat /tmp/asmdisks1.txt | tail -n +2 | awk '{print $10}'` + +do + +minor=$(grep -i "$ASMdisk" /tmp/asmdisks1.txt | awk '{print $6}') + +major=$(grep -i "$ASMdisk" /tmp/asmdisks1.txt | awk '{print $5}' | cut -d"," -f1) + +phy_disk=$(ls -l /dev/* | grep ^b | grep "$major, *$minor" | awk '{print $10}') + +echo "ASM disk $ASMdisk is associated on $phy_disk [$major, $minor]" + +done +``` + +Set an executable permission to port_scan.sh file. + +``` +$ chmod +x asm_disk_mapping.sh +``` + +Finally run the script to achieve this. + +``` +# sh asm_disk_mapping.sh + +ASM disk MP4E6D_DATA01 is associated on /dev/dm-1 +3600a0123456789012345567890234q11 [253, 1] +ASM disk MP4E6E_DATA02 is associated on /dev/dm-2 +3600a0123456789012345567890234q12 [253, 2] +ASM disk MP4E6F_DATA03 is associated on /dev/dm-3 +3600a0123456789012345567890234q13 [253, 3] +ASM disk MP4E70_DATA04 is associated on /dev/dm-4 +3600a0123456789012345567890234q14 [253, 4] +ASM disk MP4E71_DATA05 is associated on /dev/dm-5 +3600a0123456789012345567890234q15 [253, 5] +ASM disk MP4E72_DATA06 is associated on /dev/dm-6 +3600a0123456789012345567890234q16 [253, 6] +ASM disk MP4E73_DATA07 is associated on /dev/dm-7 +3600a0123456789012345567890234q17 [253, 7] +``` + +### Method-2 : Shell Script To Map ASM Disks To Physical Devices? + +In this shell script we are using while loop to achieve the results. + +Also, we are using ASM related commands. 
+ +``` +# vi asm_disk_mapping_1.sh + +#!/bin/bash + +/etc/init.d/oracleasm listdisks > /tmp/asmdisks.txt + +while read -r ASM_disk + +do + +major="$(/etc/init.d/oracleasm querydisk -d $ASM_disk | awk -F[ '{ print $2 }'| awk -F] '{ print $1 }' | cut -d"," -f1)" + +minor="$(/etc/init.d/oracleasm querydisk -d $ASM_disk | awk -F[ '{ print $2 }'| awk -F] '{ print $1 }' | cut -d"," -f2)" + +phy_disk="$(ls -l /dev/* | grep ^b | grep "$major, *$minor" | awk '{ print $10 }')" + +echo "ASM disk $ASM_disk is associated on $phy_disk [$major, $minor]" + +done < /tmp/asmdisks.txt +``` + +Set an executable permission to port_scan.sh file. + +``` +$ chmod +x asm_disk_mapping_1.sh +``` + +Finally run the script to achieve this. + +``` +# sh asm_disk_mapping_1.sh + +ASM disk MP4E6D_DATA01 is associated on /dev/dm-1 +3600a0123456789012345567890234q11 [253, 1] +ASM disk MP4E6E_DATA02 is associated on /dev/dm-2 +3600a0123456789012345567890234q12 [253, 2] +ASM disk MP4E6F_DATA03 is associated on /dev/dm-3 +3600a0123456789012345567890234q13 [253, 3] +ASM disk MP4E70_DATA04 is associated on /dev/dm-4 +3600a0123456789012345567890234q14 [253, 4] +ASM disk MP4E71_DATA05 is associated on /dev/dm-5 +3600a0123456789012345567890234q15 [253, 5] +ASM disk MP4E72_DATA06 is associated on /dev/dm-6 +3600a0123456789012345567890234q16 [253, 6] +ASM disk MP4E73_DATA07 is associated on /dev/dm-7 +3600a0123456789012345567890234q17 [253, 7] +``` + +### How To List Oracle ASM Disks? + +If you would like to list only Oracle ASM disk then use the below command to List available/created Oracle ASM disks in Linux. + +``` +# oracleasm listdisks + +ASM_Disk1 +ASM_Disk2 +ASM_Disk3 +ASM_Disk4 +ASM_Disk5 +ASM_Disk6 +ASM_Disk7 +``` + +### How To List Oracle ASM Disks Against Major And Minor Number? + +If you would like to map Oracle ASM disks against major and minor number then use the below commands to List available/created Oracle ASM disks in Linux. + +``` +# for ASMdisk in `oracleasm listdisks`; do /etc/init.d/oracleasm querydisk -d $ASMdisk; done + +Disk "ASM_Disk1" is a valid Disk on device [253, 1] +Disk "ASM_Disk2" is a valid Disk on device [253, 2] +Disk "ASM_Disk3" is a valid Disk on device [253, 3] +Disk "ASM_Disk4" is a valid Disk on device [253, 4] +Disk "ASM_Disk5" is a valid Disk on device [253, 5] +Disk "ASM_Disk6" is a valid Disk on device [253, 6] +Disk "ASM_Disk7" is a valid Disk on device [253, 7] +``` + +Alternatively, we can get the same results using the ls command. + +``` +# ls -lh /dev/oracleasm/disks + +total 0 +brw-rw---- 1 oracle oinstall 253, 1 May 19 14:44 ASM_Disk1 +brw-rw---- 1 oracle oinstall 253, 2 May 19 14:44 ASM_Disk2 +brw-rw---- 1 oracle oinstall 253, 3 May 19 14:44 ASM_Disk3 +brw-rw---- 1 oracle oinstall 253, 4 May 19 14:44 ASM_Disk4 +brw-rw---- 1 oracle oinstall 253, 5 May 19 14:44 ASM_Disk5 +brw-rw---- 1 oracle oinstall 253, 6 May 19 14:44 ASM_Disk6 +brw-rw---- 1 oracle oinstall 253, 7 May 19 14:44 ASM_Disk7 +``` + +### How To List Physical Disks Against LUNs? + +If you would like to map physical disks against LUNs then use the below command. 
+ +``` +# multipath -ll | grep NETAPP + +3600a0123456789012345567890234q11 dm-1 NETAPP,LUN C-Mode +3600a0123456789012345567890234q12 dm-2 NETAPP,LUN C-Mode +3600a0123456789012345567890234q13 dm-3 NETAPP,LUN C-Mode +3600a0123456789012345567890234q14 dm-4 NETAPP,LUN C-Mode +3600a0123456789012345567890234q15 dm-5 NETAPP,LUN C-Mode +3600a0123456789012345567890234q16 dm-6 NETAPP,LUN C-Mode +3600a0123456789012345567890234q17 dm-7 NETAPP,LUN C-Mode +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/shell-script-map-oracle-asm-disks-physical-disk-lun-in-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/start-stop-restart-enable-reload-oracleasm-service-linux-create-scan-list-query-rename-delete-configure-oracleasm-disk/ diff --git a/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md b/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md new file mode 100644 index 0000000000..4420b034e6 --- /dev/null +++ b/sources/tech/20190521 How to Disable IPv6 on Ubuntu Linux.md @@ -0,0 +1,219 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Disable IPv6 on Ubuntu Linux) +[#]: via: (https://itsfoss.com/disable-ipv6-ubuntu-linux/) +[#]: author: (Sergiu https://itsfoss.com/author/sergiu/) + +How to Disable IPv6 on Ubuntu Linux +====== + +Are you looking for a way to **disable IPv6** connections on your Ubuntu machine? In this article, I’ll teach you exactly how to do it and why you would consider this option. I’ll also show you how to **enable or re-enable IPv6** in case you change your mind. + +### What is IPv6 and why would you want to disable IPv6 on Ubuntu? + +**[Internet Protocol version 6][1]** [(][1] **[IPv6][1]**[)][1] is the most recent version of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. It was developed in 1998 to replace the **IPv4** protocol. + +**IPv6** aims to improve security and performance, while also making sure we don’t run out of addresses. It assigns unique addresses globally to every device, storing them in **128-bits** , compared to just 32-bits used by IPv4. + +![Disable IPv6 Ubuntu][2] + +Although the goal is for IPv4 to be replaced by IPv6, there is still a long way to go. Less than **30%** of the sites on the Internet makes IPv6 connectivity available to users (tracked by Google [here][3]). IPv6 can also cause [problems with some applications at time][4]. + +Since **VPNs** provide global services, the fact that IPv6 uses globally routed addresses (uniquely assigned) and that there (still) are ISPs that don’t offer IPv6 support shifts this feature lower down their priority list. This way, they can focus on what matters the most for VPN users: security. + +Another possible reason you might want to disable IPv6 on your system is not wanting to expose yourself to various threats. Although IPv6 itself is safer than IPv4, the risks I am referring to are of another nature. 
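Before going any further, it is worth a quick check of whether your machine is using IPv6 at all right now. A single command from the standard iproute2 tools lists any IPv6 addresses currently assigned:

```bash
# Show only the IPv6 addresses on your interfaces; empty output means none are configured
ip -6 addr show
```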
If you aren’t actively using IPv6 and its features, [having IPv6 enabled leaves you vulnerable to various attacks][5], offering the hacker another possible exploitable tool. + +On the same note, configuring basic network rules is not enough. You have to pay the same level of attention to tweaking your IPv6 configuration as you do for IPv4. This can prove to be quite a hassle to do (and also to maintain). With IPv6 comes a suite of problems different to those of IPv4 (many of which can be referenced online, given the age of this protocol), giving your system another layer of complexity. + +[][6] + +Suggested read How To Remove Drive Icons From Unity Launcher In Ubuntu 14.04 [Beginner Tips] + +### Disabling IPv6 on Ubuntu [For Advanced Users Only] + +In this section, I’ll be covering how you can disable IPv6 protocol on your Ubuntu machine. Open up a terminal ( **default:** CTRL+ALT+T) and let’s get to it! + +**Note:** _For most of the commands you are going to input in the terminal_ _you are going to need root privileges ( **sudo** )._ + +Warning! + +If you are a regular desktop Linux user and prefer a stable working system, please avoid this tutorial. This is for advanced users who know what they are doing and why they are doing so. + +#### 1\. Disable IPv6 using Sysctl + +First of all, you can **check** if you have IPv6 enabled with: + +``` +ip a +``` + +You should see an IPv6 address if it is enabled (the name of your internet card might be different): + +![IPv6 Address Ubuntu][7] + +You have see the sysctl command in the tutorial about [restarting network in Ubuntu][8]. We are going to use it here as well. To **disable IPv6** you only have to input 3 commands: + +``` +sudo sysctl -w net.ipv6.conf.all.disable_ipv6=1 +sudo sysctl -w net.ipv6.conf.default.disable_ipv6=1 +sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=1 +``` + +You can check if it worked using: + +``` +ip a +``` + +You should see no IPv6 entry: + +![IPv6 Disabled Ubuntu][9] + +However, this only **temporarily disables IPv6**. The next time your system boots, IPv6 will be enabled again. + +One method to make this option persist is modifying **/etc/sysctl.conf**. I’ll be using vim to edit the file, but you can use any editor you like. Make sure you have **administrator rights** (use **sudo** ): + +![Sysctl Configuration][10] + +Add the following lines to the file: + +``` +net.ipv6.conf.all.disable_ipv6=1 +net.ipv6.conf.default.disable_ipv6=1 +net.ipv6.conf.lo.disable_ipv6=1 +``` + +For the settings to take effect use: + +``` +sudo sysctl -p +``` + +If IPv6 is still enabled after rebooting, you must create (with root privileges) the file **/etc/rc.local** and fill it with: + +``` +#!/bin/bash +# /etc/rc.local + +/etc/sysctl.d +/etc/init.d/procps restart + +exit 0 +``` + +Now use [chmod command][11] to make the file executable: + +``` +sudo chmod 755 /etc/rc.local +``` + +What this will do is manually read (during the boot time) the kernel parameters from your sysctl configuration file. + +[][12] + +Suggested read 3 Ways to Check Linux Kernel Version in Command Line + +#### 2\. Disable IPv6 using GRUB + +An alternative method is to configure **GRUB** to pass kernel parameters at boot time. You’ll have to edit **/etc/default/grub**. 
Once again, make sure you have administrator privileges: + +![GRUB Configuration][13] + +Now you need to modify **GRUB_CMDLINE_LINUX_DEFAULT** and **GRUB_CMDLINE_LINUX** to disable IPv6 on boot: + +``` +GRUB_CMDLINE_LINUX_DEFAULT="quiet splash ipv6.disable=1" +GRUB_CMDLINE_LINUX="ipv6.disable=1" +``` + +Save the file and run: + +``` +sudo update-grub +``` + +The settings should now persist on reboot. + +### Re-enabling IPv6 on Ubuntu + +To re-enable IPv6, you’ll have to undo the changes you made. To enable IPv6 until reboot, enter: + +``` +sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0 +sudo sysctl -w net.ipv6.conf.default.disable_ipv6=0 +sudo sysctl -w net.ipv6.conf.lo.disable_ipv6=0 +``` + +Otherwise, if you modified **/etc/sysctl.conf** you can either remove the lines you added or change them to: + +``` +net.ipv6.conf.all.disable_ipv6=0 +net.ipv6.conf.default.disable_ipv6=0 +net.ipv6.conf.lo.disable_ipv6=0 +``` + +You can optionally reload these values: + +``` +sudo sysctl -p +``` + +You should once again see a IPv6 address: + +![IPv6 Reenabled in Ubuntu][14] + +Optionally, you can remove **/etc/rc.local** : + +``` +sudo rm /etc/rc.local +``` + +If you modified the kernel parameters in **/etc/default/grub** , go ahead and delete the added options: + +``` +GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" +GRUB_CMDLINE_LINUX="" +``` + +Now do: + +``` +sudo update-grub +``` + +**Wrapping Up** + +In this guide I provided you ways in which you can **disable IPv6** on Linux, as well as giving you an idea about what IPv6 is and why you would want to disable it. + +Did you find this article useful? Do you disable IPv6 connectivity? Let us know in the comment section! + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/disable-ipv6-ubuntu-linux/ + +作者:[Sergiu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/sergiu/ +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/IPv6 +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/disable_ipv6_ubuntu.png?fit=800%2C450&ssl=1 +[3]: https://www.google.com/intl/en/ipv6/statistics.html +[4]: https://whatismyipaddress.com/ipv6-issues +[5]: https://www.internetsociety.org/blog/2015/01/ipv6-security-myth-1-im-not-running-ipv6-so-i-dont-have-to-worry/ +[6]: https://itsfoss.com/remove-drive-icons-from-unity-launcher-in-ubuntu/ +[7]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu.png?fit=800%2C517&ssl=1 +[8]: https://itsfoss.com/restart-network-ubuntu/ +[9]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_disabled_ubuntu.png?fit=800%2C442&ssl=1 +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/sysctl_configuration.jpg?fit=800%2C554&ssl=1 +[11]: https://linuxhandbook.com/chmod-command/ +[12]: https://itsfoss.com/find-which-kernel-version-is-running-in-ubuntu/ +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/grub_configuration-1.jpg?fit=800%2C565&ssl=1 +[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/ipv6_address_ubuntu-1.png?fit=800%2C517&ssl=1 diff --git a/sources/tech/20190521 I don-t know how CPUs work so I simulated one in code.md b/sources/tech/20190521 I don-t know how CPUs work so I simulated one in code.md new file mode 100644 index 0000000000..3b9be98d2f --- /dev/null +++ b/sources/tech/20190521 I don-t know how 
CPUs work so I simulated one in code.md @@ -0,0 +1,174 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (I don't know how CPUs work so I simulated one in code) +[#]: via: (https://djhworld.github.io/post/2019/05/21/i-dont-know-how-cpus-work-so-i-simulated-one-in-code/) +[#]: author: (daniel harper https://djhworld.github.io) + +I don't know how CPUs work so I simulated one in code +====== + +![][1] + +A few months ago it dawned on me that I didn’t really understand how computers work under the hood. I still don’t understand how modern computers work. + +However, after making my way through [But How Do It Know?][2] by J. Clark Scott, a book which describes the bits of a simple 8-bit computer from the NAND gates, through to the registers, RAM, bits of the CPU, ALU and I/O, I got a hankering to implement it in code. + +While I’m not that interested in the physics of the circuitry, the book just about skims the surface of those waters and gives a neat overview of the wiring and how bits move around the system without the requisite electrical engineering knowledge. For me though I can’t get comfortable with book descriptions, I have to see things in action and learn from my inevitable mistakes, which led me to chart a course on the rough seas of writing a circuit in code and getting a bit weepy about it. + +The fruits of my voyage can be seen in [simple-computer][3]; a simple computer that’s simple and computes things. + +[![][4]][5] [![][6]][7] [![][8]][9] +Example programs + +It is quite a neat little thing, the CPU code is implemented [as a horrific splurge of gates turning on and off][10] but it works, I’ve [unit tested it][11], and we all know unit tests are irrefutable proof that something works. + +It handles [keyboard inputs][12], and renders text [to a display][13] using a painstakingly crafted set of glyphs for a professional font I’ve named “Daniel Code Pro”. The only cheat bit is to get the keyboard input and display output working I had to hook up go channels to speak to the outside world via [GLFW][14], but the rest of it is a simulated circuit. + +I even wrote a [crude assembler][15] which was eye opening to say the least. It’s not perfect. Actually it’s a bit crap, but it highlighted to me the problems that other people have already solved many, many years ago and I think I’m a better person for it. Or worse, depending who you ask. + +### But why you do that? + +> “I’ve seen thirteen year old children do this in Minecraft, come back to me when you’ve built a REAL CPU out of telegraph relays” + +My mental model of computing is stuck in beginner computer science textbooks, and the CPU that powers the [gameboy emulator I wrote back in 2013][16] is really nothing like the CPUs that are running today. Even saying that, the emulator is just a state machine, it doesn’t describe the stuff at the logic gate level. You can implement most of it using just a `switch` statement and storing the state of the registers. + +So I’m trying to get a better understanding of this stuff because I don’t know what L1/L2 caches are, I don’t know what pipelining means, I’m not entirely sure I understand the Meltdown and Spectre vulnerability papers. Someone told me they were optimising their code to make use of CPU caches, I don’t know how to verify that other than taking their word for it. I’m not really sure what all the x86 instructions mean. I don’t understand how people off-load work to a GPU or TPU. 
I don’t know what a TPU is. I don’t know how to make use of SIMD instructions. + +But all that is built on a foundation of knowledge you need to earn your stripes for, so I ain’t gonna get there without reading the map first. Which means getting back to basics and getting my hands dirty with something simple. The “Scott Computer” described in the book is simple. That’s the reason. + +### Great Scott! It’s alive! + +The Scott computer is an 8-bit processor attached to 256 bytes of RAM, all connected via an 8-bit system bus. It has 4 general purpose registers and can execute [17 machine instructions][17]. Someone built a visual simulator [for the web here][18], which is really cool, I dread to think how long it took to track all the wiring states! + +[![][19]][20] +A diagram outlining all the components that make up the Scott CPU +Copyright © 2009 - 2016 by Siegbert Filbinger and John Clark Scott. + +The book takes you on a journey from the humble NAND gate, onto a Bit of memory, onto a register and then keeps layering on components until you end up with something resembling the above. I really recommend reading it, even if you are already familiar with the concepts because it’s quite a good overview. I don’t recommend the Kindle version though because the diagrams are sometimes hard to zoom in and decipher on a screen. A perennial problem for the Kindle in my experience. + +The only thing that’s different about my computer is I upgraded it to 16-bit to have more memory to play with, as storing even just the glyphs for the [ASCII table][21] would have dwarfed most of the 8-bit machine described in the book, with not much room left for useful code. + +### My development journey + +During development it really was just a case of reading the text, scouring the diagrams and then attempting to translate that using a general purpose programming language code and definitely not using something that’s designed for integrated circuit development. The reason why I wrote it in Go, is well, I know a bit of Go. Naysayers might chime in and say, you blithering idiot! I can’t believe you didn’t spend all your time learning [VHDL][22] or [Verilog][23] or [LogSim][24] or whatever but I’d already written my bits and bytes and NANDs by that point, I was in too deep. Maybe I’ll learn them next and weep about my time wasted, but that’s my cross to bear. + +In the grand scheme of things most of the computer is just passing around a bunch of booleans, so any boolean friendly language will do the job. + +Applying a schema to those booleans is what helps you (the programmer) derive its meaning, and the biggest decision anyone needs to make is decide what [endianness][25] your system is going to use and make sure all the components transfer things to and from the bus in the right order. + +This was an absolute pain in the backside to implement. From the offset I opted for little endian but when testing the ALU my hair took a beating trying to work out why the numbers were coming out wrong. Many, many print statements took place on this one. + +Development did take a while, maybe about a month or two during some of my free time, but once the CPU was done and successfully able to execute 2 + 2 = 5, I was happy. + +Well, until the book discussed the I/O features, with designs for a simple keyboard and display interface so you can get things in and out of the machine. Well I’ve already gotten this far, no point in leaving it in a half finished state. 
I set myself a goal of being able to type something on a keyboard and render the letters on a display. + +### Peripherals + +The peripherals use the [adapter pattern][26] to act as a hardware interface between the CPU and the outside world. It’s probably not a huge leap to guess this was what the software design pattern took inspiration from. + +![][27] +How the I/O adapters connect to a GLFW window + +With this separation of concerns it was actually pretty simple to hook the other end of the keyboard and display to a window managed by GLFW. In fact I just pulled most of the code from my [emulator][28] and reshaped it a bit, using go channels to act as the signals in and out of the machine. + +### Bringing it to life + +![][29] + +This was probably the most tricky part, or at least the most cumbersome. Writing assembly with such a limited instruction set sucks. Writing assembly using a crude assembler I wrote sucks even more because you can’t shake your fist at someone other than yourself. + +The biggest problem was juggling the 4 registers and keeping track of them, pulling and putting stuff in memory as a temporary store. Whilst doing this I remembered the Gameboy CPU having a stack pointer register so you could push and pop state. Unfortunately this computer doesn’t have such a luxury, so I was mostly moving stuff in and out of memory on a bespoke basis. + +The only pseudo instruction I took the time to implement was `CALL` to help calling functions, this allows you to run a function and then return to the point after the function was called. Without that stack though you can only call one level deep. + +Also as the machine does not support interrupts, you have to implement awful polling code for functions like getting keyboard state. The book does discuss the steps needed to implement interrupts, but it would involve a lot more wiring. + +But anyway enough of the moaning, I ended up writing [four programs][30] and most of them make use of some shared code for drawing fonts, getting keyboard input etc. Not exactly operating system material but it did make me appreciate some of the services a simple operating system might provide. + +It wasn’t easy though, the trickiest part of the text-writer program was getting the maths right to work out when to go to a newline, or what happens when you hit the enter key. + +``` +main-getInput: + CALL ROUTINE-io-pollKeyboard + CALL ROUTINE-io-drawFontCharacter + JMP main-getInput +``` + +The main loop for the text-writer program + +I didn’t get round to implementing the backspace key either, or any of the modifier keys. Made me appreciate how much work must go in to making text editors and how tedious that probably is. + +### On reflection + +This was a fun and very rewarding project for me. In the midst of programming in the assembly language I’d largely forgotten about the NAND, AND and OR gates firing underneath. I’d ascended into the layers of abstraction above. 
+ +While the CPU in the is very simple and a long way from what’s sitting in my laptop, I think this project has taught me a lot, namely: + + * How bits move around between all components using a bus + * How a simple ALU works + * What a simple Fetch-Decode-Execute cycle looks like + * That a machine without a stack pointer register + concept of a stack sucks + * That a machine without interrupts sucks + * What an assembler is and does + * How a peripherals communicate with a simple CPU + * How simple fonts work and an approach to rendering them on a display + * What a simple operating system might start to look like + + + +So what’s next? The book said that no-one has built a computer like this since 1952, meaning I’ve got 67 years of material to brush up on, so that should keep me occupied for a while. I see the [x86 manual is 4800 pages long][31], enough for some fun, light reading at bedtime. + +Maybe I’ll have a brief dalliance with operating system stuff, a flirtation with the C language, a regrettable evening attempting to [solder up a PiDP-11 kit][32] then probably call it quits. I dunno, we’ll see. + +With all seriousness though I think I’m going to start looking into RISC based stuff next, maybe RISC-V, but probably start with early RISC processors to get an understanding of the lineage. Modern CPUs have a lot more features like caches and stuff so I want to understand them as well. A lot of stuff out there to learn. + +Do I need to know any of this stuff in my day job? Probably helps, but not really, but I’m enjoying it, so whatever, thanks for reading xxxx + +-------------------------------------------------------------------------------- + +via: https://djhworld.github.io/post/2019/05/21/i-dont-know-how-cpus-work-so-i-simulated-one-in-code/ + +作者:[daniel harper][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://djhworld.github.io +[b]: https://github.com/lujun9972 +[1]: https://djhworld.github.io/img/simple-computer/text-writer.gif (Hello World) +[2]: http://buthowdoitknow.com/ +[3]: https://github.com/djhworld/simple-computer +[4]: https://djhworld.github.io/img/simple-computer/ascii1.png (ASCII) +[5]: https://djhworld.github.io/img/simple-computer/ascii.png +[6]: https://djhworld.github.io/img/simple-computer/brush1.png (brush) +[7]: https://djhworld.github.io/img/simple-computer/brush.png +[8]: https://djhworld.github.io/img/simple-computer/text-writer1.png (doing the typesin') +[9]: https://djhworld.github.io/img/simple-computer/text-writer.png +[10]: https://github.com/djhworld/simple-computer/blob/master/cpu/cpu.go#L763 +[11]: https://github.com/djhworld/simple-computer/blob/master/cpu/cpu_test.go +[12]: https://github.com/djhworld/simple-computer/blob/master/io/keyboard.go#L20 +[13]: https://github.com/djhworld/simple-computer/blob/master/io/display.go#L13 +[14]: https://github.com/djhworld/simple-computer/blob/master/cmd/simulator/glfw_io.go +[15]: https://github.com/djhworld/simple-computer/blob/master/asm/assembler.go +[16]: https://github.com/djhworld/gomeboycolor +[17]: https://github.com/djhworld/simple-computer#instructions +[18]: http://www.buthowdoitknow.com/but_how_do_it_know_cpu_model.html +[19]: https://djhworld.github.io/img/simple-computer/scott-cpu.png (The Scott CPU) +[20]: https://djhworld.github.io/img/simple-computer/scott-cpu.png +[21]: 
https://github.com/djhworld/simple-computer/blob/master/_programs/ascii.asm#L27 +[22]: https://en.wikipedia.org/wiki/VHDL +[23]: https://en.wikipedia.org/wiki/Verilog +[24]: http://www.cburch.com/logisim/ +[25]: https://en.wikipedia.org/wiki/Endianness +[26]: https://en.wikipedia.org/wiki/Adapter_pattern +[27]: https://djhworld.github.io/img/simple-computer/io.png (i couldn't be bothered to do the corners around the CPU for the system bus) +[28]: https://github.com/djhworld/gomeboycolor-glfw +[29]: https://djhworld.github.io/img/simple-computer/brush.gif (brush.bin) +[30]: https://github.com/djhworld/simple-computer/blob/master/_programs/README.md +[31]: https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf +[32]: https://obsolescence.wixsite.com/obsolescence/pidp-11 diff --git a/sources/tech/20190522 Convert Markdown files to word processor docs using pandoc.md b/sources/tech/20190522 Convert Markdown files to word processor docs using pandoc.md new file mode 100644 index 0000000000..8fab8bfcae --- /dev/null +++ b/sources/tech/20190522 Convert Markdown files to word processor docs using pandoc.md @@ -0,0 +1,119 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Convert Markdown files to word processor docs using pandoc) +[#]: via: (https://opensource.com/article/19/5/convert-markdown-to-word-pandoc) +[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt/users/jason-van-gumster/users/kikofernandez) + +Convert Markdown files to word processor docs using pandoc +====== +Living that plaintext life? Here's how to create the word processor +documents people ask for without having to work in a word processor +yourself. +![][1] + +If you live your life in [plaintext][2], there invariably comes a time when someone asks for a word processor document. I run into this issue frequently, especially at the Day JobTM. Although I've introduced one of the development teams I work with to a [Docs Like Code][3] workflow for writing and reviewing release notes, there are a small number of people who have no interest in GitHub or working with [Markdown][4]. They prefer documents formatted for a certain proprietary application. + +The good news is that you're not stuck copying and pasting unformatted text into a word processor document. Using **[pandoc][5]** , you can quickly give people what they want. Let's take a look at how to convert a document from Markdown to a word processor format in [Linux][6] using **pandoc.** ​​​​ + +Note that **pandoc** is also available for a wide variety of operating systems, ranging from two flavors of BSD ([NetBSD][7] and [FreeBSD][8]) to Chrome OS, MacOS, and Windows. + +### Converting basics + +To begin, [install **pandoc**][9] on your computer. Then, crack open a console terminal window and navigate to the directory containing the file that you want to convert. + +Type this command to create an ODT file (which you can open with a word processor like [LibreOffice Writer][10] or [AbiWord][11]): + +**pandoc -t odt filename.md -o filename.odt** + +Remember to replace **filename** with the file's actual name. And if you need to create a file for that other word processor (you know the one I mean), replace **odt** on the command line with **docx**. Here's what this article looks like when converted to an ODT file: + +![Basic conversion results with pandoc.][12] + +These results are serviceable, but a bit bland. 
Let's look at how to add a bit more style to the converted documents. + +### Converting with style + +**pandoc** has a nifty feature enabling you to specify a style template when converting a marked-up plaintext file to a word processor format. In this file, you can edit a small number of styles in the document, including those that control the look of paragraphs, headings, captions, titles and subtitles, a basic table, and hyperlinks. + +Let's look at the possibilities. + +#### Creating a template + +In order to style your documents, you can't just use _any_ template. You need to generate what **pandoc** calls a _reference_ template, which is the template it uses when converting text files to word processor documents. To create this file, type the following in a terminal window: + +**pandoc -o custom-reference.odt --print-default-data-file reference.odt** + +This command creates a file called **custom-reference.odt**. If you're using that other word processor, change the references to **odt** on the command line to **docx**. + +Open the template file in LibreOffice Writer, and then press **F11** to open LibreOffice Writer's **Styles** pane. Although the [pandoc manual][13] advises against making other changes to the file, I change the page size and add headers and footers when necessary. + +#### Using the template + +So, how do you use that template you just created? There are two ways to do this. + +The easiest way is to drop the template in your **/home** directory's **.pandoc** folder—you might have to create the folder first if it doesn't exist. When it's time to convert a document, **pandoc** uses this template file. See the next section on how to choose from multiple templates if you need more than one. + +The other way to use your template is to type this set of conversion options at the command line: + +**pandoc -t odt file-name.md --reference-doc=path-to-your-file/reference.odt -o file-name.odt** + +If you're wondering what a converted file looks like with a customized template, here's an example: + +![A document converted using a pandoc style template.][14] + +#### Choosing from multiple templates + +Many people only need one **pandoc** template. Some people, however, need more than one. + +At my day job, for example, I use several templates—one with a DRAFT watermark, one with a watermark stating FOR INTERNAL USE, and one for a document's final versions. Each type of document needs a different template. + +If you have similar needs, start the same way you do for a single template, by creating the file **custom-reference.odt**. Rename the resulting file—for example, to **custom-reference-draft.odt** —then open it in LibreOffice Writer and modify the styles. Repeat this process for each template you need. + +Next, copy the files into your **/home** directory. You can even put them in the **.pandoc** folder if you want to. + +To select a specific template at conversion time, you'll need to run this command in a terminal: + +**pandoc -t odt file-name.md --reference-doc=path-to-your-file/custom-template.odt -o file-name.odt** + +Change **custom-template.odt** to your template file's name. + +### Wrapping up + +To avoid having to remember a set of options I don't regularly use, I cobbled together some simple, very lame one-line scripts that encapsulate the options for each template. For example, I run the script **todraft.sh** to create a word processor document using the template with a DRAFT watermark. You might want to do the same. 
+ +Here's an example of a script using the template containing a DRAFT watermark: + +`pandoc -t odt $1.md -o $1.odt --reference-doc=~/Documents/pandoc-templates/custom-reference-draft.odt` + +Using **pandoc** is a great way to provide documents in the format that people ask for, without having to give up the command line life. This tool doesn't just work with Markdown, either. What I've discussed in this article also allows you to create and convert documents between a wide variety of markup languages. See the **pandoc** site linked earlier for more details. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/convert-markdown-to-word-pandoc + +作者:[Scott Nesbitt][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt/users/jason-van-gumster/users/kikofernandez +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb +[2]: https://plaintextproject.online/ +[3]: https://www.docslikecode.com/ +[4]: https://en.wikipedia.org/wiki/Markdown +[5]: https://pandoc.org/ +[6]: /resources/linux +[7]: https://www.netbsd.org/ +[8]: https://www.freebsd.org/ +[9]: https://pandoc.org/installing.html +[10]: https://www.libreoffice.org/discover/writer/ +[11]: https://www.abisource.com/ +[12]: https://opensource.com/sites/default/files/uploads/pandoc-wp-basic-conversion_600_0.png (Basic conversion results with pandoc.) +[13]: https://pandoc.org/MANUAL.html +[14]: https://opensource.com/sites/default/files/uploads/pandoc-wp-conversion-with-tpl_600.png (A document converted using a pandoc style template.) diff --git a/sources/tech/20190522 Damn- Antergos Linux has been Discontinued.md b/sources/tech/20190522 Damn- Antergos Linux has been Discontinued.md new file mode 100644 index 0000000000..38c11508bf --- /dev/null +++ b/sources/tech/20190522 Damn- Antergos Linux has been Discontinued.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Damn! Antergos Linux has been Discontinued) +[#]: via: (https://itsfoss.com/antergos-linux-discontinued/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Damn! Antergos Linux has been Discontinued +====== + +_**Beginner-friendly Arch Linux based distribution Antergos has announced that the project is being discontinued.**_ + +Arch Linux has always been considered a no-go zone for the beginners. Antergos challenged this status quo and made Arch Linux accessible to everyone by providing easier installation method. People who wouldn’t dare [installing Arch Linux][1], opted for Antergos. + +![Antergos provided easy access to Arch with its easy to use GUI tools][2] + +The project started in 2012-13 and started gaining popularity around 2014. I used Antergos, liked it and covered it here on It’s FOSS and perhaps (slightly) contributed to its popularity. In last five years, Antergos was downloaded close to a million times. + +But for past year or so, I felt that this project was stagnating. Antergos hardly made any news. Neither the forum nor the social media handles were active. The community around Antergos grew thinner though a few dedicated users still remain. 
+ +### The end of Antergos Linux project + +![][3] + +On May 21, 2019, Antergos [announced][4] its discontinuation. Lack of free time cited as the main reason behind this decision. + +> Today, we are announcing the end of this project. As many of you probably noticed over the past several months, we no longer have enough free time to properly maintain Antergos. We came to this decision because we believe that continuing to neglect the project would be a huge disservice to the community. +> +> Antergos Team + +Antergos developers also mentioned that since the project’s code still works, it’s an opportunity for interested developers to take what they find useful and start their own projects. + +#### What happens to Existing Antergos users? + +If you are an Antergos user, you don’t have to worry a lot. It’s not that your system will be unusable from today. Your system will continue to get updates directly from Arch Linux. + +Antergos team plans to release an update to remove the Antergos repositories from your system along with any Antergos-specific packages that no longer serve a purpose as the project is ending. After that any packages installed from the Antergos repo that are in the AUR will begin to receive updates from [AUR][5]. + +[][6] + +Suggested read Peppermint 8 Released. Download Now! + +The Antergos forum and wiki will be functional but only for some time. + +If you think using an ‘unmaintained’ project is not a good idea, you should switch your distribution. The most appropriate choice would be [Manjaro Linux][7]. + +Manjaro Linux started around the same time as Antergos. Both Antergos and Manjaro were sort of competitors as both of them tried to make Arch Linux accessible for everyone. + +Manjaro gained a huge userbase in the last few years and its community is thriving. If you want to remain in Arch domain but don’t want to install Arch Linux itself, Manjaro is the best choice for you. + +Just note that Manjaro Linux doesn’t provide all the updates immediately as Arch or Antergos. It is a rolling release but with stability in mind. So the updates are tested first. + +#### Inevitable fate for smaller distributions? + +_Here’s my opinion on the discontinuation on Antergos and other similar open source projects._ + +Antergos was a niche distribution. It had a smaller but dedicated userbase. The developers cited lack of free time as the main reason for their decision. However, I believe that lack of motivation plays a bigger role in such cases. + +What motivates the people behind a project? They start it mostly as a side project and if the project is good, they start gaining users. This growth of userbase drives their motivation to work on the project. + +If the userbase starts declining or gets stagnated, the motivation takes a hit. + +If the userbase keeps on growing, the motivation increases but only to a certain point. More users require more effort in various tasks around the project. Keeping the wiki and forum along with social media itself is a challenging part, leave aside the actual code development. The situation becomes overwhelming. + +When a project grows in considerable size, project owners have two choices. First choice is to form a community of volunteers and start delegating tasks that could be delegated. Having volunteers dedicated to project is not easy but it can surely be achieved as Debian and Manjaro have done it already. 
+ +[][8] + +Suggested read Lightweight Distribution Linux Lite 4.0 Released With Brand New Look + +Second choice is to create some revenue generation channel around the project. The additional revenue may ‘justify’ those extra hours and in some cases, it could drive the developer to work full time on the project. [elementary OS][9] is trying to achieve something similar by developing an ecosystem of ‘payable apps’ in their software center. + +You may argue that money should not be a factor in Free and Open Source Software culture but the unfortunate truth is that money is always a factor, in every aspect of our life. I am not saying that a project should be purely driven by money but a project must be sustainable in every aspect. + +We have see how other smaller but moderately popular Linux distributions like Korora has been discontinued due to lack of free time. [Solus creator Ikey Doherty had to leave the project][10] to focus on his personal life. Developing and maintaining a successful open source project is not an easy task. + +That’s just my opinion. Please feel free to disagree with it and voice your opinion in the comment section. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/antergos-linux-discontinued/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/install-arch-linux/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2015/08/Installing_Antergos_Linux_7.png?ssl=1 +[3]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/05/antergos-linux-dead.jpg?resize=800%2C450&ssl=1 +[4]: https://antergos.com/blog/antergos-linux-project-ends/ +[5]: https://itsfoss.com/best-aur-helpers/ +[6]: https://itsfoss.com/peppermint-8-released/ +[7]: https://manjaro.org/ +[8]: https://itsfoss.com/linux-lite-4/ +[9]: https://elementary.io/ +[10]: https://itsfoss.com/ikey-leaves-solus/ diff --git a/sources/tech/20190522 How to Download and Use Ansible Galaxy Roles in Ansible Playbook.md b/sources/tech/20190522 How to Download and Use Ansible Galaxy Roles in Ansible Playbook.md new file mode 100644 index 0000000000..3bb9e39184 --- /dev/null +++ b/sources/tech/20190522 How to Download and Use Ansible Galaxy Roles in Ansible Playbook.md @@ -0,0 +1,193 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Download and Use Ansible Galaxy Roles in Ansible Playbook) +[#]: via: (https://www.linuxtechi.com/use-ansible-galaxy-roles-ansible-playbook/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +How to Download and Use Ansible Galaxy Roles in Ansible Playbook +====== + +**Ansible** is tool of choice these days if you must manage multiple devices, be it Linux, Windows, Mac, Network Devices, VMware and lot more. What makes Ansible popular is its agent less feature and granular control. If you have worked with python or have experience with **yaml** , you will feel at home with Ansible. To see how you can install [Ansible][1] click here. + + + +Ansible core modules will let you manage almost anything should you wish to write playbooks, however often there is someone who has already written a role for a problem you are trying to solve. 
Let’s take an example, you wish to manage NTP clients on the Linux machines, you have 2 choices either write a role which can be applied to the nodes or use **ansible-galaxy** to download an existing role someone has already written/tested for you. Ansible galaxy has roles for almost all the domains and these caters different problems. You can visit to get an idea on domains and popular roles it has. Each role published on galaxy repository is thoroughly tested and has been rated by the users, so you get an idea on how other people who have used it liked it. + +To keep moving with the NTP idea, here is how you can search and install an NTP role from galaxy. + +Firstly, lets run ansible-galaxy with the help flag to check what options does it give us + +``` +[root@linuxtechi ~]# ansible-galaxy --help +``` + +![ansible-galaxy-help][2] + +As you can see from the output above there are some interesting options been shown, since we are looking for a role to manage ntp clients lets try the search option to see how good it is finding what we are looking for. + +``` +[root@linuxtechi ~]# ansible-galaxy search ntp +``` + +Here is the truncated output of the command above. + +![ansible-galaxy-search][3] + +It found 341 matches based on our search, as you can see from the output above many of these roles are not even related to NTP which means our search needs some refinement however, it has managed to pull some NTP roles, lets dig deeper to see what these roles are. But before that let me tell you the naming convention being followed here. The name of a role is always preceded by the author name so that it is easy to segregate roles with the same name. So, if you have written an NTP role and have published it to galaxy repo, it does not get mixed up with someone else repo with the same name. + +With that out of the way, lets continue with our job of installing a NTP role for our Linux machines. Let’s try **bennojoy.ntp** for this example, but before using this we need to figure out couple of things, is this role compatible with the version of ansible I am running. Also, what is the license status of this role. To figure out these, let’s run below ansible-galaxy command, + +``` +[root@linuxtechi ~]# ansible-galaxy info bennojoy.ntp +``` + +![ansible-galaxy-info][4] + +ok so this says the minimum version is 1.4 and the license is BSD, lets download it + +``` +[root@linuxtechi ~]# ansible-galaxy install bennojoy.ntp +- downloading role 'ntp', owned by bennojoy +- downloading role from https://github.com/bennojoy/ntp/archive/master.tar.gz +- extracting bennojoy.ntp to /etc/ansible/roles/bennojoy.ntp +- bennojoy.ntp (master) was installed successfully +[root@linuxtechi ~]# ansible-galaxy list +- bennojoy.ntp, master +[root@linuxtechi ~]# +``` + +Let’s find the newly installed role. + +``` +[root@linuxtechi ~]# cd /etc/ansible/roles/bennojoy.ntp/ +[root@linuxtechi bennojoy.ntp]# ls -l +total 4 +drwxr-xr-x. 2 root root 21 May 21 22:38 defaults +drwxr-xr-x. 2 root root 21 May 21 22:38 handlers +drwxr-xr-x. 2 root root 48 May 21 22:38 meta +-rw-rw-r--. 1 root root 1328 Apr 20 2016 README.md +drwxr-xr-x. 2 root root 21 May 21 22:38 tasks +drwxr-xr-x. 2 root root 24 May 21 22:38 templates +drwxr-xr-x. 2 root root 55 May 21 22:38 vars +[root@linuxtechi bennojoy.ntp]# +``` + +I am going to run this newly downloaded role on my Elasticsearch CentOS node. 
Here is my hosts file + +``` +[root@linuxtechi ~]# cat hosts +[CentOS] +elastic7-01 ansible_host=192.168.1.15 ansibel_port=22 ansible_user=linuxtechi +[root@linuxtechi ~]# +``` + +Let’s try to ping the node using below ansible ping module, + +``` +[root@linuxtechi ~]# ansible -m ping -i hosts elastic7-01 +elastic7-01 | SUCCESS => { + "changed": false, + "ping": "pong" +} +[root@linuxtechi ~]# +``` + +Here is what the current ntp.conf looks like on elastic node. + +``` +[root@linuxtechi ~]# head -30 /etc/ntp.conf +``` + +![Current-ntp-conf][5] + +Since I am in India, lets add server **in.pool.ntp.org** to ntp.conf. I would have to edit the variables in default directory of the role. + +``` +[root@linuxtechi ~]# vi /etc/ansible/roles/bennojoy.ntp/defaults/main.yml +``` + +Change NTP server address in “ntp_server” parameter, after updating it should look like below. + +![Update-ansible-ntp-role][6] + +The last thing now is to create my playbook which would call this role. + +``` +[root@linuxtechi ~]# vi ntpsite.yaml +--- + - name: Configure NTP on CentOS/RHEL/Debian System + become: true + hosts: all + roles: + - {role: bennojoy.ntp} +``` + +save and exit the file + +We are ready to run this role now, use below command to run ntp playbook, + +``` +[root@linuxtechi ~]# ansible-playbook -i hosts ntpsite.yaml +``` + +Output of above ntp ansible playbook should be something like below, + +![ansible-playbook-output][7] + +Let’s check updated file now. go to elastic node and view the contents of ntp.conf file + +``` +[root@linuxtechi ~]# cat /etc/ntp.conf +#Ansible managed + +driftfile /var/lib/ntp/drift +server in.pool.ntp.org + +restrict -4 default kod notrap nomodify nopeer noquery +restrict -6 default kod notrap nomodify nopeer noquery +restrict 127.0.0.1 +[root@linuxtechi ~]# +``` + +Just in case you do not find a role fulfilling your requirement ansible-galaxy can help you create a directory structure for your custom roles. This helps your playbooks along with the variables, handlers, templates etc assembled in a standardized file structure. Let’s create our own role, its always a good practice to let ansible-galaxy create the structure for you. + +``` +[root@linuxtechi ~]# ansible-galaxy init pk.backup +- pk.backup was created successfully +[root@linuxtechi ~]# +``` + +Verify the structure of your role using the tree command, + +![createing-roles-ansible-galaxy][8] + +Let me quickly explain what each of these directories and files are for, each of these serves a purpose. + +The very first one is the **defaults** directory which contains the files containing variables with takes the lowest precedence, if the same variables are assigned in var directory it will be take precedence over default. The **handlers** directory hosts the handlers. The **file** and **templates** keep any files your role may need to copy and **jinja templates** to be used in playbooks respectively. The **tasks** directory is where your playbooks containing the tasks are kept. The var directory consists of all the files that hosts the variables used in role. The test directory consists of a sample inventory and test playbooks which can be used to test the role. The **meta directory** consists of any dependencies on other roles along with the authorship information. + +Finally, **README.md** file simply consists of some general information like description and minimum version of ansible this role is compatible with. 
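To see how the pieces fit together, here is a minimal sketch of what the freshly scaffolded role could contain, together with a playbook that applies it. The `pk.backup` name comes from the `ansible-galaxy init` command above, but the backup logic itself, the `backup_dir` variable and the playbook file name are only assumptions for illustration:

```
# /etc/ansible/roles/pk.backup/defaults/main.yml
---
backup_dir: /var/backups/ansible

# /etc/ansible/roles/pk.backup/tasks/main.yml
---
- name: Ensure the backup directory exists
  file:
    path: "{{ backup_dir }}"
    state: directory
    mode: '0750'

- name: Archive /etc into a date-stamped tarball
  archive:
    path: /etc
    dest: "{{ backup_dir }}/etc-{{ ansible_date_time.date }}.tar.gz"

# pkbackup.yaml
---
- name: Apply the custom backup role
  become: true
  hosts: all
  roles:
    - {role: pk.backup}
```

Running it follows the same pattern as the NTP role earlier, for example `ansible-playbook -i hosts pkbackup.yaml`.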
+ +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/use-ansible-galaxy-roles-ansible-playbook/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/install-and-use-ansible-in-centos-7/ +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-help-1024x294.jpg +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-search-1024x552.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-galaxy-info-1024x557.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Current-ntp-conf.jpg +[6]: https://www.linuxtechi.com/wp-content/uploads/2019/05/Update-ansible-ntp-role.jpg +[7]: https://www.linuxtechi.com/wp-content/uploads/2019/05/ansible-playbook-output-1024x376.jpg +[8]: https://www.linuxtechi.com/wp-content/uploads/2019/05/createing-roles-ansible-galaxy.jpg diff --git a/sources/tech/20190523 Hardware bootstrapping with Ansible.md b/sources/tech/20190523 Hardware bootstrapping with Ansible.md new file mode 100644 index 0000000000..94842453cc --- /dev/null +++ b/sources/tech/20190523 Hardware bootstrapping with Ansible.md @@ -0,0 +1,223 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Hardware bootstrapping with Ansible) +[#]: via: (https://opensource.com/article/19/5/hardware-bootstrapping-ansible) +[#]: author: (Mark Phillips https://opensource.com/users/markp/users/feeble/users/markp) + +Hardware bootstrapping with Ansible +====== + +![computer servers processing data][1] + +At a recent [Ansible London Meetup][2], I got chatting with somebody about automated hardware builds. _"It's all cloud now!"_ I hear you say. Ah, but for many large organisations it's not—they still have massive data centres full of hardware. Almost regularly somebody pops up on our internal mail list and asks, *"can Ansible do hardware provisioning?" *Well yes, you can provision hardware with Ansible… + +### Requirements + +Bootstrapping hardware is mostly about network services. Before we do any operating system (OS) installing then, we must set up some services. We will need: + + * DHCP + * PXE + * TFTP + * Operating system media + * Web server + + + +### Setup + +Besides the DHCP configuration, everything else in this article is handled by the Ansible plays included in [this repository][3]. + +#### DHCP server + +I'm writing here on the assumption you can control your DHCP configuration. If you don't have access to your DHCP server, you'll need to ask the owner to set two options. DHCP option 67 needs to be set to **pxelinux.0** and **next-server** (which is option 66—but you may not need to know that; often a DHCP server will have a field/option for 'next server') needs to be set to the IP address of your TFTP server. + +If you can own the DHCP server, I'd suggest using dnsmasq. It's small and simple. I will not cover configuring it here, but look at [the man page][4] and the **\--enable-tftp** option. + +#### TFTP + +The **next-server** setting for our DHCP server, above, will point to a machine serving [TFTP][5]. 
Here I've used a [CentOS Linux][6] virtual machine, as it only takes one package (syslinux-tftpboot) and a service to start to have TFTP up and running. We'll stick with the default path, **/var/lib/tftpboot**. + +#### PXE + +If you're not already familiar with PXE, you might like to take a quick look at [the Wikipedia page][7]. For this article I'll keep it short—we will serve some files over TFTP, which DHCP guides our hardware to. + +You'll want **images/pxeboot/{initrd.img,vmlinuz}** from the OS distribution media for pxeboot. These need to be copied to **/var/lib/tftpboot/pxeboot**. The referenced Ansible plays **do not do this step, **so you need to copy them over yourself. + +We'll also need to serve the OS installation files. There are two approaches to this: 1) install, via HTTP, from the internet or 2) install, again via HTTP, from a local server. For my testing, since I'm on a private LAN (and I guess you are too), the fastest installation method is the second. The easiest way to prepare this is to mount the DVD image and rsync the `images`, **`Packages` **and `repodata` directories to your webserver location. The referenced Ansible plays will install **httpd** but won't copy over these files, so don't forget to do that after running [the play][8]. For this article, we'll once again stick with defaults for simplicity—so files need to be copied to Apache's standard docroot, **/var/www/html**. + +#### Directories + +We should end up with directory structures like this: + +##### PXE/TFTP + + +``` +[root@c7 ~]# tree /var/lib/tftpboot/pxe{b*,l*cfg} +/var/lib/tftpboot/pxeboot +└── 6 +├── initrd.img +└── vmlinuz +``` + +##### httpd + + +``` +[root@c7 ~]# tree -d /var/www/html/ +/var/www/html/ +├── 6 -> centos/6 +├── 7 -> centos/7 +├── centos +│ ├── 6 +│ │ └── os +│ │ └── x86_64 +│ │ ├── images +│ │ │ └── pxeboot +│ │ ├── Packages +│ │ └── repodata +│ └── 7 +│ └── os +│ └── x86_64 +│ ├── images +│ │ └── pxeboot +│ ├── Packages +│ └── repodata +└── ks +``` + +You'll notice my web setup appears a little less simple than the words above! I've pasted my actual structure to give you some ideas. The hardware I'm using is really old, and even getting CentOS 7 to work was horrible (if you're interested, it's due to the lack of [cciss][9] drivers for the HP Smart Array controller—yes, [there is an answer][10], but it takes a lot of faffing to make work), so all examples are of CentOS 6. I also wanted a flexible setup that could install many versions. Here I've done that using symlinks—this arrangement will work just fine for RHEL too, for example. The basic structure is present though—note the images, Packages and repodata directories. + +These paths relate directly to [the PXE menu][11] file we'll serve up and [the kickstart file][12] too. + +#### If you don't have DHCP + +If you can't manage your own DHCP server or the owners of your infrastructure can't help, there is another option. In the past, I've used [iPXE][13] to create a boot image that I've loaded as virtual media. A lot of out-of-band/lights-out-management (LOM) interfaces on modern hardware support this functionality. You can make a custom embedded PXE menu in seconds with iPXE. I won't cover that here, but if it turns out to be a problem for you, then drop me a line [on Twitter][14] and I'll look at doing a follow-up blog post if enough people request it. + +### Installing hardware + +We've got our structure in place now, and we can [kickstart][15] a server. 
Before we do, we have to add some configuration to the TFTP setup to enable a given piece of hardware to pick up the PXE boot menu. + +It's here we come across a small chicken/egg problem. We need a host's MAC address to create a link to the specific piece of hardware we want to kickstart. If the hardware is already running and we can access it with Ansible, that's great—we have a way of finding out the boot interface MAC address via the setup module (see [the reinstall play][16]). If it's a new piece of tin, however, we need to get the MAC address and tell our setup what to do with it. This probably means some manual intervention—booting the server and looking at a screen or maybe getting the MAC from a manifest or such like. Whichever way you get hold of it, we can tell our play about it via the inventory. + +Let's put a custom variable into our simple INI format [inventory file][17], but run a play to set up TFTP… + + +``` +(pip)iMac:ansible-hw-bootstrap$ ansible-inventory --host hp.box +{ +"ilo_ip": "192.168.1.68", +"ilo_password": "administrator" +} +(pip)iMac:ansible-hw-bootstrap$ ansible-playbook plays/install.yml + +PLAY [kickstart] ******************************************************************************************************* + +TASK [Host inventory entry has a MAC address] ************************************************************************** +failed: [ks.box] (item=hp.box) => { +"assertion": "hostvars[item]['mac'] is defined", +"changed": false, +"evaluated_to": false, +"item": "hp.box", +"msg": "Assertion failed" +} + +PLAY RECAP ************************************************************************************************************* +ks.box : ok=0 changed=0 unreachable=0 failed=1 +``` + +Uh oh, play failed. It [contains a check][18] that the host we're about to install actually has a MAC address added. Let's fix that and run the play again… + + +``` +(pip)iMac:ansible-hw-bootstrap$ ansible-inventory --host hp.box +{ +"ilo_ip": "192.168.1.68", +"ilo_password": "administrator", +"mac": "00:AA:BB:CC:DD:EE" +} +(pip)iMac:ansible-hw-bootstrap$ ansible-playbook plays/install.yml + +PLAY [kickstart] ******************************************************************************************************* + +TASK [Host inventory entry has a MAC address] ************************************************************************** +ok: [ks.box] => (item=hp.box) => { +"changed": false, +"item": "hp.box", +"msg": "All assertions passed" +} + +TASK [Set PXE menu to install] ***************************************************************************************** +ok: [ks.box] => (item=hp.box) + +TASK [Reboot target host for PXE boot] ********************************************************************************* +skipping: [ks.box] => (item=hp.box) + +PLAY RECAP ************************************************************************************************************* +ks.box : ok=2 changed=0 unreachable=0 failed=0 +``` + +That worked! What did it do? Looking at the pxelinux.cfg directory under our TFTP root, we can see a symlink… + + +``` +[root@c7 pxelinux.cfg]# pwd +/var/lib/tftpboot/pxelinux.cfg +[root@c7 pxelinux.cfg]# l +total 12 +drwxr-xr-x. 2 root root 65 May 13 14:23 ./ +drwxr-xr-x. 4 root root 4096 May 2 22:13 ../ +-r--r--r--. 1 root root 515 May 2 12:22 00README +lrwxrwxrwx. 1 root root 7 May 13 14:12 01-00-aa-bb-cc-dd-ee -> install +-rw-r--r--. 1 root root 682 May 2 22:07 install +``` + +The **install** file is symlinked to a file named after our MAC address. 
This is the key, useful piece. It will ensure our hardware with MAC address **00-aa-bb-cc-dd-ee** is served a PXE menu when it boots from its network card. + +So let's boot our machine. + +Usefully, Ansible has some [remote management modules][19]. We're working with an HP server here, so we can use the [hpilo_boot][20] module to save us from having to interact directly with the LOM web interface. + +Let's run the reinstall play on a booted server… + +The neat thing about the **hpilo_boot** module, you'll notice, is it sets the boot medium to be the network. When the installation completes, the server restarts and boots from its hard drive. The eagle-eyed amongst you will have spotted the critical problem with this—what happens if the server boots to its network card again? It will pick up the PXE menu and promptly reinstall itself. I would suggest removing the symlink as a "belt and braces" step then. I will leave that as an exercise for you, dear reader. Hint: I would make the new server do a 'phone home' on boot, to somewhere, which runs a clean-up job. Since you wouldn't need the console open, as I had here to demonstrate what's going on in the background, a 'phone home' job would also give a nice indication that the process completed. Ansible, [naturally][21]. Good luck! + +If you've any thoughts or comments on this process, please let me know. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/hardware-bootstrapping-ansible + +作者:[Mark Phillips][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/markp/users/feeble/users/markp +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/server_data_system_admin.png?itok=q6HCfNQ8 (computer servers processing data) +[2]: https://www.meetup.com/Ansible-London/ +[3]: https://github.com/phips/ansible-hw-bootstrap +[4]: http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html +[5]: https://en.m.wikipedia.org/wiki/Trivial_File_Transfer_Protocol +[6]: https://www.centos.org +[7]: https://en.m.wikipedia.org/wiki/Preboot_Execution_Environment +[8]: https://github.com/phips/ansible-hw-bootstrap/blob/master/plays/kickstart.yml +[9]: https://linux.die.net/man/4/cciss +[10]: https://serverfault.com/questions/611182/centos-7-x64-and-hp-proliant-dl360-g5-scsi-controller-compatibility +[11]: https://github.com/phips/ansible-hw-bootstrap/blob/master/roles/kickstart/templates/pxe_install.j2#L10 +[12]: https://github.com/phips/ansible-hw-bootstrap/blob/master/roles/kickstart/templates/local6.ks.j2#L3 +[13]: https://ipxe.org +[14]: https://twitter.com/thismarkp +[15]: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/installation_guide/ch-kickstart2 +[16]: https://github.com/phips/ansible-hw-bootstrap/blob/master/plays/reinstall.yml +[17]: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html +[18]: https://github.com/phips/ansible-hw-bootstrap/blob/master/plays/install.yml#L9 +[19]: https://docs.ansible.com/ansible/latest/modules/list_of_remote_management_modules.html +[20]: https://docs.ansible.com/ansible/latest/modules/hpilo_boot_module.html#hpilo-boot-module +[21]: https://github.com/phips/ansible-demos/tree/master/roles/phone_home diff --git a/sources/tech/20190523 Run your blog on GitHub 
Pages with Python.md b/sources/tech/20190523 Run your blog on GitHub Pages with Python.md new file mode 100644 index 0000000000..1e3634a327 --- /dev/null +++ b/sources/tech/20190523 Run your blog on GitHub Pages with Python.md @@ -0,0 +1,235 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Run your blog on GitHub Pages with Python) +[#]: via: (https://opensource.com/article/19/5/run-your-blog-github-pages-python) +[#]: author: (Erik O'Shaughnessy https://opensource.com/users/jnyjny/users/jasperzanjani/users/jasperzanjani/users/jasperzanjani/users/jnyjny/users/jasperzanjani) + +Run your blog on GitHub Pages with Python +====== +Create a blog with Pelican, a Python-based blogging platform that works +well with GitHub. +![Raspberry Pi and Python][1] + +[GitHub][2] is a hugely popular web service for source code control that uses [Git][3] to synchronize local files with copies kept on GitHub's servers so you can easily share and back up your work. + +In addition to providing a user interface for code repositories, GitHub also enables users to [publish web pages][4] directly from a repository. The website generation package GitHub recommends is [Jekyll][5], written in Ruby. Since I'm a bigger fan of [Python][6], I prefer [Pelican][7], a Python-based blogging platform that works well with GitHub. + +Pelican and Jekyll both transform content written in [Markdown][8] or [reStructuredText][9] into HTML to generate static websites, and both generators support themes that allow unlimited customization. + +In this article, I'll describe how to install Pelican, set up your GitHub repository, run a quickstart helper, write some Markdown files, and publish your first page. I'll assume that you have a [GitHub account][10], are comfortable with [basic Git commands][11], and want to publish a blog using Pelican. + +### Install Pelican and create the repo + +First things first, Pelican (and **ghp-import** ) must be installed on your local machine. This is super easy with [pip][12], the Python package installation tool (you have pip right?): + + +``` +`$ pip install pelican ghp-import` +``` + +Next, open a browser and create a new repository on GitHub for your sweet new blog. Name it as follows (substituting your GitHub username for here and throughout this tutorial): + + +``` +`https://GitHub.com/username/username.github.io` +``` + +Leave it empty; we will fill it with compelling blog content in a moment. + +Using a command line (you command line right?), clone your empty Git repository to your local machine: + + +``` +$ git clone blog +$ cd blog +``` + +### That one weird trick… + +Here's a not-super-obvious trick about publishing web content on GitHub. For user pages (pages hosted in repos named _username.github.io_ ), the content is served from the **master** branch. + +I strongly prefer not to keep all the Pelican configuration files and raw Markdown files in **master** , rather just the web content. So I keep the Pelican configuration and the raw content in a separate branch I like to call **content**. (You can call it whatever you want, but the following instructions will call it **content**.) I like this structure since I can throw away all the files in **master** and re-populate it with the **content** branch. + + +``` +$ git checkout -b content +Switched to a new branch 'content' +``` + +### Configure Pelican + +Now it's time for content configuration. 
Pelican provides a great initialization tool called **pelican-quickstart** that will ask you a series of questions about your blog. + + +``` +$ pelican-quickstart +Welcome to pelican-quickstart v3.7.1. + +This script will help you create a new Pelican-based website. + +Please answer the following questions so this script can generate the files +needed by Pelican. + +> Where do you want to create your new web site? [.] +> What will be the title of this web site? Super blog +> Who will be the author of this web site? username +> What will be the default language of this web site? [en] +> Do you want to specify a URL prefix? e.g., (Y/n) n +> Do you want to enable article pagination? (Y/n) +> How many articles per page do you want? [10] +> What is your time zone? [Europe/Paris] US/Central +> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) y +> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) y +> Do you want to upload your website using FTP? (y/N) n +> Do you want to upload your website using SSH? (y/N) n +> Do you want to upload your website using Dropbox? (y/N) n +> Do you want to upload your website using S3? (y/N) n +> Do you want to upload your website using Rackspace Cloud Files? (y/N) n +> Do you want to upload your website using GitHub Pages? (y/N) y +> Is this your personal page (username.github.io)? (y/N) y +Done. Your new project is available at /Users/username/blog +``` + +You can take the defaults on every question except: + + * Website title, which should be unique and special + * Website author, which can be a personal username or your full name + * Time zone, which may not be in Paris + * Upload to GitHub Pages, which is a "y" in our case + + + +After answering all the questions, Pelican leaves the following in the current directory: + + +``` +$ ls +Makefile content/ develop_server.sh* +fabfile.py output/ pelicanconf.py +publishconf.py +``` + +You can check out the [Pelican docs][13] to find out how to use those files, but we're all about getting things done _right now_. No, I haven't read the docs yet either. + +### Forge on + +Add all the Pelican-generated files to the **content** branch of the local Git repo, commit the changes, and push the local changes to the remote repo hosted on GitHub by entering: + + +``` +$ git add . +$ git commit -m 'initial pelican commit to content' +$ git push origin content +``` + +This isn't super exciting, but it will be handy if we need to revert edits to one of these files. + +### Finally getting somewhere + +OK, now you can get bloggy! All of your blog posts, photos, images, PDFs, etc., will live in the **content** directory, which is initially empty. To begin creating a first post and an About page with a photo, enter: + + +``` +$ cd content +$ mkdir pages images +$ cp /Users/username/SecretStash/HotPhotoOfMe.jpg images +$ touch first-post.md +$ touch pages/about.md +``` + +Next, open the empty file **first-post.md** in your favorite text editor and add the following: + + +``` +title: First Post on My Sweet New Blog +date: +author: Your Name Here + +# I am On My Way To Internet Fame and Fortune! + +This is my first post on my new blog. While not super informative it +should convey my sense of excitement and eagerness to engage with you, +the reader! +``` + +The first three lines contain metadata that Pelican uses to organize things. 
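A post can carry more than just a title, date, and author. The field names below are standard Pelican metadata; the values are only placeholders:

```
title: First Post on My Sweet New Blog
date: 2019-05-23 10:00
modified: 2019-05-23 18:30
category: meta
tags: pelican, blogging
slug: first-post
authors: Your Name Here
summary: A short blurb used on index pages and in feeds
status: published
```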
There are lots of different metadata you can put there; again, the docs are your best bet for learning more about the options. + +Now, open the empty file **pages/about.md** and add this text: + + +``` +title: About +date: + +![So Schmexy][my_sweet_photo] + +Hi, I am and I wrote this epic collection of Interweb +wisdom. In days of yore, much of this would have been deemed sorcery +and I would probably have been burned at the stake. + +😆 + +[my_sweet_photo]: {filename}/images/HotPhotoOfMe.jpg +``` + +You now have three new pieces of web content in your content directory. Of the content branch. That's a lot of content. + +### Publish + +Don't worry; the payoff is coming! + +All that's left to do is: + + * Run Pelican to generate the static HTML files in **output** : [code]`$ pelican content -o output -s publishconf.py` +``` +* Use **ghp-import** to add the contents of the **output** directory to the **master** branch: [code]`$ ghp-import -m "Generate Pelican site" --no-jekyll -b master output` +``` + * Push the local master branch to the remote repo: [code]`$ git push origin master` +``` + * Commit and push the new content to the **content** branch: [code] $ git add content +$ git commit -m 'added a first post, a photo and an about page' +$ git push origin content +``` + + + +### OMG, I did it! + +Now the exciting part is here, when you get to view what you've published for everyone to see! Open your browser and enter: + + +``` +`https://username.github.io` +``` + +Congratulations on your new blog, self-published on GitHub! You can follow this pattern whenever you want to add more pages or articles. Happy blogging. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/run-your-blog-github-pages-python + +作者:[Erik O'Shaughnessy][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jnyjny/users/jasperzanjani/users/jasperzanjani/users/jasperzanjani/users/jnyjny/users/jasperzanjani +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/getting_started_with_python.png?itok=MFEKm3gl (Raspberry Pi and Python) +[2]: https://github.com/ +[3]: https://git-scm.com +[4]: https://help.github.com/en/categories/github-pages-basics +[5]: https://jekyllrb.com +[6]: https://python.org +[7]: https://blog.getpelican.com +[8]: https://guides.github.com/features/mastering-markdown +[9]: http://docutils.sourceforge.net/docs/user/rst/quickref.html +[10]: https://github.com/join?source=header-home +[11]: https://git-scm.com/docs +[12]: https://pip.pypa.io/en/stable/ +[13]: https://docs.getpelican.com diff --git a/sources/tech/20190523 Testing a Go-based S2I builder image.md b/sources/tech/20190523 Testing a Go-based S2I builder image.md new file mode 100644 index 0000000000..a6facd515d --- /dev/null +++ b/sources/tech/20190523 Testing a Go-based S2I builder image.md @@ -0,0 +1,222 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Testing a Go-based S2I builder image) +[#]: via: (https://opensource.com/article/19/5/source-image-golang-part-3) +[#]: author: (Chris Collins https://opensource.com/users/clcollins) + +Testing a Go-based S2I builder image +====== +In the third article in this series on Source-to-Image for Golang 
+applications, build your application image and take it out for a spin. +![gopher illustrations][1] + +In the first two articles in this series, we explored the general [requirements of a Source To Image (S2I) system][2] and [prepared an environment][3] specifically for a Go (Golang) application. Now let's give it a spin. + +### Building the builder image + +Once the Dockerfile and Source-to-Image (S2I) scripts are ready, the Golang builder image can be created with the **docker build** command: + + +``` +`docker build -t golang-builder .` +``` + +This will produce a builder image named **golang-builder** with the context of our current directory. + +### Building the application image + +The golang-builder image is not much use without an application to build. For this exercise, we will build a simple **hello-world** application. + +#### GoHelloWorld + +Let's meet our test app, [GoHelloWorld][4]. Download the latest [version of Go][5] if you want to follow along. There are two important (for this exercise) files in this repository: + + +``` +// goHelloWorld.go +package main + +import "fmt" + +func main() { +fmt.Println("Hello World!") +} +``` + +This is a very basic app, but it will work fine for testing the builder image. We also have a basic test for GoHelloWorld: + + +``` +// goHelloWorld_test.go +package main + +import "testing" + +func TestMain(t *testing.T) { +t.Log("Hello World!") +} +``` + +#### Build the application image + +Building the application image entails running the **s2i build** command with arguments for the repository containing the code to build (or **.** to build with code from the current directory), the name of the builder image to use, and the name of the resulting application image to create. + + +``` +`$ s2i build https://github.com/clcollins/goHelloWorld.git golang-builder go-hello-world` +``` + +To build from a local directory on a filesystem, replace the Git URL with a period to represent the current directory. For example: + + +``` +`$ s2i build . golang-builder go-hello-world` +``` + +_Note:_ If a Git repository is initialized in the current directory, S2I will fetch the code from the repository URL rather than using the local code. This results in local, uncommitted changes not being used when building the image (if you're unfamiliar with what I mean by "uncommitted changes," brush up on your [Git terminology over here][6]). Directories that are not Git-initialized repositories behave as expected. + +#### Run the application image + +Once the application image is built, it can be tested by running it with the **Docker** command. Source-to-Image has replaced the **CMD** in the image with the run script created earlier so it will execute the **/go/src/app/app** binary created during the build process: + + +``` +$ docker run go-hello-world +Hello World! +``` + +Success! We now have a compiled Go application inside a Docker image created by passing the contents of a Git repo to S2I and without needing a special Dockerfile for our application. + +The application image we just built includes not only the application, but also its source code, test code, the S2I scripts, Golang libraries, and _much of the Debian Linux distribution_ (because the Golang image is based on the Debian base image). 
The resulting image is not small: + + +``` +$ docker images | grep go-hello-world +go-hello-world latest 75a70c79a12f 4 minutes ago 789 MB +``` + +For applications written in languages that are interpreted at runtime and depend on linked libraries, like Ruby or Python, having all the source code and operating system are necessary to run. The build images will be pretty large as a result, but at least we know it will be able to run. With these languages, we could stop here with our S2I builds. + +There is the option, however, to more explicitly define the production requirements for the application. + +Since the resulting application image would be the same image that would run the production app, I want to assure the required ports, volumes, and environment variables are added to the Dockerfile for the builder image. By writing these in a declarative way, our app is closer to the [Twelve-Factor App][7] recommended practice. For example, if we were to use the builder image to create application images for a Ruby on Rails application running [Puma][8], we would want to open a port to access the webserver. We should add the line **PORT 3000** in the builder Dockerfile so it can be inherited by all the images generated from it. + +But for the Go app, we can do better. + +### Build a runtime image + +Since our builder image created a statically compiled Go binary with our application, we can create a final "runtime" image containing _only_ the binary and none of the other cruft. + +Once the application image is created, the compiled GoHelloWorld app can be extracted and put into a new, empty image using the save-artifacts script. + +#### Runtime files + +Only the application binary and a Dockerfile are required to create the runtime image. + +##### Application binary + +Inside of the application image, the save-artifacts script is written to stream a tar archive of the app binary to stdout. We can check the files included in the tar archive created by save-artifacts with the **-vt** flags for tar: + + +``` +$ docker run go-hello-world /usr/libexec/s2i/save-artifacts | tar -tvf - +-rwxr-xr-x 1001/root 1997502 2019-05-03 18:20 app +``` + +If this results in errors along the lines of "This does not appear to be a tar archive," the save-artifacts script is probably outputting other data in addition to the tar stream, as mentioned above. We must make sure to suppress all output other than the tar stream. + +If everything looks OK, we can use **save-artifacts** to copy the binary out of the application image: + + +``` +`$ docker run go-hello-world /usr/libexec/s2i/save-artifacts | tar -xf -` +``` + +This will copy the app file into the current directory, ready to be added to its own image. + +##### Dockerfile + +The Dockerfile is extremely simple, with only three lines. The **FROM scratch** source denotes that it uses an empty, blank parent image. The rest of the Dockerfile specifies copying the app binary into **/app** in the image and using that binary as the image **ENTRYPOINT** : + + +``` +FROM scratch +COPY app /app +ENTRYPOINT ["/app"] +``` + +Save this Dockerfile as **Dockerfile-runtime**. + +Why **ENTRYPOINT** and not **CMD**? We could do either, but since there is nothing else in the image (no filesystem, no shell), we couldn't run anything else anyway. 
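One sanity check is worth doing at this point: a scratch image contains no shared libraries at all, so this only works because the Go binary is statically linked. If you want to confirm that before building, two standard commands will tell you:

```
$ file app   # should describe the binary as statically linked
$ ldd app    # prints "not a dynamic executable" for a static binary
```

If `ldd` lists shared library dependencies instead, the usual fix is to rebuild the application with `CGO_ENABLED=0` set so the Go toolchain produces a fully static binary.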
+ +#### Building the runtime image + +With the Dockerfile and binary ready to go, we can build the new runtime image: + + +``` +`$ docker build -f Dockerfile-runtime -t go-hello-world:slim .` +``` + +The new runtime image is considerably smaller—just 2MB! + + +``` +$ docker images | grep -e 'go-hello-world *slim' +go-hello-world slim 4bd091c43816 3 minutes ago 2 MB +``` + +We can test that it still works as expected with **docker run** : + + +``` +$ docker run go-hello-world:slim +Hello World! +``` + +### Bootstrapping s2i with s2i create + +While we hand-created all the S2I files in this example, the **s2i** command has a sub-command to help scaffold all the files we might need for a Source-to-Image build: **s2i create**. + +Using the **s2i create** command, we can generate a new project, creatively named **go-hello-world-2** in the **./ghw2** directory: + + +``` +$ s2i create go-hello-world-2 ./ghw2 +$ ls ./ghw2/ +Dockerfile Makefile README.md s2i test +``` + +The **create** sub-command creates a placeholder Dockerfile, a README.md with information about how to use Source-to-Image, some example S2I scripts, a basic test framework, and a Makefile. The Makefile is a great way to automate building and testing the Source-to-Image builder image. Out of the box, running **make** will build our image, and it can be extended to do more. For example, we could add steps to build a base application image, run tests, or generate a runtime Dockerfile. + +### Conclusion + +In this tutorial, we have learned how to use Source-to-Image to build a custom Golang builder image, create an application image using **s2i build** , and extract the application binary to create a super-slim runtime image. + +In a future extension to this series, I would like to look at how to use the builder image we created with [OKD][9] to automatically deploy our Golang apps with **buildConfigs** , **imageStreams** , and **deploymentConfigs**. Please let me know in the comments if you are interested in me continuing the series, and thanks for reading. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/source-image-golang-part-3 + +作者:[Chris Collins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clcollins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/go-golang.png?itok=OAW9BXny (gopher illustrations) +[2]: https://opensource.com/article/19/5/source-image-golang-part-1 +[3]: https://opensource.com/article/19/5/source-image-golang-part-2 +[4]: https://github.com/clcollins/goHelloWorld.git +[5]: https://golang.org/doc/install +[6]: /article/19/2/git-terminology +[7]: https://12factor.net/ +[8]: http://puma.io/ +[9]: https://okd.io/ diff --git a/sources/tech/20190524 Choosing the right model for maintaining and enhancing your IoT project.md b/sources/tech/20190524 Choosing the right model for maintaining and enhancing your IoT project.md new file mode 100644 index 0000000000..06b8fd6de3 --- /dev/null +++ b/sources/tech/20190524 Choosing the right model for maintaining and enhancing your IoT project.md @@ -0,0 +1,96 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Choosing the right model for maintaining and enhancing your IoT project) +[#]: via: (https://opensource.com/article/19/5/model-choose-embedded-iot-development) +[#]: author: (Drew Moseley https://opensource.com/users/drewmoseley) + +Choosing the right model for maintaining and enhancing your IoT project +====== +Learn more about these two models: Centralized Golden Master and +Distributed Build System +![][1] + +In today's connected embedded device market, driven by the [Internet of things (IoT)][2], a large share of devices in development are based on Linux of one form or another. The prevalence of low-cost boards with ready-made Linux distributions is a key driver in this. Acquiring hardware, building your custom code, connecting the devices to other hardware peripherals and the internet as well as device management using commercial cloud providers has never been easier. A developer or development team can quickly prototype a new application and get the devices in the hands of potential users. This is a good thing and results in many interesting new applications, as well as many questionable ones. + +When planning a system design for beyond the prototyping phase, things get a little more complex. In this post, we want to consider mechanisms for developing and maintaining your base [operating system (OS) image][3]. There are many tools to help with this but we won't be discussing individual tools; of interest here is the underlying model for maintaining and enhancing this image and how it will make your life better or worse. + +There are two primary models for generating these images: + + 1. Centralized Golden Master + 2. Distributed Build System + + + +These categories mirror the driving models for [Source Code Management (SCM)][4] systems, and many of the arguments regarding centralized vs. distributed are applicable when discussing OS images. + +### Centralized Golden Master + +Hobbyist and maker projects primarily use the Centralized Golden Master method of creating and maintaining application images. 
This fact gives this model the benefit of speed and familiarity, allowing developers to quickly set up such a system and get it running. The speed comes from the fact that many device manufacturers provide canned images for their off-the-shelf hardware. For example, boards from such families as the [BeagleBone][5] and [Raspberry Pi][6] offer ready-to-use OS images and [flashing][7]. Relying on these images means having your system up and running in just a few mouse clicks. The familiarity is due to the fact that these images are generally based on a desktop distro many device developers have already used, such as [Debian][8]. Years of using Linux can then directly transfer to the embedded design, including the fact that the packaging utilities remain largely the same, and it is simple for designers to get the extra software packages they need. + +There are a few downsides of such an approach. The first is that the [golden master image][9] is generally a choke point, resulting in lost developer productivity after the prototyping stage since everyone must wait for their turn to access the latest image and make their changes. In the SCM realm, this practice is equivalent to a centralized system with individual [file locking][10]. Only the developer with the lock can work on any given file. + +![Development flow with the Centralized Golden Master model.][11] + +The second downside with this approach is image reproducibility. This issue is usually managed by manually logging into the target systems, installing packages using the native package manager, configuring applications and dot files, and then modifying the system configuration files in place. Once this process is completed, the disk is imaged using the **dd** utility, or an equivalent, and then distributed. + +Again, this approach creates a minefield of potential issues. For example, network-based package feeds may cease to exist, and the base software provided by the vendor image may change. Scripting can help mitigate these issues. However, these scripts tend to be fragile and break when changes are made to configuration file formats or the vendor's base software packages. + +The final issue that arises with this development model is reliance on third parties. If the hardware vendor's image changes don't work for your design, you may need to invest significant time to adapt. To make matters even more complicated, as mentioned before, the hardware vendors often based their images on an upstream project such as Debian or Ubuntu. This situation introduces even more third parties who can affect your design. + +### Distributed Build System + +This method of creating and maintaining an image for your application relies on the generation of target images separate from the target hardware. The developer workflow here is similar to standard software development using an SCM system; the image is fully buildable by tooling and each developer can work independently. Changes to the system are made via edits to metadata files (scripting, recipes, configuration files, etc) and then the tooling is rerun to generate an updated image. These metadata files are then managed using an SCM system. Individual developers can merge the latest changes into their working copies to produce their development images. In this case, no golden master image is needed and developers can avoid the associated bottleneck. + +Release images are then produced by a build system using standard SCM techniques to pull changes from all the developers. 
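+
+To make the contrast with the Golden Master approach concrete, here is a rough sketch of what adding a single package might look like under each model; the device name, image file, and Buildroot option below are only placeholders, not taken from any particular project:
+
+```
+# Golden Master: tweak the live device by hand, then capture the disk again
+ssh root@mydevice 'apt-get install -y i2c-tools'
+dd if=/dev/sdX of=golden-master-v2.img bs=4M
+
+# Distributed build: edit version-controlled metadata, then rebuild the image
+# (Buildroot shown as one example; OpenEmbedded would use a recipe or image config)
+echo 'BR2_PACKAGE_I2C_TOOLS=y' >> configs/mydevice_defconfig
+git commit -am "Add i2c-tools to the image"
+make mydevice_defconfig && make
+```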
+ +![Development flow with the Distributed Build System model.][12] + +Working in this fashion allows the size of your development team to increase without reducing productivity of individual developers. All engineers can work independently of the others. Additionally, this build setup ensures that your builds can be reproduced. Using standard SCM workflows can ensure that, at any future time, you can regenerate a specific build allowing for long term maintenance, even if upstream providers are no longer available. Similar to working with distributed SCM tools however, there is additional policy that needs to be in place to enable reproducible, release candidate images. Individual developers have their own copies of the source and can build their own test images but for a proper release engineering effort, development teams will need to establish merging and branching standards and ensure that all changes targeted for release eventually get merged into a well-defined branch. Many upstream projects already have well-defined processes for this kind of release strategy (for instance, using *-stable and *-next branches). + +The primary downside of this approach is the lack of familiarity. For example, adding a package to the image normally requires creating a recipe of some kind and then updating the definitions so that the package binaries are included in the image. This is very different from running apt while logged into a running system. The learning curve of these systems can be daunting but the results are more predictable and scalable and are likely a better choice when considering a design for a product that will be mass produced. + +Dedicated build systems such as [OpenEmbedded][13] and [Buildroot][14] use this model as do distro packaging tools such as [debootstrap][15] and [multistrap][16]. Newer tools such as [Isar][17], [debos][18], and [ELBE][19] also use this basic model. Choices abound, and it is worth the investment to learn one or more of these packages for your designs. The long term maintainability and reproducibility of these systems will reduce risk in your design by allowing you to generate reproducible builds, track all the source code, and remove your dependency on third-party providers continued existence. + +#### Conclusion + +To be clear, the distributed model does suffer some of the same issues as mentioned for the Golden Master Model; especially the reliance on third parties. This is a consequence of using systems designed by others and cannot be completely avoided unless you choose a completely roll-your-own approach which comes with a significant cost in development and maintenance. + +For prototyping and proof-of-concept level design, and a team of just a few developers, the Golden Master Model may well be the right choice given restrictions in time and budget that are present at this stage of development. For low volume, high touch designs, this may be an acceptable trade-off for production use. + +For general production use, the benefits in terms of team size scalability, image reproducibility and developer productivity greatly outweigh the learning curve and overhead of systems implementing the distributed model. Support from board and chip vendors is also widely available in these systems reducing the upfront costs of developing with them. For your next product, I strongly recommend starting the design with a serious consideration of the model being used to generate the base OS image. 
If you choose to prototype with the golden master model with the intention of migrating to the distributed model, make sure to build sufficient time in your schedule for this effort; the estimates will vary widely depending on the specific tooling you choose as well as the scope of the requirements and the out-of-the-box availability of software packages your code relies on. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/model-choose-embedded-iot-development + +作者:[Drew Moseley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/drewmoseley +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe +[2]: https://en.wikipedia.org/wiki/Internet_of_things +[3]: https://en.wikipedia.org/wiki/System_image +[4]: https://en.wikipedia.org/wiki/Version_control +[5]: http://beagleboard.org/ +[6]: https://www.raspberrypi.org/ +[7]: https://en.wikipedia.org/wiki/Flash_memory +[8]: https://www.debian.org/ +[9]: https://en.wikipedia.org/wiki/Software_release_life_cycle#RTM +[10]: https://en.wikipedia.org/wiki/File_locking +[11]: https://opensource.com/sites/default/files/uploads/cgm1_500.png (Development flow with the Centralized Golden Master model.) +[12]: https://opensource.com/sites/default/files/uploads/cgm2_500.png (Development flow with the Distributed Build System model.) +[13]: https://www.openembedded.org/ +[14]: https://buildroot.org/ +[15]: https://wiki.debian.org/Debootstrap +[16]: https://wiki.debian.org/Multistrap +[17]: https://github.com/ilbers/isar +[18]: https://github.com/go-debos/debos +[19]: https://elbe-rfs.org/ diff --git a/sources/tech/20190524 Dual booting Windows and Linux using UEFI.md b/sources/tech/20190524 Dual booting Windows and Linux using UEFI.md new file mode 100644 index 0000000000..b281b6036b --- /dev/null +++ b/sources/tech/20190524 Dual booting Windows and Linux using UEFI.md @@ -0,0 +1,104 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Dual booting Windows and Linux using UEFI) +[#]: via: (https://opensource.com/article/19/5/dual-booting-windows-linux-uefi) +[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss/users/ckrzen) + +Dual booting Windows and Linux using UEFI +====== +A quick rundown of setting up Linux and Windows to dual boot on the same +machine, using the Unified Extensible Firmware Interface (UEFI). +![Linux keys on the keyboard for a desktop computer][1] + +Rather than doing a step-by-step how-to guide to configuring your system to dual boot, I’ll highlight the important points. As an example, I will refer to my new laptop that I purchased a few months ago. I first installed [Ubuntu Linux][2] onto the entire hard drive, which destroyed the pre-installed [Windows 10][3] installation. After a few months, I decided to install a different Linux distribution, and so also decided to re-install Windows 10 alongside [Fedora Linux][4] in a dual boot configuration. I’ll highlight some essential facts to get started. + +### Firmware + +Dual booting is not just a matter of software. 
Or, it is, but it involves changing your firmware, which among other things tells your machine how to begin the boot process. Here are some firmware-related issues to keep in mind. + +#### UEFI vs. BIOS + +Before attempting to install, make sure your firmware configuration is optimal. Most computers sold today have a new type of firmware known as [Unified Extensible Firmware Interface (UEFI)][5], which has pretty much replaced the other firmware known as [Basic Input Output System (BIOS)][6], which is often included through the mode many providers call Legacy Boot. + +I had no need for BIOS, so I chose UEFI mode. + +#### Secure Boot + +One other important setting is Secure Boot. This feature detects whether the boot path has been tampered with, and stops unapproved operating systems from booting. For now, I disabled this option to ensure that I could install Fedora Linux. According to the Fedora Project Wiki [Features/Secure Boot ][7] Fedora Linux will work with it enabled. This may be different for other Linux distributions —I plan to revisit this setting in the future. + +In short, if you find that you cannot install your Linux OS with this setting active, disable Secure Boot and try again. + +### Partitioning the boot drive + +If you choose to dual boot and have both operating systems on the same drive, you have to break it into partitions. Even if you dual boot using two different drives, most Linux installations are best broken into a few basic partitions for a variety of reasons. Here are some options to consider. + +#### GPT vs MBR + +If you decide to manually partition your boot drive in advance, I recommend using the [GUID Partition Table (GPT)][8] rather than the older [Master Boot Record (MBR)][9]. Among the reasons for this change, there are two specific limitations of MBR that GPT doesn’t have: + + * MBR can hold up to 15 partitions, while GPT can hold up to 128. + * MBR only supports up to 2 terabytes, while GPT uses 64-bit addresses which allows it to support disks up to 8 million terabytes. + + + +If you have shopped for hard drives recently, then you know that many of today’s drives exceed the 2 terabyte limit. + +#### The EFI system partition + +If you are doing a fresh installation or using a new drive, there are probably no partitions to begin with. In this case, the OS installer will create the first one, which is the [EFI System Partition (ESP)][10]. If you choose to manually partition your drive using a tool such as [gdisk][11], you will need to create this partition with several parameters. Based on the existing ESP, I set the size to around 500MB and assigned it the ef00 (EFI System) partition type. The UEFI specification requires the format to be FAT32/msdos, most likely because it is supportable by a wide range of operating systems. + +![Partitions][12] + +### Operating System Installation + +Once you accomplish the first two tasks, you can install your operating systems. While I focus on Windows 10 and Fedora Linux here, the process is fairly similar when installing other combinations as well. + +#### Windows 10 + +I started the Windows 10 installation and created a 20 Gigabyte Windows partition. Since I had previously installed Linux on my laptop, the drive had an ESP, which I chose to keep. I deleted all existing Linux and swap partitions to start fresh, and then started my Windows installation. The Windows installer automatically created another small partition—16 Megabytes—called the [Microsoft Reserved Partition (MSR)][13]. 
Roughly 400 Gigabytes of unallocated space remained on the 512GB boot drive once this was finished. + +I then proceeded with and completed the Windows 10 installation process. I then rebooted into Windows to make sure it was working, created my user account, set up wi-fi, and completed other tasks that need to be done on a first-time OS installation. + +#### Fedora Linux + +I next moved to install Linux. I started the process, and when it reached the disk configuration steps, I made sure not to change the Windows NTFS and MSR partitions. I also did not change the EPS, but I did set its mount point to **/boot/efi**. I then created the usual ext4 formatted partitions, **/** (root), **/boot** , and **/home**. The last partition I created was Linux **swap**. + +As with Windows, I continued and completed the Linux installation, and then rebooted. To my delight, at boot time the [GRand][14] [Unified Boot Loader (GRUB)][14] menu provided the choice to select either Windows or Linux, which meant I did not have to do any additional configuration. I selected Linux and completed the usual steps such as creating my user account. + +### Conclusion + +Overall, the process was painless. In past years, there has been some difficulty navigating the changes from UEFI to BIOS, plus the introduction of features such as Secure Boot. I believe that we have now made it past these hurdles and can reliably set up multi-boot systems. + +I don’t miss the [Linux LOader (LILO)][15] anymore! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/dual-booting-windows-linux-uefi + +作者:[Alan Formy-Duval][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alanfdoss/users/ckrzen +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_keyboard_desktop.png?itok=I2nGw78_ (Linux keys on the keyboard for a desktop computer) +[2]: https://www.ubuntu.com +[3]: https://www.microsoft.com/en-us/windows +[4]: https://getfedora.org +[5]: https://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface +[6]: https://en.wikipedia.org/wiki/BIOS +[7]: https://fedoraproject.org/wiki/Features/SecureBoot +[8]: https://en.wikipedia.org/wiki/GUID_Partition_Table +[9]: https://en.wikipedia.org/wiki/Master_boot_record +[10]: https://en.wikipedia.org/wiki/EFI_system_partition +[11]: https://sourceforge.net/projects/gptfdisk/ +[12]: /sites/default/files/u216961/gdisk_screenshot_s.png +[13]: https://en.wikipedia.org/wiki/Microsoft_Reserved_Partition +[14]: https://en.wikipedia.org/wiki/GNU_GRUB +[15]: https://en.wikipedia.org/wiki/LILO_(boot_loader) diff --git a/sources/tech/20190527 4 open source mobile apps for Nextcloud.md b/sources/tech/20190527 4 open source mobile apps for Nextcloud.md new file mode 100644 index 0000000000..c97817e3c1 --- /dev/null +++ b/sources/tech/20190527 4 open source mobile apps for Nextcloud.md @@ -0,0 +1,140 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 open source mobile apps for Nextcloud) +[#]: via: (https://opensource.com/article/19/5/mobile-apps-nextcloud) +[#]: author: (Scott Nesbitt https://opensource.com/users/scottnesbitt) + +4 open source mobile apps for Nextcloud +====== +Increase Nextcloud's value by 
turning it into an on-the-go information +hub. +![][1] + +I've been using [Nextcloud][2] (and before that, ownCloud), an open source alternative to file syncing and storage services like Dropbox and Google Drive, for many years. It's been both reliable and useful, and it respects my privacy. + +While Nextcloud is great at both syncing and storage, it's much more than a place to dump your files. Thanks to applications that you can fold into Nextcloud, it becomes more of an information hub than a storage space. + +While I usually interact with Nextcloud using the desktop client or in a browser, I'm not always at my computer (or any computer that I trust). So it's important that I can work with Nextcloud using my [LineageOS][3]-powered smartphone or tablet. + +To do that, I use several open source apps that work with Nextcloud. Let's take a look at four of them. + +As you've probably guessed, this article looks at the Android version of those apps. I grabbed mine from [F-Droid][4], although you get them from other Android app markets. You might be able to get some or all of them from Apple's App Store if you're an iOS person. + +### Working with files and folders + +The obvious app to start with is the [Nextcloud sync client][5]. This little app links your phone or tablet to your Nextcloud account. + +![Nextcloud mobile app][6] + +Using the app, you can: + + * Create folders + * Upload one or more files + * Sync files between your device and server + * Rename or remove files + * Make files available offline + + + +You can also tap a file to view or edit it. If your device doesn't have an app that can open the file, then you're out of luck. You can still download it to your phone or tablet though. + +### Reading news feeds + +Remember all the whining that went on when Google pulled the plug on Google Reader in 2013? This despite Google giving users several months to find an alternative. And, yes, there are alternatives. One of them, believe it or not, is Nextcloud. + +Nextcloud has a built-in RSS reader. All you need to do to get started is upload an [OPML][7] file containing your feeds or manually add a site's RSS feed to Nextcloud. + +Going mobile is easy, too, with the Nextcloud [News Reader app][8]. + +![Nextcloud News app][9] + +Unless you configure the app to sync when you start it up, you'll need to swipe down from the top of the app to load updates to your feeds. Depending on how many feeds you have, and how many unread items are in those feeds, syncing takes anywhere from a few seconds to half a minute. + +From there, tap an item to read it in the News app. + +![Nextcloud News app][10] + +You can also add feeds or open what you're reading in your device's default web browser. + +### Reading and writing notes + +I don't use Nextcloud's [Notes][11] app all that often (I'm more of a [Standard Notes][12] person). That said, you might find the Notes app comes in handy. + +How? By giving you a lightweight way to take [plain text][13] notes on your mobile device. The Notes app syncs any notes you have in your Nextcloud account and displays them in chronological order—newest or last-edited notes first. + +![Nextcloud Notes app][14] + +Tap a note to read or edit it. You can also create a note by tapping the **+** button, then typing what you need to type. + +![Nextcloud Notes app][15] + +There's no formatting, although you can add markup (like Markdown) to the note. Once you're done editing, the app syncs your note with Nextcloud. 
+ +### Accessing your bookmarks + +Nextcloud has a decent bookmarking tool, and its [Bookmarks][16] app makes it easy to work with the tool on your phone or tablet. + +![Nextcloud Bookmarks app][17] + +Like the Notes app, Bookmarks displays your bookmarks in chronological order, with the newest appearing first in the list. + +If you tagged your bookmarks in Nextcloud, you can swipe left in the app to see a list of those tags rather than a long list of bookmarks. Tap a tag to view the bookmarks under it. + +![Nextcloud Bookmarks app][18] + +From there, just tap a bookmark. It opens in your device's default browser. + +You can also add a bookmark within the app. To do that, tap the **+** menu and add the bookmark. + +![Nextcloud Bookmarks app][19] + +You can include: + + * The URL + * A title for the bookmark + * A description + * One or more tags + + + +### Is that all? + +Definitely not. There are apps for [Nextcloud Deck][20] (a personal kanban tool) and [Nextcloud Talk][21] (a voice and video chat app). There are also a number of third-party apps that work with Nextcloud. Just do a search for _Nextcloud_ or _ownCloud_ in your favorite app store to track them down. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/mobile-apps-nextcloud + +作者:[Scott Nesbitt][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/scottnesbitt +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_BUS_cloudagenda_20140341_jehb.png?itok=1NGs3_n4 +[2]: https://nextcloud.com/ +[3]: https://lineageos.org/ +[4]: https://opensource.com/life/15/1/going-open-source-android-f-droid +[5]: https://f-droid.org/en/packages/com.nextcloud.client/ +[6]: https://opensource.com/sites/default/files/uploads/nextcloud-app.png (Nextcloud mobile app) +[7]: http://en.wikipedia.org/wiki/OPML +[8]: https://f-droid.org/en/packages/de.luhmer.owncloudnewsreader/ +[9]: https://opensource.com/sites/default/files/uploads/nextcloud-news.png (Nextcloud News app) +[10]: https://opensource.com/sites/default/files/uploads/nextcloud-news-reading.png (Nextcloud News app) +[11]: https://f-droid.org/en/packages/it.niedermann.owncloud.notes +[12]: https://opensource.com/article/18/12/taking-notes-standard-notes +[13]: https://plaintextproject.online +[14]: https://opensource.com/sites/default/files/uploads/nextcloud-notes.png (Nextcloud Notes app) +[15]: https://opensource.com/sites/default/files/uploads/nextcloud-notes-add.png (Nextcloud Notes app) +[16]: https://f-droid.org/en/packages/org.schabi.nxbookmarks/ +[17]: https://opensource.com/sites/default/files/uploads/nextcloud-bookmarks.png (Nextcloud Bookmarks app) +[18]: https://opensource.com/sites/default/files/uploads/nextcloud-bookmarks-tags.png (Nextcloud Bookmarks app) +[19]: https://opensource.com/sites/default/files/uploads/nextcloud-bookmarks-add.png (Nextcloud Bookmarks app) +[20]: https://f-droid.org/en/packages/it.niedermann.nextcloud.deck +[21]: https://f-droid.org/en/packages/com.nextcloud.talk2 diff --git a/sources/tech/20190527 Blockchain 2.0 - Introduction To Hyperledger Sawtooth -Part 12.md b/sources/tech/20190527 Blockchain 2.0 - Introduction To Hyperledger Sawtooth -Part 12.md new file mode 100644 index 0000000000..8ab4b5b0b8 --- /dev/null +++ 
b/sources/tech/20190527 Blockchain 2.0 - Introduction To Hyperledger Sawtooth -Part 12.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – Introduction To Hyperledger Sawtooth [Part 12]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-sawtooth/) +[#]: author: (editor https://www.ostechnix.com/author/editor/) + +Blockchain 2.0 – Introduction To Hyperledger Sawtooth [Part 12] +====== + +![Introduction To Hyperledger Sawtooth][1] + +After having discussed the [**Hyperledger Fabric**][2] project in detail on this blog, its time we moved on to the next project of interest at the Hyperledger camp. **Hyperledger Sawtooth** is Intel’s contribution to the [**Blockchain**][3] consortium mission to develop enterprise ready modular distributed ledgers and applications. Sawtooth is another attempt at creating an easy to roll out blockchain ledger for businesses keeping their resource constraints and security requirements in mind. While platforms such as [**Ethereum**][4] will in theory offer similar functionality when placed in capable hands, Sawtooth readily provides a lot of customizability and is built from the ground up for specific enterprise level use cases. + +The Hyperledger project page has an introductory video detailing the Sawtooth architecture and platform. We’re attaching it here for readers to get a quick round-up about the product. + +Moving to the intricacies of the Sawtooth project, there are **five primary and significant differences** between Sawtooth and its alternatives. The post from now and on will explore these differences and at the end will mention an example real world use case for Hyperledger Sawtooth in managing supply chains. + +### Distinction 1: The consensus algorithm – PoET + +This is perhaps amongst the most notable and significant changes that Sawtooth brings to the fore. While exploring all the different consensus algorithms that exist for blockchain platforms these days is out of the scope of this post, what is to be noted is that Sawtooth uses a **Proof Of Elapsed Time** (POET) based consensus algorithm. Such a system for validating transactions and blocks on the blockchain is considered to be resources efficient unlike other computation heavy systems which use the likes of **Proof of work** or **Proof of stake** algorithms. + +POET is designed to utilize the security and tamper proof features of modern processors with reference implementations utilizing **Intel’s trusted execution environment** (TEE) architecture on its modern CPUs. The fact that the execution of the validating program takes use of a TEE along with a **“lottery”** system that is implemented to choose the **validator** or **node** to fulfill the request makes the process of creating blocks on the Sawtooth architecture secure and resource efficient at the same time. + +The POET algorithm basically elects a validator randomly based on a stochastic process. The probability of a particular node being selected depends on a host of pointers one of which depends on the amount of computing resources the said node has contributed to the ledger so far. The chosen validator then proceeds to timestamp the said block of data and shares it with the permissioned nodes in the network so that there remains a reliable record of the blockchains immutability. 
This method of electing the “validator” node was developed by **Intel** and, so far, has been shown to exhibit zero bias or error in executing its function. 
+
+### Distinction 2: A fully separated level of abstraction between the application level and core system
+
+As mentioned, the Sawtooth platform takes modularity to the next level. Here in the reference implementation that is shared by the [**Hyperledger project**][5] foundation, the core system that enables users to create a distributed ledger and the application run-time environment (the virtual environment where the applications developed to run on the blockchain, otherwise known as [**smart contracts**][6] or **chaincode**, execute) are separated by a full level of abstraction. This essentially means that developers can separately code applications in any programming language of their choice instead of having to conform to and follow platform-specific languages. The Sawtooth platform provides support for the following contract languages out of the box: **Java**, **Python**, **C++**, **Go**, **JavaScript** and **Rust**. This distinction between the core system and application levels is obtained by defining a custom transaction family for each application that is developed on the platform.
+
+A transaction family contains the following:
+
+  * **A transaction processor**: basically, your application's logic or business logic.
+  * **Data model**: a system that defines and handles data storage and processing at the system level.
+  * **Client-side handler**: to handle the end-user side of your application.
+
+
+
+Multiple low-level transaction families such as this may be defined in a permissioned blockchain and used in a modular fashion throughout the network. For instance, if a consortium of banks chose to implement it, they could come together and define common functionalities or rules for their applications and then plug and play the transaction families they need in their respective systems without having to develop everything on their own.
+
+### Distinction 3: SETH
+
+It is without doubt that a blockchain future would have Ethereum as one of the key players. People at the Hyperledger foundation know this well. The **Hyperledger Burrow project** is in fact meant to address the existence of entities working on multiple platforms by providing a way for developers to use Ethereum blockchain specifications to build custom distributed applications using the **EVM** (Ethereum virtual machine).
+
+Basically speaking, Burrow lets you customize and deploy Ethereum-based [**DApps**][7] (written in **Solidity**) in non-public blockchains (the kind developed for use at the Hyperledger foundation). The Burrow and Sawtooth projects teamed up and created **SETH**. The **Sawtooth-Ethereum Integration project** (SETH) is meant to add Ethereum (Solidity) smart contract functionality and compatibility to Sawtooth-based distributed ledger networks. A lesser-known benefit of SETH is the fact that applications (DApps) and smart contracts written for the EVM can now be easily ported to Sawtooth.
+
+### Distinction 4: ACID principle and ability to batch process
+
+A rather path-breaking feature of Sawtooth is its ability to batch transactions together and then package them into a block. The blocks and transactions will still be subject to the **ACID** principle (**A**tomicity, **C**onsistency, **I**solation and **D**urability). The implication of these two facts is highlighted using an example as follows. 
+
+Let’s say you have **6 transactions** to be packaged into **two blocks (4+2)**. Block A has 4 transactions which individually need to succeed in order for the next block of 2 transactions to be timestamped and validated. Assuming they succeed, the next block of 2 transactions is processed, and assuming those also succeed, the entire package of 6 transactions is deemed successful and the overall business logic is deemed successful. For instance, assume you’re selling a car. Different transactions at the ends of the buyer (block A) and the seller (block B) will need to be completed in order for the trade to be deemed valid. Ownership is transferred only if both sides are successful in carrying out their individual transactions.
+
+Such a feature will improve accountability on individual ends by separating responsibilities and, by the same principle, make faults and errors easier to recognize. The ACID principle is implemented by coding for a custom transaction processor and defining a transaction family that will store data in the said block structure.
+
+### Distinction 5: Parallel transaction execution
+
+Blockchain platforms usually follow a serial **first come, first served** route to executing transactions and follow a queueing system for the same. Sawtooth provides support for both **serial** and **parallel execution** of transactions. Parallel transaction processing offers significant performance gains by reducing overall transaction latencies: faster transactions are processed alongside slower and bigger transactions at the same time on the platform, instead of transactions of all types being kept waiting.
+
+The methodology followed to implement the parallel transaction paradigm efficiently takes care of double-spending problems, and of errors due to multiple changes being made to the same state, by defining a custom scheduler for the network which can identify processes and their predecessors.
+
+### Real world use case: Supply Chain Management using Sawtooth and IoT
+
+The Sawtooth **official website** lists seafood traceability as an example use case for the Sawtooth platform. The basic template is applicable to almost all supply-chain-related use cases.
+
+![][8]
+
+Figure 1: From the Hyperledger Sawtooth Official Website
+
+Traditional supply chain management solutions in this space work largely through manual record keeping, which leaves room for massive fraud, errors, and significant quality control issues. IoT has been cited as a solution to overcome such issues with supply chains in day-to-day use. Inexpensive GPS-enabled RFID tags can be attached to fresh catch or produce, as the case may be, and scanned automatically at the individual processing centres to update the record. Buyers or middlemen can verify or track the information easily using a client on their mobile device to know the route their dinner has taken before arriving on their plates.
+
+While tracking seafood may seem like a first-world problem in countries like India, the change an IoT-enabled system can bring to public delivery systems in developing countries would be most welcome. The application is available for demo at this **[link][9]** and users are encouraged to fiddle with it and adopt it to suit their needs. 
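+
+If you would rather poke at a live Sawtooth network than the hosted demo, the platform also ships a REST API component. Assuming a local validator with the REST API on its default port (8008 is the usual default, but this is not covered in the article, so check your own deployment), the ledger can be inspected with plain HTTP calls:
+
+```
+curl http://localhost:8008/blocks   # list the blocks committed so far
+curl http://localhost:8008/state    # dump the current ledger state entries
+```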
+ +**Reference:** + + * [**The Official Hyperledger Sawtooth Docs**][10] + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-sawtooth/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Hyperledger-Sawtooth-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-fabric/ +[3]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[4]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ +[5]: https://www.ostechnix.com/blockchain-2-0-an-introduction-to-hyperledger-project-hlp/ +[6]: https://www.ostechnix.com/blockchain-2-0-explaining-smart-contracts-and-its-types/ +[7]: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/ +[8]: http://www.ostechnix.com/wp-content/uploads/2019/05/Sawtooth.png +[9]: https://sawtooth.hyperledger.org/examples/seafood.html +[10]: https://sawtooth.hyperledger.org/docs/core/releases/1.0/contents.html diff --git a/sources/tech/20190527 How To Enable Or Disable SSH Access For A Particular User Or Group In Linux.md b/sources/tech/20190527 How To Enable Or Disable SSH Access For A Particular User Or Group In Linux.md new file mode 100644 index 0000000000..a717d05ed8 --- /dev/null +++ b/sources/tech/20190527 How To Enable Or Disable SSH Access For A Particular User Or Group In Linux.md @@ -0,0 +1,300 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Enable Or Disable SSH Access For A Particular User Or Group In Linux?) +[#]: via: (https://www.2daygeek.com/allow-deny-enable-disable-ssh-access-user-group-in-linux/) +[#]: author: (2daygeek http://www.2daygeek.com/author/2daygeek/) + +How To Enable Or Disable SSH Access For A Particular User Or Group In Linux? +====== + +As per your organization standard policy, you may need to allow only the list of users that are allowed to access the Linux system. + +Or you may need to allow only few groups, which are allowed to access the Linux system. + +How to achieve this? What is the best way? How to achieve this in a simple way? + +Yes, there are many ways are available to perform this. + +However, we need to go with simple and easy method. + +If so, it can be done by making the necessary changes in `/etc/ssh/sshd_config` file. + +In this article we will show you, how to perform this in details. + +Why are we doing this? due to security reason. Navigate to the following URL to know more about **[openSSH][1]** usage. + +### What Is SSH? + +openssh stands for OpenBSD Secure Shell. Secure Shell (ssh) is a free open source networking tool which allow us to access remote system over an unsecured network using Secure Shell (SSH) protocol. + +It’s a client-server architecture. It handles user authentication, encryption, transferring files between computers and tunneling. + +These can be accomplished via traditional tools such as telnet or rcp, these are insecure and use transfer password in cleartext format while performing any action. + +### How To Allow A User To Access SSH In Linux? 
+ +We can allow/enable the ssh access for a particular user or list of the users using the following method. + +If you would like to allow more than one user then you have to add the users with space in the same line. + +To do so, just append the following value into `/etc/ssh/sshd_config` file. In this example, we are going to allow ssh access for `user3`. + +``` +# echo "AllowUsers user3" >> /etc/ssh/sshd_config +``` + +You can double check this by running the following command. + +``` +# cat /etc/ssh/sshd_config | grep -i allowusers +AllowUsers user3 +``` + +That’s it. Just bounce the ssh service and see the magic. + +``` +# systemctl restart sshd + +# service restart sshd +``` + +Simple open a new terminal or session and try to access the Linux system with different user. Yes, `user2` isn’t allowed for SSH login and will be getting an error message as shown below. + +``` +# ssh [email protected] +[email protected]'s password: +Permission denied, please try again. +``` + +Output: + +``` +Mar 29 02:00:35 CentOS7 sshd[4900]: User user2 from 192.168.1.6 not allowed because not listed in AllowUsers +Mar 29 02:00:35 CentOS7 sshd[4900]: input_userauth_request: invalid user user2 [preauth] +Mar 29 02:00:40 CentOS7 unix_chkpwd[4902]: password check failed for user (user2) +Mar 29 02:00:40 CentOS7 sshd[4900]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user2 +Mar 29 02:00:43 CentOS7 sshd[4900]: Failed password for invalid user user2 from 192.168.1.6 port 42568 ssh2 +``` + +At the same time `user3` is allowed to login into the system because it’s in allowed users list. + +``` +# ssh [email protected] +[email protected]'s password: +[[email protected] ~]$ +``` + +Output: + +``` +Mar 29 02:01:13 CentOS7 sshd[4939]: Accepted password for user3 from 192.168.1.6 port 42590 ssh2 +Mar 29 02:01:13 CentOS7 sshd[4939]: pam_unix(sshd:session): session opened for user user3 by (uid=0) +``` + +### How To Deny Users To Access SSH In Linux? + +We can deny/disable the ssh access for a particular user or list of the users using the following method. + +If you would like to disable more than one user then you have to add the users with space in the same line. + +To do so, just append the following value into `/etc/ssh/sshd_config` file. In this example, we are going to disable ssh access for `user1`. + +``` +# echo "DenyUsers user1" >> /etc/ssh/sshd_config +``` + +You can double check this by running the following command. + +``` +# cat /etc/ssh/sshd_config | grep -i denyusers +DenyUsers user1 +``` + +That’s it. Just bounce the ssh service and see the magic. + +``` +# systemctl restart sshd + +# service restart sshd +``` + +Simple open a new terminal or session and try to access the Linux system with Deny user. Yes, `user1` is in denyusers list. So, you will be getting an error message as shown below when you are try to login. + +``` +# ssh [email protected] +[email protected]'s password: +Permission denied, please try again. 
+``` + +Output: + +``` +Mar 29 01:53:42 CentOS7 sshd[4753]: User user1 from 192.168.1.6 not allowed because listed in DenyUsers +Mar 29 01:53:42 CentOS7 sshd[4753]: input_userauth_request: invalid user user1 [preauth] +Mar 29 01:53:46 CentOS7 unix_chkpwd[4755]: password check failed for user (user1) +Mar 29 01:53:46 CentOS7 sshd[4753]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user1 +Mar 29 01:53:48 CentOS7 sshd[4753]: Failed password for invalid user user1 from 192.168.1.6 port 42522 ssh2 +``` + +### How To Allow Groups To Access SSH In Linux? + +We can allow/enable the ssh access for a particular group or groups using the following method. + +If you would like to allow more than one group then you have to add the groups with space in the same line. + +To do so, just append the following value into `/etc/ssh/sshd_config` file. In this example, we are going to disable ssh access for `2g-admin` group. + +``` +# echo "AllowGroups 2g-admin" >> /etc/ssh/sshd_config +``` + +You can double check this by running the following command. + +``` +# cat /etc/ssh/sshd_config | grep -i allowgroups +AllowGroups 2g-admin +``` + +Run the following command to know the list of the users are belongs to this group. + +``` +# getent group 2g-admin +2g-admin:x:1005:user1,user2,user3 +``` + +That’s it. Just bounce the ssh service and see the magic. + +``` +# systemctl restart sshd + +# service restart sshd +``` + +Yes, `user3` is allowed to login into the system because user3 is belongs to `2g-admin` group. + +``` +# ssh [email protected] +[email protected]'s password: +[[email protected] ~]$ +``` + +Output: + +``` +Mar 29 02:10:21 CentOS7 sshd[5165]: Accepted password for user1 from 192.168.1.6 port 42640 ssh2 +Mar 29 02:10:22 CentOS7 sshd[5165]: pam_unix(sshd:session): session opened for user user1 by (uid=0) +``` + +Yes, `user2` is allowed to login into the system because user2 is belongs to `2g-admin` group. + +``` +# ssh [email protected] +[email protected]'s password: +[[email protected] ~]$ +``` + +Output: + +``` +Mar 29 02:10:38 CentOS7 sshd[5225]: Accepted password for user2 from 192.168.1.6 port 42642 ssh2 +Mar 29 02:10:38 CentOS7 sshd[5225]: pam_unix(sshd:session): session opened for user user2 by (uid=0) +``` + +When you are try to login into the system with other users which are not part of this group then you will be getting an error message as shown below. + +``` +# ssh [email protected] +[email protected]'s password: +Permission denied, please try again. +``` + +Output: + +``` +Mar 29 02:12:36 CentOS7 sshd[5306]: User ladmin from 192.168.1.6 not allowed because none of user's groups are listed in AllowGroups +Mar 29 02:12:36 CentOS7 sshd[5306]: input_userauth_request: invalid user ladmin [preauth] +Mar 29 02:12:56 CentOS7 unix_chkpwd[5310]: password check failed for user (ladmin) +Mar 29 02:12:56 CentOS7 sshd[5306]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=ladmin +Mar 29 02:12:58 CentOS7 sshd[5306]: Failed password for invalid user ladmin from 192.168.1.6 port 42674 ssh2 +``` + +### How To Deny Group To Access SSH In Linux? + +We can deny/disable the ssh access for a particular group or groups using the following method. + +If you would like to disable more than one group then you need to add the group with space in the same line. + +To do so, just append the following value into `/etc/ssh/sshd_config` file. 
+ +``` +# echo "DenyGroups 2g-admin" >> /etc/ssh/sshd_config +``` + +You can double check this by running the following command. + +``` +# # cat /etc/ssh/sshd_config | grep -i denygroups +DenyGroups 2g-admin + +# getent group 2g-admin +2g-admin:x:1005:user1,user2,user3 +``` + +That’s it. Just bounce the ssh service and see the magic. + +``` +# systemctl restart sshd + +# service restart sshd +``` + +Yes `user3` isn’t allowed to login into the system because it’s not part of `2g-admin` group. It’s in Denygroups. + +``` +# ssh [email protected] +[email protected]'s password: +Permission denied, please try again. +``` + +Output: + +``` +Mar 29 02:17:32 CentOS7 sshd[5400]: User user1 from 192.168.1.6 not allowed because a group is listed in DenyGroups +Mar 29 02:17:32 CentOS7 sshd[5400]: input_userauth_request: invalid user user1 [preauth] +Mar 29 02:17:38 CentOS7 unix_chkpwd[5402]: password check failed for user (user1) +Mar 29 02:17:38 CentOS7 sshd[5400]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.6 user=user1 +Mar 29 02:17:41 CentOS7 sshd[5400]: Failed password for invalid user user1 from 192.168.1.6 port 42710 ssh2 +``` + +Anyone can login into the system except `2g-admin` group. Hence, `ladmin` user is allowed to login into the system. + +``` +# ssh [email protected] +[email protected]'s password: +[[email protected] ~]$ +``` + +Output: + +``` +Mar 29 02:19:13 CentOS7 sshd[5432]: Accepted password for ladmin from 192.168.1.6 port 42716 ssh2 +Mar 29 02:19:13 CentOS7 sshd[5432]: pam_unix(sshd:session): session opened for user ladmin by (uid=0) +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/allow-deny-enable-disable-ssh-access-user-group-in-linux/ + +作者:[2daygeek][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.2daygeek.com/author/2daygeek/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/ssh-tutorials/ diff --git a/sources/tech/20190528 A Quick Look at Elvish Shell.md b/sources/tech/20190528 A Quick Look at Elvish Shell.md new file mode 100644 index 0000000000..82927332a7 --- /dev/null +++ b/sources/tech/20190528 A Quick Look at Elvish Shell.md @@ -0,0 +1,107 @@ +Translating by name1e5s +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A Quick Look at Elvish Shell) +[#]: via: (https://itsfoss.com/elvish-shell/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +A Quick Look at Elvish Shell +====== + +Everyone who comes to this site has some knowledge (no matter how slight) of the Bash shell that comes default of so many systems. There have been several attempts to create shells that solve some of the shortcomings of Bash that have appeared over the years. One such shell is Elvish, which we will look at today. + +### What is Elvish Shell? + +![Pipelines In Elvish][1] + +[Elvish][2] is more than just a shell. It is [also][3] “an expressive programming language”. 
It has a number of interesting features including: + + * Written in Go + * Built-in file manager, inspired by the [Ranger file manager][4] (`Ctrl + N`) + * Searchable command history (`Ctrl + R`) + * History of directories visited (`Ctrl + L`) + * Powerful pipelines that support structured data, such as lists, maps, and functions + * Includes a “standard set of control structures: conditional control with `if`, loops with `for` and `while`, and exception handling with `try`“ + * Support for [third-party modules via a package manager to extend Elvish][5] + * Licensed under the BSD 2-Clause license + + + +“Why is it named Elvish?” I hear you shout. Well, according to [their website][6], they chose their current name because: + +> In roguelikes, items made by the elves have a reputation of high quality. These are usually called elven items, but “elvish” was chosen because it ends with “sh”, a long tradition of Unix shells. It also rhymes with fish, one of the shells that influenced the philosophy of Elvish. + +### How to Install Elvish Shell + +Elvish is available in several mainstream distributions. + +Note that the software is very young. The most recent version is 0.12. According to the project’s [GitHub page][3]: “Despite its pre-1.0 status, it is already suitable for most daily interactive use.” + +![Elvish Control Structures][7] + +#### Debian and Ubuntu + +Elvish packages were introduced into Debian Buster and Ubuntu 17.10. Unfortunately, those packages are out of date and you will need to use a [PPA][8] to install the latest version. You will need to use the following commands: + +``` +sudo add-apt-repository ppa:zhsj/elvish +sudo apt update +sudo apt install elvish +``` + +#### Fedora + +Elvish is not available in the main Fedora repos. You will need to add the [FZUG Repository][9] to install Evlish. To do so, you will need to use these commands: + +``` +sudo dnf config-manager --add-repo=http://repo.fdzh.org/FZUG/FZUG.repol +sudo dnf install elvish +``` + +#### Arch + +Elvish is available in the [Arch User Repository][10]. + +I believe you know [how to change shell in Linux][11] so after installing you can switch to Elvish to use it. + +### Final Thoughts on Elvish Shell + +Personally, I have no reason to install Elvish on any of my systems. I can get most of its features by installing a couple of small command line programs or using already installed programs. + +For example, the search past commands feature already exists in Bash and it works pretty well. If you want to improve your ability to search past commands, I would recommend installing [fzf][12] instead. Fzf uses fuzzy search, so you don’t need to remember the exact command you are looking for. Fzf also allows you to preview and open files. + +I do think that the fact that Elvish is also a programming language is neat, but I’ll stick with Bash shell scripting until Elvish matures a little more. + +Have you every used Elvish? Do you think it would be worthwhile to install Elvish? What is your favorite Bash replacement? Please let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][13]. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/elvish-shell/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/05/pipelines-in-elvish.png?fit=800%2C421&ssl=1 +[2]: https://elv.sh/ +[3]: https://github.com/elves/elvish +[4]: https://ranger.github.io/ +[5]: https://github.com/elves/awesome-elvish +[6]: https://elv.sh/ref/name.html +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/05/Elvish-control-structures.png?fit=800%2C425&ssl=1 +[8]: https://launchpad.net/%7Ezhsj/+archive/ubuntu/elvish +[9]: https://github.com/FZUG/repo/wiki/Add-FZUG-Repository +[10]: https://aur.archlinux.org/packages/elvish/ +[11]: https://linuxhandbook.com/change-shell-linux/ +[12]: https://github.com/junegunn/fzf +[13]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20190529 Packit - packaging in Fedora with minimal effort.md b/sources/tech/20190529 Packit - packaging in Fedora with minimal effort.md new file mode 100644 index 0000000000..ad431547da --- /dev/null +++ b/sources/tech/20190529 Packit - packaging in Fedora with minimal effort.md @@ -0,0 +1,244 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Packit – packaging in Fedora with minimal effort) +[#]: via: (https://fedoramagazine.org/packit-packaging-in-fedora-with-minimal-effort/) +[#]: author: (Petr Hracek https://fedoramagazine.org/author/phracek/) + +Packit – packaging in Fedora with minimal effort +====== + +![][1] + +### What is packit + +Packit ([https://packit.dev/)][2] is a CLI tool that helps you auto-maintain your upstream projects into the Fedora operating system. But what does it really mean? + +As a developer, you might want to update your package in Fedora. If you’ve done it in the past, you know it’s no easy task. If you haven’t let me reiterate: it’s no easy task. + +And this is exactly where packit can help: once you have your package in Fedora, you can maintain your SPEC file upstream and, with just one additional configuration file, packit will help you update your package in Fedora when you update your source code upstream. + +Furthermore, packit can synchronize downstream changes to a SPEC file back into the upstream repository. This could be useful if the SPEC file of your package is changed in Fedora repositories and you would like to synchronize it into your upstream project. + +Packit also provides a way to build an SRPM package based on an upstream repository checkout, which can be used for building RPM packages in COPR. + +Last but not least, packit provides a status command. This command provides information about upstream and downstream repositories, like pull requests, release and more others. + +Packit provides also another two commands: _build_ and _create-update_. + +The command _packit build_ performs a production build of your project in Fedora build system – koji. You can Fedora version you want to build against using an option _–dist-git-branch_. The command _packit create-updates_ creates a Bodhi update for the specific branch using the option — _dist-git-branch_. 
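+
+As a rough sketch of how those two sub-commands are invoked (the exact sub-command spelling and the branch name here are illustrative and may differ between packit versions), a build and an update targeting the Fedora 30 branch would look something like this:
+
+```
+# production build in koji for the f30 dist-git branch
+$ packit build --dist-git-branch f30
+
+# file a Bodhi update for the same branch
+$ packit create-update --dist-git-branch f30
+```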
+ +### Installation + +You can install packit on Fedora using dnf: + +``` +sudo dnf install -y packit +``` + +### Configuration + +For demonstration use case, I have selected the upstream repository of **colin** ([https://github.com/user-cont/colin)][3]. Colin is a tool to check generic rules and best-practices for containers, dockerfiles, and container images. + +First of all, clone **colin** git repository: + +``` +$ git clone https://github.com/user-cont/colin.git +$ cd colin +``` + +Packit expects to run in the root of your git repository. + +Packit ([https://github.com/packit-service/packit/)][4] needs information about your project, which has to be stored in the upstream repository in the _.packit.yaml_ file (). + +See colin’s packit configuration file: + +``` +$ cat .packit.yaml +specfile_path: colin.spec +synced_files: + -.packit.yaml + - colin.spec +upstream_project_name: colin +downstream_package_name: colin +``` + +What do the values mean? + + * _specfile_path_ – a relative path to a spec file within the upstream repository (mandatory) + * _synced_files_ – a list of relative paths to files in the upstream repo which are meant to be copied to dist-git during an update + * _upstream_project_name_ – name of the upstream repository (e.g. in PyPI); this is used in %prep section + * _downstream_package_name_ – name of the package in Fedora (mandatory) + + + +For more information see the packit configuration documentation () + +### What can packit do? + +Prerequisite for using packit is that you are in a working directory of a git checkout of your upstream project. + +Before running any packit command, you need to do several actions. These actions are mandatory for filing a PR into the upstream or downstream repositories and to have access into the Fedora dist-git repositories. + +Export GitHub token taken from : + +``` +$ export GITHUB_TOKEN= +``` + +Obtain your Kerberos ticket needed for Fedora Account System (FAS) : + +``` +$ kinit @FEDORAPROJECT.ORG +``` + +Export your Pagure API keys taken from : + +``` +$ export PAGURE_USER_TOKEN= +``` + +Packit also needs a fork token to create a pull request. The token is taken from + +Do it by running: + +``` +$ export PAGURE_FORK_TOKEN= +``` + +Or store these tokens in the **~/.config/packit.yaml** file: + +``` +$ cat ~/.config/packit.yaml + +github_token: +pagure_user_token: +pagure_fork_token: +``` + +#### Propose a new upstream release in Fedora + +The command for this first use case is called _**propose-update**_ (). The command creates a new pull request in Fedora dist-git repository using a selected or the latest upstream release. + +``` +$ packit propose-update + +INFO: Running 'anitya' versioneer +Version in upstream registries is '0.3.1'. +Version in spec file is '0.3.0'. +WARNING Version in spec file is outdated +Picking version of the latest release from the upstream registry. +Checking out upstream version 0.3.1 +Using 'master' dist-git branch +Copying /home/vagrant/colin/colin.spec to /tmp/tmptfwr123c/colin.spec. +Archive colin-0.3.0.tar.gz found in lookaside cache (skipping upload). 
+INFO: Downloading file from URL https://files.pythonhosted.org/packages/source/c/colin/colin-0.3.0.tar.gz +100%[=============================>] 3.18M eta 00:00:00 +Downloaded archive: '/tmp/tmptfwr123c/colin-0.3.0.tar.gz' +About to upload to lookaside cache +won't be doing kinit, no credentials provided +PR created: https://src.fedoraproject.org/rpms/colin/pull-request/14 +``` + +Once the command finishes, you can see a PR in the Fedora Pagure instance which is based on the latest upstream release. Once you review it, it can be merged. + +![][5] + +#### Sync downstream changes back to the upstream repository + +Another use case is to sync downstream changes into the upstream project repository. + +The command for this purpose is called _**sync-from-downstream**_ (). Files synced into the upstream repository are mentioned in the _packit.yaml_ configuration file under the _synced_files_ value. + +``` +$ packit sync-from-downstream + +upstream active branch master +using "master" dist-git branch +Copying /tmp/tmplvxqtvbb/colin.spec to /home/vagrant/colin/colin.spec. +Creating remote fork-ssh with URL git@github.com:phracek/colin.git. +Pushing to remote fork-ssh using branch master-downstream-sync. +PR created: https://github.com/user-cont/colin/pull/229 +``` + +As soon as packit finishes, you can see the latest changes taken from the Fedora dist-git repository in the upstream repository. This can be useful, e.g. when Release Engineering performs mass-rebuilds and they update your SPEC file in the Fedora dist-git repository. + +![][6] + +#### Get the status of your upstream project + +If you are a developer, you may want to get all the information about the latest releases, tags, pull requests, etc. from the upstream and the downstream repository. Packit provides the _**status**_ command for this purpose. + +``` +$ packit status +Downstream PRs: + ID Title URL +---- -------------------------------- --------------------------------------------------------- + 14 Update to upstream release 0.3.1 https://src.fedoraproject.org//rpms/colin/pull-request/14 + 12 Upstream pr: 226 https://src.fedoraproject.org//rpms/colin/pull-request/12 + 11 Upstream pr: 226 https://src.fedoraproject.org//rpms/colin/pull-request/11 + 8 Upstream pr: 226 https://src.fedoraproject.org//rpms/colin/pull-request/8 + +Dist-git versions: +f27: 0.2.0 +f28: 0.2.0 +f29: 0.2.0 +f30: 0.2.0 +master: 0.2.0 + +GitHub upstream releases: +0.3.1 +0.3.0 +0.2.1 +0.2.0 +0.1.0 + +Latest builds: +f27: colin-0.2.0-1.fc27 +f28: colin-0.3.1-1.fc28 +f29: colin-0.3.1-1.fc29 +f30: colin-0.3.1-2.fc30 + +Latest bodhi updates: +Update Karma status +------------------ ------- -------- +colin-0.3.1-1.fc29 1 stable +colin-0.3.1-1.fc28 1 stable +colin-0.3.0-2.fc28 0 obsolete +``` + +#### Create an SRPM + +The last packit use case is to generate an SRPM package based on a git checkout of your upstream project. The packit command for SRPM generation is _**srpm**_. + +``` +$ packit srpm +Version in spec file is '0.3.1.37.g00bb80e'. +SRPM: /home/phracek/work/colin/colin-0.3.1.37.g00bb80e-1.fc29.src.rpm +``` + +### Packit as a service + +In the summer, the people behind packit would like to introduce packit as a service (). In this case, the packit GitHub application will be installed into the upstream repository and packit will perform all the actions automatically, based on the events it receives from GitHub or fedmsg. 
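
Until that service lands, the pieces above already chain together nicely on the command line. For example, the SRPM produced by `packit srpm` can be handed straight to other Fedora build tooling — note that the mock chroot and the COPR project name below are only placeholders for this sketch:

```
# rebuild the generated SRPM in a clean chroot with mock
$ mock -r fedora-30-x86_64 --rebuild colin-0.3.1.37.g00bb80e-1.fc29.src.rpm

# or submit the same SRPM to a COPR project
$ copr-cli build my-copr-project colin-0.3.1.37.g00bb80e-1.fc29.src.rpm
```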
+ +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/packit-packaging-in-fedora-with-minimal-effort/ + +作者:[Petr Hracek][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/phracek/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/05/packit3-816x345.png +[2]: https://packit.dev/ +[3]: https://github.com/user-cont/colin +[4]: https://github.com/packit-service/packit/ +[5]: https://fedoramagazine.org/wp-content/uploads/2019/05/colin_pr-1024x781.png +[6]: https://fedoramagazine.org/wp-content/uploads/2019/05/colin_upstream_pr-1-1024x677.png diff --git a/sources/tech/20190530 Creating a Source-to-Image build pipeline in OKD.md b/sources/tech/20190530 Creating a Source-to-Image build pipeline in OKD.md new file mode 100644 index 0000000000..713d117cb3 --- /dev/null +++ b/sources/tech/20190530 Creating a Source-to-Image build pipeline in OKD.md @@ -0,0 +1,484 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Creating a Source-to-Image build pipeline in OKD) +[#]: via: (https://opensource.com/article/19/5/creating-source-image-build-pipeline-okd) +[#]: author: (Chris Collins https://opensource.com/users/clcollins) + +Creating a Source-to-Image build pipeline in OKD +====== +S2I is an ideal way to build and compile Go applications in a repeatable +way, and it just gets better when paired with OKD BuildConfigs. +![][1] + +In the first three articles in this series, we explored the general [requirements][2] of a Source-to-Image (S2I) system and [prepared][3] and [tested][4] an environment specifically for a Go (Golang) application. This S2I build is perfect for local development or maintaining a builder image with a code pipeline, but if you have access to an [OKD][5] or OpenShift cluster (or [Minishift][6]), you can set up the entire workflow using OKD BuildConfigs, not only to build and maintain the builder image but also to use the builder image to create the application image and subsequent runtime image automatically. This way, the images can be rebuilt automatically when downstream images change and can trigger OKD deploymentConfigs to redeploy applications running from these images. + +### Step 1: Build the builder image in OKD + +As in local S2I usage, the first step is to create the builder image to build the GoHelloWorld test application that we can reuse to compile other Go-based applications. This first build step will be a Docker build, just like before, that pulls the Dockerfile and S2I scripts from a Git repository to build the image. Therefore, those files must be committed and available in a public Git repo (or you can use the companion [GitHub repo][7] for this article). + +_Note:_ OKD BuildConfigs do not require that source Git repos are public. To use a private repo, you must set up deploy keys and link the keys to a builder service account. This is not difficult, but for simplicity's sake, this exercise will use a public repository. + +#### Create an image stream for the builder image + +The BuildConfig will create a builder image for us to compile the GoHelloWorld app, but first, we need a place to store the image. In OKD, that place is an image stream. 
+ +An [image stream][8] and its tags are like a manifest or list of related images and image tags. It serves as an abstraction layer that allows you to reference an image, even if the image changes. Think of it as a collection of aliases that reference specific images and, as images are updated, automatically points to the new image version. The image stream is nothing except these aliases—just metadata about real images stored in a registry. + +An image stream can be created with the **oc create imagestream ** command, or it can be created from a YAML file with **oc create -f **. Either way, a brand-new image stream is a small placeholder object that is empty until it is populated with image references, either manually (who wants to do things manually?) or with a BuildConfig. + +Our golang-builder image stream looks like this: + + +``` +# imageStream-golang-builder.yaml +\--- +apiVersion: image.openshift.io/v1 +kind: ImageStream +metadata: +generation: 1 +name: golang-builder +spec: +lookupPolicy: +local: false +``` + +Other than a name, and a (mostly) empty spec, there is nothing really there. + +_Note:_ The **lookupPolicy** has to do with allowing Kubernetes-native components to resolve image stream references since image streams are OKD-native and not a part of the Kubernetes core. This topic is out of scope for this article, but you can read more about how it works in OKD's documentation [Using Image Streams with Kubernetes Resources][9]. + +Create an image stream for the builder image and its progeny. + + +``` +$ oc create -f imageStream-golangBuilder.yaml + +# Check the ImageStream +$ oc get imagestream golang-builder +NAME DOCKER REPO TAGS UPDATED +imagestream.image.openshift.io/golang-builder docker-registry.default.svc:5000/golang-builder/golang-builder +``` + +Note that the newly created image stream has no tags and has never been updated. + +#### Create a BuildConfig for the builder image + +In OKD, a [BuildConfig][10] describes how to build container images from a specific source and triggers for when they build. Don't be thrown off by the language—just as you might say you build and re-build the same image from a Dockerfile, but in reality, you have built multiple images, a BuildConfig builds and rebuilds the same image, but in reality, it creates multiple images. (And suddenly the reason for image streams becomes much clearer!) + +Our builder image BuildConfig describes how to build and re-build our builder image(s). The BuildConfig's core is made up of four important parts: + + 1. Build source + 2. Build strategy + 3. Build output + 4. Build triggers + + + +The _build source_ (predictably) describes where the thing that runs the build comes from. The builds described by the golang-builder BuildConfig will use the Dockerfile and S2I scripts we created previously and, using the Git-type build source, clone a Git repository to get the files to do the builds. + + +``` +source: +type: Git +git: +ref: master +uri: +``` + +The _build strategy_ describes what the build will do with the source files from the build source. The golang-builder BuildConfig mimics the **docker build** we used previously to build our local builder image by using the Docker-type build strategy. + + +``` +strategy: +type: Docker +dockerStrategy: {} +``` + +The **dockerStrategy** build type tells OKD to build a Docker image from the Dockerfile contained in the source specified by the build source. + +The _build output_ tells the BuildConfig what to do with the resulting image. 
In this case, we specify the image stream we created above and a tag to give to the image. As with our local build, we're tagging it with **golang-builder:1.12** as a reference to the Go version inherited from the parent image. + + +``` +output: +to: +kind: ImageStreamTag +name: golang-builder:1.12 +``` + +Finally, the BuildConfig defines a set of _build triggers_ —events that will cause the image to be rebuilt automatically. For this BuildConfig, a change to the BuildConfig configuration or an update to the upstream image (golang:1.12) will trigger a new build. + + +``` +triggers: +\- type: ConfigChange +\- imageChange: +type: ImageChange +``` + +Using the [builder image BuildConfig][11] from the GitHub repo as a reference (or just using that file), create a BuildConfig YAML file and use it to create the BuildConfig. + + +``` +$ oc create -f buildConfig-golang-builder.yaml + +# Check the BuildConfig +$ oc get bc golang-builder +NAME TYPE FROM LATEST +golang-builder Docker Git@master 1 +``` + +Because the BuildConfig included the "ImageChange" trigger, it immediately kicks off a new build. You can check that the build was created with the **oc get builds** command. + + +``` +# Check the Builds +$ oc get builds +NAME TYPE FROM STATUS STARTED DURATION +golang-builder-1 Docker Git@8eff001 Complete About a minute ago 13s +``` + +While the build is running and after it has completed, you can view its logs with **oc logs -f ** and see the Docker build output as you would locally. + + +``` +$ oc logs -f golang-builder-1-build +Step 1/11 : FROM docker.io/golang:1.12 +\---> 7ced090ee82e +Step 2/11 : LABEL maintainer "Chris Collins <[collins.christopher@gmail.com][12]>" +\---> 7ad989b765e4 +Step 3/11 : ENV CGO_ENABLED 0 GOOS linux GOCACHE /tmp STI_SCRIPTS_PATH /usr/libexec/s2i SOURCE_DIR /go/src/app +\---> 2cee2ce6757d + +<...> +``` + +If you did not include any build triggers (or did not have them in the right place), your build may not start automatically. You can manually kick off a new build with the **oc start-build** command. + + +``` +$ oc start-build golang-builder + +# Or, if you want to automatically tail the build log +$ oc start-build golang-builder --follow +``` + +When the build completes, the resulting image is tagged and pushed to the integrated image registry and the image stream is updated with the new image's information. Check the image stream with the **oc get imagestream** command to see that the new tag exists. + + +``` +$ oc get imagestream golang-builder +NAME DOCKER REPO TAGS UPDATED +golang-builder docker-registry.default.svc:5000/golang-builder/golang-builder 1.12 33 seconds ago +``` + +### Step 2: Build the application image in OKD + +Now that we have a builder image for our Golang applications created and stored within OKD, we can use this builder image to compile all of our Go apps. First on the block is the example GoHelloWorld app from our [local build example][4]. GoHelloWorld is a simple Go app that just outputs **Hello World!** when it's run. + +Just as we did in the local example with the **s2i build** command, we can tell OKD to use our builder image and S2I to build the application image for GoHelloWorld, compiling the Go binary from the source code in the [GoHelloWorld GitHub repository][13]. This can be done with a BuildConfig with a **sourceStrategy** build. + +#### Create an image stream for the application image + +First things first, we need to create an image stream to manage the image created by the BuildConfig. 
The image stream is just like the golang-builder image stream, just with a different name. Create it with **oc create is** or using a YAML file from the [GitHub repo][7]. + + +``` +$ oc create -f imageStream-goHelloWorld-appimage.yaml +imagestream.image.openshift.io/go-hello-world-appimage created +``` + +#### Create a BuildConfig for the application image + +Just as we did with the builder image BuildConfig, this BuildConfig will use the Git source option to clone our source code from the GoHelloWorld repository. + + +``` +source: +type: Git +git: +uri: +``` + +Instead of using a DockerStrategy build to create an image from a Dockerfile, this BuildConfig will use the sourceStrategy definition to build the image using S2I. + + +``` +strategy: +type: Source +sourceStrategy: +from: +kind: ImageStreamTag +name: golang-builder:1.12 +``` + +Note the **from:** hash in sourceStrategy. This tells OKD to use the **golang-builder:1.12** image we created previously for the S2I build. + +The BuildConfig will output to the new **appimage** image stream we created, and we'll include config- and image-change triggers to kick off new builds automatically if anything updates. + + +``` +output: +to: +kind: ImageStreamTag +name: go-hello-world-appimage:1.0 +triggers: +\- type: ConfigChange +\- imageChange: +type: ImageChange +``` + +Once again, create a BuildConfig or use the one from the GitHub repo. + + +``` +`$ oc create -f buildConfig-goHelloWorld-appimage.yaml` +``` + +The new build shows up alongside the golang-builder build and, because image-change triggers are specified, the build starts immediately. + + +``` +$ oc get builds +NAME TYPE FROM STATUS STARTED DURATION +golang-builder-1 Docker Git@8eff001 Complete 8 minutes ago 13s +go-hello-world-appimage-1 Source Git@99699a6 Running 44 seconds ago +``` + +If you want to watch the build logs, use the **oc logs -f** command. Once the application image build completes, it is pushed to the image stream we specified, then the new image stream tag is created. + + +``` +$ oc get is go-hello-world-appimage +NAME DOCKER REPO TAGS UPDATED +go-hello-world-appimage docker-registry.default.svc:5000/golang-builder/go-hello-world-appimage 1.0 10 minutes ago +``` + +Success! The GoHelloWorld app was cloned from source into a new image and compiled and tested using our S2I scripts. We can use the image as-is but, as with our local S2I builds, we can do better and create an image with just the new Go binary in it. + +### Step 3: Build the runtime image in OKD + +Now that the application image has been created with a compiled Go binary for the GoHelloWorld app, we can use something called chain builds to mimic when we extracted the binary from our local application image and created a new runtime image with just the binary in it. + +#### Create an image stream for the runtime image + +Once again, the first step is to create an image stream image for the new runtime image. + + +``` +# Create the ImageStream +$ oc create -f imageStream-goHelloWorld.yaml +imagestream.image.openshift.io/go-hello-world created + +# Get the ImageStream +$ oc get imagestream go-hello-world +NAME DOCKER REPO TAGS UPDATED +go-hello-world docker-registry.default.svc:5000/golang-builder/go-hello-world +``` + +#### Chain builds + +Chain builds are when one or more BuildConfigs are used to compile software or assemble artifacts for an application, and those artifacts are saved and used by a subsequent BuildConfig to generate a runtime image without re-compiling the code. 
+ +![Chain Build workflow][14] + +Chain build workflow + +#### Create a BuildConfig for the runtime image + +The runtime BuildConfig uses the DockerStrategy build to build the image from a Dockerfile—the same thing we did with the builder image BuildConfig. This time, however, the source is not a Git source, but a Dockerfile source. + +What is the Dockerfile source? It's an inline Dockerfile! Instead of cloning a repo with a Dockerfile in it and building that, we specify the Dockerfile in the BuildConfig. This is especially appropriate with our runtime Dockerfile because it's just three lines long. + + +``` +source: +type: Dockerfile +dockerfile: |- +FROM scratch +COPY app /app +ENTRYPOINT ["/app"] +images: +\- from: +kind: ImageStreamTag +name: go-hello-world-appimage:1.0 +paths: +\- sourcePath: /go/src/app/app +destinationDir: "." +``` + +Note that the Dockerfile in the Dockerfile source definition above is the same as the Dockerfile we used in the [third article][4] in this series when we built the slim GoHelloWorld image locally using the binary we extracted with the S2I **save-artifacts** script. + +Something else to note: **scratch** is a reserved word in Dockerfiles. Unlike other **FROM** statements, it does not define an _actual_ image, but rather that the first layer of this image will be nothing. It is defined with **kind: DockerImage** but does not have a registry or group/namespace/project string. Learn more about this behavior in this excellent [container best practices][15] reference. + +The **images** section of the Dockerfile source describes the source of the artifact(s) to be used in the build; in this case, from the appimage generated earlier. The **paths** subsection describes where to get the binary (i.e., in the **/go/src/app** directory of the app image, get the **app** binary) and where to save it (i.e., in the current working directory of the build itself: **"."** ). This allows the **COPY app /app** to grab the binary from the current working directory and add it to **/app** in the runtime image. + +_Note:_ **paths** is an array of source and the destination path _pairs_. Each entry in the list consists of a source and destination. In the example above, there is just one entry because there is just a single binary to copy. + +The Docker strategy is then used to build the inline Dockerfile. + + +``` +strategy: +type: Docker +dockerStrategy: {} +``` + +Once again, it is output to the image stream created earlier and includes build triggers to automatically kick off new builds. + + +``` +output: +to: +kind: ImageStreamTag +name: go-hello-world:1.0 +triggers: +\- type: ConfigChange +\- imageChange: +type: ImageChange +``` + +Create a BuildConfig YAML or use the runtime BuildConfig from the GitHub repo. + + +``` +$ oc create -f buildConfig-goHelloWorld.yaml +buildconfig.build.openshift.io/go-hello-world created +``` + +If you watch the logs, you'll notice the first step is **FROM scratch** , which confirms we're adding the compiled binary to a blank image. + + +``` +$ oc logs -f pod/go-hello-world-1-build +Step 1/5 : FROM scratch +\---> +Step 2/5 : COPY app /app +\---> 9e70e6c710f8 +Removing intermediate container 4d0bd9cef0a7 +Step 3/5 : ENTRYPOINT /app +\---> Running in 7a2dfeba28ca +\---> d697577910fc + +<...> +``` + +Once the build is completed, check the image stream tag to validate that the new image was pushed to the registry and image stream was updated. 
+ + +``` +$ oc get imagestream go-hello-world +NAME DOCKER REPO TAGS UPDATED +go-hello-world docker-registry.default.svc:5000/golang-builder/go-hello-world 1.0 4 minutes ago +``` + +Make a note of the **DOCKER REPO** string for the image. It will be used in the next section to run the image. + +### Did we create a tiny, binary-only image? + +Finally, let's validate that we did, indeed, build a tiny image with just the binary. + +Check out the image details. First, get the image's name from the image stream. + + +``` +$ oc describe imagestream go-hello-world +Name: go-hello-world +Namespace: golang-builder +Created: 42 minutes ago +Labels: +Annotations: +Docker Pull Spec: docker-registry.default.svc:5000/golang-builder/go-hello-world +Image Lookup: local=false +Unique Images: 1 +Tags: 1 + +1.0 +no spec tag + +* docker-registry.default.svc:5000/golang-builder/go-hello-world@sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053 +13 minutes ago +``` + +The image is listed at the bottom, described with the SHA hash (e.g., **sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053** ; yours will be different). + +Get the details of the image using the hash. + + +``` +$ oc describe image sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053 +Docker Image: docker-registry.default.svc:5000/golang-builder/go-hello-world@sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053 +Name: sha256:eb11e0147a2917312f5e0e9da71109f0cb80760e945fdc1e2db6424b91bc9053 +Created: 15 minutes ago +Annotations: image.openshift.io/dockerLayersOrder=ascending +image.openshift.io/manifestBlobStored=true +openshift.io/image.managed=true +Image Size: 1.026MB +Image Created: 15 minutes ago +Author: +Arch: amd64 +Entrypoint: /app +Working Dir: +User: +Exposes Ports: +Docker Labels: io.openshift.build.name=go-hello-world-1 +io.openshift.build.namespace=golang-builder +Environment: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +OPENSHIFT_BUILD_NAME=go-hello-world-1 +OPENSHIFT_BUILD_NAMESPACE=golang-builder +``` + +Notice the image size, 1.026MB, is exactly as we want. The image is a scratch image with just the binary inside it! + +### Run a pod with the runtime image + +Using the runtime image we just created, let's create a pod on-demand and run it and validate that it still works. + +This almost never happens in Kubernetes/OKD, but we will run a pod, just a pod, by itself. + + +``` +$ oc run -it go-hello-world --image=docker-registry.default.svc:5000/golang-builder/go-hello-world:1.0 --restart=Never +Hello World! +``` + +Everything is working as expected—the image runs and outputs "Hello World!" just as it did in the previous, local S2I builds. + +By creating this workflow in OKD, we can use the golang-builder S2I image for any Go application. This builder image is in place and built for any other applications, and it will auto-update and rebuild itself anytime the upstream golang:1.12 image changes. + +New apps can be built automatically using the S2I build by creating a chain build strategy in OKD with an appimage BuildConfig to compile the source and the runtime BuildConfig to create the final image. Using the build triggers, any change to the source code in the Git repo will trigger a rebuild through the entire pipeline, rebuilding the appimage and the runtime image automatically. + +This is a great way to maintain updated images for any application. 
Paired with an OKD deploymentConfig with an image build trigger, long-running applications (e.g., webapps) will be automatically redeployed when new code is committed. + +Source-to-Image is an ideal way to develop builder images to build and compile Go applications in a repeatable way, and it just gets better when paired with OKD BuildConfigs. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/5/creating-source-image-build-pipeline-okd + +作者:[Chris Collins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clcollins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/blocks_building.png?itok=eMOT-ire +[2]: https://opensource.com/article/19/5/source-image-golang-part-1 +[3]: https://opensource.com/article/19/5/source-image-golang-part-2 +[4]: https://opensource.com/article/19/5/source-image-golang-part-3 +[5]: https://www.okd.io/ +[6]: https://github.com/minishift/minishift +[7]: https://github.com/clcollins/golang-s2i.git +[8]: https://docs.okd.io/latest/architecture/core_concepts/builds_and_image_streams.html#image-streams +[9]: https://docs.okd.io/latest/dev_guide/managing_images.html#using-is-with-k8s +[10]: https://docs.okd.io/latest/dev_guide/builds/index.html#defining-a-buildconfig +[11]: https://github.com/clcollins/golang-s2i/blob/master/okd/buildConfig-golang-builder.yaml +[12]: mailto:collins.christopher@gmail.com +[13]: https://github.com/clcollins/goHelloWorld.git +[14]: https://opensource.com/sites/default/files/uploads/chainingbuilds.png (Chain Build workflow) +[15]: http://docs.projectatomic.io/container-best-practices/#_from_scratch diff --git a/sources/tech/20190602 How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server.md b/sources/tech/20190602 How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server.md new file mode 100644 index 0000000000..e3a533b3b2 --- /dev/null +++ b/sources/tech/20190602 How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server.md @@ -0,0 +1,200 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server) +[#]: via: (https://www.linuxtechi.com/install-lemp-stack-fedora-30-server/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +How to Install LEMP (Linux, Nginx, MariaDB, PHP) on Fedora 30 Server +====== + +In this article, we’ll be looking at how to install **LEMP** stack on Fedora 30 Server. LEMP Stands for: + + * L -> Linux + * E -> Nginx + * M -> Maria DB + * P -> PHP + + + +I am assuming **[Fedora 30][1]** is already installed on your system. + +![LEMP-Stack-Fedora30][2] + +LEMP is a collection of powerful software setup that is installed on a Linux server to help in developing popular development platforms to build websites, LEMP is a variation of LAMP wherein instead of **Apache** , **EngineX (Nginx)** is used as well as **MariaDB** used in place of **MySQL**. This how-to guide is a collection of separate guides to install Nginx, Maria DB and PHP. + +### Install Nginx, PHP 7.3 and PHP-FPM on Fedora 30 Server + +Let’s take a look at how to install Nginx and PHP along with PHP FPM on Fedora 30 Server. 
+ +### Step 1) Switch to root user + +First step in installing Nginx in your system is to switch to root user. Use the following command : + +``` +root@linuxtechi ~]$ sudo -i +[sudo] password for pkumar: +[root@linuxtechi ~]# +``` + +### Step 2) Install Nginx, PHP 7.3 and PHP FPM using dnf command + +Install Nginx using the following dnf command: + +``` +[root@linuxtechi ~]# dnf install nginx php php-fpm php-common -y +``` + +### Step 3) Install Additional PHP modules + +The default installation of PHP only comes with the basic and the most needed modules installed. If you need additional modules like GD, XML support for PHP, command line interface Zend OPCache features etc, you can always choose your packages and install everything in one go. See the sample command below: + +``` +[root@linuxtechi ~]# sudo dnf install php-opcache php-pecl-apcu php-cli php-pear php-pdo php-pecl-mongodb php-pecl-redis php-pecl-memcache php-pecl-memcached php-gd php-mbstring php-mcrypt php-xml -y +``` + +### Step 4) Start & Enable Nginx and PHP-fpm Service + +Start and enable Nginx service using the following command + +``` +[root@linuxtechi ~]# systemctl start nginx && systemctl enable nginx +Created symlink /etc/systemd/system/multi-user.target.wants/nginx.service → /usr/lib/systemd/system/nginx.service. +[root@linuxtechi ~]# +``` + +Use the following command to start and enable PHP-FPM service + +``` +[root@linuxtechi ~]# systemctl start php-fpm && systemctl enable php-fpm +Created symlink /etc/systemd/system/multi-user.target.wants/php-fpm.service → /usr/lib/systemd/system/php-fpm.service. +[root@linuxtechi ~]# +``` + +**Verify Nginx (Web Server) and PHP installation,** + +**Note:** In case OS firewall is enabled and running on your Fedora 30 system, then allow 80 and 443 ports using beneath commands, + +``` +[root@linuxtechi ~]# firewall-cmd --permanent --add-service=http +success +[root@linuxtechi ~]# +[root@linuxtechi ~]# firewall-cmd --permanent --add-service=https +success +[root@linuxtechi ~]# firewall-cmd --reload +success +[root@linuxtechi ~]# +``` + +Open the web browser, type the following URL: http:// + +[![Test-Page-HTTP-Server-Fedora-30][3]][4] + +Above screen confirms that NGINX is installed successfully. + +Now let’s verify PHP installation, create a test php page(info.php) using the beneath command, + +``` +[root@linuxtechi ~]# echo "" > /usr/share/nginx/html/info.php +[root@linuxtechi ~]# +``` + +Type the following URL in the web browser, + +http:///info.php + +[![Php-info-page-fedora30][5]][6] + +Above page confirms that PHP 7.3.5 has been installed successfully. Now let’s install MariaDB database server. + +### Install MariaDB on Fedora 30 + +MariaDB is a great replacement for MySQL DB as it is works much similar to MySQL and also compatible with MySQL steps too. Let’s look at the steps to install MariaDB on Fedora 30 Server + +### Step 1) Switch to Root User + +First step in installing MariaDB in your system is to switch to root user or you can use a local user who has root privilege. Use the following command below: + +``` +[root@linuxtechi ~]# sudo -i +[root@linuxtechi ~]# +``` + +### Step 2) Install latest version of MariaDB (10.3) using dnf command + +Use the following command to install MariaDB on Fedora 30 Server + +``` +[root@linuxtechi ~]# dnf install mariadb-server -y +``` + +### Step 3) Start and enable MariaDB Service + +Once the mariadb is installed successfully in step 2), next step is to start the MariaDB service. 
Use the following command: + +``` +[root@linuxtechi ~]# systemctl start mariadb.service ; systemctl enable mariadb.service +``` + +### Step 4) Secure MariaDB Installation + +When we install MariaDB server, so by default there is no root password, also anonymous users are created in database. So, to secure MariaDB installation, run the beneath “mysql_secure_installation” command + +``` +[root@linuxtechi ~]# mysql_secure_installation +``` + +Next you will be prompted with some question, just answer the questions as shown below: + +![Secure-MariaDB-Installation-Part1][7] + +![Secure-MariaDB-Installation-Part2][8] + +### Step 5) Test MariaDB Installation + +Once you have installed, you can always test if MariaDB is successfully installed on the server. Use the following command: + +``` +[root@linuxtechi ~]# mysql -u root -p +Enter password: +``` + +Next you will be prompted for a password. Enter the password same password that you have set during MariaDB secure installation, then you can see the MariaDB welcome screen. + +``` +Welcome to the MariaDB monitor. Commands end with ; or \g. +Your MariaDB connection id is 17 +Server version: 10.3.12-MariaDB MariaDB Server + +Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. + +Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. + +MariaDB [(none)]> +``` + +And finally, we’ve completed everything to install LEMP (Linux, Nginx, MariaDB and PHP) on your server successfully. Please post all your comments and suggestions in the feedback section below and we’ll respond back at the earliest. + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/install-lemp-stack-fedora-30-server/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/fedora-30-workstation-installation-guide/ +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/06/LEMP-Stack-Fedora30.jpg +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Test-Page-HTTP-Server-Fedora-30-1024x732.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Test-Page-HTTP-Server-Fedora-30.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Php-info-page-fedora30-1024x732.jpg +[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Php-info-page-fedora30.jpg +[7]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Secure-MariaDB-Installation-Part1.jpg +[8]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Secure-MariaDB-Installation-Part2.jpg diff --git a/sources/tech/20190603 How many browser tabs do you usually have open.md b/sources/tech/20190603 How many browser tabs do you usually have open.md new file mode 100644 index 0000000000..7777477029 --- /dev/null +++ b/sources/tech/20190603 How many browser tabs do you usually have open.md @@ -0,0 +1,44 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How many browser tabs do you usually have open?) +[#]: via: (https://opensource.com/article/19/6/how-many-browser-tabs) +[#]: author: (Lauren Pritchett https://opensource.com/users/lauren-pritchett/users/sarahwall/users/ksonney/users/jwhitehurst) + +How many browser tabs do you usually have open? 
+====== +Plus, get a few tips for browser productivity. +![Browser of things][1] + +Here's a potentially loaded question: How many browser tabs do you usually have open at one time? Do you have multiple windows, each with multiple tabs? Or are you a minimalist, and only have a couple of tabs open at once. Another option is to move a 20-tabbed browser window to a different monitor so that it is out of the way while working on a particular task. Does your approach differ between work, personal, and mobile browsers? Is your browser strategy related to your [productivity habits][2]? + +### 4 tips for browser productivity + + 1. Know your browser shortcuts to save clicks. Whether you use Firefox or Chrome, there are plenty of keyboard shortcuts to help make switching between tabs and performing certain functions a breeze. For example, Chrome makes it easy to open up a blank Google document. Use the shortcut **"Ctrl + t"** to open a new tab, then type **"doc.new"**. The same can be done for spreadsheets, slides, and forms. + 2. Organize your most frequent tasks with bookmark folders. When it's time to start a particular task, simply open all of the bookmarks in the folder **(Ctrl + click)** to check it off your list quickly. + 3. Get the right browser extensions for you. There are thousands of browser extensions out there all claiming to improve productivity. Before you install, make sure you're not just adding more distractions to your screen. + 4. Reduce screen time by using a timer. It doesn't matter if you use an old-fashioned egg timer or a fancy browser extension. To prevent eye strain, implement the 20/20/20 rule. Every 20 minutes, take a 20-second break from your screen and look at something 20 feet away. + + + +Take our poll to share how many browser tabs you like to have open at once. Be sure to tell us about your favorite browser tricks in the comments. + +There are two components of productivity—doing the right things and doing those things efficiently... 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/how-many-browser-tabs + +作者:[Lauren Pritchett][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lauren-pritchett/users/sarahwall/users/ksonney/users/jwhitehurst +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/browser_desktop_website_checklist_metrics.png?itok=OKKbl1UR (Browser of things) +[2]: https://enterprisersproject.com/article/2019/1/5-time-wasting-habits-break-new-year diff --git a/sources/tech/20190603 How to stream music with GNOME Internet Radio.md b/sources/tech/20190603 How to stream music with GNOME Internet Radio.md new file mode 100644 index 0000000000..fc21d82d0b --- /dev/null +++ b/sources/tech/20190603 How to stream music with GNOME Internet Radio.md @@ -0,0 +1,59 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to stream music with GNOME Internet Radio) +[#]: via: (https://opensource.com/article/19/6/gnome-internet-radio) +[#]: author: (Alan Formy-Duval https://opensource.com/users/alanfdoss/users/r3bl) + +How to stream music with GNOME Internet Radio +====== +If you're looking for a simple, straightforward interface that gets your +streams playing, try GNOME's Internet Radio plugin. +![video editing dashboard][1] + +Internet radio is a great way to listen to stations from all over the world. Like many developers, I like to turn on a station as I code. You can listen to internet radio with a media player for the terminal like [MPlayer][2] or [mpv][3], which is what I use to listen via the Linux command line. However, if you prefer using a graphical user interface (GUI), you may want to try [GNOME Internet Radio][4], a nifty plugin for the GNOME desktop. You can find it in the package manager. + +![GNOME Internet Radio plugin][5] + +Listening to internet radio with a graphical desktop operating system generally requires you to launch an application such as [Audacious][6] or [Rhythmbox][7]. They have nice interfaces, plenty of options, and cool audio visualizers. But if you want a simple, straightforward interface that gets your streams playing, GNOME Internet Radio is for you. + +After installing it, a small icon appears in your toolbar, which is where you do all your configuration and management. + +![GNOME Internet Radio icons][8] + +The first thing I did was go to the Settings menu. I enabled the following two options: show title notifications and show volume adjustment. + +![GNOME Internet Radio Settings][9] + +GNOME Internet Radio includes a few pre-configured stations, and it is really easy to add others. Just click the ( **+** ) sign. You'll need to enter a channel name, which can be anything you prefer (including the station name), and the station address. For example, I like to listen to Synthetic FM. I enter the name, e.g., "Synthetic FM," and the stream address, i.e., . + +Then click the star next to the stream to add it to your menu. + +However you listen to music and whatever genre you choose, it is obvious—coders need their music! The GNOME Internet Radio plugin makes it simple to get your favorite internet radio station queued up. 
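
And if you find yourself back at a terminal, the same stream addresses the plugin uses also work with the command-line players mentioned at the top of this article — for example with mpv (the URL below is just a placeholder for your station's stream address):

```
mpv --no-video https://example.com/your-station-stream
```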
+ +In honor of the Gnome desktop's 18th birthday on August 15, we've rounded up 18 reasons to toast... + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/gnome-internet-radio + +作者:[Alan Formy-Duval][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/alanfdoss/users/r3bl +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/video_editing_folder_music_wave_play.png?itok=-J9rs-My (video editing dashboard) +[2]: https://opensource.com/article/18/12/linux-toy-mplayer +[3]: https://mpv.io/ +[4]: https://extensions.gnome.org/extension/836/internet-radio/ +[5]: https://opensource.com/sites/default/files/uploads/packagemanager_s.png (GNOME Internet Radio plugin) +[6]: https://audacious-media-player.org/ +[7]: https://help.gnome.org/users/rhythmbox/stable/ +[8]: https://opensource.com/sites/default/files/uploads/titlebaricons.png (GNOME Internet Radio icons) +[9]: https://opensource.com/sites/default/files/uploads/gnomeinternetradio_settings.png (GNOME Internet Radio Settings) diff --git a/sources/tech/20190604 Aging in the open- How this community changed us.md b/sources/tech/20190604 Aging in the open- How this community changed us.md new file mode 100644 index 0000000000..a03d49eca2 --- /dev/null +++ b/sources/tech/20190604 Aging in the open- How this community changed us.md @@ -0,0 +1,135 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Aging in the open: How this community changed us) +[#]: via: (https://opensource.com/open-organization/19/6/four-year-celebration) +[#]: author: (Bryan Behrenshausen https://opensource.com/users/bbehrens) + +Aging in the open: How this community changed us +====== +Our community dedicated to exploring open organizational culture and +design turns four years old this week. +![Browser window with birthday hats and a cake][1] + +A community will always surprise you. + +That's not an easy statement for someone like me to digest. I'm not one for surprises. I revel in predictability. I thrive on consistency. + +A passionate and dedicated community offers few of these comforts. Participating in something like [the open organization community at Opensource.com][2]—which [turns four years old this week][3]—means acquiescing to dynamism, to constant change. Every day brings novelty. Every correspondence is packed with possibility. Every interaction reveals undisclosed pathways. + +To [a certain type of person][4] (me again), it can be downright terrifying. + +But that unrelenting and genuine surprise is the [very source of a community's richness][5], its sheer abundance. If a community is the nucleus of all those reactions that catalyze innovations and breakthroughs, then unpredictability and serendipity are its fuel. I've learned to appreciate it—more accurately, perhaps, to stand in awe of it. Four years ago, when the Opensource.com team heeded [Jim Whitehurst's call][6] to build a space for others to "share your thoughts and opinions… on how you think we can all lead and work better in the future" (see the final page of _The Open Organization_ ), we had little more than a mandate, a platform, and a vision. 
We'd be an open organization [committed to studying, learning from, and propagating open organizations][7]. The rest was a surprise—or rather, a series of surprises: + + * [Hundreds of articles, reviews, guides, and tutorials][2] on infusing open principles into organizations of all sizes across industries + * [A book series][8] spanning five volumes (with [another currently in production][9]) + * A detailed, comprehensive, community-maintained [definition of the "open organization" concept][10] + * A [robust maturity model][11] for anyone seeking to understand how that definition might (or might not) support their own work + + + +All of that—everything you see there, [and more][12]—is the work of a community that never stopped conversing, never ceased creating, never failed to outsmart itself. No one could have predicted it. No one could have [planned for it][13]. We simply do our best to keep up with it. + +And after four years the work continues, more focused and impassioned than ever. Remaining involved with this bunch of writers, educators, consultants, coaches, leaders, and mentors—all united by their belief that openness is the surest source of hope for organizations struggling to address the challenges of our age—has made me appreciate the power of the utterly surprising. I'm even getting a little more comfortable with it. + +That's been its gift to me. But the gifts it has given each of its participants have been equally special. + +As we celebrate four years of challenges, collaboration, and camaraderie this week, let's recount those surprising gifts by hearing from some of the members: + +* * * + +Four years of the open organization community—congratulations to all! + +My first thought was to look at the five most-read articles over the past four years. Here they are: + + * [5 laws every aspiring DevOps engineer should know][14] + * [What value do you bring to your company?][15] + * [8 answers to management questions from an open point of view][16] + * [What to do when you're feeling underutilized][17] + * [What's the point of DevOps?][18] + + + +All great articles. And then I started to think: Of all the great content over the past four years, which articles have impacted me the most? + +I remembered reading several great articles about meetings and how to make them more effective. So I typed "opensource.com meetings" into my search engine, and these two wonderful articles were at the top of the results list: + + * [The secret to better one-on-one meetings][19] + * [Time to rethink your team's approach to meetings][20] + + + +Articles like that have inspired my favorite open organization management principle, which I've tried to apply and has made a huge difference: All meetings are optional. + +**—Jeff Mackanic, senior director, Marketing, Red Hat** + +* * * + +Being a member of the "open community" has reminded me of the power of getting things done via values and shared purpose without command and control—something that seems more important than ever in today's fragmented and often abusive management world, and at a time when truth and transparency themselves are under attack. + +Four years is a long journey for this kind of initiative—but there's still so much to learn and understand about what makes "open" work and what it will take to accelerate the embrace of its principles more widely through different domains of work and society. Congratulations on all you, your colleagues, partners, and other members of the community have done thus far! 
+ +**—Brook Manville, Principal, Brook Manville LLC, author of _A Company of Citizens_ and co-author of _The Harvard Business Review Leader's Handbook_** + +* * * + +The Open Organization Ambassador program has, in the last four years, become an inspired community of experts. We have defined what it means to be a truly open organization. We've written books, guides, articles, and other resources for learning about, understanding, and implementing open principles. We've done this while bringing open principles to other communities, and we've done this together. + +For me, personally and professionally, the togetherness is the best part of this endeavor. I have learned so much from my colleagues. I'm absolutely ecstatic to be one of the idealists and activists in this community—committed to making our workplaces more equitable and open. + +**—Laura Hilliger, co-founder, We Are Open Co-Op, and[Open Organization Ambassador][21]** + +* * * + +Finding the open organization community opened me up to knowing that there are others out there who thought as I did. I wasn't alone. My ideas on leadership and the workplace were not crazy. This sense of belonging increased once I joined the Ambassador team. Our monthly meetings are never long enough. I don't like when we have to hang up because each session is full of laughter, sharpening each other, idea exchange, and joy. Humans seek community. We search for people who share values and ideals as we do but who also push back and help us expand. This is the gift of the open organization community—expansion, growth, and lifelong friendships. Thank you to all of those who contribute their time, intellect, content, and whole self to this awesome think tank that is changing the shape of how we organize to solve problems! + +**—Jen Kelchner, founder and Chief Change Architect, LDR21, and[Open Organization Ambassador][21]** + +* * * + +Happy fourth birthday, open organization community! Thank you for being an ever-present reminder in my life that being open is better than being closed, that listening is more fruitful than telling, and that together we can achieve more. + +**—Michael Doyle, professional coach and[Open Organization Ambassador][21]** + +* * * + +Wow, what a journey it's been exploring the world of open organizations. We're seeing more interest now than ever before. It's amazing to see what this community has done and I'm excited to see what the future holds for open organizations and open leadership. 
+ +**—Jason Hibbets, senior community architect, Red Hat** + +-------------------------------------------------------------------------------- + +via: https://opensource.com/open-organization/19/6/four-year-celebration + +作者:[Bryan Behrenshausen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bbehrens +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/happy_birthday_anniversary_celebrate_hats_cake.jpg?itok=Zfsv6DE_ (Browser window with birthday hats and a cake) +[2]: https://opensource.com/open-organization +[3]: https://opensource.com/open-organization/15/5/introducing-open-organization +[4]: https://opensource.com/open-organization/18/11/design-communities-personality-types +[5]: https://opensource.com/open-organization/18/1/why-build-community-1 +[6]: https://www.redhat.com/en/explore/the-open-organization-book +[7]: https://opensource.com/open-organization/resources/ambassadors-program +[8]: https://opensource.com/open-organization/resources/book-series +[9]: https://opensource.com/open-organization/19/5/educators-guide-project +[10]: https://opensource.com/open-organization/resources/open-org-definition +[11]: https://opensource.com/open-organization/resources/open-org-maturity-model +[12]: https://opensource.com/open-organization/resources +[13]: https://opensource.com/open-organization/19/2/3-misconceptions-agile +[14]: https://opensource.com/open-organization/17/5/5-devops-laws +[15]: https://opensource.com/open-organization/15/7/what-value-do-you-bring-your-company +[16]: https://opensource.com/open-organization/16/5/open-questions-and-answers-about-open-management +[17]: https://opensource.com/open-organization/17/4/feeling-underutilized +[18]: https://opensource.com/open-organization/17/5/what-is-the-point-of-DevOps +[19]: https://opensource.com/open-organization/18/5/open-one-on-one-meetings-guide +[20]: https://opensource.com/open-organization/18/3/open-approaches-meetings +[21]: https://opensource.com/open-organization/resources/meet-ambassadors diff --git a/sources/tech/20190604 Create a CentOS homelab in an hour.md b/sources/tech/20190604 Create a CentOS homelab in an hour.md new file mode 100644 index 0000000000..039af752db --- /dev/null +++ b/sources/tech/20190604 Create a CentOS homelab in an hour.md @@ -0,0 +1,224 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create a CentOS homelab in an hour) +[#]: via: (https://opensource.com/article/19/6/create-centos-homelab-hour) +[#]: author: (Bob Murphy https://opensource.com/users/murph) + +Create a CentOS homelab in an hour +====== +Set up a self-sustained set of basic Linux servers with nothing more +than a system with virtualization software, a CentOS ISO, and about an +hour of your time. +![metrics and data shown on a computer screen][1] + +When working on new Linux skills (or, as I was, studying for a Linux certification), it is helpful to have a few virtual machines (VMs) available on your laptop so you can do some learning on the go. + +But what happens if you are working somewhere without a good internet connection and you want to work on a web server? What about using other software that you don't already have installed? 
If you were depending on downloading it from the distribution's repositories, you may be out of luck. With a bit of preparation, you can set up a [homelab][2] that will allow you to install anything you need wherever you are, with or without a network connection. + +The requirements are: + + * A downloaded ISO file of the Linux distribution you intend to use (for example, CentOS, Red Hat, etc.) + * A host computer with virtualization. I use [Fedora][3] with [KVM][4] and [virt-manager][5], but any Linux will work similarly. You could even use Windows or Mac with virtualization, with some difference in implementation + * About an hour of time + + + +### 1\. Create a VM for your repo host + +Use virt-manager to create a VM with modest specs; 1GB RAM, one CPU, and 16GB of disk space are plenty. + +Install [CentOS 7][6] on the VM. + +![Installing a CentOS homelab][7] + +Select your language and continue. + +Click _Installation Destination_ , select your local disk, mark the _Automatically Configure Partitioning_ checkbox, and click *Done *in the upper-left corner. + +Under _Software Selection_ , select _Infrastructure Server_ , mark the _FTP Server_ checkbox, and click _Done_. + +![Installing a CentOS homelab][8] + +Select _Network and Host Name_ , enable Ethernet in the upper-right, then click _Done_ in the upper-left corner. + +Click _Begin Installation_ to start installing the OS. + +You must create a root password, then you can create a user with a password as it installs. + +### 2\. Start the FTP service + +The next step is to start and set the FTP service to run and allow it through the firewall. + +Log in with your root password, then start the FTP server: + + +``` +`systemctl start vsftpd` +``` + +Enable it to work on every start: + + +``` +`systemctl enable vsftpd` +``` + +Set the port as allowed through the firewall: + + +``` +`firewall-cmd --add-service=ftp --perm` +``` + +Enable this change immediately: + + +``` +`firewall-cmd --reload` +``` + +Get your IP address: + + +``` +`ip a` +``` + +(it's probably **eth0** ). You'll need it in a minute. + +### 3\. Copy the files for your local repository + +Mount the CD you installed from to your VM through your virtualization software. + +Create a directory for the CD to be mounted to temporarily: + + +``` +`mkdir /root/temp` +``` + +Mount the install CD: + + +``` +`mount /dev/cdrom /root/temp` +``` + +Copy all the files to the FTP server directory: + + +``` +`rsync -avhP /root/temp/ /var/ftp/pub/` +``` + +### 4\. Point the server to the local repository + +Red Hat-based systems use files that end in **.repo** to identify where to get updates and new software. Those files can be found at + + +``` +`cd /etc/yum.repos.d` +``` + +You need to get rid of the repo files that point your server to look to the CentOS repositories on the internet. I prefer to copy them to root's home directory to get them out of the way: + + +``` +`mv * ~` +``` + +Then create a new repo file to point to your server. Use your favorite text editor to create a file named **network.repo** with the following lines (substituting the IP address you got in step 2 for _< your IP>_), then save it: + + +``` +[network] +name=network +baseurl=/pub +gpgcheck=0 +``` + +When that's done, we can test it out with the following: + + +``` +`yum clean all; yum install ftp` +``` + +If your FTP client installs as expected from the "network" repository, your local repo is set up! + +![Installing a CentOS homelab][9] + +### 5\. 
Install a new VM with the repository you set up + +Go back to the virtual machine manager, and create another VM—but this time, select _Network Install_ with a URL of: + + +``` +`ftp://192.168.122./pub` +``` + +If you're using a different host OS or virtualization manager, install your VM similarly as before, and skip to the next section. + +### 6\. Set the new VM to use your existing network repository + +You can copy the repo file from your existing server to use here. + +As in the first server example, enter: + + +``` +cd /etc/yum.repos.d +mv * ~ +``` + +Then: + + +``` +`scp root@192.168.122.:/etc/yum.repos.d/network.repo /etc/yum.repos.d` +``` + +Now you should be ready to work with your new VM and get all your software from your local repository. + +Test this again: + + +``` +`yum clean all; yum install screen` +``` + +This will install your software from your local repo server. + +This setup, which gives you independence from the network with the ability to install software, can create a much more dependable environment for expanding your skills on the road. + +* * * + +_Bob Murphy will present this topic as well as an introduction to[GNU Screen][10] at [Southeast Linux Fest][11], June 15-16 in Charlotte, N.C._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/create-centos-homelab-hour + +作者:[Bob Murphy][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/murph +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen) +[2]: https://opensource.com/article/19/3/home-lab +[3]: https://getfedora.org/ +[4]: https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine +[5]: https://virt-manager.org/ +[6]: https://www.centos.org/download/ +[7]: https://opensource.com/sites/default/files/uploads/homelab-3b_0.png (Installing a CentOS homelab) +[8]: https://opensource.com/sites/default/files/uploads/homelab-5b.png (Installing a CentOS homelab) +[9]: https://opensource.com/sites/default/files/uploads/homelab-14b.png (Installing a CentOS homelab) +[10]: https://opensource.com/article/17/3/introduction-gnu-screen +[11]: https://southeastlinuxfest.org/ diff --git a/sources/tech/20190604 ExamSnap Guide- 6 Excellent Resources for Microsoft 98-366- Networking Fundamentals Exam.md b/sources/tech/20190604 ExamSnap Guide- 6 Excellent Resources for Microsoft 98-366- Networking Fundamentals Exam.md new file mode 100644 index 0000000000..97414f530f --- /dev/null +++ b/sources/tech/20190604 ExamSnap Guide- 6 Excellent Resources for Microsoft 98-366- Networking Fundamentals Exam.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (ExamSnap Guide: 6 Excellent Resources for Microsoft 98-366: Networking Fundamentals Exam) +[#]: via: (https://www.2daygeek.com/examsnap-guide-6-excellent-resources-for-microsoft-98-366-networking-fundamentals-exam/) +[#]: author: (2daygeek http://www.2daygeek.com/author/2daygeek/) + +ExamSnap Guide: 6 Excellent Resources for Microsoft 98-366: Networking Fundamentals Exam +====== + +The Microsoft 98-366 exam is almost similar to the CompTIA Network+ certification test 
when it comes to its content. 

It is also known as Networking Fundamentals, and its purpose is to assess your knowledge of switches, routers, OSI models, wide area and local area networks, wireless networking, and IP addressing.

Those who pass the exam earn the MTA (Microsoft Technology Associate) certificate. This certification is an ideal entry-level credential to help you begin your IT career.

### 6 Resources for Microsoft MTA 98-366 Exam

Using approved training materials is the best way to prepare for your certification exam. Many candidates are fond of shortcuts and rely only on PDFs and brain dumps to prepare for the test.

It is important to note that such materials need to be combined with other study methods. On their own, they will not give you the deeper knowledge that makes you perform better at work.

When you take the time to master the course contents of a certification exam, you are not only getting ready for the test but also developing your skills, expertise, and knowledge in the topics it covers.

 * **[ExamSnap][1]**

Another important point to note is that you shouldn’t rely only on brain dumps. Microsoft can withhold or withdraw your certification if it is discovered that you have cheated.

This may even result in a person no longer being allowed to earn the credential at all. Thus, use only verified platforms such as Examsnap.

Most people tend to believe that there is no way they can be caught. However, you need to know that Microsoft has been offering professional certification exams for years, and they know what they are doing.

Now that we have established the importance of using legitimate materials to prepare for your certification exam, we need to highlight the top resources you can use to prepare for the Microsoft 98-366 test.

### 1\. Microsoft Video Academy

The MVA (Microsoft Video Academy) provides you with introductory lessons on the 98-366 certification exam. This is not sufficient on its own, although it is a great way to begin your preparation for the test. The materials in the videos do not cover everything you need to know before taking the exam.

In fact, it is an introductory series that is meant to lay the foundation for your study. You will have to explore other materials to get in-depth knowledge of the topics.

These videos are available without payment, so you can easily access them and use them whenever you want.

 * [Microsoft Certification Overview][2]

### 2\. Examsnap

If you have been looking for material that can help you prepare for the Microsoft Networking Fundamentals exam, look no further than Examsnap.

It is an online platform that provides exam dumps and video series for various certification courses. All you need to do is register on the website and pay a fee to access the various tools that will help you prepare for the test.

Examsnap aims to help you pass your exams with confidence by providing accurate and comprehensive preparation materials. The platform also has training courses to help you improve your study.

Before making any payment on the site, you can complete the trial course to establish whether the site is suitable for your exam preparation needs.

### 3\. Exam 98-366: MTA Networking Fundamentals

This study resource is a book. It provides a comprehensive approach to the various topics. 
It is important for you to note that this study guide is critical to your success. + +It offers you more detailed material than the introductory lectures by the MVA. It is advisable that you do not focus on the sample questions in each part when using this book. + +You should not concentrate on the sample questions because they are not so informative. You can make up for this shortcoming by checking out other practice test options. Overall, this book is a top resource that will contribute greatly to your certification exam success. + +### 4\. Measure-Up + +Measure-Up is the official Microsoft practice test provider whereby you can access different materials. You can get a lab and hands-on practice with networking software tools, which are very beneficial for your preparation. The site also has study questions that you can purchase. + +### 5\. U-Certify + +The U-Certify platform is a reputable organization that offers video courses that are considered to be more understandable than those offered by the MVA. In addition to video courses, the site presents flashcards at the end of the videos. + +You can also access a series of practice tests, which contain several hundreds of study questions on the platform. There are more contents in videos and tests that you can access the moment you subscribe. Depending on what you are looking for, you can choose to buy the tests or the videos. + +### 6\. Networking Essentials – Wiki + +There are several posts that are linked to the Networking Essentials page on Wiki, and you can be sure that these articles are greatly detailed with information that will be helpful for your exam preparation. + +It is important to note that they are not meant to be studied as the only resource materials. You should only use them as additional means for the purpose of getting more information on specific topics but not as an individual study tool. + +### Conclusion + +You may not be able to access all the top resources available. However, you can access some of them. In addition to the resources mentioned, there are also some others that are very good. Visit the Microsoft official website to get the list of reliable resource platforms you can use. Study and be well-prepared for the 98-366 certification exam! + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/examsnap-guide-6-excellent-resources-for-microsoft-98-366-networking-fundamentals-exam/ + +作者:[2daygeek][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.2daygeek.com/author/2daygeek/ +[b]: https://github.com/lujun9972 +[1]: https://www.examsnap.com/ +[2]: https://www.microsoft.com/en-us/learning/certification-overview.aspx diff --git a/sources/tech/20190604 Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems.md b/sources/tech/20190604 Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems.md new file mode 100644 index 0000000000..ebe841dbcb --- /dev/null +++ b/sources/tech/20190604 Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems.md @@ -0,0 +1,174 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems?) 
+[#]: via: (https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +Four Ways To Install Security Updates On Red Hat (RHEL) And CentOS Systems? +====== + +Patching of the Linux server is one of important and routine task of Linux admin. + +Keeping the system with latest patch level is must. It protects your system against unnecessary attack. + +There are three kind of erratas available in the RHEL/CentOS repository, these are Security, Bug Fix and Product Enhancement. + +Now, you have two options to handle this. + +Either install only security updates or all the errata packages. + +We have already written an article in the past **[to check available security updates?][1]**. + +Also, **[check the installed security updates on your system][2]** using this link. + +You can navigate to the above link, if you would like to verify available security updates before installing them. + +In this article, we will show your, how to install security updates in multiple ways on RHEL and CentOS system. + +### 1) How To Install Entire Errata Updates In Red Hat And CentOS System? + +Run the following command to download and apply all available security updates on your system. + +Make a note, this command will install the last available version of any package with at least one security errata. + +Also, install non-security erratas if they provide a more updated version of the package. + +``` +# yum update --security + +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +RHEL7-Server-DVD | 4.3 kB 00:00:00 +rhel-7-server-rpms | 2.0 kB 00:00:00 +--> 1:grub2-tools-extra-2.02-0.76.el7.1.x86_64 from rhel-7-server-rpms removed (updateinfo) +--> nss-pem-1.0.3-5.el7_6.1.x86_64 from rhel-7-server-rpms removed (updateinfo) +. +35 package(s) needed (+0 related) for security, out of 115 available +Resolving Dependencies +--> Running transaction check +---> Package NetworkManager.x86_64 1:1.12.0-6.el7 will be updated +---> Package NetworkManager.x86_64 1:1.12.0-10.el7_6 will be an update +``` + +Once you ran the above command, it will check all the available updates and its dependency satisfaction. + +``` +--> Finished Dependency Resolution +--> Running transaction check +---> Package kernel.x86_64 0:3.10.0-514.26.1.el7 will be erased +---> Package kernel-devel.x86_64 0:3.10.0-514.26.1.el7 will be erased +--> Finished Dependency Resolution + +Dependencies Resolved +===================================================================================================================================================================== +Package Arch Version Repository Size +===================================================================================================================================================================== +Installing: +kernel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 48 M +kernel-devel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 17 M +Updating: +NetworkManager x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms 1.7 M +NetworkManager-adsl x86_64 1:1.12.0-10.el7_6 rhel-7-server-rpms 157 k +. +Removing: +kernel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 148 M +kernel-devel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 34 M +``` + +If these dependencies were satisfied, which finally gives you a total summary about it. 
+ +The transaction summary shows, how many packages will be getting Installed, upgraded and removed from the system. + +``` +Transaction Summary +===================================================================================================================================================================== +Install 2 Packages +Upgrade 33 Packages +Remove 2 Packages + +Total download size: 124 M +Is this ok [y/d/N]: +``` + +### How To Install Only Security Updates In Red Hat And CentOS System? + +Run the following command to install only the packages that have a security errata. + +``` +# yum update-minimal --security + +Loaded plugins: changelog, package_upload, product-id, search-disabled-repos, subscription-manager, verify, versionlock +rhel-7-server-rpms | 2.0 kB 00:00:00 +Resolving Dependencies +--> Running transaction check +---> Package NetworkManager.x86_64 1:1.12.0-6.el7 will be updated +---> Package NetworkManager.x86_64 1:1.12.0-8.el7_6 will be an update +. +--> Finished Dependency Resolution +--> Running transaction check +---> Package kernel.x86_64 0:3.10.0-514.26.1.el7 will be erased +---> Package kernel-devel.x86_64 0:3.10.0-514.26.1.el7 will be erased +--> Finished Dependency Resolution + +Dependencies Resolved +===================================================================================================================================================================== +Package Arch Version Repository Size +===================================================================================================================================================================== +Installing: +kernel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 48 M +kernel-devel x86_64 3.10.0-957.10.1.el7 rhel-7-server-rpms 17 M +Updating: +NetworkManager x86_64 1:1.12.0-8.el7_6 rhel-7-server-rpms 1.7 M +NetworkManager-adsl x86_64 1:1.12.0-8.el7_6 rhel-7-server-rpms 157 k +. +Removing: +kernel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 148 M +kernel-devel x86_64 3.10.0-514.26.1.el7 @rhel-7-server-rpms 34 M + +Transaction Summary +===================================================================================================================================================================== +Install 2 Packages +Upgrade 33 Packages +Remove 2 Packages + +Total download size: 124 M +Is this ok [y/d/N]: +``` + +### How To Install Security Update Using CVE reference In Red Hat And CentOS System? + +If you would like to install a security update using a CVE reference, run the following command. + +``` +# yum update --cve + +# yum update --cve CVE-2008-0947 +``` + +### How To Install Security Update Using Specific Advisory In Red Hat And CentOS System? + +Run the following command, if you want to apply only a specific advisory. 
+ +``` +# yum update --advisory= + +# yum update --advisory=RHSA-2014:0159 +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/install-security-updates-on-redhat-rhel-centos-system/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/check-list-view-find-available-security-updates-on-redhat-rhel-centos-system/ +[2]: https://www.2daygeek.com/check-installed-security-updates-on-redhat-rhel-and-centos-system/ diff --git a/sources/tech/20190604 Linux Shell Script To Monitor CPU Utilization And Send Email.md b/sources/tech/20190604 Linux Shell Script To Monitor CPU Utilization And Send Email.md new file mode 100644 index 0000000000..f1cb86573b --- /dev/null +++ b/sources/tech/20190604 Linux Shell Script To Monitor CPU Utilization And Send Email.md @@ -0,0 +1,178 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux Shell Script To Monitor CPU Utilization And Send Email) +[#]: via: (https://www.2daygeek.com/linux-shell-script-to-monitor-cpu-utilization-usage-and-send-email/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +Linux Shell Script To Monitor CPU Utilization And Send Email +====== + +There are many opensource monitoring tools are available to monitor Linux systems performance. + +It will send an email alert when the system reaches the given threshold limit. + +It monitors everything such as CPU utilization, Memory utilization, swap utilization, disk space utilization and much more. + +If you only have few systems and want to monitor them then writing a small shell script can achieve this. + +In this tutorial we have added two shell script to monitor CPU utilization on Linux system. + +When the system reaches the given threshold then it will trigger a mail to corresponding email id. + +### Method-1 : Linux Shell Script To Monitor CPU Utilization And Send an Email + +If you want to only get CPU utilization percentage through mail when the system reaches the given threshold, use the following script. + +This is very simple and straightforward and one line script. + +It will trigger an email when your system reaches `80%` CPU utilization. + +``` +*/5 * * * * /usr/bin/cat /proc/loadavg | awk '{print $1}' | awk '{ if($1 > 80) printf("Current CPU Utilization is: %.2f%\n"), $0;}' | mail -s "High CPU Alert" [email protected] +``` + +**Note:** You need to change the email id instead of ours. Also, you can change the CPU utilization threshold value as per your requirement. + +**Output:** You will be getting an email alert similar to below. + +``` +Current CPU Utilization is: 80.40% +``` + +We had added many useful shell scripts in the past. If you want to check those, navigate to the below link. + + * **[How to automate day to day activities using shell scripts?][1]** + + + +### Method-2 : Linux Shell Script To Monitor CPU Utilization And Send an Email + +If you want to get more information about the CPU utilization in the mail alert. + +Then use the following script, which includes top CPU utilization process details based on the top Command and ps Command. + +This will inconstantly gives you an idea what is going on your system. 
+ +It will trigger an email when your system reaches `80%` CPU utilization. + +**Note:** You need to change the email id instead of ours. Also, you can change the CPU utilization threshold value as per your requirement. + +``` +# vi /opt/scripts/cpu-alert.sh + +#!/bin/bash +cpuuse=$(cat /proc/loadavg | awk '{print $1}') + +if [ "$cpuuse" > 80 ]; then + +SUBJECT="ATTENTION: CPU Load Is High on $(hostname) at $(date)" + +MESSAGE="/tmp/Mail.out" + +TO="[email protected]" + + echo "CPU Current Usage is: $cpuuse%" >> $MESSAGE + + echo "" >> $MESSAGE + + echo "+------------------------------------------------------------------+" >> $MESSAGE + + echo "Top CPU Process Using top command" >> $MESSAGE + + echo "+------------------------------------------------------------------+" >> $MESSAGE + + echo "$(top -bn1 | head -20)" >> $MESSAGE + + echo "" >> $MESSAGE + + echo "+------------------------------------------------------------------+" >> $MESSAGE + + echo "Top CPU Process Using ps command" >> $MESSAGE + + echo "+------------------------------------------------------------------+" >> $MESSAGE + + echo "$(ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10)" >> $MESSAGE + + mail -s "$SUBJECT" "$TO" < $MESSAGE + + rm /tmp/Mail.out + + fi +``` + +Finally add a **[cronjob][2]** to automate this. It will run every 5 minutes. + +``` +# crontab -e +*/10 * * * * /bin/bash /opt/scripts/cpu-alert.sh +``` + +**Note:** You will be getting an email alert 5 mins later since the script has scheduled to run every 5 minutes (But it's not exactly 5 mins and it depends the timing). + +Say for example. If your system reaches the limit at 8.25 then you will get an email alert in another 5 mins. Hope it's clear now. + +**Output:** You will be getting an email alert similar to below. + +``` +CPU Current Usage is: 80.51% + ++------------------------------------------------------------------+ +Top CPU Process Using top command ++------------------------------------------------------------------+ +top - 13:23:01 up 1:43, 1 user, load average: 2.58, 2.58, 1.51 +Tasks: 306 total, 3 running, 303 sleeping, 0 stopped, 0 zombie +%Cpu0 : 6.2 us, 6.2 sy, 0.0 ni, 87.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu1 : 18.8 us, 0.0 sy, 0.0 ni, 81.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu2 : 50.0 us, 37.5 sy, 0.0 ni, 12.5 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu3 : 5.9 us, 5.9 sy, 0.0 ni, 88.2 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu4 : 0.0 us, 5.9 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu5 : 29.4 us, 23.5 sy, 0.0 ni, 47.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu6 : 0.0 us, 5.9 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +%Cpu7 : 5.9 us, 0.0 sy, 0.0 ni, 94.1 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st +KiB Mem : 16248588 total, 223436 free, 5816924 used, 10208228 buff/cache +KiB Swap: 17873388 total, 17871340 free, 2048 used. 
7440884 avail Mem + + PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND + 8867 daygeek 20 2743884 440420 360952 R 100.0 2.7 1:07.25 /usr/lib/virtualbox/VirtualBoxVM --comment CentOS7 --startvm 002f47b8-2af2-48f5-be1d-67b67e03514c --no-startvm-errormsgbox + 9119 daygeek 20 36136 784 R 46.7 0.0 0:00.07 /usr/bin/CROND -n + 1057 daygeek 20 889808 487692 461692 S 13.3 3.0 4:21.12 /usr/lib/Xorg vt2 -displayfd 3 -auth /run/user/1000/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -verbose 3 + 3098 daygeek 20 1929012 351412 120532 S 13.3 2.2 16:42.51 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -prefsLen 9236 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /us+ + 1 root 20 188820 10144 7708 S 6.7 0.1 0:06.92 /sbin/init + 818 gdm 20 199836 25120 15876 S 6.7 0.2 0:01.85 /usr/lib/Xorg vt1 -displayfd 3 -auth /run/user/120/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty -verbose 3 + 1170 daygeek 9 -11 2676516 16516 12520 S 6.7 0.1 1:28.30 /usr/bin/pulseaudio --daemonize=no + 8271 root 20 I 6.7 0:00.21 [kworker/u16:4-i915] + 9117 daygeek 20 13528 4036 3144 R 6.7 0.0 0:00.01 top -bn1 + ++------------------------------------------------------------------+ +Top CPU Process Using ps command ++------------------------------------------------------------------+ +%CPU PID USER COMMAND + 8.8 8522 daygeek /usr/lib/virtualbox/VirtualBox +86.2 8867 daygeek /usr/lib/virtualbox/VirtualBoxVM --comment CentOS7 --startvm 002f47b8-2af2-48f5-be1d-67b67e03514c --no-startvm-errormsgbox +76.1 8921 daygeek /usr/lib/virtualbox/VirtualBoxVM --comment Ubuntu-18.04 --startvm e8c32dbb-8b01-41b0-977a-bf28b9db1117 --no-startvm-errormsgbox + 5.5 8080 daygeek /usr/bin/nautilus --gapplication-service + 4.7 4575 daygeek /usr/lib/firefox/firefox -contentproc -childID 12 -isForBrowser -prefsLen 9375 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab + 4.4 3511 daygeek /usr/lib/firefox/firefox -contentproc -childID 8 -isForBrowser -prefsLen 9308 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab + 4.4 3190 daygeek /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -prefsLen 9237 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab + 4.4 1612 daygeek /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -prefsLen 1 -prefMapSize 184485 -parentBuildID 20190521202118 -greomni /usr/lib/firefox/omni.ja -appomni /usr/lib/firefox/browser/omni.ja -appdir /usr/lib/firefox/browser 1525 true tab + 4.2 3565 daygeek /usr/bin/../lib/notepadqq/notepadqq-bin +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-shell-script-to-monitor-cpu-utilization-usage-and-send-email/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/shell-script/ +[2]: https://www.2daygeek.com/crontab-cronjob-to-schedule-jobs-in-linux/ diff --git a/sources/tech/20190605 Tweaking the 
look of Fedora Workstation with themes.md b/sources/tech/20190605 Tweaking the look of Fedora Workstation with themes.md new file mode 100644 index 0000000000..441415925f --- /dev/null +++ b/sources/tech/20190605 Tweaking the look of Fedora Workstation with themes.md @@ -0,0 +1,140 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Tweaking the look of Fedora Workstation with themes) +[#]: via: (https://fedoramagazine.org/tweaking-the-look-of-fedora-workstation-with-themes/) +[#]: author: (Ryan Lerch https://fedoramagazine.org/author/ryanlerch/) + +Tweaking the look of Fedora Workstation with themes +====== + +![][1] + +Changing the theme of a desktop environment is a common way to customize your daily experience with Fedora Workstation. This article discusses the 4 different types of visual themes you can change and how to change to a new theme. Additionally, this article will cover how to install new themes from both the Fedora repositories and 3rd party theme sources. + +### Theme Types + +When changing the theme of Fedora Workstation, there are 4 different themes that can be changed independently of each other. This allows a user to mix and match the theme types to customize their desktop in a multitude of combinations. The 4 theme types are the **Application** (GTK) theme, the **shell** theme, the **icon** theme, and the **cursor** theme. + +#### Application (GTK) themes + +As the name suggests, Application themes change the styling of the applications that are displayed on a user’s desktop. Application themes control the style of the window borders and the window titlebar. Additionally, they also control the style of the widgets in the windows — like dropdowns, text inputs, and buttons. One point to note is that an application theme does not change the icons that are displayed in an application — this is achieved using the icon theme. + +![Two application windows with two different application themes. The default Adwaita theme on the left, the Adapta theme on the right.][2] + +Application themes are also known as GTK themes, as GTK ( **G** IMP **T** ool **k** it) is the underlying technology that is used to render the windows and user interface widgets in those windows on Fedora Workstation. + +#### Shell Themes + +Shell themes change the appearance of the GNOME Shell. The GNOME Shell is the technology that displays the top bar (and the associated widgets like drop downs), as well as the overview screen and the applications list it contains. + +![Comparison of two Shell themes, with the Fedora Workstation default on top, and the Adapta shell theme on the bottom.][3] + +#### Icon Themes + +As the name suggests, icon themes change the icons used in the desktop. Changing the icon theme will change the icons displayed both in the Shell, and in applications. + +![Comparison of two icon themes, with the Fedora 30 Workstation default Adwaita on the left, and the Yaru icon theme on the right][4] + +One important item to note with icon themes is that all icon themes will not have customized icons for all application icons. Consequently, changing the icon theme will not change all the icons in the applications list in the overview. + +![Comparison of two icon themes, with the Fedora 30 Workstation default Adwaita on the top, and the Yaru icon theme on the bottom][5] + +#### Cursor Theme + +The cursor theme allows a user to change how the mouse pointer is displayed. 
Most cursor themes change all the common cursors, including the pointer, drag handles and the loading cursor.

![Comparison of multiple cursors of two different cursor themes. Fedora 30 default is on the left, the Breeze Snow theme on the right.][6]

### Changing the themes

Changing themes on Fedora Workstation is a simple process. To change all 4 types of themes, use the **Tweaks** application. Tweaks is a tool used to change a range of different options in Fedora Workstation. It is not installed by default, but can be installed using the Software application:

![][7]

Alternatively, install Tweaks from the command line with the command:

```
sudo dnf install gnome-tweak-tool
```

In addition to Tweaks, to change the Shell theme, the **User Themes** GNOME Shell Extension needs to be installed and enabled. [Check out this post for more details on installing extensions][8].

Next, launch Tweaks, and switch to the Appearance pane. The Themes section in the Appearance pane allows you to change each of the theme types. Simply choose a theme from the dropdown, and the new theme will apply automatically.

![][9]

### Installing themes

Armed with the knowledge of the theme types and how to change them, it is time to install some themes. Broadly speaking, there are two ways to install new themes to your Fedora Workstation — installing theme packages from the Fedora repositories, or manually installing a theme. One point to note when installing themes is that you may need to close and re-open the Tweaks application to make a newly installed theme appear in the dropdowns.

#### Installing from the Fedora repositories

The Fedora repositories contain a small selection of additional themes that, once installed, are available to be chosen in Tweaks. Theme packages are not available in the Software application, and have to be searched for and installed via the command line. Most theme packages have a consistent naming structure, so listing available themes is pretty easy.

To find Application (GTK) themes, use the command:

```
dnf search gtk | grep theme
```

To find Shell themes:

```
dnf search shell-theme
```

Icon themes:

```
dnf search icon-theme
```

Cursor themes:

```
dnf search cursor-theme
```

Once you have found a theme to install, install it using dnf. For example:

```
sudo dnf install numix-gtk-theme
```

#### Installing themes manually

For a wider range of themes, there is a plethora of places on the internet where you can find new themes to use on Fedora Workstation. Two popular places to find themes are [OpenDesktop][10] and [GNOMELook][11].

Typically, themes downloaded from these sites are packaged in an archive such as a tar.gz or zip file. In most cases, to install these themes, simply extract the contents into the correct directory, and the theme will appear in Tweaks. Note, too, that themes can be installed either globally (this must be done using sudo), so all users on the system can use them, or just for the current user.

For Application (GTK) themes and GNOME Shell themes, extract the archive to the **.themes/** directory in your home directory. To install for all users, extract to **/usr/share/themes/**.

For Icon and Cursor themes, extract the archive to the **.icons/** directory in your home directory. 
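For example, a per-user install of a downloaded cursor or icon theme could look like the following sketch (the archive name here is hypothetical; substitute the file you actually downloaded):

```
mkdir -p ~/.icons
tar xf ~/Downloads/example-cursor-theme.tar.gz -C ~/.icons
```

The same idea applies to the **.themes/** directory for Application and Shell themes. 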
To install for all users, extract to **/usr/share/icons/** + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/tweaking-the-look-of-fedora-workstation-with-themes/ + +作者:[Ryan Lerch][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/ryanlerch/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/wp-content/uploads/2019/06/themes.png-816x345.jpg +[2]: https://fedoramagazine.org/wp-content/uploads/2019/06/application-theme-1024x514.jpg +[3]: https://fedoramagazine.org/wp-content/uploads/2019/06/overview-theme-1024x649.jpg +[4]: https://fedoramagazine.org/wp-content/uploads/2019/06/icon-theme-application-1024x441.jpg +[5]: https://fedoramagazine.org/wp-content/uploads/2019/06/overview-icons-1024x637.jpg +[6]: https://fedoramagazine.org/wp-content/uploads/2019/06/cursortheme-1024x467.jpg +[7]: https://fedoramagazine.org/wp-content/uploads/2019/06/tweaks-in-software-1024x725.png +[8]: https://fedoramagazine.org/install-extensions-via-software-application/ +[9]: https://fedoramagazine.org/wp-content/uploads/2019/06/tweaks-choose-themes.png +[10]: https://www.opendesktop.org/ +[11]: https://www.gnome-look.org/ diff --git a/sources/tech/20190606 Examples of blameless culture outside of DevOps.md b/sources/tech/20190606 Examples of blameless culture outside of DevOps.md new file mode 100644 index 0000000000..b78722f3ef --- /dev/null +++ b/sources/tech/20190606 Examples of blameless culture outside of DevOps.md @@ -0,0 +1,65 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Examples of blameless culture outside of DevOps) +[#]: via: (https://opensource.com/article/19/6/everyday-blameless) +[#]: author: (Patrick Housley https://opensource.com/users/patrickhousley) + +Examples of blameless culture outside of DevOps +====== +Is blameless culture just a matter of postmortems and top-down change? +Or are there things individuals can do to promote it? +![people in different locations who are part of the same team][1] + +A blameless culture is not a new concept in the technology industry. In fact, in 2012, [John Allspaw][2] wrote about how [Etsy uses blameless postmortems][3] to dive to the heart of problems when they arise. Other technology giants, like Google, have also worked hard to implement a blameless culture. But what is a blameless culture? Is it just a matter of postmortems? Does it take a culture change to make blameless a reality? And what about flagrant misconduct? + +### Exploring blameless culture + +In 2009, [Mike Rother][4] wrote an [award-winning book][5] on the culture of Toyota, in which he broke down how the automaker became so successful in the 20th century when most other car manufacturers were either stagnant or losing ground. Books on Toyota were nothing new, but how Mike approached Toyota's success was unique. Instead of focusing on the processes and procedures Toyota implements, he explains in exquisite detail the company's culture, including its focus on blameless failure and continuous improvement. + +Mike explains that Toyota, in the face of failure, focuses on the system where the failure occurred instead of who is at fault. Furthermore, the company treats failure as a learning opportunity, not a chance to chastise the operator. 
This is the very definition of a blameless culture and one that the technology field can still learn much from. + +### It's not a culture shift + +It shouldn't take an executive initiative to attain blamelessness. It's not so much the company's culture that we need to change, but our attitudes towards fault and failure. Sure, the company's culture should change, but, even in a blameless culture, some people still have the undying urge to point fingers and call others out for their shortcomings. + +I was once contracted to work with a company on developing and improving its digital footprint. This company employed its own developers, and, as you might imagine, there was tension at times. If a bug was found in production, the search began immediately for the person responsible. I think it's just human nature to want to find someone to blame. But there is a better way, and it will take practice. + +### Blamelessness at the microscale + +When I talk about implementing blamelessness, I'm not talking about doing it at the scale of companies and organizations. That's too large for most of us. Instead, focus your attention on the smallest scale: the code commit, review, and pull request. Focus on your actions and the actions of your peers and those you lead. You may find that you have the biggest impact in this area. + +How often do you or one of your peers get a bug report, dig in to find out what is wrong, and stop once you determine who made the breaking change? Do you immediately assume that a pull request or code commit needs to be reverted? Do you contact that individual and tell them what they broke and which commit it was? If this is happening within your team, you're the furthest from blamelessness you could be. But it can be remedied. + +Obviously, when you find a bug, you need to understand what broke, where, and who did it. But don't stop there. Attempt to fix the issue. The chances are high that patching the code will be a faster resolution than trying to figure out which code to back out. Too many times, I have seen people try to back out code only to find that they broke something else. + +If you're not confident that you can fix the issue, politely ask the individual who made the breaking change to assist. Yes, assist! My mom always said, "you can catch more flies with honey than vinegar." You will typically get a more positive response if you ask people for help instead of pointing out what they broke. + +Finally, once you have a fix, make sure to ask the individual who caused the bug to review your change. This isn't about rubbing it in their face. Remember that failure represents a learning opportunity, and the person who created the failure will learn if they have a chance to review the fix you created. Furthermore, that individual may have unique details and reasoning that suggests your change may fix the immediate issue but may not solve the original problem. + +### Catch flagrant misconduct and abuse sooner + +A blameless culture doesn't provide blanket protection if someone is knowingly attempting to do wrong. That also doesn't mean the system is not faulty. Remember how Toyota focuses on the system where failure occurs? If an individual can knowingly create havoc within the software they are working on, they should be held accountable—but so should the system. + +When reviewing failure, no matter how small, always ask, "How could we have caught this sooner?" 
Chances are you could improve some part of your software development lifecycle (SDLC) to make failures less likely to happen. Maybe you need to add more tests. Or run your tests more often. Whatever the solution, remember that fixing the bug is only part of a complete fix. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/everyday-blameless + +作者:[Patrick Housley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/patrickhousley +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/connection_people_team_collaboration.png?itok=0_vQT8xV (people in different locations who are part of the same team) +[2]: https://twitter.com/allspaw +[3]: https://codeascraft.com/2012/05/22/blameless-postmortems/ +[4]: http://www-personal.umich.edu/~mrother/Homepage.html +[5]: https://en.wikipedia.org/wiki/Toyota_Kata diff --git a/sources/tech/20190606 Why hypothesis-driven development is key to DevOps.md b/sources/tech/20190606 Why hypothesis-driven development is key to DevOps.md new file mode 100644 index 0000000000..766393dc3f --- /dev/null +++ b/sources/tech/20190606 Why hypothesis-driven development is key to DevOps.md @@ -0,0 +1,152 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why hypothesis-driven development is key to DevOps) +[#]: via: (https://opensource.com/article/19/6/why-hypothesis-driven-development-devops) +[#]: author: (Brent Aaron Reed https://opensource.com/users/brentaaronreed/users/wpschaub) + +Why hypothesis-driven development is key to DevOps +====== +A hypothesis-driven development mindset harvests the core value of +feature flags: experimentation in production. +![gears and lightbulb to represent innovation][1] + +The definition of DevOps, offered by [Donovan Brown][2] * _is_ "The union of **people** , **process** , and **products** to enable continuous delivery of **value** to our customers.*" It accentuates the importance of continuous delivery of value. Let's discuss how experimentation is at the heart of modern development practices. + +![][3] + +### Reflecting on the past + +Before we get into hypothesis-driven development, let's quickly review how we deliver value using waterfall, agile, deployment rings, and feature flags. + +In the days of _**waterfall**_ , we had predictable and process-driven delivery. However, we only delivered value towards the end of the development lifecycle, often failing late as the solution drifted from the original requirements, or our killer features were outdated by the time we finally shipped. + +![][4] + +Here, we have one release X and eight features, which are all deployed and exposed to the patiently waiting user. We are continuously delivering value—but with a typical release cadence of six months to two years, _the value of the features declines as the world continues to move on_. It worked well enough when there was time to plan and a lower expectation to react to more immediate needs. + +The introduction of _**agile**_ allowed us to create and respond to change so we could continuously deliver working software, sense, learn, and respond. + +![][5] + +Now, we have three releases: X.1, X.2, and X.3. 
After the X.1 release, we improved feature 3 based on feedback and re-deployed it in release X.3. This is a simple example of delivering features more often, focused on working software, and responding to user feedback. _We are on the path of continuous delivery, focused on our key stakeholders: our users._ + +Using _**deployment rings**_ and/or _**feature flags**_ , we can decouple release deployment and feature exposure, down to the individual user, to control the exposure—the blast radius—of features. We can conduct experiments; progressively expose, test, enable, and hide features; fine-tune releases, and continuously pivot on learnings and feedback. + +When we add feature flags to the previous workflow, we can toggle features to be ON (enabled and exposed) or OFF (hidden). + +![][6] + +Here, feature flags for features 2, 4, and 8 are OFF, which results in the user being exposed to fewer of the features. All features have been deployed but are not exposed (yet). _We can fine-tune the features (value) of each release after deploying to production._ + +_**Ring-based deployment**_ limits the impact (blast) on users while we gradually deploy and evaluate one or more features through observation. Rings allow us to deploy features progressively and have multiple releases (v1, v1.1, and v1.2) running in parallel. + +![Ring-based deployment][7] + +Exposing features in the canary and early-adopter rings enables us to evaluate features without the risk of an all-or-nothing big-bang deployment. + +_**Feature flags**_ decouple release deployment and feature exposure. You "flip the flag" to expose a new feature, perform an emergency rollback by resetting the flag, use rules to hide features, and allow users to toggle preview features. + +![Toggling feature flags on/off][8] + +When you combine deployment rings and feature flags, you can progressively deploy a release through rings and use feature flags to fine-tune the deployed release. + +> See [deploying new releases: Feature flags or rings][9], [what's the cost of feature flags][10], and [breaking down walls between people, process, and products][11] for discussions on feature flags, deployment rings, and related topics. + +### Adding hypothesis-driven development to the mix + +_**Hypothesis-driven development**_ is based on a series of experiments to validate or disprove a hypothesis in a [complex problem domain][12] where we have unknown-unknowns. We want to find viable ideas or fail fast. Instead of developing a monolithic solution and performing a big-bang release, we iterate through hypotheses, evaluating how features perform and, most importantly, how and if customers use them. + +> **Template:** _**We believe**_ {customer/business segment} _**wants**_ {product/feature/service} _**because**_ {value proposition}. +> +> **Example:** _**We believe**_ that users _**want**_ to be able to select different themes _**because**_ it will result in improved user satisfaction. We expect 50% or more users to select a non-default theme and to see a 5% increase in user engagement. + +Every experiment must be based on a hypothesis, have a measurable conclusion, and contribute to feature and overall product learning. 
For each experiment, consider these steps: + + * Observe your user + * Define a hypothesis and an experiment to assess the hypothesis + * Define clear success criteria (e.g., a 5% increase in user engagement) + * Run the experiment + * Evaluate the results and either accept or reject the hypothesis + * Repeat + + + +Let's have another look at our sample release with eight hypothetical features. + +![][13] + +When we deploy each feature, we can observe user behavior and feedback, and prove or disprove the hypothesis that motivated the deployment. As you can see, the experiment fails for features 2 and 6, allowing us to fail-fast and remove them from the solution. _**We do not want to carry waste that is not delivering value or delighting our users!**_ The experiment for feature 3 is inconclusive, so we adapt the feature, repeat the experiment, and perform A/B testing in Release X.2. Based on observations, we identify the variant feature 3.2 as the winner and re-deploy in release X.3. _**We only expose the features that passed the experiment and satisfy the users.**_ + +### Hypothesis-driven development lights up progressive exposure + +When we combine hypothesis-driven development with progressive exposure strategies, we can vertically slice our solution, incrementally delivering on our long-term vision. With each slice, we progressively expose experiments, enable features that delight our users and hide those that did not make the cut. + +But there is more. When we embrace hypothesis-driven development, we can learn how technology works together, or not, and what our customers need and want. We also complement the test-driven development (TDD) principle. TDD encourages us to write the test first (hypothesis), then confirm our features are correct (experiment), and succeed or fail the test (evaluate). _**It is all about quality and delighting our users** , as outlined in principles 1, 3, and 7_ of the [Agile Manifesto][14]: + + * Our highest priority is to satisfy the customers through early and continuous delivery of value. + * Deliver software often, from a couple of weeks to a couple of months, with a preference to the shorter timescale. + * Working software is the primary measure of progress. + + + +More importantly, we introduce a new mindset that breaks down the walls between development, business, and operations to view, design, develop, deliver, and observe our solution in an iterative series of experiments, adopting features based on scientific analysis, user behavior, and feedback in production. We can evolve our solutions in thin slices through observation and learning in production, a luxury that other engineering disciplines, such as aerospace or civil engineering, can only dream of. + +The good news is that hypothesis-driven development supports the empirical process theory and its three pillars: **Transparency** , **Inspection** , and **Adaption**. + +![][15] + +But there is more. Based on lean principles, we must pivot or persevere after we measure and inspect the feedback. Using feature toggles in conjunction with hypothesis-driven development, we get the best of both worlds, as well as the ability to use A|B testing to make decisions on feedback, such as likes/dislikes and value/waste. + +### Remember: + +Hypothesis-driven development: + + * Is about a series of experiments to confirm or disprove a hypothesis. Identify value! + * Delivers a measurable conclusion and enables continued learning. 
+ * Enables continuous feedback from the key stakeholder—the user—to understand the unknown-unknowns! + * Enables us to understand the evolving landscape into which we progressively expose value. + + + +Progressive exposure: + + * Is not an excuse to hide non-production-ready code. _**Always ship quality!**_ + * Is about deploying a release of features through rings in production. _**Limit blast radius!**_ + * Is about enabling or disabling features in production. _**Fine-tune release values!**_ + * Relies on circuit breakers to protect the infrastructure from implications of progressive exposure. _**Observe, sense, act!**_ + + + +What have you learned about progressive exposure strategies and hypothesis-driven development? We look forward to your candid feedback. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/why-hypothesis-driven-development-devops + +作者:[Brent Aaron Reed][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/brentaaronreed/users/wpschaub +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M (gears and lightbulb to represent innovation) +[2]: http://donovanbrown.com/post/what-is-devops +[3]: https://opensource.com/sites/default/files/hypo-1_copy.png +[4]: https://opensource.com/sites/default/files/uploads/hyp0-2-trans.png +[5]: https://opensource.com/sites/default/files/uploads/hypo-3-trans.png +[6]: https://opensource.com/sites/default/files/uploads/hypo-4_0.png +[7]: https://opensource.com/sites/default/files/uploads/hypo-6-trans.png +[8]: https://opensource.com/sites/default/files/uploads/hypo-7-trans.png +[9]: https://opensource.com/article/18/2/feature-flags-ring-deployment-model +[10]: https://opensource.com/article/18/7/does-progressive-exposure-really-come-cost +[11]: https://opensource.com/article/19/3/breaking-down-walls-between-people-process-and-products +[12]: https://en.wikipedia.org/wiki/Cynefin_framework +[13]: https://opensource.com/sites/default/files/uploads/hypo-5-trans.png +[14]: https://agilemanifesto.org/principles.html +[15]: https://opensource.com/sites/default/files/uploads/adapt-transparent-inspect.png diff --git a/sources/tech/20190607 4 tools to help you drive Kubernetes.md b/sources/tech/20190607 4 tools to help you drive Kubernetes.md new file mode 100644 index 0000000000..bb45fb0bea --- /dev/null +++ b/sources/tech/20190607 4 tools to help you drive Kubernetes.md @@ -0,0 +1,219 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 tools to help you drive Kubernetes) +[#]: via: (https://opensource.com/article/19/6/tools-drive-kubernetes) +[#]: author: (Scott McCarty https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux/users/fatherlinux) + +4 tools to help you drive Kubernetes +====== +Learning to drive Kubernetes is more important that knowing how to build +it, and these tools will get you on the road fast. +![Tools in a workshop][1] + +In the third article in this series, _[Kubernetes basics: Learn how to drive first][2]_ , I emphasized that you should learn to drive Kubernetes, not build it. 
I also explained that there is a minimum set of primitives that you have to learn to model an application in Kubernetes. I want to emphasize this point: the set of primitives that you _need_ to learn are the easiest set of primitives that you _can_ learn to achieve production-quality application deployments (i.e., high-availability [HA], multiple containers, multiple applications). Stated another way, learning the set of primitives built into Kubernetes is easier than learning clustering software, clustered file systems, load balancers, crazy Apache configs, crazy Nginx configs, routers, switches, firewalls, and storage backends—all the things you would need to model a simple HA application in a traditional IT environment (for virtual machines or bare metal). + +In this fourth article, I'll share some tools that will help you learn to drive Kubernetes quickly. + +### 1\. Katacoda + +[Katacoda][3] is the easiest way to test-drive a Kubernetes cluster, hands-down. With one click and five seconds of time, you have a web-based terminal plumbed straight into a running Kubernetes cluster. It's magnificent for playing and learning. I even use it for demos and testing out new ideas. Katacoda provides a completely ephemeral environment that is recycled when you finish using it. + +![OpenShift Playground][4] + +[OpenShift playground][5] + +![Kubernetes Playground][6] + +[Kubernetes playground][7] + +Katacoda provides ephemeral playgrounds and deeper lab environments. For example, the [Linux Container Internals Lab][3], which I have run for the last three or four years, is built in Katacoda. + +Katacoda maintains a bunch of [Kubernetes and cloud tutorials][8] on its main site and collaborates with Red Hat to support a [dedicated learning portal for OpenShift][9]. Explore them both—they are excellent learning resources. + +When you first learn to drive a dump truck, it's always best to watch how other people drive first. + +### 2\. Podman generate kube + +The **podman generate kube** command is a brilliant little subcommand that helps users naturally transition from a simple container engine running simple containers to a cluster use case running many containers (as I described in the [last article][2]). [Podman][10] does this by letting you start a few containers, then exporting the working Kube YAML, and firing them up in Kubernetes. Check this out (pssst, you can run it in this [Katacoda lab][3], which already has Podman and OpenShift). + +First, notice the syntax to run a container is strikingly similar to Docker: + + +``` +`podman run -dtn two-pizza quay.io/fatherlinux/two-pizza` +``` + +But this is something other container engines don't do: + + +``` +`podman generate kube two-pizza` +``` + +The output: + + +``` +# Generation of Kubernetes YAML is still under development! +# +# Save the output of this file and use kubectl create -f to import +# it into Kubernetes. 
+# +# Created with podman-1.3.1 +apiVersion: v1 +kind: Pod +metadata: +creationTimestamp: "2019-06-07T08:08:12Z" +labels: +app: two-pizza +name: two-pizza +spec: +containers: +\- command: +\- /bin/sh +\- -c +\- bash -c 'while true; do /usr/bin/nc -l -p 3306 < /srv/hello.txt; done' +env: +\- name: PATH +value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +\- name: TERM +value: xterm +\- name: HOSTNAME +\- name: container +value: oci +image: quay.io/fatherlinux/two-pizza:latest +name: two-pizza +resources: {} +securityContext: +allowPrivilegeEscalation: true +capabilities: {} +privileged: false +readOnlyRootFilesystem: false +tty: true +workingDir: / +status: {} +\--- +apiVersion: v1 +kind: Service +metadata: +creationTimestamp: "2019-06-07T08:08:12Z" +labels: +app: two-pizza +name: two-pizza +spec: +selector: +app: two-pizza +type: NodePort +status: +loadBalancer: {} +``` + +You now have some working Kubernetes YAML, which you can use as a starting point for mucking around and learning, tweaking, etc. The **-s** flag created a service for you. [Brent Baude][11] is even working on new features like [adding Volumes/Persistent Volume Claims][12]. For a deeper dive, check out Brent's amazing work in his blog post "[Podman can now ease the transition to Kubernetes and CRI-O][13]." + +### 3\. oc new-app + +The **oc new-app** command is extremely powerful. It's OpenShift-specific, so it's not available in default Kubernetes, but it's really useful when you're starting to learn Kubernetes. Let's start with a quick command to create a fairly sophisticated application: + + +``` +oc new-project -n example +oc new-app -f +``` + +With **oc new-app** , you can literally steal templates from the OpenShift developers and have a known, good starting point when developing primitives to describe your own applications. After you run the above command, your Kubernetes namespace (in OpenShift) will be populated by a bunch of new, defined resources. + + +``` +`oc get all` +``` + +The output: + + +``` +NAME READY STATUS RESTARTS AGE +pod/cakephp-mysql-example-1-build 0/1 Completed 0 4m +pod/cakephp-mysql-example-1-gz65l 1/1 Running 0 1m +pod/mysql-1-nkhqn 1/1 Running 0 4m + +NAME DESIRED CURRENT READY AGE +replicationcontroller/cakephp-mysql-example-1 1 1 1 1m +replicationcontroller/mysql-1 1 1 1 4m + +NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE +service/cakephp-mysql-example ClusterIP 172.30.234.135 8080/TCP 4m +service/mysql ClusterIP 172.30.13.195 3306/TCP 4m + +NAME REVISION DESIRED CURRENT TRIGGERED BY +deploymentconfig.apps.openshift.io/cakephp-mysql-example 1 1 1 config,image(cakephp-mysql-example:latest) +deploymentconfig.apps.openshift.io/mysql 1 1 1 config,image(mysql:5.7) + +NAME TYPE FROM LATEST +buildconfig.build.openshift.io/cakephp-mysql-example Source Git 1 + +NAME TYPE FROM STATUS STARTED DURATION +build.build.openshift.io/cakephp-mysql-example-1 Source Git@47a951e Complete 4 minutes ago 2m27s + +NAME DOCKER REPO TAGS UPDATED +imagestream.image.openshift.io/cakephp-mysql-example docker-registry.default.svc:5000/example/cakephp-mysql-example latest About aminute ago + +NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD +route.route.openshift.io/cakephp-mysql-example cakephp-mysql-example-example.2886795271-80-rhsummit1.environments.katacoda.com cakephp-mysql-example None +``` + +The beauty of this is that you can delete Pods, watch the replication controllers recreate them, scale Pods up, and scale them down. 
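If you want to try that yourself, a few `oc` commands along these lines should do it. This is only a sketch; the pod and deploymentconfig names are taken from the `oc get all` output above, so substitute your own if they differ:


```
# Delete a pod and watch the replication controller replace it
oc delete pod cakephp-mysql-example-1-gz65l
oc get pods -w

# Scale the CakePHP deploymentconfig up to three replicas, then back down
oc scale dc/cakephp-mysql-example --replicas=3
oc get pods
oc scale dc/cakephp-mysql-example --replicas=1
```
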
You can play with the template and change it for other applications (which is what I did when I first started). + +### 4\. Visual Studio Code + +I saved one of my favorites for last. I use [vi][14] for most of my work, but I have never found a good syntax highlighter and code completion plugin for Kubernetes (if you have one, let me know). Instead, I have found that Microsoft's [VS Code][15] has a killer set of plugins that complete the creation of Kubernetes resources and provide boilerplate. + +![VS Code plugins UI][16] + +First, install Kubernetes and YAML plugins shown in the image above. + +![Autocomplete in VS Code][17] + +Then, you can create a new YAML file from scratch and get auto-completion of Kubernetes resources. The example above shows a Service. + +![VS Code autocomplete filling in boilerplate for an object][18] + +When you use autocomplete and select the Service resource, it fills in some boilerplate for the object. This is magnificent when you are first learning to drive Kubernetes. You can build Pods, Services, Replication Controllers, Deployments, etc. This is a really nice feature when you are building these files from scratch or even modifying the files you create with **Podman generate kube**. + +### Conclusion + +These four tools (six if you count the two plugins) will help you learn to drive Kubernetes, instead of building or equipping it. In my final article in the series, I will talk about why Kubernetes is so exciting for running so many different workloads. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/tools-drive-kubernetes + +作者:[Scott McCarty][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/fatherlinux/users/fatherlinux/users/fatherlinux/users/fatherlinux +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_workshop_blue_mechanic.jpg?itok=4YXkCU-J (Tools in a workshop) +[2]: https://opensource.com/article/19/6/kubernetes-basics +[3]: https://learn.openshift.com/subsystems/container-internals-lab-2-0-part-1 +[4]: https://opensource.com/sites/default/files/uploads/openshift-playground.png (OpenShift Playground) +[5]: https://learn.openshift.com/playgrounds/openshift311/ +[6]: https://opensource.com/sites/default/files/uploads/kubernetes-playground.png (Kubernetes Playground) +[7]: https://katacoda.com/courses/kubernetes/playground +[8]: https://katacoda.com/learn +[9]: http://learn.openshift.com/ +[10]: https://podman.io/ +[11]: https://developers.redhat.com/blog/author/bbaude/ +[12]: https://github.com/containers/libpod/issues/2303 +[13]: https://developers.redhat.com/blog/2019/01/29/podman-kubernetes-yaml/ +[14]: https://en.wikipedia.org/wiki/Vi +[15]: https://code.visualstudio.com/ +[16]: https://opensource.com/sites/default/files/uploads/vscode_-_kubernetes_red_hat_-_plugins.png (VS Code plugins UI) +[17]: https://opensource.com/sites/default/files/uploads/vscode_-_kubernetes_service_-_autocomplete.png (Autocomplete in VS Code) +[18]: https://opensource.com/sites/default/files/uploads/vscode_-_kubernetes_service_-_boiler_plate.png (VS Code autocomplete filling in boilerplate for an object) diff --git a/sources/tech/20190607 An Introduction to Kubernetes Secrets and ConfigMaps.md b/sources/tech/20190607 An Introduction to 
Kubernetes Secrets and ConfigMaps.md new file mode 100644 index 0000000000..7d28e67ea4 --- /dev/null +++ b/sources/tech/20190607 An Introduction to Kubernetes Secrets and ConfigMaps.md @@ -0,0 +1,608 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (An Introduction to Kubernetes Secrets and ConfigMaps) +[#]: via: (https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps) +[#]: author: (Chris Collins https://opensource.com/users/clcollins) + +An Introduction to Kubernetes Secrets and ConfigMaps +====== +Kubernetes Secrets and ConfigMaps separate the configuration of +individual container instances from the container image, reducing +overhead and adding flexibility. +![Kubernetes][1] + +Kubernetes has two types of objects that can inject configuration data into a container when it starts up: Secrets and ConfigMaps. Secrets and ConfigMaps behave similarly in [Kubernetes][2], both in how they are created and because they can be exposed inside a container as mounted files or volumes or environment variables. + +To explore Secrets and ConfigMaps, consider the following scenario: + +> You're running the [official MariaDB container image][3] in Kubernetes and must do some configuration to get the container to run. The image requires an environment variable to be set for **MYSQL_ROOT_PASSWORD** , **MYSQL_ALLOW_EMPTY_PASSWORD** , or **MYSQL_RANDOM_ROOT_PASSWORD** to initialize the database. It also allows for extensions to the MySQL configuration file **my.cnf** by placing custom config files in **/etc/mysql/conf.d**. + +You could build a custom image, setting the environment variables and copying the configuration files into it to create a bespoke container image. However, it is considered a best practice to create and use generic images and add configuration to the containers created from them, instead. This is a perfect use-case for ConfigMaps and Secrets. The **MYSQL_ROOT_PASSWORD** can be set in a Secret and added to the container as an environment variable, and the configuration files can be stored in a ConfigMap and mounted into the container as a file on startup. + +Let's try it out! + +### But first: A quick note about Kubectl + +Make sure that your version of the **kubectl** client command is the same or newer than the Kubernetes cluster version in use. + +An error along the lines of: + + +``` +`error: SchemaError(io.k8s.api.admissionregistration.v1beta1.ServiceReference): invalid object doesn't have additional properties` +``` + +may mean the client version is too old and needs to be upgraded. The [Kubernetes Documentation for Installing Kubectl][4] has instructions for installing the latest client on various platforms. + +If you're using Docker for Mac, it also installs its own version of **kubectl** , and that may be the issue. You can install a current client with **brew install** , replacing the symlink to the client shipped by Docker: + + +``` +$ rm /usr/local/bin/kubectl +$ brew link --overwrite kubernetes-cli +``` + +The newer **kubectl** client should continue to work with Docker's Kubernetes version. + +### Secrets + +Secrets are a Kubernetes object intended for storing a small amount of sensitive data. It is worth noting that Secrets are stored base64-encoded within Kubernetes, so they are not wildly secure. Make sure to have appropriate [role-based access controls][5] (RBAC) to protect access to Secrets. 
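If you have not set up RBAC before, a minimal Role and RoleBinding that restrict Secret access to read-only look roughly like the following. This is a sketch only, not something used later in this article; the `secret-reader` and `read-secrets` names are placeholders, and the namespace matches the one that appears in the examples below:


```
# Allow read-only access to Secrets in this namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: secrets-and-configmaps
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
---
# Grant that Role to the namespace's default ServiceAccount
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: secrets-and-configmaps
  name: read-secrets
subjects:
- kind: ServiceAccount
  name: default
  namespace: secrets-and-configmaps
roleRef:
  kind: Role
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
```
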
Even so, extremely sensitive Secrets data should probably be stored using something like [HashiCorp Vault][6]. For the root password of a MariaDB database, however, base64 encoding is just fine. + +#### Create a Secret manually + +To create the Secret containing the **MYSQL_ROOT_PASSWORD** , choose a password and convert it to base64: + + +``` +# The root password will be "KubernetesRocks!" +$ echo -n 'KubernetesRocks!' | base64 +S3ViZXJuZXRlc1JvY2tzIQ== +``` + +Make a note of the encoded string. You need it to create the YAML file for the Secret: + + +``` +apiVersion: v1 +kind: Secret +metadata: +name: mariadb-root-password +type: Opaque +data: +password: S3ViZXJuZXRlc1JvY2tzIQ== +``` + +Save that file as **mysql-secret.yaml** and create the Secret in Kubernetes with the **kubectl apply** command: + + +``` +$ kubectl apply -f mysql-secret.yaml +secret/mariadb-root-password created +``` + +#### View the newly created Secret + +Now that you've created the Secret, use **kubectl describe** to see it: + + +``` +$ kubectl describe secret mariadb-root-password +Name: mariadb-root-password +Namespace: secrets-and-configmaps +Labels: +Annotations: +Type: Opaque + +Data +==== +password: 16 bytes +``` + +Note that the **Data** field contains the key you set in the YAML: **password**. The value assigned to that key is the password you created, but it is not shown in the output. Instead, the value's size is shown in its place, in this case, 16 bytes. + +You can also use the **kubectl edit secret ** command to view and edit the Secret. If you edit the Secret, you'll see something like this: + + +``` +# Please edit the object below. Lines beginning with a '#' will be ignored, +# and an empty file will abort the edit. If an error occurs while saving this file will be +# reopened with the relevant failures. +# +apiVersion: v1 +data: +password: S3ViZXJuZXRlc1JvY2tzIQ== +kind: Secret +metadata: +annotations: +kubectl.kubernetes.io/last-applied-configuration: | +{"apiVersion":"v1","data":{"password":"S3ViZXJuZXRlc1JvY2tzIQ=="},"kind":"Secret","metadata":{"annotations":{},"name":"mariadb-root-password","namespace":"secrets-and-configmaps"},"type":"Opaque"} +creationTimestamp: 2019-05-29T12:06:09Z +name: mariadb-root-password +namespace: secrets-and-configmaps +resourceVersion: "85154772" +selfLink: /api/v1/namespaces/secrets-and-configmaps/secrets/mariadb-root-password +uid: 2542dadb-820a-11e9-ae24-005056a1db05 +type: Opaque +``` + +Again, the **data** field with the **password** key is visible, and this time you can see the base64-encoded Secret. + +#### Decode the Secret + +Let's say you need to view the Secret in plain text, for example, to verify that the Secret was created with the correct content. You can do this by decoding it. + +It is easy to decode the Secret by extracting the value and piping it to base64. In this case, you will use the output format **-o jsonpath= ** to extract only the Secret value using a JSONPath template. + + +``` +# Returns the base64 encoded secret string +$ kubectl get secret mariadb-root-password -o jsonpath='{.data.password}' +S3ViZXJuZXRlc1JvY2tzIQ== + +# Pipe it to `base64 --decode -` to decode: +$ kubectl get secret mariadb-root-password -o jsonpath='{.data.password}' | base64 --decode - +KubernetesRocks! +``` + +#### Another way to create Secrets + +You can also create Secrets directly using the **kubectl create secret** command. 
The MariaDB image permits setting up a regular database user with a password by setting the **MYSQL_USER** and **MYSQL_PASSWORD** environment variables. A Secret can hold more than one key/value pair, so you can create a single Secret to hold both strings. As a bonus, by using **kubectl create secret** , you can let Kubernetes mess with base64 so that you don't have to. + + +``` +$ kubectl create secret generic mariadb-user-creds \ +\--from-literal=MYSQL_USER=kubeuser\ +\--from-literal=MYSQL_PASSWORD=kube-still-rocks +secret/mariadb-user-creds created +``` + +Note the **\--from-literal** , which sets the key name and the value all in one. You can pass as many **\--from-literal** arguments as you need to create one or more key/value pairs in the Secret. + +Validate that the username and password were created and stored correctly with the **kubectl get secrets** command: + + +``` +# Get the username +$ kubectl get secret mariadb-user-creds -o jsonpath='{.data.MYSQL_USER}' | base64 --decode - +kubeuser + +# Get the password +$ kubectl get secret mariadb-user-creds -o jsonpath='{.data.MYSQL_PASSWORD}' | base64 --decode - +kube-still-rocks +``` + +### ConfigMaps + +ConfigMaps are similar to Secrets. They can be created and shared in the containers in the same ways. The only big difference between them is the base64-encoding obfuscation. ConfigMaps are intended for non-sensitive data—configuration data—like config files and environment variables and are a great way to create customized running services from generic container images. + +#### Create a ConfigMap + +ConfigMaps can be created in the same ways as Secrets. You can write a YAML representation of the ConfigMap manually and load it into Kubernetes, or you can use the **kubectl create configmap** command to create it from the command line. The following example creates a ConfigMap using the latter method but, instead of passing literal strings (as with **\--from-literal= =** in the Secret above), it creates a ConfigMap from an existing file—a MySQL config intended for **/etc/mysql/conf.d** in the container. This config file overrides the **max_allowed_packet** setting that MariaDB sets to 16M by default. + +First, create a file named **max_allowed_packet.cnf** with the following content: + + +``` +[mysqld] +max_allowed_packet = 64M +``` + +This will override the default setting in the **my.cnf** file and set **max_allowed_packet** to 64M. + +Once the file is created, you can create a ConfigMap named **mariadb-config** using the **kubectl create configmap** command that contains the file: + + +``` +$ kubectl create configmap mariadb-config --from-file=max_allowed_packet.cnf +configmap/mariadb-config created +``` + +Just like Secrets, ConfigMaps store one or more key/value pairs in their Data hash of the object. By default, using **\--from-file= ** (as above) will store the contents of the file as the value, and the name of the file will be stored as the key. This is convenient from an organization viewpoint. However, the key name can be explicitly set, too. For example, if you used **\--from-file=max-packet=max_allowed_packet.cnf** when you created the ConfigMap, the key would be **max-packet** rather than the file name. If you had multiple files to store in the ConfigMap, you could add each of them with an additional **\--from-file= ** argument. + +#### View the new ConfigMap and read the data + +As mentioned, ConfigMaps are not meant to store sensitive data, so the data is not encoded when the ConfigMap is created. 
This makes it easy to view and validate the data and edit it directly. + +First, validate that the ConfigMap was, indeed, created: + + +``` +$ kubectl get configmap mariadb-config +NAME DATA AGE +mariadb-config 1 9m +``` + +The contents of the ConfigMap can be viewed with the **kubectl describe** command. Note that the full contents of the file are visible and that the key name is, in fact, the file name, **max_allowed_packet.cnf**. + + +``` +$ kubectl describe cm mariadb-config +Name: mariadb-config +Namespace: secrets-and-configmaps +Labels: +Annotations: + +Data +==== +max_allowed_packet.cnf: +\---- +[mysqld] +max_allowed_packet = 64M + +Events: +``` + +A ConfigMap can be edited live within Kubernetes with the **kubectl edit** command. Doing so will open a buffer with the default editor showing the contents of the ConfigMap as YAML. When changes are saved, they will immediately be live in Kubernetes. While not really the _best_ practice, it can be handy for testing things in development. + +Say you want a **max_allowed_packet** value of 32M instead of the default 16M or the 64M in the **max_allowed_packet.cnf** file. Use **kubectl edit configmap mariadb-config** to edit the value: + + +``` +$ kubectl edit configmap mariadb-config + +# Please edit the object below. Lines beginning with a '#' will be ignored, +# and an empty file will abort the edit. If an error occurs while saving this file will be +# reopened with the relevant failures. +# +apiVersion: v1 + +data: +max_allowed_packet.cnf: | +[mysqld] +max_allowed_packet = 32M +kind: ConfigMap +metadata: +creationTimestamp: 2019-05-30T12:02:22Z +name: mariadb-config +namespace: secrets-and-configmaps +resourceVersion: "85609912" +selfLink: /api/v1/namespaces/secrets-and-configmaps/configmaps/mariadb-config +uid: c83ccfae-82d2-11e9-832f-005056a1102f +``` + +After saving the change, verify the data has been updated: + + +``` +# Note the '.' in max_allowed_packet.cnf needs to be escaped +$ kubectl get configmap mariadb-config -o "jsonpath={.data['max_allowed_packet\\.cnf']}" + +[mysqld] +max_allowed_packet = 32M +``` + +### Using Secrets and ConfigMaps + +Secrets and ConfigMaps can be mounted as environment variables or as files within a container. For the MariaDB container, you will need to mount the Secrets as environment variables and the ConfigMap as a file. First, though, you need to write a Deployment for MariaDB so that you have something to work with. Create a file named **mariadb-deployment.yaml** with the following: + + +``` +apiVersion: apps/v1 +kind: Deployment +metadata: +labels: +app: mariadb +name: mariadb-deployment +spec: +replicas: 1 +selector: +matchLabels: +app: mariadb +template: +metadata: +labels: +app: mariadb +spec: +containers: +\- name: mariadb +image: docker.io/mariadb:10.4 +ports: +\- containerPort: 3306 +protocol: TCP +volumeMounts: +\- mountPath: /var/lib/mysql +name: mariadb-volume-1 +volumes: +\- emptyDir: {} +name: mariadb-volume-1 +``` + +This is a bare-bones Kubernetes Deployment of the official MariaDB 10.4 image from Docker Hub. Now, add your Secrets and ConfigMap. + +#### Add the Secrets to the Deployment as environment variables + +You have two Secrets that need to be added to the Deployment: + + 1. **mariadb-root-password** (with one key/value pair) + 2. 
**mariadb-user-creds** (with two key/value pairs) + + + +For the **mariadb-root-password** Secret, specify the Secret and the key you want by adding an **env** list/array to the container spec in the Deployment and setting the environment variable value to the value of the key in your Secret. In this case, the list contains only a single entry, for the variable **MYSQL_ROOT_PASSWORD**. + + +``` +env: +\- name: MYSQL_ROOT_PASSWORD +valueFrom: +secretKeyRef: +name: mariadb-root-password +key: password +``` + +Note that the name of the object is the name of the environment variable that is added to the container. The **valueFrom** field defines **secretKeyRef** as the source from which the environment variable will be set; i.e., it will use the value from the **password** key in the **mariadb-root-password** Secret you set earlier. + +Add this section to the definition for the **mariadb** container in the **mariadb-deployment.yaml** file. It should look something like this: + + +``` +spec: +containers: +\- name: mariadb +image: docker.io/mariadb:10.4 +env: +\- name: MYSQL_ROOT_PASSWORD +valueFrom: +secretKeyRef: +name: mariadb-root-password +key: password +ports: +\- containerPort: 3306 +protocol: TCP +volumeMounts: +\- mountPath: /var/lib/mysql +name: mariadb-volume-1 +``` + +In this way, you have explicitly set the variable to the value of a specific key from your Secret. This method can also be used with ConfigMaps by using **configMapRef** instead of **secretKeyRef**. + +You can also set environment variables from _all_ key/value pairs in a Secret or ConfigMap to automatically use the key name as the environment variable name and the key's value as the environment variable's value. By using **envFrom** rather than **env** in the container spec, you can set the **MYSQL_USER** and **MYSQL_PASSWORD** from the **mariadb-user-creds** Secret you created earlier, all in one go: + + +``` +envFrom: +\- secretRef: +name: mariadb-user-creds +``` + +**envFrom** is a list of sources for Kubernetes to take environment variables. Use **secretRef** again, this time to specify **mariadb-user-creds** as the source of the environment variables. That's it! All the keys and values in the Secret will be added as environment variables in the container. + +The container spec should now look like this: + + +``` +spec: +containers: +\- name: mariadb +image: docker.io/mariadb:10.4 +env: +\- name: MYSQL_ROOT_PASSWORD +valueFrom: +secretKeyRef: +name: mariadb-root-password +key: password +envFrom: +\- secretRef: +name: mariadb-user-creds +ports: +\- containerPort: 3306 +protocol: TCP +volumeMounts: +\- mountPath: /var/lib/mysql +name: mariadb-volume-1 +``` + +_Note:_ You could have just added the **mysql-root-password** Secret to the **envFrom** list and let it be parsed as well, as long as the **password** key was named **MYSQL_ROOT_PASSWORD** instead. There is no way to manually specify the environment variable name with **envFrom** as with **env**. + +#### Add the max_allowed_packet.cnf file to the Deployment as a volumeMount + +As mentioned, both **env** and **envFrom** can be used to share ConfigMap key/value pairs with a container as well. However, in the case of the **mariadb-config** ConfigMap, your entire file is stored as the value to your key, and the file needs to exist in the container's filesystem for MariaDB to be able to use it. 
Luckily, both Secrets and ConfigMaps can be the source of Kubernetes "volumes" and mounted into the containers instead of using a filesystem or block device as the volume to be mounted. + +The **mariadb-deployment.yaml** already has a volume and volumeMount specified, an **emptyDir** (effectively a temporary or ephemeral) volume mounted to **/var/lib/mysql** to store the MariaDB data: + + +``` +<...> + +volumeMounts: +\- mountPath: /var/lib/mysql +name: mariadb-volume-1 + +<...> + +volumes: +\- emptyDir: {} +name: mariadb-volume-1 + +<...> +``` + +_Note:_ This is not a production configuration. When the Pod restarts, the data in the **emptyDir** volume is lost. This is primarily used for development or when the contents of the volume don't need to be persistent. + +You can add your ConfigMap as a source by adding it to the volume list and then adding a volumeMount for it to the container definition: + + +``` +<...> + +volumeMounts: +\- mountPath: /var/lib/mysql +name: mariadb-volume-1 +\- mountPath: /etc/mysql/conf.d +name: mariadb-config + +<...> + +volumes: +\- emptyDir: {} +name: mariadb-volume-1 +\- configMap: +name: mariadb-config +items: +\- key: max_allowed_packet.cnf +path: max_allowed_packet.cnf +name: mariadb-config-volume + +<...> +``` + +The **volumeMount** is pretty self-explanatory—create a volume mount for the **mariadb-config-volume** (specified in the **volumes** list below it) to the path **/etc/mysql/conf.d**. + +Then, in the **volumes** list, **configMap** tells Kubernetes to use the **mariadb-config** ConfigMap, taking the contents of the key **max_allowed_packet.cnf** and mounting it to the path **max_allowed_packed.cnf**. The name of the volume is **mariadb-config-volume** , which was referenced in the **volumeMounts** above. + +_Note:_ The **path** from the **configMap** is the name of a file that will contain the contents of the key's value. In this case, your key was a file name, too, but it doesn't have to be. Note also that **items** is a list, so multiple keys can be referenced and their values mounted as files. These files will all be created in the **mountPath** of the **volumeMount** specified above: **/etc/mysql/conf.d**. + +### Create a MariaDB instance from the Deployment + +At this point, you should have enough to create a MariaDB instance. You have two Secrets, one holding the **MYSQL_ROOT_PASSWORD** and another storing the **MYSQL_USER** , and the **MYSQL_PASSWORD** environment variables to be added to the container. You also have a ConfigMap holding the contents of a MySQL config file that overrides the **max_allowed_packed** value from its default setting. + +You also have a **mariadb-deployment.yaml** file that describes a Kubernetes deployment of a Pod with a MariaDB container and adds the Secrets as environment variables and the ConfigMap as a volume-mounted file in the container. 
It should look like this: + + +``` +apiVersion: apps/v1 +kind: Deployment +metadata: +labels: +app: mariadb +name: mariadb-deployment +spec: +replicas: 1 +selector: +matchLabels: +app: mariadb +template: +metadata: +labels: +app: mariadb +spec: +containers: +\- image: docker.io/mariadb:10.4 +name: mariadb +env: +\- name: MYSQL_ROOT_PASSWORD +valueFrom: +secretKeyRef: +name: mariadb-root-password +key: password +envFrom: +\- secretRef: +name: mariadb-user-creds +ports: +\- containerPort: 3306 +protocol: TCP +volumeMounts: +\- mountPath: /var/lib/mysql +name: mariadb-volume-1 +\- mountPath: /etc/mysql/conf.d +name: mariadb-config-volume +volumes: +\- emptyDir: {} +name: mariadb-volume-1 +\- configMap: +name: mariadb-config +items: +\- key: max_allowed_packet.cnf +path: max_allowed_packet.cnf +name: mariadb-config-volume +``` + +#### Create the MariaDB instance + +Create a new MariaDB instance from the YAML file with the **kubectl create** command: + + +``` +$ kubectl create -f mariadb-deployment.yaml +deployment.apps/mariadb-deployment created +``` + +Once the deployment has been created, use the **kubectl get** command to view the running MariaDB pod: + + +``` +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +mariadb-deployment-5465c6655c-7jfqm 1/1 Running 0 3m +``` + +Make a note of the Pod name (in this example, it's **mariadb-deployment-5465c6655c-7jfqm** ). Note that the Pod name will differ from this example. + +#### Verify the instance is using the Secrets and ConfigMap + +Use the **kubectl exec** command (with your Pod name) to validate that the Secrets and ConfigMaps are in use. For example, check that the environment variables are exposed in the container: + + +``` +$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm env |grep MYSQL +MYSQL_PASSWORD=kube-still-rocks +MYSQL_USER=kubeuser +MYSQL_ROOT_PASSWORD=KubernetesRocks! +``` + +Success! All three environment variables—the one using the **env** setup to specify the Secret, and two using **envFrom** to mount all the values from the Secret—are available in the container for MariaDB to use. + +Spot check that the **max_allowed_packet.cnf** file was created in **/etc/mysql/conf.d** and that it contains the expected content: + + +``` +$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm ls /etc/mysql/conf.d +max_allowed_packet.cnf + +$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm cat /etc/mysql/conf.d/max_allowed_packet.cnf +[mysqld] +max_allowed_packet = 32M +``` + +Finally, validate that MariaDB used the environment variable to set the root user password and read the **max_allowed_packet.cnf** file to set the **max_allowed_packet** configuration variable. 
Use the **kubectl exec** command again, this time to get a shell inside the running container and use it to run some **mysql** commands: + + +``` +$ kubectl exec -it mariadb-deployment-5465c6655c-7jfqm / +bin/sh + +# Check that the root password was set correctly +$ mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e 'show databases;' ++--------------------+ +| Database | ++--------------------+ +| information_schema | +| mysql | +| performance_schema | ++--------------------+ + +# Check that the max_allowed_packet.cnf was parsed +$ mysql -uroot -p${MYSQL_ROOT_PASSWORD} -e "SHOW VARIABLES LIKE 'max_allowed_packet';" ++--------------------+----------+ +| Variable_name | Value | ++--------------------+----------+ +| max_allowed_packet | 33554432 | ++--------------------+----------+ +``` + +### Advantages of Secrets and ConfigMaps + +This exercise explained how to create Kubernetes Secrets and ConfigMaps and how to use those Secrets and ConfigMaps by adding them as environment variables or files inside of a running container instance. This makes it easy to keep the configuration of individual instances of containers separate from the container image. By separating the configuration data, overhead is reduced to maintaining only a single image for a specific type of instance while retaining the flexibility to create instances with a wide variety of configurations. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/introduction-kubernetes-secrets-and-configmaps + +作者:[Chris Collins][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clcollins +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/kubernetes.png?itok=PqDGb6W7 (Kubernetes) +[2]: https://opensource.com/resources/what-is-kubernetes +[3]: https://hub.docker.com/_/mariadb +[4]: https://kubernetes.io/docs/tasks/tools/install-kubectl/ +[5]: https://kubernetes.io/docs/reference/access-authn-authz/rbac/ +[6]: https://www.vaultproject.io/ diff --git a/sources/tech/20190610 Blockchain 2.0 - EOS.IO Is Building Infrastructure For Developing DApps -Part 13.md b/sources/tech/20190610 Blockchain 2.0 - EOS.IO Is Building Infrastructure For Developing DApps -Part 13.md new file mode 100644 index 0000000000..a44ceeb105 --- /dev/null +++ b/sources/tech/20190610 Blockchain 2.0 - EOS.IO Is Building Infrastructure For Developing DApps -Part 13.md @@ -0,0 +1,88 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Blockchain 2.0 – EOS.IO Is Building Infrastructure For Developing DApps [Part 13]) +[#]: via: (https://www.ostechnix.com/blockchain-2-0-eos-io-is-building-infrastructure-for-developing-dapps/) +[#]: author: (editor https://www.ostechnix.com/author/editor/) + +Blockchain 2.0 – EOS.IO Is Building Infrastructure For Developing DApps [Part 13] +====== + +![Building infrastructure for Developing DApps][1] + +When a blockchain startup makes over **$4 billion** through an ICO without having a product or service to show for it, that is newsworthy. It becomes clear at this point that people invested these billions on this project because it seemed to promise a lot. This post will seek to demystify this mystical and seemingly powerful platform. 
+ +**EOS.IO** is a [**blockchain**][2] platform that aims to develop standardized infrastructure including an application protocol and operating system for developing DApps ([ **distributed applications**][3]). **Block.one** the lead developer and investor in the project envisions **EOS.IO as the world’s’ first distributed operating system** providing a developing environment for decentralized applications. The system is meant to mirror a real computer by simulating hardware such as its CPUs, GPUs, and even RAM, apart from the obvious storage solutions courtesy of the blockchain database system. + +### Who is block.one? + +Block.one is the company behind EOS.IO. They developed the platform from the ground up based on a white paper published on GitHub. Block.one also spearheaded the yearlong “continuous” ICO that eventually made them a whopping $4 billion. They have one of the best teams of backers and advisors any company in the blockchain space can hope for with partnerships from **Bitmain** , **Louis Bacon** and **Alan Howard** among others. Not to mention **Peter Thiel** being one of the lead investors in the company. The expectations from their purported platform the EOS.IO and their crypto VC fund **EOS.VC** are high indeed. + +### What is EOS.IO? + +It is difficult to arrive at a short description for EOS.IO. The platform aims to position itself as a world wide computer with virtually simulated resources, hence creating a virtual environment for distributed applications to be built and run. The team behind EOS.IO aims achieve the following by directly quoting [**Ethereum**][4] as competition. + + * Increase transaction throughput to millions per second, + + * Reduce transaction costs to insignificant sums or remove them altogether. + + + + +Though EOS.IO is not anywhere near solving any of these problems, the platform has a few capabilities that make it noteworthy as an alternative to Ethereum for DApp enthusiasts and developers. + + 1. **Scalability** : EOS.IO uses a different consensus algorithm for handling blocks called **DPoS**. We’ve described it briefly below. The DPoS system basically allows the system to handle far more requests at better speeds than Ethereum is capable of with its **PoW** algorithm. The claim is that because they’ll be able to handle such massive throughputs they’ll be able to afford transactions at insignificant or even zero charges if need be. + 2. **Governance capabilities** : The consensus algorithm allows EOS.IO to dynamically understand malicious user (or node) behaviour to penalize or deactivate the user. The elected delegate feature of the delegated proof of stake system also ensures faster amendments to the rules that govern the network and its users. + 3. **Parallel processing** : Touted to another major feature. This will basically allow programs on the EOS.IO blockchain to utilize multiple computers or processors or even computing resources such as GPUs to parallelly processes large chunks of data and blocks. This is not yet seen in a roll out ready form. (This however is not a unique feature of EOS.IO. [**Hyperledger Sawtooth**][5] and Burrow for instance support the same at the moment). + 4. **Self-sufficiency** : The system has a built-in grievance system along with well defined incentive and penal systems for providing feedback for acceptable and non-acceptable behaviour. This means that the platform has a governance system without actually having a central governing body. 
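Point 1 above leans on the stake-weighted voting that DPoS uses to elect validators, which is covered in more detail in the next section. As a rough, toy illustration of that idea only (this is not EOS.IO code, and the names and stakes are invented), an election could be sketched like this:


```
package main

import "fmt"

func main() {
	// candidate -> total stake voted for it
	votes := map[string]int{}

	// each ballot carries the weight of the holder's token stake
	ballots := []struct {
		stake     int
		candidate string
	}{
		{1000, "nodeA"},
		{250, "nodeB"},
		{4000, "nodeA"},
		{700, "nodeC"},
	}

	for _, b := range ballots {
		votes[b.candidate] += b.stake
	}

	// the candidate with the largest stake-weighted vote becomes the delegate
	winner, best := "", 0
	for c, stake := range votes {
		if stake > best {
			winner, best = c, stake
		}
	}
	fmt.Printf("elected delegate: %s (weighted votes: %d)\n", winner, best)
}
```
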
+ + + +All or at least most of the selling points of the system is based on the consensus algorithm it follows, DPoS. We explore more about the same below. + +### What is the delegated Proof of Stake (DPoS) consensus algorithm? + +As far as blockchains are concerned consensus algorithms are what gives them the strength and the selling point they need. However, as a general rule of thumb, as the “openness” and immutability of the ledger increases so does the computational power that is required to run it. For instance, if a blockchain intends to be secure against intrusion, be safe and immutable with respect to data, while being accessible to a lot of users, it will use up a lot of computing power in creating and maintaining itself. Think of it as a number lock. A 6-digit pin code is safer than a 3-digit pin code, but the latter will be easier and faster to open, now consider a million of these locks but with limited manpower to open them, and you get the scale at which blockchains operate and how much these consensus algorithms matter. + +In fact, this is a major area where competing platforms differ from each other. Hyperledger Sawtooth uses a proof of elapsed time algorithm (PoET), while ethereum uses a slightly modified proof of work (PoW) algorithm. Each of these have their own pros and cons which we will cover in a detailed post later on. However, for now, to be noted is that EOS.IO uses a delegated proof of stake mechanism for attesting and validating blocks under it. This has the following implications for users. + +Only one node can actively change the status of data written on the blockchain. In the case of a DPoS based system, this validator node is selected as part of a delegation by all the token holders of the blockchain. Every token holder gets to vote and have a say in who should be a validator. The weight the vote carries is usually proportional to the number of tokens the user carries. This is seen as a democratic way to ensure centralized accountability in terms of running the network. Furthermore, the validator is given additional monetary incentives to keep the network running smoothly and without friction. In case a validator or delegate member who is elected appears to be malicious, the system automatically votes out the said node member. + +DPoS system is efficient as it requires fewer computing resources to cast a vote and select a leader to validate. Further, it incentivizes good behaviour and penalizes bad ones leading to self-correction and maintenance of the blockchain. **The average transaction time for PoW vs DPoS is 10 minutes vs 10 seconds**. The downside to this paradigm being centralized operations, weighted voting systems, lack of redundancy, and possible malicious behaviour from the validator. + +To understand the difference between PoW and DPoS, imagine this: Let’s say your network has 100 participants out of which 10 are capable of handling validator duties and you need to choose one to do the same. In PoW, you give each of them a tough problem to solve to know who’s the fastest and smartest. You give the validator position to the winner and reward them for the same. In the DPoS system, the rest of the members vote for the person they think should hold the position. This is a simple matter of choosing based on arithmetic performance data based on the past. The node with the most votes win, and if the winner tries to do something fishy, the same electorate votes him out for the next transaction. + +### So does Ethereum lose out? 
+ +While EOS.IO has miles to go before it even steps into the same ballpark as Ethereum with respect to the market cap and user base, EOS.IO targets specific shortfalls with Ethereum and solves them. We conclude this post by summarising some findings based on a 2017 [**paper**][6] written and published by **Ian Grigg**. + + 1. The consensus algorithm used in the Ethereum (proof of work) platform uses far more computing resources and time to process transactions. This is true even for small block sizes. This limits its scaling potential and throughput. A meagre 15 transactions per second globally is no match for the over 2000 that payments network Visa manages. If the platform is to be adopted on a global scale based on a large scale roll out this is not acceptable. + + 2. The reliance on proprietary languages, tool kits and protocols (including **Solidity** for instance) limits developer capability. Interoperability between platforms is also severely hurt due to this fact. + + 3. This is rather subjective, however, the fact that Ethereum foundation refuses to acknowledge even the need for governance on the platform instead choosing to intervene on an ad-hoc manner when things turn sour on the network is not seen by many industry watchers as a sustainable model to be emulated in the future. + + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/blockchain-2-0-eos-io-is-building-infrastructure-for-developing-dapps/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/05/Developing-DApps-720x340.png +[2]: https://www.ostechnix.com/blockchain-2-0-an-introduction/ +[3]: https://www.ostechnix.com/blockchain-2-0-explaining-distributed-computing-and-distributed-applications/ +[4]: https://www.ostechnix.com/blockchain-2-0-what-is-ethereum/ +[5]: https://www.ostechnix.com/blockchain-2-0-introduction-to-hyperledger-sawtooth/ +[6]: http://iang.org/papers/EOS_An_Introduction.pdf diff --git a/sources/tech/20190610 Constant Time.md b/sources/tech/20190610 Constant Time.md new file mode 100644 index 0000000000..f19207fb58 --- /dev/null +++ b/sources/tech/20190610 Constant Time.md @@ -0,0 +1,281 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Constant Time) +[#]: via: (https://dave.cheney.net/2019/06/10/constant-time) +[#]: author: (Dave Cheney https://dave.cheney.net/author/davecheney) + +Constant Time +====== + +This essay is a derived from my [dotGo 2019 presentation][1] about my favourite feature in Go. + +* * * + +Many years ago Rob Pike remarked, + +> “Numbers are just numbers, you’ll never see `0x80ULL` in a `.go` source file”. + +—Rob Pike, [The Go Programming Language][2] + +Beyond this pithy observation lies the fascinating world of Go’s constants. Something that is perhaps taken for granted because, as Rob noted, is Go numbers–constants–just work. +In this post I intend to show you a few things that perhaps you didn’t know about Go’s `const` keyword. + +## What’s so great about constants? + +To kick things off, why are constants good? Three things spring to mind: + + * _Immutability_. 
Constants are one of the few ways we have in Go to express immutability to the compiler. + * _Clarity_. Constants give us a way to extract magic numbers from our code, giving them names and semantic meaning. + * _Performance_. The ability to express to the compiler that something will not change is key as it unlocks optimisations such as constant folding, constant propagation, branch and dead code elimination. + + + +But these are generic use cases for constants, they apply to any language. Let’s talk about some of the properties of Go’s constants. + +### A Challenge + +To introduce the power of Go’s constants let’s try a little challenge: declare a _constant_ whose value is the number of bits in the natural machine word. + +We can’t use `unsafe.SizeOf` as it is not a constant expression. We could use a build tag and laboriously record the natural word size of each Go platform, or we could do something like this: + +``` +const uintSize = 32 << (^uint(0) >> 32 & 1) +``` + +There are many versions of this expression in Go codebases. They all work roughly the same way. If we’re on a 64 bit platform then the exclusive or of the number zero–all zero bits–is a number with all bits set, sixty four of them to be exact. + +``` +1111111111111111111111111111111111111111111111111111111111111111 +``` + +If we shift that value thirty two bits to the right, we get another value with thirty two ones in it. + +``` +0000000000000000000000000000000011111111111111111111111111111111 +``` + +Anding that with a number with one bit in the final position give us, the same thing, `1`, + +``` +0000000000000000000000000000000011111111111111111111111111111111 & 1 = 1 +``` + +Finally we shift the number thirty two one place to the right, giving us 641. + +``` +32 << 1 = 64 +``` + +This expression is an example of a _constant expression_. All of these operations happen at compile time and the result of the expression is itself a constant. If you look in the in runtime package, in particular the garbage collector, you’ll see how constant expressions are used to set up complex invariants based on the word size of the machine the code is compiled on. + +So, this is a neat party trick, but most compilers will do this kind of constant folding at compile time for you. Let’s step it up a notch. + +## Constants are values + +In Go, constants are values and each value has a type. In Go, user defined types can declare their own methods. Thus, a constant value can have a method set. If you’re surprised by this, let me show you an example that you probably use every day. + +``` +const timeout = 500 * time.Millisecond +fmt.Println("The timeout is", timeout) // 500ms +``` + +In the example the untyped literal constant `500` is multiplied by `time.Millisecond`, itself a constant of type `time.Duration`. The rule for assignments in Go are, unless otherwise declared, the type on the left hand side of the assignment operator is inferred from the type on the right.`500` is an untyped constant so it is converted to a `time.Duration` then multiplied with the constant `time.Millisecond`. + +Thus `timeout` is a constant of type `time.Duration` which holds the value `500000000`. +Why then does `fmt.Println` print `500ms`, not `500000000`? + +The answer is `time.Duration` has a `String` method. Thus any `time.Duration` value, even a constant, knows how to pretty print itself. + +Now we know that constant values are typed, and because types can declare methods, we can derive that _constant values can fulfil interfaces_. 
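For instance, here is a minimal illustration (using a made-up `Celsius` type, not anything from the standard library) of a typed constant satisfying `fmt.Stringer`:

```
package main

import "fmt"

type Celsius int

func (c Celsius) String() string {
	return fmt.Sprintf("%d°C", int(c))
}

const boiling Celsius = 100 // a typed constant with a method set

func main() {
	var s fmt.Stringer = boiling // the constant satisfies the interface
	fmt.Println(s)               // prints 100°C
}
```
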
In fact we just saw an example of this. `fmt.Println` doesn’t assert that a value has a `String` method, it asserts the value implements the `Stringer` interface. + +Let’s talk a little about how we can use this property to make our Go code better, and to do that I’m going to take a brief digression into the Singleton pattern. + +## Singletons + +I’m generally not a fan of the singleton pattern, in Go or any language. Singletons complicate testing and create unnecessary coupling between packages. I feel the singleton pattern is often used _not_ to create a singular instance of a thing, but instead to create a place to coordinate registration. `net/http.DefaultServeMux` is a good example of this pattern. + +``` +package http + +// DefaultServeMux is the default ServeMux used by Serve. +var DefaultServeMux = &defaultServeMux + +var defaultServeMux ServeMux +``` + +There is nothing singular about `http.defaultServerMux`, nothing prevents you from creating another `ServeMux`. In fact the `http` package provides a helper that will create as many `ServeMux`‘s as you want. + +``` +// NewServeMux allocates and returns a new ServeMux. +func NewServeMux() *ServeMux { return new(ServeMux) } +``` + +`http.DefaultServeMux` is not a singleton. Never the less there is a case for things which are truely singletons because they can only represent a single thing. A good example of this are the file descriptors of a process; 0, 1, and 2 which represent stdin, stdout, and stderr respectively. + +It doesn’t matter what names you give them, `1` is always stdout, and there can only ever be one file descriptor `1`. Thus these two operations are identical: + +``` +fmt.Fprintf(os.Stdout, "Hello dotGo\n") +syscall.Write(1, []byte("Hello dotGo\n")) +``` + +So let’s look at how the `os` package defines `Stdin`, `Stdout`, and `Stderr`: + +``` +package os + +var ( + Stdin = NewFile(uintptr(syscall.Stdin), "/dev/stdin") + Stdout = NewFile(uintptr(syscall.Stdout), "/dev/stdout") + Stderr = NewFile(uintptr(syscall.Stderr), "/dev/stderr") +) +``` + +There are a few problems with this declaration. Firstly their type is `*os.File` not the respective `io.Reader` or `io.Writer` interfaces. People have long complained that this makes replacing them with alternatives problematic. However the notion of replacing these variables is precisely the point of this digression. Can you safely change the value of `os.Stdout` once your program is running without causing a data race? + +I argue that, in the general case, you cannot. In general, if something is unsafe to do, as programmers we shouldn’t let our users think that it is safe, [lest they begin to depend on that behaviour][3]. + +Could we change the definition of `os.Stdout` and friends so that they retain the observable behaviour of reading and writing, but remain immutable? It turns out, we can do this easily with constants. + +``` +type readfd int + +func (r readfd) Read(buf []byte) (int, error) { + return syscall.Read(int(r), buf) +} + +type writefd int + +func (w writefd) Write(buf []byte) (int, error) { + return syscall.Write(int(w), buf) +} + +const ( + Stdin = readfd(0) + Stdout = writefd(1) + Stderr = writefd(2) +) + +func main() { + fmt.Fprintf(Stdout, "Hello world") +} +``` + +In fact this change causes only one compilation failure in the standard library.2 + +## Sentinel error values + +Another case of things which look like constants but really aren’t, are sentinel error values. 
`io.EOF`, `sql.ErrNoRows`, `crypto/x509.ErrUnsupportedAlgorithm`, and so on are all examples of sentinel error values. They all fall into a category of _expected_ errors, and because they are expected, you’re expected to check for them. + +To compare the error you have with the one you were expecting, you need to import the package that defines that error. Because, by definition, sentinel errors are exported public variables, any code that imports, for example, the `io` package could change the value of `io.EOF`. + +``` +package nelson + +import "io" + +func init() { + io.EOF = nil // haha! +} +``` + +I’ll say that again. If I know the name of `io.EOF` I can import the package that declares it, which I must if I want to compare it to my error, and thus I could change `io.EOF`‘s value. Historically convention and a bit of dumb luck discourages people from writing code that does this, but technically there is nothing to prevent you from doing so. + +Replacing `io.EOF` is probably going to be detected almost immediately. But replacing a less frequently used sentinel error may cause some interesting side effects: + +``` +package innocent + +import "crypto/rsa" + +func init() { + rsa.ErrVerification = nil // 🤔 +} +``` + +If you were hoping the race detector will spot this subterfuge, I suggest you talk to the folks writing testing frameworks who replace `os.Stdout` without it triggering the race detector. + +## Fungibility + +I want to digress for a moment to talk about _the_ most important property of constants. Constants aren’t just immutable, its not enough that we cannot overwrite their declaration, +Constants are _fungible_. This is a tremendously important property that doesn’t get nearly enough attention. + +Fungible means identical. Money is a great example of fungibility. If you were to lend me 10 bucks, and I later pay you back, the fact that you gave me a 10 dollar note and I returned to you 10 one dollar bills, with respect to its operation as a financial instrument, is irrelevant. Things which are fungible are by definition equal and equality is a powerful property we can leverage for our programs. + +``` +var myEOF = errors.New("EOF") // io/io.go line 38 +fmt.Println(myEOF == io.EOF) // false +``` + +Putting aside the effect of malicious actors in your code base the key design challenge with sentinel errors is they behave like _singletons_ , not _constants_. Even if we follow the exact procedure used by the `io` package to create our own EOF value, `myEOF` and `io.EOF` are not equal. `myEOF` and `io.EOF` are not fungible, they cannot be interchanged. Programs can spot the difference. + +When you combine the lack of immutability, the lack of fungibility, the lack of equality, you have a set of weird behaviours stemming from the fact that sentinel error values in Go are not constant expressions. But what if they were? + +## Constant errors + +Ideally a sentinel error value should behave as a constant. It should be immutable and fungible. Let’s recap how the built in `error` interface works in Go. + +``` +type error interface { + Error() string +} +``` + +Any type with an `Error() string` method fulfils the `error` interface. This includes user defined types, it includes types derived from primitives like string, and it includes constant strings. 
With that background, consider this error implementation: + +``` +type Error string + +func (e Error) Error() string { + return string(e) +} +``` + +We can use this error type as a constant expression: + +``` +const err = Error("EOF") +``` + +Unlike `errors.errorString`, which is a struct, a compact struct literal initialiser is not a constant expression and cannot be used. + +``` +const err2 = errors.errorString{"EOF"} // doesn't compile +``` + +As constants of this `Error` type are not variables, they are immutable. + +``` +const err = Error("EOF") +err = Error("not EOF") // doesn't compile +``` + +Additionally, two constant strings are always equal if their contents are equal: + +``` +const str1 = "EOF" +const str2 = "EOF" +fmt.Println(str1 == str2) // true +``` + +which means two constants of a type derived from string with the same contents are also equal. + +``` +type Error string + +const err1 = Error("EOF") +const err2 = Error("EOF") + fmt.Println(err1 == err2) // true``` + +``` +Said another way, equal constant `Error` values are the same, in the way that the literal constant `1` is the same as every other literal constant `1`. + +Now we have all the pieces we need to make sentinel errors, like `io.EOF`, and `rsa.ErrVerfication`, immutable, fungible, constant expressions. +``` + + % git diff + diff --git a/src/io/io.go b/src/io/io.go + inde \ No newline at end of file diff --git a/sources/tech/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md b/sources/tech/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md new file mode 100644 index 0000000000..d9ed871a10 --- /dev/null +++ b/sources/tech/20190610 Tmux Command Examples To Manage Multiple Terminal Sessions.md @@ -0,0 +1,296 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Tmux Command Examples To Manage Multiple Terminal Sessions) +[#]: via: (https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +Tmux Command Examples To Manage Multiple Terminal Sessions +====== + +![tmux command examples][1] + +We’ve already learned to use [**GNU Screen**][2] to manage multiple Terminal sessions. Today, we will see yet another well-known command-line utility named **“Tmux”** to manage Terminal sessions. Similar to GNU Screen, Tmux is also a Terminal multiplexer that allows us to create number of terminal sessions and run more than one programs or processes at the same time inside a single Terminal window. Tmux is free, open source and cross-platform program that supports Linux, OpenBSD, FreeBSD, NetBSD and Mac OS X. In this guide, we will discuss most-commonly used Tmux commands in Linux. + +### Installing Tmux in Linux + +Tmux is available in the official repositories of most Linux distributions. + +On Arch Linux and its variants, run the following command to install it. + +``` +$ sudo pacman -S tmux +``` + +On Debian, Ubuntu, Linux Mint: + +``` +$ sudo apt-get install tmux +``` + +On Fedora: + +``` +$ sudo dnf install tmux +``` + +On RHEL and CentOS: + +``` +$ sudo yum install tmux +``` + +On SUSE/openSUSE: + +``` +$ sudo zypper install tmux +``` + +Well, we have just installed Tmux. Let us go ahead and see some examples to learn how to use Tmux. + +### Tmux Command Examples To Manage Multiple Terminal Sessions + +The default prefix shortcut to all commands in Tmux is **Ctrl+b**. Just remember this keyboard shortcut when using Tmux. 
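If you would rather use a different prefix, it can be remapped in **~/.tmux.conf**. Here is a minimal sketch (assuming the stock tmux configuration syntax) that moves the prefix to **Ctrl+a** :

```
# ~/.tmux.conf - change the prefix from Ctrl+b to Ctrl+a
unbind C-b
set -g prefix C-a
bind C-a send-prefix
```

Reload the file with `tmux source-file ~/.tmux.conf` for the change to take effect in a running server.
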
+ +* * * + +**Note:** The default prefix to all **Screen** commands is **Ctrl+a**. + +* * * + +##### Creating Tmux sessions + +To create a new Tmux session and attach to it, run the following command from the Terminal: + +``` +tmux +``` + +Or, + +``` +tmux new +``` + +Once you are inside the Tmux session, you will see a **green bar at the bottom** as shown in the screenshot below. + +![][3] + +New Tmux session + +It is very handy to verify whether you’re inside a Tmux session or not. + +##### Detaching from Tmux sessions + +To detach from a current Tmux session, just press **Ctrl+b** and **d**. You don’t need to press this both Keyboard shortcut at a time. First press “Ctrl+b” and then press “d”. + +Once you’re detached from a session, you will see an output something like below. + +``` +[detached (from session 0)] +``` + +##### Creating named sessions + +If you use multiple sessions, you might get confused which programs are running on which sessions. In such cases, you can just create named sessions. For example if you wanted to perform some activities related to web server in a session, just create the Tmux session with a custom name, for example **“webserver”** (or any name of your choice). + +``` +tmux new -s webserver +``` + +Here is the new named Tmux session. + +![][4] + +Tmux session with a custom name + +As you can see in the above screenshot, the name of the Tmux session is **webserver**. This way you can easily identify which program is running on which session. + +To detach, simply press **Ctrl+b** and **d**. + +##### List Tmux sessions + +To view the list of open Tmux sessions, run: + +``` +tmux ls +``` + +Sample output: + +![][5] + +List Tmux sessions + +As you can see, I have two open Tmux sessions. + +##### Creating detached sessions + +Sometimes, you might want to simply create a session and don’t want to attach to it automatically. + +To create a new detached session named **“ostechnix”** , run: + +``` +tmux new -s ostechnix -d +``` + +The above command will create a new Tmux session called “ostechnix”, but won’t attach to it. + +You can verify if the session is created using “tmux ls” command: + +![][6] + +Create detached Tmux sessions + +##### Attaching to Tmux sessions + +You can attach to the last created session by running this command: + +``` +tmux attach +``` + +Or, + +``` +tmux a +``` + +If you want to attach to any specific named session, for example “ostechnix”, run: + +``` +tmux attach -t ostechnix +``` + +Or, shortly: + +``` +tmux a -t ostechnix +``` + +##### Kill Tmux sessions + +When you’re done and no longer required a Tmux session, you can kill it at any time with command: + +``` +tmux kill-session -t ostechnix +``` + +To kill when attached, press **Ctrl+b** and **x**. Hit “y” to kill the session. + +You can verify if the session is closed with “tmux ls” command. + +To Kill Tmux server along with all Tmux sessions, run: + +``` +tmux kill-server +``` + +Be careful! This will terminate all Tmux sessions even if there are any running jobs inside the sessions without any warning. + +When there were no running Tmux sessions, you will see the following output: + +``` +$ tmux ls +no server running on /tmp/tmux-1000/default +``` + +##### Split Tmux Session Windows + +Tmux has an option to split a single Tmux session window into multiple smaller windows called **Tmux panes**. This way we can run different programs on each pane and interact with all of them simultaneously. Each pane can be resized, moved and closed without affecting the other panes. 
We can split a Tmux window either horizontally or vertically or both at once. + +**Split panes horizontally** + +To split a pane horizontally, press **Ctrl+b** and **”** (single quotation mark). + +![][7] + +Split Tmux pane horizontally + +Use the same key combination to split the panes further. + +**Split panes vertically** + +To split a pane vertically, press **Ctrl+b** and **%**. + +![][8] + +Split Tmux panes vertically + +**Split panes horizontally and vertically** + +We can also split a pane horizontally and vertically at the same time. Take a look at the following screenshot. + +![][9] + +Split Tmux panes + +First, I did a horizontal split by pressing **Ctrl+b “** and then split the lower pane vertically by pressing **Ctrl+b %**. + +As you see in the above screenshot, I am running three different programs on each pane. + +**Switch between panes** + +To switch between panes, press **Ctrl+b** and **Arrow keys (Left, Right, Up, Down)**. + +**Send commands to all panes** + +In the previous example, we run three different commands on each pane. However, it is also possible to run send the same commands to all panes at once. + +To do so, press **Ctrl+b** and type the following command and hit ENTER: + +``` +:setw synchronize-panes +``` + +Now type any command on any pane. You will see that the same command is reflected on all panes. + +**Swap panes** + +To swap panes, press **Ctrl+b** and **o**. + +**Show pane numbers** + +Press **Ctrl+b** and **q** to show pane numbers. + +**Kill panes** + +To kill a pane, simply type **exit** and ENTER key. Alternatively, press **Ctrl+b** and **x**. You will see a confirmation message. Just press **“y”** to close the pane. + +![][10] + +Kill Tmux panes + +At this stage, you will get a basic idea of Tmux and how to use it to manage multiple Terminal sessions. For more details, refer man pages. + +``` +$ man tmux +``` + +Both GNU Screen and Tmux utilities can be very helpful when managing servers remotely via SSH. Learn Screen and Tmux commands thoroughly to manage your remote servers like a pro. 
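
To make that concrete, here is a rough sketch of that remote workflow; the user name, host name and session name below are made-up placeholders, not values from this article:

```
# Hypothetical remote session: start long-running work on a server,
# detach, log out, and reattach later from anywhere.
ssh alice@server.example.com
tmux new -s maintenance          # run your long jobs inside this named session

# Press Ctrl+b d to detach, then close the SSH connection.
# The session and its jobs keep running on the server.

# Later, reattach directly from your local machine:
ssh -t alice@server.example.com tmux attach -t maintenance
```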
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/tmux-command-examples-to-manage-multiple-terminal-sessions/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-720x340.png +[2]: https://www.ostechnix.com/screen-command-examples-to-manage-multiple-terminal-sessions/ +[3]: https://www.ostechnix.com/wp-content/uploads/2019/06/Tmux-session.png +[4]: https://www.ostechnix.com/wp-content/uploads/2019/06/Named-Tmux-session.png +[5]: https://www.ostechnix.com/wp-content/uploads/2019/06/List-Tmux-sessions.png +[6]: https://www.ostechnix.com/wp-content/uploads/2019/06/Create-detached-sessions.png +[7]: https://www.ostechnix.com/wp-content/uploads/2019/06/Horizontal-split.png +[8]: https://www.ostechnix.com/wp-content/uploads/2019/06/Vertical-split.png +[9]: https://www.ostechnix.com/wp-content/uploads/2019/06/Split-Panes.png +[10]: https://www.ostechnix.com/wp-content/uploads/2019/06/Kill-panes.png diff --git a/sources/tech/20190611 Cisco software to make networks smarter, safer, more manageable.md b/sources/tech/20190611 Cisco software to make networks smarter, safer, more manageable.md new file mode 100644 index 0000000000..7bdf5361ab --- /dev/null +++ b/sources/tech/20190611 Cisco software to make networks smarter, safer, more manageable.md @@ -0,0 +1,104 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Cisco software to make networks smarter, safer, more manageable) +[#]: via: (https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html) +[#]: author: (Michael Cooney https://www.networkworld.com/author/Michael-Cooney/) + +Cisco software to make networks smarter, safer, more manageable +====== +Cisco software announced at Cisco Live embraces AI to help customers set consistent network and security policies across their domains and improve intent-based networking. +![bigstock][1] + +SAN DIEGO—Cisco injected a number of new technologies into its key networking control-point software that makes it easier to stretch networking from the data center to the cloud while making the whole environment smarter and easier to manage. + +At the company’s annual Cisco Live customer event here it rolled out software that lets customers more easily meld typically siloed domains across the enterprise and cloud to the wide area network. The software enables what Cisco calls multidomain integration that lets customers set policies to apply uniform access controls to users, devices and applications regardless of where they connect to the network, the company said. + +**More about SD-WAN** + + * [How to buy SD-WAN technology: Key questions to consider when selecting a supplier][2] + * [How to pick an off-site data-backup method][3] + * [SD-Branch: What it is and why you’ll need it][4] + * [What are the options for security SD-WAN?][5] + + + +The company also unveiled Cisco AI Network Analytics, a software package that uses [AI and machine learning techniques][6] to learn network traffic and security patterns that can help customers spot and fix problems proactively across the enterprise. 
+ +All of the new software runs on Cisco’s DNA Center platform which is rapidly becoming an ever-more crucial component to the company’s intent-based networking plans. DNA Center has always been important since its introduction two years ago as it features automation capabilities, assurance setting, fabric provisioning and policy-based segmentation for enterprise networks. + +Beyond device management and configuration, Cisco DNA Center gives IT teams the ability to control access through policies using Software-Defined Access (SD-Access), automatically provision through Cisco DNA Automation, virtualize devices through Cisco Network Functions Virtualization (NFV), and lower security risks through segmentation and Encrypted Traffic Analysis. But experts say these software enhancements take it to a new level. + +“You can call it the rise of DNA Center and it’s important because it lets customers manage and control their entire network from one place – similar to what VMware does with its vCenter,” said Zeus Kerravala, founder and principal analyst with ZK Research. vCenter is VMware’s centralized platform for controlling its vSphere virtualized environments. + +“Cisco will likely roll more and more functionality into DNA Center in the future making it stronger,” Kerravala said. + +**[[Become a Microsoft Office 365 administrator in record time with this quick start course from PluralSight.][7] ]** + +Together the new software and DNA Center will help customers set consistent policies across their domains and collaborate with others for the benefit of the entire network. Customers can define a policy once, apply it everywhere, and monitor it systematically to ensure it is realizing its business intent, said Prashanth Shenoy, Cisco vice president of marketing for Enterprise Network and Mobility. It will help customers segment their networks to reduce congestion, improve security and compliance and contain network problems, he said. + +“In the campus, Cisco’s SD-Access solution uses this technology to group users and devices within the segments it creates according to their access privileges. Similarly, Cisco ACI creates groups of similar applications in the data center,” Shenoy said. “When integrated, SD-Access and ACI exchange their groupings and provide each other an awareness into their access policies. With this knowledge, each of the domains can map user groups with applications, jointly enforce policies, and block unauthorized access to applications.” + +In the Cisco world it basically means there now can be a unification of its central domain network controllers and they can work together and let customers drive policies across domains. + +Cisco also said that security capabilities can be spread across domains. + +Cisco Advanced Malware Protection (AMP) prevents breaches, monitors malicious behavior and detects and removes malware. Security constructs built into Cisco SD-WAN, and the recently announced SD-WAN onRamp for CoLocation, provide a full security stack that applies protection consistently from user to branch to clouds. Cisco Stealthwatch and Stealthwatch Cloud detect threats across the private network, public clouds, and in encrypted traffic. + +Analysts said Cisco’s latest efforts are an attempt to simplify what are fast becoming complex networks with tons of new devices and applications to support. + +Cisco’s initial efforts were product specific, but its latest announcements cross products and domains, said Lee Doyle principal analyst with Doyle Research. 
“Cisco is making a strong push to make its networks easier to use, manage and program.”

That same strategy is behind the new AI Analytics program.

“Trying to manually analyze and troubleshoot the traffic flowing through thousands of APs, switches and routers is a near impossible task, even for the most sophisticated NetOps team. In a wireless environment, onboarding and interference errors can crop up randomly and intermittently, making it even more difficult to determine probable causes,” said Anand Oswal, senior vice president, engineering for Cisco’s Enterprise Networking Business.

Cisco has been integrating AI/ML into many operational and security components, with Cisco DNA Center the focal point for insights and actions, Oswal wrote in a [blog][8] about the AI announcement. AI Network Analytics collects massive amounts of network data from Cisco DNA Centers at participating customer sites, encrypts and anonymizes the data to ensure privacy, and collates all of it into the Cisco Worldwide Data Platform. In this cloud, the aggregated data is analyzed with deep machine learning to reveal patterns and anomalies such as:

  * Highly personalized network baselines with multiple levels of granularity that define “normal” for a given network, site, building and SSID.

  * Sudden changes in onboarding times for Wi-Fi devices, by individual APs, floor, building, campus and branch.

  * Simultaneous connectivity failures with numerous clients at a specific location

  * Changes in SaaS and Cloud application performance via SD-WAN direct internet connections or [Cloud OnRamps][9].

  * Pattern-matching capabilities of ML will be used to spot anomalies in network behavior that might otherwise be missed.



“The intelligence of its large base of customers can help Cisco to derive important insights about how users can better manage their networks and solve problems and the power of MI/AI technology will continue to improve over time,” Doyle said.

Join the Network World communities on [Facebook][10] and [LinkedIn][11] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3401523/cisco-software-to-make-networks-smarter-safer-more-manageable.html + +作者:[Michael Cooney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Michael-Cooney/ +[b]: https://github.com/lujun9972 +[1]: https://images.idgesg.net/images/article/2018/11/intelligentnetwork-100780636-large.jpg +[2]: https://www.networkworld.com/article/3323407/sd-wan/how-to-buy-sd-wan-technology-key-questions-to-consider-when-selecting-a-supplier.html +[3]: https://www.networkworld.com/article/3328488/backup-systems-and-services/how-to-pick-an-off-site-data-backup-method.html +[4]: https://www.networkworld.com/article/3250664/lan-wan/sd-branch-what-it-is-and-why-youll-need-it.html +[5]: https://www.networkworld.com/article/3285728/sd-wan/what-are-the-options-for-securing-sd-wan.html?nsdr=true +[6]: https://www.networkworld.com/article/3400382/cisco-will-use-aiml-to-boost-intent-based-networking.html +[7]: https://pluralsight.pxf.io/c/321564/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fadministering-office-365-quick-start +[8]: https://blogs.cisco.com/analytics-automation/cisco-ai-network-analytics-making-networks-smarter-simpler-and-more-secure +[9]: https://www.networkworld.com/article/3393232/cisco-boosts-sd-wan-with-multicloud-to-branch-access-system.html +[10]: https://www.facebook.com/NetworkWorld/ +[11]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190611 How To Find Linux System Details Using inxi.md b/sources/tech/20190611 How To Find Linux System Details Using inxi.md new file mode 100644 index 0000000000..4fc24fd137 --- /dev/null +++ b/sources/tech/20190611 How To Find Linux System Details Using inxi.md @@ -0,0 +1,608 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Find Linux System Details Using inxi) +[#]: via: (https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +How To Find Linux System Details Using inxi +====== + +![find Linux system details using inxi][1] + +**Inxi** is a free, open source, and full featured command line system information tool. It shows system hardware, CPU, drivers, Xorg, Desktop, Kernel, GCC version(s), Processes, RAM usage, and a wide variety of other useful information. Be it a hard disk or CPU, mother board or the complete detail of the entire system, inxi will display it more accurately in seconds. Since it is CLI tool, you can use it in Desktop or server edition. Inxi is available in the default repositories of most Linux distributions and some BSD systems. + +### Install inxi + +**On Arch Linux and derivatives:** + +To install inxi in Arch Linux or its derivatives like Antergos, and Manajaro Linux, run: + +``` +$ sudo pacman -S inxi +``` + +Just in case if Inxi is not available in the default repositories, try to install it from AUR (It varies year to year) using any AUR helper programs. + +Using [**Yay**][2]: + +``` +$ yay -S inxi +``` + +**On Debian / Ubuntu and derivatives:** + +``` +$ sudo apt-get install inxi +``` + +**On Fedora / RHEL / CentOS / Scientific Linux:** + +inxi is available in the Fedora default repositories. 
So, just run the following command to install it straight away. + +``` +$ sudo dnf install inxi +``` + +In RHEL and its clones like CentOS and Scientific Linux, you need to add the EPEL repository and then install inxi. + +To install EPEL repository, just run: + +``` +$ sudo yum install epel-release +``` + +After installing EPEL repository, install inxi using command: + +``` +$ sudo yum install inxi +``` + +**On SUSE/openSUSE:** + +``` +$ sudo zypper install inxi +``` + +### Find Linux System Details Using inxi + +inxi will require some additional programs to operate properly. They will be installed along with inxi. However, in case if they are not installed automatically, you need to find and install them. + +To list all required programs, run: + +``` +$ inxi --recommends +``` + +If you see any missing programs, then install them before start using inxi. + +Now, let us see how to use it to reveal the Linux system details. inxi usage is pretty simple and straight forward. + +Open up your Terminal and run the following command to print a short summary of CPU, memory, hard drive and kernel information: + +``` +$ inxi +``` + +**Sample output:** + +``` +CPU: Dual Core Intel Core i3-2350M (-MT MCP-) speed/min/max: 798/800/2300 MHz +Kernel: 5.1.2-arch1-1-ARCH x86_64 Up: 1h 31m Mem: 2800.5/7884.2 MiB (35.5%) +Storage: 465.76 GiB (80.8% used) Procs: 163 Shell: bash 5.0.7 inxi: 3.0.34 +``` + +![][3] + +Find Linux System Details Using inxi + +As you can see, Inxi displays the following details of my Arch Linux desktop: + + 1. CPU type, + 2. CPU speed, + 3. Kernel details, + 4. Uptime, + 5. Memory details (Total and used memory), + 6. Hard disk size along with current usage, + 7. Procs, + 8. Default shell details, + 9. Inxi version. + + + +To display full summary, use **“-F”** switch as shown below. + +``` +$ inxi -F +``` + +**Sample output:** + +``` +System: Host: sk Kernel: 5.1.2-arch1-1-ARCH x86_64 bits: 64 Desktop: Deepin 15.10.1 Distro: Arch Linux +Machine: Type: Portable System: Dell product: Inspiron N5050 v: N/A serial: + Mobo: Dell model: 01HXXJ v: A05 serial: BIOS: Dell v: A05 date: 08/03/2012 +Battery: ID-1: BAT0 charge: 39.0 Wh condition: 39.0/48.8 Wh (80%) +CPU: Topology: Dual Core model: Intel Core i3-2350M bits: 64 type: MT MCP L2 cache: 3072 KiB + Speed: 798 MHz min/max: 800/2300 MHz Core speeds (MHz): 1: 798 2: 798 3: 798 4: 798 +Graphics: Device-1: Intel 2nd Generation Core Processor Family Integrated Graphics driver: i915 v: kernel + Display: x11 server: X.Org 1.20.4 driver: modesetting unloaded: vesa resolution: 1366x768~60Hz + Message: Unable to show advanced data. Required tool glxinfo missing. 
+Audio: Device-1: Intel 6 Series/C200 Series Family High Definition Audio driver: snd_hda_intel + Sound Server: ALSA v: k5.1.2-arch1-1-ARCH +Network: Device-1: Realtek RTL810xE PCI Express Fast Ethernet driver: r8169 + IF: enp5s0 state: down mac: 45:c8:gh:89:b6:45 + Device-2: Qualcomm Atheros AR9285 Wireless Network Adapter driver: ath9k + IF: wlp9s0 state: up mac: c3:11:96:22:87:3g + Device-3: Qualcomm Atheros AR3011 Bluetooth type: USB driver: btusb +Drives: Local Storage: total: 465.76 GiB used: 376.31 GiB (80.8%) + ID-1: /dev/sda vendor: Seagate model: ST9500325AS size: 465.76 GiB +Partition: ID-1: / size: 456.26 GiB used: 376.25 GiB (82.5%) fs: ext4 dev: /dev/sda2 + ID-2: /boot size: 92.8 MiB used: 62.9 MiB (67.7%) fs: ext4 dev: /dev/sda1 + ID-3: swap-1 size: 2.00 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sda3 +Sensors: System Temperatures: cpu: 58.0 C mobo: N/A + Fan Speeds (RPM): cpu: 3445 +Info: Processes: 169 Uptime: 1h 38m Memory: 7.70 GiB used: 2.94 GiB (38.2%) Shell: bash inxi: 3.0.34 +``` + +Inxi used on IRC automatically filters out your network device MAC address, WAN and LAN IP, your /home username directory in partitions, and a few other items in order to maintain basic privacy and security. You can also trigger this filtering with the **-z** option like below. + +``` +$ inxi -Fz +``` + +To override the IRC filter, use the **-Z** option. + +``` +$ inxi -FZ +``` + +This can be useful in debugging network connection issues online in a private chat, for example. Please be very careful while using -Z option. It will display your MAC addresses. You shouldn’t share the results got with -Z option in public forums. + +##### Displaying device-specific details + +When running inxi without any options, you will get basic details of your system, such as CPU, Memory, Kernel, Uptime, harddisk etc. + +You can, of course, narrow down the result to show specific device details using various options. Inxi has numerous options (both uppercase and lowercase). + +First, we will see example commands for all uppercase options in alphabetical order. Some commands may require root/sudo privileges to get actual data. + +####### **Uppercase options** + +**1\. Display Audio/Sound card details** + +To show your audio and sound card(s) information with sound card driver, use **-A** option. + +``` +$ inxi -A +Audio: Device-1: Intel 6 Series/C200 Series Family High Definition Audio driver: snd_hda_intel + Sound Server: ALSA v: k5.1.2-arch1-1-ARCH +``` + +**2\. Display Battery details** + +To show battery details of your system with current charge and condition, use **-B** option. + +``` +$ inxi -B +Battery: ID-1: BAT0 charge: 39.0 Wh condition: 39.0/48.8 Wh (80%) +``` + +**3\. Display CPU details** + +To show complete CPU details including no of cores, CPU model, CPU cache, CPU clock speed, CPU min/max speed etc., use **-C** option. + +``` +$ inxi -C +CPU: Topology: Dual Core model: Intel Core i3-2350M bits: 64 type: MT MCP L2 cache: 3072 KiB + Speed: 798 MHz min/max: 800/2300 MHz Core speeds (MHz): 1: 798 2: 798 3: 798 4: 798 +``` + +**4\. Display hard disk details** + +To show information about your hard drive, such as Disk type, vendor, device ID, model, disk size, total disk space, used percentage etc., use **-D** option. + +``` +$ inxi -D +Drives: Local Storage: total: 465.76 GiB used: 376.31 GiB (80.8%) + ID-1: /dev/sda vendor: Seagate model: ST9500325AS size: 465.76 GiB +``` + +**5\. 
Disply Graphics details** + +To show details about the graphics card, including details of grahics card, driver, vendor, display server, resolution etc., use **-G** option. + +``` +$ inxi -G +Graphics: Device-1: Intel 2nd Generation Core Processor Family Integrated Graphics driver: i915 v: kernel + Display: x11 server: X.Org 1.20.4 driver: modesetting unloaded: vesa resolution: 1366x768~60Hz + Message: Unable to show advanced data. Required tool glxinfo missing. +``` + +**6\. Display details about processes, uptime, memory, inxi version** + +To show information about no of processes, total uptime, total memory with used memory, Shell details and inxi version etc., use **-I** option. + +``` +$ inxi -I +Info: Processes: 170 Uptime: 5h 47m Memory: 7.70 GiB used: 3.27 GiB (42.4%) Shell: bash inxi: 3.0.34 +``` + +**7\. Display Motherboard details** + +To show information about your machine details, manufacturer, motherboard, BIOS, use **-M** option. + +``` +$ inxi -M +Machine: Type: Portable System: Dell product: Inspiron N5050 v: N/A serial: + Mobo: Dell model: 034ygt v: A018 serial: BIOS: Dell v: A001 date: 09/04/2015 +``` + +**8\. Display network card details** + +To show information about your network card, including vendor, card driver and no of network interfaces etc., use **-N** option. + +``` +$ inxi -N +Network: Device-1: Realtek RTL810xE PCI Express Fast Ethernet driver: r8169 + Device-2: Qualcomm Atheros AR9285 Wireless Network Adapter driver: ath9k + Device-3: Qualcomm Atheros AR3011 Bluetooth type: USB driver: btusb +``` + +If you want to show the advanced details of the network cards, such as MAC address, speed and state of nic, use **-n** option. + +``` +$ inxi -n +``` + +Please careful sharing this details on public forum. + +**9\. Display Partition details** + +To display basic partition information, use **-P** option. + +``` +$ inxi -P +Partition: ID-1: / size: 456.26 GiB used: 376.25 GiB (82.5%) fs: ext4 dev: /dev/sda2 + ID-2: /boot size: 92.8 MiB used: 62.9 MiB (67.7%) fs: ext4 dev: /dev/sda1 + ID-3: swap-1 size: 2.00 GiB used: 0 KiB (0.0%) fs: swap dev: /dev/sda3 +``` + +To show full partition information including mount points, use **-p** option. + +``` +$ inxi -p +``` + +**10\. Display RAID details** + +To show RAID info, use **-R** option. + +``` +$ inxi -R +``` + +**11\. Display system details** + +To show Linux system information such as hostname, kernel, DE, OS version etc., use **-S** option. + +``` +$ inxi -S +System: Host: sk Kernel: 5.1.2-arch1-1-ARCH x86_64 bits: 64 Desktop: Deepin 15.10.1 Distro: Arch Linux +``` + +**12\. Displaying weather details** + +Inixi is not just for finding hardware details. It is useful for getting other stuffs too. + +For example, you can display the weather details of a given location. To do so, run inxi with **-W** option like below. + +``` +$ inxi -W 95623,us +Weather: Temperature: 21.1 C (70 F) Conditions: Scattered clouds Current Time: Tue 11 Jun 2019 04:34:35 AM PDT + Source: WeatherBit.io +``` + +Please note that you should use only ASCII letters in city/state/country names to get valid results. + +####### Lowercase options + +**1\. Display basic system details** + +To show only the basic summary of your system details, use **-b** option. + +``` +$ inxi -b +``` + +Alternatively, you can use this command: + +Both servers the same purpose. + +``` +$ inxi -v 2 +``` + +**2\. Set color scheme** + +We can set different color schemes for inxi output using **-c** option. Yu can set color scheme number from **0** to **42**. 
If no scheme number is supplied, **0** is assumed. + +Here is inxi output with and without **-c** option. + +![][4] + +inxi output without color scheme + +As you can see, when we run inxi with -c option, the color scheme is disabled. The -c option is useful to turnoff colored output when redirecting clean output without escape codes to a text file. + +Similarly, we can use other color scheme values. + +``` +$ inxi -c10 + +$ inxi -c42 +``` + +**3\. Display optical drive details** + +We can show the optical drive data details along with local hard drive details using **-d** option. + +``` +$ inxi -d +Drives: Local Storage: total: 465.76 GiB used: 376.31 GiB (80.8%) + ID-1: /dev/sda vendor: Seagate model: ST9500325AS size: 465.76 GiB + Optical-1: /dev/sr0 vendor: PLDS model: DVD+-RW DS-8A8SH dev-links: cdrom + Features: speed: 24 multisession: yes audio: yes dvd: yes rw: cd-r,cd-rw,dvd-r,dvd-ram +``` + +**4\. Display all CPU flags** + +To show all CPU flags used, run: + +``` +$ inxi -f +``` + +**5\. Display IP details** + +To show WAN and local ip address along network card details such as device vendor, driver, mac, state etc., use **-i** option. + +``` +$ inxi -i +``` + +**6\. Display partition labels** + +If you have set labels for the partitions, you can view them using **-l** option. + +``` +$ inxi -l +``` + +You can also view the labels of all partitions along with mountpoints using command: + +``` +$ inxi -pl +``` + +**7\. Display Memory details** + +We can display memory details such as total size of installed RAM, how much memory is used, no of available DIMM slots, total size of supported RAM, how much RAM is currently installed in each slots etc., using **-m** option. + +``` +$ sudo inxi -m +[sudo] password for sk: +Memory: RAM: total: 7.70 GiB used: 2.26 GiB (29.3%) + Array-1: capacity: 16 GiB slots: 2 EC: None + Device-1: DIMM_A size: 4 GiB speed: 1067 MT/s + Device-2: DIMM_B size: 4 GiB speed: 1067 MT/s +``` + +**8\. Display unmounted partition details** + +To show unmounted partition details, use **-o** option. + +``` +$ inxi -o +``` + +If there were no unmounted partitions in your system, you will see an output something like below. + +``` +Unmounted: Message: No unmounted partitions found. +``` + +**9\. Display list of repositories** + +To display the the list of repositories in your system, use **-r** option. + +``` +$ inxi -r +``` + +**Sample output:** + +``` +Repos: Active apt sources in file: /etc/apt/sources.list +deb http://in.archive.ubuntu.com/ubuntu/ xenial main restricted +deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates main restricted +deb http://in.archive.ubuntu.com/ubuntu/ xenial universe +deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates universe +deb http://in.archive.ubuntu.com/ubuntu/ xenial multiverse +deb http://in.archive.ubuntu.com/ubuntu/ xenial-updates multiverse +deb http://in.archive.ubuntu.com/ubuntu/ xenial-backports main restricted universe multiverse +deb http://security.ubuntu.com/ubuntu xenial-security main restricted +deb http://security.ubuntu.com/ubuntu xenial-security universe +deb http://security.ubuntu.com/ubuntu xenial-security multiverse +``` + +* * * + +**Suggested read:** + + * [**How To Find The List Of Installed Repositories From Commandline In Linux**][5] + + + +* * * + +**10\. Show system temperature, fan speed details** + +Inxi is capable to find motherboard/CPU/GPU temperatures and fan speed. 
+ +``` +$ inxi -s +Sensors: System Temperatures: cpu: 60.0 C mobo: N/A + Fan Speeds (RPM): cpu: 3456 +``` + +Please note that Inxi requires sensors to find the system temperature. Make sure **lm_sensors** is installed and correctly configured in your system. For more details about lm_sensors, check the following guide. + + * [**How To View CPU Temperature On Linux**][6] + + + +**11\. Display details about processes** + +To show the list processes top 5 processes which are consuming most CPU and Memory, simply run: + +``` +$ inxi -t +Processes: CPU top: 5 + 1: cpu: 14.3% command: firefox pid: 15989 + 2: cpu: 10.5% command: firefox pid: 13487 + 3: cpu: 7.1% command: firefox pid: 15062 + 4: cpu: 3.1% command: xorg pid: 13493 + 5: cpu: 3.0% command: firefox pid: 14954 + System RAM: total: 7.70 GiB used: 2.99 GiB (38.8%) + Memory top: 5 + 1: mem: 1115.8 MiB (14.1%) command: firefox pid: 15989 + 2: mem: 606.6 MiB (7.6%) command: firefox pid: 13487 + 3: mem: 339.3 MiB (4.3%) command: firefox pid: 13630 + 4: mem: 303.1 MiB (3.8%) command: firefox pid: 18617 + 5: mem: 260.1 MiB (3.2%) command: firefox pid: 15062 +``` + +We can also sort this output by either CPU usage or Memory usage. + +For instance, to find the which top 5 processes are consuming most memory, use the following command: + +``` +$ inxi -t m +Processes: System RAM: total: 7.70 GiB used: 2.73 GiB (35.4%) + Memory top: 5 + 1: mem: 966.1 MiB (12.2%) command: firefox pid: 15989 + 2: mem: 468.2 MiB (5.9%) command: firefox pid: 13487 + 3: mem: 347.9 MiB (4.4%) command: firefox pid: 13708 + 4: mem: 306.7 MiB (3.8%) command: firefox pid: 13630 + 5: mem: 247.2 MiB (3.1%) command: firefox pid: 15062 +``` + +To sort the top 5 processes based on CPU usage, run: + +``` +$ inxi -t c +Processes: CPU top: 5 + 1: cpu: 14.9% command: firefox pid: 15989 + 2: cpu: 10.6% command: firefox pid: 13487 + 3: cpu: 7.0% command: firefox pid: 15062 + 4: cpu: 3.1% command: xorg pid: 13493 + 5: cpu: 2.9% command: firefox pid: 14954 +``` + +Bydefault, Inxi will display the top 5 processes. You can change the number of processes, for example 10, like below. + +``` +$ inxi -t cm10 +Processes: CPU top: 10 + 1: cpu: 14.9% command: firefox pid: 15989 + 2: cpu: 10.6% command: firefox pid: 13487 + 3: cpu: 7.0% command: firefox pid: 15062 + 4: cpu: 3.1% command: xorg pid: 13493 + 5: cpu: 2.9% command: firefox pid: 14954 + 6: cpu: 2.8% command: firefox pid: 13630 + 7: cpu: 1.8% command: firefox pid: 18325 + 8: cpu: 1.4% command: firefox pid: 18617 + 9: cpu: 1.3% command: firefox pid: 13708 + 10: cpu: 0.8% command: firefox pid: 14427 + System RAM: total: 7.70 GiB used: 2.92 GiB (37.9%) + Memory top: 10 + 1: mem: 1160.9 MiB (14.7%) command: firefox pid: 15989 + 2: mem: 475.1 MiB (6.0%) command: firefox pid: 13487 + 3: mem: 353.4 MiB (4.4%) command: firefox pid: 13708 + 4: mem: 308.0 MiB (3.9%) command: firefox pid: 13630 + 5: mem: 269.6 MiB (3.4%) command: firefox pid: 15062 + 6: mem: 249.3 MiB (3.1%) command: firefox pid: 14427 + 7: mem: 238.5 MiB (3.0%) command: firefox pid: 14954 + 8: mem: 208.2 MiB (2.6%) command: firefox pid: 18325 + 9: mem: 194.0 MiB (2.4%) command: firefox pid: 18617 + 10: mem: 143.6 MiB (1.8%) command: firefox pid: 23960 +``` + +The above command will display the top 10 processes that consumes the most CPU and Memory. + +To display only top10 based on memory usage, run: + +``` +$ inxi -t m10 +``` + +**12\. Display partition UUID details** + +To show partition UUIDs ( **U** niversally **U** nique **Id** entifier), use **-u** option. 
+ +``` +$ inxi -u +``` + +There are much more options are yet to be covered. But, these are just enough to get almost all details of your Linux box. + +For more details and options, refer the man page. + +``` +$ man inxi +``` + +* * * + +**Related read:** + + * **[Neofetch – Display your Linux system’s information][7]** + + + +* * * + +The primary purpose of Inxi tool is to use in IRC or forum support. If you are looking for any help via a forum or website where someone is asking the specification of your system, just run this command, and copy/paste the output. + +**Resources:** + + * [**Inxi GitHub Repository**][8] + * [**Inxi home page**][9] + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-find-your-system-details-using-inxi/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2016/08/inxi-520x245-1-720x340.png +[2]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[3]: http://www.ostechnix.com/wp-content/uploads/2016/08/Find-Linux-System-Details-Using-inxi.png +[4]: http://www.ostechnix.com/wp-content/uploads/2016/08/inxi-output-without-color-scheme.png +[5]: https://www.ostechnix.com/find-list-installed-repositories-commandline-linux/ +[6]: https://www.ostechnix.com/view-cpu-temperature-linux/ +[7]: http://www.ostechnix.com/neofetch-display-linux-systems-information/ +[8]: https://github.com/smxi/inxi +[9]: http://smxi.org/docs/inxi.htm diff --git a/sources/tech/20190611 Step by Step Zorin OS 15 Installation Guide with Screenshots.md b/sources/tech/20190611 Step by Step Zorin OS 15 Installation Guide with Screenshots.md new file mode 100644 index 0000000000..79d0b9e1a5 --- /dev/null +++ b/sources/tech/20190611 Step by Step Zorin OS 15 Installation Guide with Screenshots.md @@ -0,0 +1,204 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Step by Step Zorin OS 15 Installation Guide with Screenshots) +[#]: via: (https://www.linuxtechi.com/zorin-os-15-installation-guide-screenshots/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +Step by Step Zorin OS 15 Installation Guide with Screenshots +====== + +Good News for all the Zorin users out there! Zorin has launched its latest version (Zorin OS 15) of its Ubuntu based Linux distro. This version is based on Ubuntu 18.04.2, since its launch in July 2009, it is estimated that this popular distribution has reached more than 17 million downloads. Zorin is renowned for creating a distribution for beginner level users and the all new Zorin OS 15 comes packed with a lot of goodies that surely will make Zorin OS lovers happy. Let’s see some of the major enhancements made in the latest version. + +### New Features of Zorin OS 15 + +Zorin OS has always amazed users with different set of features when every version is released Zorin OS 15 is no exception as it comes with a lot of new features as outlined below: + +**Enhanced User Experience** + +The moment you look at the Zorin OS 15, you will ask whether it is a Linux distro because it looks more like a Windows OS. According to Zorin, it wanted Windows users to get ported to Linux in a more user-friendly manner. 
And it features a Windows like Start menu, quick app launchers, a traditional task bar section, system tray etc. + +**Zorin Connect** + +Another major highlight of Zorin OS 15 is the ability to integrate your Android Smartphones seamlessly with your desktop using the Zorin Connect application. With your phone connected, you can share music, videos and other files between your phone and desktop. You can even use your phone as a mouse to control the desktop. You can also easily control the media playback in your desktop from your phone itself. Quickly reply to all your messages and notifications sent to your phone from your desktop. + +**New GTK Theme** + +Zorin OS 15 ships with an all new GTK Theme that has been exclusively built for this distro and the theme is available in 6 different colors along with the hugely popular dark theme. Another highlight is that the OS automatically detects the time of the day and changes the desktop theme accordingly. Say for example, during sunset it switches to a dark theme whereas in the morning it switches to bright theme automatically. + +**Other New Features:** + +Zorin OS 15 comes packed with a lot of new features including: + + * Compatible with Thunderbolt 3.0 devices + * Supports color emojis + * Comes with an upgraded Linux Kernel 4.18 + * Customized settings available for application menu and task bar + * System font changed to Inter + * Supports renaming bulk files + + + +### Minimum system requirements for Zorin OS 15 (Core): + + * Dual Core 64-bit (1GHZ) + * 2 GB RAM + * 10 GB free disk space + * Internet Connection Optional + * Display (800×600) + + + +### Step by Step Guide to Install Zorin OS 15 (Core) + +Before you start installing Zorin OS 15, ensure you have a copy of the Zorin OS 15 downloaded in your system. If not download then refer official website of [Zorin OS 15][1]. Remember this Linux distribution is available in 4 versions including: + + * Ultimate (Paid Version) + * Core (Free Version) + * Lite (Free Version) + * Education (Free Version) + + + +Note: In this article I will demonstrate Zorin OS 15 Core Installation Steps + +### Step 1) Create Zorin OS 15 Bootable USB Disk + +Once you have downloaded Zorin OS 15, copy the ISO into an USB disk and create a bootable disk. Change our system settings to boot using an USB disk and restart your system. Once you restart your system, you will see the screen as shown below. Click “ **Install or Try Zorin OS** ” + + + +### Step 2) Choose Install Zorin OS + +In the next screen, you will be shown option to whether install Zorin OS 15 or to try Zorin OS. Click “ **Install Zorin OS** ” to continue the installation process. + + + +### Step 3) Choose Keyboard Layout + +Next step is to choose your keyboard layout. By default, English (US) is selected and if you want to choose a different language, then choose it and click “ **Continue** ” + + + +### Step 4) Download Updates and Other Software + +In the next screen, you will be asked whether you need to download updates while you are installing Zorin OS and install other 3rd party applications. 
In case your system is connected to internet then you can select both of these options, but by doing so your installation time increases considerably, or if you don’t want to install updates and third party software during the installation then untick both these options and click “Continue” + + + +### Step 5) Choose Zorin OS 15 Installation Method + +If you are new to Linux and want fresh installation and don’t want to customize partitions, then better choose option “ **Erase disk and install Zorin OS** ” + +If you want to create customize partitions for Zorin OS then choose “ **Something else** “, In this tutorial I will demonstrate how to create customize partition scheme for Zorin OS 15 installation, + +So, choose “ **Something else** ” option and then click on Continue + + + + + +As we can see we have around 42 GB disk available for Zorin OS, We will be creating following partitions, + + * /boot = 2 GB (ext4 file system) + * /home = 20 GB (ext4 file system) + * / = 10 GB (ext4 file system) + * /var = 7 GB (ext4 file system) + * Swap = 2 GB (ext4 file system) + + + +To start creating partitions, first click on “ **New Partition Table** ” and it will show it is going to create empty partition table, click on continue + + + +In the next screen we will see that we have now 42 GB free space on disk (/dev/sda), so let’s create our first partition as /boot, + +Select the free space, then click on + symbol and then specify the partition size as 2048 MB, file system type as ext4 and mount point as /boot, + + + +Click on OK + +Now create our next partition /home of size 20 GB (20480 MB), + + + +Similarly create our next two partition / and /var of size 10 and 7 GB respectively, + + + + + +Let’s create our last partition as swap of size 2 GB + + + +Click on OK + +Choose “ **Install Now** ” option in next window, + + + +In next window, choose “Continue” to write changes to disk and to proceed with installation + + + +### Step 6) Choose Your Preferred Location + +In the next screen, you will be asked to choose your location and click “Continue” + + + +### Step 7) Provide User Credentials + +In the next screen, you’ll be asked to enter the user credentials including your name, computer name, + +Username and password. Once you are done, click “Continue” to proceed with the installation process. + + + +### Step 8) Installing Zorin OS 15 + +Once you click continue, you can see that the Zorin OS 15 starts installing and it may take some time to complete the installation process. + + + +### Step 9) Restart your system after Successful Installation + +Once the installation process is completed, it will ask to restart your computer. Hit “ **Restart Now** ” + + + +### Step 10) Login to Zorin OS 15 + +Once the system restarts, you will be asked to login into the system using the login credentials provided earlier. + +Note: Don’t forget to change the boot medium from bios so that system boots up with disk. + + + +### Step 11) Zorin OS 15 Welcome Screen + +Once your login is successful, you can see the Zorin OS 15 welcome screen. Now you can start exploring all the incredible features of Zorin OS 15. + + + +That’s all from this tutorial, please do share feedback and comments. 
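
One optional extra that is not part of the original steps: since Zorin OS 15 is based on Ubuntu 18.04.2, many users bring the freshly installed system up to date from a terminal with the usual APT commands. This is only a sketch of that routine:

```
# Hypothetical post-install housekeeping on the new Zorin OS 15 system:
sudo apt update          # refresh the package lists
sudo apt upgrade -y      # install any pending updates
```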
+ +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/zorin-os-15-installation-guide-screenshots/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://zorinos.com/download/ diff --git a/sources/tech/20190611 What is a Linux user.md b/sources/tech/20190611 What is a Linux user.md new file mode 100644 index 0000000000..f5f71758fa --- /dev/null +++ b/sources/tech/20190611 What is a Linux user.md @@ -0,0 +1,56 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What is a Linux user?) +[#]: via: (https://opensource.com/article/19/6/what-linux-user) +[#]: author: (Anderson Silva https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth) + +What is a Linux user? +====== +The definition of who is a "Linux user" has grown to be a bigger tent, +and it's a great change. +![][1] + +> _Editor's note: this article was updated on Jun 11, 2019, at 1:15:19 PM to more accurately reflect the author's perspective on an open and inclusive community of practice in the Linux community._ + +In only two years, the Linux kernel will be 30 years old. Think about that! Where were you in 1991? Were you even born? I was 13! Between 1991 and 1993 a few Linux distributions were created, and at least three of them—Slackware, Debian, and Red Hat–provided the [backbone][2] the Linux movement was built on. + +Getting a copy of a Linux distribution and installing and configuring it on a desktop or server was very different back then than today. It was hard! It was frustrating! It was an accomplishment if you got it running! We had to fight with incompatible hardware, configuration jumpers on devices, BIOS issues, and many other things. Even if the hardware was compatible, many times, you still had to compile the kernel, modules, and drivers to get them to work on your system. + +If you were around during those days, you are probably nodding your head. Some readers might even call them the "good old days," because choosing to use Linux meant you had to learn about operating systems, computer architecture, system administration, networking, and even programming, just to keep the OS functioning. I am not one of them though: Linux being a regular part of everyone's technology experience is one of the most amazing changes in our industry! + +Almost 30 years later, Linux has gone far beyond the desktop and server. You will find Linux in automobiles, airplanes, appliances, smartphones… virtually everywhere! You can even purchase laptops, desktops, and servers with Linux preinstalled. If you consider cloud computing, where corporations and even individuals can deploy Linux virtual machines with the click of a button, it's clear how widespread the availability of Linux has become. + +With all that in mind, my question for you is: **How do you define a "Linux user" today?** + +If you buy your parent or grandparent a Linux laptop from System76 or Dell, log them into their social media and email, and tell them to click "update system" every so often, they are now a Linux user. 
If you did the same with a Windows or MacOS machine, they would be Windows or MacOS users. It's incredible to me that, unlike the '90s, Linux is now a place for anyone and everyone to compute. + +In many ways, this is due to the web browser becoming the "killer app" on the desktop computer. Now, many users don't care what operating system they are using as long as they can get to their app or service. + +How many people do you know who use their phone, desktop, or laptop regularly but can't manage files, directories, and drivers on their systems? How many can't install a binary that isn't attached to an "app store" of some sort? How about compiling an application from scratch?! For me, it's almost no one. That's the beauty of open source software maturing along with an ecosystem that cares about accessibility. + +Today's Linux user is not required to know, study, or even look up information as the Linux user of the '90s or early 2000s did, and that's not a bad thing. The old imagery of Linux being exclusively for bearded men is long gone, and I say good riddance. + +There will always be room for a Linux user who is interested, curious, _fascinated_ about computers, operating systems, and the idea of creating, using, and collaborating on free software. There is just as much room for creative open source contributors on Windows and MacOS these days as well. Today, being a Linux user is being anyone with a Linux system. And that's a wonderful thing. + +### The change to what it means to be a Linux user + +When I started with Linux, being a user meant knowing how to the operating system functioned in every way, shape, and form. Linux has matured in a way that allows the definition of "Linux users" to encompass a much broader world of possibility and the people who inhabit it. It may be obvious to say, but it is important to say clearly: anyone who uses Linux is an equal Linux user. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/what-linux-user + +作者:[Anderson Silva][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ansilva/users/petercheer/users/ansilva/users/greg-p/users/ansilva/users/ansilva/users/bcotton/users/ansilva/users/seth/users/ansilva/users/don-watkins/users/ansilva/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22 +[2]: https://en.wikipedia.org/wiki/Linux_distribution#/media/File:Linux_Distribution_Timeline.svg diff --git a/sources/tech/20190612 How to write a loop in Bash.md b/sources/tech/20190612 How to write a loop in Bash.md new file mode 100644 index 0000000000..f63bff9cd3 --- /dev/null +++ b/sources/tech/20190612 How to write a loop in Bash.md @@ -0,0 +1,282 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to write a loop in Bash) +[#]: via: (https://opensource.com/article/19/6/how-write-loop-bash) +[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/goncasousa/users/howtopamm/users/howtopamm/users/seth/users/wavesailor/users/seth) + +How to write a loop in Bash +====== +Automatically perform a set of actions on multiple files with for loops +and find commands. 
+![bash logo on green background][1] + +A common reason people want to learn the Unix shell is to unlock the power of batch processing. If you want to perform some set of actions on many files, one of the ways to do that is by constructing a command that iterates over those files. In programming terminology, this is called _execution control,_ and one of the most common examples of it is the **for** loop. + +A **for** loop is a recipe detailing what actions you want your computer to take _for_ each data object (such as a file) you specify. + +### The classic for loop + +An easy loop to try is one that analyzes a collection of files. This probably isn't a useful loop on its own, but it's a safe way to prove to yourself that you have the ability to handle each file in a directory individually. First, create a simple test environment by creating a directory and placing some copies of some files into it. Any file will do initially, but later examples require graphic files (such as JPEG, PNG, or similar). You can create the folder and copy files into it using a file manager or in the terminal: + + +``` +$ mkdir example +$ cp ~/Pictures/vacation/*.{png,jpg} example +``` + +Change directory to your new folder, then list the files in it to confirm that your test environment is what you expect: + + +``` +$ cd example +$ ls -1 +cat.jpg +design_maori.png +otago.jpg +waterfall.png +``` + +The syntax to loop through each file individually in a loop is: create a variable ( **f** for file, for example). Then define the data set you want the variable to cycle through. In this case, cycle through all files in the current directory using the ***** wildcard character (the ***** wildcard matches _everything_ ). Then terminate this introductory clause with a semicolon ( **;** ). + + +``` +`$ for f in * ;` +``` + +Depending on your preference, you can choose to press **Return** here. The shell won't try to execute the loop until it is syntactically complete. + +Next, define what you want to happen with each iteration of the loop. For simplicity, use the **file** command to get a little bit of data about each file, represented by the **f** variable (but prepended with a **$** to tell the shell to swap out the value of the variable for whatever the variable currently contains): + + +``` +`do file $f ;` +``` + +Terminate the clause with another semi-colon and close the loop: + + +``` +`done` +``` + +Press **Return** to start the shell cycling through _everything_ in the current directory. The **for** loop assigns each file, one by one, to the variable **f** and runs your command: + + +``` +$ for f in * ; do +> file $f ; +> done +cat.jpg: JPEG image data, EXIF standard 2.2 +design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced +otago.jpg: JPEG image data, EXIF standard 2.2 +waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced +``` + +You can also write it this way: + + +``` +$ for f in *; do file $f; done +cat.jpg: JPEG image data, EXIF standard 2.2 +design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced +otago.jpg: JPEG image data, EXIF standard 2.2 +waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced +``` + +Both the multi-line and single-line formats are the same to your shell and produce the exact same results. + +### A practical example + +Here's a practical example of how a loop can be useful for everyday computing. Assume you have a collection of vacation photos you want to send to friends. 
Your photo files are huge, making them too large to email and inconvenient to upload to your [photo-sharing service][2]. You want to create smaller web-versions of your photos, but you have 100 photos and don't want to spend the time reducing each photo, one by one. + +First, install the **ImageMagick** command using your package manager on Linux, BSD, or Mac. For instance, on Fedora and RHEL: + + +``` +`$ sudo dnf install ImageMagick` +``` + +On Ubuntu or Debian: + + +``` +`$ sudo apt install ImageMagick` +``` + +On BSD, use **ports** or [pkgsrc][3]. On Mac, use [Homebrew][4] or [MacPorts][5]. + +Once you install ImageMagick, you have a set of new commands to operate on photos. + +Create a destination directory for the files you're about to create: + + +``` +`$ mkdir tmp` +``` + +To reduce each photo to 33% of its original size, try this loop: + + +``` +`$ for f in * ; do convert $f -scale 33% tmp/$f ; done` +``` + +Then look in the **tmp** folder to see your scaled photos. + +You can use any number of commands within a loop, so if you need to perform complex actions on a batch of files, you can place your whole workflow between the **do** and **done** statements of a **for** loop. For example, suppose you want to copy each processed photo straight to a shared photo directory on your web host and remove the photo file from your local system: + + +``` +$ for f in * ; do +convert $f -scale 33% tmp/$f +scp -i seth_web tmp/$f [seth@example.com][6]:~/public_html +trash tmp/$f ; +done +``` + +For each file processed by the **for** loop, your computer automatically runs three commands. This means if you process just 10 photos this way, you save yourself 30 commands and probably at least as many minutes. + +### Limiting your loop + +A loop doesn't always have to look at every file. You might want to process only the JPEG files in your example directory: + + +``` +$ for f in *.jpg ; do convert $f -scale 33% tmp/$f ; done +$ ls -m tmp +cat.jpg, otago.jpg +``` + +Or, instead of processing files, you may need to repeat an action a specific number of times. A **for** loop's variable is defined by whatever data you provide it, so you can create a loop that iterates over numbers instead of files: + + +``` +$ for n in {0..4}; do echo $n ; done +0 +1 +2 +3 +4 +``` + +### More looping + +You now know enough to create your own loops. Until you're comfortable with looping, use them on _copies_ of the files you want to process and, as often as possible, use commands with built-in safeguards to prevent you from clobbering your data and making irreparable mistakes, like accidentally renaming an entire directory of files to the same name, each overwriting the other. + +For advanced **for** loop topics, read on. + +### Not all shells are Bash + +The **for** keyword is built into the Bash shell. Many similar shells use the same keyword and syntax, but some shells, like [tcsh][7], use a different keyword, like **foreach** , instead. + +In tcsh, the syntax is similar in spirit but more strict than Bash. In the following code sample, do not type the string **foreach?** in lines 2 and 3. It is a secondary prompt alerting you that you are still in the process of building your loop. + + +``` +$ foreach f (*) +foreach? file $f +foreach? 
end +cat.jpg: JPEG image data, EXIF standard 2.2 +design_maori.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced +otago.jpg: JPEG image data, EXIF standard 2.2 +waterfall.png: PNG image data, 4608 x 2592, 8-bit/color RGB, non-interlaced +``` + +In tcsh, both **foreach** and **end** must appear alone on separate lines, so you cannot create a **for** loop on one line as you can with Bash and similar shells. + +### For loops with the find command + +In theory, you could find a shell that doesn't provide a **for** loop function, or you may just prefer to use a different command with added features. + +The **find** command is another way to implement the functionality of a **for** loop, as it offers several ways to define the scope of which files to include in your loop as well as options for [Parallel][8] processing. + +The **find** command is meant to help you find files on your hard drives. Its syntax is simple: you provide the path of the location you want to search, and **find** finds all files and directories: + + +``` +$ find . +. +./cat.jpg +./design_maori.png +./otago.jpg +./waterfall.png +``` + +You can filter the search results by adding some portion of the name: + + +``` +$ find . -name "*jpg" +./cat.jpg +./otago.jpg +``` + +The great thing about **find** is that each file it finds can be fed into a loop using the **-exec** flag. For instance, to scale down only the PNG photos in your example directory: + + +``` +$ find . -name "*png" -exec convert {} -scale 33% tmp/{} \; +$ ls -m tmp +design_maori.png, waterfall.png +``` + +In the **-exec** clause, the bracket characters **{}** stand in for whatever item **find** is processing (in other words, any file ending in PNG that has been located, one at a time). The **-exec** clause must be terminated with a semicolon, but Bash usually tries to use the semicolon for itself. You "escape" the semicolon with a backslash ( **\;** ) so that **find** knows to treat that semicolon as its terminating character. + +The **find** command is very good at what it does, and it can be too good sometimes. For instance, if you reuse it to find PNG files for another photo process, you will get a few errors: + + +``` +$ find . -name "*png" -exec convert {} -flip -flop tmp/{} \; +convert: unable to open image `tmp/./tmp/design_maori.png': +No such file or directory @ error/blob.c/OpenBlob/2643. +... +``` + +It seems that **find** has located all the PNG files—not only the ones in your current directory ( **.** ) but also those that you processed before and placed in your **tmp** subdirectory. In some cases, you may want **find** to search the current directory plus all other directories within it (and all directories in _those_ ). It can be a powerful recursive processing tool, especially in complex file structures (like directories of music artists containing directories of albums filled with music files), but you can limit this with the **-maxdepth** option. + +To find only PNG files in the current directory (excluding subdirectories): + + +``` +`$ find . -maxdepth 1 -name "*png"` +``` + +To find and process files in the current directory plus an additional level of subdirectories, increment the maximum depth by 1: + + +``` +`$ find . -maxdepth 2 -name "*png"` +``` + +Its default is to descend into all subdirectories. + +### Looping for fun and profit + +The more you use loops, the more time and effort you save, and the bigger the tasks you can tackle. 
You're just one user, but with a well-thought-out loop, you can make your computer do the hard work. + +You can and should treat looping like any other command, keeping it close at hand for when you need to repeat a single action or two on several files. However, it's also a legitimate gateway to serious programming, so if you have to accomplish a complex task on any number of files, take a moment out of your day to plan out your workflow. If you can achieve your goal on one file, then wrapping that repeatable process in a **for** loop is relatively simple, and the only "programming" required is an understanding of how variables work and enough organization to separate unprocessed from processed files. With a little practice, you can move from a Linux user to a Linux user who knows how to write a loop, so get out there and make your computer work for you! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/how-write-loop-bash + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth/users/goncasousa/users/howtopamm/users/howtopamm/users/seth/users/wavesailor/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bash_command_line.png?itok=k4z94W2U (bash logo on green background) +[2]: http://nextcloud.com +[3]: http://pkgsrc.org +[4]: http://brew.sh +[5]: https://www.macports.org +[6]: mailto:seth@example.com +[7]: https://en.wikipedia.org/wiki/Tcsh +[8]: https://opensource.com/article/18/5/gnu-parallel diff --git a/sources/tech/20190612 The bits and bytes of PKI.md b/sources/tech/20190612 The bits and bytes of PKI.md new file mode 100644 index 0000000000..3cf9bd9c41 --- /dev/null +++ b/sources/tech/20190612 The bits and bytes of PKI.md @@ -0,0 +1,284 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The bits and bytes of PKI) +[#]: via: (https://opensource.com/article/19/6/bits-and-bytes-pki) +[#]: author: (Alex Wood https://opensource.com/users/awood) + +The bits and bytes of PKI +====== +Take a look under the public key infrastructure's hood to get a better +understanding of its format. +![Computer keyboard typing][1] + +In two previous articles— _[An introduction to cryptography and public key infrastructure][2]_ and _[How do private keys work in PKI and cryptography?][3]_ —I discussed cryptography and public key infrastructure (PKI) in a general way. I talked about how digital bundles called _certificates_ store public keys and identifying information. These bundles contain a lot of complexity, and it's useful to have a basic understanding of the format for when you need to look under the hood. + +### Abstract art + +Keys, certificate signing requests, certificates, and other PKI artifacts define themselves in a data description language called [Abstract Syntax Notation One][4] (ASN.1). ASN.1 defines a series of simple data types (integers, strings, dates, etc.) along with some structured types (sequences, sets). By using those types as building blocks, we can create surprisingly complex data formats. + +ASN.1 contains plenty of pitfalls for the unwary, however. 
For example, it has two different ways of representing dates: GeneralizedTime ([ISO 8601][5] format) and UTCTime (which uses a two-digit year). Strings introduce even more confusion. We have IA5String for ASCII strings and UTF8String for Unicode strings. ASN.1 also defines several other string types, from the exotic [T61String][6] and [TeletexString][7] to the more innocuous sounding—but probably not what you wanted—PrintableString (only a small subset of ASCII) and UniversalString (encoded in [UTF-32][8]). If you're writing or reading ASN.1 data, I recommend referencing the [specification][9]. + +ASN.1 has another data type worth special mention: the object identifier (OID). OIDs are a series of integers. Commonly they are shown with periods delimiting them. Each integer represents a node in what is basically a "tree of things." For example, [1.3.6.1.4.1.2312][10] is the OID for my employer, Red Hat, where "1" is the node for the International Organization for Standardization (ISO), "3" is for ISO-identified organizations, "6" is for the US Department of Defense (which, for historical reasons, is the parent to the next node), "1" is for the internet, "4" is for private organizations, "1" is for enterprises, and finally "2312," which is Red Hat's own. + +More commonly, OIDs are regularly used to identify specific algorithms in PKI objects. If you have a digital signature, it's not much use if you don't know what type of signature it is. The signature algorithm "sha256WithRSAEncryption" has the OID "1.2.840.113549.1.1.11," for example. + +### ASN.1 at work + +Suppose we own a factory that produces flying brooms, and we need to store some data about every broom. Our brooms have a model name, a serial number, and a series of inspections that have been made to ensure flight-worthiness. We could store this information using ASN.1 like so: + + +``` +BroomInfo ::= SEQUENCE { +model UTF8String, +serialNumber INTEGER, +inspections SEQUENCE OF InspectionInfo +} + +InspectionInfo ::= SEQUENCE { +inspectorName UTF8String, +inspectionDate GeneralizedTime +} +``` + +The example above defines the model name as a UTF8-encoded string, the serial number as an integer, and our inspections as a series of InspectionInfo items. Then we see that each InspectionInfo item comprises two pieces of data: the inspector's name and the time of the inspection. + +An actual instance of BroomInfo data would look something like this in ASN.1's value assignment syntax: + + +``` +broom BroomInfo ::= { +model "Nimbus 2000", +serialNumber 1066, +inspections { +{ +inspectorName "Harry", +inspectionDate "201901011200Z" +} +{ +inspectorName "Hagrid", +inspectionDate "201902011200Z" +} +} +} +``` + +Don't worry too much about the particulars of the syntax; for the average developer, having a basic grasp of how the pieces fit together is sufficient. 
+ +Now let's look at a real example from [RFC 8017][11] that I have abbreviated somewhat for clarity: + + +``` +RSAPrivateKey ::= SEQUENCE { +version Version, +modulus INTEGER, -- n +publicExponent INTEGER, -- e +privateExponent INTEGER, -- d +prime1 INTEGER, -- p +prime2 INTEGER, -- q +exponent1 INTEGER, -- d mod (p-1) +exponent2 INTEGER, -- d mod (q-1) +coefficient INTEGER, -- (inverse of q) mod p +otherPrimeInfos OtherPrimeInfos OPTIONAL +} + +Version ::= INTEGER { two-prime(0), multi(1) } +(CONSTRAINED BY +{-- version must be multi if otherPrimeInfos present --}) + +OtherPrimeInfos ::= SEQUENCE SIZE(1..MAX) OF OtherPrimeInfo + +OtherPrimeInfo ::= SEQUENCE { +prime INTEGER, -- ri +exponent INTEGER, -- di +coefficient INTEGER -- ti +} +``` + +The ASN.1 above defines the PKCS #1 format used to store RSA keys. Looking at this, we can see the RSAPrivateKey sequence starts with a version type (either 0 or 1) followed by a bunch of integers and then an optional type called OtherPrimeInfos. The OtherPrimeInfos sequence contains one or more pieces of OtherPrimeInfo. And each OtherPrimeInfo is just a sequence of integers. + +Let's look at an actual instance by asking OpenSSL to generate an RSA key and then pipe it into [asn1parse][12], which will print it out in a more human-friendly format. (By the way, the **genrsa** command I'm using here has been superseded by **genpkey** ; we'll see why a little later.) + + +``` +% openssl genrsa 4096 2> /dev/null | openssl asn1parse +0:d=0 hl=4 l=2344 cons: SEQUENCE +4:d=1 hl=2 l= 1 prim: INTEGER :00 +7:d=1 hl=4 l= 513 prim: INTEGER :B80B0C2443... +524:d=1 hl=2 l= 3 prim: INTEGER :010001 +529:d=1 hl=4 l= 512 prim: INTEGER :59C609C626... +1045:d=1 hl=4 l= 257 prim: INTEGER :E8FC43002D... +1306:d=1 hl=4 l= 257 prim: INTEGER :CA39222DD2... +1567:d=1 hl=4 l= 256 prim: INTEGER :25F6CD181F... +1827:d=1 hl=4 l= 256 prim: INTEGER :38CCE374CB... +2087:d=1 hl=4 l= 257 prim: INTEGER :C80430E810... +``` + +Recall that RSA uses a modulus, _n_ ; a public exponent, _e_ ; and a private exponent, _d_. Now let's look at the sequence. First, we see the version set to 0 for a two-prime RSA key (what **genrsa** generates), an integer for the modulus, _n_ , and then 0x010001 for the public exponent, _e_. If we convert to decimal, we'll see our public exponent is 65537, a number [commonly][13] used as an RSA public exponent. Following the public exponent, we see the integer for the private exponent, _e_ , and then some other integers that are used to speed up decryption and signing. Explaining how this optimization works is beyond the scope of this article, but if you like math, there's a [good video on the subject][14]. + +What about that other stuff on the left side of the output? What does "h=4" and "l=513" mean? We'll cover that shortly. + +### DERangement + +We've seen the "abstract" part of Abstract Syntax Notation One, but how does this data get encoded and stored? For that, we turn to a binary format called Distinguished Encoding Rules (DER) defined in the [X.690][15] specification. DER is a stricter version of its parent, Basic Encoding Rules (BER), in that for any given data, there is only one way to encode it. If we're going to be digitally signing data, it makes things a lot easier if there is only one possible encoding that needs to be signed instead of dozens of functionally equivalent representations. + +DER uses a [tag-length-value][16] (TLV) structure. The encoding of a piece of data begins with an identifier octet defining the data's type. 
("Octet" is used rather than "byte" since the standard is very old and some early architectures didn't use 8 bits for a byte.) Next are the octets that encode the length of the data, and finally, there is the data. The data can be another TLV series. The left side of the **asn1parse** output makes a little more sense now. The first number indicates the absolute offset from the beginning. The "d=" tells us the depth of that item in the structure. The first line is a sequence, which we descend into on the next line (the depth _d_ goes from 0 to 1) whereupon **asn1parse** begins enumerating all the elements in that sequence. The "hl=" is the header length (the sum of the identifier and length octets), and the "l=" tells us the length of that particular piece of data. + +How is header length determined? It's the sum of the identifier byte and the bytes encoding the length. In our example, the top sequence is 2344 octets long. If it were less than 128 octets, the length would be encoded in a single octet in the "short form": bit 8 would be a zero and bits 7 to 1 would hold the length value ( **2 7-1=127**). A value of 2344 needs more space, so the "long" form is used. The first octet has bit 8 set to one, and bits 7 to 1 contain the length of the length. In our case, a value of 2344 can be encoded in two octets (0x0928). Combined with the first "length of the length" octet, we have three octets total. Add the one identifier octet, and that gives us our total header length of four. + +As a side exercise, let's consider the largest value we could possibly encode. We've seen that we have up to 127 octets to encode a length. At 8 bits per octet, we have a total of 1008 bits to use, so we can hold a number equal to **2 1008-1**. That would equate to a content length of **2.743062*10 279** yottabytes, staggeringly more than the estimated **10 80** atoms in the observable universe. If you're interested in all the details, I recommend reading "[A Layman's Guide to a Subset of ASN.1, BER, and DER][17]." + +What about "cons" and "prim"? Those indicate whether the value is encoded with "constructed" or "primitive" encoding. Primitive encoding is used for simple types like "INTEGER" or "BOOLEAN," while constructed encoding is used for structured types like "SEQUENCE" or "SET." The actual difference between the two encoding methods is whether bit 6 in the identifier octet is a zero or one. If it's a one, the parser knows that the content octets are also DER-encoded and it can descend. + +### PEM pals + +While useful in a lot of cases, a binary format won't pass muster if we need to display the data as text. Before the [MIME][18] standard existed, attachment support was spotty. Commonly, if you wanted to attach data, you put it in the body of the email, and since SMTP only supported ASCII, that meant converting your binary data (like the DER of your public key, for example) into ASCII characters. + +Thus, the PEM format emerged. PEM stands for "Privacy-Enhanced Email" and was an early standard for transmitting and storing PKI data. The standard never caught on, but the format it defined for storage did. PEM-encoded objects are just DER objects that are [base64][19]-encoded and wrapped at 64 characters per line. To describe the type of object, a header and footer surround the base64 string. You'll see **\-----BEGIN CERTIFICATE-----** or **\-----BEGIN PRIVATE KEY-----** , for example. + +Often you'll see files with the ".pem" extension. I don't find this suffix useful. 
The file could contain a certificate, a key, a certificate signing request, or several other possibilities. Imagine going to a sushi restaurant and seeing a menu that described every item as "fish and rice"! Instead, I prefer more informative extensions like ".crt", ".key", and ".csr". + +### The PKCS zoo + +Earlier, I showed an example of a PKCS #1-formatted RSA key. As you might expect, formats for storing certificates and signing requests also exist in various IETF RFCs. For example, PKCS #8 can be used to store private keys for many different algorithms (including RSA!). Here's some of the ASN.1 from [RFC 5208][20] for PKCS #8. (RFC 5208 has been obsoleted by RFC 5958, but I feel that the ASN.1 in RFC 5208 is easier to understand.) + + +``` +PrivateKeyInfo ::= SEQUENCE { +version Version, +privateKeyAlgorithm PrivateKeyAlgorithmIdentifier, +privateKey PrivateKey, +attributes [0] IMPLICIT Attributes OPTIONAL } + +Version ::= INTEGER + +PrivateKeyAlgorithmIdentifier ::= AlgorithmIdentifier + +PrivateKey ::= OCTET STRING + +Attributes ::= SET OF Attribute +``` + +If you store your RSA private key in a PKCS #8, the PrivateKey element will actually be a DER-encoded PKCS #1! Let's prove it. Remember earlier when I used **genrsa** to generate a PKCS #1? OpenSSL can generate a PKCS #8 with the **genpkey** command, and you can specify RSA as the algorithm to use. + + +``` +% openssl genpkey -algorithm RSA | openssl asn1parse +0:d=0 hl=4 l= 629 cons: SEQUENCE +4:d=1 hl=2 l= 1 prim: INTEGER :00 +7:d=1 hl=2 l= 13 cons: SEQUENCE +9:d=2 hl=2 l= 9 prim: OBJECT :rsaEncryption +20:d=2 hl=2 l= 0 prim: NULL +22:d=1 hl=4 l= 607 prim: OCTET STRING [HEX DUMP]:3082025B... +``` + +You may have spotted the "OBJECT" in the output and guessed that was related to OIDs. You'd be correct. The OID "1.2.840.113549.1.1.1" is assigned to RSA encryption. OpenSSL has a built-in list of common OIDs and translates them into a human-readable form for you. + + +``` +% openssl genpkey -algorithm RSA | openssl asn1parse -strparse 22 +0:d=0 hl=4 l= 604 cons: SEQUENCE +4:d=1 hl=2 l= 1 prim: INTEGER :00 +7:d=1 hl=3 l= 129 prim: INTEGER :CA6720E706... +139:d=1 hl=2 l= 3 prim: INTEGER :010001 +144:d=1 hl=3 l= 128 prim: INTEGER :05D0BEBE44... +275:d=1 hl=2 l= 65 prim: INTEGER :F215DC6B77... +342:d=1 hl=2 l= 65 prim: INTEGER :D6095CED7E... +409:d=1 hl=2 l= 64 prim: INTEGER :402C7562F3... +475:d=1 hl=2 l= 64 prim: INTEGER :06D0097B2D... +541:d=1 hl=2 l= 65 prim: INTEGER :AB266E8E51... +``` + +In the second command, I've told **asn1parse** via the **-strparse** argument to move to octet 22 and begin parsing the content's octets there as an ASN.1 object. We can clearly see that the PKCS #8's PrivateKey looks just like the PKCS #1 that we examined earlier. + +You should favor using the **genpkey** command. PKCS #8 has some features that PKCS #1 does not: PKCS #8 can store private keys for multiple different algorithms (PKCS #1 is RSA-specific), and it provides a mechanism to encrypt the private key using a passphrase and a symmetric cipher. + +Encrypted PKCS #8 objects use a different ASN.1 syntax that I'm not going to dive into, but let's take a look at an actual example and see if anything stands out. Encrypting a private key with **genpkey** requires that you specify the symmetric encryption algorithm to use. I'll use AES-256-CBC for this example and a password of "hello" (the "pass:" prefix is the way of telling OpenSSL that the password is coming in from the command line). 
+ + +``` +% openssl genpkey -algorithm RSA -aes-256-cbc -pass pass:hello | openssl asn1parse +0:d=0 hl=4 l= 733 cons: SEQUENCE +4:d=1 hl=2 l= 87 cons: SEQUENCE +6:d=2 hl=2 l= 9 prim: OBJECT :PBES2 +17:d=2 hl=2 l= 74 cons: SEQUENCE +19:d=3 hl=2 l= 41 cons: SEQUENCE +21:d=4 hl=2 l= 9 prim: OBJECT :PBKDF2 +32:d=4 hl=2 l= 28 cons: SEQUENCE +34:d=5 hl=2 l= 8 prim: OCTET STRING [HEX DUMP]:17E6FE554E85810A +44:d=5 hl=2 l= 2 prim: INTEGER :0800 +48:d=5 hl=2 l= 12 cons: SEQUENCE +50:d=6 hl=2 l= 8 prim: OBJECT :hmacWithSHA256 +60:d=6 hl=2 l= 0 prim: NULL +62:d=3 hl=2 l= 29 cons: SEQUENCE +64:d=4 hl=2 l= 9 prim: OBJECT :aes-256-cbc +75:d=4 hl=2 l= 16 prim: OCTET STRING [HEX DUMP]:91E9536C39... +93:d=1 hl=4 l= 640 prim: OCTET STRING [HEX DUMP]:98007B264F... + +% openssl genpkey -algorithm RSA -aes-256-cbc -pass pass:hello | head -n 1 +\-----BEGIN ENCRYPTED PRIVATE KEY----- +``` + +There are a couple of interesting items here. We see our encryption algorithm is recorded with an OID starting at octet 64. There's an OID for "PBES2" (Password-Based Encryption Scheme 2), which defines a standard process for encryption and decryption, and an OID for "PBKDF2" (Password-Based Key Derivation Function 2), which defines a standard process for creating encryption keys from passwords. Helpfully, OpenSSL uses the header "ENCRYPTED PRIVATE KEY" in the PEM output. + +OpenSSL will let you encrypt a PKCS #1, but it's done in a non-standard way via a series of headers inserted into the PEM: + + +``` +% openssl genrsa -aes256 -passout pass:hello 4096 +\-----BEGIN RSA PRIVATE KEY----- +Proc-Type: 4,ENCRYPTED +DEK-Info: AES-256-CBC,5B2C64DC05B7C0471A278C76562FD776 +... +``` + +### In conclusion + +There's a final PKCS format you need to know about: [PKCS #12][21]. The PKCS #12 format allows for storing multiple objects all in one file. If you have a certificate and its corresponding key or a chain of certificates, you can store them together in one PKCS #12 file. Individual entries in the file can be protected with password-based encryption. + +Beyond the PKCS formats, there are other storage methods such as the Java-specific JKS format and the NSS library from Mozilla, which uses file-based databases (SQLite or Berkeley DB, depending on the version). Luckily, the PKCS formats are a lingua franca that can serve as a start or reference if you need to deal with other formats. + +If this all seems confusing, that's because it is. Unfortunately, the PKI ecosystem has a lot of sharp edges between tools that generate enigmatic error messages (looking at you, OpenSSL) and standards that have grown and evolved over the past 35 years. Having a basic understanding of how PKI objects are stored is critical if you're doing any application development that will be accessed over SSL/TLS. + +I hope this article has shed a little light on the subject and might save you from spending fruitless hours in the PKI wilderness. 
+ +* * * + +_The author would like to thank Hubert Kario for providing a technical review._ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/bits-and-bytes-pki + +作者:[Alex Wood][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/awood +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/keyboaord_enter_writing_documentation.jpg?itok=kKrnXc5h (Computer keyboard typing) +[2]: https://opensource.com/article/18/5/cryptography-pki +[3]: https://opensource.com/article/18/7/private-keys +[4]: https://en.wikipedia.org/wiki/Abstract_Syntax_Notation_One +[5]: https://en.wikipedia.org/wiki/ISO_8601 +[6]: https://en.wikipedia.org/wiki/ITU_T.61 +[7]: https://en.wikipedia.org/wiki/Teletex +[8]: https://en.wikipedia.org/wiki/UTF-32 +[9]: https://www.itu.int/itu-t/recommendations/rec.aspx?rec=X.680 +[10]: https://www.alvestrand.no/objectid/1.3.6.1.4.1.2312.html +[11]: https://tools.ietf.org/html/rfc8017 +[12]: https://linux.die.net/man/1/asn1parse +[13]: https://www.johndcook.com/blog/2018/12/12/rsa-exponent/ +[14]: https://www.youtube.com/watch?v=NcPdiPrY_g8 +[15]: https://en.wikipedia.org/wiki/X.690 +[16]: https://en.wikipedia.org/wiki/Type-length-value +[17]: http://luca.ntop.org/Teaching/Appunti/asn1.html +[18]: https://www.theguardian.com/technology/2012/mar/26/ather-of-the-email-attachment +[19]: https://en.wikipedia.org/wiki/Base64 +[20]: https://tools.ietf.org/html/rfc5208 +[21]: https://tools.ietf.org/html/rfc7292 diff --git a/sources/tech/20190612 Why --1-, -7-, -11--.map(parseInt) returns -1, NaN, 3- in Javascript.md b/sources/tech/20190612 Why --1-, -7-, -11--.map(parseInt) returns -1, NaN, 3- in Javascript.md new file mode 100644 index 0000000000..4e892a4610 --- /dev/null +++ b/sources/tech/20190612 Why --1-, -7-, -11--.map(parseInt) returns -1, NaN, 3- in Javascript.md @@ -0,0 +1,205 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why ['1', '7', '11'].map(parseInt) returns [1, NaN, 3] in Javascript) +[#]: via: (https://medium.com/dailyjs/parseint-mystery-7c4368ef7b21) +[#]: author: (Eric Tong https://medium.com/@erictongs) + +Why ['1', '7', '11'].map(parseInt) returns [1, NaN, 3] in Javascript +====== +![][1]![][2]??? + +**Javascript is weird.** Don’t believe me? Try converting an array of strings into integers using map and parseInt. Fire up your console (F12 on Chrome), paste in the following, and press enter (or run the pen below). + +``` +['1', '7', '11'].map(parseInt); +``` + +Instead of giving us an array of integers `[1, 7, 11]`, we end up with `[1, NaN, 3]`. What? To find out what on earth is going on, we’ll first have to talk about some Javascript concepts. If you’d like a TLDR, I’ve included a quick summary at the end of this story. + +### Truthiness & Falsiness + +Here’s a simple if-else statement in Javascript: + +``` +if (true) { + // this always runs +} else { + // this never runs +} +``` + +In this case, the condition of the if-else statement is true, so the if-block is always executed and the else-block is ignored. This is a trivial example because true is a boolean. What if we put a non-boolean as the condition? + +``` +if ("hello world") { + // will this run? 
+ console.log("Condition is truthy"); +} else { + // or this? + console.log("Condition is falsy"); +} +``` + +Try running this code in your developer’s console (F12 on Chrome). You should find that the if-block runs. This is because the string object `"hello world"` is truthy. + +Every Javascript object is either **truthy** or **falsy**. When placed in a boolean context, such as an if-else statement, objects are treated as true or false based on their truthiness. So which objects are truthy and which are falsy? Here’s a simple rule to follow: + +**All values are truthy except for:** `false` **,** `0` **,** `""` **(empty string),** `null` **,** `undefined` **, and** `NaN` **.** + +Confusingly, this means that the string `"false"`, the string `"0"`, an empty object `{}`, and an empty array `[]` are all truthy. You can double check this by passing an object into the Boolean function (e.g. `Boolean("0");`). + +For our purposes, it is enough to remember that `0` is falsy. + +### Radix + +``` +0 1 2 3 4 5 6 7 8 9 10 +``` + +When we count from zero to nine, we have different symbols for each of the numbers (0–9). However, once we reach ten, we need two different symbols (1 and 0) to represent the number. This is because our decimal counting system has a radix (or base) of ten. + +Radix is the smallest number which can only be represented by more than one symbol. **Different counting systems have different radixes, and so, the same digits can refer to different numbers in counting systems.** + +``` +DECIMAL BINARY HEXADECIMAL +RADIX=10 RADIX=2 RADIX=16 +0 0 0 +1 1 1 +2 10 + 2 +3 +11 3 +4 100 4 +5 101 5 +6 110 6 +7 111 7 +8 1000 8 +9 1001 9 +10 1010 A +11 1011 B +12 1100 C +13 1101 D +14 1110 E +15 1111 F +16 10000 10 +17 10001 +11 +``` + +For example, looking at the table above, we see that the same digits 11 can mean different numbers in different counting systems. If the radix is 2, then it refers to the number 3. If the radix is 16, then it refers to the number 17. + +You may have noticed that in our example, parseInt returned 3 when the input is 11, which corresponds to the Binary column in the table above. + +### Function arguments + +Functions in Javascript can be called with any number of arguments, even if they do not equal the number of declared function parameters. Missing arguments are treated as undefined and extra arguments are ignored (but stored in the array-like [arguments object][3]). + +``` +function foo(x, y) { + console.log(x); + console.log(y); +} +foo(1, 2); // logs 1, 2 +foo(1); // logs 1, undefined +foo(1, 2, 3); // logs 1, 2 +``` + +### map() + +We’re almost there! + +Map is a method in the Array prototype that returns a new array of the results of passing each element of the original array into a function. For example, the following code multiplies each element in the array by 3: + +``` +function multiplyBy3(x) { + return x * 3; +} +const result = [1, 2, 3, 4, 5].map(multiplyBy3);console.log(result); // logs [3, 6, 9, 12, 15]; +``` + +Now, let’s say I want to log each element using `map()` (with no return statements). I should be able to pass `console.log` as an argument into `map()`… right? + +``` +[1, 2, 3, 4, 5].map(console.log); +``` + +![][4]![][5] + +Something very strange is going on. Instead of logging just the value, each `console.log` call also logged the index and full array. 
+ +``` +[1, 2, 3, 4, 5].map(console.log);// The above is equivalent to[1, 2, 3, 4, 5].map( + (val, index, array) => console.log(val, index, array) +); +// and not equivalent to[1, 2, 3, 4, 5].map( + val => console.log(val) +); +``` + +When a function is passed into `map()`, for each iteration, **three arguments are passed into the function** : `currentValue`, `currentIndex`, and the full `array`. That is why three entries are logged for each iteration. + +We now have all the pieces we need to solve this mystery. + +### Bringing it together + +ParseInt takes two arguments: `string` and `radix`. If the radix provided is falsy, then by default, radix is set to 10. + +``` +parseInt('11'); => 11 +parseInt('11', 2); => 3 +parseInt('11', 16); => 17 +parseInt('11', undefined); => 11 (radix is falsy) +parseInt('11', 0); => 11 (radix is falsy) +``` + +Now let’s run our example step-by-step. + +``` +['1', '7', '11'].map(parseInt); => [1, NaN, 3]// First iteration: val = '1', index = 0, array = ['1', '7', '11']parseInt('1', 0, ['1', '7', '11']); => 1 +``` + +Since 0 is falsy, the radix is set to the default value 10. `parseInt()` only takes two arguments, so the third argument `['1', '7', '11']` is ignored. The string `'1'` in radix 10 refers to the number 1. + +``` +// Second iteration: val = '7', index = 1, array = ['1', '7', '11']parseInt('7', 1, ['1', '7', '11']); => NaN +``` + +In a radix 1 system, the symbol `'7'` does not exist. As with the first iteration, the last argument is ignored. So, `parseInt()` returns `NaN`. + +``` +// Third iteration: val = '11', index = 2, array = ['1', '7', '11']parseInt('11', 2, ['1', '7', '11']); => 3 +``` + +In a radix 2 (binary) system, the symbol `'11'` refers to the number 3. The last argument is ignored. + +### Summary (TLDR) + +`['1', '7', '11'].map(parseInt)` doesn’t work as intended because `map` passes three arguments into `parseInt()` on each iteration. The second argument `index` is passed into parseInt as a `radix` parameter. So, each string in the array is parsed using a different radix. `'7'` is parsed as radix 1, which is `NaN`, `'11'` is parsed as radix 2, which is 3. `'1'` is parsed as the default radix 10, because its index 0 is falsy. + +And so, the following code will work as intended: + +``` +['1', '7', '11'].map(numStr => parseInt(numStr)); +``` + +-------------------------------------------------------------------------------- + +via: https://medium.com/dailyjs/parseint-mystery-7c4368ef7b21 + +作者:[Eric Tong][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://medium.com/@erictongs +[b]: https://github.com/lujun9972 +[1]: https://miro.medium.com/max/60/1*uq5ZTlw6HLRRqbHaPYYL0g.png?q=20 +[2]: https://miro.medium.com/max/1400/1*uq5ZTlw6HLRRqbHaPYYL0g.png +[3]: https://javascriptweblog.wordpress.com/2011/01/18/javascripts-arguments-object-and-beyond/ +[4]: https://miro.medium.com/max/60/1*rbrCQ_aNOft_OjayDtL6xg.png?q=20 +[5]: https://miro.medium.com/max/1400/1*rbrCQ_aNOft_OjayDtL6xg.png diff --git a/sources/tech/20190612 Why use GraphQL.md b/sources/tech/20190612 Why use GraphQL.md new file mode 100644 index 0000000000..ad0d3a0056 --- /dev/null +++ b/sources/tech/20190612 Why use GraphQL.md @@ -0,0 +1,97 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Why use GraphQL?) 
+[#]: via: (https://opensource.com/article/19/6/why-use-graphql) +[#]: author: (Zach Lendon https://opensource.com/users/zachlendon/users/goncasousa/users/patrickhousley) + +Why use GraphQL? +====== +Here's why GraphQL is gaining ground on standard REST API technology. +![][1] + +[GraphQL][2], as I wrote [previously][3], is a next-generation API technology that is transforming both how client applications communicate with backend systems and how backend systems are designed. + +As a result of the support that began with the organization that founded it, Facebook, and continues with the backing of other technology giants such as Github, Twitter, and AirBnB, GraphQL's place as a linchpin technology for application systems seems secure; both now and long into the future. + +### GraphQL's ascent + +The rise in importance of mobile application performance and organizational agility has provided booster rockets for GraphQL's ascent to the top of modern enterprise architectures. + +Given that [REST][4] is a wildly popular architectural style that already allows mechanisms for data interaction, what advantages does this new technology provide over [REST][4]? The ‘QL’ in GraphQL stands for query language, and that is a great place to start. + +The ease at which different client applications within an organization can query only the data they need with GraphQL usurps alternative REST approaches and delivers real-world application performance boosts. With traditional [REST][4] API endpoints, client applications interrogate a server resource, and receive a response containing all the data that matches the request. If a successful response from a [REST][4] API endpoint returns 35 fields, the client application receives 35 fields + +### Fetching problems + +[REST][4] APIs traditionally provide no clean way for client applications to retrieve or update only the data they care about. This is often described as the “over-fetching” problem. With the prevalence of mobile applications in people’s day to day lives, the over-fetching problem has real world consequences. Every request a mobile application needs to make, every byte it has to send and receive, has an increasingly negative performance impact for end users. Users with slower data connections are particularly affected by suboptimal API design choices. Customers who experience poor performance using mobile applications are more likely to not purchase products and use services. Inefficient API designs cost companies money. + +“Over-fetching” isn’t alone - it has a partner in crime - “under-fetching”. Endpoints that, by default, return only a portion of the data a client actually needs require clients to make additional calls to satisfy their data needs - which requires additional HTTP requests. Because of the over and under fetching problems and their impact on client application performance, an API technology that facilitates efficient fetching has a chance to catch fire in the marketplace - and GraphQL has boldly jumped in and filled that void. + +### REST's response + +[REST][4] API designers, not willing to go down without a fight, have attempted to counter the mobile application performance problem through a mix of: + + * “include” and “exclude” query parameters, allowing client applications to specify which fields they want through a potentially long query format. + * “Composite” services, which combine multiple endpoints in a way that allow client applications to be more efficient in the number of requests they make and the data they receive. 
+ + + +While these patterns are a valiant attempt by the [REST][4] API community to address challenges mobile clients face, they fall short in a few key regards, namely: + + * Include and exclude query key/value pairs quickly get messy, in particular for deeper object graphs that require a nested dot notation syntax (or similar) to target data to include and exclude. Additionally, debugging issues with the query string in this model often requires manually breaking up a URL. + * Server implementations for include and exclude queries are often custom, as there is no standard way for server-based applications to handle the use of include and exclude queries, just as there is no standard way for include and exclude queries to be defined. + * The rise of composite services creates more tightly coupled back-end and front-end systems, requiring increasing coordination to deliver projects and turning once agile projects back to waterfall. This coordination and coupling has the painful side effect of slowing organizational agility. Additionally, composite services are by definition, not RESTful. + + + +### GraphQL's genesis + +For Facebook, GraphQL’s genesis was a response to pain felt and experiences learned from an HTML5-based version of their flagship mobile application back in 2011-2012. Understanding that improved performance was paramount, Facebook engineers realized that they needed a new API design to ensure peak performance. Likely taking the above [REST][4] limitations into consideration, and with needing to support different needs of a number of API clients, one can begin to understand the early seeds of what led co-creators Lee Byron and Dan Schaeffer, Facebook employees at the time, to create what has become known as GraphQL. + +With what is often a single GraphQL endpoint, through the GraphQL query language, client applications are able to reduce, often significantly, the number of network calls they need to make, and ensure that they only are retrieving the data they need. In many ways, this harkens back to earlier models of web programming, where client application code would directly query back-end systems - some might remember writing SQL queries with JSTL on JSPs 10-15 years ago for example! + +The biggest difference now is with GraphQL, we have a specification that is implemented across a variety of client and server languages and libraries. And with GraphQL being an API technology, we have decoupled the back-end and front-end application systems by introducing an intermediary GraphQL application layer that provides a mechanism to access organizational data in a manner that aligns with an organization’s business domain(s). + +Beyond solving technical challenges experienced by software engineering teams, GraphQL has also been a boost to organizational agility, in particular in the enterprise. GraphQL-enabled organizational agility increases are commonly attributable to the following: + + * Rather than creating new endpoints when 1 or more new fields are needed by clients, GraphQL API designers and developers are able to include those fields in existing graph implementations, exposing new capabilities in a fashion that requires less development effort and less change across application systems. + * By encouraging API design teams to focus more on defining their object graph and be less focused on what client applications are delivering, the speed at which front-end and back-end software teams deliver solutions for customers has increasingly decoupled. 
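
To make the earlier point about fetching only what you need concrete, here is a small GraphQL query sketch. The type and field names are invented for illustration and do not come from any particular schema; what matters is that the client names exactly the fields it wants, and the response mirrors that shape:

```
query {
  user(id: "2112") {
    name
    email
    friends {
      name
    }
  }
}
```

A typical REST endpoint for the same resource would return every field of the user record, and the friends list might require a second request entirely, whereas here the client receives only `name`, `email`, and each friend's `name`, regardless of how many other fields the server stores.
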
+ + + +### Considerations before adoption + +Despite GraphQL’s compelling benefits, GraphQL is not without its implementation challenges. A few examples include: + + * Caching mechanisms around [REST][4] APIs are much more mature. + * The patterns used to build APIs using [REST][4] are much more well established. + * While engineers may be more attracted to newer technologies like GraphQL, the talent pool in the marketplace is much broader for building [REST][4]-based solutions vs. GraphQL. + + + +### Conclusion + +By providing both a boost to performance and organizational agility, GraphQL's adoption by companies has skyrocketed in the past few years. It does, however, have some maturing to do in comparison to the RESTful ecosystem of API design. + +One of the great benefits of GraphQL is that it’s not designed as a wholesale replacement for alternative API solutions. Instead, GraphQL can be implemented to complement or enhance existing APIs. As a result, companies are encouraged to explore incrementally adopting GraphQL where it makes the most sense for them - where they find it has the greatest positive impact on application performance and organizational agility. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/why-use-graphql + +作者:[Zach Lendon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/zachlendon/users/goncasousa/users/patrickhousley +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D +[2]: https://graphql.org/ +[3]: https://opensource.com/article/19/6/what-is-graphql +[4]: https://en.wikipedia.org/wiki/Representational_state_transfer diff --git a/sources/tech/20190613 Continuous integration testing for the Linux kernel.md b/sources/tech/20190613 Continuous integration testing for the Linux kernel.md new file mode 100644 index 0000000000..d837cd9e67 --- /dev/null +++ b/sources/tech/20190613 Continuous integration testing for the Linux kernel.md @@ -0,0 +1,98 @@ +[#]: collector: (lujun9972) +[#]: translator: (LazyWolfLin) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Continuous integration testing for the Linux kernel) +[#]: via: (https://opensource.com/article/19/6/continuous-kernel-integration-linux) +[#]: author: (Major Hayden https://opensource.com/users/mhayden) + +Continuous integration testing for the Linux kernel +====== +How this team works to prevent bugs from being merged into the Linux +kernel. +![Linux kernel source code \(C\) in Visual Studio Code][1] + +With 14,000 changesets per release from over 1,700 different developers, it's clear that the Linux kernel moves quickly, and brings plenty of complexity. Kernel bugs range from small annoyances to larger problems, such as system crashes and data loss. + +As the call for continuous integration (CI) grows for more and more projects, the [Continuous Kernel Integration (CKI)][2] team forges ahead with a single mission: prevent bugs from being merged into the kernel. + +### Linux testing problems + +Many Linux distributions test the Linux kernel when needed. This testing often occurs around release time, or when users find a bug. 
+ +Unrelated issues sometimes appear, and maintainers scramble to find which patch in a changeset full of tens of thousands of patches caused the new, unrelated bug. Diagnosing the bug may require specialized hardware, a series of triggers, and specialized knowledge of that portion of the kernel. + +#### CI and Linux + +Most modern software repositories have some sort of automated CI testing that tests commits before they find their way into the repository. This automated testing allows the maintainers to find software quality issues, along with most bugs, by reviewing the CI report. Simpler projects, such as a Python library, come with tons of tools to make this process easier. + +Linux must be configured and compiled prior to any testing. Doing so takes time and compute resources. In addition, that kernel must boot in a virtual machine or on a bare metal machine for testing. Getting access to certain system architectures requires additional expense or very slow emulation. From there, someone must identify a set of tests which trigger the bug or verify the fix. + +#### How the CKI team works + +The CKI team at Red Hat currently follows changes from several internal kernels, as well as upstream kernels such as the [stable kernel tree][3]. We watch for two critical events in each repository: + + 1. When maintainers merge pull requests or patches, and the resulting commits in the repository change. + + 2. When developers propose changes for merging via patchwork or the stable patch queue. + + + + +As these events occur, automation springs into action and [GitLab CI pipelines][4] begin the testing process. Once the pipeline runs [linting][5] scripts, merges any patches, and compiles the kernel for multiple architectures, the real testing begins. We compile kernels in under six minutes for four architectures and submit feedback to the stable mailing list usually in two hours or less. Over 100,000 kernel tests run each month and over 11,000 GitLab pipelines have completed (since January 2019). + +Each kernel is booted on its native architecture, which includes: + +● [aarch64][6]: 64-bit [ARM][7], such as the [Cavium (now Marvell) ThunderX][8]. + +● [ppc64/ppc64le][9]: Big and little endian [IBM POWER][10] systems. + +● [s390x][11]: [IBM Zseries][12] mainframes. + +● [x86_64][13]: [Intel][14] and [AMD][15] workstations, laptops, and servers. + +Multiple tests run on these kernels, including the [Linux Test Project (LTP)][16], which contains a myriad of tests using a common test harness. My CKI team open-sourced over 44 tests with more on the way. + +### Get involved + +The upstream kernel testing effort grows day-by-day. Many companies provide test output for various kernels, including [Google][17], Intel, [Linaro][18], and [Sony][19]. Each effort is focused on bringing value to the upstream kernel as well as each company’s customer base. + +If you or your company want to join the effort, please come to the [Linux Plumbers Conference 2019][20] in Lisbon, Portugal. Join us at the Kernel CI hackfest during the two days after the conference, and drive the future of rapid kernel testing. + +For more details, [review the slides][21] from my Texas Linux Fest 2019 talk. 
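
To give a rough idea of the shape of such a pipeline, here is a deliberately minimal, hypothetical `.gitlab-ci.yml` in the spirit of the process described above. It is not the CKI project's actual configuration, and the helper scripts it calls are invented placeholders:

```
stages:
  - lint
  - build
  - test

lint:
  stage: lint
  script:
    - ./scripts/check-patch.sh        # placeholder lint/style check

build-x86_64:
  stage: build
  script:
    - make defconfig
    - make -j"$(nproc)"

test-x86_64:
  stage: test
  script:
    - ./scripts/boot-and-run-tests.sh # placeholder boot + test run
```

A real kernel CI setup adds far more: per-architecture build jobs, artifact handling, and hardware or virtual machine provisioning for the boot tests.
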
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/continuous-kernel-integration-linux + +作者:[Major Hayden][a] +选题:[lujun9972][b] +译者:[LazyWolfLin](https://github.com/LazyWolfLin) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mhayden +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_kernel_clang_vscode.jpg?itok=fozZ4zrr (Linux kernel source code (C) in Visual Studio Code) +[2]: https://cki-project.org/ +[3]: https://www.kernel.org/doc/html/latest/process/stable-kernel-rules.html +[4]: https://docs.gitlab.com/ee/ci/pipelines.html +[5]: https://en.wikipedia.org/wiki/Lint_(software) +[6]: https://en.wikipedia.org/wiki/ARM_architecture +[7]: https://www.arm.com/ +[8]: https://www.marvell.com/server-processors/thunderx-arm-processors/ +[9]: https://en.wikipedia.org/wiki/Ppc64 +[10]: https://www.ibm.com/it-infrastructure/power +[11]: https://en.wikipedia.org/wiki/Linux_on_z_Systems +[12]: https://www.ibm.com/it-infrastructure/z +[13]: https://en.wikipedia.org/wiki/X86-64 +[14]: https://www.intel.com/ +[15]: https://www.amd.com/ +[16]: https://github.com/linux-test-project/ltp +[17]: https://www.google.com/ +[18]: https://www.linaro.org/ +[19]: https://www.sony.com/ +[20]: https://www.linuxplumbersconf.org/ +[21]: https://docs.google.com/presentation/d/1T0JaRA0wtDU0aTWTyASwwy_ugtzjUcw_ZDmC5KFzw-A/edit?usp=sharing diff --git a/sources/tech/20190614 A data-centric approach to patching systems with Ansible.md b/sources/tech/20190614 A data-centric approach to patching systems with Ansible.md new file mode 100644 index 0000000000..ade27106f3 --- /dev/null +++ b/sources/tech/20190614 A data-centric approach to patching systems with Ansible.md @@ -0,0 +1,141 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (A data-centric approach to patching systems with Ansible) +[#]: via: (https://opensource.com/article/19/6/patching-systems-ansible) +[#]: author: (Mark Phillips https://opensource.com/users/markp/users/markp) + +A data-centric approach to patching systems with Ansible +====== +Use data and variables in Ansible to control selective patching. +![metrics and data shown on a computer screen][1] + +When you're patching Linux machines these days, I could forgive you for asking, "How hard can it be?" Sure, a **yum update -y** will sort it for you in a flash. + +![Animation of updating Linux][2] + +But for those of us working with more than a handful of machines, it's not that simple. Sometimes an update can create unintended consequences across many machines, and you're left wondering how to put things back the way they were. Or you might think, "Should I have applied the critical patch on its own and saved myself a lot of pain?" + +Faced with these sorts of challenges in the past led me to build a way to cherry-pick the updates needed and automate their application. + +### A flexible idea + +Here's an overview of the process: + +![Overview of the Ansible patch process][3] + +This system doesn't permit machines to have direct access to vendor patches. Instead, they're selectively subscribed to repositories. 
Repositories contain only the patches that are required––although I'd encourage you to give this careful consideration so you don't end up with a proliferation (another management overhead you'll not thank yourself for creating). + +Now patching a machine comes down to 1) The repositories it's subscribed to and 2) Getting the "thumbs up" to patch. By using variables to control both subscription and permission to patch, we don't need to tamper with the logic (the plays); we only need to alter the data. + +Here is an [example Ansible role][4] that fulfills both requirements. It manages repository subscriptions and has a simple variable that controls running the patch command. + + +``` +\--- +# tasks file for patching + +\- name: Include OS version specific differences +include_vars: "{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml" + +\- name: Ensure Yum repositories are configured +template: +src: template.repo.j2 +dest: "/etc/yum.repos.d/{{ item.label }}.repo" +owner: root +group: root +mode: 0644 +when: patching_repos is defined +loop: "{{ patching_repos }}" +notify: patching-clean-metadata + +\- meta: flush_handlers + +\- name: Ensure OS shipped yum repo configs are absent +file: +path: "/etc/yum.repos.d/{{ patching_default_repo_def }}" +state: absent + +# add flexibility of repos here +\- name: Patch this host +shell: 'yum update -y' +args: +warn: false +when: patchme|bool +register: result +changed_when: "'No packages marked for update' not in result.stdout" +``` + +### Scenarios + +In our fictitious, large, globally dispersed environment (of four hosts), we have: + + * Two web servers + * Two database servers + * An application comprising one of each server type + + + +OK, so this number of machines isn't "enterprise-scale," but remove the counts and imagine the environment as multiple, tiered, geographically dispersed applications. We want to patch elements of the stack across server types, application stacks, geographies, or the whole estate. + +![Example patch groups][5] + +Using only changes to variables, can we achieve that flexibility? Sort of. Ansible's [default behavior][6] for hashes is to overwrite. In our example, the **patching_repos** variable for the **db1** and **web1** hosts are overwritten because of their later occurrence in our inventory. Hmm, a bit of a pickle. There are two ways to manage this: + + 1. Multiple inventory files + 2. [Change the variable behavior][7] + + + +I chose number one because it maintains clarity. Once you start merging variables, it's hard to find where a hash appears and how it's put together. Using the default behavior maintains clarity, and it's the method I'd encourage you to stick with for your own sanity. + +### Get on with it then + +Let's run the play, focusing only on the database servers. + +Did you notice the final step— **Patch this host** —says **skipping**? That's because we didn't set [the controlling variable][8] to do the patching. What we have done is set up the repository subscriptions to be ready. + +So let's run the play again, limiting it to the web servers and tell it to do the patching. I ran this example with verbose output so you can see the yum updates happening. + +Patching an application stack requires another inventory file, as mentioned above. Let's rerun the play. + +Patching hosts in the European geography is the same scenario as the application stack, so another inventory file is required. + +Now that all the repository subscriptions are configured, let's just patch the whole estate. 
Note the **app1** and **emea** groups don't need the inventory here––they were only being used to separate the repository definition and setup. Now, **yum update -y** patches everything. If you didn't want to capture those repositories, they could be configured as **enabled=0**. + +### Conclusion + +The flexibility comes from how we group our hosts. Because of default hash behavior, we need to think about overlaps—the easiest way, to my mind at least, is with separate inventories. + +With regard to repository setup, I'm sure you've already said to yourself, "Ah, but the cherry-picking isn't that simple!" There is additional overhead in this model to download patches, test that they work together, and bundle them with dependencies in a repository. With complementary tools, you could automate the process, and in a large-scale environment, you'd have to. + +Part of me is drawn to just applying full patch sets as a simpler and easier way to go; skip the cherry-picking part and apply a full set of patches to a "standard build." I've seen this approach applied to both Unix and Windows estates with enforced quarterly updates. + +I’d be interested in hearing your experiences of patching regimes, and the approach proposed here, in the comments below or [via Twitter][9]. + +Many companies still have massive data centres full of hardware. Here's how Ansible can help. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/patching-systems-ansible + +作者:[Mark Phillips][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/markp/users/markp +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_data_dashboard_system_computer_analytics.png?itok=oxAeIEI- (metrics and data shown on a computer screen) +[2]: https://opensource.com/sites/default/files/uploads/quick_update.gif (Animation of updating Linux) +[3]: https://opensource.com/sites/default/files/uploads/patch_process.png (Overview of the Ansible patch process) +[4]: https://github.com/phips/ansible-patching/blob/master/roles/patching/tasks/main.yml +[5]: https://opensource.com/sites/default/files/uploads/patch_groups.png (Example patch groups) +[6]: https://docs.ansible.com/ansible/2.3/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable +[7]: https://docs.ansible.com/ansible/2.3/intro_configuration.html#sts=hash_behaviour +[8]: https://github.com/phips/ansible-patching/blob/master/roles/patching/defaults/main.yml#L4 +[9]: https://twitter.com/thismarkp diff --git a/sources/tech/20190614 Learning by teaching, and speaking, in open source.md b/sources/tech/20190614 Learning by teaching, and speaking, in open source.md new file mode 100644 index 0000000000..b2bd11f7b4 --- /dev/null +++ b/sources/tech/20190614 Learning by teaching, and speaking, in open source.md @@ -0,0 +1,75 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Learning by teaching, and speaking, in open source) +[#]: via: (https://opensource.com/article/19/6/conference-proposal-tips) +[#]: author: (Lauren Maffeo https://opensource.com/users/lmaffeo) + +Learning by teaching, and speaking, in open source +====== +Want to speak at an open source conference? 
Here are a few tips to get +started. +![photo of microphone][1] + +_"Everything good, everything magical happens between the months of June and August."_ + +When Jenny Han wrote these words, I doubt she had the open source community in mind. Yet, for our group of dispersed nomads, the summer brings a wave of conferences that allow us to connect in person. + +From [OSCON][2] in Portland to [Drupal GovCon][3] in Bethesda, and [Open Source Summit North America][4] in San Diego, there’s no shortage of ways to match faces with Twitter avatars. After months of working on open source projects via Slack and Google Hangouts, the face time that these summer conferences offer is invaluable. + +The knowledge attendees gain at open source conferences serves as the spark for new contributions. And speaking from experience, the best way to gain value from these conferences is for you to _speak_ at them. + +But, does the thought of speaking give you chills? Hear me out before closing your browser. + +Last August, I arrived at the Vancouver Convention Centre to give a lightning talk and speak on a panel at [Open Source Summit North America 2018][5]. It’s no exaggeration to say that this conference—and applying to speak at it—transformed my career. Nine months later, I’ve: + + * Become a Community Moderator for Opensource.com + * Spoken at two additional open source conferences ([All Things Open][6] and [DrupalCon North America][7]) + * Made my first GitHub pull request + * Taken "Intro to Python" and written my first lines of code in [React][8] + * Taken the first steps towards writing a book proposal + + + +I don’t discount how much time, effort, and money are [involved in conference speaking][9]. Regardless, I can say with certainty that nothing else has grown my career so drastically. In the process, I met strangers who quickly became friends and unofficial mentors. Their feedback, advice, and connections have helped me grow in ways that I hadn’t envisioned this time last year. + +Had I not boarded that flight to Canada, I would not be where I am today. + +So, have I convinced you to take the first step? It’s easier than you think. If you want to [apply to speak at an open source conference][10] but are stuck on what to discuss, ask yourself this question: **What do I want to learn?** + +You don’t have to be an expert on the topics that you pitch. You don’t have to know everything about JavaScript, [ML][11], or Linux to [write conference proposals][12] on these topics. + +Here’s what you _do_ need: A willingness to do the work of teaching yourself these topics. And like any self-directed task, you’ll be most willing to do this work if you're invested in the subject. + +As summer conference season draws closer, soak up all the knowledge you can. Then, ask yourself what you want to learn more about, and apply to speak about those subjects at fall/winter open source events. + +After all, one of the most effective ways to learn is by [teaching a topic to someone else][13]. So, what will the open source community learn from you? 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/conference-proposal-tips + +作者:[Lauren Maffeo][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/lmaffeo +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/microphone_speak.png?itok=wW6elbl5 (photo of microphone) +[2]: https://conferences.oreilly.com/oscon/oscon-or +[3]: https://www.drupalgovcon.org +[4]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2019/ +[5]: https://events.linuxfoundation.org/events/open-source-summit-north-america-2018/ +[6]: https://allthingsopen.org +[7]: https://lab.getapp.com/bias-in-ai-drupalcon-debrief/ +[8]: https://reactjs.org +[9]: https://twitter.com/venikunche/status/1130868572098572291 +[10]: https://opensource.com/article/19/1/public-speaking-resolutions +[11]: https://en.wikipedia.org/wiki/ML_(programming_language) +[12]: https://dev.to/aspittel/public-speaking-as-a-developer-2ihj +[13]: https://opensource.com/article/19/5/learn-python-teaching diff --git a/sources/tech/20190614 What is a Java constructor.md b/sources/tech/20190614 What is a Java constructor.md new file mode 100644 index 0000000000..66cd30110d --- /dev/null +++ b/sources/tech/20190614 What is a Java constructor.md @@ -0,0 +1,158 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (What is a Java constructor?) +[#]: via: (https://opensource.com/article/19/6/what-java-constructor) +[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/ashleykoree) + +What is a Java constructor? +====== +Constructors are powerful components of programming. Use them to unlock +the full potential of Java. +![][1] + +Java is (disputably) the undisputed heavyweight in open source, cross-platform programming. While there are many [great][2] [cross-platform][2] [frameworks][3], few are as unified and direct as [Java][4]. + +Of course, Java is also a pretty complex language with subtleties and conventions all its own. One of the most common questions about Java relates to **constructors** : What are they and what are they used for? + +Put succinctly: a constructor is an action performed upon the creation of a new **object** in Java. When your Java application creates an instance of a class you have written, it checks for a constructor. If a constructor exists, Java runs the code in the constructor while creating the instance. That's a lot of technical terms crammed into a few sentences, but it becomes clearer when you see it in action, so make sure you have [Java installed][5] and get ready for a demo. + +### Life without constructors + +If you're writing Java code, you're already using constructors, even though you may not know it. All classes in Java have a constructor because even if you haven't created one, Java does it for you when the code is compiled. For the sake of demonstration, though, ignore the hidden constructor that Java provides (because a default constructor adds no extra features), and take a look at life without an explicit constructor. + +Suppose you're writing a simple Java dice-roller application because you want to produce a pseudo-random number for a game. 
First, you might create your dice class to represent a physical die. Knowing that you play a lot of [Dungeons and Dragons][6], you decide to create a 20-sided die. In this sample code, the variable **dice** is the integer 20, representing the maximum possible die roll (a 20-sided die cannot roll more than 20). The variable **roll** is a placeholder for what will eventually be a random number, and **rand** serves as the random number generator.

```
import java.util.Random;

public class DiceRoller {
    private int dice = 20;
    private int roll;
    private Random rand = new Random();
```

Next, create a function in the **DiceRoller** class to execute the steps the computer must take to emulate a die roll: Take an integer from **rand** and assign it to the **roll** variable, add 1 to account for the fact that Java starts counting at 0 but a 20-sided die has no 0 value, then print the results.

```
    public void Roller() {
        roll = rand.nextInt(dice);
        roll += 1;
        System.out.println(roll);
    }
```

Finally, spawn an instance of the **DiceRoller** class and invoke its primary function, **Roller**:

```
    // main loop
    public static void main(String[] args) {
        System.out.printf("You rolled a ");

        DiceRoller App = new DiceRoller();
        App.Roller();
    }
}
```

As long as you have a Java development environment installed (such as [OpenJDK][10]), you can run your application from a terminal:

```
$ java dice.java
You rolled a 12
```

In this example, there is no explicit constructor. It's a perfectly valid and legal Java application, but it's a little limited. For instance, if you set your game of Dungeons and Dragons aside for the evening to play some Yahtzee, you would need 6-sided dice. In this simple example, it wouldn't be that much trouble to change the code, but that's not a realistic option in complex code. One way you could solve this problem is with a constructor.

### Constructors in action

The **DiceRoller** class in this example project represents a virtual dice factory: When it's called, it creates a virtual die that is then "rolled." However, by writing a custom constructor, you can make your Dice Roller application ask what kind of die you'd like to emulate.

Most of the code is the same, with the exception of a constructor accepting some number of sides. This number doesn't exist yet, but it will be created later.

```
import java.util.Random;

public class DiceRoller {
    private int dice;
    private int roll;
    private Random rand = new Random();

    // constructor
    public DiceRoller(int sides) {
        dice = sides;
    }
```

The function emulating a roll remains unchanged:

```
    public void Roller() {
        roll = rand.nextInt(dice);
        roll += 1;
        System.out.println(roll);
    }
```

The main block of code feeds whatever arguments you provide when running the application.
Were this a complex application, you would parse the arguments carefully and check for unexpected results, but for this sample, the only precaution taken is converting the argument string to an integer type: + + +``` +public static void main ([String][9][] args) { +[System][8].out.printf("You rolled a "); +DiceRoller App = new DiceRoller( [Integer][11].parseInt(args[0]) ); +App.Roller(); +} +} +``` + +Launch the application and provide the number of sides you want your die to have: + + +``` +$ java dice.java 20 +You rolled a 10 +$ java dice.java 6 +You rolled a 2 +$ java dice.java 100 +You rolled a 44 +``` + +The constructor has accepted your input, so when the class instance is created, it is created with the **sides** variable set to whatever number the user dictates. + +Constructors are powerful components of programming. Practice using them to unlock the full potential of Java. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/what-java-constructor + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth/users/ashleykoree +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/build_structure_tech_program_code_construction.png?itok=nVsiLuag +[2]: https://opensource.com/resources/python +[3]: https://opensource.com/article/17/4/pyqt-versus-wxpython +[4]: https://opensource.com/resources/java +[5]: https://openjdk.java.net/install/index.html +[6]: https://opensource.com/article/19/5/free-rpg-day +[7]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+random +[8]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+system +[9]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+string +[10]: https://openjdk.java.net/ +[11]: http://www.google.com/search?hl=en&q=allinurl%3Adocs.oracle.com+javase+docs+api+integer diff --git a/sources/tech/20190617 How To Check Linux Package Version Before Installing It.md b/sources/tech/20190617 How To Check Linux Package Version Before Installing It.md new file mode 100644 index 0000000000..0ce1a510ac --- /dev/null +++ b/sources/tech/20190617 How To Check Linux Package Version Before Installing It.md @@ -0,0 +1,281 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Check Linux Package Version Before Installing It) +[#]: via: (https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +How To Check Linux Package Version Before Installing It +====== + +![Check Linux Package Version][1] + +Most of you will know how to [**find the version of an installed package**][2] in Linux. But, what would you do to find the packages’ version which are not installed in the first place? No problem! This guide describes how to check Linux package version before installing it in Debian and its derivatives like Ubuntu. This small tip might be helpful for those wondering what version they would get before installing a package. + +### Check Linux Package Version Before Installing It + +There are many ways to find a package’s version even if it is not installed already in DEB-based systems. 
Here I have given a few methods. + +##### Method 1 – Using Apt + +The quick and dirty way to check a package version, simply run: + +``` +$ apt show +``` + +**Example:** + +``` +$ apt show vim +``` + +**Sample output:** + +``` +Package: vim +Version: 2:8.0.1453-1ubuntu1.1 +Priority: optional +Section: editors +Origin: Ubuntu +Maintainer: Ubuntu Developers <[email protected]> +Original-Maintainer: Debian Vim Maintainers <[email protected]> +Bugs: https://bugs.launchpad.net/ubuntu/+filebug +Installed-Size: 2,852 kB +Provides: editor +Depends: vim-common (= 2:8.0.1453-1ubuntu1.1), vim-runtime (= 2:8.0.1453-1ubuntu1.1), libacl1 (>= 2.2.51-8), libc6 (>= 2.15), libgpm2 (>= 1.20.7), libpython3.6 (>= 3.6.5), libselinux1 (>= 1.32), libtinfo5 (>= 6) +Suggests: ctags, vim-doc, vim-scripts +Homepage: https://vim.sourceforge.io/ +Task: cloud-image, server +Supported: 5y +Download-Size: 1,152 kB +APT-Sources: http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages +Description: Vi IMproved - enhanced vi editor + Vim is an almost compatible version of the UNIX editor Vi. + . + Many new features have been added: multi level undo, syntax + highlighting, command line history, on-line help, filename + completion, block operations, folding, Unicode support, etc. + . + This package contains a version of vim compiled with a rather + standard set of features. This package does not provide a GUI + version of Vim. See the other vim-* packages if you need more + (or less). + +N: There is 1 additional record. Please use the '-a' switch to see it +``` + +As you can see in the above output, “apt show” command displays, many important details of the package such as, + + 1. package name, + 2. version, + 3. origin (from where the vim comes from), + 4. maintainer, + 5. home page of the package, + 6. dependencies, + 7. download size, + 8. description, + 9. and many. + + + +So, the available version of Vim package in the Ubuntu repositories is **8.0.1453**. This is the version I get if I install it on my Ubuntu system. + +Alternatively, use **“apt policy”** command if you prefer short output: + +``` +$ apt policy vim +vim: + Installed: (none) + Candidate: 2:8.0.1453-1ubuntu1.1 + Version table: + 2:8.0.1453-1ubuntu1.1 500 + 500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages + 500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages + 2:8.0.1453-1ubuntu1 500 + 500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages +``` + +Or even shorter: + +``` +$ apt list vim +Listing... Done +vim/bionic-updates,bionic-security 2:8.0.1453-1ubuntu1.1 amd64 +N: There is 1 additional version. Please use the '-a' switch to see it +``` + +**Apt** is the default package manager in recent Ubuntu versions. So, this command is just enough to find the detailed information of a package. It doesn’t matter whether given package is installed or not. This command will simply list the given package’s version along with all other details. + +##### Method 2 – Using Apt-get + +To find a package version without installing it, we can use **apt-get** command with **-s** option. + +``` +$ apt-get -s install vim +``` + +**Sample output:** + +``` +NOTE: This is only a simulation! + apt-get needs root privileges for real execution. + Keep also in mind that locking is deactivated, + so don't depend on the relevance to the real current situation! +Reading package lists... Done +Building dependency tree +Reading state information... 
Done +Suggested packages: + ctags vim-doc vim-scripts +The following NEW packages will be installed: + vim +0 upgraded, 1 newly installed, 0 to remove and 45 not upgraded. +Inst vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64]) +Conf vim (2:8.0.1453-1ubuntu1.1 Ubuntu:18.04/bionic-updates, Ubuntu:18.04/bionic-security [amd64]) +``` + +Here, -s option indicates **simulation**. As you can see in the output, It performs no action. Instead, It simply performs a simulation to let you know what is going to happen when you install the Vim package. + +You can substitute “install” option with “upgrade” option to see what will happen when you upgrade a package. + +``` +$ apt-get -s upgrade vim +``` + +##### Method 3 – Using Aptitude + +**Aptitude** is an ncurses and commandline-based front-end to APT package manger in Debian and its derivatives. + +To find the package version with Aptitude, simply run: + +``` +$ aptitude versions vim +p 2:8.0.1453-1ubuntu1 bionic 500 +p 2:8.0.1453-1ubuntu1.1 bionic-security,bionic-updates 500 +``` + +You can also use simulation option ( **-s** ) to see what would happen if you install or upgrade package. + +``` +$ aptitude -V -s install vim +The following NEW packages will be installed: + vim [2:8.0.1453-1ubuntu1.1] +0 packages upgraded, 1 newly installed, 0 to remove and 45 not upgraded. +Need to get 1,152 kB of archives. After unpacking 2,852 kB will be used. +Would download/install/remove packages. +``` + +Here, **-V** flag is used to display detailed information of the package version. + +Similarly, just substitute “install” with “upgrade” option to see what would happen if you upgrade a package. + +``` +$ aptitude -V -s upgrade vim +``` + +Another way to find the non-installed package’s version using Aptitude command is: + +``` +$ aptitude search vim -F "%c %p %d %V" +``` + +Here, + + * **-F** is used to specify which format should be used to display the output, + * **%c** – status of the given package (installed or not installed), + * **%p** – name of the package, + * **%d** – description of the package, + * **%V** – version of the package. + + + +This is helpful when you don’t know the full package name. This command will list all packages that contains the given string (i.e vim). + +Here is the sample output of the above command: + +``` +[...] +p vim Vi IMproved - enhanced vi editor 2:8.0.1453-1ub +p vim-tlib Some vim utility functions 1.23-1 +p vim-ultisnips snippet solution for Vim 3.1-3 +p vim-vimerl Erlang plugin for Vim 1.4.1+git20120 +p vim-vimerl-syntax Erlang syntax for Vim 1.4.1+git20120 +p vim-vimoutliner script for building an outline editor on top of Vim 0.3.4+pristine +p vim-voom Vim two-pane outliner 5.2-1 +p vim-youcompleteme fast, as-you-type, fuzzy-search code completion engine for Vim 0+20161219+git +``` + +##### Method 4 – Using Apt-cache + +**Apt-cache** command is used to query APT cache in Debian-based systems. It is useful for performing many operations on APT’s package cache. One fine example is we can [**list installed applications from a certain repository/ppa**][3]. + +Not just installed applications, we can also find the version of a package even if it is not installed. 
For instance, the following command will find the version of Vim package: + +``` +$ apt-cache policy vim +``` + +Sample output: + +``` +vim: + Installed: (none) + Candidate: 2:8.0.1453-1ubuntu1.1 + Version table: + 2:8.0.1453-1ubuntu1.1 500 + 500 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages + 500 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages + 2:8.0.1453-1ubuntu1 500 + 500 http://archive.ubuntu.com/ubuntu bionic/main amd64 Packages +``` + +As you can see in the above output, Vim is not installed. If you wanted to install it, you would get version **8.0.1453**. It also displays from which repository the vim package is coming from. + +##### Method 5 – Using Apt-show-versions + +**Apt-show-versions** command is used to list installed and available package versions in Debian and Debian-based systems. It also displays the list of all upgradeable packages. It is quite handy if you have a mixed stable/testing environment. For instance, if you have enabled both stable and testing repositories, you can easily find the list of applications from testing and also you can upgrade all packages in testing. + +Apt-show-versions is not installed by default. You need to install it using command: + +``` +$ sudo apt-get install apt-show-versions +``` + +Once installed, run the following command to find the version of a package,for example Vim: + +``` +$ apt-show-versions -a vim +vim:amd64 2:8.0.1453-1ubuntu1 bionic archive.ubuntu.com +vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-security security.ubuntu.com +vim:amd64 2:8.0.1453-1ubuntu1.1 bionic-updates archive.ubuntu.com +vim:amd64 not installed +``` + +Here, **-a** switch prints all available versions of the given package. + +If the given package is already installed, you need not to use **-a** option. In that case, simply run: + +``` +$ apt-show-versions vim +``` + +And, that’s all. If you know any other methods, please share them in the comment section below. I will check and update this guide. 
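As a small bonus, if you only need the bare version string (inside a shell script, for example), you can filter the **apt-cache policy** output shown earlier with a one-liner like the following. The version you see will of course depend on your release and on which repositories you have enabled.

```
$ apt-cache policy vim | awk '/Candidate:/ {print $2}'
2:8.0.1453-1ubuntu1.1
```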
+ +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-check-linux-package-version-before-installing-it/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Check-Linux-Package-Version-720x340.png +[2]: https://www.ostechnix.com/find-package-version-linux/ +[3]: https://www.ostechnix.com/list-installed-packages-certain-repository-linux/ diff --git a/sources/tech/20190617 How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers.md b/sources/tech/20190617 How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers.md new file mode 100644 index 0000000000..1a6f5988e9 --- /dev/null +++ b/sources/tech/20190617 How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers.md @@ -0,0 +1,173 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers) +[#]: via: (https://www.linuxtechi.com/vlan-tagged-nic-ethernet-card-centos-rhel-servers/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +How to Use VLAN tagged NIC (Ethernet Card) on CentOS and RHEL Servers +====== + +There are some scenarios where we want to assign multiple IPs from different **VLAN** on the same Ethernet card (nic) on Linux servers ( **CentOS** / **RHEL** ). This can be done by enabling VLAN tagged interface. But for this to happen first we must make sure multiple VLANs are attached to port on switch or in other words we can say we should configure trunk port by adding multiple VLANs on switch. + + + +Let’s assume we have a Linux Server, there we have two Ethernet cards (enp0s3 & enp0s8), first NIC ( **enp0s3** ) will be used for data traffic and second NIC ( **enp0s8** ) will be used for control / management traffic. For Data traffic I will using multiple VLANs (or will assign multiple IPs from different VLANs on data traffic ethernet card). + +I am assuming the port from switch which is connected to my server data NIC is configured as trunk port by mapping the multiple VLANs to it. + +Following are the VLANs which is mapped to data traffic Ethernet Card (NIC): + + * VLAN ID (200), VLAN N/W = 172.168.10.0/24 + * VLAN ID (300), VLAN N/W = 172.168.20.0/24 + + + +To use VLAN tagged interface on CentOS 7 / RHEL 7 / CentOS 8 /RHEL 8 systems, [kernel module][1] **8021q** must be loaded. 
+ +Use the following command to load the kernel module “8021q” + +``` +[root@linuxtechi ~]# lsmod | grep -i 8021q +[root@linuxtechi ~]# modprobe --first-time 8021q +[root@linuxtechi ~]# lsmod | grep -i 8021q +8021q 29022 0 +garp 14384 1 8021q +mrp 18542 1 8021q +[root@linuxtechi ~]# +``` + +Use below modinfo command to display information about kernel module “8021q” + +``` +[root@linuxtechi ~]# modinfo 8021q +filename: /lib/modules/3.10.0-327.el7.x86_64/kernel/net/8021q/8021q.ko +version: 1.8 +license: GPL +alias: rtnl-link-vlan +rhelversion: 7.2 +srcversion: 2E63BD725D9DC11C7DA6190 +depends: mrp,garp +intree: Y +vermagic: 3.10.0-327.el7.x86_64 SMP mod_unload modversions +signer: CentOS Linux kernel signing key +sig_key: 79:AD:88:6A:11:3C:A0:22:35:26:33:6C:0F:82:5B:8A:94:29:6A:B3 +sig_hashalgo: sha256 +[root@linuxtechi ~]# +``` + +Now tagged (or mapped) the VLANs 200 and 300 to NIC enp0s3 using the [ip command][2] + +``` +[root@linuxtechi ~]# ip link add link enp0s3 name enp0s3.200 type vlan id 200 +``` + +Bring up the interface using below ip command: + +``` +[root@linuxtechi ~]# ip link set dev enp0s3.200 up +``` + +Similarly mapped the VLAN 300 to NIC enp0s3 + +``` +[root@linuxtechi ~]# ip link add link enp0s3 name enp0s3.300 type vlan id 300 +[root@linuxtechi ~]# ip link set dev enp0s3.300 up +[root@linuxtechi ~]# +``` + +Now view the tagged interface status using ip command: + +[![tagged-interface-ip-command][3]][4] + +Now we can assign the IP address to tagged interface from their respective VLANs using beneath ip command, + +``` +[root@linuxtechi ~]# ip addr add 172.168.10.51/24 dev enp0s3.200 +[root@linuxtechi ~]# ip addr add 172.168.20.51/24 dev enp0s3.300 +``` + +Use below ip command to see whether IP is assigned to tagged interface or not. + +![ip-address-tagged-nic][5] + +All the above changes via ip commands will not be persistent across the reboot. These tagged interfaces will not be available after reboot and after network service restart + +So, to make tagged interfaces persistent across the reboot then use interface **ifcfg files** + +Edit interface (enp0s3) file “ **/etc/sysconfig/network-scripts/ifcfg-enp0s3** ” and add the following content, + +Note: Replace the interface name that suits to your env, + +``` +[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3 +TYPE=Ethernet +DEVICE=enp0s3 +BOOTPROTO=none +ONBOOT=yes +``` + +Save & exit the file + +Create tagged interface file for VLAN id 200 as “ **/etc/sysconfig/network-scripts/ifcfg-enp0s3.200** ” and add the following contents to it. 
+ +``` +[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3.200 +DEVICE=enp0s3.200 +BOOTPROTO=none +ONBOOT=yes +IPADDR=172.168.10.51 +PREFIX=24 +NETWORK=172.168.10.0 +VLAN=yes +``` + +Save & exit the file + +Similarly create interface file for VLAN id 300 as “/etc/sysconfig/network-scripts/ifcfg-enp0s3.300” and add the following contents to it + +``` +[root@linuxtechi ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp0s3.300 +DEVICE=enp0s3.300 +BOOTPROTO=none +ONBOOT=yes +IPADDR=172.168.20.51 +PREFIX=24 +NETWORK=172.168.20.0 +VLAN=yes +``` + +Save and exit file and then restart network services using the beneath command, + +``` +[root@linuxtechi ~]# systemctl restart network +[root@linuxtechi ~]# +``` + +Now verify whether tagged interface are configured and up & running using the ip command, + +![tagged-interface-status-ip-command-linux-server][6] + +That’s all from this article, I hope you got an idea how to configure and enable VLAN tagged interface on CentOS 7 / 8 and RHEL 7 /8 Severs. Please do share your feedback and comments. + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/vlan-tagged-nic-ethernet-card-centos-rhel-servers/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/how-to-manage-kernel-modules-in-linux/ +[2]: https://www.linuxtechi.com/ip-command-examples-for-linux-users/ +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-ip-command-1024x444.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-ip-command.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/ip-address-tagged-nic-1024x343.jpg +[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/tagged-interface-status-ip-command-linux-server-1024x656.jpg diff --git a/sources/tech/20190617 KIT Scenarist is a Powerful Tool for Creating Screenplays.md b/sources/tech/20190617 KIT Scenarist is a Powerful Tool for Creating Screenplays.md new file mode 100644 index 0000000000..65f492420d --- /dev/null +++ b/sources/tech/20190617 KIT Scenarist is a Powerful Tool for Creating Screenplays.md @@ -0,0 +1,101 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (KIT Scenarist is a Powerful Tool for Creating Screenplays) +[#]: via: (https://itsfoss.com/kit-scenarist/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +KIT Scenarist is a Powerful Tool for Creating Screenplays +====== + +Did you ever wish that there was an open source tool for all your screenplay writing needs? Well, you are in luck. Today, we will be looking at an application that will do just that. Today, we will be looking at KIT Scenarist. + +### KIT Scenarist: An Open Source tool for writing screenplays + +[KIT Scenarist][1] is a program designed to be your one-stop-shop to create the next great screenplay. KIT Scenarist’s tools are split up into four modules: Research, Cards, Script, and Statistics. When you load KIT Scenarist for the first time after installing it, you will be asked if you want to enable each of these modules. You can also disable any of the modules from the setting menu. 
+ +![Scenarist][2] + +The Research module gives you a place to store your story ideas, as well as, ideas and information for both characters and locations. You can also add images for a character or location to give you inspiration. + +The Cards module shows you the scenes that you have written or sketched out like cards on a cord board. You can drag these scenes around on the board to rearrange them. You can also jump to a certain scene or mark a scene as done. + +The Script module is where the actual writing takes place. It has a widget that tells you approximately how long your screenplay will take to perform. Like a word processor, you can tell KIT Scenarist what parts of the script are the actions, character, dialogue, etc and it will format it correctly. + +![Scenarist Research module][3] + +The Statistics module gives you all kinds of reports and graphs about your screenplay. The scene report shows how long each act is and what characters are in it. The location report shows how much time is spent at each location and in which acts. The cast report shows how many scenes each character is in and how much dialogue they have. There is even a report that lists all of the dialogue for each character. + +[][4] + +Suggested read Tilix: Advanced Tiling Terminal Emulator for Power Users + +Like all well-designed apps, KIT Scenarist has both a light and dark theme. The dark theme is the default. + +![Scenarist Statistics module][5] + +KIT Scenarist stores your projects in `.kitsp` files. These files are essentially SQLite database files with a different extension. This means that if you ever run into issues with KIT Scenarist, you can retrieve your information with an SQLite viewer. + +This application also allows you to import scripts from a wide range of script writing applications. You can import the following file formats: Final Draft Screenplay (.fdx), Final Draft Template (.fdxt), Trelby Screenplay (.trelby), [Fountain Text][6] (.foundation), Celtx Project (.celtx), .odt, . doc and .docx. You can export your outline and script to .docx, .pdf, .fdx. or .Fountain. + +The desktop version of KIT Scenarist is free, but they also have a couple of [Pro options][7]. They have a cloud service to collaborate with others or sync your work to other devices. The cloud service costs $4.99 for a month or $52.90 for a year. They have other subscription lengths. KIT Scenarist is available in the iOS app store or the Google Play store, but cost money there. Finally, if you are an organization who wants to use KIT Scenarist but it is missing a feature you want, they will add it if you finance the work. + +![Scenarist Cards module][8] + +### How to Install Scenarist + +Currently, the only repo that Scenarist is available for downloading is the [Arch User Repository][9]. (Arch rulez :D) + +Otherwise, you have to [download][10] a package installer from the website. They have packages available for Linux (both .deb and .rpm), Windows, and macOS. As I stated above, there are also versions available for both Android and iOS, but they are not free. You can find the source code of KIT Scenarist on GitHub. + +[KIT Scenarist on GitHub][11] + +If KIT Scenarist won’t start on your Linux system, try installing the `libxcb-xinerama0` package. + +![Scenarist Script][12] + +### Final Thoughts on KIT Scenarist + +KIT Scenarist offers the full script writing experience. All your information is stored in one place, so you don’t need to look everywhere for your information. It does everything. 
Now, the creator just needs to add a feature to submit your script directly to the person who will make you the next famous playwright. + +[][13] + +Suggested read Bookworm: A Simple yet Magnificent eBook Reader for Linux + +If KIT Scenarist looks too overwhelming and you want to try something simpler, I would recommend trying [Trelby][14]. If you are into writing, you may read my list of [useful open source tools for writers][15]. + +Have you every used KIT Scenarist? What is your favorite open source screenwriting tool? Please let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][16]. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/kit-scenarist/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://kitscenarist.ru/en/index.html +[2]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/scenarist-about.png?fit=800%2C469&ssl=1 +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/scenarist-research.png?fit=800%2C371&ssl=1 +[4]: https://itsfoss.com/tilix-terminal-emulator/ +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/scenarist-statistics.png?fit=800%2C467&ssl=1 +[6]: https://www.fountain.io/ +[7]: https://kitscenarist.ru/en/pricing.html +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/scenarist-cards.png?fit=800%2C470&ssl=1 +[9]: https://aur.archlinux.org/packages/scenarist +[10]: https://kitscenarist.ru/en/download.html +[11]: https://github.com/dimkanovikov/kitscenarist +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/scenarist-script.png?fit=800%2C468&ssl=1 +[13]: https://itsfoss.com/bookworm-ebook-reader-linux/ +[14]: https://www.trelby.org/ +[15]: https://itsfoss.com/open-source-tools-writers/ +[16]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20190618 Cylon - The Arch Linux Maintenance Program For Newbies.md b/sources/tech/20190618 Cylon - The Arch Linux Maintenance Program For Newbies.md new file mode 100644 index 0000000000..6843910f91 --- /dev/null +++ b/sources/tech/20190618 Cylon - The Arch Linux Maintenance Program For Newbies.md @@ -0,0 +1,257 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Cylon – The Arch Linux Maintenance Program For Newbies) +[#]: via: (https://www.ostechnix.com/cylon-arch-linux-maintenance-program/) +[#]: author: (sk https://www.ostechnix.com/author/sk/) + +Cylon – The Arch Linux Maintenance Program For Newbies +====== + +![Cylon is an Arch Linux Maintenance Program][1] + +Recently switched to Arch Linux as your daily driver? Great! I’ve got a good news for you. Meet **Cylon** , a maintenance program for Arch Linux and derivatives. It is a menu-driven **Bash** script which provides updates, maintenance, backups and system checks for Arch Linux and its derivatives such as Manjaro Linux etc. Cylon is mainly a CLI program, and also has a basic dialog GUI. In this guide, we will see how to install and use Cylon in Arch Linux. + +### Cylon – The Arch Linux Maintenance Program + +##### Install Cylon + +Cylon is available in the [**AUR**][2]. You can install it using any AUR helpers, for example [**Yay**][3]. 
+ +``` +$ yay -S cylon +``` + +##### Usage + +Please note that Cylon _**will not install all tools**_ by default. Some functions require various dependencies packages to be installed. There are three dependencies and the rest are optional dependencies. The optional dependencies are left to user discretion. When you perform a function, it will display the missing packages if there are any. All missing packages will be shown as **n/a** (not available) in menus. You need to install the missing packages by yourself before using such functions. + +To launch Cylon, type _**cylon**_ in the Terminal: + +``` +$ cylon +``` + +Sample output from my Arch linux system: + +![][4] + +Default interface of Cylon, the Arch Linux maintenance program + +You can also launch Cylon from the Menu. It usually found under **Applications > System Tools**. + +As you see in the above screenshot, there are **14** menu entries in Cylon main menu to perform different functions. To go to each entry, type the respective number. Also, as you see in the screenshot, there is **n/a** besides the 2 and 3 menu entries which means **auracle** and [**Trizen**][5] are not installed. You need to install them first before performing those functions. + +Let us see what each menu entry does. + +**1\. Pacman** + +Under [**Pacman**][6] section, you can do various package management operations such as install, update, upgrade, verify, remove packages, display package information, view Arch Linux news feed and many. Just type a number to perform the respective action. + +![][7] + +You can go back to main menu by typing the number **21**. + +**2. auracle +** + +The **auracle** is an AUR helper program that can be used to perform various AUR actions such as install, update, download, search, remove AUR packages in your Arch linux box. + +**3\. trizen** + +It is same as above section. + +**4\. System Update** + +As the name says, this section is dedicated to perform Arch Linux update. Here you can update both the official and AUR packages. Cylon gives you the following four options in this section. + + 1. Update Arch Main Repos only, + 2. Update AUR only, + 3. Update All repos, + 4. No Update and exit. + + + +![][8] + +**5\. System Maintenance** + +In this section, you can do the following maintenance tasks. + + 1. Failed Systemd Services and status, + 2. Check Journalctl log for Errors, + 3. Check Journalctl for fstrim SSD trim, + 4. Analyze system boot-up performance, + 5. Check for Broken Symlinks, + 6. Find files where no group or User corresponds to file’s numeric ID, + 7. lostfiles, + 8. Diskspace usage, + 9. Find 200 of the biggest files, + 10. Find inodes usage, + 11. Old configuration files scan, + 12. Print sensors information, + 13. Clean journal files, + 14. Delete core dumps /var/lib/systemd/coredump/, + 15. Delete files, + 16. bleachbit n/a, + 17. rmlint n/a, + 18. List All Open Files, + 19. DMI table decoder, + 20. Return. + + + +The non-installed packages will be shown with letters n/a besides that applications. You need to install them first before choosing that particular action. + +** **Recommended Download** – [**Free Video: “Penetration Testing Methodologies Training Course (a $99 value!) FREE”**][9] + +**6\. System backup** + +This section provides backup utilities such as **rsync** to backup your Arch Linux system. Also, there is a custom backup options which allows you to manually backup files/folders to a user-specified location. + +![][10] + +**7\. 
System Security** + +Cylon provides various security tools including the following: + + 1. ccrypt – Encrypt/decrypt files, + 2. clamav – Antivirus, + 3. rkhunter – RootKit hunter scan, + 4. lynis – System audit tool, + 5. Password generator, + 6. List the password aging info of a user, + 7. Audit SUID/SGID Files. + + + +Remember you need to install them yourself in order to use them. Cylon will not help you to install the missing packages. + +**8\. Network Maintenance** + +This section is for network related functions. Here, you can: + + 1. See wifi link quality continuously on screen, + 2. Use speedtest-cli -testing internet bandwidth, + 3. Check if website up with netcat and ping, + 4. Display all interfaces which are currently available, + 5. Display kernal routing table, + 6. Check the status of UFW, Uncomplicated Firewall, + 7. Network Time Synchronization status check, + 8. traceroute print route packets trace to network host, + 9. tracepath traces path to a network host, + 10. View all open ports + + + +**9\. xterm terminal** + +Here, you can launch xterm terminal at output folder path in new window. + +**10\. View/Edit config file** + +View and edit the configuration files if necessary. + +**11\. System information** + +This is most useful feature of Cylon utlity. This section provides your Arch Linux system’s information such as, + + * Uptime, + * Kernel details, + * OS architecture, + * Username, + * Default Shell, + * Screen resolution, + * CPU, + * RAM (used/total), + * Editor variable, + * Location of pacman cache folder, + * Hold packages, + * Number of orphan packages, + * Total number of installed packages, + * Number of all explicitly installed packages, + * All foreign installed packages, + * All foreign explicitly installed packages, + * All packages installed as dependencies, + * Top 5 largest packages, + * 5 newest updated packages, + * Packages Installed size by repositories. + + + +![][11] + +**12\. Cylon information** + +It will display the information about Cylon program. It also performs the dependencies installation check and display the list of installed non-installed dependencies. + +![][12] + +**13\. Weather** + +It displays the 3 day weather forecast by **wttr.in** utility. + +* * * + +**Related Read:** + + * **[How To Check Weather Details From Command Line In Linux][13]** + + + +* * * + +**14\. Exit** + +Type **14** to exit Cylon. + +For more details, type **cylon -h** in the Terminal to print cylon information. + +* * * + +**Recommended read:** + + * [**Cylon-deb : The Debian Linux Maintenance Program**][14] + + + +* * * + +Cylon script offers a lot of tools and features to maintain your Arch Linux system. If you’re new to Arch Linux, give it a try and see if it helps. 
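By the way, if Cylon shows a tool you want to use as n/a, you can usually pull in the missing optional dependencies yourself with Pacman before relaunching it. The package names below are my best guess for the tools mentioned above; availability can differ between the official repositories and the AUR, so double-check each one with **pacman -Ss** first.

```
$ sudo pacman -S rkhunter lynis clamav bleachbit rmlint
```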
+ +**Resource:** + + * [**Cylon GitHub page**][15] + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/cylon-arch-linux-maintenance-program/ + +作者:[sk][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2017/06/Cylon-The-Arch-Linux-Maintenance-Program-720x340.png +[2]: https://aur.archlinux.org/packages/cylon/ +[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[4]: http://www.ostechnix.com/wp-content/uploads/2017/06/cylon-interface.png +[5]: https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/ +[6]: https://www.ostechnix.com/getting-started-pacman/ +[7]: http://www.ostechnix.com/wp-content/uploads/2017/06/Cylon-pacman.png +[8]: http://www.ostechnix.com/wp-content/uploads/2017/06/Cylon-system-update.png +[9]: https://ostechnix.tradepub.com/free/w_cybf03/prgm.cgi +[10]: http://www.ostechnix.com/wp-content/uploads/2017/06/Cylon-system-backup.png +[11]: http://www.ostechnix.com/wp-content/uploads/2017/06/Cylon-system-information.png +[12]: http://www.ostechnix.com/wp-content/uploads/2017/06/Cylon-information.png +[13]: https://www.ostechnix.com/check-weather-details-command-line-linux/ +[14]: https://www.ostechnix.com/cylon-deb-debian-linux-maintenance-program/ +[15]: https://github.com/gavinlyonsrepo/cylon diff --git a/sources/tech/20190618 How to use MapTool to build an interactive dungeon RPG.md b/sources/tech/20190618 How to use MapTool to build an interactive dungeon RPG.md new file mode 100644 index 0000000000..94aaf47de2 --- /dev/null +++ b/sources/tech/20190618 How to use MapTool to build an interactive dungeon RPG.md @@ -0,0 +1,231 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to use MapTool to build an interactive dungeon RPG) +[#]: via: (https://opensource.com/article/19/6/how-use-maptools) +[#]: author: (Seth Kenlon https://opensource.com/users/seth) + +How to use MapTool to build an interactive dungeon RPG +====== +By using MapTool, most of a game master's work is done well before a +role-playing game begins. +![][1] + +In my previous article on MapTool, I explained how to download, install, and configure your own private, [open source virtual tabletop][2] so you and your friends can play a role-playing game (RPG) together. [MapTool][3] is a complex application with lots of features, and this article demonstrates how a game master (GM) can make the most of it. + +### Update JavaFX + +MapTool requires JavaFX, but Java maintainers recently stopped bundling it in Java downloads. This means that, even if you have Java installed, you might not have JavaFX installed. + +Some Linux distributions have a JavaFX package available, so if you try to run MapTool and get an error about JavaFX, download the latest self-contained version: + + * For [Ubuntu and other Debian-based systems][4] + * For [Fedora and Red Hat-based systems][5] + + + +### Build a campaign + +The top-level file in MapTool is a campaign (.cmpgn) file. A campaign can contain all of the maps required by the game you're running. As your players progress through the campaign, everyone changes to the appropriate map and plays. 
+ +For that to go smoothly, you must do a little prep work. + +First, you need the digital equivalents of miniatures: _tokens_ in MapTool terminology. Tokens are available from various sites, but the most prolific is [immortalnights.com/tokensite][6]. If you're still just trying out virtual tabletops and aren't ready to invest in digital art yet, you can get a stunning collection of starter tokens from immortalnights.com for $0. + +You can add starter content to MapTool quickly and easily using its built-in resource importer. Go to the **File** menu and select **Add Resource to Library**. + +In the **Add Resource to Library** dialogue box, select the RPTools tab, located at the bottom-left. This lists all the free art packs available from the RPTools server, tokens and maps alike. Click to download and import. + +![Add Resource to Library dialogue][7] + +You can import assets you already have on your computer by selecting files from the file system, using the same dialogue box. + +MapTool resources appear in the Library panel. If your MapTool window has no Library panel, select **Library** in the **Window** menu to add one. + +### Gather your maps + +The next step in preparing for your game is to gather maps. Depending on what you're playing, that might mean you need to draw your maps, purchase a map pack, or just open a map bundled with a game module. If all you need is a generic dungeon, you can also download free maps from within MapTool's **Add Resource to Library**. + +If you have a set of maps you intend to use often, you can import them as resources. If you are building a campaign you intend to use just once, you can quickly add any PNG or JPEG file as a **New Map** in the **Map** menu. + +![Creating a new map][8] + +Set the **Background** to a texture that roughly matches your map or to a neutral color. + +Set the **Map** to your map graphics file. + +Give your new map a unique **Name**. The map name is visible to your players, so keep it free of spoilers. + +To switch between maps, click the **Select Map** button in the top-right corner of the MapTool window, and choose the map name in the drop-down menu that appears. + +![Select a map][9] + +Before you let your players loose on your map, you still have some important prep work to do. + +### Adjust the grid size + +Since most RPGs govern how far players can move during their turn, especially during combat, game maps are designed to a specific scale. The most common scale is one map square for every five feet. Most maps you download already have a grid drawn on them; if you're designing a map, you should draw on graph paper to keep your scale consistent. Whether your map graphic has a grid or not, MapTool doesn't know about it, but you can adjust the digital grid overlay so that your player tokens are constrained into squares along the grid. + +MapTool doesn't show the grid by default, so go to the **Map** menu and select **Adjust grid**. This displays MapTool's grid lines, and your goal is to make MapTool's grid line up with the grid drawn onto your map graphic. If your map graphic doesn't have a grid, it may indicate its scale; a common scale is one inch per five feet, and you can usually assume 72 pixels is one inch (on a 72 DPI screen). While adjusting the grid, you can change the color of the grid lines for your own reference. Set the cell size in pixels. Click and drag to align MapTool's grid to your map's grid. 
+ +![Adjusting the grid][10] + +If your map has no grid and you want the grid to remain visible after you adjust it, go to the **View** menu and select **Show Grid**. + +### Add players and NPCs + +To add a player character (PC), non-player character (NPC), or monster to your map, find an appropriate token in your **Library** panel, then drag and drop one onto your map. In the **New Token** dialogue box that appears, give the token a name and set it as an NPC or a PC, then click the OK button. + +![Adding a player character to the map][11] + +Once a token is on the map, try moving it to see how its movements are constrained to the grid you've designated. Make sure **Interaction Tools** , located in the toolbar just under the **File** menu, is selected. + +![A token moving within the grid][12] + +Each token added to a map has its own set of properties, including the direction it's facing, a light source, player ownership, conditions (such as incapacitated, prone, dead, and so on), and even class attributes. You can set as many or as few of these as you want, but at the very least you should right-click on each token and assign it ownership. Your players must be logged into your MapTool server for tokens to be assigned to them, but you can assign yourself NPCs and monsters in advance. + +The right-click menu provides access to all important token-related functions, including setting which direction it's facing, setting a health bar and health value, a copy and paste function (enabling you and your players to move tokens from map to map), and much more. + +![The token menu unlocks great, arcane power][13] + +### Activate fog-of-war effects + +If you're using maps exclusively to coordinate combat, you may not need a fog-of-war effect. But if you're using maps to help your players visualize a dungeon they're exploring, you probably don't want them to see the whole map before they've made significant moves, like opening locked doors or braving a decaying bridge over a pit of hot lava. + +The fog-of-war effect is an invaluable tool for the GM, and it's essential to set it up early so that your players don't accidentally get a sneak peek at all the horrors your dungeon holds for them. + +To activate fog-of-war on a map, go to the **Map** and select **Fog-of-War**. This blackens the entire screen for your players, so your next step is to reveal some portion of the map so that your players aren't faced with total darkness when they switch to the map. Fog-of-war is a subtractive process; it starts 100% dark, and as the players progress, you reveal new portions of the map using fog-of-war drawing tools available in the **FOG** toolbar, just under the **View** menu. + +You can reveal sections of the map in rectangle blocks, ovals, polygons, diamonds, and freehand shapes. Once you've selected the shape, click and release on the map, drag to define an area to reveal, and then click again. + +![Fog-of-war as experienced by a playe][14] + +If you're accidentally overzealous with what you reveal, you have two ways to reverse what you've done: You can manually draw new fog, or you can reset all fog. The quicker method is to reset all fog with **Ctrl+Shift+A**. The more elegant solution is to press **Shift** , then click and release, draw an area of fog, and then click again. Instead of exposing an area of the map, it restores fog. 
+ +### Add lighting effects + +Fog-of-war mimics the natural phenomenon of not being able to see areas of the world other than where you are, but lighting effects mimic the visibility player characters might experience in light and dark. For games like Pathfinder and Dungeons and Dragons 5e, visibility is governed by light sources matched against light conditions. + +First, activate lighting by clicking on the **Map** menu, selecting **Vision** , and then choosing either Daylight or Night. Now lighting effects are active, but none of your players have light sources, so they have no visibility. + +To assign light sources to players, right-click on the appropriate token and choose **Light Source**. Definitions exist for the D20 system (candle, lantern, torch, and so on) and in generic measurements. + +With lighting effects active, players can expose portions of fog-of-war as their light sources get closer to unexposed fog. That's a great effect, but it doesn't make much sense when players can illuminate the next room right through a solid wall. To prevent that, you have to help MapTool differentiate between empty space and solid objects. + +#### Define solid objects + +Defining walls and other solid objects through which light should not pass is easier than it sounds. MapTool's **Vision Blocking Layer** (VBL) tools are basic and built to minimize prep time. There are several basic shapes available, including a basic rectangle and an oval. Draw these shapes over all the solid walls, doors, pillars, and other obstructions, and you have instant rudimentary physics. + +![Setting up obstructions][15] + +Now your players can move around the map with light sources without seeing what lurks in the shadows of a nearby pillar or behind an innocent-looking door… until it's too late! + +![Lighting effects][16] + +### Track initiative + +Eventually, your players are going to stumble on something that wants to kill them, and that means combat. In most RPG systems, combat is played in rounds, with the order of turns decided by an _initiative_ roll. During combat, each player (in order of their initiative roll, from greatest to lowest) tries to defeat their foe, ideally dealing enough damage until their foe is left with no health points (HP). It's usually the most paperwork a GM has to do during a game because it involves tracking whose turn it is, how much damage each monster has taken, what amount of damage each monster's attack deals, what special abilities each monster has, and more. Luckily, MapTool can help with that—and better yet, you can extend it with a custom macro to do even more. + +MapTool's basic initiative panel helps you keep track of whose turn it is and how many rounds have transpired so far. To view the initiative panel, go to the **Window** menu and select **Initiative**. + +To add characters to the initiative order, right-click a token and select **Add To Initiative**. As you add each, the token and its label appear in the initiative panel in the order that you add them. If you make a mistake or someone holds their action and changes the initiative order, click and drag the tokens in the initiative panel to reorder them. + +During combat, click the **Next** button in the top-left of the initiative panel to progress to the next character. As long as you use the **Next** button, the **Round** counter increments, helping you track how many rounds the combat has lasted (which is helpful when you have spells or effects that last only for a specific number of rounds). 
+ +Tracking combat order is helpful, but it's even better to track health points. Your players should be tracking their own health, but since everyone's staring at the same screen, it doesn't hurt to track it publicly in one place. An HP property and a graphical health bar (which you can activate) are assigned to each token, so that's all the infrastructure you need to track HP in MapTool, but doing it manually takes a lot of clicking around. Since MapTool can be extended with macros, it's trivial to bring all these components together for a smooth GM experience. + +The first step is to activate graphical health bars for your tokens. To do this, right-click on each token and select **Edit**. In the **Edit Token** dialog box, click on the **State** tab and deselect the radio button next to **Hide**. + +![Don't hide the health bar][17] + +Do this for each token whose health you want to expose. + +#### Write a macro + +Macros have access to all token properties, so each token's HP can be tracked by reading and writing whatever value exists in the token's HP property. The graphical health bar, however, bases its state on a percentage, so for the health bars to be meaningful, your tokens also must have some value that represents 100% of its HP. + +Go to the **Edit** menu and select **Campaign Properties** to globally add properties to tokens. In the **Campaign Properties** window, select the **Token Properties** tab and then click the **Basic** category in the left column. Under ***@HP** , add ***@MaxHP** and click the **Update** button. Click the **OK** button to close the window. + +![Adding a property to all tokens][18] + +Now right-click a token and select **Edit**. In the **Edit Token** window, select the **State** tab and enter a value for the token's maximum HP (from the player's character sheet). + +To create a new macro, reveal the **Campaign** panel in the **Window** menu. + +In the **Campaign** panel, right-click and select **Add New Macro**. A button labeled **New** appears in the panel. Right-click on the **New** button and select **Edit**. + +Enter this code in the macro editor window: + + +``` +[h:status = input( +"hpAmount|0|Points", +"hpType|Damage,Healing|Damage or heal?|RADIO|SELECT=0")] +[h:abort(status)] + +[if(hpType == 0),CODE: { +[h:HP = HP - hpAmount] +[h:bar.Health = HP / MaxHP] +[r:token.name] takes [r:hpAmount] damage.}; +{ +[h:diff = MaxHP - HP] +[h:HP = min(HP+hpAmount, MaxHP)] +[h:bar.Health = HP / MaxHP] +[r:token.name] gains [r:min(diff,hpAmount)] HP. };] +``` + +You can find full documentation of functions available in MapTool macros and their syntax from the [RPTools wiki][19]. + +In the **Details** tab, enable **Include Label** and **Apply to Selected Tokens** , and leave all other values at their default. Give your macro a better name than **New** , such as **HPTracker** , then click **Apply** and **OK**. + +![Macro editing][20] + +Your campaign now has a new ability! + +Select a token and click your **HPTracker** button. Enter the number of points to deduct from the token, click **OK** , and watch the health bar change to reflect the token's new state. + +It may seem like a simple change, but in the heat of battle, this is a GM's greatest weapon. + +### During the game + +There's obviously a lot you can do with MapTool, but with a little prep work, most of your work is done well before you start playing. 
You can even create a template campaign by creating an empty campaign with only the macros and settings you want, so all you have to do is import maps and stat out tokens. + +During the game, your workflow is mostly about revealing areas from fog-of-war and managing combat. The players can manage their own tokens, and your prep work takes care of everything else. + +MapTool makes digital gaming easy and fun, and most importantly, it keeps it open source and self-contained. Level-up today by learning MapTool and using it for your games. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/how-use-maptools + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice-keys_0.jpg?itok=PGEs3ZXa +[2]: https://opensource.com/article/18/5/maptool +[3]: https://github.com/RPTools/maptool +[4]: https://github.com/RPTools/maptool/releases +[5]: https://klaatu.fedorapeople.org/RPTools/maptool/ +[6]: https://immortalnights.com/tokensite/ +[7]: https://opensource.com/sites/default/files/uploads/maptool-resources.png (Add Resource to Library dialogue) +[8]: https://opensource.com/sites/default/files/uploads/map-properties.png (Creating a new map) +[9]: https://opensource.com/sites/default/files/uploads/map-select.jpg (Select a map) +[10]: https://opensource.com/sites/default/files/uploads/grid-adjust.jpg (Adjusting the grid) +[11]: https://opensource.com/sites/default/files/uploads/token-new.png (Adding a player character to the map) +[12]: https://opensource.com/sites/default/files/uploads/token-move.jpg (A token moving within the grid) +[13]: https://opensource.com/sites/default/files/uploads/token-menu.jpg (The token menu unlocks great, arcane power) +[14]: https://opensource.com/sites/default/files/uploads/fog-of-war.jpg (Fog-of-war as experienced by a playe) +[15]: https://opensource.com/sites/default/files/uploads/vbl.jpg (Setting up obstructions) +[16]: https://opensource.com/sites/default/files/uploads/map-light.jpg (Lighting effects) +[17]: https://opensource.com/sites/default/files/uploads/token-edit.jpg (Don't hide the health bar) +[18]: https://opensource.com/sites/default/files/uploads/campaign-properties.jpg (Adding a property to all tokens) +[19]: https://lmwcs.com/rptools/wiki/Main_Page +[20]: https://opensource.com/sites/default/files/uploads/macro-detail.jpg (Macro editing) diff --git a/sources/tech/20190619 11 Free and Open Source Video Editing Software.md b/sources/tech/20190619 11 Free and Open Source Video Editing Software.md new file mode 100644 index 0000000000..57ad8a5358 --- /dev/null +++ b/sources/tech/20190619 11 Free and Open Source Video Editing Software.md @@ -0,0 +1,327 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (11 Free and Open Source Video Editing Software) +[#]: via: (https://itsfoss.com/open-source-video-editors/) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +11 Free and Open Source Video Editing Software +====== + +We’ve already covered the [top video editors for Linux][1]. That list contained some non-open source software as well. 
This made us write this article to feature only open source video editors. We’ve also mentioned what platforms are supported by these software so that this list is helpful even if you are not using Linux. + +### Top Free and Open Source Video Editors + +![Best Open Source Video Editors][2] + +Just for your information, this list is not a ranking and the editors listed here are not in any specific order. I have not mentioned the installation procedure but you can find that information on the website of each project. + +#### 1\. Kdenlive + +![][3] + +**Key Features** : + + * Multi-track Video Editing + * All kinds of audio/video format supported with the help of FFmpeg libraries + * 2D Title maker + * Customizable Interface and shortcuts + * Proxy editing to make things faster + * Automatic backup + * Timeline preview + * Keyframeable effects + * Audiometer, Histogram, Waveform, etc. + + + +**Platforms available on:** Linux, macOS and Windows. + +Kdenlive is an open source video editor (and free) available for Windows, Mac OSX, and Linux distros. + +If you are on a Mac, you will have to manually compile and install it. However, if you are on Windows, you can download the EXE file and should have no issues installing it. + +[Kdenlive][4] + +#### 2\. LiVES + +![][5] + +**Key Features:** + + * Frame and sample accurate editing + * Edit video in real-time + * Can be controlled using MIDI, keyboard, Joystic + * Multi-track support + * VJ keyboard control during playback + * Plugins supported + * Compatible with various effects frameworks: projectM, LADSPA audio, and so on. + + + +**Platforms available on:** Linux and macOS. Support for Windows will be added soon. + +LiVES is an interesting open source video editor. You can find the code on [GitHub][6]. It is currently available for **Linux and macOS Leopard**. It will soon be available for Windows (hopefully by the end of 2019). + +[LiVES][7] + +#### 3\. OpenShot + +![][8] + +**Key Features:** + + * Almost all video/audio formats supported + * Key frame animation framework + * Multi-track support + * Desktop integration (drag and drop support) + * Video transition with real-time previews + * 3D animated titles and effects + * Advanced timeline with drag/drop support, panning, scrolling, zooming, and snapping. + + + +**Platforms available on:** Linux, macOS and Windows. + +OpenShot is a quite popular video editor and it is open source as well. Unlike others, OpenShot offers a DMG installer for Mac OSX. So, you don’t have to compile and install it manually. + +If you are a fan of open source solutions and you own a Mac, OpenShot seems like a very good option. + +[OpenShot][9] + +#### 4\. VidCutter + +![][10] + +**Key Features:** + + * Keyframes viewer + * Cut, Split, and add different clips + * Major audio/video formats supported + + + +[][11] + +Suggested read Install Windows Like Desktop Widgets In Ubuntu Linux With Screenlets + +**Platforms available on:** Linux, macOS and Windows. + +VidCutter is an open source video editor for basic tasks. It does not offer a plethora of features – but it works for all the common tasks like clipping or cutting. It’s under active development as well. + +For Linux, it is available on Flathub as well. And, for Windows and Mac OS, you do get EXE and DMG file packages in the latest releases. + +[VidCutter][12] + +#### 5\. Shotcut + +![][13] + +**Key Features:** + + * Supports almost all major audio/video formats with the help of FFmpeg libraries. 
+ * Multiple dockable/undockable panels + * Intuitive UI + * JACK transport sync + * Stereo, mono, and 5.1 surround support + * Waveform, Histogram, etc. + * Easy to use with dual monitors + * Portable version available + + + +**Platforms available on:** Linux, macOS and Windows. + +Shotcut is yet another popular open source video editor available across multiple platforms. It features a nice interface to work on. + +When considering the features, it offers almost everything that you would ever need (from color correction to adding transitions). Also, it provides a portable version for Windows – which is an impressive thing. + +[Shotcut][14] + +#### 6\. Flowblade + +![][15] + +**Key Features:** + + * Advanced timeline control + * Multi-track editing + * [G’mic][16] tool + * All major audio/video formats supported with the help of FFMpeg libraries + + + +**Platforms available on:** Linux + +Flowblade is an intuitive open source video editor available only for Linux. Yes, it is a bummer that we do not have cross-platform support for this. + +However, if you are using a Linux distro, you can either download the .deb file and get it installed or use the source code on GitHub. + +[Flowblade][17] + +#### 7\. Avidemux + +![][18] + +**Key Features:** + + * Trim + * Cut + * Filter support + * Major video format supported + + + +**Platforms available on:** Linux, BSD, macOS and Windows. + +If you are looking for a basic cross-platform open source video editor – this will be one of our recommendations. You just get the ability to cut, save, add a filter, and perform some other basic editing tasks. Their official [SourceForge page][19] might look like it has been abandoned, but it is in active development. + +[Avidemux][19] + +#### 8\. Pitivi + +![][20] + +**Key Features:** + + * All major video formats supported using [GStreamer Multimedia Framework][21] + * Advanced timeline independent of frame rate + * Animated effects and transitions + * Audio waveforms + * Real-time trimming previews + + + +**Platforms available on:** Linux + +Yet another open source video editor that is available only for Linux. The UI is user-friendly and the features offered will help you perform some advanced edits as well. + +You can install it using Flatpak or look for the source code on their official website. It should be available in the repositories of most distributions as well. + +[Pitivi][22] + +#### 9\. Blender + +![][23] + +**Key Features:** + + * VFX + * Modeling tools + * Animation and Rigging tools + * Draw in 2D or 3D + + + +[][24] + +Suggested read 4 Best Modern Open Source Text Editors For Coding in Linux + +**Platforms available on:** Linux, macOS and Windows. + +Blender is an advanced 3D creation suite. And, it is surprising that you get all those powerful abilities for free (and while being open source). + +Of course, Blender is not a solution for every user – however, it is definitely one of the best open source tool available for Windows, macOS, and Linux. You can also find it on [Steam][25] to install it. + +[Blender][26] + +#### 10\. Cinelerra + +![][27] + +**Key Features:** + + * Advanced timeline + * Motion tracking support + * Video stabilization + * Audio mastering + * Color correction + + + +**Platforms available on:** Linux + +Cinelerra is a quite popular open source video editor. However, it has several branches to it (in other words – different versions). I am not sure if that is a good thing – but you get different features (and ability) on each of them. 
+ +Cinelerra GG, CV, CVE, and HV are those variants catering to users with different preferences. Personally, I would recommend to check out [Cinelerra GG][28]. + +[Cinelerra][29] + +#### 11\. NATRON + +![][30] + +**Key Features:** + + * VFX + * Powerful Tracker + * Keying tools for production needs + * Shadertoy and G’mic tools + * OpenFX plugin support + + + +**Platforms available on** : Linux, macOS and Windows. + +If you are into VFX and motion graphics, NATRON is a good alternative to Blender. Of course, in order to compare them for your usage, you will have to give it a try yourself. + +You do have the installer available for Windows and the dmg package for Mac. So, it is quite easy to get it installed. You can always head to its [GitHub page][31] for more information. + +[Natron][32] + +**Wrapping Up** + +So, now that you know about some of the most popular open source video editors available out there – what do you think about them? + +Are they good enough for professional requirements? Or, did we miss any of your favorite open source video editor that deserved the mention? + +I am not an expert video editor and have only experience with simple editing tasks to create YouTube videos. If you are an experienced and professional video editor, I would like to hear your opinion on how good are these open source video editors for the experts. + +Let us know your thoughts in the comments below. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/open-source-video-editors/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/best-video-editing-software-linux/ +[2]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/best-open-source-video-editors-800x450.png?resize=800%2C450&ssl=1 +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/06/kdenlive-free-video-editor-on-ubuntu.jpg?ssl=1 +[4]: https://kdenlive.org/en/ +[5]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/06/lives-video-editor.jpg?fit=800%2C600&ssl=1 +[6]: https://github.com/salsaman/LiVES +[7]: http://lives-video.com/ +[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/06/openshot-free-video-editor-on-ubuntu.jpg?ssl=1 +[9]: https://www.openshot.org/ +[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/06/vidcutter.jpg?fit=800%2C585&ssl=1 +[11]: https://itsfoss.com/install-windows-desktop-widgets-linux/ +[12]: https://github.com/ozmartian/vidcutter +[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/06/shotcut-video-editor-linux.jpg?resize=800%2C503&ssl=1 +[14]: https://shotcut.org/ +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/06/flowblade-movie-editor-on-ubuntu.jpg?ssl=1 +[16]: https://gmic.eu/ +[17]: https://jliljebl.github.io/flowblade/index.html +[18]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/avidemux.jpg?resize=800%2C697&ssl=1 +[19]: http://avidemux.sourceforge.net/ +[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/pitvi.jpg?resize=800%2C464&ssl=1 +[21]: https://en.wikipedia.org/wiki/GStreamer +[22]: http://www.pitivi.org/ +[23]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/06/blender-running-on-ubuntu-16.04.jpg?ssl=1 +[24]: https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ +[25]: 
https://store.steampowered.com/app/365670/Blender/ +[26]: https://www.blender.org/ +[27]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/06/cinelerra-screenshot.jpeg?ssl=1 +[28]: https://www.cinelerra-gg.org/ +[29]: http://cinelerra.org/ +[30]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/natron.jpg?fit=800%2C481&ssl=1 +[31]: https://github.com/NatronGitHub/Natron +[32]: https://natrongithub.github.io/ diff --git a/sources/tech/20190619 Getting started with OpenSSL- Cryptography basics.md b/sources/tech/20190619 Getting started with OpenSSL- Cryptography basics.md new file mode 100644 index 0000000000..0f3da1b13e --- /dev/null +++ b/sources/tech/20190619 Getting started with OpenSSL- Cryptography basics.md @@ -0,0 +1,342 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Getting started with OpenSSL: Cryptography basics) +[#]: via: (https://opensource.com/article/19/6/cryptography-basics-openssl-part-1) +[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen) + +Getting started with OpenSSL: Cryptography basics +====== +Need a primer on cryptography basics, especially regarding OpenSSL? Read +on. +![A lock on the side of a building][1] + +This article is the first of two on cryptography basics using [OpenSSL][2], a production-grade library and toolkit popular on Linux and other systems. (To install the most recent version of OpenSSL, see [here][3].) OpenSSL utilities are available at the command line, and programs can call functions from the OpenSSL libraries. The sample program for this article is in C, the source language for the OpenSSL libraries. + +The two articles in this series cover—collectively—cryptographic hashes, digital signatures, encryption and decryption, and digital certificates. You can find the code and command-line examples in a ZIP file from [my website][4]. + +Let’s start with a review of the SSL in the OpenSSL name. + +### A quick history + +[Secure Socket Layer (SSL)][5] is a cryptographic protocol that [Netscape][6] released in 1995. This protocol layer can sit atop HTTP, thereby providing the _S_ for _secure_ in HTTPS. The SSL protocol provides various security services, including two that are central in HTTPS: + + * Peer authentication (aka mutual challenge): Each side of a connection authenticates the identity of the other side. If Alice and Bob are to exchange messages over SSL, then each first authenticates the identity of the other. + * Confidentiality: A sender encrypts messages before sending these over a channel. The receiver then decrypts each received message. This process safeguards network conversations. Even if eavesdropper Eve intercepts an encrypted message from Alice to Bob (a _man-in-the-middle_ attack), Eve finds it computationally infeasible to decrypt this message. + + + +These two key SSL services, in turn, are tied to others that get less attention. For example, SSL supports message integrity, which assures that a received message is the same as the one sent. This feature is implemented with hash functions, which likewise come with the OpenSSL toolkit. + +SSL is versioned (e.g., SSLv2 and SSLv3), and in 1999 Transport Layer Security (TLS) emerged as a similar protocol based upon SSLv3. TLSv1 and SSLv3 are alike, but not enough so to work together. Nonetheless, it is common to refer to SSL/TLS as if they are one and the same protocol. 
For example, OpenSSL functions often have SSL in the name even when TLS rather than SSL is in play. Furthermore, calling OpenSSL command-line utilities begins with the term **openssl**.

The documentation for OpenSSL is spotty beyond the **man** pages, which become unwieldy given how big the OpenSSL toolkit is. Command-line and code examples are one way to bring the main topics into focus together. Let’s start with a familiar example—accessing a web site with HTTPS—and use this example to pick apart the cryptographic pieces of interest.

### An HTTPS client

The **client** program shown here connects over HTTPS to Google:


```
/* compilation: gcc -o client client.c -lssl -lcrypto */

#include <stdio.h>
#include <stdlib.h>
#include <openssl/bio.h> /* BasicInput/Output streams */
#include <openssl/err.h> /* errors */
#include <openssl/ssl.h> /* core library */

#define BuffSize 1024

void report_and_exit(const char* msg) {
  perror(msg);
  ERR_print_errors_fp(stderr);
  exit(-1);
}

void init_ssl() {
  SSL_load_error_strings();
  SSL_library_init();
}

void cleanup(SSL_CTX* ctx, BIO* bio) {
  SSL_CTX_free(ctx);
  BIO_free_all(bio);
}

void secure_connect(const char* hostname) {
  char name[BuffSize];
  char request[BuffSize];
  char response[BuffSize];

  const SSL_METHOD* method = TLSv1_2_client_method();
  if (NULL == method) report_and_exit("TLSv1_2_client_method...");

  SSL_CTX* ctx = SSL_CTX_new(method);
  if (NULL == ctx) report_and_exit("SSL_CTX_new...");

  BIO* bio = BIO_new_ssl_connect(ctx);
  if (NULL == bio) report_and_exit("BIO_new_ssl_connect...");

  SSL* ssl = NULL;

  /* link bio channel, SSL session, and server endpoint */

  sprintf(name, "%s:%s", hostname, "https");
  BIO_get_ssl(bio, &ssl); /* session */
  SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* robustness */
  BIO_set_conn_hostname(bio, name); /* prepare to connect */

  /* try to connect */
  if (BIO_do_connect(bio) <= 0) {
    cleanup(ctx, bio);
    report_and_exit("BIO_do_connect...");
  }

  /* verify truststore, check cert */
  if (!SSL_CTX_load_verify_locations(ctx,
                                     "/etc/ssl/certs/ca-certificates.crt", /* truststore */
                                     "/etc/ssl/certs/"))                   /* more truststore */
    report_and_exit("SSL_CTX_load_verify_locations...");

  long verify_flag = SSL_get_verify_result(ssl);
  if (verify_flag != X509_V_OK)
    fprintf(stderr,
            "##### Certificate verification error (%i) but continuing...\n",
            (int) verify_flag);

  /* now fetch the homepage as sample data */
  sprintf(request,
          "GET / HTTP/1.1\x0D\x0AHost: %s\x0D\x0A\x43onnection: Close\x0D\x0A\x0D\x0A",
          hostname);
  BIO_puts(bio, request);

  /* read HTTP response from server and print to stdout */
  while (1) {
    memset(response, '\0', sizeof(response));
    int n = BIO_read(bio, response, BuffSize);
    if (n <= 0) break; /* 0 is end-of-stream, < 0 is an error */
    puts(response);
  }

  cleanup(ctx, bio);
}

int main() {
  init_ssl();

  const char* hostname = "www.google.com:443";
  fprintf(stderr, "Trying an HTTPS connection to %s...\n", hostname);
  secure_connect(hostname);

  return 0;
}
```

This program can be compiled and executed from the command line (note the lowercase L in **-lssl** and **-lcrypto**):

**gcc -o client client.c -lssl -lcrypto**

This program tries to
open a secure connection to the web site [www.google.com][13]. As part of the TLS handshake with the Google web server, the **client** program receives one or more digital certificates, which the program tries (but, on my system, fails) to verify. Nonetheless, the **client** program goes on to fetch the Google homepage through the secure channel. This program depends on the security artifacts mentioned earlier, although only a digital certificate stands out in the code. The other artifacts remain behind the scenes and are clarified later in detail. + +Generally, a client program in C or C++ that opened an HTTP (non-secure) channel would use constructs such as a _file descriptor_ for a _network socket_, which is an endpoint in a connection between two processes (e.g., the client program and the Google web server). A file descriptor, in turn, is a non-negative integer value that identifies, within a program, any file-like construct that the program opens. Such a program also would use a structure to specify details about the web server’s address. + +None of these relatively low-level constructs occurs in the client program, as the OpenSSL library wraps the socket infrastructure and address specification in high-level security constructs. The result is a straightforward API. Here’s a first look at the security details in the example **client** program. + + * The program begins by loading the relevant OpenSSL libraries, with my function **init_ssl** making two calls into OpenSSL: + +**SSL_library_init(); SSL_load_error_strings();** + + * The next initialization step tries to get a security _context_, a framework of information required to establish and maintain a secure channel to the web server. **TLS 1.2** is used in the example, as shown in this call to an OpenSSL library function: + +**const SSL_METHOD* method = TLSv1_2_client_method(); /* TLS 1.2 */** + +If the call succeeds, then the **method** pointer is passed to the library function that creates the context of type **SSL_CTX**: + +**SSL_CTX*** **ctx** **= SSL_CTX_new(method);** + +The **client** program checks for errors on each of these critical library calls, and then the program terminates if either call fails. + + * Two other OpenSSL artifacts now come into play: a security session of type **SSL**, which manages the secure connection from start to finish; and a secured stream of type **BIO** (Basic Input/Output), which is used to communicate with the web server. The **BIO** stream is generated with this call: + +**BIO* bio = BIO_new_ssl_connect(ctx);** + +Note that the all-important context is the argument. The **BIO** type is the OpenSSL wrapper for the **FILE** type in C. This wrapper secures the input and output streams between the **client** program and Google's web server. + + * With the **SSL_CTX** and **BIO** in hand, the program then links these together in an **SSL** session. Three library calls do the work: + +**BIO_get_ssl(bio, &ssl); /* get a TLS session */** + +**SSL_set_mode(ssl, SSL_MODE_AUTO_RETRY); /* for robustness */** + +**BIO_set_conn_hostname(bio, name); /* prepare to connect to Google */** + +The secure connection itself is established through this call: + +**BIO_do_connect(bio);** + +If this last call does not succeed, the **client** program terminates; otherwise, the connection is ready to support a confidential conversation between the **client** program and the Google web server. 
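
As a side note, the handshake results can also be inspected from inside the program. The sketch below (not part of the original listing) shows one way the **client** could print the subject and issuer of the certificate the server presents, using the same **ssl** session pointer; the helper name **show_peer_certificate** is made up for illustration, and a natural place to call it would be right after the **SSL_get_verify_result** check:


```
/* Hypothetical helper for the client program above: dump the peer's leaf certificate. */
#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/x509.h>

void show_peer_certificate(SSL* ssl) {
  X509* cert = SSL_get_peer_certificate(ssl); /* certificate sent by the server, if any */
  if (NULL == cert) return;                   /* no certificate was presented */

  char subject[256];
  char issuer[256];
  X509_NAME_oneline(X509_get_subject_name(cert), subject, sizeof(subject));
  X509_NAME_oneline(X509_get_issuer_name(cert), issuer, sizeof(issuer));

  fprintf(stderr, "Peer subject: %s\n", subject);
  fprintf(stderr, "Peer issuer:  %s\n", issuer);

  X509_free(cert); /* SSL_get_peer_certificate bumps the reference count */
}
```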
+ + + + +During the handshake with the web server, the **client** program receives one or more digital certificates that authenticate the server’s identity. However, the **client** program does not send a certificate of its own, which means that the authentication is one-way. (Web servers typically are configured _not_ to expect a client certificate.) Despite the failed verification of the web server’s certificate, the **client** program continues by fetching the Google homepage through the secure channel to the web server. + +Why does the attempt to verify a Google certificate fail? A typical OpenSSL installation has the directory **/etc/ssl/certs**, which includes the **ca-certificates.crt** file. The directory and the file together contain digital certificates that OpenSSL trusts out of the box and accordingly constitute a _truststore_. The truststore can be updated as needed, in particular, to include newly trusted certificates and to remove ones no longer trusted. + +The client program receives three certificates from the Google web server, but the OpenSSL truststore on my machine does not contain exact matches. As presently written, the **client** program does not pursue the matter by, for example, verifying the digital signature on a Google certificate (a signature that vouches for the certificate). If that signature were trusted, then the certificate containing it should be trusted as well. Nonetheless, the client program goes on to fetch and then to print Google’s homepage. The next section gets into more detail. + +### The hidden security pieces in the client program + +Let’s start with the visible security artifact in the client example—the digital certificate—and consider how other security artifacts relate to it. The dominant layout standard for a digital certificate is X509, and a production-grade certificate is issued by a certificate authority (CA) such as [Verisign][14]. + +A digital certificate contains various pieces of information (e.g., activation and expiration dates, and a domain name for the owner), including the issuer’s identity and _digital signature_, which is an encrypted _cryptographic hash_ value. A certificate also has an unencrypted hash value that serves as its identifying _fingerprint_. + +A hash value results from mapping an arbitrary number of bits to a fixed-length digest. What the bits represent (an accounting report, a novel, or maybe a digital movie) is irrelevant. For example, the Message Digest version 5 (MD5) hash algorithm maps input bits of whatever length to a 128-bit hash value, whereas the SHA1 (Secure Hash Algorithm version 1) algorithm maps input bits to a 160-bit value. Different input bits result in different—indeed, statistically unique—hash values. The next article goes into further detail and focuses on what makes a hash function _cryptographic_. + +Digital certificates differ in type (e.g., _root_, _intermediate_, and _end-entity_ certificates) and form a hierarchy that reflects these types. As the name suggests, a _root_ certificate sits atop the hierarchy, and the certificates under it inherit whatever trust the root certificate has. The OpenSSL libraries and most modern programming languages have an X509 type together with functions that deal with such certificates. The certificate from Google has an X509 format, and the **client** program checks whether this certificate is **X509_V_OK**. 
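
As an aside, the **openssl x509** utility displays these identifying details for any certificate saved on disk in PEM format. The file name **google.pem** below is only a placeholder; for instance, a certificate copied out of the **s_client** output shown later could be pasted into such a file:


```
openssl x509 -in google.pem -noout -subject -issuer -dates
openssl x509 -in google.pem -noout -fingerprint -sha256
```

The second command prints the certificate's SHA256 fingerprint, the same kind of identifying hash value described above.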
+ +X509 certificates are based upon public-key infrastructure (PKI), which includes algorithms—RSA is the dominant one—for generating _key pairs_: a public key and its paired private key. A public key is an identity: [Amazon’s][15] public key identifies it, and my public key identifies me. A private key is meant to be kept secret by its owner. + +The keys in a pair have standard uses. A public key can be used to encrypt a message, and the private key from the same pair can then be used to decrypt the message. A private key also can be used to sign a document or other electronic artifact (e.g., a program or an email), and the public key from the pair can then be used to verify the signature. The following two examples fill in some details. + +In the first example, Alice distributes her public key to the world, including Bob. Bob then encrypts a message with Alice’s public key, sending the encrypted message to Alice. The message encrypted with Alice’s public key is decrypted with her private key, which (by assumption) she alone has, like so: + + +``` +             +------------------+ encrypted msg  +-------------------+ +Bob's msg--->|Alice's public key|--------------->|Alice's private key|---> Bob's msg +             +------------------+                +-------------------+ +``` + +Decrypting the message without Alice’s private key is possible in principle, but infeasible in practice given a sound cryptographic key-pair system such as RSA. + +Now, for the second example, consider signing a document to certify its authenticity. The signature algorithm uses a private key from a pair to process a cryptographic hash of the document to be signed: + + +``` +                    +-------------------+ +Hash of document--->|Alice's private key|--->Alice's digital signature of the document +                    +-------------------+ +``` + +Assume that Alice digitally signs a contract sent to Bob. Bob then can use Alice’s public key from the key pair to verify the signature: + + +``` +                                             +------------------+ +Alice's digital signature of the document--->|Alice's public key|--->verified or not +                                             +------------------+ +``` + +It is infeasible to forge Alice’s signature without Alice’s private key: hence, it is in Alice’s interest to keep her private key secret. + +None of these security pieces, except for digital certificates, is explicit in the **client** program. The next article fills in the details with examples that use the OpenSSL utilities and library functions. + +### OpenSSL from the command line + +In the meantime, let’s take a look at OpenSSL command-line utilities: in particular, a utility to inspect the certificates from a web server during the TLS handshake. Invoking the OpenSSL utilities begins with the **openssl** command and then adds a combination of arguments and flags to specify the desired operation. + +Consider this command: + +**openssl list-cipher-algorithms** + +The output is a list of associated algorithms that make up a _cipher suite_. Here’s the start of the list, with comments to clarify the acronyms: + + +``` +AES-128-CBC ## Advanced Encryption Standard, Cipher Block Chaining +AES-128-CBC-HMAC-SHA1 ## Hash-based Message Authentication Code with SHA1 hashes +AES-128-CBC-HMAC-SHA256 ## ditto, but SHA256 rather than SHA1 +... 
+``` + +The next command, using the argument **s_client**, opens a secure connection to **[www.google.com][13]** and prints screens full of information about this connection: + +**openssl s_client -connect [www.google.com:443][16] -showcerts** + +The port number 443 is the standard one used by web servers for receiving HTTPS rather than HTTP connections. (For HTTP, the standard port is 80.) The network address **[www.google.com:443][16]** also occurs in the **client** program's code. If the attempted connection succeeds, the three digital certificates from Google are displayed together with information about the secure session, the cipher suite in play, and related items. For example, here is a slice of output from near the start, which announces that a _certificate chain_ is forthcoming. The encoding for the certificates is base64: + + +``` +Certificate chain + 0 s:/C=US/ST=California/L=Mountain View/O=Google LLC/CN=www.google.com + i:/C=US/O=Google Trust Services/CN=Google Internet Authority G3 +\-----BEGIN CERTIFICATE----- +MIIEijCCA3KgAwIBAgIQdCea9tmy/T6rK/dDD1isujANBgkqhkiG9w0BAQsFADBU +MQswCQYDVQQGEwJVUzEeMBwGA1UEChMVR29vZ2xlIFRydXN0IFNlcnZpY2VzMSUw +... +``` + +A major web site such as Google usually sends multiple certificates for authentication. + +The output ends with summary information about the TLS session, including specifics on the cipher suite: + + +``` +SSL-Session: +    Protocol : TLSv1.2 +    Cipher : ECDHE-RSA-AES128-GCM-SHA256 +    Session-ID: A2BBF0E4991E6BBBC318774EEE37CFCB23095CC7640FFC752448D07C7F438573 +... +``` + +The protocol **TLS 1.2** is used in the **client** program, and the **Session-ID** uniquely identifies the connection between the **openssl** utility and the Google web server. The **Cipher** entry can be parsed as follows: + + * **ECDHE** (Elliptic Curve Diffie Hellman Ephemeral) is an effective and efficient algorithm for managing the TLS handshake. In particular, ECDHE solves the _key-distribution problem_ by ensuring that both parties in a connection (e.g., the client program and the Google web server) use the same encryption/decryption key, which is known as the _session key_. The follow-up article digs into the details. + + * **RSA** (Rivest Shamir Adleman) is the dominant public-key cryptosystem and named after the three academics who first described the system in the late 1970s. The key-pairs in play are generated with the RSA algorithm. + + * **AES128** (Advanced Encryption Standard) is a _block cipher_ that encrypts and decrypts blocks of bits. (The alternative is a _stream cipher_, which encrypts and decrypts bits one at a time.) The cipher is _symmetric_ in that the same key is used to encrypt and to decrypt, which raises the key-distribution problem in the first place. AES supports key sizes of 128 (used here), 192, and 256 bits: the larger the key, the better the protection. + +Key sizes for symmetric cryptosystems such as AES are, in general, smaller than those for asymmetric (key-pair based) systems such as RSA. For example, a 1024-bit RSA key is relatively small, whereas a 256-bit key is currently the largest for AES. + + * **GCM** (Galois Counter Mode) handles the repeated application of a cipher (in this case, AES128) during a secured conversation. AES128 blocks are only 128-bits in size, and a secure conversation is likely to consist of multiple AES128 blocks from one side to the other. GCM is efficient and commonly paired with AES128. + + * **SHA256** (Secure Hash Algorithm 256 bits) is the cryptographic hash algorithm in play. 
The hash values produced are 256 bits in size, although even larger values are possible with SHA. + + + + +Cipher suites are in continual development. Not so long ago, for example, Google used the RC4 stream cipher (Ron’s Cipher version 4 after Ron Rivest from RSA). RC4 now has known vulnerabilities, which presumably accounts, at least in part, for Google’s switch to AES128. + +### Wrapping up + +This first look at OpenSSL, through a secure C web client and various command-line examples, has brought to the fore a handful of topics in need of more clarification. [The next article gets into the details][17], starting with cryptographic hashes and ending with a fuller discussion of how digital certificates address the key distribution challenge. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/cryptography-basics-openssl-part-1 + +作者:[Marty Kalin][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mkalindepauledu/users/akritiko/users/clhermansen +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_3reasons.png?itok=k6F3-BqA (A lock on the side of a building) +[2]: https://www.openssl.org/ +[3]: https://www.howtoforge.com/tutorial/how-to-install-openssl-from-source-on-linux/ +[4]: http://condor.depaul.edu/mkalin +[5]: https://en.wikipedia.org/wiki/Transport_Layer_Security +[6]: https://en.wikipedia.org/wiki/Netscape +[7]: http://www.opengroup.org/onlinepubs/009695399/functions/perror.html +[8]: http://www.opengroup.org/onlinepubs/009695399/functions/exit.html +[9]: http://www.opengroup.org/onlinepubs/009695399/functions/sprintf.html +[10]: http://www.opengroup.org/onlinepubs/009695399/functions/fprintf.html +[11]: http://www.opengroup.org/onlinepubs/009695399/functions/memset.html +[12]: http://www.opengroup.org/onlinepubs/009695399/functions/puts.html +[13]: http://www.google.com +[14]: https://www.verisign.com +[15]: https://www.amazon.com +[16]: http://www.google.com:443 +[17]: https://opensource.com/article/19/6/cryptography-basics-openssl-part-2 diff --git a/sources/tech/20190620 How to SSH into a running container.md b/sources/tech/20190620 How to SSH into a running container.md new file mode 100644 index 0000000000..f0b4cdafc2 --- /dev/null +++ b/sources/tech/20190620 How to SSH into a running container.md @@ -0,0 +1,185 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to SSH into a running container) +[#]: via: (https://opensource.com/article/19/6/how-ssh-running-container) +[#]: author: (Seth Kenlon https://opensource.com/users/seth/users/bcotton) + +How to SSH into a running container +====== +SSH is probably not the best way to run commands in a container; try +this instead. +![cubes coming together to create a larger cube][1] + +Containers have shifted the way we think about virtualization. You may remember the days (or you may still be living them) when a virtual machine was the full stack, from virtualized BIOS, operating system, and kernel up to each virtualized network interface controller (NIC). You logged into the virtual box just as you would your own workstation. It was a very direct and simple analogy. 
+ +And then containers came along, [starting with LXC][2] and culminating in the Open Container Initiative ([OCI][3]), and that's when things got complicated. + +### Idempotency + +In the world of containers, the "virtual machine" is only mostly virtual. Everything that doesn't need to be virtualized is borrowed from the host machine. Furthermore, the container itself is usually meant to be ephemeral and idempotent, so it stores no persistent data, and its state is defined by configuration files on the host machine. + +If you're used to the old ways of virtual machines, then you naturally expect to log into a virtual machine in order to interact with it. But containers are ephemeral, so anything you do in a container is forgotten, by design, should the container need to be restarted or respawned. + +The commands controlling your container infrastructure (such as **oc, crictl**, **lxc**, and **docker**) provide an interface to run important commands to restart services, view logs, confirm the existence and permissions modes of an important file, and so on. You should use the tools provided by your container infrastructure to interact with your application, or else edit configuration files and relaunch. That's what containers are designed to do. + +For instance, the open source forum software [Discourse][4] is officially distributed as a container image. The Discourse software is _stateless_, so its installation is self-contained within **/var/discourse**. As long as you have a backup of **/var/discourse**, you can always restore the forum by relaunching the container. The container holds no persistent data, and its configuration file is **/var/discourse/containers/app.yml**. + +Were you to log into the container and edit any of the files it contains, all changes would be lost if the container had to be restarted. + +LXC containers you're building from scratch are more flexible, with configuration files (in a location defined by you) passed to the container when you launch it. + +A build system like [Jenkins][5] usually has a default configuration file, such as **jenkins.yaml**, providing instructions for a base container image that exists only to build and run tests on source code. After the builds are done, the container goes away. + +Now that you know you don't need SSH to interact with your containers, here's an overview of what tools are available (and some notes about using SSH in spite of all the fancy tools that make it redundant). + +### OpenShift web console + +[OpenShift 4][6] offers an open source toolchain for container creation and maintenance, including an interactive web console. + +When you log into your web console, navigate to your project overview and click the **Applications** tab for a list of pods. Select a (running) pod to open the application's **Details** panel. + +![Pod details in OpenShift][7] + +Click the **Terminal** tab at the top of the **Details** panel to open an interactive shell in your container. + +![A terminal in a running container][8] + +If you prefer a browser-based experience for Kubernetes management, you can learn more through interactive lessons available at [learn.openshift.com][9]. + +### OpenShift oc + +If you prefer a command-line interface experience, you can use the **oc** command to interact with containers from the terminal. + +First, get a list of running pods (or refer to the web console for a list of active pods). 
To get that list, enter:


```
`$ oc get pods`
```

You can view the logs of a resource (a pod, build, or container). By default, **oc logs** returns the logs from the first container in the pod you specify. To select a single container, add the **\--container** option:


```
`$ oc logs --follow=true example-1-e1337 --container app`
```

You can also view logs from all containers in a pod with:


```
`$ oc logs --follow=true example-1-e1337 --all-containers`
```

#### Execute commands

You can execute commands remotely with:


```
$ oc exec example-1-e1337 --container app hostname
        example.local
```

This is similar to running SSH non-interactively: you get to run the command you want to run without an interactive shell taking over your environment.

#### Remote shell

You can attach to a running container. This still does _not_ open a shell in the container, but it does run commands directly. For example:


```
`$ oc attach example-1-e1337 --container app`
```

If you need a true interactive shell in a container, you can open a remote shell with the **oc rsh** command as long as the container includes a shell. By default, **oc rsh** launches **/bin/sh**:


```
`$ oc rsh example-1-e1337 --container app`
```

### Kubernetes

If you're using Kubernetes directly, you can use the **kubectl** **exec** command to run a Bash shell in your pod.

First, confirm that your pod is running:


```
`$ kubectl get pods`
```

As long as the pod containing your application is listed, you can use the **exec** command to launch a shell in the container. Using the name **example-pod** as the pod name, enter:


```
$ kubectl exec --stdin=false --tty=false example-pod -- /bin/bash
root@example.local:/# ls
bin   core etc   lib    root  srv
boot  dev  home  lib64  sbin  tmp  var
```

### Docker

The **docker** command is similar to **kubectl**. With the **dockerd** daemon running, get the name of the running container (you may have to use **sudo** to escalate privileges if you're not in the appropriate group):


```
$ docker ps
CONTAINER ID    IMAGE       COMMAND      NAME
678ac5cca78e    centos     "/bin/bash"   example-centos
```

Using the container name, you can run a command in the container:


```
$ docker exec example-centos cat /etc/os-release
CentOS Linux release 7.6
NAME="CentOS Linux"
VERSION="7"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
[...]
```

Or you can launch a Bash shell for an interactive session:


```
`$ docker exec -it example-centos /bin/bash`
```

### Containers and appliances

The important thing to remember when dealing with the cloud is that containers are essentially runtimes rather than virtual machines. While they have much in common with a Linux system (because they _are_ a Linux system!), they rarely translate directly to the commands and workflow you may have developed on your Linux workstation. However, like appliances, containers have an interface to help you develop, maintain, and monitor them, so get familiar with the front-end commands and services until you're happily interacting with them just as easily as you interact with virtual (or bare-metal) machines. Soon, you'll wonder why everything isn't developed to be ephemeral.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/how-ssh-running-container + +作者:[Seth Kenlon][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/seth/users/bcotton +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cube_innovation_process_block_container.png?itok=vkPYmSRQ (cubes coming together to create a larger cube) +[2]: https://opensource.com/article/18/11/behind-scenes-linux-containers +[3]: https://www.opencontainers.org/ +[4]: http://discourse.org +[5]: http://jenkins.io +[6]: https://www.openshift.com/learn/get-started +[7]: https://opensource.com/sites/default/files/uploads/openshift-pod-access.jpg (Pod details in OpenShift) +[8]: https://opensource.com/sites/default/files/uploads/openshift-pod-terminal.jpg (A terminal in a running container) +[9]: http://learn.openshift.com +[10]: mailto:root@example.local diff --git a/sources/tech/20190620 How to use OpenSSL- Hashes, digital signatures, and more.md b/sources/tech/20190620 How to use OpenSSL- Hashes, digital signatures, and more.md new file mode 100644 index 0000000000..724c97bc01 --- /dev/null +++ b/sources/tech/20190620 How to use OpenSSL- Hashes, digital signatures, and more.md @@ -0,0 +1,337 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to use OpenSSL: Hashes, digital signatures, and more) +[#]: via: (https://opensource.com/article/19/6/cryptography-basics-openssl-part-2) +[#]: author: (Marty Kalin https://opensource.com/users/mkalindepauledu) + +How to use OpenSSL: Hashes, digital signatures, and more +====== +Dig deeper into the details of cryptography with OpenSSL: Hashes, +digital signatures, digital certificates, and more +![A person working.][1] + +The [first article in this series][2] introduced hashes, encryption/decryption, digital signatures, and digital certificates through the OpenSSL libraries and command-line utilities. This second article drills down into the details. Let’s begin with hashes, which are ubiquitous in computing, and consider what makes a hash function _cryptographic_. + +### Cryptographic hashes + +The download page for the OpenSSL source code () contains a table with recent versions. Each version comes with two hash values: 160-bit SHA1 and 256-bit SHA256. These values can be used to verify that the downloaded file matches the original in the repository: The downloader recomputes the hash values locally on the downloaded file and then compares the results against the originals. Modern systems have utilities for computing such hashes. Linux, for instance, has **md5sum** and **sha256sum**. OpenSSL itself provides similar command-line utilities. + +Hashes are used in many areas of computing. For example, the Bitcoin blockchain uses SHA256 hash values as block identifiers. To mine a Bitcoin is to generate a SHA256 hash value that falls below a specified threshold, which means a hash value with at least N leading zeroes. (The value of N can go up or down depending on how productive the mining is at a particular time.) As a point of interest, today’s miners are hardware clusters designed for generating SHA256 hashes in parallel. 
During a peak time in 2018, Bitcoin miners worldwide generated about 75 million terahashes per second—yet another incomprehensible number. + +Network protocols use hash values as well—often under the name **checksum**—to support message integrity; that is, to assure that a received message is the same as the one sent. The message sender computes the message’s checksum and sends the results along with the message. The receiver recomputes the checksum when the message arrives. If the sent and the recomputed checksum do not match, then something happened to the message in transit, or to the sent checksum, or to both. In this case, the message and its checksum should be sent again, or at least an error condition should be raised. (Low-level network protocols such as UDP do not bother with checksums.) + +Other examples of hashes are familiar. Consider a website that requires users to authenticate with a password, which the user enters in their browser. Their password is then sent, encrypted, from the browser to the server via an HTTPS connection to the server. Once the password arrives at the server, it's decrypted for a database table lookup. + +What should be stored in this lookup table? Storing the passwords themselves is risky. It’s far less risky is to store a hash generated from a password, perhaps with some _salt_ (extra bits) added to taste before the hash value is computed. Your password may be sent to the web server, but the site can assure you that the password is not stored there. + +Hash values also occur in various areas of security. For example, hash-based message authentication code ([HMAC][3]) uses a hash value and a secret cryptographic key to authenticate a message sent over a network. HMAC codes, which are lightweight and easy to use in programs, are popular in web services. An X509 digital certificate includes a hash value known as the _fingerprint_, which can facilitate certificate verification. An in-memory truststore could be implemented as a lookup table keyed on such fingerprints—as a _hash map_, which supports constant-time lookups. The fingerprint from an incoming certificate can be compared against the truststore keys for a match. + +What special property should a _cryptographic hash function_ have? It should be _one-way_, which means very difficult to invert. A cryptographic hash function should be relatively straightforward to compute, but computing its inverse—the function that maps the hash value back to the input bitstring—should be computationally intractable. Here is a depiction, with **chf** as a cryptographic hash function and my password **foobar** as the sample input: + + +``` +        +---+ +foobar—>|chf|—>hash value ## straightforward +        +--–+ +``` + +By contrast, the inverse operation is infeasible: + + +``` +            +-----------+ +hash value—>|chf inverse|—>foobar ## intractable +            +-----------+ +``` + +Recall, for example, the SHA256 hash function. For an input bitstring of any length N > 0, this function generates a fixed-length hash value of 256 bits; hence, this hash value does not reveal even the input bitstring’s length N, let alone the value of each bit in the string. By the way, SHA256 is not susceptible to a [_length extension attack_][4]. The only effective way to reverse engineer a computed SHA256 hash value back to the input bitstring is through a brute-force search, which means trying every possible input bitstring until a match with the target hash value is found. 
Such a search is infeasible on a sound cryptographic hash function such as SHA256.

Now, a final review point is in order. Cryptographic hash values are statistically rather than unconditionally unique, which means that it is unlikely but not impossible for two different input bitstrings to yield the same hash value—a _collision_. The [_birthday problem_][5] offers a nicely counter-intuitive example of collisions. There is extensive research on various hash algorithms’ _collision resistance_. For example, MD5 (128-bit hash values) has a breakdown in collision resistance after roughly 2^21 hashes. For SHA1 (160-bit hash values), the breakdown starts at about 2^61 hashes.

A good estimate of the breakdown in collision resistance for SHA256 is not yet in hand. This fact is not surprising. SHA256 has a range of 2^256 distinct hash values, a number whose decimal representation has a whopping 78 digits! So, can collisions occur with SHA256 hashing? Of course, but they are extremely unlikely.

In the command-line examples that follow, two input files are used as bitstring sources: **hashIn1.txt** and **hashIn2.txt**. The first file contains **abc** and the second contains **1a2b3c**.

These files contain text for readability, but binary files could be used instead.

Using the Linux **sha256sum** utility on these two files at the command line—with the percent sign (**%**) as the prompt—produces the following hash values (in hex):


```
% sha256sum hashIn1.txt
9e83e05bbf9b5db17ac0deec3b7ce6cba983f6dc50531c7a919f28d5fb3696c3 hashIn1.txt

% sha256sum hashIn2.txt
3eaac518777682bf4e8840dd012c0b104c2e16009083877675f00e995906ed13 hashIn2.txt
```

The OpenSSL hashing counterparts yield the same results, as expected:


```
% openssl dgst -sha256 hashIn1.txt
SHA256(hashIn1.txt)= 9e83e05bbf9b5db17ac0deec3b7ce6cba983f6dc50531c7a919f28d5fb3696c3

% openssl dgst -sha256 hashIn2.txt
SHA256(hashIn2.txt)= 3eaac518777682bf4e8840dd012c0b104c2e16009083877675f00e995906ed13
```

This examination of cryptographic hash functions sets up a closer look at digital signatures and their relationship to key pairs.

### Digital signatures

As the name suggests, a digital signature can be attached to a document or some other electronic artifact (e.g., a program) to vouch for its authenticity. Such a signature is thus analogous to a hand-written signature on a paper document. To verify the digital signature is to confirm two things. First, that the vouched-for artifact has not changed since the signature was attached because it is based, in part, on a cryptographic _hash_ of the document. Second, that the signature belongs to the person (e.g., Alice) who alone has access to the private key in a pair. By the way, digitally signing code (source or compiled) has become a common practice among programmers.

Let’s walk through how a digital signature is created. As mentioned before, there is no digital signature without a public and private key pair. When using OpenSSL to create these keys, there are two separate commands: one to create a private key, and another to extract the matching public key from the private one. These key pairs are encoded in base64, and their sizes can be specified during this process.

The private key consists of numeric values, two of which (a _modulus_ and an _exponent_) make up the public key. Although the private key file contains the public key, the extracted public key does _not_ reveal the value of the corresponding private key.
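
To make the modulus and exponent concrete, the **rsa** subcommand can print a private-key file in readable form. (The command assumes the **privkey.pem** file generated below with **genpkey**; the exact labels vary a little across OpenSSL versions.)


```
openssl rsa -in privkey.pem -noout -text
```

Among other fields, the dump lists the **modulus** and the **publicExponent** (typically 65537), which together make up the public key, followed by the private components that must stay secret.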
+ +The resulting file with the private key thus contains the full key pair. Extracting the public key into its own file is practical because the two keys have distinct uses, but this extraction also minimizes the danger that the private key might be publicized by accident. + +Next, the pair’s private key is used to process a hash value for the target artifact (e.g., an email), thereby creating the signature. On the other end, the receiver’s system uses the pair’s public key to verify the signature attached to the artifact. + +Now for an example. To begin, generate a 2048-bit RSA key pair with OpenSSL: + +**openssl genpkey -out privkey.pem -algorithm rsa 2048** + +We can drop the **-algorithm rsa** flag in this example because **genpkey** defaults to the type RSA. The file’s name (**privkey.pem**) is arbitrary, but the Privacy Enhanced Mail (PEM) extension **pem** is customary for the default PEM format. (OpenSSL has commands to convert among formats if needed.) If a larger key size (e.g., 4096) is in order, then the last argument of **2048** could be changed to **4096**. These sizes are always powers of two. + +Here’s a slice of the resulting **privkey.pem** file, which is in base64: + + +``` +\-----BEGIN PRIVATE KEY----- +MIICdgIBADANBgkqhkiG9w0BAQEFAASCAmAwggJcAgEAAoGBANnlAh4jSKgcNj/Z +JF4J4WdhkljP2R+TXVGuKVRtPkGAiLWE4BDbgsyKVLfs2EdjKL1U+/qtfhYsqhkK +… +\-----END PRIVATE KEY----- +``` + +The next command then extracts the pair’s public key from the private one: + +**openssl rsa -in privkey.pem -outform PEM -pubout -out pubkey.pem** + +The resulting **pubkey.pem** file is small enough to show here in full: + + +``` +\-----BEGIN PUBLIC KEY----- +MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDZ5QIeI0ioHDY/2SReCeFnYZJY +z9kfk11RrilUbT5BgIi1hOAQ24LMilS37NhHYyi9VPv6rX4WLKoZCmkeYaWk/TR5 +4nbH1E/AkniwRoXpeh5VncwWMuMsL5qPWGY8fuuTE27GhwqBiKQGBOmU+MYlZonO +O0xnAKpAvysMy7G7qQIDAQAB +\-----END PUBLIC KEY----- +``` + +Now, with the key pair at hand, the digital signing is easy—in this case with the source file **client.c** as the artifact to be signed: + +**openssl dgst -sha256 -sign privkey.pem -out sign.sha256 client.c** + +The digest for the **client.c** source file is SHA256, and the private key resides in the **privkey.pem** file created earlier. The resulting binary signature file is **sign.sha256**, an arbitrary name. To get a readable (if base64) version of this file, the follow-up command is: + +**openssl enc -base64 -in sign.sha256 -out sign.sha256.base64** + +The file **sign.sha256.base64** now contains: + + +``` +h+e+3UPx++KKSlWKIk34fQ1g91XKHOGFRmjc0ZHPEyyjP6/lJ05SfjpAJxAPm075 +VNfFwysvqRGmL0jkp/TTdwnDTwt756Ej4X3OwAVeYM7i5DCcjVsQf5+h7JycHKlM +o/Jd3kUIWUkZ8+Lk0ZwzNzhKJu6LM5KWtL+MhJ2DpVc= +``` + +Or, the executable file **client** could be signed instead, and the resulting base64-encoded signature would differ as expected: + + +``` +VMVImPgVLKHxVBapJ8DgLNJUKb98GbXgehRPD8o0ImADhLqlEKVy0HKRm/51m9IX +xRAN7DoL4Q3uuVmWWi749Vampong/uT5qjgVNTnRt9jON112fzchgEoMb8CHNsCT +XIMdyaPtnJZdLALw6rwMM55MoLamSc6M/MV1OrJnk/g= +``` + +The final step in this process is to verify the digital signature with the public key. The hash used to sign the artifact (in this case, the executable **client** program) should be recomputed as an essential step in the verification since the verification process should indicate whether the artifact has changed since being signed. + +There are two OpenSSL commands used for this purpose. 
The first decodes the base64 signature: + +**openssl enc -base64 -d -in sign.sha256.base64 -out sign.sha256** + +The second verifies the signature: + +**openssl dgst -sha256 -verify pubkey.pem -signature sign.sha256 client** + +The output from this second command is, as it should be: + + +``` +`Verified OK` +``` + +To understand what happens when verification fails, a short but useful exercise is to replace the executable **client** file in the last OpenSSL command with the source file **client.c** and then try to verify. Another exercise is to change the **client** program, however slightly, and try again. + +### Digital certificates + +A digital certificate brings together the pieces analyzed so far: hash values, key pairs, digital signatures, and encryption/decryption. The first step toward a production-grade certificate is to create a certificate signing request (CSR), which is then sent to a certificate authority (CA). To do this for the example with OpenSSL, run: + +**openssl req -out myserver.csr -new -newkey rsa:4096 -nodes -keyout myserverkey.pem** + +This example generates a CSR document and stores the document in the file **myserver.csr** (base64 text). The purpose here is this: the CSR document requests that the CA vouch for the identity associated with the specified domain name—the common name (CN) in CA-speak. + +A new key pair also is generated by this command, although an existing pair could be used. Note that the use of **server** in names such as **myserver.csr** and **myserverkey.pem** hints at the typical use of digital certificates: as vouchers for the identity of a web server associated with a domain such as [www.google.com][6]. + +The same command, however, creates a CSR regardless of how the digital certificate might be used. It also starts an interactive question/answer session that prompts for relevant information about the domain name to link with the requester’s digital certificate. This interactive session can be short-circuited by providing the essentials as part of the command, with backslashes as continuations across line breaks. The **-subj** flag introduces the required information: + + +``` +% openssl req -new +-newkey rsa:2048 -nodes -keyout privkeyDC.pem +-out myserver.csr +-subj "/C=US/ST=Illinois/L=Chicago/O=Faulty Consulting/OU=IT/CN=myserver.com" +``` + +The resulting CSR document can be inspected and verified before being sent to a CA. This process creates the digital certificate with the desired format (e.g., X509), signature, validity dates, and so on: + +**openssl req -text -in myserver.csr -noout -verify** + +Here’s a slice of the output: + + +``` +verify OK +Certificate Request: +Data: +Version: 0 (0x0) +Subject: C=US, ST=Illinois, L=Chicago, O=Faulty Consulting, OU=IT, CN=myserver.com +Subject Public Key Info: +Public Key Algorithm: rsaEncryption +Public-Key: (2048 bit) +Modulus: +00:ba:36:fb:57:17:65:bc:40:30:96:1b:6e🇩🇪73: +… +Exponent: 65537 (0x10001) +Attributes: +a0:00 +Signature Algorithm: sha256WithRSAEncryption +… +``` + +### A self-signed certificate + +During the development of an HTTPS web site, it is convenient to have a digital certificate on hand without going through the CA process. A self-signed certificate fills the bill during the HTTPS handshake’s authentication phase, although any modern browser warns that such a certificate is worthless. 
Continuing the example, the OpenSSL command for a self-signed certificate—valid for a year and with an RSA public key—is: + +**openssl req -x509 -sha256 -nodes -days 365 -newkey rsa:4096 -keyout myserver.pem -out myserver.crt** + +The OpenSSL command below presents a readable version of the generated certificate: + +**openssl x509 -in myserver.crt -text -noout** + +Here’s part of the output for the self-signed certificate: + + +``` +Certificate: +Data: +Version: 3 (0x2) +Serial Number: 13951598013130016090 (0xc19e087965a9055a) +Signature Algorithm: sha256WithRSAEncryption +Issuer: C=US, ST=Illinois, L=Chicago, O=Faulty Consulting, OU=IT, CN=myserver.com +Validity +Not Before: Apr 11 17:22:18 2019 GMT +Not After : Apr 10 17:22:18 2020 GMT +Subject: C=US, ST=Illinois, L=Chicago, O=Faulty Consulting, OU=IT, CN=myserver.com +Subject Public Key Info: +Public Key Algorithm: rsaEncryption +Public-Key: (4096 bit) +Modulus: +00:ba:36:fb:57:17:65:bc:40:30:96:1b:6e🇩🇪73: +… +Exponent: 65537 (0x10001) +X509v3 extensions: +X509v3 Subject Key Identifier: +3A:32:EF:3D:EB:DF:65:E5:A8:96:D7:D7:16:2C:1B:29:AF:46:C4:91 +X509v3 Authority Key Identifier: +keyid:3A:32:EF:3D:EB:DF:65:E5:A8:96:D7:D7:16:2C:1B:29:AF:46:C4:91 + +        X509v3 Basic Constraints: +            CA:TRUE +Signature Algorithm: sha256WithRSAEncryption +     3a:eb:8d:09:53:3b:5c:2e:48:ed:14:ce:f9:20:01:4e:90:c9: +     ... +``` + +As mentioned earlier, an RSA private key contains values from which the public key is generated. However, a given public key does _not_ give away the matching private key. For an introduction to the underlying mathematics, see . + +There is an important correspondence between a digital certificate and the key pair used to generate the certificate, even if the certificate is only self-signed: + + * The digital certificate contains the _exponent_ and _modulus_ values that make up the public key. These values are part of the key pair in the originally-generated PEM file, in this case, the file **myserver.pem**. + * The exponent is almost always 65,537 (as in this case) and so can be ignored. + * The modulus from the key pair should match the modulus from the digital certificate. + + + +The modulus is a large value and, for readability, can be hashed. Here are two OpenSSL commands that check for the same modulus, thereby confirming that the digital certificate is based upon the key pair in the PEM file: + + +``` +% openssl x509 -noout -modulus -in myserver.crt | openssl sha1 ## modulus from CRT +(stdin)= 364d21d5e53a59d482395b1885aa2c3a5d2e3769 + +% openssl rsa -noout -modulus -in myserver.pem | openssl sha1 ## modulus from PEM +(stdin)= 364d21d5e53a59d482395b1885aa2c3a5d2e3769 +``` + +The resulting hash values match, thereby confirming that the digital certificate is based upon the specified key pair. + +### Back to the key distribution problem + +Let’s return to an issue raised at the end of Part 1: the TLS handshake between the **client** program and the Google web server. There are various handshake protocols, and even the Diffie-Hellman version at work in the **client** example offers wiggle room. Nonetheless, the **client** example follows a common pattern. + +To start, during the TLS handshake, the **client** program and the web server agree on a cipher suite, which consists of the algorithms to use. In this case, the suite is **ECDHE-RSA-AES128-GCM-SHA256**. 
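+
+To watch a suite being negotiated outside the **client** program, OpenSSL’s own test client can be pointed at the same server. The one-liner below is only a convenience check: it assumes outbound access to port 443, and the suite it reports may differ from the one above, depending on what the two sides agree to at that moment:
+
+
+```
+% openssl s_client -connect www.google.com:443 < /dev/null 2>/dev/null | grep -i cipher
+```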
+ +The two elements of interest now are the RSA key-pair algorithm and the AES128 block cipher used for encrypting and decrypting messages if the handshake succeeds. Regarding encryption/decryption, this process comes in two flavors: symmetric and asymmetric. In the symmetric flavor, the _same_ key is used to encrypt and decrypt, which raises the _key distribution problem_ in the first place: How is the key to be distributed securely to both parties? In the asymmetric flavor, one key is used to encrypt (in this case, the RSA public key) but a different key is used to decrypt (in this case, the RSA private key from the same pair). + +The **client** program has the Google web server’s public key from an authenticating certificate, and the web server has the private key from the same pair. Accordingly, the **client** program can send an encrypted message to the web server, which alone can readily decrypt this message. + +In the TLS situation, the symmetric approach has two significant advantages: + + * In the interaction between the **client** program and the Google web server, the authentication is one-way. The Google web server sends three certificates to the **client** program, but the **client** program does not send a certificate to the web server; hence, the web server has no public key from the client and can’t encrypt messages to the client. + * Symmetric encryption/decryption with AES128 is nearly a _thousand times faster_ than the asymmetric alternative using RSA keys. + + + +The TLS handshake combines the two flavors of encryption/decryption in a clever way. During the handshake, the **client** program generates random bits known as the pre-master secret (PMS). Then the **client** program encrypts the PMS with the server’s public key and sends the encrypted PMS to the server, which in turn decrypts the PMS message with its private key from the RSA pair: + + +``` +              +-------------------+ encrypted PMS  +--------------------+ +client PMS--->|server’s public key|--------------->|server’s private key|--->server PMS +              +-------------------+                +--------------------+ +``` + +At the end of this process, the **client** program and the Google web server now have the same PMS bits. Each side uses these bits to generate a _master secret_ and, in short order, a symmetric encryption/decryption key known as the _session key_. There are now two distinct but identical session keys, one on each side of the connection. In the **client** example, the session key is of the AES128 variety. Once generated on both the **client** program’s and Google web server’s sides, the session key on each side keeps the conversation between the two sides confidential. A handshake protocol such as Diffie-Hellman allows the entire PMS process to be repeated if either side (e.g., the **client** program) or the other (in this case, the Google web server) calls for a restart of the handshake. + +### Wrapping up + +The OpenSSL operations illustrated at the command line are available, too, through the API for the underlying libraries. These two articles have emphasized the utilities to keep the examples short and to focus on the cryptographic topics. If you have an interest in security issues, OpenSSL is a fine place to start—and to stay. 
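+
+One last hands-on note: the speed gap between RSA and AES mentioned above is easy to see for yourself with OpenSSL’s built-in benchmark. The command is only illustrative, since absolute numbers depend entirely on the machine, but the relative difference is hard to miss:
+
+
+```
+% openssl speed rsa2048 aes-128-cbc
+```
+
+RSA results are reported as signatures and verifications per second, and AES results as bytes processed per second, so the comparison is rough; even so, it shows why TLS reserves the RSA keys for the handshake and lets a symmetric session key carry the conversation.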
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/cryptography-basics-openssl-part-2 + +作者:[Marty Kalin][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mkalindepauledu +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003784_02_os.comcareers_os_rh2x.png?itok=jbRfXinl (A person working.) +[2]: https://opensource.com/article/19/6/cryptography-basics-openssl-part-1 +[3]: https://en.wikipedia.org/wiki/HMAC +[4]: https://en.wikipedia.org/wiki/Length_extension_attack +[5]: https://en.wikipedia.org/wiki/Birthday_problem +[6]: http://www.google.com diff --git a/sources/tech/20190620 You can-t buy DevOps.md b/sources/tech/20190620 You can-t buy DevOps.md new file mode 100644 index 0000000000..36717058a0 --- /dev/null +++ b/sources/tech/20190620 You can-t buy DevOps.md @@ -0,0 +1,103 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (You can't buy DevOps) +[#]: via: (https://opensource.com/article/19/6/you-cant-buy-devops) +[#]: author: (Julie Gunderson https://opensource.com/users/juliegund) + +You can't buy DevOps +====== +But plenty of people are happy to sell it to you. Here's why it's not +for sale. +![Coffee shop photo][1] + +![DevOps price tag graphic][2] + +Making a move to [DevOps][3] can be a daunting undertaking, with many organizations not knowing the right place to start. I recently had some fun taking a few "DevOps assessments" to see what solutions they offered. I varied my answers—from an organization that fully embraces DevOps to one at the beginning of the journey. Some of the assessments provided real value, linking me back to articles on culture and methodologies, while others merely offered me a tool promising to bring all my DevOps dreams into reality. + +Tools are absolutely essential to the DevOps journey; for instance, tools can continuously deliver, automate, or monitor your environment. However, **DevOps is not a product**, and tools alone will not enable the processes necessary to realize the full value of DevOps. People are what matter most; you can't do DevOps without building the people, mindset, and culture first. + +### Don't 'win' at DevOps; become a champion + +As a DevOps advocate at PagerDuty, I am proud to be a part of an organization with a strong commitment to DevOps methodologies, well beyond just "checking the boxes" of tool adoption. + +I recently had a conversation with PagerDuty CEO Jennifer Tejada about being a winner versus a champion. She talked about how winning is fantastic—you get a trophy, a title, or maybe even a few million dollars (if it's the lottery). However, in the big picture, winning is all about short-term goals, while being a champion means focusing on long-term successes or outcomes. This got me thinking about how to apply this principle to organizations embracing DevOps. + +One of my favorite examples of DevOps tooling is XebiaLabs' [Periodic Table of DevOps Tools][4]: + +[![Periodic Table of DevOps][5]][4] + +(Click table for interactive version.) + +The table shows that numerous tools fit into DevOps. However, too many times, I have heard about organizations "transforming to DevOps" by purchasing tools. 
While tooling is an essential part of the DevOps journey, a tool alone does not create a DevOps environment. You have to consider all the factors that make a DevOps team function well: collaboration, breaking down silos, defined processes, ownership, and automation, along with continuous improvement/continuous delivery. + +Deciding to purchase tooling is a great step in the right direction; what is more important is to define the "why" or the end goal behind decisions first. Which brings us back to the mentality of a champion; look at Olympic gold medalist Michael Phelps, for example. Phelps is the most decorated Olympian of all time and holds 39 world records. To achieve these accomplishments, Phelps didn't stop at one, two, or even 20 wins; he aimed to be a champion. This was all done through commitment, practice, and focusing on the desired end state. + +### DevOps defined + +There are hundreds of definitions for DevOps, but almost everyone can agree on the core tenet outlined in the [State of DevOps Report][6]: + +> "DevOps is a set of principles aimed at building culture and processes to help teams work more efficiently and deliver better software faster." + +You can't change culture and processes with a credit card. Tooling can enable an organization to collaborate better or automate or continuously deliver; however, without the right mindset and adoption, a tool's full capability may not be achievable. + +For example, one of my former colleagues heard how amazing Slack is for teams transforming to DevOps by opening up channels for collaboration. He convinced his manager that Slack would solve all of their communication woes. However, six months into the Slack adoption, most teams were still using Skype, including the manager. Slack ended up being more of a place to talk about brewing beer than a tool to bring the product to market faster. The issue was not Slack; it was the lack of buy-in from the team and organization and knowledge around the product's full functionality. + +Purchasing a tool can definitely be a win for a team, but purchasing a tool is not purchasing DevOps. Making tooling and best practices work for the team and achieving short- and long-term goals are where our conversation around being a champion comes up. This brings us back to the why, the overall and far-reaching goal for the team or organization. Once you identify the goal, how do you get buy-in from key stakeholders? After you achieve buy-in, how do you implement the solution? + +### Organizational change + +#### + +[![Change management comic by Randy Glasbergen][7]][8] + +Change is hard for many organizations and individuals; moreover, meaningful change does not happen overnight. It is important to understand how people and organizations process change. In the [Kotter 8-Step Process for Leading Change][9], it's about articulating the need for a change, creating urgency around the why, then starting small and finding and developing internal champions, _before_ trying to prove wins or, in this case, purchasing a tool. + +If people in an organization are not aware of a problem or that there's a better way of operating, it will be hard to get the buy-in necessary and motivate team members to adopt new ideas and take action. People may be perfectly content with the current state; perhaps the processes in place are adequate or, at a minimum, the current state is a known factor. 
However, for the overall team to function well and achieve its shared goal in a faster, more agile way, new mechanisms must be put into place first. + +![Kotter 8-Step Process for Leading Change][10] + +### How to be a DevOps champion + +Being a champion in the DevOps world means going beyond the win and delving deeper into the team/organizational structure and culture, thereby identifying outlying issues beyond tools, and then working with others to embrace the right change that leads to defined results. Go back to the beginning and define the end goal. Here are a few sample questions you can ask to get started: + + * What are your core values? + * Why are you trying to become a more agile company or team? + * What obstacles is your team or organization facing? + * What will the tool or process accomplish? + * How are people communicating and collaborating? + * Are there silos and why? + * How are you championing the customer? + * Are employees empowered? + + + +After defining the end state, find other like-minded individuals to be part of your champion team, and don't lose sight of what you are trying to accomplish. When making any change, make sure to start small, e.g., with one team or a test environment. By starting small and building on the wins, internal champions will start creating themselves. + +Remember, companies are happy and eager to try to sell you DevOps, but at the end of the day, DevOps is not a product. It is a fully embraced methodology and mindset of automation, collaboration, people, and processes. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/you-cant-buy-devops + +作者:[Julie Gunderson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/juliegund +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee-shop-devops.png?itok=CPefJZJL (Coffee shop photo) +[2]: https://opensource.com/sites/default/files/uploads/devops-pricetag.jpg (DevOps price tag graphic) +[3]: https://opensource.com/resources/devops +[4]: https://xebialabs.com/periodic-table-of-devops-tools/ +[5]: https://opensource.com/sites/default/files/uploads/periodic-table-of-devops-tools.png (Periodic Table of DevOps) +[6]: https://puppet.com/resources/whitepaper/state-of-devops-report +[7]: https://opensource.com/sites/default/files/uploads/cartoon.png (Change management comic by Randy Glasbergen) +[8]: https://images.app.goo.gl/JiMaWAenNkLcmkZJ9 +[9]: https://www.kotterinc.com/8-steps-process-for-leading-change/ +[10]: https://opensource.com/sites/default/files/uploads/kotter-process.png (Kotter 8-Step Process for Leading Change) diff --git a/sources/tech/20190621 7 infrastructure performance and scaling tools you should be using.md b/sources/tech/20190621 7 infrastructure performance and scaling tools you should be using.md new file mode 100644 index 0000000000..3a9003ae9c --- /dev/null +++ b/sources/tech/20190621 7 infrastructure performance and scaling tools you should be using.md @@ -0,0 +1,94 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (7 infrastructure performance and scaling tools you should be using) +[#]: via: (https://opensource.com/article/19/6/performance-scaling-tools) +[#]: author: 
(Pradeep SurisettyPeter Portante https://opensource.com/users/psuriset/users/aakarsh/users/portante/users/anaga) + +7 infrastructure performance and scaling tools you should be using +====== +These open source tools will help you feel confident in your +infrastructure's performance as it scales up. +![Several images of graphs.][1] + +[Sysadmins][2], [site reliability engineers][3] (SREs), and cloud operators all too often struggle to feel confident in their infrastructure as it scales up. Also too often, they think the only way to solve their challenges is to write a tool for in-house use. Fortunately, there are options. There are many open source tools available to test an infrastructure's performance. Here are my favorites. + +### Pbench + +Pbench is a performance testing harness to make executing benchmarks and performance tools easier and more convenient. In short, it: + + * Excels at running micro-benchmarks on large scales of hosts (bare-metal, virtual machines, containers, etc.) while automating a potentially large set of benchmark parameters + * Focuses on installing, configuring, and executing benchmark code and performance tools and not on provisioning or orchestrating the testbed (e.g., OpenStack, RHEV, RHEL, Docker, etc.) + * Is designed to work in concert with provisioning tools like BrowBeat or Ansible playbooks + + + +Pbench's [documentation][4] includes installation and user guides, and the code is [maintained on GitHub][5], where the team welcomes contributions and issues. + +### Ripsaw + +Baselining is a critical aspect of infrastructure reliability. Ripsaw is a performance benchmark Operator for launching workloads on Kubernetes. It deploys as a Kuberentes Operator that then deploys common workloads, including specific applications (e.g., Couchbase) or general performance tests (e.g., Uperf) to measure and establish a performance baseline. + +Ripsaw is [maintained on GitHub][6]. You can also find its maintainers on the [Kubernetes Slack][7], where they are active contributors. + +### OpenShift Scale + +The collection of tools in OpenShift Scale, OpenShift's open source solution for performance testing, do everything from spinning up OpenShift on OpenStack installations (TripleO Install and ShiftStack Install), installing on Amazon Web Services (AWS), or providing containerized tooling, like running Pbench on your cluster or doing cluster limits testing, network tests, storage tests, metric tests with Prometheus, logging, and concurrent build testing. + +Scale's CI suite is flexible enough to both add workloads and include your workloads when deploying to Azure or anywhere else you might run. You can see the full suite of tools [on GitHub][8]. + +### Browbeat + +[Browbeat][9] calls itself "a performance tuning and analysis tool for OpenStack." You can use it to analyze and tune the deployment of your workloads. It also automates the deployment of standard monitoring and data analysis tools like Grafana and Graphite. Browbeat is [maintained on GitHub][10]. + +### Smallfile + +Smallfile is a filesystem workload generator targeted for scale-out, distributed storage. It has been used to test a number of open filesystem technologies, including GlusterFS, CephFS, Network File System (NFS), Server Message Block (SMB), and OpenStack Cinder volumes. It is [maintained on GitHub][11]. + +### Ceph Benchmarking Tool + +Ceph Benchmarking Tool (CBT) is a testing harness that can automate tasks for testing [Ceph][12] cluster performance. 
It records system metrics with collectl, and it can collect more information with tools including perf, blktrace, and valgrind. CBT can also do advanced testing that includes automated object storage daemon outages, erasure-coded pools, and cache-tier configurations. + +Contributors have extended CBT to use [Pbench monitoring tools and Ansible][13] and to run the [Smallfile benchmark][14]. A separate Grafana visualization dashboard uses Elasticsearch data generated by [Automated Ceph Test][15]. + +### satperf + +Satellite-performance (satperf) is a set of Ansible playbooks and helper scripts to deploy Satellite 6 environments and measure the performance of selected actions, such as concurrent registrations, remote execution, Puppet operations, repository synchronizations and promotions, and more. You can find Satperf [on GitHub][16]. + +### Conclusion + +Sysadmins, SREs, and cloud operators face a wide variety of challenges as they work to scale their infrastructure, but luckily there is also a wide variety of tools to help them get past those common issues. Any of these seven tools should help you get started testing your infrastructure's performance as it scales. + +Are there other open source performance and scaling tools that should be on this list? Add your favorites in the comments. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/performance-scaling-tools + +作者:[Pradeep SurisettyPeter Portante][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/psuriset/users/aakarsh/users/portante/users/anaga +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/containers_scale_performance.jpg?itok=R7jyMeQf (Several images of graphs.) 
+[2]: /16/12/yearbook-10-open-source-sysadmin-tools +[3]: /article/19/5/life-performance-engineer +[4]: https://distributed-system-analysis.github.io/pbench/ +[5]: https://github.com/distributed-system-analysis/pbench +[6]: https://github.com/cloud-bulldozer/ripsaw +[7]: https://github.com/cloud-bulldozer/ripsaw#community +[8]: https://github.com/openshift-scale +[9]: https://browbeatproject.org/ +[10]: https://github.com/cloud-bulldozer/browbeat +[11]: https://github.com/distributed-system-analysis/smallfile +[12]: https://ceph.com/ +[13]: https://github.com/acalhounRH/cbt +[14]: https://nuget.pkg.github.com/bengland2/cbt/tree/smallfile +[15]: https://github.com/acalhounRH/automated_ceph_test +[16]: https://github.com/redhat-performance/satperf diff --git a/sources/tech/20190621 The state of open source translation tools for contributors to your project.md b/sources/tech/20190621 The state of open source translation tools for contributors to your project.md new file mode 100644 index 0000000000..07bac654fc --- /dev/null +++ b/sources/tech/20190621 The state of open source translation tools for contributors to your project.md @@ -0,0 +1,97 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The state of open source translation tools for contributors to your project) +[#]: via: (https://opensource.com/article/19/6/translation-platforms-matter) +[#]: author: (Jean-Baptiste Holcroft https://opensource.com/users/jibec/users/jibec) + +The state of open source translation tools for contributors to your project +====== +There are almost 100 languages with more than 10 million speakers. How +many of your active contributors speak one? +![Team of people around the world][1] + +In the world of free software, many people speak English: It is the **one** language. English helps us cross borders to meet others. However, this language is also a barrier for the majority of people. + +Some master it while others don't. Complex English terms are, in general, a barrier to the understanding and propagation of knowledge. Whenever you use an uncommon English word, ask yourself about your real mastery of what you are explaining, and the unintentional barriers you build in the process. + +_“If you talk to a man in a language he understands, that goes to his head. If you talk to him in his language, that goes to his heart.”_ — Nelson Mandela + +We are 7 billion humans, and less than 400 million of us are English natives. The wonders done day after day by free/libre open source contributors deserve to reach the hearts of the [6.6 billion people][2] for whom English is not their mother tongue. In this day and age, we have the technology to help translate all types of content: websites, documentation, software, and even sounds and images. Even if I do not translate of all of these media personally, I do not know of any real limits. The only prerequisite for getting this content translated is both the willingness of the creators and the collective will of the users, customers, and—in the case of free software—the contributors. + +### Why successful translation requires real tooling + +Some projects are stuck in the stone ages and require translators to use [Git][3], [Mercurial][4], or other development tools. These tools don’t meet the needs of translation communities. Let’s help these projects evolve, as discussed in the section "A call for action." 
+ +Other projects have integrated translation platforms, which are key tools for linguistic diversity and existence. These tools understand the needs of translators and serve as a bridge to the development world. They make translation contribution easy, and keep those doing the translations motivated over time. + +This aspect is important: There are almost 100 languages with more than 10 million speakers. Do you really believe that your project can have an active contributor for each of these languages? Unless you are a huge organization, like Mozilla or LibreOffice, there is no chance. The translators who help you also help two, ten, or a hundred other projects. They need tools to be effective, such as [translation memories][5], progress reports, alerts, ways to collaborate, and knowing that what they do is useful. + +### Translation platforms are in trouble + +However, the translation platforms distributed as free software are disappearing in favor of closed platforms. These platforms set their rules and efforts according to what will bring them the most profit. + +Linguistic and cultural diversity does not bring money: It opens doors and allows local development. It emancipates populations and can ensure the survival of certain cultures and languages. In the 21st century, is your culture really alive if it does not exist in cyberspace? + +The short history of translation platforms is not pretty: + + * In 2011, Transifex ceased to be open when they decided to no longer publish their source code. + * Since September 2017, the [Pootle][6] project seems to have stalled. + * In October 2018, the [Zanata][7] project shut down because it had not succeeded in building a community of technical contributors capable of taking over when corporate funding was halted. + + + +In particular, the [Fedora Project][8]—which I work closely with—has ridden the roller coaster from Transifex to Zanata and is now facing another move and more challenges. + +Two significant platforms remain: + + * [Pontoon][9]: Dedicated to the Mozilla use case (large community, common project). + * [Weblate][10]: A generic platform created by developer [Michal Čihař][11] (a generic purpose platform). + + + +These two tools are of high quality and are technically up-to-date, but Mozilla’s Pontoon is not designed to appeal to the greatest number of people. This project is dedicated to the specific challenges Mozilla faces.  + +### A call for action + +There is an urgent need for large communities to share resources to perpetuate Weblate as free software and promote its adoption. Support is also needed for other tools, such as [po4a][12], the [Translate Toolkit][13], and even our old friend [gettext][14]. Will we accept a sword of Damocles hanging over our heads? Will we continue to consume years of free work without giving a cent in return? Or will we take the lead in bringing security to our communities? + +**What you can do as a contributor**: Promote Weblate as an open source translation platform, and help your beloved project use it. [Hosting is free for open source projects][15]. + +**What you can do as a developer**: Make sure all of your project’s content can be translated into any language. Think about this issue from the beginning, as all tools don’t provide the same internationalization features. + +**What you can do as an entity with a budget**: Whether you’re a company or just part of the community, pay for the support, hosting, or development of the tools you use. 
Even if the amount is symbolic, doing this will lower the risks. In particular, [here is the info for Weblate][16]. (Note: I’m not involved with the Weblate project other than bug reports and translation.) + +**What to do if you’re a language enthusiast**: Contact me to help create an open source language organization to promote our tools and their usage, and find money to fund them. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/translation-platforms-matter + +作者:[Jean-Baptiste Holcroft][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jibec/users/jibec +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/team_global_people_gis_location.png?itok=Rl2IKo12 (Team of people around the world) +[2]: https://www.ethnologue.com/statistics/size +[3]: https://git-scm.com +[4]: https://www.mercurial-scm.org +[5]: https://en.wikipedia.org/wiki/Translation_memory +[6]: http://pootle.translatehouse.org +[7]: http://zanata.org +[8]: https://getfedora.org +[9]: https://github.com/mozilla/pontoon/ +[10]: https://weblate.org +[11]: https://cihar.com +[12]: https://po4a.org +[13]: http://docs.translatehouse.org/projects/translate-toolkit/en/latest/ +[14]: https://www.gnu.org/software/gettext/ +[15]: http://hosted.weblate.org/ +[16]: https://weblate.org/en/hosting/ diff --git a/sources/tech/20190624 Book Review- A Byte of Vim.md b/sources/tech/20190624 Book Review- A Byte of Vim.md new file mode 100644 index 0000000000..e221a3bc6f --- /dev/null +++ b/sources/tech/20190624 Book Review- A Byte of Vim.md @@ -0,0 +1,99 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Book Review: A Byte of Vim) +[#]: via: (https://itsfoss.com/book-review-a-byte-of-vim/) +[#]: author: (John Paul https://itsfoss.com/author/john/) + +Book Review: A Byte of Vim +====== + +[Vim][1] is a tool that is both simple and very powerful. Most new users will be intimidated by it because it doesn’t ‘work’ like regular graphical text editors. The ‘unusual’ keyboard shortcuts makes people wonder about [how to save and exit Vim][2]. But once you master Vim, there is nothing like it. + +There are numerous [Vim resources available online][3]. We have covered some Vim tricks on It’s FOSS as well. Apart from online resources, plenty of books have been dedicated to this editor as well. Today, we will look at one of such book that is designed to make Vim easy for most users to understand. The book we will be discussing is [A Byte of Vim][4] by [Swaroop C H][5]. + +The author [Swaroop C H][6] has worked in computing for over a decade. He previously worked at Yahoo and Adobe. Out of college, he made money by selling Linux CDs. He started a number of businesses, including an iPod charger named ion. He is currently an engineering manager for the AI team at [Helpshift][7]. + +### A Byte of Vim + +![][8] + +Like all good books, A Byte of Vim starts by talking about what Vim is: “a computer program used for writing any kind of text”. 
He does on to say, “What makes Vim special is that it is one of those few software which is both simple and powerful.” + +Before diving into telling how to use Vim, Swaroop tells the reader how to install Vim for Windows, Mac, Linux, and BSD. Once the installation is complete, he runs you through how to launch Vim and how to create your first file. + +Next, Swaroop discusses the different modes of Vim and how to navigate around your document using Vim’s keyboard shortcuts. This is followed by the basics of editing a document with Vim, including the Vim version of cut/copy/paste and undo/redo. + +Once the editing basics are covered, Swaroop talks about using Vim to edit multiple parts of a single document. You can also multiple tabs and windows to edit multiple documents at the same time. + +[][9] + +Suggested read  Bring Your Old Computer Back to Life With 4MLinux + +The book also covers extending the functionality of Vim through scripting and installing plugins. There are two ways to using scripts in Vim, use Vim’s built-in scripting language or using a programming language like Python or Perl to access Vim’s internals. There are five types of Vim plugins that can be written or downloaded: vimrc, global plugin, filetype plugin, syntax highlighting plugin, and compiler plugin. + +In a separate section, Swaroop C H covers the features of Vim that make it good for programming. These features include syntax highlighting, smart indentation, support for shell commands, omnicompletion, and the ability to be used as an IDE. + +#### Getting the ‘A Byte of Vim’ book and contributing to it + +A Byte of Book is licensed under [Creative Commons 4.0][10]. You can read an online version of the book for free on [the author’s website][4]. You can also download a [PDF][11], [Epub][12], or [Mobi][13] for free. + +[Get A Byte of Vim for FREE][4] + +If you prefer reading a [hard copy][14], you have that option, as well. + +Please note that the _**original version of A Byte of Vim was written in 2008**_ and converted to PDf. Unfortunately, Swaroop C H lost the original source files and he is working to convert the book to [Markdown][15]. If you would like to help, please visit the [book’s GitHub page][16]. + +Preview | Product | Price | +---|---|---|--- +![Mastering Vim Quickly: From WTF to OMG in no time][17] ![Mastering Vim Quickly: From WTF to OMG in no time][17] | [Mastering Vim Quickly: From WTF to OMG in no time][18] | $34.00[][19] | [Buy on Amazon][20] + +#### Conclusion + +When I first stared into the angry maw that is Vim, I did not have a clue what to do. I wish that I had known about A Byte of Vim then. This book is a good resource for anyone learning about Linux, especially if you are getting into the command line. + +Have you read [A Byte of Vim][4] by Swaroop C H? If yes, how do you find it? If not, what is your favorite book on an open source topic? Let us know in the comments below. + +[][21] + +Suggested read  Iridium Browser: A Browser for the Privacy Conscious + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][22]. 
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/book-review-a-byte-of-vim/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://www.vim.org/ +[2]: https://itsfoss.com/how-to-exit-vim/ +[3]: https://linuxhandbook.com/basic-vim-commands/ +[4]: https://vim.swaroopch.com/ +[5]: https://swaroopch.com/ +[6]: https://swaroopch.com/about/ +[7]: https://www.helpshift.com/ +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/06/Byte-of-vim-book.png?resize=800%2C450&ssl=1 +[9]: https://itsfoss.com/4mlinux-review/ +[10]: https://creativecommons.org/licenses/by/4.0/ +[11]: https://www.gitbook.com/download/pdf/book/swaroopch/byte-of-vim +[12]: https://www.gitbook.com/download/epub/book/swaroopch/byte-of-vim +[13]: https://www.gitbook.com/download/mobi/book/swaroopch/byte-of-vim +[14]: https://swaroopch.com/buybook/ +[15]: https://itsfoss.com/best-markdown-editors-linux/ +[16]: https://github.com/swaroopch/byte-of-vim#status-incomplete +[17]: https://i2.wp.com/images-na.ssl-images-amazon.com/images/I/41itW8furUL._SL160_.jpg?ssl=1 +[18]: https://www.amazon.com/Mastering-Vim-Quickly-WTF-time/dp/1983325740?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=1983325740 (Mastering Vim Quickly: From WTF to OMG in no time) +[19]: https://www.amazon.com/gp/prime/?tag=chmod7mediate-20 (Amazon Prime) +[20]: https://www.amazon.com/Mastering-Vim-Quickly-WTF-time/dp/1983325740?SubscriptionId=AKIAJ3N3QBK3ZHDGU54Q&tag=chmod7mediate-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=1983325740 (Buy on Amazon) +[21]: https://itsfoss.com/iridium-browser-review/ +[22]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20190624 Check your password security with Have I Been Pwned- and pass.md b/sources/tech/20190624 Check your password security with Have I Been Pwned- and pass.md new file mode 100644 index 0000000000..d4557725ba --- /dev/null +++ b/sources/tech/20190624 Check your password security with Have I Been Pwned- and pass.md @@ -0,0 +1,113 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Check your password security with Have I Been Pwned? and pass) +[#]: via: (https://opensource.com/article/19/6/check-passwords) +[#]: author: (Brian "bex" Exelbierd https://opensource.com/users/bexelbie/users/jason-baker/users/admin/users/mtsouk) + +Check your password security with Have I Been Pwned? and pass +====== +Periodically checking for password compromise is an excellent way to +help ward off most attackers in most threat models. +![Password lock][1] + +Password security involves a broad set of practices, and not all of them are appropriate or possible for everyone. Therefore, the best strategy is to develop a threat model by thinking through your most significant risks—who and what you are protecting against—then model your security approach on the activities that are most effective against those specific threats. The Electronic Frontier Foundation (EFF) has a [great series on threat modeling][2] that I encourage everyone to read. 
+ +In my threat model, I am very concerned about the security of my passwords against (among other things) [dictionary attacks][3], in which an attacker uses a list of likely or known passwords to try to break into a system. One way to stop dictionary attacks is to have your service provider rate-limit or deny login attempts after a certain number of failures. Another way is not to use passwords in the "known passwords" dataset. + +### Check password security with HIBP + +[Troy Hunt][4] created [Have I Been Pwned?][5] (HIBP) to notify people when their information is found in leaked data dumps and breaches. If you haven't already registered, you should, as the mere act of registering exposes nothing. Troy has built a collection of over 550 million real-world passwords from this data. These are passwords that real people used and were exposed by data that was stolen or accidentally made public. + +The site _does not_ publish the plaintext password list, but it doesn't have to. By definition, this data is already out there. If you've ever reused a password or used a "common" password, then you are at risk because someone is building a dictionary of these passwords to try right now. + +Recently, Firefox and HIBP announced they are [teaming up][6] to make breach searches easier. And the National Institutes of Standards and Technology (NIST) recommends that you [check passwords][7] against those known to be compromised and change them if they are found. HIBP supports this via a password-checking feature that is exposed via an API, so it is easy to use. + +Now, it would be a bad idea to send the website a full list of your passwords. While I trust [HaveIBeenPwned.com][5], it could be compromised one day. Instead, the site uses a process called [k-Anonymity][8] that allows you to check your passwords without exposing them. This is a three-step process. First, let's review the steps, and then we can use the **pass-pwned** plugin to do it for us: + + 1. Create a hash value of your password. A hash value is just a way of turning arbitrary data—your password—into a fixed data representation—the hash value. A cryptographic hash function is collision-resistant, meaning it creates a unique hash value for every input. The algorithm used for the hash is a one-way transformation, which makes it hard to know the input value if you only have the hash value. For example, using the SHA-1 algorithm that HIBP uses, the password **hunter2** becomes **F3BBBD66A63D4BF1747940578EC3D0103530E21D**. + 2. Send the first five characters (**F3BBB** in our example) to the site, and the site will send back a list of all the hash values that start with those five characters. This way, the site can't know which hash values you are interested in. The k-Anonymity process ensures there is so much statistical noise that it is hard for a compromised site to determine which password you inquired about. For example, our query returns a list of 527 potential matches from HIBP. + 3. Search through the list of results to see if your hash is there. If it is, your password has been compromised. If it isn't, the password isn't in a publicly known data breach. HIBP returns a bonus in its data: a count of how many times the password has been seen in data breaches. Astoundingly, **hunter2** has been seen 17,043 times! + + + +### Check password security with pass + +I use [**pass**][9], a [GNU Privacy Guard][10]-based password manager. 
It has many extensions, which are available on the [**pass** website][11] and as a separately maintained [awesome-style list][12]. One of these extensions is [**pass-pwned**][13], which will check your passwords with HIBP. Both **pass** and **pass-pwned** are packaged for Fedora 29, 30, and Rawhide. You can install the extension with: + + +``` +`sudo dnf install pass pass-pwned` +``` + +or you can follow the manual instructions on their respective websites. + +If you're just getting started with **pass**, read [Managing passwords the open source way][14] for a great overview. + +The following will quickly set up **pass** and check a stored password. This example assumes you already have a GPG key. + + +``` +# Setup a pass password store +$ pass init <GPG key email> + +# Add the password, "hunter2" to the store +$ pass insert awesome-site.com + +# Install the pass-pwned extension +# Download the bash script from the upstream and then review it +$ mkdir ~/.password-store/.extensions +$ wget -O ~/.password-store/.extensions/pwned.bash +$ vim ~/.password-store/.extensions/pwned.bash + +# If everything is OK, set it executable and enable pass extensions +$ chmod u+x ~/.password-store/.extensions/pwned.bash +$ echo 'export PASSWORD_STORE_ENABLE_EXTENSIONS="true"' >> ~/.bash_profile +$ source ~/.bash_profile + +# Check the password +$ pass pwned awesome-site.com +Password found in haveibeenpwned 17043 times + +# Change this password to something randomly generated and verify it +$ pass generate -i awesoem-site.com +The generated password for awesome-site.com is: +<REDACTED> +$ pass pwned awesome-site.com +Password not found in haveibeenpwned +``` + +Congratulations, your password is now more secure than it was before! You can also [use wildcards to check multiple passwords][15] at once. + +Periodically checking for password compromise is an excellent way to help ward off most attackers in most threat models. If your password management system doesn't make it this easy, you may want to upgrade to something like **pass**. 
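+
+If you are curious about the k-Anonymity exchange itself, you can reproduce what the plugin does with nothing more than **sha1sum** and **curl**. Here is a rough sketch using the **hunter2** example from above; the host queried is the Pwned Passwords range API, and the count at the end of the matching line has almost certainly grown past the 17,043 quoted earlier:
+
+
+```
+# hash the candidate password, then keep only the first five hex characters for the query
+$ echo -n 'hunter2' | sha1sum
+f3bbbd66a63d4bf1747940578ec3d0103530e21d  -
+
+# ask for every suffix that shares the prefix, then look for ours
+$ curl -s https://api.pwnedpasswords.com/range/F3BBB | grep -i d66a63d4bf1747940578ec3d0103530e21d
+D66A63D4BF1747940578EC3D0103530E21D:17043
+```
+
+A matching line back means the password appears in the breach data; no match means it does not.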
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/6/check-passwords + +作者:[Brian "bex" Exelbierd][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bexelbie/users/jason-baker/users/admin/users/mtsouk +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/password.jpg?itok=ec6z6YgZ (Password lock) +[2]: https://ssd.eff.org/en/module/your-security-plan +[3]: https://en.wikipedia.org/wiki/Dictionary_attack +[4]: https://www.troyhunt.com/ +[5]: https://haveibeenpwned.com/ +[6]: https://www.troyhunt.com/were-baking-have-i-been-pwned-into-firefox-and-1password/ +[7]: https://pages.nist.gov/800-63-FAQ/#q-b5 +[8]: https://blog.cloudflare.com/validating-leaked-passwords-with-k-anonymity/ +[9]: https://www.passwordstore.org/ +[10]: https://gnupg.org/ +[11]: https://www.passwordstore.org/#extensions +[12]: https://github.com/tijn/awesome-password-store +[13]: https://github.com/alzeih/pass-pwned +[14]: https://opensource.com/life/14/7/managing-passwords-open-source-way +[15]: https://github.com/alzeih/pass-pwned/issues/3 diff --git a/sources/tech/20190624 How to Install and Configure KVM on RHEL 8.md b/sources/tech/20190624 How to Install and Configure KVM on RHEL 8.md new file mode 100644 index 0000000000..3021561dd8 --- /dev/null +++ b/sources/tech/20190624 How to Install and Configure KVM on RHEL 8.md @@ -0,0 +1,256 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to Install and Configure KVM on RHEL 8) +[#]: via: (https://www.linuxtechi.com/install-configure-kvm-on-rhel-8/) +[#]: author: (Pradeep Kumar https://www.linuxtechi.com/author/pradeep/) + +How to Install and Configure KVM on RHEL 8 +====== + +**KVM** is an open source virtualization technology which converts your Linux machine into a type-1 bare-metal hypervisor that allows you to run multiple virtual machines (VMs) or guest VMs + + + +KVM stands for **Kernel based Virtual machine**, as the name suggests KVM is a kernel module, once it is loaded into the kernel , then your Linux machine will start working as a KVM hypervisor. In this article we will demonstrate how to install KVM on RHEL 8 system but before start installing KVM on your RHEL 8 system first we have to make sure that your system’s processor supports hardware virtualization extensions like **Intel VT** or **AMD-V** and enabled it from BIOS. + +**RHEL 8 KVM Lab Details:** + + * OS = RHEL 8 + * Hostname = rhel8-kvm + * Ethernet Cards = ens32 –  192.168.1.4 & ens36 – 192.168..1.12 + * RAM = 4 GB + * CPU = 2 + * Disk = 40 GB Free Space (/var/libvirtd) + + + +Let’s Jump into the KVM installation steps + +### Step:1) Verify Hardware Virtualization is enabled or not + +Open the terminal and execute the beneath egrep command + +``` +[root@linuxtechi ~]# egrep -c '(vmx|svm)' /proc/cpuinfo +2 +[root@linuxtechi ~]# +``` + +If output of above egrep command is equal to 1 or more than 1 then this confirms that hardware virtualization is enabled and supported. 
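+
+The flag itself also tells you which vendor extension is present: **vmx** means Intel VT-x and **svm** means AMD-V. To print the flag name instead of a count, run the beneath variation (output would be svm on AMD hardware),
+
+```
+[root@linuxtechi ~]# egrep -o '(vmx|svm)' /proc/cpuinfo | sort -u
+vmx
+[root@linuxtechi ~]#
+```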
+ +Alternate way to check whether hardware virtualization is enabled or not , execute the beneath command, + +``` +[root@linuxtechi ~]# lscpu | grep Virtualization: +Virtualization: VT-x +[root@linuxtechi opt]# +``` + +If there is no output in above command then it confirms that Virtualization is not enabled from BIOS. + +**Note:** To enable hardware virtualization reboot your system, go to bios settings and then look for Intel VT or AMD virtualization option and enable one of this option which which suits to your system architecture. + +### Step:2) Install KVM and its dependent packages using dnf + +Run the following dnf command to install KVM and its dependent packages, + +``` +[root@linuxtechi ~]# dnf install qemu-kvm qemu-img libvirt virt-install libvirt-client virt-manager -y +``` + +Once above packages has been installed successfully, then run the below command to confirm whether KVM module has been loaded into the kernel or not, + +``` +root@linuxtechi ~]# lsmod | grep -i kvm +kvm_intel 245760 0 +kvm 745472 1 kvm_intel +irqbypass 16384 1 kvm +[root@linuxtechi ~]# +``` + +### Step:3) Enable and Start libvirtd service + +Run the following systemctl command to enable and start libvirtd service, + +``` +[root@linuxtechi ~]# systemctl enable libvirtd +[root@linuxtechi ~]# systemctl start libvirtd +``` + +### Step:4) Create Network bridge and attach Interface to it  + +In RHEL 8, network scripts are deprecated, We have to use Network Manager (nmcli / nmtui) to configure network and network bridges. + +I have two Ethernet cards on my server, ens36 will attached to bridge br0 and ens32 will be used for management . + +``` +[root@linuxtechi ~]# nmcli connection show +NAME UUID TYPE DEVICE +ens32 1d21959d-e2ea-4129-bb89-163486c8d7bc ethernet ens32 +ens36 1af408b6-c98e-47ce-bca7-5141b721f8d4 ethernet ens36 +virbr0 d0f05de4-4b3b-4710-b904-2524b5ad11bf bridge virbr0 +[root@linuxtechi ~]# +``` + +Delete the existing connection of interface “ens36” + +``` +[root@linuxtechi ~]# nmcli connection delete ens36 +Connection 'ens36' (1af408b6-c98e-47ce-bca7-5141b721f8d4) successfully deleted. +[root@linuxtechi ~]# +``` + +Create a Network Bridge with name “**br0**” using mcli command, + +``` +[root@linuxtechi ~]# nmcli connection add type bridge autoconnect yes con-name br0 ifname br0 +Connection 'br0' (62c14e9d-3e72-41c2-8ecf-d17978ad02da) successfully added. +[root@linuxtechi ~]# +``` + +Assign the same IP of ens36 to the bridge interface using following nmcli commands, + +``` +[root@linuxtechi ~]# nmcli connection modify br0 ipv4.addresses 192.168.1.12/24 ipv4.method manual +[root@linuxtechi ~]# nmcli connection modify br0 ipv4.gateway 192.168.1.1 +[root@linuxtechi ~]# nmcli connection modify br0 ipv4.dns 192.168.1.1 +``` + +Add ens36 interface as bridge salve to the network bridge br0, + +``` +[root@linuxtechi ~]# nmcli connection add type bridge-slave autoconnect yes con-name ens36 ifname ens36 master br0 +Connection 'ens36' (0c2065bc-ad39-47a7-9a3e-85c80cd73c94) successfully added. 
+[root@linuxtechi ~]# +``` + +Now bring up the network bridge using beneath nmcli command, + +``` +[root@linuxtechi ~]# nmcli connection up br0 +Connection successfully activated (master waiting for slaves) (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/9) +[root@linuxtechi ~]# +``` + +Verify the connections using following command, + +``` +[root@linuxtechi ~]# nmcli connection show +NAME UUID TYPE DEVICE +br0 00bcff8f-af85-49ad-9196-974de2d9d9d1 bridge br0 +ens32 1d21959d-e2ea-4129-bb89-163486c8d7bc ethernet ens32 +ens36 eaef13c9-c24e-4a3f-ae85-21bf0610471e ethernet ens36 +virbr0 d0f05de4-4b3b-4710-b904-2524b5ad11bf bridge virbr0 +[root@linuxtechi ~]# +``` + +View the bridge (br0) details and status using ip command, + +![rhel-8-bridge-details][1] + +**Note:** If you want to use network-scripts in RHEL 8 system then install network-scripts packages, + +``` +~ ]# dnf install network-scripts -y +``` + +### Step:5) Creating and Managing KVM Virtual Machines + +In RHEL 8 there are different ways to create and manage KVM virtual machines, + + * virt-manager (GUI) + * Command Line tools (**virt-install** & **virsh**) + + + +During the KVM installation we have already installed virt-manager and virt-install packages. + +### Creating Virtual Machines using virt-manager GUI tool: + +Run the virt-manager command from command line or Access virt-manager from RHEL 8 Desktop + +[![Access-Virt-Manager-RHEL8][2]][3] + +Click on Monitor Icon to create a new guest VM (Virtual Machine), + +Choose Local Installation Media as ISO, + +[![Choose-ISO-KVM-RHEL8][4]][5] + +Click on forward, + +In the next screen, browse the OS installation ISO file , in my case i have placed Ubuntu 18.04 LTS server ISO file under /opt folder, + +[![Installation-ISO-File-RHEL8-KVM][6]][7] + +click on Forward to Proceed further, + +In the next window you will be prompted to specify RAM and vCPU for your virtual machine, so specify the values that suits your installation and then click on Forward, + +[![Specify-RAM-CPU-KVM-RHEL8][8]][9] + +In next window specify disk size for your Virtual Machine and the click on Forward, in my case i am giving disk space for my VM as 20 GB, + +[![Disk-Image-RHEL8-KVM-VM][10]][11] + +In the next window, specify the name of VM and choose the Network that you want to attach to VM’s Ethernet card, as we had created network bridge “br0” for vms networking, so choose bridge“br0”. + +[![Network-Selection-KVM-RHEL8][12]][13] + +Click on Finish to proceed with VM creation and its OS installation, + +[![OS-Installation-KVM-VM-RHEL8][14]][15] + +Follow the screen Instructions and complete the Installation. + +**Creating KVM Virtual Machine from Command Line** + +if you are fan of command line then there is a command line tool for you called “**virt-install**” to create virtual machines. Once the Virtual machines are provisioned then vms can be managed via command line tool “[virsh][16]“. 
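As a quick illustration of that day-to-day management (these are standard virsh sub-commands, shown here with the VM name used in the example that follows), typical operations look like this:

```
[root@linuxtechi ~]# virsh list --all                 # list all defined VMs and their state
[root@linuxtechi ~]# virsh start CentOS7-Server       # power on a VM
[root@linuxtechi ~]# virsh shutdown CentOS7-Server    # request a clean guest shutdown
[root@linuxtechi ~]# virsh console CentOS7-Server     # attach to the VM serial console
```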
+ +Let’s assume we want to create CentOS 7 VM using virt-install, i have already placed CentOS 7 ISO file under /opt folder, + +Execute beneath command to provision a VM + +``` +[root@linuxtechi ~]# virt-install -n CentOS7-Server --description "CentOS 7 Virtual Machine" --os-type=Linux --os-variant=rhel7 --ram=1096 --vcpus=1 --disk path=/var/lib/libvirt/images/centos7-server.img,bus=virtio,size=20 --network bridge:br0 --graphics none --location /opt/CentOS-7-x86_64-DVD-1511.iso --extra-args console=ttyS0 +``` + +Output of command would be something like below, + +![Virt-Install-KVM-RHEL8][17] + +Follow screen instructions to complete CentOS 7 Installation. That’s all from this tutorial, i hope these steps helped you to setup KVM on your RHEL 8 system, please do share your feedback and comments. + +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/install-configure-kvm-on-rhel-8/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/wp-content/uploads/2019/06/rhel-8-bridge-details-1024x628.jpg +[2]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Access-Virt-Manager-RHEL8-1024x471.jpg +[3]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Access-Virt-Manager-RHEL8.jpg +[4]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Choose-ISO-KVM-RHEL8-1024x479.jpg +[5]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Choose-ISO-KVM-RHEL8.jpg +[6]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Installation-ISO-File-RHEL8-KVM-1024x477.jpg +[7]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Installation-ISO-File-RHEL8-KVM.jpg +[8]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Specify-RAM-CPU-KVM-RHEL8-1024x478.jpg +[9]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Specify-RAM-CPU-KVM-RHEL8.jpg +[10]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Disk-Image-RHEL8-KVM-VM-1024x483.jpg +[11]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Disk-Image-RHEL8-KVM-VM.jpg +[12]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Network-Selection-KVM-RHEL8-1024x482.jpg +[13]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Network-Selection-KVM-RHEL8.jpg +[14]: https://www.linuxtechi.com/wp-content/uploads/2019/06/OS-Installation-KVM-VM-RHEL8-1024x479.jpg +[15]: https://www.linuxtechi.com/wp-content/uploads/2019/06/OS-Installation-KVM-VM-RHEL8.jpg +[16]: https://www.linuxtechi.com/create-revert-delete-kvm-virtual-machine-snapshot-virsh-command/ +[17]: https://www.linuxtechi.com/wp-content/uploads/2019/06/Virt-Install-KVM-RHEL8.jpg diff --git a/sources/tech/20190624 Linux Package Managers Compared - AppImage vs Snap vs Flatpak.md b/sources/tech/20190624 Linux Package Managers Compared - AppImage vs Snap vs Flatpak.md new file mode 100644 index 0000000000..df5686d0a5 --- /dev/null +++ b/sources/tech/20190624 Linux Package Managers Compared - AppImage vs Snap vs Flatpak.md @@ -0,0 +1,222 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux Package Managers Compared – AppImage vs Snap vs Flatpak) +[#]: via: (https://www.ostechnix.com/linux-package-managers-compared-appimage-vs-snap-vs-flatpak/) +[#]: author: (editor 
https://www.ostechnix.com/author/editor/) + +Linux Package Managers Compared – AppImage vs Snap vs Flatpak +====== + +![3 Linux Package Managers Compared][1] + +**Package managers** provide a way of packaging, distributing, installing, and maintaining apps in an operating system. With modern desktop, server and IoT applications of the Linux operating system and the hundreds of different distros that exist, it becomes necessary to move away from platform specific packaging methods to platform agnostic ones. This post explores 3 such tools, namely **AppImage** , **Snap** and **Flatpak** , that each aim to be the future of software deployment and management in Linux. At the end we summarize a few key findings. + +### 1\. AppImage + +**AppImage** follows a concept called **“One app = one file”**. This is to be understood as an AppImage being a regular independent “file” containing one application with everything it needs to run in the said file. Once made executable, the AppImage can be run like any application in a computer by simply double-clicking it in the users file system.[1] + +It is a format for creating portable software for Linux without requiring the user to install the said application. The format allows the original developers of the software (upstream developers) to create a platform and distribution independent (also called a distribution-agnostic binary) version of their application that will basically run on any flavor of Linux. + +AppImage has been around for a long time. **Klik** , a predecessor of AppImage was created by **Simon Peter** in 2004. The project was shut down in 2011 after not having passed the beta stage. A project named **PortableLinuxApps** was created by Simon around the same time and the format was picked up by a few portals offering software for Linux users. The project was renamed again in 2013 to its current name AppImage and a repository has been maintained in GitHub (project [link][2]) with all the latest changes to the same since 2018.[2][3] + +Written primarily in **C** and donning the **MIT license** since 2013, AppImage is currently developed by **The AppImage project**. It is a very convenient way to use applications as demonstrated by the following features: + + 1. AppImages can run on virtually any Linux system. As mentioned before applications derive a lot of functionality from the operating system and a few common libraries. This is a common practice in the software world since if something is already done, there is no point in doing it again if you can pick and choose which parts from the same to use. The problem is that many Linux distros might not have all the files a particular application requires to run since it is left to the developers of that particular distro to include the necessary packages. Hence developers need to separately include the dependencies of the application for each Linux distro they are publishing their app for. Using the AppImage format developers can choose to include all the libraries and files that they cannot possibly hope the target operating system to have as part of the AppImage file. Hence the same AppImage format file can work on different operating systems and machines without needing granular control. + 2. The one app one file philosophy means that user experience is simple and elegant in that users need only download and execute one file that will serve their needs for using the application. + 3. **No requirement of root access**. 
System administrators will require people to have root access to stop them from messing with computers and their default setup. This also means that people with no root access or super user privileges cannot install the apps they need as they please. The practice is common in a public setting (such as library or university computers or on enterprise systems). The AppImage file does not require users to “install” anything and hence users need only download the said file and **make it executable** to start using it. This removes the access dilemmas that system administrators have and makes their job easier without sacrificing user experience. + 4. **No effect on core operating system**. The AppImage-application format allows using applications with their full functionality without needing to change or even access most system files. Meaning whatever the applications do, the core operating system setup and files remain untouched. + 5. An AppImage can be made by a developer for a particular version of their application. Any updated version is made as a different AppImage. Hence users if need be can **test multiple versions of the same application** by running different instances using different AppImages. This is an invaluable feature when you need to test your applications from an end-user POV to notice differences. + 6. Take your applications where you go. As mentioned previously AppImages are archived files of all the files that an application requires and can be used without installing or even bothering about the distribution the system uses. Hence if you have a set of apps that you use regularly you may even mount a few AppImage files on a thumb drive and take it with you to use on multiple computers running multiple different distros without worrying whether they’ll work or not. + + + +Furthermore, the **AppImageKit** allows users from all backgrounds to build their own AppImages from applications they already have or for applications that are not provided an AppImage by their upstream developer. + +The package manager is platform independent but focuses primarily on software distribution to end users on their desktops with a dedicated daemon **AppImaged** for integrating the AppImage formats into respective desktop environments. AppImage is supported natively now by a variety of distros such as Ubuntu, Debian, openSUSE, CentOS, Fedora etc. and others may set it up as per their needs. AppImages can also be run on servers with limited functionality via the CLI tools included. + +To know more about AppImages, go to the official [**AppImage documentation**][3] page. + +* * * + +**Suggested read:** + + * [**Search Linux Applications On AppImage, Flathub And Snapcraft Platforms**][4] + + + +* * * + +### 2\. Snappy + +**Snappy** is a software deployment and package management system like AppImage or any other package manager for that instance. It is originally designed for the now defunct **Ubuntu Touch** Operating system. Snappy lets developers create software packages for use in a variety of Linux based distributions. The initial intention behind creating Snappy and deploying **“snaps”** on Ubuntu based systems is to obtain a unified single format that could be used in everything from IoT devices to full-fledged computer systems that ran some version of Ubuntu and in a larger sense Linux itself.[4] + +The lead developer behind the project is **Canonical** , the same company that pilots the Ubuntu project. 
Ubuntu had native snap support from version 16.04 LTS with more and more distros supporting it out of the box or via a simple setup these days. If you use Arch or Debian or openSUSE you’ll find it easy to install support for the package manager using simple commands in the terminal as explained later in this section. This is also made possible by making the necessary snap platform files available on the respective repos.[5] + +Snappy has the following important components that make up the entire package manager system.[6] + + * **Snap** – is the file format of the packages themselves. Individual applications that are deployed using Snappy are called “Snaps”. Any application may be packaged using the tools provided to make a snap that is intended to run on a different system running Linux. Snap, similar to AppImage is an all-inclusive file and contains all dependencies the application needs to run without assuming them to part of the target system. + * **Snapcraft** – is the tool that lets developers make snaps of their applications. It is basically a command that is part of the snap system as well as a framework that will let you build your own snaps. + * **Snapd** – is the background daemon that maintains all the snaps that are installed in your system. It integrates into the desktop environment and manages all the files and processes related to working with snaps. The snapd daemon also checks for updates normally **4 times a day** unless set otherwise. + * [**Snap Store**][5] – is an online gallery of sorts that lets developers upload their snaps into the repository. Snap store is also an application discovery medium for users and will let users see and experience the application library before downloading and installing them. + + + +The snapd component is written primarily in **C** and **Golang** whereas the Snapcraft framework is built using **Python**. Although both the modules use the GPLv3 license it is to be noted that snapd has proprietary code from Canonical for its server-side operations with just the client side being published under the GPL license. This is a major point of contention with developers since this involves developers signing a CLA form to participate in snap development.[7] + +Going deeper into the finer details of the Snappy package manager the following may be noted: + + 1. Snaps as noted before are all inclusive and contain all the necessary files (dependencies) that the application needs to run. Hence, developers need not to make different snaps for the different distros that they target. Being mindful of the runtimes is all that’s necessary if base runtimes are excluded from the snap. + 2. Snappy packages are meant to support transactional updates. Such a transactional update is atomic and fully reversible, meaning you can use the application while its being updated and that if an update does not behave the way its supposed to, you can reverse the same with no other effects whatsoever. The concept is also called as **delta programming** in which only changes to the application are transmitted as an update instead of the whole package. An Ubuntu derivative called **Ubuntu Core** actually promises the snappy update protocol to the OS itself.[8] + 3. A key point of difference between snaps and AppImages, is how they handle version differences. Using AppImages different versions of the application will have different AppImages allowing you to concurrently use 2 or more different versions of the same application at the same time. 
However, using snaps means conforming to the transactional or delta update system. While this means faster updates, it keeps you from running two instances of the same application at the same time. If you need to use the old version of an app you’ll need to reverse or uninstall the new version. Snappy does support a feature called [**“parallel install”**][6] which will let users accomplish similar goals, however, it is still in an experimental stage and cannot be considered to be a stable implementation. Snappy also makes use of channels meaning you can use the beta or the nightly build of an app and the stable version at the same time.[9] + 4. Extensive support from major Linux distros and major developers including Google, Mozilla, Microsoft, etc.[4] + 5. Snapd the desktop integration tool supports taking **“snapshots”** of the current state of all the installed snaps in the system. This will let users save the current configuration state of all the applications that are installed via the Snappy package manager and let users revert to that state whenever they desire so. The same feature can also be set to automatically take snapshots at a frequency deemed necessary by the user. Snapshots can be created using the **snap save command** in the snapd framework.[10] + 6. Snaps are designed to be sandboxed during operation. This provides a much-required layer of security and isolation to users. Users need not worry about snap-based applications messing with the rest of the software on their computer. Sandboxing is implemented using three levels of isolation viz, **classic** , **strict** and **devmode**. Each level of isolation allows the app different levels of access within the file system and computer.[11] + + + +On the flip side of things, snaps are widely criticized for being centered around **Canonical’s modus operandi**. Most of the commits to the project are by Canonical employees or contractors and other contributors are required to sign a release form (CLA). The sandboxing feature, a very important one indeed from a security standpoint, is flawed in that the sandboxing actually requires certain other core services to run (such as Mir) while applications running the X11 desktop won’t support the said isolation, hence making the said security feature irrelevant. Questionable press releases and other marketing efforts from Canonical and the “central” and closed app repository are also widely criticized aspects of Snappy. Furthermore, the file sizes of the different snaps are also **comparatively very large** compared to the app sizes of the packages made using AppImage.[7] + +For more details, check [**Snap official documentation**][7]. + +* * * + +**Related read:** + + * [**Install Snap packages in Arch Linux, and Fedora**][8] + + + +* * * + +### 3\. Flatpak + +Like the Snap/Snappy listed above, **Flatpak** is also a software deployment tool that aims to ease software distribution and use in Linux. Flatpak was previously known as **“xdg-app”** and was based on concept proposed by **Lennart Poettering** in 2004. The idea was to contain applications in a secure virtual sandbox allowing for using applications **without the need of root privileges** and without compromising on the systems security. **Alex** started tinkering with Klik (thought to be a former version of AppImage) and wanted to implement the concept better. **Alexander Larsson** who at the time was working with Red Hat wrote an implementation called xdg-app in 2015 that acted as a pre-cursor to the current Flatpak format. 
+ +Flatpak officially came out in 2016 with backing from Red Hat, Endless Computers and Collabora. **Flathub** is the official repository of all Flatpak application packages. At its surface Flatpak like the other is a framework for building and packaging distribution agnostic applications for Linux. It simply requires the developers to conform to a few desktop environment guidelines in order for the application to be successfully integrated into the Flatpak environment. + +Targeted primarily at the three popular desktop implementations **FreeDesktop** , **KDE** , and **GNOME** , the Flatpak framework itself is written in **C** and works on a **LGPL** license. The maintenance repository can be accessed via the GitHub link **[here][9]**. + +A few features of Flatpak that make it stand apart are mentioned below. Notice that features Flatpak shares with AppImage and Snappy are omitted here. + + * Deep integration into popular Linux desktop environments such as GNOME & KDE so that users can simply use Flatpaks using Graphical software management tools instead of resorting to the terminal. Flatpak can be installed from the default repositories of major desktop environments now and once the apps themselves are set-up they can be used and provide features similar to normal desktop applications.[12][13] + * **Forward-compatibility** – Flatpaks are built from the ground up keeping the operating systems core kernel and runtimes in mind. Hence, even if you upgrade or update your distro the Flatpaks you have should still work unless there is a core update. This is especially crucial for people who prefer staying on rolling betas or development versions of their distros. For such people, since the kinks of the OS itself isn’t ironed out usually, the Flatpak application will run seamlessly without having to depend on the OS files or libraries for its operation.[13] + * **Sandboxing using Bubblewrap** – snaps are also by default sandboxed in that they run in isolation from the rest of the applications running while you’re using your computer. However, Flatpaks fully seal the application from accessing OS files and user files during its operation by default. This essentially means that system administrators can be certain that Flatpaks that are installed in their systems cannot exploit the computer and the files it contains whereas for end users this will mean that in order to access a few specific functions or user data root permission is required.[14] + * Flatpak supports decentralized distribution of application natively however the team behind Flatpak still maintains a central online repository of apps/Flatpaks called **Flathub**. Users may in fact configure Flatpak to use multiple remote repositories as they see necessary. As opposed to snap you can have multiple repositories.[13] + * Modular access through the sandbox. Although this capability comes at a great potential cost to the integrity of the system, Flatpak framework allows for channels to be created through the sandbox for exchange of specific information from within the sandbox to the host system or vice versa. The channel is in this case referred to as a portal. A con to this feature is discussed later in the section.[14] + + + +One of the most criticized aspects of Flatpak however is it’s the sandbox feature itself. Sandboxing is how package managers such as Snappy and Flatpak implement important security features. 
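(As a practical aside that is not drawn from the original article: the permissions a given Flatpak runs with can be listed and tightened from the command line; the application ID below is only a placeholder.)

```
$ flatpak info --show-permissions org.example.App               # show the sandbox permissions the app ships with
$ flatpak override --user --nofilesystem=home org.example.App   # example: revoke access to the home directory
```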
Sandboxing essentially isolates the application from everything else in the system only allowing for user defined exchange of information from within the sandbox to outside. The flaw with the concept being that the sandbox cannot be inherently impregnable. Data has to be eventually transferred between the two domains and simple Linux commands can simply get rid of the sandbox restriction meaning that malicious applications might potentially jump out of the said sandbox.[15] + +This combined with the worse than expected commitment to rolling out security updates for Flatpak has resulted in widespread criticism of the team’s tall claim of providing a secure framework. The blog (named **flatkill** ) linked at the end of this guide in fact mentions a couple of exploits that were not addressed by the Flatpak team as soon as they should’ve been.[15] + +For more details, I suggest you to read [**Flatpak official documentation**][10]. + +* * * + +**Related read:** + + * [**A Beginners Guide To Flatpak**][11] + + + +* * * + +### AppImage vs Snap vs Flatpak + +The table attached below summarizes all the above findings into a concise and technical comparison of the three frameworks. + +**Feature** | **AppImage** | **Snappy** | **Flatpak** +---|---|---|--- +**Unique feature** | Not an appstore or repository, its simply put a packaging format for software distribution. | Led by Canonical (Same company as Ubuntu), features central app repository and active contribution from Canonical. | Features an app store called FlatHub, however, individuals may still host packages and distribute it. +**Target system** | Desktops and Servers. | Desktops, Servers, IoT devices, Embedded devices etc. | Desktops and limited function on servers. +**Libraries/Dependencies** | Base system. Runtimes optional, Libraries and other dependencies packaged. | Base system or via Plugins or can be packaged. | GNOME, KDE, Freedesktop bundled or custom bundled. +**Developers** | Community Driven led by Simon Peter. | Corporate driven by Canonical Ltd. | Community driven by flatpak team supported by enterprise. +**Written in** | C. | Golang, C and Python. | C. +**Initial release** | 2004. | 2014. | 2015. +**Sandboxing** | Can be implemented. | 3 modes – strict, classic, and devmode with varying confinement capabilities. Runs in isolation. | Isolated but Uses system files to run applications by default. +**Sandboxing Platform** | Firejail, AppArmor, Bubblewrap. | AppArmor. | Bubblewrap. +**App Installation** | Not necessary. Will act as self mounted disc. | Installation using snapd. | Installed using flatpak client tools. +**App Execution** | Can be run after setting executing bit. | Using desktop integrated snap tools. Runs isolated with user defined resources. | Needs to be executed using flatpak command if CLI is used. +**User Privileges** | Can be run w/o root user access. | Can be run w/o root user access. | Selectively required. +**Hosting Applications** | Can be hosted anywhere by anybody. | Has to be hosted with Canonical servers which are proprietary. | Can be hosted anywhere by anybody. +**Portable Execution from non system locations** | Yes. | No. | Yes, after flatpak client is configured. +**Central Repository** | AppImageHub. | Snap Store. | Flathub. +**Running multiple versions of the app** | Possible, any number of versions simultaneously. | One version of the app in one channel. Has to be separately configured for more. | Yes. 
+**Updating applications** | Using CLI command AppImageUpdate or via an updater tool built into the AppImage. | Requires snapd installed. Supports delta updating, will automatically update. | Required flatpak installed. Update Using flatpak update command. +**Package sizes on disk** | Application remains archived. | Application remains archived. | Client side is uncompressed. + +Here is a long tabular comparison of AppImage vs. Snap vs. Flatpak features. Please note that the comparison is made from an AppImage perspective. + + * [**https://github.com/AppImage/AppImageKit/wiki/Similar-projects#comparison**][12] + + + +### Conclusion + +While all three of these platforms have a lot in common with each other and aim to be platform agnostic in approach, they offer different levels of competencies in a few areas. While Snaps can run on a variety of devices including embedded ones, AppImages and Flatpaks are built with the desktop user in mind. AppImages of popular applications on the other had have superior packaging sizes and portability whereas Flatpak really shines with its forward compatibility when its used in a set it and forget it system. + +If there are any flaws in this guide, please let us know in the comment section below. We will update the guide accordingly. + +**References:** + + * **[1]** [**Concepts — AppImage documentation**][13] + * **[2]** [**Slashdot – Point-and-klik Linux Software Installation**][14] + * **[3]** [**History of AppImage project**][15] + * **[4][Snapcraft – Snaps are universal Linux packages][16]** + * **[5][Installing snapd – Snap documentation][17]** + * **[6][Snap documentation][7]** + * **[7][On Snappy and Flatpak: business as usual in the Canonical propaganda department][18]** + * **[8][Snap Updates are getting smaller, here’s why][19]** + * **[9][What Are Linux Snap Packages? 
Why Use Them?][20]** + * **[10][Snapshots – Snap documentation][21]** + * **[11][Snap confinement – Snap documentation][22]** + * **[12][Desktop Integration – Flatpak documentation][23]** + * **[13][Introduction to Flatpak – Flatpak documentation][24]** + * **[14][Sandbox Permissions – Flatpak documentation][25]** + * **[15][Flatpak – a security nightmare][26]** + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/linux-package-managers-compared-appimage-vs-snap-vs-flatpak/ + +作者:[editor][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/editor/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/wp-content/uploads/2019/06/Linux-Package-Managers-Compared-1-720x340.png +[2]: https://github.com/AppImage/AppImageKit/blob/master/README.md +[3]: https://docs.appimage.org/ +[4]: https://www.ostechnix.com/search-linux-applications-on-appimage-flathub-and-snapcraft-platforms/ +[5]: https://snapcraft.io/store +[6]: https://blog.ubuntu.com/2019/06/20/parallel-installs-test-and-run-multiple-instances-of-snaps +[7]: https://docs.snapcraft.io/ +[8]: https://www.ostechnix.com/install-snap-packages-arch-linux-fedora/ +[9]: https://github.com/flatpak/flatpak +[10]: http://docs.flatpak.org/en/latest/index.html +[11]: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/ +[12]: https://github.com/AppImage/AppImageKit/wiki/Similar-projects#comparison +[13]: https://docs.appimage.org/introduction/concepts.html#one-app-one-file. +[14]: https://linux.slashdot.org/story/05/01/15/1815210/point-and-klik-linux-software-installation +[15]: https://github.com/AppImage/AppImageKit/wiki/History#timeline. +[16]: https://snapcraft.io/# +[17]: https://docs.snapcraft.io/installing-snapd +[18]: https://www.happyassassin.net/2016/06/16/on-snappy-and-flatpak-business-as-usual-in-the-canonical-propaganda-department/ +[19]: https://blog.ubuntu.com/2017/08/01/snap-updates-are-getting-smaller-heres-why +[20]: https://www.feliciano.tech/blog/what-are-linux-snap-packages-why-use-them/ +[21]: https://docs.snapcraft.io/snapshots +[22]: https://docs.snapcraft.io/snap-confinement +[23]: http://docs.flatpak.org/en/latest/desktop-integration.html?highlight=desktop%20integration +[24]: http://docs.flatpak.org/en/latest/introduction.html +[25]: http://docs.flatpak.org/en/latest/sandbox-permissions.html?highlight=sandboxing +[26]: https://flatkill.org/ diff --git a/sources/tech/20190625 The cost of JavaScript in 2019 - V8.md b/sources/tech/20190625 The cost of JavaScript in 2019 - V8.md new file mode 100644 index 0000000000..e21eefd5ad --- /dev/null +++ b/sources/tech/20190625 The cost of JavaScript in 2019 - V8.md @@ -0,0 +1,178 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (The cost of JavaScript in 2019 · V8) +[#]: via: (https://v8.dev/blog/cost-of-javascript-2019) +[#]: author: (Addy Osmani https://twitter.com/addyosmani) + +The cost of JavaScript in 2019 · V8 +====== +**Note:** If you prefer watching a presentation over reading articles, then enjoy the video below! If not, skip the video and read on. + +[“The cost of JavaScript”][1] as presented by Addy Osmani at #PerfMatters Conference 2019. 
+ +One large change to [the cost of JavaScript][2] over the last few years has been an improvement in how fast browsers can parse and compile script. **In 2019, the dominant costs of processing scripts are now download and CPU execution time.** + +User interaction can be delayed if the browser’s main thread is busy executing JavaScript, so optimizing bottlenecks with script execution time and network can be impactful. + +### Actionable high-level guidance # + +What does this mean for web developers? Parse & compile costs are **no longer as slow** as we once thought. The three things to focus on for JavaScript bundles are: + + * **Improve download time** + * Keep your JavaScript bundles small, especially for mobile devices. Small bundles improve download speeds, lower memory usage, and reduce CPU costs. + * Avoid having just a single large bundle; if a bundle exceeds ~50–100 kB, split it up into separate smaller bundles. (With HTTP/2 multiplexing, multiple request and response messages can be in flight at the same time, reducing the overhead of additional requests.) + * On mobile you’ll want to ship much less especially because of network speeds, but also to keep plain memory usage low. + * **Improve execution time** + * Avoid [Long Tasks][3] that can keep the main thread busy and can push out how soon pages are interactive. Post-download, script execution time is now a dominant cost. + * **Avoid large inline scripts** (as they’re still parsed and compiled on the main thread). A good rule of thumb is: if the script is over 1 kB, avoid inlining it (also because 1 kB is when [code caching][4] kicks in for external scripts). + + + +### Why does download and execution time matter? # + +Why is it important to optimize download and execution times? Download times are critical for low-end networks. Despite the growth in 4G (and even 5G) across the world, our [effective connection types][5] remain inconsistent with many of us running into speeds that feel like 3G (or worse) when we’re on the go. + +JavaScript execution time is important for phones with slow CPUs. Due to differences in CPU, GPU, and thermal throttling, there are huge disparities between the performance of high-end and low-end phones. This matters for the performance of JavaScript, as execution is CPU-bound. + +In fact, of the total time a page spends loading in a browser like Chrome, anywhere up to 30% of that time can be spent in JavaScript execution. Below is a page load from a site with a pretty typical workload (Reddit.com) on a high-end desktop machine:![][6]JavaScript processing represents 10–30% of time spent in V8 during page load. + +On mobile, it takes 3–4× longer for a median phone (Moto G4) to execute Reddit’s JavaScript compared to a high-end device (Pixel 3), and over 6× as long on a low-end device (the <$100 Alcatel 1X):![][7]The cost of Reddit’s JavaScript across a few different device classes (low-end, average, and high-end) + +**Note:** Reddit has different experiences for desktop and mobile web, and so the MacBook Pro results cannot be compared to the other results. + +When you’re trying to optimize JavaScript execution time, keep an eye out for [Long Tasks][8] that might be monopolizing the UI thread for long periods of time. These can block critical tasks from executing even if the page looks visually ready. Break these up into smaller tasks. 
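One common way to do that (a sketch of the idea rather than code from the article; the module and element names are made up) is to defer non-critical work behind a dynamic `import()` so it never runs as part of one long start-up task:

```js
// Instead of importing and executing everything up front,
// load the rarely-needed module only when the user actually asks for it.
document.getElementById('show-comments').addEventListener('click', async () => {
  const { renderComments } = await import('./comments.js'); // hypothetical module
  renderComments();
});
```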
By splitting up your code and prioritizing the order in which it is loaded, you can get pages interactive faster and hopefully have lower input latency.![][9]Long tasks monopolize the main thread. You should break them up. + +### What has V8 done to improve parse/compile? # + +Raw JavaScript parsing speed in V8 has increased 2× since Chrome 60. At the same time, raw parse (and compile) cost has become less visible/important due to other optimization work in Chrome that parallelizes it. + +V8 has reduced the amount of parsing and compilation work on the main thread by an average of 40% (e.g. 46% on Facebook, 62% on Pinterest) with the highest improvement being 81% (YouTube), by parsing and compiling on a worker thread. This is in addition to the existing off-main-thread streaming parse/compile.![][10]V8 parse times across different versions + +We can also visualize the CPU time impact of these changes across different versions of V8 across Chrome releases. In the same amount of time it took Chrome 61 to parse Facebook’s JS, Chrome 75 can now parse both Facebook’s JS AND 6 times Twitter’s JS.![][11]In the time it took Chrome 61 to parse Facebook’s JS, Chrome 75 can now parse both Facebook’s JS and 6 times Twitter’s JS. + +Let’s dive into how these changes were unlocked. In short, script resources can be streaming-parsed and-compiled on a worker thread, meaning: + + * V8 can parse+compile JavaScript without blocking the main thread. + * Streaming starts once the full HTML parser encounters a `