@@ -253,7 +254,7 @@ $(this).html(''); });titletitletitletitletitle ``` -接下来,定义表模型。 这是提供所有表选项的地方,包括界面的滚动,而不是分页,根据 dom 字符串提供的装饰,将数据导出为 CSV 和其他格式的能力,以及建立与服务器的 Ajax 连接。 请注意,使用 Groovy GString 调用 Grails **createLink()** 的方法创建 URL,在 **EmployeeController** 中指向 **browserLister** 操作。同样有趣的是表格列的定义。此信息将发送到后端,后端查询数据库并返回相应的记录。 +接下来,定义表模型。这是提供所有表选项的地方,包括界面的滚动,而不是分页,根据 DOM 字符串提供的装饰,将数据导出为 CSV 和其他格式的能力,以及建立与服务器的 AJAX 连接。 请注意,使用 Groovy GString 调用 Grails `createLink()` 的方法创建 URL,在 `EmployeeController` 中指向 `browserLister` 操作。同样有趣的是表格列的定义。此信息将发送到后端,后端查询数据库并返回相应的记录。 ``` var table = $('#employee_dt').DataTable( { @@ -302,7 +303,7 @@ that.search(this.value).draw(); ![](https://opensource.com/sites/default/files/uploads/screen_4.png) -这是另一个屏幕截图,显示了过滤和多列排序(寻找 position 包括字符 “dev” 的员工,先按 office 排序,然后按姓氏排序): +这是另一个屏幕截图,显示了过滤和多列排序(寻找 “position” 包括字符 “dev” 的员工,先按 “office” 排序,然后按姓氏排序): ![](https://opensource.com/sites/default/files/uploads/screen_5.png) @@ -314,37 +315,37 @@ that.search(this.value).draw(); ![](https://opensource.com/sites/default/files/uploads/screen7.png) -好的,视图部分看起来非常简单; 因此,控制器必须做所有繁重的工作,对吧? 让我们来看看… +好的,视图部分看起来非常简单;因此,控制器必须做所有繁重的工作,对吧? 让我们来看看…… #### 控制器 browserLister 操作 -回想一下,我们看到过这个字符串 +回想一下,我们看到过这个字符串: ``` "${createLink(controller: 'employee', action: 'browserLister')}" ``` -对于从 DataTables 模型中调用 Ajax 的 URL,是在 Grails 服务器上动态创建 HTML 链接,其 Grails 标记背后通过调用 [createLink()][17] 的方法实现的。这会最终产生一个指向 **EmployeeController** 的链接,位于: +对于从 DataTables 模型中调用 AJAX 的 URL,是在 Grails 服务器上动态创建 HTML 链接,其 Grails 标记背后通过调用 [createLink()][17] 的方法实现的。这会最终产生一个指向 `EmployeeController` 的链接,位于: ``` embrow/grails-app/controllers/com/nuevaconsulting/embrow/EmployeeController.groovy ``` -特别是控制器方法 **browserLister()**。我在代码中留了一些 print 语句,以便在运行时能够在终端看到中间结果。 +特别是控制器方法 `browserLister()`。我在代码中留了一些 `print` 语句,以便在运行时能够在终端看到中间结果。 ```     def browserLister() {         // Applies filters and sorting to return a list of desired employees ``` -首先,打印出传递给 **browserLister()** 的参数。我通常使用此代码开始构建控制器方法,以便我完全清楚我的控制器正在接收什么。 +首先,打印出传递给 `browserLister()` 的参数。我通常使用此代码开始构建控制器方法,以便我完全清楚我的控制器正在接收什么。 ```       println "employee browserLister params $params"         println() ``` -接下来,处理这些参数以使它们更加有用。首先,jQuery DataTables 参数,一个名为 **jqdtParams**的 Groovy 映射: +接下来,处理这些参数以使它们更加有用。首先,jQuery DataTables 参数,一个名为 `jqdtParams` 的 Groovy 映射: ``` def jqdtParams = [:] @@ -363,7 +364,7 @@ println "employee dataTableParams $jqdtParams" println() ``` -接下来,列数据,一个名为 **columnMap**的 Groovy 映射: +接下来,列数据,一个名为 `columnMap` 的 Groovy 映射: ``` def columnMap = jqdtParams.columns.collectEntries { k, v -> @@ -386,7 +387,7 @@ println "employee columnMap $columnMap" println() ``` -接下来,从 **columnMap** 中检索的所有列表,以及在视图中应如何排序这些列表,Groovy 列表分别称为 **allColumnList**和 **orderList**: +接下来,从 `columnMap` 中检索的所有列表,以及在视图中应如何排序这些列表,Groovy 列表分别称为 `allColumnList` 和 `orderList` : ``` def allColumnList = columnMap.keySet() as List @@ -395,7 +396,7 @@ def orderList = jqdtParams.order.collect { k, v -> [allColumnList[v.column as In println "employee orderList $orderList" ``` -我们将使用 Grails 的 Hibernate 标准实现来实际选择要显示的元素以及它们的排序和分页。标准要求过滤器关闭; 在大多数示例中,这是作为标准实例本身的创建的一部分给出的,但是在这里我们预先定义过滤器闭包。请注意,在这种情况下,“date hired” 过滤器的相对复杂的解释被视为一年并应用于建立日期范围,并使用 **createAlias** 以允许我们进入相关类别 Position 和 Office: +我们将使用 Grails 的 Hibernate 标准实现来实际选择要显示的元素以及它们的排序和分页。标准要求过滤器关闭;在大多数示例中,这是作为标准实例本身的创建的一部分给出的,但是在这里我们预先定义过滤器闭包。请注意,在这种情况下,“date hired” 过滤器的相对复杂的解释被视为一年并应用于建立日期范围,并使用 `createAlias` 以允许我们进入相关类别 `Position` 和 `Office`: ``` def filterer = { @@ -424,14 +425,14 @@ def filterer = { } ``` -是时候应用上述内容了。第一步是获取分页代码所需的所有 Employee 实例的总数: +是时候应用上述内容了。第一步是获取分页代码所需的所有 
`Employee` 实例的总数: ```         def recordsTotal = Employee.count()         println "employee recordsTotal $recordsTotal" ``` -接下来,将过滤器应用于 Employee 实例以获取过滤结果的计数,该结果将始终小于或等于总数(同样,这是针对分页代码): +接下来,将过滤器应用于 `Employee` 实例以获取过滤结果的计数,该结果将始终小于或等于总数(同样,这是针对分页代码): ```         def c = Employee.createCriteria() @@ -467,7 +468,7 @@ def filterer = { 要完全清楚,JTable 中的分页代码管理三个计数:数据集中的记录总数,应用过滤器后得到的数字,以及要在页面上显示的数字(显示是滚动还是分页)。 排序应用于所有过滤的记录,并且分页应用于那些过滤的记录的块以用于显示目的。 -接下来,处理命令返回的结果,在每行中创建指向 Employee,Position 和 Office 实例的链接,以便用户可以单击这些链接以获取相关实例的所有详细信息: +接下来,处理命令返回的结果,在每行中创建指向 `Employee`、`Position` 和 `Office` 实例的链接,以便用户可以单击这些链接以获取相关实例的所有详细信息: ```         def dollarFormatter = new DecimalFormat('$##,###.##') @@ -490,14 +491,15 @@ def filterer = { } ``` -大功告成 +大功告成。 + 如果你熟悉 Grails,这可能看起来比你原先想象的要多,但这里没有火箭式的一步到位方法,只是很多分散的操作步骤。但是,如果你没有太多接触 Grails(或 Groovy),那么需要了解很多新东西 - 闭包,代理和构建器等等。 在那种情况下,从哪里开始? 最好的地方是了解 Groovy 本身,尤其是 [Groovy closures][18] 和 [Groovy delegates and builders][19]。然后再去阅读上面关于 Grails 和 Hibernate 条件查询的建议阅读文章。 ### 结语 -jQuery DataTables 为 Grails 制作了很棒的表格数据浏览器。对视图进行编码并不是太棘手,但DataTables 文档中提供的 PHP 示例提供的功能仅到此位置。特别是,它们不是用 Grails 程序员编写的,也不包含探索使用引用其他类(实质上是查找表)的元素的更精细的细节。 +jQuery DataTables 为 Grails 制作了很棒的表格数据浏览器。对视图进行编码并不是太棘手,但 DataTables 文档中提供的 PHP 示例提供的功能仅到此位置。特别是,它们不是用 Grails 程序员编写的,也不包含探索使用引用其他类(实质上是查找表)的元素的更精细的细节。 我使用这种方法制作了几个数据浏览器,允许用户选择要查看和累积记录计数的列,或者只是浏览数据。即使在相对适度的 VPS 上的百万行表中,性能也很好。 @@ -512,7 +514,7 @@ via: https://opensource.com/article/18/9/using-grails-jquery-and-datatables 作者:[Chris Hermansen][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[jrg](https://github.com/jrglinux) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 @@ -528,11 +530,11 @@ via: https://opensource.com/article/18/9/using-grails-jquery-and-datatables [9]: http://sdkman.io/ [10]: http://guides.grails.org/creating-your-first-grails-app/guide/index.html [11]: https://opensource.com/file/410061 -[12]: https://opensource.com/sites/default/files/uploads/screen_1.png "Embrow home screen" +[12]: https://opensource.com/sites/default/files/uploads/screen_1.png [13]: https://opensource.com/file/410066 -[14]: https://opensource.com/sites/default/files/uploads/screen_2.png "Office list screenshot" +[14]: https://opensource.com/sites/default/files/uploads/screen_2.png [15]: https://opensource.com/file/410071 -[16]: https://opensource.com/sites/default/files/uploads/screen3.png "Employee controller screenshot" +[16]: https://opensource.com/sites/default/files/uploads/screen3.png [17]: https://gsp.grails.org/latest/ref/Tags/createLink.html [18]: http://groovy-lang.org/closures.html [19]: http://groovy-lang.org/dsls.html diff --git a/published/201811/20180928 What containers can teach us about DevOps.md b/published/201811/20180928 What containers can teach us about DevOps.md new file mode 100644 index 0000000000..3a0a360603 --- /dev/null +++ b/published/201811/20180928 What containers can teach us about DevOps.md @@ -0,0 +1,98 @@ +容器技术对 DevOps 的一些启发 +====== + +> 容器技术的使用支撑了目前 DevOps 三大主要实践:工作流、及时反馈、持续学习。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) + +有人说容器技术与 DevOps 二者在发展的过程中是互相促进的关系。得益于 DevOps 设计理念的流行,容器生态系统在设计上与组件选择上也有相应发展。同时,由于容器技术在生产环境中的使用,反过来也促进了 DevOps 三大主要实践:[支撑 DevOps 的三个实践][1]。 + +### 工作流 + +#### 容器中的工作流 + +每个容器都可以看成一个独立的运行环境,对于容器内部,不需要考虑外部的宿主环境、集群环境,以及其它基础设施。在容器内部,每个功能看起来都是以传统的方式运行。从外部来看,容器内运行的应用一般作为整个应用系统架构的一部分:比如 web 
API、web app 用户界面、数据库、任务执行、缓存系统、垃圾回收等。运维团队一般会限制容器的资源使用,并在此基础上建立完善的容器性能监控服务,从而降低其对基础设施或者下游其他用户的影响。 + +#### 现实中的工作流 + +那些跟“容器”一样业务功能独立的团队,也可以借鉴这种容器思维。因为无论是在现实生活中的工作流(代码发布、构建基础设施,甚至制造 [《杰森一家》中的斯贝斯利太空飞轮][2] 等),还是技术中的工作流(开发、测试、运维、发布)都使用了这样的线性工作流,一旦某个独立的环节或者工作团队出现了问题,那么整个下游都会受到影响,虽然使用这种线性的工作流有效降低了工作耦合性。 + +#### DevOps 中的工作流 + +DevOps 中的第一条原则,就是掌控整个执行链路的情况,努力理解系统如何协同工作,并理解其中出现的问题如何对整个过程产生影响。为了提高流程的效率,团队需要持续不断的找到系统中可能存在的性能浪费以及问题,并最终修复它们。 + +> 践行这样的工作流后,可以避免将一个已知缺陷带到工作流的下游,避免局部优化导致可能的全局性能下降,要不断探索如何优化工作流,持续加深对于系统的理解。 + +> —— Gene Kim,《[支撑 DevOps 的三个实践][3]》,IT 革命,2017.4.25 + +### 反馈 + +#### 容器中的反馈 + +除了限制容器的资源,很多产品还提供了监控和通知容器性能指标的功能,从而了解当容器工作不正常时,容器内部处于什么样的状态。比如目前[流行的][5] [Prometheus][4],可以用来收集容器和容器集群中相应的性能指标数据。容器本身特别适用于分隔应用系统,以及打包代码和其运行环境,但同时也带来了不透明的特性,这时,从中快速收集信息来解决其内部出现的问题就显得尤为重要了。 + +#### 现实中的反馈 + +在现实中,从始至终同样也需要反馈。一个高效的处理流程中,及时的反馈能够快速地定位事情发生的时间。反馈的关键词是“快速”和“相关”。当一个团队被淹没在大量不相关的事件时,那些真正需要快速反馈的重要信息很容易被忽视掉,并向下游传递形成更严重的问题。想象下[如果露西和埃塞尔][6]能够很快地意识到:传送带太快了,那么制作出的巧克力可能就没什么问题了(尽管这样就不那么搞笑了)。(LCTT 译注:露西和埃塞尔是上世纪 50 年代的著名黑白情景喜剧《我爱露西》中的主角) + +#### DevOps 中的反馈 + +DevOps 中的第二条原则,就是快速收集所有相关的有用信息,这样在问题影响到其它开发流程之前就可以被识别出。DevOps 团队应该努力去“优化下游”,以及快速解决那些可能会影响到之后团队的问题。同工作流一样,反馈也是一个持续的过程,目标是快速的获得重要的信息以及当问题出现后能够及时地响应。 + +> 快速的反馈对于提高技术的质量、可用性、安全性至关重要。 + +> —— Gene Kim 等人,《DevOps 手册:如何在技术组织中创造世界级的敏捷性,可靠性和安全性》,IT 革命,2016 + +### 持续学习 + +#### 容器中的持续学习 + +践行第三条原则“持续学习”是一个不小的挑战。在不需要掌握太多边缘的或难以理解的东西的情况下,容器技术让我们的开发工程师和运营团队依然可以安全地进行本地和生产环境的测试,这在之前是难以做到的。即便是一些激进的实验,容器技术仍然让我们轻松地进行版本控制、记录和分享。 + +#### 现实中的持续学习 + +举个我自己的例子:多年前,作为一个年轻、初出茅庐的系统管理员(仅仅工作三周),我被安排对一个运行着某个大学核心 IT 部门网站的 Apache 虚拟主机配置进行更改。由于没有方便的测试环境,我直接在生产站点上修改配置,当时觉得配置没问题就发布了,几分钟后,我无意中听到了隔壁同事说: + +“等会,网站挂了?” + +“没错,怎么回事?” + +很多人蒙圈了…… + +在被嘲讽之后(真实的嘲讽),我一头扎在工作台上,赶紧撤销我之前的更改。当天下午晚些时候,部门主管 —— 我老板的老板的老板 —— 来到我的工位询问发生了什么事。“别担心,”她告诉我。“我们不会责怪你,这是一个错误,现在你已经学会了。” + +而在容器中,这种情形在我的笔记本上就很容易测试了,并且也很容易在部署生产环境之前,被那些经验老道的团队成员发现。 + +#### DevOps 中的持续学习 + +持续学习文化的一部分是我们每个人都希望通过一些改变从而能够提高一些东西,并勇敢地通过实验来验证我们的想法。对于 DevOps 团队来说,失败无论对团队还是个人来说都是成长而不是惩罚,所以不要畏惧失败。团队中的每个成员不断学习、共享,也会不断提升其所在团队与组织的水平。 + +随着系统越来越被细分,我们更需要将注意力集中在具体的点上:上面提到的两条原则主要关注整体流程,而持续学习关注的则是整个项目、人员、团队、组织的未来。它不仅对流程产生了影响,还对流程中的每个人产生影响。 + +> 实验和冒险让我们能够不懈地改进我们的工作,但也要求我们尝试之前未用过的工作方式。 + +> —— Gene Kim 等人,《[凤凰计划:让你了解 IT、DevOps 以及如何取得商业成功][7]》,IT 革命,2013 + +### 容器技术带给 DevOps 的启迪 + +有效地应用容器技术可以学习 DevOps 的三条原则:工作流,反馈以及持续学习。从整体上看应用程序和基础设施,而不是对容器外的东西置若罔闻,教会我们考虑到系统的所有部分,了解其上游和下游影响,打破隔阂,并作为一个团队工作,以提升整体表现和深度了解整个系统。通过努力提供及时准确的反馈,我们可以在组织内部创建有效的反馈机制,以便在问题发生影响之前发现问题。最后,提供一个安全的环境来尝试新的想法并从中学习,教会我们创造一种文化,在这种文化中,失败一方面促进了我们知识的增长,另一方面通过有根据的猜测,可以为复杂的问题带来新的、优雅的解决方案。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/9/containers-can-teach-us-devops + +作者:[Chris Hermansen][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[littleji](https://github.com/littleji) +校对:[pityonline](https://github.com/pityonline), [wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/clhermansen +[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/ +[2]: https://en.wikipedia.org/wiki/The_Jetsons +[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops +[4]: https://prometheus.io/ +[5]: https://opensource.com/article/18/9/prometheus-operational-advantage +[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI +[7]: https://itrevolution.com/book/the-phoenix-project/ diff --git a/published/201811/20181001 Turn your book into a website and an ePub using Pandoc.md 
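
以 Docker 为例(这里假设使用 Docker 作为容器运行时,镜像名 `my-web-api` 只是示意;其它容器运行时也有类似的参数),限制容器资源通常只需要在启动容器时多加几个选项:

```
# 一个最简单的示意:限制该容器最多使用 512MB 内存和一个 CPU 核心
$ docker run -d --name web-api --memory=512m --cpus=1 my-web-api:latest
```
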
b/published/201811/20181001 Turn your book into a website and an ePub using Pandoc.md new file mode 100644 index 0000000000..734ac021cb --- /dev/null +++ b/published/201811/20181001 Turn your book into a website and an ePub using Pandoc.md @@ -0,0 +1,259 @@ +使用 Pandoc 将你的书转换成网页和电子书 +====== + +> 通过 Markdown 和 Pandoc,可以做到编写一次,发布两次。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) + +Pandoc 是一个命令行工具,用于将文件从一种标记语言转换为另一种标记语言。在我 [对 Pandoc 的简介][1] 一文中,我演示了如何把 Markdown 编写的文本转换为网页、幻灯片和 PDF。 + +在这篇后续文章中,我将深入探讨 [Pandoc][2],展示如何从同一个 Markdown 源文件生成网页和 ePub 格式的电子书。我将使用我即将发布的电子书《[面向对象思想的 GRASP 原则][3]》为例进行讲解,这本电子书正是通过以下过程创建的。 + +首先,我将解释这本书使用的文件结构,然后介绍如何使用 Pandoc 生成网页并将其部署在 GitHub 上;最后,我演示了如何生成对应的 ePub 格式电子书。 + +你可以在我的 GitHub 仓库 [Programming Fight Club][4] 中找到相应代码。 + +### 设置图书结构 + +我用 Markdown 语法完成了所有的写作,你也可以使用 HTML 标记,但是当 Pandoc 将 Markdown 转换为 ePub 文档时,引入的 HTML 标记越多,出现问题的风险就越高。我的书按照每章一个文件的形式进行组织,用 Markdown 的 `H1` 标记(`#`)声明每章的标题。你也可以在每个文件中放置多个章节,但将它们放在单独的文件中可以更轻松地查找内容并在以后进行更新。 + +元信息遵循类似的模式,每种输出格式都有自己的元信息文件。元信息文件定义有关文档的信息,例如要添加到 HTML 中的文本或 ePub 的许可证。我将所有 Markdown 文档存储在名为 `parts` 的文件夹中(这对于用来生成网页和 ePub 的 Makefile 非常重要)。下面以一个例子进行说明,让我们看一下目录,前言和关于本书(分为 `toc.md`、`preface.md` 和 `about.md` 三个文件)这三部分,为清楚起见,我们将省略其余的章节。 + +关于本书这部分内容的开头部分类似: + +``` +# About this book {-} + +## Who should read this book {-} + +Before creating a complex software system one needs to create a solid foundation. +General Responsibility Assignment Software Principles (GRASP) are guidelines to assign +responsibilities to software classes in object-oriented programming. +``` + +每一章完成后,下一步就是添加元信息来设置网页和 ePub 的格式。 + +### 生成网页 + +#### 创建 HTML 元信息文件 + +我创建的网页的元信息文件(`web-metadata.yaml`)是一个简单的 YAML 文件,其中包含 ` ` 标签中的作者、标题、和版权等信息,以及 HTML 文件中开头和结尾的内容。 + +我建议(至少)包括 `web-metadata.yaml` 文件中的以下字段: + +``` +--- +title: GRASP principles for the Object-oriented mind +author: Kiko Fernandez-Reyes +rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International +header-includes: +- | + ```{=html} + + + ``` +include-before: +- | + ```{=html} +

If you like this book, please consider + spreading the word or + + buying me a coffee + +

+ ``` +include-after: +- | + ```{=html} +
+
+
+ +
+
+ ``` +--- +``` + +下面几个变量需要注意一下: + +- `header-includes` 变量包含将要嵌入 `` 标签的 HTML 文本。 +- 调用变量后的下一行必须是 `- |`。再往下一行必须以与 `|` 对齐的三个反引号开始,否则 Pandoc 将无法识别。`{= html}` 告诉 Pandoc 其中的内容是原始文本,不应该作为 Markdown 处理。(为此,需要检查 Pandoc 中的 `raw_attribute` 扩展是否已启用。要进行此检查,键入 `pandoc --list-extensions | grep raw` 并确保返回的列表包含名为 `+ raw_html` 的项目,加号表示已启用。) +- 变量 `include-before` 在网页开头添加一些 HTML 文本,此处我请求读者帮忙宣传我的书或给我打赏。 +- `include-after` 变量在网页末尾添加原始 HTML 文本,同时显示我的图书许可证。 + +这些只是其中一部分可用的变量,查看 HTML 中的模板变量(我的文章 [Pandoc简介][1] 中介绍了如何查看 LaTeX 的模版变量,查看 HTML 模版变量的过程是相同的)对其余变量进行了解。 + +#### 将网页分成多章 + +网页可以作为一个整体生成,这会产生一个包含所有内容的长页面;也可以分成多章,我认为这样会更容易阅读。我将解释如何将网页划分为多章,以便读者不会被长网页吓到。 + +为了使网页易于在 GitHub Pages 上部署,需要创建一个名为 `docs` 的根文件夹(这是 GitHub Pages 默认用于渲染网页的根文件夹)。然后我们需要为 `docs` 下的每一章创建文件夹,将 HTML 内容放在各自的文件夹中,将文件内容放在名为 `index.html` 的文件中。 + +例如,`about.md` 文件将转换成名为 `index.html` 的文件,该文件位于名为 `about`(`about/index.html`)的文件夹中。这样,当用户键入 `http:///about/` 时,文件夹中的 `index.html` 文件将显示在其浏览器中。 + +下面的 `Makefile` 将执行上述所有操作: + +``` +# Your book files +DEPENDENCIES= toc preface about + +# Placement of your HTML files +DOCS=docs + +all: web + +web: setup $(DEPENDENCIES) +        @cp $(DOCS)/toc/index.html $(DOCS) + + +# Creation and copy of stylesheet and images into +# the assets folder. This is important to deploy the +# website to Github Pages. +setup: +        @mkdir -p $(DOCS) +        @cp -r assets $(DOCS) + + +# Creation of folder and index.html file on a +# per-chapter basis + +$(DEPENDENCIES): +        @mkdir -p $(DOCS)/$@ +        @pandoc -s --toc web-metadata.yaml parts/$@.md \ +        -c /assets/pandoc.css -o $(DOCS)/$@/index.html + +clean: +        @rm -rf $(DOCS) + +.PHONY: all clean web setup +``` + +选项 `- c /assets/pandoc.css` 声明要使用的 CSS 样式表,它将从 `/assets/pandoc.cs` 中获取。也就是说,在 `` 标签内,Pandoc 会添加这样一行: + +``` + +``` + +使用下面的命令生成网页: + +``` +make +``` + +根文件夹现在应该包含如下所示的文件结构: + +``` +.---parts +|    |--- toc.md +|    |--- preface.md +|    |--- about.md +| +|---docs +    |--- assets/ +    |--- index.html +    |--- toc +    |     |--- index.html +    | +    |--- preface +    |     |--- index.html +    | +    |--- about +          |--- index.html +    +``` + +#### 部署网页 + +通过以下步骤将网页部署到 GitHub 上: + +1. 创建一个新的 GitHub 仓库 +2. 将内容推送到新创建的仓库 +3. 找到仓库设置中的 GitHub Pages 部分,选择 `Source` 选项让 GitHub 使用主分支的内容 + +你可以在 [GitHub Pages][5] 的网站上获得更多详细信息。 + +[我的书的网页][6] 便是通过上述过程生成的,可以在网页上查看结果。 + +### 生成电子书 + +#### 创建 ePub 格式的元信息文件 + +ePub 格式的元信息文件 `epub-meta.yaml` 和 HTML 元信息文件是类似的。主要区别在于 ePub 提供了其他模板变量,例如 `publisher` 和 `cover-image` 。ePub 格式图书的样式表可能与网页所用的不同,在这里我使用一个名为 `epub.css` 的样式表。 + +``` +--- +title: 'GRASP principles for the Object-oriented Mind' +publisher: 'Programming Language Fight Club' +author: Kiko Fernandez-Reyes +rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International +cover-image: assets/cover.png +stylesheet: assets/epub.css +... 
+``` + +将以下内容添加到之前的 `Makefile` 中: + +``` +epub: +        @pandoc -s --toc epub-meta.yaml \ +        $(addprefix parts/, $(DEPENDENCIES:=.md)) -o $(DOCS)/assets/book.epub +``` + +用于产生 ePub 格式图书的命令从 HTML 版本获取所有依赖项(每章的名称),向它们添加 Markdown 扩展,并在它们前面加上每一章的文件夹路径,以便让 Pandoc 知道如何进行处理。例如,如果 `$(DEPENDENCIES` 变量只包含 “前言” 和 “关于本书” 两章,那么 `Makefile` 将会这样调用: + +``` +@pandoc -s --toc epub-meta.yaml \ +parts/preface.md parts/about.md -o $(DOCS)/assets/book.epub +``` + +Pandoc 将提取这两章的内容,然后进行组合,最后生成 ePub 格式的电子书,并放在 `Assets` 文件夹中。 + +这是使用此过程创建 ePub 格式电子书的一个 [示例][7]。 + +### 过程总结 + +从 Markdown 文件创建网页和 ePub 格式电子书的过程并不困难,但有很多细节需要注意。遵循以下大纲可能使你更容易使用 Pandoc。 + +- HTML 图书: + - 使用 Markdown 语法创建每章内容 + - 添加元信息 + - 创建一个 `Makefile` 将各个部分组合在一起 + - 设置 GitHub Pages + - 部署 +- ePub 电子书: + - 使用之前创建的每一章内容 + - 添加新的元信息文件 + - 创建一个 `Makefile` 以将各个部分组合在一起 + - 设置 GitHub Pages + - 部署 + + +------ + +via: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc + +作者:[Kiko Fernandez-Reyes][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[jlztan](https://github.com/jlztan) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/kikofernandez +[1]: https://linux.cn/article-10228-1.html +[2]: https://pandoc.org/ +[3]: https://www.programmingfightclub.com/ +[4]: https://github.com/kikofernandez/programmingfightclub +[5]: https://pages.github.com/ +[6]: https://www.programmingfightclub.com/grasp-principles/ +[7]: https://github.com/kikofernandez/programmingfightclub/raw/master/docs/web_assets/demo.epub diff --git a/published/20181002 4 open source invoicing tools for small businesses.md b/published/201811/20181002 4 open source invoicing tools for small businesses.md similarity index 100% rename from published/20181002 4 open source invoicing tools for small businesses.md rename to published/201811/20181002 4 open source invoicing tools for small businesses.md diff --git a/published/201811/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md b/published/201811/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md new file mode 100644 index 0000000000..58996654e5 --- /dev/null +++ b/published/201811/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md @@ -0,0 +1,70 @@ + +Greg Kroah-Hartman 解释内核社区是如何使 Linux 安全的 +============ + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kernel-security_0.jpg?itok=hOaTQwWV) + +> 内核维护者 Greg Kroah-Hartman 谈论内核社区如何保护 Linux 不遭受损害。 + +由于 Linux 使用量持续扩大,内核社区去提高这个世界上使用最广泛的技术 —— Linux 内核的安全性的重要性越来越高。安全不仅对企业客户很重要,它对消费者也很重要,因为 80% 的移动设备都使用了 Linux。在本文中,Linux 内核维护者 Greg Kroah-Hartman 带我们了解内核社区如何应对威胁。 + +### bug 不可避免 + +![Greg Kroah-Hartman](https://www.linux.com/sites/lcom/files/styles/floated_images/public/greg-k-h.png?itok=p4fREYuj "Greg Kroah-Hartman") + +*Greg Kroah-Hartman [Linux 基金会][1]* + +正如 Linus Torvalds 曾经说过的,大多数安全问题都是 bug 造成的,而 bug 又是软件开发过程的一部分。是软件就有 bug。 + +Kroah-Hartman 说:“就算是 bug,我们也不知道它是安全的 bug 还是不安全的 bug。我修复的一个著名 bug,在三年后才被 Red Hat 认定为安全漏洞“。 + +在消除 bug 方面,内核社区没有太多的办法,只能做更多的测试来寻找 bug。内核社区现在已经有了自己的安全团队,它们是由熟悉内核核心的内核开发者组成。 + +Kroah-Hartman 说:”当我们收到一个报告时,我们就让参与这个领域的核心开发者去修复它。在一些情况下,他们可能是同一个人,让他们进入安全团队可以更快地解决问题“。但他也强调,内核所有部分的开发者都必须清楚地了解这些问题,因为内核是一个可信环境,它必须被保护起来。 + +Kroah-Hartman 说:”一旦我们修复了它,我们就将它放到我们的栈分析规则中,以便于以后不再重新出现这个 bug。“ + +除修复 bug 之外,内核社区也不断加固内核。Kroah-Hartman 说:“我们意识到,我们需要一些主动的缓减措施,因此我们需要加固内核。” + +Kees Cook 和其他一些人付出了巨大的努力,带来了一直在内核之外的加固特性,并将它们合并或适配到内核中。在每个内核发行后,Cook 
都对所有新的加固特性做一个总结。但是只加固内核还不够,供应商必须启用这些新特性,它们才能真正发挥作用,然而许多供应商并没有这么做。
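
如果想知道自己正在运行的内核启用了哪些加固选项,可以直接查看内核配置。下面只是一个示意,假设发行版像多数发行版一样把配置文件放在 `/boot` 下(具体的选项名也会随内核版本略有不同):

```
# 检查几个常见的加固选项是否已经启用(值为 y 表示启用)
$ grep -E 'CONFIG_STACKPROTECTOR|CONFIG_STRICT_KERNEL_RWX' /boot/config-$(uname -r)
```
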
+安装 Node.js 有许多种方法。我们在这里将会教您一个常用的方法。 + +在 Ubuntu/LinuxMint 上可以使用 [APT-GET 命令][8] 或者 [APT 命令][9] 来安装 Node.js。 + +``` +$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash - +$ sudo apt-get install -y nodejs +``` + +在 Debian 上使用 [APT-GET 命令][8] 或者 [APT 命令][9] 来安装 Node.js。 + +``` +# curl -sL https://deb.nodesource.com/setup_8.x | bash - +# apt-get install -y nodejs +``` + +在 RHEL/CentOS 上,使用 [YUM 命令][10] 来安装。 + +``` +$ sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash - +$ sudo yum install epel-release +$ sudo yum -y install nodejs +``` + +在 Fedora 上,用 [DNF 命令][11] 来安装 tmux。 + +``` +$ sudo dnf install nodejs +``` + +在 Arch Linux 上,用 [Pacman 命令][12] 来安装 tmux。 + +``` +$ sudo pacman -S nodejs npm +``` + +在 openSUSE 上,用 [Zypper Command][13] 来安装 tmux。 + +``` +$ sudo zypper in nodejs6 +``` + +### 如何安装 Terminalizer + +您已经安装了 Node.js 这个先决软件包,现在是时候在您的系统上安装 Terminalizer 了。简单执行如下的 `npm` 命令即可安装。 + +``` +$ sudo npm install -g terminalizer +``` + +### 如何使用 Terminalizer + +您只需要执行如下的命令,即可使用 Terminalizer 记录您的终端会话活动。您可以敲击 `CTRL+D` 来结束并且保存记录。 + +``` +# terminalizer record 2g-session + +defaultConfigPath +The recording session is started +Press CTRL+D to exit and save the recording +``` + +这将会将您记录的会话保存成一个 YAML 文件,在这个例子里,我的文件名将会是 2g-session-activity.yml。 + +![][15] + +``` +# logout +Successfully Recorded +The recording data is saved into the file: +/home/daygeek/2g-session.yml +You can edit the file and even change the configurations. +``` + +![][16] + +### 如何播放记录下来的文件 + +使用以下命令来播放您记录的 YAML 文件。在以下操作中,请确保您已经用了您的文件名来替换 “2g-session”。 + +``` +# terminalizer play 2g-session +``` + +将记录的文件渲染成 Gif 图像。 + +``` +# terminalizer render 2g-session +``` + +注意: 以下的两个命令在此版本尚且不可用,或许在下一版本这两个命令将会付诸使用。 + +如果您想要将记录的文件分享给其他人,您可以将您的文件上传到在线播放器,并且将链接分享给对方。 + +``` +terminalizer share 2g-session +``` + +为记录的文件生成一个网络播放器。 + +``` +# terminalizer generate 2g-session +``` + + -------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/terminalizer-a-tool-to-record-your-terminal-and-generate-animated-gif-images/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[thecyanbird](https://github.com/thecyanbird) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[1]: https://www.2daygeek.com/script-command-record-save-your-terminal-session-activity-linux/ +[2]: https://www.2daygeek.com/automatically-record-all-users-terminal-sessions-activity-linux-script-command/ +[3]: https://www.2daygeek.com/teleconsole-share-terminal-session-instantly-to-anyone-in-seconds/ +[4]: https://www.2daygeek.com/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds/ +[5]: https://www.2daygeek.com/peek-create-animated-gif-screen-recorder-capture-arch-linux-mint-fedora-ubuntu/ +[6]: https://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/ +[7]: https://www.2daygeek.com/gifine-create-animated-gif-vedio-recorder-linux-mint-debian-ubuntu/ +[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[9]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[12]: 
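
在动手安装之前,不妨先确认系统里是否已经带有可用的 Node.js;如果下面两条命令都能输出版本号,就可以直接跳过本节:

```
$ node --version
$ npm --version
```
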
https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[13]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[14]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[15]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-record-2g-session-1.gif +[16]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-play-2g-session.gif diff --git a/published/201811/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md b/published/201811/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md new file mode 100644 index 0000000000..63f74a4816 --- /dev/null +++ b/published/201811/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md @@ -0,0 +1,118 @@ +服务器的 LinuxBoot:告别 UEFI、拥抱开源 +============ + +[LinuxBoot][13] 是私有的 [UEFI][15] 固件的开源 [替代品][14]。它发布于去年,并且现在已经得到主流的硬件生产商的认可成为他们产品的默认固件。去年,LinuxBoot 已经被 Linux 基金会接受并[纳入][16]开源家族。 + +这个项目最初是由 Ron Minnich 在 2017 年 1 月提出,它是 LinuxBIOS 的创造人,并且在 Google 领导 [coreboot][17] 的工作。 + +Google、Facebook、[Horizon Computing Solutions][18]、和 [Two Sigma][19] 共同合作,在运行 Linux 的服务器上开发 [LinuxBoot 项目][20](以前叫 [NERF][21])。 + +它的开放性允许服务器用户去很容易地定制他们自己的引导脚本、修复问题、构建他们自己的 [运行时环境][22] 和用他们自己的密钥去 [刷入固件][23],而不需要等待供应商的更新。 + +下面是第一次使用 NERF BIOS 去引导 [Ubuntu Xenial][24] 的视频: + +[点击看视频](https://youtu.be/HBkZAN3xkJg) + +我们来讨论一下它与 UEFI 相比在服务器硬件方面的其它优势。 + +### LinuxBoot 超越 UEFI 的优势 + +![LinuxBoot vs UEFI](https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/linuxboot-uefi.png?w=800&ssl=1) + +下面是一些 LinuxBoot 超越 UEFI 的主要优势: + +#### 启动速度显著加快 + +它能在 20 秒钟以内完成服务器启动,而 UEFI 需要几分钟的时间。 + +#### 显著的灵活性 + +LinuxBoot 可以用在 Linux 支持的各种设备、文件系统和协议上。 + +#### 更加安全 + +相比 UEFI 而言,LinuxBoot 在设备驱动程序和文件系统方面进行更加严格的检查。 + +我们可能争辩说 UEFI 是使用 [EDK II][25] 而部分开源的,而 LinuxBoot 是部分闭源的。但有人[提出][26],即便有像 EDK II 这样的代码,但也没有做适当的审查级别和像 [Linux 内核][27] 那样的正确性检查,并且在 UEFI 的开发中还大量使用闭源组件。 + +另一方面,LinuxBoot 有非常小的二进制文件,它仅用了大约几百 KB,相比而言,而 UEFI 的二进制文件有 32 MB。 + +严格来说,LinuxBoot 与 UEFI 不一样,更适合于[可信计算基础][28]。 + +LinuxBoot 有一个基于 [kexec][30] 的引导加载器,它不支持启动 Windows/非 Linux 内核,但这影响并不大,因为主流的云都是基于 Linux 的服务器。 + +### LinuxBoot 的采用者 + +自 2011 年, [Facebook][32] 发起了[开源计算项目(OCP)][31],它的一些服务器是基于[开源][33]设计的,目的是构建的数据中心更加高效。LinuxBoot 已经在下面列出的几个开源计算硬件上做了测试: + +* Winterfell +* Leopard +* Tioga Pass + +更多 [OCP][34] 硬件在[这里][35]有一个简短的描述。OCP 基金会通过[开源系统固件][36]运行一个专门的固件项目。 + +支持 LinuxBoot 的其它一些设备有: + +* [QEMU][9] 仿真的 [Q35][10] 系统 +* [Intel S2600wf][11] +* [Dell R630][12] + +上个月底(2018 年 9 月 24 日),[Equus 计算解决方案][37] [宣布][38] 发行它的 [白盒开放式™][39] M2660 和 M2760 服务器,作为它们的定制的、成本优化的、开放硬件服务器和存储平台的一部分。它们都支持 LinuxBoot 灵活定制服务器的 BIOS,以提升安全性和设计一个非常快的纯净的引导体验。 + +### 你认为 LinuxBoot 怎么样? 
+ +LinuxBoot 在 [GitHub][40] 上有很丰富的文档。你喜欢它与 UEFI 不同的特性吗?由于 LinuxBoot 的开放式开发和未来,你愿意使用 LinuxBoot 而不是 UEFI 去启动你的服务器吗?请在下面的评论区告诉我们吧。 + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/linuxboot-uefi/ + +作者:[Avimanyu Bandyopadhyay][a] +选题:[oska874][b] +译者:[qhwdw](https://github.com/qhwdw) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/avimanyu/ +[b]:https://github.com/oska874 +[1]:https://itsfoss.com/linuxboot-uefi/# +[2]:https://itsfoss.com/linuxboot-uefi/# +[3]:https://itsfoss.com/linuxboot-uefi/# +[4]:https://itsfoss.com/linuxboot-uefi/# +[5]:https://itsfoss.com/linuxboot-uefi/# +[6]:https://itsfoss.com/linuxboot-uefi/# +[7]:https://itsfoss.com/author/avimanyu/ +[8]:https://itsfoss.com/linuxboot-uefi/#comments +[9]:https://en.wikipedia.org/wiki/QEMU +[10]:https://wiki.qemu.org/Features/Q35 +[11]:https://trmm.net/S2600 +[12]:https://trmm.net/NERF#Installing_on_a_Dell_R630 +[13]:https://www.linuxboot.org/ +[14]:https://www.phoronix.com/scan.php?page=news_item&px=LinuxBoot-OSFC-2018-State +[15]:https://itsfoss.com/check-uefi-or-bios/ +[16]:https://www.linuxfoundation.org/blog/2018/01/system-startup-gets-a-boost-with-new-linuxboot-project/ +[17]:https://en.wikipedia.org/wiki/Coreboot +[18]:http://www.horizon-computing.com/ +[19]:https://www.twosigma.com/ +[20]:https://trmm.net/LinuxBoot_34c3 +[21]:https://trmm.net/NERF +[22]:https://trmm.net/LinuxBoot_34c3#Runtimes +[23]:http://www.tech-faq.com/flashing-firmware.html +[24]:https://itsfoss.com/features-ubuntu-1604/ +[25]:https://www.tianocore.org/ +[26]:https://media.ccc.de/v/34c3-9056-bringing_linux_back_to_server_boot_roms_with_nerf_and_heads +[27]:https://medium.com/@bhumikagoyal/linux-kernel-development-cycle-52b4c55be06e +[28]:https://en.wikipedia.org/wiki/Trusted_computing_base +[29]:https://itsfoss.com/adobe-alternatives-linux/ +[30]:https://en.wikipedia.org/wiki/Kexec +[31]:https://en.wikipedia.org/wiki/Open_Compute_Project +[32]:https://github.com/facebook +[33]:https://github.com/opencomputeproject +[34]:https://www.networkworld.com/article/3266293/lan-wan/what-is-the-open-compute-project.html +[35]:http://hyperscaleit.com/ocp-server-hardware/ +[36]:https://www.opencompute.org/projects/open-system-firmware +[37]:https://www.equuscs.com/ +[38]:http://www.dcvelocity.com/products/Software_-_Systems/20180924-equus-compute-solutions-introduces-whitebox-open-m2660-and-m2760-servers/ +[39]:https://www.equuscs.com/servers/whitebox-open/ +[40]:https://github.com/linuxboot/linuxboot diff --git a/published/20181008 3 areas to drive DevOps change.md b/published/201811/20181008 3 areas to drive DevOps change.md similarity index 100% rename from published/20181008 3 areas to drive DevOps change.md rename to published/201811/20181008 3 areas to drive DevOps change.md diff --git a/published/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md b/published/201811/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md similarity index 100% rename from published/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md rename to published/201811/20181008 KeeWeb - An Open Source, Cross Platform Password Manager.md diff --git a/translated/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md b/published/201811/20181008 Play Windows games on Fedora with Steam Play and Proton.md similarity index 56% rename from 
translated/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md rename to published/201811/20181008 Play Windows games on Fedora with Steam Play and Proton.md index 26d315f64b..c0859f1dc1 100644 --- a/translated/tech/20181008 Play Windows games on Fedora with Steam Play and Proton.md +++ b/published/201811/20181008 Play Windows games on Fedora with Steam Play and Proton.md @@ -3,7 +3,7 @@ ![](https://fedoramagazine.org/wp-content/uploads/2018/09/steam-proton-816x345.jpg) -几周前,Steam 宣布要给 Steam Play 增加一个新组件,用于支持在 Linux 平台上使用 Proton 来玩 Windows 的游戏,这个组件是 WINE 的一个分支。这个功能仍然处于测试阶段,且并非对所有游戏都有效。这里有一些关于 Steam 和 Proton 的细节。 +之前,Steam [宣布][1]要给 Steam Play 增加一个新组件,用于支持在 Linux 平台上使用 Proton 来玩 Windows 的游戏,这个组件是 WINE 的一个分支。这个功能仍然处于测试阶段,且并非对所有游戏都有效。这里有一些关于 Steam 和 Proton 的细节。 据 Steam 网站称,测试版本中有以下这些新功能: @@ -13,29 +13,27 @@ * 改进了对游戏控制器的支持,游戏自动识别所有 Steam 支持的控制器,比起游戏的原始版本,能够获得更多开箱即用的控制器兼容性。 * 和 vanilla WINE 比起来,游戏的多线程性能得到了极大的提高。 - - ### 安装 如果你有兴趣,想尝试一下 Steam 和 Proton。请按照下面这些简单的步骤进行操作。(请注意,如果你已经安装了最新版本的 Steam,可以忽略启用 Steam 测试版这个第一步。在这种情况下,你不再需要通过 Steam 测试版来使用 Proton。) -打开 Steam 并登陆到你的帐户,这个截屏示例显示的是在使用 Proton 之前仅支持22个游戏。 +打开 Steam 并登陆到你的帐户,这个截屏示例显示的是在使用 Proton 之前仅支持 22 个游戏。 ![][3] -现在点击客户端顶部的 Steam 选项,这会显示一个下拉菜单。然后选择设置。 +现在点击客户端顶部的 “Steam” 选项,这会显示一个下拉菜单。然后选择“设置”。 ![][4] -现在弹出了设置窗口,选择账户选项,并在 Beta participation 旁边,点击更改。 +现在弹出了设置窗口,选择“账户”选项,并在 “参与 Beta 测试” 旁边,点击“更改”。 ![][5] -现在将 None 更改为 Steam Beta Update。 +现在将 “None” 更改为 “Steam Beta Update”。 ![][6] -点击确定,然后系统会提示你重新启动。 +点击“确定”,然后系统会提示你重新启动。 ![][7] @@ -43,11 +41,11 @@ ![][8] -在重新启动之后,返回到上面的设置窗口。这次你会看到一个新选项。确定有为提供支持的游戏使用 Stream Play 这个复选框,让所有的游戏都使用 Steam Play 进行运行,而不是 steam 中游戏特定的选项。兼容性工具应该是 Proton。 +在重新启动之后,返回到上面的设置窗口。这次你会看到一个新选项。确定勾选了“为提供支持的游戏使用 Stream Play” 、“让所有的游戏都使用 Steam Play 运行”,“使用这个工具替代 Steam 中游戏特定的选项”。这个兼容性工具应该就是 Proton。 ![][9] -Steam 客户端会要求你重新启动,照做,然后重新登陆你的 Steam 账户,你的 Linux 的游戏库就能得到扩展了。 +Steam 客户端会要求你重新启动,照做,然后重新登录你的 Steam 账户,你的 Linux 的游戏库就能得到扩展了。 ![][10] @@ -69,7 +67,7 @@ Steam 客户端会要求你重新启动,照做,然后重新登陆你的 Stea ![][16] -一些游戏可能会受到 Proton 测试性质的影响,在下面这个叫 Chantelise 游戏中,没有了声音并且帧率很低。请记住这个功能仍然在测试阶段,Fedora 不会对结果负责。如果你想要了解更多,社区已经创建了一个 Google 文档,这个文档里有已经测试过的游戏的列表。 +一些游戏可能会受到 Proton 测试性质的影响,在这个叫 Chantelise 游戏中,没有了声音并且帧率很低。请记住这个功能仍然在测试阶段,Fedora 不会对结果负责。如果你想要了解更多,社区已经创建了一个 Google 文档,这个文档里有已经测试过的游戏的列表。 -------------------------------------------------------------------------------- @@ -79,25 +77,25 @@ via: https://fedoramagazine.org/play-windows-games-steam-play-proton/ 作者:[Francisco J. 
Vergara Torres][a] 选题:[lujun9972](https://github.com/lujun9972) 译者:[hopefully2333](https://github.com/hopefully2333) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://fedoramagazine.org/author/patxi/ [1]: https://steamcommunity.com/games/221410/announcements/detail/1696055855739350561 [2]: https://fedoramagazine.org/third-party-repositories-fedora/ -[3]: https://fedoramagazine.org/wp-content/uploads/2018/09/listOfGamesLinux-300x197.png -[4]: https://fedoramagazine.org/wp-content/uploads/2018/09/1-300x169.png -[5]: https://fedoramagazine.org/wp-content/uploads/2018/09/2-300x196.png -[6]: https://fedoramagazine.org/wp-content/uploads/2018/09/4-300x272.png -[7]: https://fedoramagazine.org/wp-content/uploads/2018/09/6-300x237.png -[8]: https://fedoramagazine.org/wp-content/uploads/2018/09/7-300x126.png -[9]: https://fedoramagazine.org/wp-content/uploads/2018/09/10-300x237.png -[10]: https://fedoramagazine.org/wp-content/uploads/2018/09/12-300x196.png -[11]: https://fedoramagazine.org/wp-content/uploads/2018/09/13-300x196.png -[12]: https://fedoramagazine.org/wp-content/uploads/2018/09/14-300x195.png -[13]: https://fedoramagazine.org/wp-content/uploads/2018/09/15-300x196.png -[14]: https://fedoramagazine.org/wp-content/uploads/2018/09/16-300x195.png -[15]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-14-59-300x169.png -[16]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-19-34-300x169.png +[3]: https://fedoramagazine.org/wp-content/uploads/2018/09/listOfGamesLinux-768x505.png +[4]: https://fedoramagazine.org/wp-content/uploads/2018/09/1-768x432.png +[5]: https://fedoramagazine.org/wp-content/uploads/2018/09/2-768x503.png +[6]: https://fedoramagazine.org/wp-content/uploads/2018/09/4.png +[7]: https://fedoramagazine.org/wp-content/uploads/2018/09/6.png +[8]: https://fedoramagazine.org/wp-content/uploads/2018/09/7.png +[9]: https://fedoramagazine.org/wp-content/uploads/2018/09/10.png +[10]: https://fedoramagazine.org/wp-content/uploads/2018/09/12-768x503.png +[11]: https://fedoramagazine.org/wp-content/uploads/2018/09/13-768x501.png +[12]: https://fedoramagazine.org/wp-content/uploads/2018/09/14-768x498.png +[13]: https://fedoramagazine.org/wp-content/uploads/2018/09/15-768x501.png +[14]: https://fedoramagazine.org/wp-content/uploads/2018/09/16-768x500.png +[15]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-14-59-768x432.png +[16]: https://fedoramagazine.org/wp-content/uploads/2018/09/Screenshot-from-2018-08-30-15-19-34-768x432.png [17]: https://docs.google.com/spreadsheets/d/1DcZZQ4HL_Ol969UbXJmFG8TzOHNnHoj8Q1f8DIFe8-8/edit#gid=1003113831 diff --git a/published/20181010 5 alerting and visualization tools for sysadmins.md b/published/201811/20181010 5 alerting and visualization tools for sysadmins.md similarity index 100% rename from published/20181010 5 alerting and visualization tools for sysadmins.md rename to published/201811/20181010 5 alerting and visualization tools for sysadmins.md diff --git a/published/20181010 An introduction to using tcpdump at the Linux command line.md b/published/201811/20181010 An introduction to using tcpdump at the Linux command line.md similarity index 100% rename from published/20181010 An introduction to using tcpdump at the Linux command line.md rename to published/201811/20181010 An introduction to using tcpdump at the 
Linux command line.md diff --git a/published/201811/20181014 How Lisp Became God-s Own Programming Language.md b/published/201811/20181014 How Lisp Became God-s Own Programming Language.md new file mode 100644 index 0000000000..017a67799f --- /dev/null +++ b/published/201811/20181014 How Lisp Became God-s Own Programming Language.md @@ -0,0 +1,186 @@ +Lisp 是怎么成为上帝的编程语言的 +====== + +当程序员们谈论各类编程语言的相对优势时,他们通常会采用相当平淡的措词,就好像这些语言是一条工具带上的各种工具似的 —— 有适合写操作系统的,也有适合把其它程序黏在一起来完成特殊工作的。这种讨论方式非常合理;不同语言的能力不同。不声明特定用途就声称某门语言比其他语言更优秀只能导致侮辱性的无用争论。 + +但有一门语言似乎受到和用途无关的特殊尊敬:那就是 Lisp。即使是恨不得给每个说出形如“某某语言比其他所有语言都好”这类话的人都来一拳的键盘远征军们,也会承认 Lisp 处于另一个层次。 Lisp 超越了用于评判其他语言的实用主义标准,因为普通程序员并不使用 Lisp 编写实用的程序 —— 而且,多半他们永远也不会这么做。然而,人们对 Lisp 的敬意是如此深厚,甚至于到了这门语言会时而被加上神话属性的程度。 + +大家都喜欢的网络漫画合集 xkcd 就至少在两组漫画中如此描绘过 Lisp:[其中一组漫画][1]中,某人得到了某种 Lisp 启示,而这好像使他理解了宇宙的基本构架。 + +![](https://imgs.xkcd.com/comics/lisp.jpg) + +在[另一组漫画][2]中,一个穿着长袍的老程序员给他的徒弟递了一沓圆括号,说这是“文明时代的优雅武器”,暗示着 Lisp 就像原力那样拥有各式各样的神秘力量。 + +![](https://imgs.xkcd.com/comics/lisp_cycles.png) + +另一个绝佳例子是 Bob Kanefsky 的滑稽剧插曲,《上帝就在人间》。这部剧叫做《永恒之火》,撰写于 1990 年代中期;剧中描述了上帝必然是使用 Lisp 创造世界的种种原因。完整的歌词可以在 [GNU 幽默合集][3]中找到,如下是一段摘抄: + +> 因为上帝用祂的 Lisp 代码 + +> 让树叶充满绿意。 + +> 分形的花儿和递归的根: + +> 我见过的奇技淫巧之中没什么比这更可爱。 + +> 当我对着雪花深思时, + +> 从未见过两片相同的, + +> 我知道,上帝偏爱那一门 + +> 名字是四个字母的语言。 + +(LCTT 译注:参见 “四个字母”,参见:[四字神名](https://zh.wikipedia.org/wiki/%E5%9B%9B%E5%AD%97%E7%A5%9E%E5%90%8D),致谢 [no1xsyzy](https://github.com/LCTT/TranslateProject/issues/11320)) + +以下这句话我实在不好在人前说;不过,我还是觉得,这样一种 “Lisp 是奥术魔法”的文化模因实在是有史以来最奇异、最迷人的东西。Lisp 是象牙塔的产物,是人工智能研究的工具;因此,它对于编程界的俗人而言总是陌生的,甚至是带有神秘色彩的。然而,当今的程序员们[开始怂恿彼此,“在你死掉之前至少试一试 Lisp”][4],就像这是一种令人恍惚入迷的致幻剂似的。尽管 Lisp 是广泛使用的编程语言中第二古老的(只比 Fortran 年轻一岁)[^1] ,程序员们也仍旧在互相怂恿。想象一下,如果你的工作是为某种组织或者团队推广一门新的编程语言的话,忽悠大家让他们相信你的新语言拥有神力难道不是绝佳的策略吗?—— 但你如何能够做到这一点呢?或者,换句话说,一门编程语言究竟是如何变成人们口中“隐晦知识的载体”的呢? + +Lisp 究竟是怎么成为这样的? 
+ +![Byte 杂志封面,1979年八月。][5] + +*Byte 杂志封面,1979年八月。* + +### 理论 A :公理般的语言 + +Lisp 的创造者约翰·麦卡锡John McCarthy最初并没有想过把 Lisp 做成优雅、精炼的计算法则结晶。然而,在一两次运气使然的深谋远虑和一系列优化之后,Lisp 的确变成了那样的东西。 保罗·格雷厄姆Paul Graham(我们一会儿之后才会聊到他)曾经这么写道, 麦卡锡通过 Lisp “为编程作出的贡献就像是欧几里得对几何学所做的贡献一般” [^2]。人们可能会在 Lisp 中看出更加隐晦的含义 —— 因为麦卡锡创造 Lisp 时使用的要素实在是过于基础,基础到连弄明白他到底是创造了这门语言、还是发现了这门语言,都是一件难事。 + +最初, 麦卡锡产生要造一门语言的想法,是在 1956 年的达特茅斯人工智能夏季研究项目Darthmouth Summer Research Project on Artificial Intelligence上。夏季研究项目是个持续数周的学术会议,直到现在也仍旧在举行;它是此类会议之中最早开始举办的会议之一。 麦卡锡当初还是个达特茅斯的数学助教,而“人工智能artificial intelligence(AI)”这个词事实上就是他建议举办该会议时发明的 [^3]。在整个会议期间大概有十人参加 [^4]。他们之中包括了艾伦·纽厄尔Allen Newell赫伯特·西蒙Herbert Simon,两名隶属于兰德公司RAND Corporation卡内基梅隆大学Carnegie Mellon的学者。这两人不久之前设计了一门语言,叫做 IPL。 + +当时,纽厄尔和西蒙正试图制作一套能够在命题演算中生成证明的系统。两人意识到,用电脑的原生指令集编写这套系统会非常困难;于是他们决定创造一门语言——他们的原话是“伪代码pseudo-code”,这样,他们就能更加轻松自然地表达这台“逻辑理论机器Logic Theory Machine”的底层逻辑了 [^5]。这门语言叫做 IPL,即“信息处理语言Information Processing Language”;比起我们现在认知中的编程语言,它更像是一种高层次的汇编语言方言。 纽厄尔和西蒙提到,当时人们开发的其它“伪代码”都抓着标准数学符号不放 —— 也许他们指的是 Fortran [^6];与此不同的是,他们的语言使用成组的符号方程来表示命题演算中的语句。通常,用 IPL 写出来的程序会调用一系列的汇编语言宏,以此在这些符号方程列表中对表达式进行变换和求值。 + +麦卡锡认为,一门实用的编程语言应该像 Fortran 那样使用代数表达式;因此,他并不怎么喜欢 IPL [^7]。然而,他也认为,在给人工智能领域的一些问题建模时,符号列表会是非常好用的工具 —— 而且在那些涉及演绎的问题上尤其有用。麦卡锡的渴望最终被诉诸行动;他要创造一门代数的列表处理语言 —— 这门语言会像 Fortran 一样使用代数表达式,但拥有和 IPL 一样的符号列表处理能力。 + +当然,今日的 Lisp 可不像 Fortran。在会议之后的几年中,麦卡锡关于“理想的列表处理语言”的见解似乎在逐渐演化。到 1957 年,他的想法发生了改变。他那时候正在用 Fortran 编写一个能下国际象棋的程序;越是长时间地使用 Fortran ,麦卡锡就越确信其设计中存在不当之处,而最大的问题就是尴尬的 `IF` 声明 [^8]。为此,他发明了一个替代品,即条件表达式 `true`;这个表达式会在给定的测试通过时返回子表达式 `A` ,而在测试未通过时返回子表达式 `B` ,*而且*,它只会对返回的子表达式进行求值。在 1958 年夏天,当麦卡锡设计一个能够求导的程序时,他意识到,他发明的 `true` 条件表达式让编写递归函数这件事变得更加简单自然了 [^9]。也是这个求导问题让麦卡锡创造了 `maplist` 函数;这个函数会将其它函数作为参数并将之作用于指定列表的所有元素 [^10]。在给项数多得叫人抓狂的多项式求导时,它尤其有用。 + +然而,以上的所有这些,在 Fortran 中都是没有的;因此,在 1958 年的秋天,麦卡锡请来了一群学生来实现 Lisp。因为他那时已经成了一名麻省理工助教,所以,这些学生可都是麻省理工的学生。当麦卡锡和学生们最终将他的主意变为能运行的代码时,这门语言得到了进一步的简化。这之中最大的改变涉及了 Lisp 的语法本身。最初,麦卡锡在设计语言时,曾经试图加入所谓的 “M 表达式”;这是一层语法糖,能让 Lisp 的语法变得类似于 Fortran。虽然 M 表达式可以被翻译为 S 表达式 —— 基础的、“用圆括号括起来的列表”,也就是 Lisp 最著名的特征 —— 但 S 表达式事实上是一种给机器看的低阶表达方法。唯一的问题是,麦卡锡用方括号标记 M 表达式,但他的团队在麻省理工使用的 IBM 026 键盘打孔机的键盘上根本没有方括号 [^11]。于是 Lisp 团队坚定不移地使用着 S 表达式,不仅用它们表示数据列表,也拿它们来表达函数的应用。麦卡锡和他的学生们还作了另外几样改进,包括将数学符号前置;他们也修改了内存模型,这样 Lisp 实质上就只有一种数据类型了 [^12]。 + +到 1960 年,麦卡锡发表了他关于 Lisp 的著名论文,《用符号方程表示的递归函数及它们的机器计算》。那时候,Lisp 已经被极大地精简,而这让麦卡锡意识到,他的作品其实是“一套优雅的数学系统”,而非普通的编程语言 [^13]。他后来这么写道,对 Lisp 的许多简化使其“成了一种描述可计算函数的方式,而且它比图灵机或者一般情况下用于递归函数理论的递归定义更加简洁” [^14]。在他的论文中,他不仅使用 Lisp 作为编程语言,也将它当作一套用于研究递归函数行为方式的表达方法。 + +通过“从一小撮规则中逐步实现出 Lisp”的方式,麦卡锡将这门语言介绍给了他的读者。后来,保罗·格雷厄姆在短文《[Lisp 之根][6]The Roots of Lisp》中用更易读的语言回顾了麦卡锡的步骤。格雷厄姆只用了七种原始运算符、两种函数写法,以及使用原始运算符定义的六个稍微高级一点的函数来解释 Lisp。毫无疑问,Lisp 的这种只需使用极少量的基本规则就能完整说明的特点加深了其神秘色彩。格雷厄姆称麦卡锡的论文为“使计算公理化”的一种尝试 [^15]。我认为,在思考 Lisp 的魅力从何而来时,这是一个极好的切入点。其它编程语言都有明显的人工构造痕迹,表现为 `While`,`typedef`,`public static void` 这样的关键词;而 Lisp 的设计却简直像是纯粹计算逻辑的鬼斧神工。Lisp 的这一性质,以及它和晦涩难懂的“递归函数理论”的密切关系,使它具备了获得如今声望的充分理由。 + +### 理论 B:属于未来的机器 + +Lisp 诞生二十年后,它成了著名的《[黑客词典][7]Hacker’s Dictionary》中所说的,人工智能研究的“母语”。Lisp 在此之前传播迅速,多半是托了语法规律的福 —— 不管在怎么样的电脑上,实现 Lisp 都是一件相对简单直白的事。而学者们之后坚持使用它乃是因为 Lisp 在处理符号表达式这方面有巨大的优势;在那个时代,人工智能很大程度上就意味着符号,于是这一点就显得十分重要。在许多重要的人工智能项目中都能见到 Lisp 的身影。这些项目包括了 [SHRDLU 自然语言程序][8]、[Macsyma 代数系统][9] 和 [ACL2 逻辑系统][10]。 + +然而,在 1970 年代中期,人工智能研究者们的电脑算力开始不够用了。PDP-10 就是一个典型。这个型号在人工智能学界曾经极受欢迎;但面对这些用 Lisp 写的 AI 程序,它的 18 位地址空间一天比一天显得吃紧 [^16]。许多的 AI 程序在设计上可以与人互动。要让这些既极度要求硬件性能、又有互动功能的程序在分时系统上优秀发挥,是很有挑战性的。麻省理工的彼得·杜奇Peter Deutsch给出了解决方案:那就是针对 Lisp 程序来特别设计电脑。就像是我那[关于 Chaosnet 的上一篇文章][11]所说的那样,这些Lisp 计算机Lisp machines会给每个用户都专门分配一个为 Lisp 特别优化的处理器。到后来,考虑到硬核 
Lisp 程序员的需求,这些计算机甚至还配备上了完全由 Lisp 编写的开发环境。在当时那样一个小型机时代已至尾声而微型机的繁盛尚未完全到来的尴尬时期,Lisp 计算机就是编程精英们的“高性能个人电脑”。 + +有那么一会儿,Lisp 计算机被当成是未来趋势。好几家公司雨后春笋般出现,追着赶着要把这项技术商业化。其中最成功的一家叫做 Symbolics,由麻省理工 AI 实验室的前成员创立。上世纪八十年代,这家公司生产了所谓的 3600 系列计算机,它们当时在 AI 领域和需要高性能计算的产业中应用极广。3600 系列配备了大屏幕、位图显示、鼠标接口,以及[强大的图形与动画软件][12]。它们都是惊人的机器,能让惊人的程序运行起来。例如,之前在推特上跟我聊过的机器人研究者 Bob Culley,就能用一台 1985 年生产的 Symbolics 3650 写出带有图形演示的寻路算法。他向我解释说,在 1980 年代,位图显示和面向对象编程(能够通过 [Flavors 扩展][13]在 Lisp 计算机上使用)都刚刚出现。Symbolics 站在时代的最前沿。 + +![Bob Culley 的寻路程序。][14] + +*Bob Culley 的寻路程序。* + +而以上这一切导致 Symbolics 的计算机奇贵无比。在 1983 年,一台 Symbolics 3600 能卖 111,000 美金 [^16]。所以,绝大部分人只可能远远地赞叹 Lisp 计算机的威力和操作员们用 Lisp 编写程序的奇妙技术。不止他们赞叹,从 1979 年到 1980 年代末,Byte 杂志曾经多次提到过 Lisp 和 Lisp 计算机。在 1979 年八月发行的、关于 Lisp 的一期特别杂志中,杂志编辑激情洋溢地写道,麻省理工正在开发的计算机配备了“大坨大坨的内存”和“先进的操作系统” [^17];他觉得,这些 Lisp 计算机的前途是如此光明,以至于它们的面世会让 1978 和 1977 年 —— 诞生了 Apple II、Commodore PET 和 TRS-80 的两年 —— 显得黯淡无光。五年之后,在 1985 年,一名 Byte 杂志撰稿人描述了为“复杂精巧、性能强悍的 Symbolics 3670”编写 Lisp 程序的体验,并力劝读者学习 Lisp,称其为“绝大数人工智能工作者的语言选择”,和将来的通用编程语言 [^18]。 + +我问过保罗·麦克琼斯Paul McJones(他在山景城Mountain View计算机历史博物馆Computer History Museum做了许多 Lisp 的[保护工作][15]),人们是什么时候开始将 Lisp 当作高维生物的赠礼一样谈论的呢?他说,这门语言自有的性质毋庸置疑地促进了这种现象的产生;然而,他也说,Lisp 上世纪六七十年代在人工智能领域得到的广泛应用,很有可能也起到了作用。当 1980 年代到来、Lisp 计算机进入市场时,象牙塔外的某些人由此接触到了 Lisp 的能力,于是传说开始滋生。时至今日,很少有人还记得 Lisp 计算机和 Symbolics 公司;但 Lisp 得以在八十年代一直保持神秘,很大程度上要归功于它们。 + +### 理论 C:学习编程 + +1985 年,两位麻省理工的教授,哈尔·阿伯尔森Harold "Hal" Abelson杰拉尔德·瑟斯曼Gerald Sussman,外加瑟斯曼的妻子朱莉·瑟斯曼Julie Sussman,出版了一本叫做《计算机程序的构造和解释Structure and Interpretation of Computer Programs》的教科书。这本书用 Scheme(一种 Lisp 方言)向读者们示范了如何编程。它被用于教授麻省理工入门编程课程长达二十年之久。出于直觉,我认为 SICP(这本书的名字通常缩写为 SICP)倍增了 Lisp 的“神秘要素”。SICP 使用 Lisp 描绘了深邃得几乎可以称之为哲学的编程理念。这些理念非常普适,可以用任意一种编程语言展现;但 SICP 的作者们选择了 Lisp。结果,这本阴阳怪气、卓越不凡、吸引了好几代程序员(还成了一种[奇特的模因][16])的著作臭名远扬之后,Lisp 的声望也顺带被提升了。Lisp 已不仅仅是一如既往的“麦卡锡的优雅表达方式”;它现在还成了“向你传授编程的不传之秘的语言”。 + +SICP 究竟有多奇怪这一点值得好好说;因为我认为,时至今日,这本书的古怪之处和 Lisp 的古怪之处是相辅相成的。书的封面就透着一股古怪。那上面画着一位朝着桌子走去,准备要施法的巫师或者炼金术士。他的一只手里抓着一副测径仪 —— 或者圆规,另一只手上拿着个球,上书“eval”和“apply”。他对面的女人指着桌子;在背景中,希腊字母 λ (lambda)漂浮在半空,释放出光芒。 + +![SICP 封面上的画作][17] + +*SICP 封面上的画作。* + +说真的,这上面画的究竟是怎么一回事?为什么桌子会长着动物的腿?为什么这个女人指着桌子?墨水瓶又是干什么用的?我们是不是该说,这位巫师已经破译了宇宙的隐藏奥秘,而所有这些奥秘就蕴含在 eval/apply 循环和 Lambda 演算之中?看似就是如此。单单是这张图片,就一定对人们如今谈论 Lisp 的方式产生了难以计量的影响。 + +然而,这本书的内容通常并不比封面正常多少。SICP 跟你读过的所有计算机科学教科书都不同。在引言中,作者们表示,这本书不只教你怎么用 Lisp 编程 —— 它是关于“现象的三个焦点:人的心智、复数的计算机程序,和计算机”的作品 [^19]。在之后,他们对此进行了解释,描述了他们对如下观点的坚信:编程不该被当作是一种计算机科学的训练,而应该是“程序性认识论procedural epistemology”的一种新表达方式 [^20]。程序是将那些偶然被送入计算机的思想组织起来的全新方法。这本书的第一章简明地介绍了 Lisp,但是之后的绝大部分都在讲述更加抽象的概念。其中包括了对不同编程范式的讨论,对于面向对象系统中“时间”和“一致性”的讨论;在书中的某一处,还有关于通信的基本限制可能会如何带来同步问题的讨论 —— 而这些基本限制在通信中就像是光速不变在相对论中一样关键 [^21]。都是些高深难懂的东西。 + +以上这些并不是说这是本糟糕的书;这本书其实棒极了。在我读过的所有作品中,这本书对于重要的编程理念的讨论是最为深刻的;那些理念我琢磨了很久,却一直无力用文字去表达。一本入门编程教科书能如此迅速地开始描述面向对象编程的根本缺陷,和函数式语言“将可变状态降到最少”的优点,实在是一件让人印象深刻的事。而这种描述之后变为了另一种震撼人心的讨论:某种(可能类似于今日的 [RxJS][18] 的)流范式能如何同时具备两者的优秀特性。SICP 用和当初麦卡锡的 Lisp 论文相似的方式提纯出了高级程序设计的精华。你读完这本书之后,会立即想要将它推荐给你的程序员朋友们;如果他们找到这本书,看到了封面,但最终没有阅读的话,他们就只会记住长着动物腿的桌子上方那神秘的、根本的、给予魔法师特殊能力的、写着 eval/apply 的东西。话说回来,书上这两人的鞋子也让我印象颇深。 + +然而,SICP 最重要的影响恐怕是,它将 Lisp 由一门怪语言提升成了必要的教学工具。在 SICP 面世之前,人们互相推荐 Lisp,以学习这门语言为提升编程技巧的途径。1979 年的 Byte 杂志 Lisp 特刊印证了这一事实。之前提到的那位编辑不仅就麻省理工的新 Lisp 计算机大书特书,还说,Lisp 这门语言值得一学,因为它“代表了分析问题的另一种视角” [^22]。但 SICP 并未只把 Lisp 作为其它语言的陪衬来使用;SICP 将其作为*入门*语言。这就暗含了一种论点,那就是,Lisp 是最能把握计算机编程基础的语言。可以认为,如今的程序员们彼此怂恿“在死掉之前至少试试 Lisp”的时候,他们很大程度上是因为 SICP 才这么说的。毕竟,编程语言 [Brainfuck][19] 想必同样也提供了“分析问题的另一种视角”;但人们学习 Lisp 而非学习 Brainfuck,那是因为他们知道,前者的那种 Lisp 视角在二十年中都被看作是极其有用的,有用到麻省理工在给他们的本科生教其它语言之前,必然会先教 Lisp。 + +### Lisp 的回归 + +在 
SICP 出版的同一年,<ruby>本贾尼·斯特劳斯特卢普<rt>Bjarne Stroustrup</rt></ruby>发布了 C++ 语言的首个版本,它将面向对象编程带到了大众面前。几年之后,Lisp 计算机的市场崩盘,AI 寒冬开始了。在下一个十年的变革中,C++ 和后来的 Java 成了前途无量的语言,而 Lisp 被冷落,无人问津。
+ +via: https://twobithistory.org/2018/10/14/lisp.html + +作者:[Two-Bit History][a] +选题:[lujun9972][b] +译者:[Northurland](https://github.com/Northurland) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://twobithistory.org +[b]: https://github.com/lujun9972 +[1]: https://xkcd.com/224/ +[2]: https://xkcd.com/297/ +[3]: https://www.gnu.org/fun/jokes/eternal-flame.en.html +[4]: https://www.reddit.com/r/ProgrammerHumor/comments/5c14o6/xkcd_lisp/d9szjnc/ +[5]: https://twobithistory.org/images/byte_lisp.jpg +[6]: http://languagelog.ldc.upenn.edu/myl/llog/jmc.pdf +[7]: https://en.wikipedia.org/wiki/Jargon_File +[8]: https://hci.stanford.edu/winograd/shrdlu/ +[9]: https://en.wikipedia.org/wiki/Macsyma +[10]: https://en.wikipedia.org/wiki/ACL2 +[11]: https://twobithistory.org/2018/09/30/chaosnet.html +[12]: https://youtu.be/gV5obrYaogU?t=201 +[13]: https://en.wikipedia.org/wiki/Flavors_(programming_language) +[14]: https://twobithistory.org/images/symbolics.jpg +[15]: http://www.softwarepreservation.org/projects/LISP/ +[16]: https://knowyourmeme.com/forums/meme-research/topics/47038-structure-and-interpretation-of-computer-programs-hugeass-image-dump-for-evidence +[17]: https://twobithistory.org/images/sicp.jpg +[18]: https://rxjs-dev.firebaseapp.com/ +[19]: https://en.wikipedia.org/wiki/Brainfuck +[20]: http://www.paulgraham.com/avg.html +[21]: https://web.archive.org/web/20061004035628/http://wiki.alu.org/Chris-Perkins +[22]: http://www.randomhacks.net/2005/12/03/why-ruby-is-an-acceptable-lisp/ diff --git a/published/201811/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md b/published/201811/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md new file mode 100644 index 0000000000..01bdffbafd --- /dev/null +++ b/published/201811/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md @@ -0,0 +1,247 @@ +如何使用 chkconfig 和 systemctl 命令启用或禁用 Linux 服务 +====== + +对于 Linux 管理员来说这是一个重要(美妙)的话题,所以每个人都必须知道,并练习怎样才能更高效的使用它们。 + +在 Linux 中,无论何时当你安装任何带有服务和守护进程的包,系统默认会把这些服务的初始化及 systemd 脚本添加进去,不过此时它们并没有被启用。 + +我们需要手动的开启或者关闭那些服务。Linux 中有三个著名的且一直在被使用的初始化系统。 + +### 什么是初始化系统? + +在以 Linux/Unix 为基础的操作系统上,`init` (初始化的简称) 是内核引导系统启动过程中第一个启动的进程。 + +`init` 的进程 id (pid)是 1,除非系统关机否则它将会一直在后台运行。 + +`init` 首先根据 `/etc/inittab` 文件决定 Linux 运行的级别,然后根据运行级别在后台启动所有其他进程和应用程序。 + +BIOS、MBR、GRUB 和内核程序在启动 `init` 之前就作为 Linux 的引导程序的一部分开始工作了。 + +下面是 Linux 中可以使用的运行级别(从 0~6 总共七个运行级别): + + * `0`:关机 + * `1`:单用户模式 + * `2`:多用户模式(没有NFS) + * `3`:完全的多用户模式 + * `4`:系统未使用 + * `5`:图形界面模式 + * `6`:重启 + +下面是 Linux 系统中最常用的三个初始化系统: + + * System V(Sys V) + * Upstart + * systemd + +### 什么是 System V(Sys V)? + +System V(Sys V)是类 Unix 系统第一个也是传统的初始化系统。`init` 是内核引导系统启动过程中第一支启动的程序,它是所有程序的父进程。 + +大部分 Linux 发行版最开始使用的是叫作 System V(Sys V)的传统的初始化系统。在过去的几年中,已经发布了好几个初始化系统以解决标准版本中的设计限制,例如:launchd、Service Management Facility、systemd 和 Upstart。 + +但是 systemd 已经被几个主要的 Linux 发行版所采用,以取代传统的 SysV 初始化系统。 + +### 什么是 Upstart? + +Upstart 是一个基于事件的 `/sbin/init` 守护进程的替代品,它在系统启动过程中处理任务和服务的启动,在系统运行期间监视它们,在系统关机的时候关闭它们。 + +它最初是为 Ubuntu 而设计,但是它也能够完美的部署在其他所有 Linux系统中,用来代替古老的 System-V。 + +Upstart 被用于 Ubuntu 从 9.10 到 Ubuntu 14.10 和基于 RHEL 6 的系统,之后它被 systemd 取代。 + +### 什么是 systemd? 
+ +systemd 是一个新的初始化系统和系统管理器,它被用于所有主要的 Linux 发行版,以取代传统的 SysV 初始化系统。 + +systemd 兼容 SysV 和 LSB 初始化脚本。它可以直接替代 SysV 初始化系统。systemd 是被内核启动的第一个程序,它的 PID 是 1。 + +systemd 是所有程序的父进程,Fedora 15 是第一个用 systemd 取代 upstart 的发行版。`systemctl` 用于命令行,它是管理 systemd 的守护进程/服务的主要工具,例如:(开启、重启、关闭、启用、禁用、重载和状态) + +systemd 使用 .service 文件而不是 bash 脚本(SysVinit 使用的)。systemd 将所有守护进程添加到 cgroups 中排序,你可以通过浏览 `/cgroup/systemd` 文件查看系统等级。 + +### 如何使用 chkconfig 命令启用或禁用引导服务? + +`chkconfig` 实用程序是一个命令行工具,允许你在指定运行级别下启动所选服务,以及列出所有可用服务及其当前设置。 + +此外,它还允许我们从启动中启用或禁用服务。前提是你有超级管理员权限(root 或者 `sudo`)运行这个命令。 + +所有的服务脚本位于 `/etc/rd.d/init.d`文件中 + +### 如何列出运行级别中所有的服务 + +`--list` 参数会展示所有的服务及其当前状态(启用或禁用服务的运行级别): + +``` +# chkconfig --list +NetworkManager 0:off 1:off 2:on 3:on 4:on 5:on 6:off +abrt-ccpp 0:off 1:off 2:off 3:on 4:off 5:on 6:off +abrtd 0:off 1:off 2:off 3:on 4:off 5:on 6:off +acpid 0:off 1:off 2:on 3:on 4:on 5:on 6:off +atd 0:off 1:off 2:off 3:on 4:on 5:on 6:off +auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off +. +. +``` + +### 如何查看指定服务的状态 + +如果你想查看运行级别下某个服务的状态,你可以使用下面的格式匹配出需要的服务。 + +比如说我想查看运行级别中 `auditd` 服务的状态 + +``` +# chkconfig --list| grep auditd +auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off +``` + +### 如何在指定运行级别中启用服务 + +使用 `--level` 参数启用指定运行级别下的某个服务,下面展示如何在运行级别 3 和运行级别 5 下启用 `httpd` 服务。 + + +``` +# chkconfig --level 35 httpd on +``` + +### 如何在指定运行级别下禁用服务 + +同样使用 `--level` 参数禁用指定运行级别下的服务,下面展示的是在运行级别 3 和运行级别 5 中禁用 `httpd` 服务。 + +``` +# chkconfig --level 35 httpd off +``` + +### 如何将一个新服务添加到启动列表中 + +`-–add` 参数允许我们添加任何新的服务到启动列表中,默认情况下,新添加的服务会在运行级别 2、3、4、5 下自动开启。 + +``` +# chkconfig --add nagios +``` + +### 如何从启动列表中删除服务 + +可以使用 `--del` 参数从启动列表中删除服务,下面展示的是如何从启动列表中删除 Nagios 服务。 + +``` +# chkconfig --del nagios +``` + +### 如何使用 systemctl 命令启用或禁用开机自启服务? + +`systemctl` 用于命令行,它是一个用来管理 systemd 的守护进程/服务的基础工具,例如:(开启、重启、关闭、启用、禁用、重载和状态)。 + +所有服务创建的 unit 文件位与 `/etc/systemd/system/`。 + +### 如何列出全部的服务 + +使用下面的命令列出全部的服务(包括启用的和禁用的)。 + +``` +# systemctl list-unit-files --type=service +UNIT FILE STATE +arp-ethers.service disabled +auditd.service enabled +autovt@.service enabled +blk-availability.service disabled +brandbot.service static +chrony-dnssrv@.service static +chrony-wait.service disabled +chronyd.service enabled +cloud-config.service enabled +cloud-final.service enabled +cloud-init-local.service enabled +cloud-init.service enabled +console-getty.service disabled +console-shell.service disabled +container-getty@.service static +cpupower.service disabled +crond.service enabled +. +. +150 unit files listed. +``` + +使用下面的格式通过正则表达式匹配出你想要查看的服务的当前状态。下面是使用 `systemctl` 命令查看 `httpd` 服务的状态。 + +``` +# systemctl list-unit-files --type=service | grep httpd +httpd.service disabled +``` + +### 如何让指定的服务开机自启 + +使用下面格式的 `systemctl` 命令启用一个指定的服务。启用服务将会创建一个符号链接,如下可见: + +``` +# systemctl enable httpd +Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service. +``` + +运行下列命令再次确认服务是否被启用。 + +``` +# systemctl is-enabled httpd +enabled +``` + +### 如何禁用指定的服务 + +运行下面的命令禁用服务将会移除你启用服务时所创建的符号链接。 + +``` +# systemctl disable httpd +Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service. 
+``` + +运行下面的命令再次确认服务是否被禁用。 + +``` +# systemctl is-enabled httpd +disabled +``` + +### 如何查看系统当前的运行级别 + +使用 `systemctl` 命令确认你系统当前的运行级别,`runlevel` 命令仍然可在 systemd 下工作,不过,运行级别对于 systemd 来说是一个历史遗留的概念。所以我建议你全部使用 `systemctl` 命令。 + +我们当前处于运行级别 3, 它等同于下面显示的 `multi-user.target`。 + +``` +# systemctl list-units --type=target +UNIT LOAD ACTIVE SUB DESCRIPTION +basic.target loaded active active Basic System +cloud-config.target loaded active active Cloud-config availability +cryptsetup.target loaded active active Local Encrypted Volumes +getty.target loaded active active Login Prompts +local-fs-pre.target loaded active active Local File Systems (Pre) +local-fs.target loaded active active Local File Systems +multi-user.target loaded active active Multi-User System +network-online.target loaded active active Network is Online +network-pre.target loaded active active Network (Pre) +network.target loaded active active Network +paths.target loaded active active Paths +remote-fs.target loaded active active Remote File Systems +slices.target loaded active active Slices +sockets.target loaded active active Sockets +swap.target loaded active active Swap +sysinit.target loaded active active System Initialization +timers.target loaded active active Timers +``` + +-------------------------------------------------------------------------------- + + +via: https://www.2daygeek.com/how-to-enable-or-disable-services-on-boot-in-linux-using-chkconfig-and-systemctl-command/ + + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[way-ww](https://github.com/way-ww) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 diff --git a/published/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md b/published/201811/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md similarity index 100% rename from published/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md rename to published/201811/20181015 Kali Linux- What You Must Know Before Using it - FOSS Post.md diff --git a/published/20181017 Browsing the web with Min, a minimalist open source web browser.md b/published/201811/20181017 Browsing the web with Min, a minimalist open source web browser.md similarity index 100% rename from published/20181017 Browsing the web with Min, a minimalist open source web browser.md rename to published/201811/20181017 Browsing the web with Min, a minimalist open source web browser.md diff --git a/translated/tech/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md b/published/201811/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md similarity index 75% rename from translated/tech/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md rename to published/201811/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md index 1245f2f5f0..b33670d461 100644 --- a/translated/tech/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md +++ b/published/201811/20181017 Chrony - An Alternative NTP Client And Server For Unix-like Systems.md @@ -1,9 +1,9 @@ -Chrony – 一个类 Unix 系统可选的 NTP 客户端和服务器 +Chrony:一个类 Unix 系统上 NTP 客户端和服务器替代品 ====== ![](https://www.ostechnix.com/wp-content/uploads/2018/10/chrony-1-720x340.jpeg) -在这个教程中,我们会讨论如何安装和配置 **Chrony**,一个类 Unix 系统上可选的 NTP 客户端和服务器。Chrony 
可以更快的同步系统时钟,具有更好的时钟准确度,并且它对于那些不是一直在线的系统很有帮助。Chrony 是免费、开源的,并且支持 GNU/Linux 和 BSD 衍生版比如 FreeBSD,NetBSD,macOS 和 Solaris 等。 +在这个教程中,我们会讨论如何安装和配置 **Chrony**,一个类 Unix 系统上 NTP 客户端和服务器的替代品。Chrony 可以更快的同步系统时钟,具有更好的时钟准确度,并且它对于那些不是一直在线的系统很有帮助。Chrony 是自由开源的,并且支持 GNU/Linux 和 BSD 衍生版(比如 FreeBSD、NetBSD)、macOS 和 Solaris 等。 ### 安装 Chrony @@ -13,7 +13,7 @@ Chrony 可以从大多数 Linux 发行版的默认软件库中获得。如果你 $ sudo pacman -S chrony ``` -在 Debian,Ubuntu,Linux Mint 上: +在 Debian、Ubuntu、Linux Mint 上: ``` $ sudo apt-get install chrony @@ -25,7 +25,7 @@ $ sudo apt-get install chrony $ sudo dnf install chrony ``` -当安装完成后,如果之前没有启动过的话需启动 **chronyd.service** 守护进程: +当安装完成后,如果之前没有启动过的话需启动 `chronyd.service` 守护进程: ``` $ sudo systemctl start chronyd.service @@ -37,7 +37,7 @@ $ sudo systemctl start chronyd.service $ sudo systemctl enable chronyd.service ``` -为了确认 Chronyd.service 已经启动,运行: +为了确认 `chronyd.service` 已经启动,运行: ``` $ sudo systemctl status chronyd.service @@ -71,7 +71,7 @@ Oct 17 10:35:06 ubuntuserver chronyd[2482]: Selected source 106.10.186.200 ### 配置 Chrony -NTP 客户端需要知道它要连接到哪个 NTP 服务器来获取当前时间。我们可以直接在 NTP 配置文件中的 **server** 或者 **pool** 项指定 NTP 服务器。通常,默认的配置文件位于 **/etc/chrony/chrony.conf** 或者 **/etc/chrony.conf**,取决于 Linux 发行版版本。为了更可靠的时间同步,建议指定至少三个服务器。 +NTP 客户端需要知道它要连接到哪个 NTP 服务器来获取当前时间。我们可以直接在该 NTP 配置文件中的 `server` 或者 `pool` 项指定 NTP 服务器。通常,默认的配置文件位于 `/etc/chrony/chrony.conf` 或者 `/etc/chrony.conf`,取决于 Linux 发行版版本。为了更可靠的同步时间,建议指定至少三个服务器。 下面几行是我的 Ubuntu 18.04 LTS 服务器上的一个示例。 @@ -87,19 +87,19 @@ pool 2.ubuntu.pool.ntp.org iburst maxsources 2 [...] ``` -从上面的输出中你可以看到,[**NTP Pool Project**][1] 已经被设置成为了默认的时间服务器。对于那些好奇的人,NTP Pool project 是一个时间服务器集群,用来为全世界千万个客户端提供 NTP 服务。它是 Ubuntu 以及其他主流 Linux 发行版的默认时间服务器。 +从上面的输出中你可以看到,[NTP 服务器池项目][1] 已经被设置成为了默认的时间服务器。对于那些好奇的人,NTP 服务器池项目是一个时间服务器集群,用来为全世界千万个客户端提供 NTP 服务。它是 Ubuntu 以及其他主流 Linux 发行版的默认时间服务器。 在这里, - * **iburst** 选项用来加速初始的同步过程 - * **maxsources** 代表 NTP 源的最大数量 + * `iburst` 选项用来加速初始的同步过程 + * `maxsources` 代表 NTP 源的最大数量 请确保你选择的 NTP 服务器是同步的、稳定的、离你的位置较近的,以便使用这些 NTP 源来提升时间准确度。 ### 在命令行中管理 Chronyd -Chrony 有一个命令行工具叫做 **chronyc** 用来控制和监控 **chrony** 守护进程(chronyd)。 +chrony 有一个命令行工具叫做 `chronyc` 用来控制和监控 chrony 守护进程(`chronyd`)。 -为了检查是否 **chrony** 已经同步,我们可以使用下面展示的 **tracking** 命令。 +为了检查是否 chrony 已经同步,我们可以使用下面展示的 `tracking` 命令。 ``` $ chronyc tracking @@ -135,7 +135,7 @@ MS Name/IP address Stratum Poll Reach LastRx Last sample ^- ns2.pulsation.fr 2 10 377 311 -75ms[ -73ms] +/- 250ms ``` -Chronyc 工具可以对每个源进行统计,比如使用 **sourcestats** 命令获得漂移速率和进行偏移估计。 +`chronyc` 工具可以对每个源进行统计,比如使用 `sourcestats` 命令获得漂移速率和进行偏移估计。 ``` $ chronyc sourcestats @@ -152,7 +152,7 @@ sin1.m-d.net 29 13 83m +0.049 6.060 -8466us 9940us ns2.pulsation.fr 32 17 88m +0.784 9.834 -62ms 22ms ``` -如果你的系统没有连接到 Internet,你需要告知 Chrony 系统没有连接到 Internet。为了这样做,运行: +如果你的系统没有连接到互联网,你需要告知 Chrony 系统没有连接到 互联网。为了这样做,运行: ``` $ sudo chronyc offline @@ -174,7 +174,7 @@ $ chronyc activity 可以看到,我的所有源此时都是离线状态。 -一旦你连接到 Internet,只需要使用命令告知 Chrony 你的系统已经回到在线状态: +一旦你连接到互联网,只需要使用命令告知 Chrony 你的系统已经回到在线状态: ``` $ sudo chronyc online @@ -193,11 +193,10 @@ $ chronyc activity 0 sources with unknown address ``` -所有选项和参数的详细解释,请参考帮助手册。 +所有选项和参数的详细解释,请参考其帮助手册。 ``` $ man chronyc - $ man chronyd ``` @@ -206,7 +205,6 @@ $ man chronyd 保持关注! 
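+
+**补充**:如果系统时钟偏差较大,按常规速率缓慢调整会很耗时,此时可以让 chronyd 立即步进校正。下面是一个示例用法(`makestep` 是 chronyc 的标准子命令,具体行为请以你所用版本的手册为准):
+
+```
+$ sudo chronyc makestep
+```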
- -------------------------------------------------------------------------------- via: https://www.ostechnix.com/chrony-an-alternative-ntp-client-and-server-for-unix-like-systems/ @@ -214,7 +212,7 @@ via: https://www.ostechnix.com/chrony-an-alternative-ntp-client-and-server-for-u 作者:[SK][a] 选题:[lujun9972][b] 译者:[zianglei](https://github.com/zianglei) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181017 Design faster web pages, part 2- Image replacement.md b/published/201811/20181017 Design faster web pages, part 2- Image replacement.md similarity index 100% rename from published/20181017 Design faster web pages, part 2- Image replacement.md rename to published/201811/20181017 Design faster web pages, part 2- Image replacement.md diff --git a/published/20181017 How To Determine Which System Manager Is Running On Linux System.md b/published/201811/20181017 How To Determine Which System Manager Is Running On Linux System.md similarity index 100% rename from published/20181017 How To Determine Which System Manager Is Running On Linux System.md rename to published/201811/20181017 How To Determine Which System Manager Is Running On Linux System.md diff --git a/published/20181019 Edit your videos with Pitivi on Fedora.md b/published/201811/20181019 Edit your videos with Pitivi on Fedora.md similarity index 100% rename from published/20181019 Edit your videos with Pitivi on Fedora.md rename to published/201811/20181019 Edit your videos with Pitivi on Fedora.md diff --git a/published/20181019 How to use Pandoc to produce a research paper.md b/published/201811/20181019 How to use Pandoc to produce a research paper.md similarity index 100% rename from published/20181019 How to use Pandoc to produce a research paper.md rename to published/201811/20181019 How to use Pandoc to produce a research paper.md diff --git a/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md b/published/201811/20181019 What is an SRE and how does it relate to DevOps.md similarity index 83% rename from translated/talk/20181019 What is an SRE and how does it relate to DevOps.md rename to published/201811/20181019 What is an SRE and how does it relate to DevOps.md index 80700d6fb9..03bd773fa7 100644 --- a/translated/talk/20181019 What is an SRE and how does it relate to DevOps.md +++ b/published/201811/20181019 What is an SRE and how does it relate to DevOps.md @@ -1,15 +1,15 @@ 什么是 SRE?它和 DevOps 是怎么关联的? ===== -大型企业里 SRE 角色比较常见,不过小公司也需要 SRE。 +> 大型企业里 SRE 角色比较常见,不过小公司也需要 SRE。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/toolbox-learn-draw-container-yearbook.png?itok=xDbwz1pP) -虽然站点可靠性工程师(SRE)角色在近几年变得流行起来,但是很多人 —— 甚至是软件行业里的 —— 还不知道 SRE 是什么或者 SRE 都干些什么。为了搞清楚这些问题,这篇文章解释了 SRE 的含义,还有 SRE 怎样关联 DevOps,以及在工程师团队规模不大的组织里 SRE 该如何工作。 +虽然站点可靠性工程师site reliability engineer(SRE)角色在近几年变得流行起来,但是很多人 —— 甚至是软件行业里的 —— 还不知道 SRE 是什么或者 SRE 都干些什么。为了搞清楚这些问题,这篇文章解释了 SRE 的含义,还有 SRE 怎样关联 DevOps,以及在工程师团队规模不大的组织里 SRE 该如何工作。 ### 什么是站点可靠性工程? 
-谷歌的几个工程师写的《 [SRE:谷歌运维解密][1]》被认为是站点可靠性工程的权威书籍。谷歌的工程副总裁 Ben Treynor Sloss 在二十一世纪初[创造了这个术语][2]。他是这样定义的:“当你让软件工程师设计运维功能时,SRE 就产生了。” +谷歌的几个工程师写的《[SRE:谷歌运维解密][1]》被认为是站点可靠性工程的权威书籍。谷歌的工程副总裁 Ben Treynor Sloss 在二十一世纪初[创造了这个术语][2]。他是这样定义的:“当你让软件工程师设计运维功能时,SRE 就产生了。” 虽然系统管理员从很久之前就在写代码,但是过去的很多时候系统管理团队是手动管理机器的。当时他们管理的机器可能有几十台或者上百台,不过当这个数字涨到了几千甚至几十万的时候,就不能简单的靠人去解决问题了。规模如此大的情况下,很明显应该用代码去管理机器(以及机器上运行的软件)。 @@ -19,13 +19,13 @@ ### SRE 和 DevOps -站点可靠性工程的核心,就是对 DevOps 范例的实践。[DevOps 的定义][3]有很多种方式。开发团队(“devs”)和运维(“ops”)团队相互分离的传统模式下,写代码的团队在服务交付给用户使用之后就不再对服务状态负责了。开发团队“把代码扔到墙那边”让运维团队去部署和支持。 +站点可靠性工程的核心,就是对 DevOps 范例的实践。[DevOps 的定义][3]有很多种方式。开发团队(“dev”)和运维(“ops”)团队相互分离的传统模式下,写代码的团队在将服务交付给用户使用之后就不再对服务状态负责了。开发团队“把代码扔到墙那边”让运维团队去部署和支持。 这种情况会导致大量失衡。开发和运维的目标总是不一致 —— 开发希望用户体验到“最新最棒”的代码,但是运维想要的是变更尽量少的稳定系统。运维是这样假定的,任何变更都可能引发不稳定,而不做任何变更的系统可以一直保持稳定。(减少软件的变更次数并不是避免故障的唯一因素,认识到这一点很重要。例如,虽然你的 web 应用保持不变,但是当用户数量涨到十倍时,服务可能就会以各种方式出问题。) DevOps 理念认为通过合并这两个岗位就能够消灭争论。如果开发团队时刻都想把新代码部署上线,那么他们也必须对新代码引起的故障负责。就像亚马逊的 [Werner Vogels 说的][4]那样,“谁开发,谁运维”(生产环境)。但是开发人员已经有一大堆问题了。他们不断的被推动着去开发老板要的产品功能。再让他们去了解基础设施,包括如何部署、配置还有监控服务,这对他们的要求有点太多了。所以就需要 SRE 了。 -开发一个 web 应用的时候经常是很多人一起参与。有用户界面设计师,图形设计师,前端工程师,后端工程师,还有许多其他工种(视技术选型的具体情况而定)。如何管理写好的代码也是需求之一(例如部署,配置,监控)—— 这是 SRE 的专业领域。但是,就像前端工程师受益于后端领域的知识一样(例如从数据库获取数据的方法),SRE 理解部署系统的工作原理,知道如何满足特定的代码或者项目的具体需求。 +开发一个 web 应用的时候经常是很多人一起参与。有用户界面设计师、图形设计师、前端工程师、后端工程师,还有许多其他工种(视技术选型的具体情况而定)。如何管理写好的代码也是需求之一(例如部署、配置、监控)—— 这是 SRE 的专业领域。但是,就像前端工程师受益于后端领域的知识一样(例如从数据库获取数据的方法),SRE 理解部署系统的工作原理,知道如何满足特定的代码或者项目的具体需求。 所以 SRE 不仅仅是“写代码的运维工程师”。相反,SRE 是开发团队的成员,他们有着不同的技能,特别是在发布部署、配置管理、监控、指标等方面。但是,就像前端工程师必须知道如何从数据库中获取数据一样,SRE 也不是只负责这些领域。为了提供更容易升级、管理和监控的产品,整个团队共同努力。 @@ -37,7 +37,7 @@ DevOps 理念认为通过合并这两个岗位就能够消灭争论。如果开 让开发人员做 SRE 最显著的优点是,团队规模变大的时候也能很好的扩展。而且,开发人员将会全面地了解应用的特性。但是,许多初创公司的基础设施包含了各种各样的 SaaS 产品,这种多样性在基础设施上体现的最明显,因为连基础设施本身也是多种多样。然后你们在某个基础设施上引入指标系统、站点监控、日志分析、容器等等。这些技术解决了一部分问题,也增加了复杂度。开发人员除了要了解应用程序的核心技术(比如开发语言),还要了解上述所有技术和服务。最终,掌握所有的这些技术让人无法承受。 -另一种方案是聘请专家专职做 SRE。他们专注于发布部署、配置管理、监控和指标,可以节省开发人员的时间。这种方案的缺点是,SRE 的时间必须分配给多个不同的应用(就是说 SRE 需要贯穿整个工程部门)。 这可能意味着 SRE 没时间对任何应用深入学习,然而他们可以站在一个能看到服务全貌的高度,知道各个部分是怎么组合在一起的。 这个“ 三万英尺高的视角”可以帮助 SRE 从系统整体上考虑,哪些薄弱环节需要优先修复。 +另一种方案是聘请专家专职做 SRE。他们专注于发布部署、配置管理、监控和指标,可以节省开发人员的时间。这种方案的缺点是,SRE 的时间必须分配给多个不同的应用(就是说 SRE 需要贯穿整个工程部门)。 这可能意味着 SRE 没时间对任何应用深入学习,然而他们可以站在一个能看到服务全貌的高度,知道各个部分是怎么组合在一起的。 这个“三万英尺高的视角”可以帮助 SRE 从系统整体上考虑,哪些薄弱环节需要优先修复。 有一个关键信息我还没提到:其他的工程师。他们可能很渴望了解发布部署的原理,也很想尽全力学会使用指标系统。而且,雇一个 SRE 可不是一件简单的事儿。因为你要找的是一个既懂系统管理又懂软件工程的人。(我之所以明确地说软件工程而不是说“能写代码”,是因为除了写代码之外软件工程还包括很多东西,比如编写良好的测试或文档。) @@ -54,7 +54,7 @@ via: https://opensource.com/article/18/10/sre-startup 作者:[Craig Sebenik][a] 选题:[lujun9972][b] 译者:[BeliteX](https://github.com/belitex) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181022 5 tips for choosing the right open source database.md b/published/201811/20181022 5 tips for choosing the right open source database.md similarity index 100% rename from published/20181022 5 tips for choosing the right open source database.md rename to published/201811/20181022 5 tips for choosing the right open source database.md diff --git a/published/20181022 How to set up WordPress on a Raspberry Pi.md b/published/201811/20181022 How to set up WordPress on a Raspberry Pi.md similarity index 100% rename from published/20181022 How to set up WordPress on a Raspberry Pi.md rename to published/201811/20181022 How to set up WordPress on a Raspberry Pi.md diff --git 
a/translated/tech/20181023 Getting started with functional programming in Python using the toolz library.md b/published/201811/20181023 Getting started with functional programming in Python using the toolz library.md similarity index 67% rename from translated/tech/20181023 Getting started with functional programming in Python using the toolz library.md rename to published/201811/20181023 Getting started with functional programming in Python using the toolz library.md index 1f2606daa2..d23a45bc77 100644 --- a/translated/tech/20181023 Getting started with functional programming in Python using the toolz library.md +++ b/published/201811/20181023 Getting started with functional programming in Python using the toolz library.md @@ -1,7 +1,7 @@ -使用Python的toolz库开始函数式编程 +使用 Python 的 toolz 库开始函数式编程 ====== -toolz库允许你操作函数,使其更容易理解,更容易测试代码。 +> toolz 库允许你操作函数,使其更容易理解,更容易测试代码。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop-music-headphones.png?itok=EQZ2WKzy) @@ -20,7 +20,11 @@ def add_one_word(words, word): 这个函数假设它的第一个参数是一个不可变的类似字典的对象,它返回一个新的类似字典的在相关位置递增的对象:这就是一个简单的频率计数器。 -但是,只有将它应用于单词流并做归纳时才有用。 我们可以使用内置模块 `functools` 中的归纳器。 `functools.reduce(function, stream, initializer)` +但是,只有将它应用于单词流并做*归纳*时才有用。 我们可以使用内置模块 `functools` 中的归纳器。 + +``` +functools.reduce(function, stream, initializer) +``` 我们想要一个函数,应用于流,并且能能返回频率计数。 @@ -30,14 +34,12 @@ def add_one_word(words, word): add_all_words = curry(functools.reduce, add_one_word) ``` -使用此版本,我们需要提供初始化程序。 但是,我们不能只将 `pyrsistent.m` 函数添加到 `curry` 函数中中; 因为这个顺序是错误的。 +使用此版本,我们需要提供初始化程序。但是,我们不能只将 `pyrsistent.m` 函数添加到 `curry` 函数中; 因为这个顺序是错误的。 ``` add_all_words_flipped = flip(add_all_words) ``` -The `flip` higher-level function returns a function that calls the original, with arguments flipped. - `flip` 这个高阶函数返回一个调用原始函数的函数,并且翻转参数顺序。 ``` @@ -46,7 +48,7 @@ get_all_words = add_all_words_flipped(pyrsistent.m()) 我们利用 `flip` 自动调整其参数的特性给它一个初始值:一个空字典。 -现在我们可以执行 `get_all_words(word_stream)` 这个函数来获取频率字典。 但是,我们如何获得一个单词流呢? Python文件是行流的。 +现在我们可以执行 `get_all_words(word_stream)` 这个函数来获取频率字典。 但是,我们如何获得一个单词流呢? Python 文件是按行供流的。 ``` def to_words(lines): @@ -60,9 +62,9 @@ def to_words(lines): words_from_file = toolz.compose(get_all_words, to_words) ``` -在这种情况下,组合只是使两个函数很容易阅读:首先将文件的行流应用于 `to_words`,然后将 `get_all_words` 应用于 `to_words` 的结果。 散文似乎与代码相反。 +在这种情况下,组合只是使两个函数很容易阅读:首先将文件的行流应用于 `to_words`,然后将 `get_all_words` 应用于 `to_words` 的结果。 但是文字上读起来似乎与代码执行相反。 -当我们开始认真对待可组合性时,这很重要。 有时可以将代码编写为一个单元序列,单独测试每个单元,最后将它们全部组合。 如果有几个组合元素时,组合的顺序可能就很难理解。 +当我们开始认真对待可组合性时,这很重要。有时可以将代码编写为一个单元序列,单独测试每个单元,最后将它们全部组合。如果有几个组合元素时,组合的顺序可能就很难理解。 `toolz` 库借用了 Unix 命令行的做法,并使用 `pipe` 作为执行相同操作的函数,但顺序相反。 @@ -70,17 +72,13 @@ words_from_file = toolz.compose(get_all_words, to_words) words_from_file = toolz.pipe(to_words, get_all_words) ``` -Now it reads more intuitively: Pipe the input into `to_words`, and pipe the results into `get_all_words`. On a command line, the equivalent would look like this: - 现在读起来更直观了:将输入传递到 `to_words`,并将结果传递给 `get_all_words`。 在命令行上,等效写法如下所示: ``` $ cat files | to_words | get_all_words ``` -The `toolz` library allows us to manipulate functions, slicing, dicing, and composing them to make our code easier to understand and to test. 
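+
+为了便于验证,下面把上文的各个片段拼成一个可以直接运行的小脚本。这里用 `compose` 的结果来处理一段内存中的文本(`add_one_word` 的函数体按前文的描述补全,示例输入只是演示用的假设数据):
+
+```
+import functools
+import io
+
+import pyrsistent
+import toolz
+
+def add_one_word(words, word):
+    # 返回一个在 word 处计数加一的新的不可变映射
+    return words.set(word, words.get(word, 0) + 1)
+
+# 预先绑定归约函数,翻转参数顺序后填入初始值(空的不可变字典)
+add_all_words = toolz.curry(functools.reduce, add_one_word)
+get_all_words = toolz.flip(add_all_words)(pyrsistent.m())
+
+def to_words(lines):
+    for line in lines:
+        yield from line.split()
+
+# 组合:先应用 to_words,再应用 get_all_words
+words_from_file = toolz.compose(get_all_words, to_words)
+
+lines = io.StringIO("hello world\nhello toolz\n")
+print(words_from_file(lines))
+# 输出类似:pmap({'hello': 2, 'world': 1, 'toolz': 1})
+```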
- -`toolz` 库允许我们操作函数,切片,分割和组合,以使我们的代码更容易理解和测试。 +`toolz` 库允许我们操作函数,切片、分割和组合,以使我们的代码更容易理解和测试。 -------------------------------------------------------------------------------- @@ -89,10 +87,10 @@ via: https://opensource.com/article/18/10/functional-programming-python-toolz 作者:[Moshe Zadka][a] 选题:[lujun9972][b] 译者:[Flowsnow](https://github.com/Flowsnow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://opensource.com/users/moshez [b]: https://github.com/lujun9972 -[1]: https://opensource.com/article/18/10/functional-programming-python-immutable-data-structures \ No newline at end of file +[1]: https://linux.cn/article-10222-1.html diff --git a/translated/tech/20181024 4 cool new projects to try in COPR for October 2018.md b/published/201811/20181024 4 cool new projects to try in COPR for October 2018.md similarity index 55% rename from translated/tech/20181024 4 cool new projects to try in COPR for October 2018.md rename to published/201811/20181024 4 cool new projects to try in COPR for October 2018.md index 9bec02c08d..70e2146853 100644 --- a/translated/tech/20181024 4 cool new projects to try in COPR for October 2018.md +++ b/published/201811/20181024 4 cool new projects to try in COPR for October 2018.md @@ -1,30 +1,19 @@ -2018 年 10 月在 COPR 中值得尝试的 4 个很酷的新项目 +COPR 仓库中 4 个很酷的新软件(2018.10) ====== ![](https://fedoramagazine.org/wp-content/uploads/2017/08/4-copr-945x400.jpg) -COPR是软件的个人存储库的[集合] [1],它不在标准的 Fedora 仓库中携带。某些软件不符合允许轻松打包的标准。或者它可能不符合其他 Fedora 标准,尽管它是免费和开源的。COPR 可以在标准的 Fedora 包之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,或者是由项目自己签名的。但是,它是尝试新的或实验性软件的一种很好的方法。 +COPR 是软件的个人存储库的[集合] [1],它包含那些不在标准的 Fedora 仓库中的软件。某些软件不符合允许轻松打包的标准。或者它可能不符合其他 Fedora 标准,尽管它是自由开源的。COPR 可以在标准的 Fedora 包之外提供这些项目。COPR 中的软件不受 Fedora 基础设施的支持,或者是由项目自己背书的。但是,它是尝试新的或实验性软件的一种很好的方法。 这是 COPR 中一组新的有趣项目。 -### GitKraken +[编者按:这些项目里面有一个兵不适合通过 COPR 分发,所以从本文中 也删除了。相关的评论也删除了,以免误导读者。对此带来的不便,我们深表歉意。] -[GitKraken][2] 是一个有用的 git 客户端,它适合喜欢图形界面而非命令行的用户,并提供你期望的所有功能。此外,GitKraken 可以创建仓库和文件,并具有内置编辑器。GitKraken 的一个有用功能是暂存行或者文件,并快速切换分支。但是,在某些情况下,在遇到较大项目时会有性能问题。 - -![][3] - -#### 安装说明 - -该仓库目前为 Fedora 27、28、29 、Rawhide 以及 OpenSUSE Tumbleweed 提供 GitKraken。要安装 GitKraken,请使用以下命令: - -``` -sudo dnf copr enable elken/gitkraken -sudo dnf install gitkraken -``` +(LCTT 译注:本文后来移除了对“GitKraken”项目的介绍。) ### Music On Console -[Music On Console][4] 播放器或称为 mocp,是一个简单的控制台音频播放器。它有一个类似于 “Midnight Commander” 的界面,并且很容易使用。你只需进入包含音乐的目录,然后选择要播放的文件或目录。此外,mocp 提供了一组命令,允许直接从命令行进行控制。 +[Music On Console][4] 播放器(简称 mocp)是一个简单的控制台音频播放器。它有一个类似于 “Midnight Commander” 的界面,并且很容易使用。你只需进入包含音乐的目录,然后选择要播放的文件或目录。此外,mocp 提供了一组命令,允许直接从命令行进行控制。 ![][5] @@ -39,7 +28,7 @@ sudo dnf install moc ### cnping -[Cnping][6]是小型的图形化 ping IPv4 工具,可用于可视化显示 RTT 的变化。它提供了一个选项来控制每个数据包之间的间隔以及发送的数据大小。除了显示的图表外,cnping 还提供 RTT 和丢包的基本统计数据。 +[Cnping][6] 是小型的图形化 ping IPv4 工具,可用于可视化显示 RTT 的变化。它提供了一个选项来控制每个数据包之间的间隔以及发送的数据大小。除了显示的图表外,cnping 还提供 RTT 和丢包的基本统计数据。 ![][7] @@ -54,7 +43,7 @@ sudo dnf install cnping ### Pdfsandwich -[Pdfsandwich][8] 是将文本添加到图像形式的文本 PDF 文件 (如扫描书籍) 的工具。它使用光学字符识别 (OCR) 创建一个额外的图层, 包含了原始页面已识别的文本。这对于复制和处理文本很有用。 +[Pdfsandwich][8] 是将文本添加到图像形式的文本 PDF 文件 (如扫描书籍) 的工具。它使用光学字符识别 (OCR) 创建一个额外的图层, 包含了原始页面已识别的文本。这对于复制和处理文本很有用。 #### 安装说明 @@ -72,7 +61,7 @@ via: https://fedoramagazine.org/4-cool-new-projects-try-copr-october-2018/ 作者:[Dominik Turecek][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 
[LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181024 Get organized at the Linux command line with Calcurse.md b/published/201811/20181024 Get organized at the Linux command line with Calcurse.md similarity index 62% rename from translated/tech/20181024 Get organized at the Linux command line with Calcurse.md rename to published/201811/20181024 Get organized at the Linux command line with Calcurse.md index 6b6622dc5a..5d18f71ad5 100644 --- a/translated/tech/20181024 Get organized at the Linux command line with Calcurse.md +++ b/published/201811/20181024 Get organized at the Linux command line with Calcurse.md @@ -1,11 +1,11 @@ 使用 Calcurse 在 Linux 命令行中组织任务 ====== -使用 Calcurse 了解你的日历和待办事项列表。 +> 使用 Calcurse 了解你的日历和待办事项列表。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calendar.jpg?itok=jEKbhvDT) -你是否需要复杂,功能丰富的图形或 Web 程序才能保持井井有条?我不这么认为。正确的命令行工具可以完成工作并且做得很好。 +你是否需要复杂、功能丰富的图形或 Web 程序才能保持井井有条?我不这么认为。合适的命令行工具可以完成工作并且做得很好。 当然,说出命令行这个词可能会让一些 Linux 用户感到害怕。对他们来说,命令行是未知领域。 @@ -15,54 +15,51 @@ ### 获取软件 -如果你喜欢编译代码(我通常不喜欢),你可以从[Calcurse 网站][1]获取源码。否则,根据你的 Linux 发行版获取[二进制安装程序][2]。你甚至可以从 Linux 发行版的软件包管理器中获取 Calcurse。检查一下不会有错的。 +如果你喜欢编译代码(我通常不喜欢),你可以从 [Calcurse 网站][1]获取源码。否则,根据你的 Linux 发行版获取[二进制安装程序][2]。你甚至可以从 Linux 发行版的软件包管理器中获取 Calcurse。检查一下不会有错的。 编译或安装 Calcurse 后(两者都不用太长时间),你就可以开始使用了。 ### 使用 Calcurse -打开终端并输入 **calcurse**。 +打开终端并输入 `calcurse`。 ![](https://opensource.com/sites/default/files/uploads/calcurse-main.png) Calcurse 的界面由三个面板组成: - * 预约(屏幕左侧) -  * 日历(右上角) -  * 待办事项清单(右下角) + * 预约Appointments(屏幕左侧) +  * 日历Calendar(右上角) +  * 待办事项清单TODO(右下角) +按键盘上的 `Tab` 键在面板之间移动。要在面板添加新项目,请按下 `a`。Calcurse 将指导你完成添加项目所需的操作。 +一个有趣的地方地是预约和日历面板配合工作。你选中日历面板并添加一个预约。在那里,你选择一个预约的日期。完成后,你回到预约面板,你就看到了。 - -按键盘上的 Tab 键在面板之间移动。要在面板添加新项目,请按下 **a**。Calcurse 将指导你完成添加项目所需的操作。 - -一个有趣的地方地预约和日历面板一起生效。你选中日历面板并添加一个预约。在那里,你选择一个预约的日期。完成后,你回到预约面板。我知道。。。 - -按下 **a** 设置开始时间,持续时间(以分钟为单位)和预约说明。开始时间和持续时间是可选的。Calcurse 在它们到期的那天显示预约。 +按下 `a` 设置开始时间、持续时间(以分钟为单位)和预约说明。开始时间和持续时间是可选的。Calcurse 在它们到期的那天显示预约。 ![](https://opensource.com/sites/default/files/uploads/calcurse-appointment.png) -一天的预约看起来像: +一天的预约看起来像这样: ![](https://opensource.com/sites/default/files/uploads/calcurse-appt-list.png) -待办事项列表独立运作。选中待办面板并(再次)按下 **a**。输入任务的描述,然后设置优先级(1 表示最高,9 表示最低)。Calcurse 会在待办事项面板中列出未完成的任务。 +待办事项列表独立运作。选中待办面板并(再次)按下 `a`。输入任务的描述,然后设置优先级(1 表示最高,9 表示最低)。Calcurse 会在待办事项面板中列出未完成的任务。 ![](https://opensource.com/sites/default/files/uploads/calcurse-todo.png) -如果你的任务有很长的描述,那么 Calcurse 会截断它。你可以使用键盘上的向上或向下箭头键浏览任务,然后按下 **v** 查看描述。 +如果你的任务有很长的描述,那么 Calcurse 会截断它。你可以使用键盘上的向上或向下箭头键浏览任务,然后按下 `v` 查看描述。 ![](https://opensource.com/sites/default/files/uploads/calcurse-view-todo.png) -Calcurse 将其信息以文本形式保存在你的主目录下名为 **.calcurse** 的隐藏文件夹中,例如 **/home/scott/.calcurse**。如果 Calcurse 停止工作,那也很容易找到你的信息。 +Calcurse 将其信息以文本形式保存在你的主目录下名为 `.calcurse` 的隐藏文件夹中,例如 `/home/scott/.calcurse`。如果 Calcurse 停止工作,那也很容易找到你的信息。 ### 其他有用的功能 -Calcurse 其他的功能包括设置重复预约的功能。要执行此操作,找出要重复的预约,然后在预约面板中按下 **r**。系统会要求你设置频率(例如,每天或每周)以及你希望重复预约的时间。 +Calcurse 其他的功能包括设置重复预约的功能。要执行此操作,找出要重复的预约,然后在预约面板中按下 `r`。系统会要求你设置频率(例如,每天或每周)以及你希望重复预约的时间。 你还可以导入 [ICAL][3] 格式的日历或以 ICAL 或 [PCAL][4] 格式导出数据。使用 ICAL,你可以与其他日历程序共享数据。使用 PCAL,你可以生成日历的 Postscript 版本。 -你还可以将许多命令行参数传递给 Calcurse。你可以[在文档中][5]阅读它们。 +你还可以将许多命令行参数传递给 Calcurse。你可以[在文档中][5]了解它们。 虽然很简单,但 Calcurse 可以帮助你保持井井有条。你需要更加关注自己的任务和预约,但是你将能够更好地关注你需要做什么以及你需要做的方向。 @@ -73,7 +70,7 @@ via: https://opensource.com/article/18/10/calcurse 作者:[Scott Nesbitt][a] 选题:[lujun9972][b] 
译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201811/20181025 Monitoring database health and behavior- Which metrics matter.md b/published/201811/20181025 Monitoring database health and behavior- Which metrics matter.md new file mode 100644 index 0000000000..b8cfabc248 --- /dev/null +++ b/published/201811/20181025 Monitoring database health and behavior- Which metrics matter.md @@ -0,0 +1,84 @@ +监测数据库的健康和行为:有哪些重要指标? +====== + +> 对数据库的监测可能过于困难或者没有找到关键点。本文将讲述如何正确的监测数据库。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D) + +我们没有对数据库讨论过多少。在这个充满监测仪器的时代,我们监测我们的应用程序、基础设施、甚至我们的用户,但有时忘记我们的数据库也值得被监测。这很大程度是因为数据库表现的很好,以至于我们单纯地信任它能把任务完成的很好。信任固然重要,但能够证明它的表现确实如我们所期待的那样就更好了。 + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image1_-_bffs.png?itok=BZQM_Fos) + +### 为什么监测你的数据库? + +监测数据库的原因有很多,其中大多数原因与监测系统的任何其他部分的原因相同:了解应用程序的各个组件中发生的什么,会让你成为更了解情况的,能够做出明智决策的开发人员。 + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image5_fire.png?itok=wsip2Fa4) + +更具体地说,数据库是系统健康和行为的重要标志。数据库中的异常行为能够指出应用程序中出现问题的区域。另外,当应用程序中有异常行为时,你可以利用数据库的指标来迅速完成排除故障的过程。 + +### 问题 + +最轻微的调查揭示了监测数据库的一个问题:数据库有很多指标。说“很多”只是轻描淡写,如果你是史高治Scrooge McDuck(LCTT 译注:史高治,唐老鸭的舅舅,以一毛不拔著称),你不会放过任何一个可用的指标。如果这是摔角狂热Wrestlemania 比赛,那么指标就是折叠椅。监测所有指标似乎并不实用,那么你如何决定要监测哪些指标? + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image2_db_metrics.png?itok=Jd9NY1bt) + +### 解决方案 + +开始监测数据库的最好方式是认识一些基础的数据库指标。这些指标为理解数据库的行为创造了良好的开端。 + +### 吞吐量:数据库做了多少? + +开始检测数据库的最好方法是跟踪它所接到请求的数量。我们对数据库有较高期望;期望它能稳定的存储数据,并处理我们抛给它的所有查询,这些查询可能是一天一次大规模查询,或者是来自用户一天到晚的数百万次查询。吞吐量可以告诉我们数据库是否如我们期望的那样工作。 + +你也可以将请求按照类型(读、写、服务器端、客户端等)分组,以开始分析流量。 + +### 执行时间:数据库完成工作需要多长时间? + +这个指标看起来很明显,但往往被忽视了。你不仅想知道数据库收到了多少请求,还想知道数据库在每个请求上花费了多长时间。 然而,参考上下文来讨论执行时间非常重要:像 InfluxDB 这样的时间序列数据库中的慢与像 MySQL 这样的关系型数据库中的慢不一样。InfluxDB 中的慢可能意味着毫秒,而 MySQL 的 `SLOW_QUERY` 变量的默认值是 10 秒。 + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image4_slow_is_relative.png?itok=9RkuzUi8) + +监测执行时间和提高执行时间不一样,所以如果你的应用程序中有其他问题需要修复,那么请注意在优化上花费时间的诱惑。 + +### 并发性:数据库同时做了多少工作? + +一旦你知道数据库正在处理多少请求以及每个请求需要多长时间,你就需要添加一层复杂性以开始从这些指标中获得实际值。 + +如果数据库接收到十个请求,并且每个请求需要十秒钟来完成,那么数据库是忙碌了 100 秒、10 秒,还是介于两者之间?并发任务的数量改变了数据库资源的使用方式。当你考虑连接和线程的数量等问题时,你将开始对数据库指标有更全面的了解。 + +并发性还能影响延迟,这不仅包括任务完成所需的时间(执行时间),还包括任务在处理之前需要等待的时间。 + +### 利用率:数据库繁忙的时间百分比是多少? + +利用率是由吞吐量、执行时间和并发性的峰值所确定的数据库可用的频率,或者数据库太忙而不能响应请求的频率。 + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image6_telephone.png?itok=YzdpwUQP) + +该指标对于确定数据库的整体健康和性能特别有用。如果只能在 80% 的时间内响应请求,则可以重新分配资源、进行优化工作,或者进行更改以更接近高可用性。 + +### 好消息 + +监测和分析似乎非常困难,特别是因为我们大多数人不是数据库专家,我们可能没有时间去理解这些指标。但好消息是,大部分的工作已经为我们做好了。许多数据库都有一个内部性能数据库(Postgres:`pg_stats`、CouchDB:`Runtime_Statistics`、InfluxDB:`_internal` 等),数据库工程师设计该数据库来监测与该特定数据库有关的指标。你可以看到像慢速查询的数量一样广泛的内容,或者像数据库中每个事件的平均微秒一样详细的内容。 + +### 结论 + +数据库创建了足够的指标以使我们需要长时间研究,虽然内部性能数据库充满了有用的信息,但并不总是使你清楚应该关注哪些指标。从吞吐量、执行时间、并发性和利用率开始,它们为你提供了足够的信息,使你可以开始了解你的数据库中的情况。 + +![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image3_3_hearts.png?itok=iHF-OSwx) + +你在监视你的数据库吗?你发现哪些指标有用?告诉我吧! 
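+
+**附注**:上文提到的数据库内部统计信息在实践中很容易上手。以 PostgreSQL 为例,下面的查询可以一次看到吞吐量(事务提交/回滚数)和并发性(活动连接数)相关的几个基础指标(字段名以 PostgreSQL 自带的 `pg_stat_database` 视图为准):
+
+```
+$ psql -c "SELECT datname, xact_commit, xact_rollback, numbackends FROM pg_stat_database;"
+```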
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/10/database-metrics-matter + +作者:[Katy Farmer][a] +选题:[lujun9972][b] +译者:[ChiZelin](https://github.com/ChiZelin) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/thekatertot +[b]: https://github.com/lujun9972 diff --git a/published/20181025 Understanding Linux Links- Part 2.md b/published/201811/20181025 Understanding Linux Links- Part 2.md similarity index 100% rename from published/20181025 Understanding Linux Links- Part 2.md rename to published/201811/20181025 Understanding Linux Links- Part 2.md diff --git a/translated/talk/20181025 What breaks our systems- A taxonomy of black swans.md b/published/201811/20181025 What breaks our systems- A taxonomy of black swans.md similarity index 83% rename from translated/talk/20181025 What breaks our systems- A taxonomy of black swans.md rename to published/201811/20181025 What breaks our systems- A taxonomy of black swans.md index 22d2bdd3df..e3aa38e75a 100644 --- a/translated/talk/20181025 What breaks our systems- A taxonomy of black swans.md +++ b/published/201811/20181025 What breaks our systems- A taxonomy of black swans.md @@ -1,27 +1,27 @@ 让系统崩溃的黑天鹅分类 ====== -在严重的故障发生之前,找到引起问题的异常事件,并修复它。 +> 在严重的故障发生之前,找到引起问题的异常事件,并修复它。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/black-swan-pair_0.png?itok=MkshwqVg) -黑天鹅用来比喻造成严重影响的小概率事件(比如 2008 年的金融危机)。在生产环境的系统中,黑天鹅是指这样的事情:它引发了你不知道的问题,造成了重大影响,不能快速修复或回滚,也不能用值班说明书上的其他标准响应来解决。它是事发几年后你还在给新人说起的事件。 +黑天鹅Black swan用来比喻造成严重影响的小概率事件(比如 2008 年的金融危机)。在生产环境的系统中,黑天鹅是指这样的事情:它引发了你不知道的问题,造成了重大影响,不能快速修复或回滚,也不能用值班说明书上的其他标准响应来解决。它是事发几年后你还在给新人说起的事件。 从定义上看,黑天鹅是不可预测的,不过有时候我们能找到其中的一些模式,针对有关联的某一类问题准备防御措施。 -例如,大部分故障的直接原因是变更(代码、环境或配置)。虽然这种方式触发的 bug 是独特的,不可预测的,但是常见的金丝雀发布对避免这类问题有一定的作用,而且自动回滚已经成了一种标准止损策略。 +例如,大部分故障的直接原因是变更(代码、环境或配置)。虽然这种方式触发的 bug 是独特的、不可预测的,但是常见的金丝雀发布对避免这类问题有一定的作用,而且自动回滚已经成了一种标准止损策略。 随着我们的专业性不断成熟,一些其他的问题也正逐渐变得容易理解,被归类到某种风险并有普适的预防策略。 ### 公布出来的黑天鹅事件 -所有科技公司都有生产环境的故障,只不过并不是所有公司都会分享他们的事故分析。那些公开讨论事故的公司帮了我们的忙。下列事故都描述了某一类问题,但它们绝对不是只属于一个类别。我们的系统中都有黑天鹅在潜伏着,只是有些人还不知道而已。 +所有科技公司都有生产环境的故障,只不过并不是所有公司都会分享他们的事故分析。那些公开讨论事故的公司帮了我们的忙。下列事故都描述了某一类问题,但它们绝对不是只一个孤例。我们的系统中都有黑天鹅在潜伏着,只是有些人还不知道而已。 #### 达到上限 达到任何类型的限制都会引发严重事故。这类问题的一个典型例子是 2017 年 2 月 [Instapaper 的一次服务中断][1]。我把这份事故报告给任何一个运维工作者看,他们读完都会脊背发凉。Instapaper 生产环境的数据库所在的文件系统有 2 TB 的大小限制,但是数据库服务团队并不知情。在没有任何报错的情况下,数据库不再接受任何写入了。完全恢复需要好几天,而且还得迁移数据库。 -资源限制有各式各样的触发场景。Sentry 遇到了 [Postgres 的最大事务 ID 限制][2]。Platform.sh 遇到了[管道缓冲区大小限制][3]。SparkPost [触发了 AWS 的 DDos 保护][4]。Foursquare 在他们的一个 [MongoDB 耗尽内存][5]时遭遇了性能骤降。 +资源限制有各式各样的触发场景。Sentry 遇到了 [Postgres 的最大事务 ID 限制][2]。Platform.sh 遇到了[管道缓冲区大小限制][3]。SparkPost [触发了 AWS 的 DDoS 保护][4]。Foursquare 在他们的一个 [MongoDB 耗尽内存][5]时遭遇了性能骤降。 提前了解系统限制的一个办法是定期做测试。好的压力测试(在生产环境的副本上做)应该包含写入事务,并且应该把每一种数据存储都写到超过当前生产环境的容量。压力测试时很容易忽略的是次要存储(比如 Zookeeper)。如果你是在测试时遇到了资源限制,那么你还有时间去解决问题。鉴于这种资源限制问题的解决方案可能涉及重大的变更(比如数据存储拆分),所以时间是非常宝贵的。 @@ -32,7 +32,7 @@ #### 扩散的慢请求 > “这个世界的关联性远比我们想象中更大。所以我们看到了更多 Nassim Taleb 所说的‘黑天鹅事件’ —— 即罕见事件以更高的频率离谱地发生了,因为世界是相互关联的” -> — [Richard Thaler][6] +> —— [Richard Thaler][6] HostedGraphite 的负载均衡器并没有托管在 AWS 上,却[被 AWS 的服务中断给搞垮了][7],他们关于这次事故原因的分析报告很好地诠释了分布式计算系统之间存在多么大的关联。在这个事件里,负载均衡器的连接池被来自 AWS 上的客户访问占满了,因为这些连接很耗时。同样的现象还会发生在应用的线程、锁、数据库连接上 —— 任何能被慢操作占满的资源。 @@ -40,7 +40,7 @@ HostedGraphite 的负载均衡器并没有托管在 AWS 上,却[被 AWS 的服 重试的间隔应该用指数退避来限制一下,并加入一些时间抖动。Square 有一次服务中断是 [Redis 
存储的过载][9],原因是有一段代码对失败的事务重试了 500 次,没有任何重试退避的方案,也说明了过度重试的潜在风险。另外,针对这种情况,[断路器][10]设计模式也是有用的。 -应该设计出监控仪表盘来清晰地展示所有资源的[使用率,饱和度和报错][11],这样才能快速发现问题。 +应该设计出监控仪表盘来清晰地展示所有资源的[使用率、饱和度和报错][11],这样才能快速发现问题。 #### 突发的高负载 @@ -48,7 +48,7 @@ HostedGraphite 的负载均衡器并没有托管在 AWS 上,却[被 AWS 的服 在预定时刻同时发生的事件并不是突发大流量的唯一原因。Slack 经历过一次短时间内的[多次服务中断][12],原因是非常多的客户端断开连接后立即重连,造成了突发的大负载。 CircleCI 也经历过一次[严重的服务中断][13],当时 Gitlab 从故障中恢复了,所以数据库里积累了大量的构建任务队列,服务变得饱和而且缓慢。 -几乎所有的服务都会受突发的高负载所影响。所以对这类可能出现的事情做应急预案——并测试一下预案能否正常工作——是必须的。客户端退避和[减载][14]通常是这些方案的核心。 +几乎所有的服务都会受突发的高负载所影响。所以对这类可能出现的事情做应急预案 —— 并测试一下预案能否正常工作 —— 是必须的。客户端退避和[减载][14]通常是这些方案的核心。 如果你的系统必须不间断地接收数据,并且数据不能被丢掉,关键是用可伸缩的方式把数据缓冲到队列中,后续再处理。 @@ -57,7 +57,7 @@ HostedGraphite 的负载均衡器并没有托管在 AWS 上,却[被 AWS 的服 > “复杂的系统本身就是有风险的系统” > —— [Richard Cook, MD][15] -过去几年里软件的运维操作趋势是更加自动化。任何可能降低系统容量的自动化操作(比如擦除磁盘,退役设备,关闭服务)都应该谨慎操作。这类自动化操作的故障(由于系统有 bug 或者有不正确的调用)能很快地搞垮你的系统,而且可能很难恢复。 +过去几年里软件的运维操作趋势是更加自动化。任何可能降低系统容量的自动化操作(比如擦除磁盘、退役设备、关闭服务)都应该谨慎操作。这类自动化操作的故障(由于系统有 bug 或者有不正确的调用)能很快地搞垮你的系统,而且可能很难恢复。 谷歌的 Christina Schulman 和 Etienne Perot 在[用安全规约协助保护你的数据中心][16]的演讲中给了一些例子。其中一次事故是将谷歌整个内部的内容分发网络(CDN)提交给了擦除磁盘的自动化系统。 @@ -69,11 +69,11 @@ Schulman 和 Perot 建议使用一个中心服务来管理规约,限制破坏 ### 防止黑天鹅事件 -可能在等着击垮系统的黑天鹅可不止上面这些。有很多其他的严重问题是能通过一些技术来避免的,像金丝雀发布,压力测试,混沌工程,灾难测试和模糊测试——当然还有冗余性和弹性的设计。但是即使用了这些技术,有时候你的系统还是会有故障。 +可能在等着击垮系统的黑天鹅可不止上面这些。有很多其他的严重问题是能通过一些技术来避免的,像金丝雀发布、压力测试、混沌工程、灾难测试和模糊测试 —— 当然还有冗余性和弹性的设计。但是即使用了这些技术,有时候你的系统还是会有故障。 -为了确保你的组织能有效地响应,在服务中断期间,请保证关键技术人员和领导层有办法沟通协调。例如,有一种你可能需要处理的烦人的事情,那就是网络完全中断。拥有故障时仍然可用的通信通道非常重要,这个通信通道要完全独立于你们自己的基础设施和基础设施的依赖。举个例子,假如你使用 AWS,那么把故障时可用的通信服务部署在 AWS 上就不明智了。在和你的主系统无关的地方,运行电话网桥或 IRC 服务器是比较好的方案。确保每个人都知道这个通信平台,并练习使用它。 +为了确保你的组织能有效地响应,在服务中断期间,请保证关键技术人员和领导层有办法沟通协调。例如,有一种你可能需要处理的烦人的事情,那就是网络完全中断。拥有故障时仍然可用的通信通道非常重要,这个通信通道要完全独立于你们自己的基础设施及对其的依赖。举个例子,假如你使用 AWS,那么把故障时可用的通信服务部署在 AWS 上就不明智了。在和你的主系统无关的地方,运行电话网桥或 IRC 服务器是比较好的方案。确保每个人都知道这个通信平台,并练习使用它。 -另一个原则是,确保监控和运维工具对生产环境系统的依赖尽可能的少。将控制平面和数据平面分开,你才能在系统不健康的时候做变更。不要让数据处理和配置变更或监控使用同一个消息队列,比如——应该使用不同的消息队列实例。在 [SparkPost: DNS 挂掉的那一天][4] 这个演讲中,Jeremy Blosser 讲了一个这类例子,很关键的工具依赖了生产环境的 DNS 配置,但是生产环境的 DNS 出了问题。 +另一个原则是,确保监控和运维工具对生产环境系统的依赖尽可能的少。将控制平面和数据平面分开,你才能在系统不健康的时候做变更。不要让数据处理和配置变更或监控使用同一个消息队列,比如,应该使用不同的消息队列实例。在 [SparkPost: DNS 挂掉的那一天][4] 这个演讲中,Jeremy Blosser 讲了一个这类例子,很关键的工具依赖了生产环境的 DNS 配置,但是生产环境的 DNS 出了问题。 ### 对抗黑天鹅的心理学 @@ -83,7 +83,7 @@ Schulman 和 Perot 建议使用一个中心服务来管理规约,限制破坏 ### 了解更多 -关于黑天鹅(或者以前的黑天鹅)事件以及应对策略,还有很多其他的事情可以说。如果你想了解更多,我强烈推荐你去看这两本书,它们是关于生产环境中的弹性和稳定性的:Susan Fowler 写的[生产微服务][19],还有 Michael T. Nygard 的 [Release It!][20]。 +关于黑天鹅(或者以前的黑天鹅)事件以及应对策略,还有很多其他的事情可以说。如果你想了解更多,我强烈推荐你去看这两本书,它们是关于生产环境中的弹性和稳定性的:Susan Fowler 写的《[生产微服务][19]》,还有 Michael T. 
Nygard 的 《[Release It!][20]》。 -------------------------------------------------------------------------------- @@ -92,7 +92,7 @@ via: https://opensource.com/article/18/10/taxonomy-black-swans 作者:[Laura Nolan][a] 选题:[lujun9972][b] 译者:[BeliteX](https://github.com/belitex) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181026 An Overview of Android Pie.md b/published/201811/20181026 An Overview of Android Pie.md similarity index 81% rename from translated/tech/20181026 An Overview of Android Pie.md rename to published/201811/20181026 An Overview of Android Pie.md index 9eb3ea5206..7aae6a1f0f 100644 --- a/translated/tech/20181026 An Overview of Android Pie.md +++ b/published/201811/20181026 An Overview of Android Pie.md @@ -1,40 +1,38 @@ Android 9.0 概览 ====== +> 第九代 Android 带来了更令人满意的用户体验。 + ![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/android-pie.jpg?itok=Sx4rbOWY) -我们来谈论一下 Android。尽管 Android 只是一款内核经过修改的 Linux,但经过多年的发展,Android 开发者们(或许包括正在阅读这篇文章的你)已经为这个平台的演变做出了很多值得称道的贡献。当然,可能很多人都已经知道,但我们还是要说,Android 并不完全开源,当你使用 Google 服务的时候,就已经接触到闭源的部分了。Google Play 商店就是其中之一,它不是一个开放的服务,不过这与 Android 是否开源没有太直接的联系,而是为了让你享用到美味、营养、高效、省电的馅饼(注:Android 9.0 代号为 Pie)。 +我们来谈论一下 Android。尽管 Android 只是一款内核经过修改的 Linux,但经过多年的发展,Android 开发者们(或许包括正在阅读这篇文章的你)已经为这个平台的演变做出了很多值得称道的贡献。当然,可能很多人都已经知道,但我们还是要说,Android 并不完全开源,当你使用 Google 服务的时候,就已经接触到闭源的部分了。Google Play 商店就是其中之一,它不是一个开放的服务。不过无论 Android 开源与否,这就是一个美味、营养、高效、省电的馅饼(LCTT 译注:Android 9.0 代号为 Pie)。 -我在我的 Essential PH-1 手机上运行了 Android 9.0(我真的很喜欢这款手机,也很了解这家公司的境况并不好)。在我自己体验了一段时间之后,我认为它是会被大众接受的。那么 Android 9.0 到底好在哪里呢?下面我们就来深入探讨一下。我们的出发点是用户的角度,而不是开发人员的角度,因此我也不会深入探讨太底层的方面。 +我在我的 Essential PH-1 手机上运行了 Android 9.0(我真的很喜欢这款手机,也知道这家公司的境况并不好)。在我自己体验了一段时间之后,我认为它是会被大众接受的。那么 Android 9.0 到底好在哪里呢?下面我们就来深入探讨一下。我们的出发点是用户的角度,而不是开发人员的角度,因此我也不会深入探讨太底层的方面。 ### 手势操作 Android 系统在新的手势操作方面投入了很多,但实际体验却不算太好。这个功能确实引起了我的兴趣。在这个功能发布之初,大家都对它了解甚少,纷纷猜测它会不会让用户使用多点触控的手势来浏览 Android 界面?又或者会不会是一个完全颠覆人们认知的东西? -实际上,手势操作比大多数人设想的要更加微妙和简单,因为很多功能都浓缩到了 Home 键上。打开手势操作功能之后,Recent 键的功能就合并到 Home 键上了。因此,如果需要查看最近打开的应用程序,就不能简单地通过 Recent 键来查看,而应该从 Home 键向上轻扫一下。(图1) +实际上,手势操作比大多数人设想的要更加微妙而简单,因为很多功能都浓缩到了 Home 键上。打开手势操作功能之后,Recent 键的功能就合并到 Home 键上了。因此,如果需要查看最近打开的应用程序,就不能简单地通过 Recent 键来查看,而应该从 Home 键向上轻扫一下。(图 1) ![Android Pie][2] -图 1:Android 9.0 中的”最近的应用程序“界面。 +*图 1:Android 9.0 中的”最近的应用程序“界面。* 另一个不同的地方是 App Drawer。类似于查看最近打开的应用,需要在 Home 键向上滑动才能打开 App Drawer。 -而后退按钮则没有去掉。在应用程序需要用到后退功能时,它就会出现在屏幕的左下方。有时候即使应用程序自己带有后退按钮,Android 的后退按钮也会出现。 +而后退按钮则没有去掉。在应用程序需要用到后退功能时,它就会出现在主屏幕的左下方。有时候即使应用程序自己带有后退按钮,Android 的后退按钮也会出现。 当然,如果你不喜欢使用手势操作,也可以禁用这个功能。只需要按照下列步骤操作: - 1. 打开”设置“ - - 2. 向下滑动并进入 系统 > 手势 - + 2. 向下滑动并进入“系统 > 手势” 3. 从 Home 键向上滑动 - - 4. 将 On/Off 滑块(图2)滑动至 Off 位置 + 4. 将 On/Off 滑块(图 2)滑动至 Off 位置 ![](https://www.linux.com/sites/lcom/files/styles/floated_images/public/pie_2.png?itok=cs2tqZut) -图 2:关闭手势操作。 +*图 2:关闭手势操作。* ### 电池寿命 @@ -42,25 +40,19 @@ Android 系统在新的手势操作方面投入了很多,但实际体验却不 对于这个功能的唯一一个警告是,如果人工智能出现问题并导致电池电量过早耗尽,就只能通过恢复出厂设置来解决这个问题了。尽管有这样的缺陷,在电池续航时间方面,Android 9.0 也比 Android 8.0 有所改善。 -### 分屏功能 +### 分屏功能的变化 分屏对于 Android 来说不是一个新功能,但在 Android 9.0 上,它的使用方式和以往相比略有不同,而且只对于手势操作有影响,不使用手势操作的用户不受影响。要在 Android 9.0 上使用分屏功能,需要按照下列步骤操作: + 1. 从 Home 键向上滑动,打开“最近的应用程序”。 + 2. 找到需要放置在屏幕顶部的应用程序。 + 3. 长按应用程序顶部的图标以显示新的弹出菜单。(图 3) + 4. 点击分屏,应用程序会在屏幕的上半部分打开。 + 5. 找到要打开的第二个应用程序,然后点击它添加到屏幕的下半部分。 + ![Adding an app][5] -图 3:在 Android 9.0 上将应用添加到分屏模式中。 - -[Used with permission][3] - - 1. 
从 Home 键向上滑动,打开“最近的应用程序”。 - - 2. 找到需要放置在屏幕顶部的应用程序。 - - 3. 长按应用程序顶部的图标以显示新的弹出菜单。(图 3) - - 4. 点击分屏,应用程序会在屏幕的上半部分打开。 - - 5. 找到要打开的第二个应用程序,然后点击它添加到屏幕的下半部分。 +*图 3:在 Android 9.0 上将应用添加到分屏模式中。* 使用分屏功能关闭应用程序的方法和原来保持一致。 @@ -72,7 +64,7 @@ Android 系统在新的手势操作方面投入了很多,但实际体验却不 ![Actions][7] -图 4:Android 应用操作。 +*图 4:Android 应用操作。* ### 声音控制 @@ -82,17 +74,17 @@ Android 9.0 这次优化针对的是设备上快速控制声音的按钮。如 ![Sound control][9] -图 5:Android 9.0 上的声音控制。 +*图 5:Android 9.0 上的声音控制。* ### 屏幕截图 -由于我要撰写关于 Android 的文章,所以我会常常需要进行屏幕截图。而 Android 9.0 有意向我最喜欢的更新,就是分享屏幕截图。Android 9.0 可以在截取屏幕截图后,直接共享、编辑,或者删除不喜欢的截图,而不需要像以前一样打开 Google 相册、找到要共享的屏幕截图、打开图像然后共享图像。 +由于我要撰写关于 Android 的文章,所以我会常常需要进行屏幕截图。而 Android 9.0 有一项我最喜欢的更新,就是分享屏幕截图。Android 9.0 可以在截取屏幕截图后,直接共享、编辑,或者删除不喜欢的截图,而不需要像以前一样打开 Google 相册、找到要共享的屏幕截图、打开图像然后共享图像。 + +如果你想分享屏幕截图,只需要在截图后等待弹出菜单,点击分享(图 6),从标准的 Android 分享菜单中分享即可。 ![Sharing ][11] -图 6:共享屏幕截图变得更加容易。 - -如果你想分享屏幕截图,只需要在截图后等待弹出菜单,点击分享(图 6),从标准的 Android 分享菜单中分享即可。 +*图 6:共享屏幕截图变得更加容易。* ### 更令人满意的 Android 体验 @@ -105,7 +97,7 @@ via: https://www.linux.com/learn/2018/10/overview-android-pie 作者:[Jack Wallen][a] 选题:[lujun9972][b] 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md b/published/201811/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md similarity index 100% rename from published/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md rename to published/201811/20181026 Ultimate Plumber - Writing Linux Pipes With Instant Live Preview.md diff --git a/translated/tech/20181027 Design faster web pages, part 3- Font and CSS tweaks.md b/published/201811/20181027 Design faster web pages, part 3- Font and CSS tweaks.md similarity index 89% rename from translated/tech/20181027 Design faster web pages, part 3- Font and CSS tweaks.md rename to published/201811/20181027 Design faster web pages, part 3- Font and CSS tweaks.md index c6a6e044eb..e0b157c37a 100644 --- a/translated/tech/20181027 Design faster web pages, part 3- Font and CSS tweaks.md +++ b/published/201811/20181027 Design faster web pages, part 3- Font and CSS tweaks.md @@ -1,11 +1,11 @@ -设计更快的网页(三):字体和 CSS 转换 +设计更快的网页(三):字体和 CSS 调整 ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/10/designfaster3-816x345.jpg) -欢迎回到我们为了构建更快网页所写的系列文章。本系列的[第一][1]和[第二][2]部分讲述了如何通过优化和替换图片来减少浏览器脂肪。本部分会着眼于在 CSS([层叠式样式表][3])和字体中减掉更多的脂肪。 +欢迎回到我们为了构建更快网页所写的系列文章。本系列的[第一部分][1]和[第二部分][2]讲述了如何通过优化和替换图片来减少浏览器脂肪。本部分会着眼于在 CSS([层叠式样式表][3])和字体中减掉更多的脂肪。 -### CSS 转换 +### 调整 CSS 首先,我们先来看看问题的源头。CSS 的出现曾是技术的一大进步。你可以用一个集中式的样式表来装饰多个网页。如今很多 Web 开发者都会使用 Bootstrap 这样的框架。 @@ -35,7 +35,7 @@ Font-awesome CSS 代表了包含未使用样式的极端。这个页面中只用 current free version 912 glyphs/icons, smallest set ttf 30.9KB, woff 14.7KB, woff2 12.2KB, svg 107.2KB, eot 31.2 ``` -所以问题是,你需要所有的字形吗?很可能不需要。你可以通过 [FontForge][10] 来摆脱这些无用字形,但这需要很大的工作量。你还可以用 [Fontello][11]. 你可以使用公共实例,也可以配置你自己的版本,因为它是自由软件,可以在 [Github][12] 上找到。 +所以问题是,你需要所有的字形吗?很可能不需要。你可以通过 [FontForge][10] 来去除这些无用字形,但这需要很大的工作量。你还可以用 [Fontello][11]. 
你可以使用公共实例,也可以配置你自己的版本,因为它是自由软件,可以在 [Github][12] 上找到。 这种自定义字体集的缺点在于,你必须自己来托管字体文件。你也没法使用其它在线服务来提供更新。但与更快的性能相比,这可能算不上一个缺点。 @@ -53,14 +53,14 @@ via: https://fedoramagazine.org/design-faster-web-pages-part-3-font-css-tweaks/ 作者:[Sirko Kemter][a] 选题:[lujun9972][b] 译者:[StdioA](https://github.com/StdioA) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://fedoramagazine.org/author/gnokii/ [b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/design-faster-web-pages-part-1-image-compression/ -[2]: https://fedoramagazine.org/design-faster-web-pages-part-2-image-replacement/ +[1]: https://linux.cn/article-10166-1.html +[2]: https://linux.cn/article-10217-1.html [3]: https://en.wikipedia.org/wiki/Cascading_Style_Sheets [4]: https://getfedora.org [5]: https://fedoramagazine.org/wp-content/uploads/2018/02/CSS_delivery_tool_-_Examine_how_a_page_uses_CSS_-_2018-02-24_15.00.46.png diff --git a/published/20181029 4 open source Android email clients.md b/published/201811/20181029 4 open source Android email clients.md similarity index 100% rename from published/20181029 4 open source Android email clients.md rename to published/201811/20181029 4 open source Android email clients.md diff --git a/published/20181029 Machine learning with Python- Essential hacks and tricks.md b/published/201811/20181029 Machine learning with Python- Essential hacks and tricks.md similarity index 100% rename from published/20181029 Machine learning with Python- Essential hacks and tricks.md rename to published/201811/20181029 Machine learning with Python- Essential hacks and tricks.md diff --git a/published/201811/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md b/published/201811/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md new file mode 100644 index 0000000000..f675342f6f --- /dev/null +++ b/published/201811/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md @@ -0,0 +1,367 @@ +我们如何得知安装的包来自哪个仓库? +========== + +有时候你可能想知道安装的软件包来自于哪个仓库。这将帮助你在遇到包冲突问题时进行故障排除。 + +因为[第三方仓库][1]拥有最新版本的软件包,所以有时候当你试图安装一些包的时候会出现兼容性的问题。 + +在 Linux 上一切都是可能的,因为你可以安装一个即使在你的发行版系统上不能使用的包。 + +你也可以安装一个最新版本的包,即使你的发行版系统仓库还没有这个版本,怎么做到的呢? + +这就是为什么出现了第三方仓库。它们允许用户从库中安装所有可用的包。 + +几乎所有的发行版系统都允许第三方软件库。一些发行版还会官方推荐一些不会取代基础仓库的第三方仓库,例如 CentOS 官方推荐安装 [EPEL 库][2]。 + +下面是常用的仓库列表和它们的详细信息。 + + * CentOS: [EPEL][2]、[ELRepo][3] 等是 [Centos 社区认证仓库](4)。 + * Fedora: [RPMfusion 仓库][5] 是经常被很多 [Fedora][6] 用户使用的仓库。 + * ArchLinux: ArchLinux 社区仓库包含了来自于 Arch 用户仓库的可信用户审核通过的软件包。 + * openSUSE: [Packman 仓库][7] 为 openSUSE 提供了各种附加的软件包,特别是但不限于那些在 openSUSE Build Service 应用黑名单上的与多媒体相关的应用和库。它是 openSUSE 软件包的最大外部软件库。 + * Ubuntu:个人软件包归档(PPA)是一种软件仓库。开发者们可以创建这种仓库来分发他们的软件。你可以在 PPA 导航页面找到相关信息。同时,你也可以启用 Cananical 合作伙伴软件仓库。 + +### 仓库是什么? + +软件仓库是存储特定的应用程序的软件包的集中场所。 + +所有的 Linux 发行版都在维护他们自己的仓库,并允许用户在他们的机器上获取和安装包。 + +每个厂商都提供了各自的包管理工具来管理它们的仓库,例如搜索、安装、更新、升级、删除等等。 + +除了 RHEL 和 SUSE 以外大部分 Linux 发行版都是自由软件。要访问付费的仓库,你需要购买其订阅服务。 + +### 为什么我们需要启用第三方仓库? + +在 Linux 里,并不建议从源代码安装包,因为这样做可能会在升级软件和系统的时候产生很多问题,这也是为什么我们建议从库中安装包而不是从源代码安装。 + +### 在 RHEL/CentOS 系统上我们如何得知安装的软件包来自哪个仓库? 
+ +这可以通过很多方法实现。我们会给你所有可能的选择,你可以选择一个对你来说最合适的。 + +#### 方法-1:使用 yum 命令 + +RHEL 和 CentOS 系统使用 RPM 包,因此我们能够使用 [Yum 包管理器][8] 来获得信息。 + +YUM 即 “Yellodog Updater, Modified” 是适用于基于 RPM 的系统例如 RHEL 和 CentOS 的一个开源命令行前端包管理工具。 + +`yum` 是从发行版仓库和其他第三方库中获取、安装、删除、查询和管理 RPM 包的一个主要工具。 + +``` +# yum info apachetop +Loaded plugins: fastestmirror +Loading mirror speeds from cached hostfile + * epel: epel.mirror.constant.com +Installed Packages +Name : apachetop +Arch : x86_64 +Version : 0.15.6 +Release : 1.el7 +Size : 65 k +Repo : installed +From repo : epel +Summary : A top-like display of Apache logs +URL : https://github.com/tessus/apachetop +License : BSD +Description : ApacheTop watches a logfile generated by Apache (in standard common or + : combined logformat, although it doesn't (yet) make use of any of the extra + : fields in combined) and generates human-parsable output in realtime. +``` + +`apachetop` 包来自 EPEL 仓库。 + +#### 方法-2:使用 yumdb 命令 + +`yumdb info` 提供了类似于 `yum info` 的信息但是它又提供了包校验和数据、类型、用户信息(谁安装的软件包)。从 yum 3.2.26 开始,yum 已经开始在 rpmdatabase 之外存储额外的信息(user 表示软件是用户安装的,dep 表示它是作为依赖项引入的)。 + +``` +# yumdb info lighttpd +Loaded plugins: fastestmirror +lighttpd-1.4.50-1.el7.x86_64 + checksum_data = a24d18102ed40148cfcc965310a516050ed437d728eeeefb23709486783a4d37 + checksum_type = sha256 + command_line = --enablerepo=epel install lighttpd apachetop aria2 atop axel + from_repo = epel + from_repo_revision = 1540756729 + from_repo_timestamp = 1540757483 + installed_by = 0 + origin_url = https://epel.mirror.constant.com/7/x86_64/Packages/l/lighttpd-1.4.50-1.el7.x86_64.rpm + reason = user + releasever = 7 + var_contentdir = centos + var_infra = stock + var_uuid = ce328b07-9c0a-4765-b2ad-59d96a257dc8 +``` + +`lighttpd` 包来自 EPEL 仓库。 + +#### 方法-3:使用 rpm 命令 + +[RPM 命令][9] 即 “Red Hat Package Manager” 是一个适用于基于 Red Hat 的系统(例如 RHEL、CentOS、Fedora、openSUSE & Mageia)的强大的命令行包管理工具。 + +这个工具允许你在你的 Linux 系统/服务器上安装、更新、移除、查询和验证软件。RPM 文件具有 .rpm 后缀名。RPM 包是用必需的库和依赖关系构建的,不会与系统上安装的其他包冲突。 + +``` +# rpm -qi apachetop +Name : apachetop +Version : 0.15.6 +Release : 1.el7 +Architecture: x86_64 +Install Date: Mon 29 Oct 2018 06:47:49 AM EDT +Group : Applications/Internet +Size : 67020 +License : BSD +Signature : RSA/SHA256, Mon 22 Jun 2015 09:30:26 AM EDT, Key ID 6a2faea2352c64e5 +Source RPM : apachetop-0.15.6-1.el7.src.rpm +Build Date : Sat 20 Jun 2015 09:02:37 PM EDT +Build Host : buildvm-22.phx2.fedoraproject.org +Relocations : (not relocatable) +Packager : Fedora Project +Vendor : Fedora Project +URL : https://github.com/tessus/apachetop +Summary : A top-like display of Apache logs +Description : +ApacheTop watches a logfile generated by Apache (in standard common or +combined logformat, although it doesn't (yet) make use of any of the extra +fields in combined) and generates human-parsable output in realtime. +``` + +`apachetop` 包来自 EPEL 仓库。 + +#### 方法-4:使用 repoquery 命令 + +`repoquery` 是一个从 YUM 库查询信息的程序,类似于 rpm 查询。 + +``` +# repoquery -i httpd + +Name : httpd +Version : 2.4.6 +Release : 80.el7.centos.1 +Architecture: x86_64 +Size : 9817285 +Packager : CentOS BuildSystem +Group : System Environment/Daemons +URL : http://httpd.apache.org/ +Repository : updates +Summary : Apache HTTP Server +Source : httpd-2.4.6-80.el7.centos.1.src.rpm +Description : +The Apache HTTP Server is a powerful, efficient, and extensible +web server. +``` + +`httpd` 包来自 CentOS updates 仓库。 + +### 在 Fedora 系统上我们如何得知安装的包来自哪个仓库? 
+ +DNF 是 “Dandified yum” 的缩写。DNF 是使用 hawkey/libsolv 库作为后端的下一代 yum 包管理器(yum 的分支)。从 Fedora 18 开始 Aleš Kozumplík 开始开发 DNF,并最终在 Fedora 22 上得以应用/启用。 + +[dnf 命令][10] 用于在 Fedora 22 以及之后的系统上安装、更新、搜索和删除包。它会自动解决依赖并使安装包的过程变得顺畅,不会出现任何问题。 + +``` +$ dnf info tilix +Last metadata expiration check: 27 days, 10:00:23 ago on Wed 04 Oct 2017 06:43:27 AM IST. +Installed Packages +Name : tilix +Version : 1.6.4 +Release : 1.fc26 +Arch : x86_64 +Size : 3.6 M +Source : tilix-1.6.4-1.fc26.src.rpm +Repo : @System +From repo : updates +Summary : Tiling terminal emulator +URL : https://github.com/gnunn1/tilix +License : MPLv2.0 and GPLv3+ and CC-BY-SA +Description : Tilix is a tiling terminal emulator with the following features: + : + : - Layout terminals in any fashion by splitting them horizontally or vertically + : - Terminals can be re-arranged using drag and drop both within and between + : windows + : - Terminals can be detached into a new window via drag and drop + : - Input can be synchronized between terminals so commands typed in one + : terminal are replicated to the others + : - The grouping of terminals can be saved and loaded from disk + : - Terminals support custom titles + : - Color schemes are stored in files and custom color schemes can be created by + : simply creating a new file + : - Transparent background + : - Supports notifications when processes are completed out of view + : + : The application was written using GTK 3 and an effort was made to conform to + : GNOME Human Interface Guidelines (HIG). +``` + +`tilix` 包来自 Fedora updates 仓库。 + +### 在 openSUSE 系统上我们如何得知安装的包来自哪个仓库? + +Zypper 是一个使用 libzypp 的命令行包管理器。[Zypper 命令][11] 提供了存储库访问、依赖处理、包安装等功能。 + +``` +$ zypper info nano + +Loading repository data... +Reading installed packages... + + +Information for package nano: +----------------------------- +Repository : Main Repository (OSS) +Name : nano +Version : 2.4.2-5.3 +Arch : x86_64 +Vendor : openSUSE +Installed Size : 1017.8 KiB +Installed : No +Status : not installed +Source package : nano-2.4.2-5.3.src +Summary : Pico editor clone with enhancements +Description : + GNU nano is a small and friendly text editor. It aims to emulate + the Pico text editor while also offering a few enhancements. +``` + +`nano` 包来自于 openSUSE Main 仓库(OSS)。 + +### 在 ArchLinux 系统上我们如何得知安装的包来自哪个仓库? + +[Pacman 命令][12] 即包管理器工具(package manager utility ),是一个简单的用来安装、构建、删除和管理 Arch Linux 软件包的命令行工具。Pacman 使用 libalpm 作为后端来执行所有的操作。 + +``` +# pacman -Ss chromium +extra/chromium 48.0.2564.116-1 + The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser +extra/qt5-webengine 5.5.1-9 (qt qt5) + Provides support for web applications using the Chromium browser project +community/chromium-bsu 0.9.15.1-2 + A fast paced top scrolling shooter +community/chromium-chromevox latest-1 + Causes the Chromium web browser to automatically install and update the ChromeVox screen reader extention. Note: This + package does not contain the extension code. 
+community/fcitx-mozc 2.17.2313.102-1 + Fcitx Module of A Japanese Input Method for Chromium OS, Windows, Mac and Linux (the Open Source Edition of Google Japanese + Input) +``` + +`chromium` 包来自 ArchLinux extra 仓库。 + +或者,我们可以使用以下选项获得关于包的详细信息。 + +``` +# pacman -Si chromium +Repository : extra +Name : chromium +Version : 48.0.2564.116-1 +Description : The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser +Architecture : x86_64 +URL : http://www.chromium.org/ +Licenses : BSD +Groups : None +Provides : None +Depends On : gtk2 nss alsa-lib xdg-utils bzip2 libevent libxss icu libexif libgcrypt ttf-font systemd dbus + flac snappy speech-dispatcher pciutils libpulse harfbuzz libsecret libvpx perl perl-file-basedir + desktop-file-utils hicolor-icon-theme +Optional Deps : kdebase-kdialog: needed for file dialogs in KDE + gnome-keyring: for storing passwords in GNOME keyring + kwallet: for storing passwords in KWallet +Conflicts With : None +Replaces : None +Download Size : 44.42 MiB +Installed Size : 172.44 MiB +Packager : Evangelos Foutras +Build Date : Fri 19 Feb 2016 04:17:12 AM IST +Validated By : MD5 Sum SHA-256 Sum Signature +``` + +`chromium` 包来自 ArchLinux extra 仓库。 + +### 在基于 Debian 的系统上我们如何得知安装的包来自哪个仓库? + +在基于 Debian 的系统例如 Ubuntu、LinuxMint 上可以使用两种方法实现。 + +#### 方法-1:使用 apt-cache 命令 + +[apt-cache 命令][13] 可以显示存储在 APT 内部数据库的很多信息。这些信息是一种缓存,因为它们是从列在 `source.list` 文件里的不同的源中获得的。这个过程发生在 apt 更新操作期间。 + +``` +$ apt-cache policy python3 +python3: + Installed: 3.6.3-0ubuntu2 + Candidate: 3.6.3-0ubuntu3 + Version table: + 3.6.3-0ubuntu3 500 + 500 http://in.archive.ubuntu.com/ubuntu artful-updates/main amd64 Packages + *** 3.6.3-0ubuntu2 500 + 500 http://in.archive.ubuntu.com/ubuntu artful/main amd64 Packages + 100 /var/lib/dpkg/status +``` + +`python3` 包来自 Ubuntu updates 仓库。 + +#### 方法-2:使用 apt 命令 + +[APT 命令][14] 即 “Advanced Packaging Tool”,是 `apt-get` 命令的替代品,就像 DNF 是如何取代 YUM 一样。它是具有丰富功能的命令行工具并将所有的功能例如 `apt-cache`、`apt-search`、`dpkg`、`apt-cdrom`、`apt-config`、`apt-ket` 等包含在一个命令(APT)中,并且还有几个独特的功能。例如我们可以通过 APT 轻松安装 .dpkg 包,但我们不能使用 `apt-get` 命令安装,更多类似的功能都被包含进了 APT 命令。`apt-get` 因缺失了很多未被解决的特性而被 `apt` 取代。 + +``` +$ apt -a show notepadqq +Package: notepadqq +Version: 1.3.2-1~artful1 +Priority: optional +Section: editors +Maintainer: Daniele Di Sarli +Installed-Size: 1,352 kB +Depends: notepadqq-common (= 1.3.2-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2) +Download-Size: 356 kB +APT-Sources: http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful/main amd64 Packages +Description: Notepad++-like editor for Linux + Text editor with support for multiple programming + languages, multiple encodings and plugin support. 
+ +Package: notepadqq +Version: 1.2.0-1~artful1 +Status: install ok installed +Priority: optional +Section: editors +Maintainer: Daniele Di Sarli +Installed-Size: 1,352 kB +Depends: notepadqq-common (= 1.2.0-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2) +Homepage: http://notepadqq.altervista.org +Download-Size: unknown +APT-Manual-Installed: yes +APT-Sources: /var/lib/dpkg/status +Description: Notepad++-like editor for Linux + Text editor with support for multiple programming + languages, multiple encodings and plugin support. +``` + +`notepadqq` 包来自 Launchpad PPA。 + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-do-we-find-out-the-installed-packages-came-from-which-repository/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[zianglei](https://github.com/zianglei) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/category/repository/ +[2]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/ +[3]: https://www.2daygeek.com/install-enable-elrepo-on-rhel-centos-scientific-linux/ +[4]: https://www.2daygeek.com/additional-yum-repositories-for-centos-rhel-fedora-systems/ +[5]: https://www.2daygeek.com/install-enable-rpm-fusion-repository-on-centos-fedora-rhel/ +[6]: https://fedoraproject.org/wiki/Third_party_repositories +[7]: https://www.2daygeek.com/install-enable-packman-repository-on-opensuse-leap/ +[8]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[9]: https://www.2daygeek.com/rpm-command-examples/ +[10]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[13]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[14]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ diff --git a/published/20181030 How To Analyze And Explore The Contents Of Docker Images.md b/published/201811/20181030 How To Analyze And Explore The Contents Of Docker Images.md similarity index 100% rename from published/20181030 How To Analyze And Explore The Contents Of Docker Images.md rename to published/201811/20181030 How To Analyze And Explore The Contents Of Docker Images.md diff --git a/published/20181031 8 creepy commands that haunt the terminal - Opensource.com.md b/published/201811/20181031 8 creepy commands that haunt the terminal - Opensource.com.md similarity index 100% rename from published/20181031 8 creepy commands that haunt the terminal - Opensource.com.md rename to published/201811/20181031 8 creepy commands that haunt the terminal - Opensource.com.md diff --git a/published/20181101 KRS- A new tool for gathering Kubernetes resource statistics.md b/published/201811/20181101 KRS- A new tool for gathering Kubernetes resource statistics.md similarity index 100% rename from published/20181101 KRS- A new tool for gathering Kubernetes resource 
statistics.md rename to published/201811/20181101 KRS- A new tool for gathering Kubernetes resource statistics.md diff --git a/translated/tech/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md b/published/201811/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md similarity index 83% rename from translated/tech/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md rename to published/201811/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md index b51fbd8221..ef04dc33dd 100644 --- a/translated/tech/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md +++ b/published/201811/20181102 How To Create A Bootable Linux USB Drive From Windows OS 7,8 and 10.md @@ -1,10 +1,11 @@ -如何从 Windows OS 7、8 和 10 创建可启动的 Linux USB 盘? +如何从 Windows 7、8 和 10 创建可启动的 Linux USB 盘? ====== + 如果你想了解 Linux,首先要做的是在你的系统上安装 Linux 系统。 它可以通过两种方式实现,使用 Virtualbox、VMWare 等虚拟化应用,或者在你的系统上安装 Linux。 -如果你倾向从 Windows 系统迁移到 Linux 系统或计划在备用机上安装 Linux 系统,那么你须为此创建可启动的 USB 盘。 +如果你倾向于从 Windows 系统迁移到 Linux 系统或计划在备用机上安装 Linux 系统,那么你须为此创建可启动的 USB 盘。 我们已经写过许多[在 Linux 上创建可启动 USB 盘][1] 的文章,如 [BootISO][2]、[Etcher][3] 和 [dd 命令][4],但我们从来没有机会写一篇文章关于在 Windows 中创建 Linux 可启动 USB 盘的文章。不管怎样,我们今天有机会做这件事了。 @@ -22,29 +23,32 @@ 有许多程序可供使用,但我的首选是 [Universal USB Installer][6],它使用起来非常简单。只需访问 Universal USB Installer 页面并下载该程序即可。 -### 步骤3:如何使用 Universal USB Installer 创建可启动的 Ubuntu ISO +### 步骤3:创建可启动的 Ubuntu ISO 这个程序在使用上不复杂。首先连接 USB 盘,然后点击下载的 Universal USB Installer。启动后,你可以看到类似于我们的界面。 + ![][8] - * **`步骤 1:`** 选择Ubuntu 系统。 - * **`步骤 2:`** 选择 Ubuntu ISO 下载位置。 - * **`步骤 3:`** 默认它选择的是 USB 盘,但是要验证一下,接着勾选格式化选项。 - - + * 步骤 1:选择 Ubuntu 系统。 + * 步骤 2:选择 Ubuntu ISO 下载位置。 + * 步骤 3:它默认选择的是 USB 盘,但是要验证一下,接着勾选格式化选项。 ![][9] -当你点击 `Create` 按钮时,它会弹出一个带有警告的窗口。不用担心,只需点击 `Yes` 继续进行此操作即可。 +当你点击 “Create” 按钮时,它会弹出一个带有警告的窗口。不用担心,只需点击 “Yes” 继续进行此操作即可。 + ![][10] USB 盘分区正在进行中。 + ![][11] -要等待一会儿才能完成。如你您想将它移至后台,你可以点击 `Background` 按钮。 +要等待一会儿才能完成。如你您想将它移至后台,你可以点击 “Background” 按钮。 + ![][12] 好了,完成了。 + ![][13] 现在你可以进行[安装 Ubuntu 系统][14]了。但是,它也提供了一个 live 模式,如果你想在安装之前尝试,那么可以使用它。 @@ -56,7 +60,7 @@ via: https://www.2daygeek.com/create-a-bootable-live-usb-drive-from-windows-usin 作者:[Prakash Subramanian][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181105 CPod- An Open Source, Cross-platform Podcast App.md b/published/201811/20181105 CPod- An Open Source, Cross-platform Podcast App.md similarity index 67% rename from translated/tech/20181105 CPod- An Open Source, Cross-platform Podcast App.md rename to published/201811/20181105 CPod- An Open Source, Cross-platform Podcast App.md index cecd641b89..ea0dbe77e7 100644 --- a/translated/tech/20181105 CPod- An Open Source, Cross-platform Podcast App.md +++ b/published/201811/20181105 CPod- An Open Source, Cross-platform Podcast App.md @@ -1,35 +1,38 @@ CPod:一个开源、跨平台播客应用 ====== + 播客是一个很好的娱乐和获取信息的方式。事实上,我会听十几个不同的播客,包括技术、神秘事件、历史和喜剧。当然,[Linux 播客][1]也在此列表中。 今天,我们将看一个简单的跨平台应用来收听你的播客。 ![][2] -推荐的播客和播客搜索 + +*推荐的播客和播客搜索* ### 应用程序 -[CPod][3] 是 [Zack Guard(z -----)][4] 的作品。**它是一个 [Election][5] 程序**,这使它能够在最大的操作系统(Linux、Windows、Mac OS)上运行。 +[CPod][3] 是 [Zack Guard(z-------------)][4] 的作品。**它是一个 [Election][5] 程序**,这使它能够在大多数操作系统(Linux、Windows、Mac OS)上运行。 -一个小事:CPod 最初被命名为 Cumulonimbus。 +> 一个小事:CPod 最初被命名为 Cumulonimbus。 
应用的大部分被两个面板占用,来显示内容和选项。屏幕左侧的小条让你可以使用应用的不同功能。CPod 的不同栏目包括主页、队列、订阅、浏览和设置。 ![cpod settings][6] -设置 + +*设置* ### CPod 的功能 以下是 CPod 提供的功能列表: * 简洁,干净的设计 - * 可在顶级计算机平台上使用 + * 可在主流计算机平台上使用 * 有 Snap 包 * 搜索 iTunes 的播客目录 - * 下载以及无需下载播放节目 + * 可下载也可无需下载就播放节目 * 查看播客信息和节目 * 搜索播客的个别节目 - * 黑暗模式 + * 深色模式 * 改变播放速度 * 键盘快捷键 * 将你的播客订阅与 gpodder.net 同步 @@ -39,13 +42,13 @@ CPod:一个开源、跨平台播客应用 * 多语言支持 - ![search option in cpod application][7] -搜索 ZFS 节目 + +*搜索 ZFS 节目* ### 在 Linux 上体验 CPod -我最后在两个系统上安装了 CPod:ArchLabs 和 Windows。[Arch 用户仓库]​​[8] 中有两个版本的 CPod。但是,它们都已过时,一个是版本 1.14.0,另一个是 1.22.6。最新版本的 CPod 是 1.27.0。由于 ArchLabs 和 Windows 之间的版本差异,我不得已而有不同的体验。在本文中,我将重点关注 1.27.0,因为它是最新且功能最多的。 +我最后在两个系统上安装了 CPod:ArchLabs 和 Windows。[Arch 用户仓库​][8] 中有两个版本的 CPod。但是,它们都已过时,一个是版本 1.14.0,另一个是 1.22.6。最新版本的 CPod 是 1.27.0。由于 ArchLabs 和 Windows 之间的版本差异,我的体验有所不同。在本文中,我将重点关注 1.27.0,因为它是最新且功能最多的。 我马上能够找到我最喜欢的播客。我可以粘贴 RSS 源的 URL 来添加 iTunes 列表中没有的那些播客。 @@ -55,7 +58,7 @@ CPod:一个开源、跨平台播客应用 ### 安装 CPod -在 [GitHub][11]上,你可以下载适用于 Linux 的 AppImage 或 Deb 文件,适用于 Windows 的 .exe 文件或适用于 Mac OS 的 .dmg 文件。 +在 [GitHub][11] 上,你可以下载适用于 Linux 的 AppImage 或 Deb 文件,适用于 Windows 的 .exe 文件或适用于 Mac OS 的 .dmg 文件。 你可以使用 [Snap][12] 安装 CPod。你需要做的就是使用以下命令: @@ -63,14 +66,15 @@ CPod:一个开源、跨平台播客应用 sudo snap install cpod ``` -就像我之前说的那样,CPod 的 [Arch 用户仓库]​​[8]的版本已经过时了。我已经给其中一个打包者发了消息。如果你使用 Arch(或基于 Arch 的发行版),我建议你这样做。 +就像我之前说的那样,CPod 的 [Arch 用户仓库][8]的版本已经过时了。我已经给其中一个打包者发了消息。如果你使用 Arch(或基于 Arch 的发行版),我建议你这样做。 ![cpod for Linux pidcasts][13] -播放其中一个我最喜欢的播客 + +*播放其中一个我最喜欢的播客* ### 最后的想法 -总的来说,我喜欢 CPod。它外观漂亮,使用简单。事实上,我更喜欢原来的名字 (Cumulonimbus),但是它有点拗口。 +总的来说,我喜欢 CPod。它外观漂亮,使用简单。事实上,我更喜欢原来的名字(Cumulonimbus),但是它有点拗口。 我刚刚在程序中遇到两个问题。首先,我希望每个播客都有评分。其次,在打开黑暗模式后,根据长度、日期、下载状态和播放进度对剧集进行排序的菜单不起作用。 @@ -85,23 +89,23 @@ via: https://itsfoss.com/cpod-podcast-app/ 作者:[John Paul][a] 选题:[lujun9972][b] 译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 [a]: https://itsfoss.com/author/john/ [b]: https://github.com/lujun9972 [1]: https://itsfoss.com/linux-podcasts/ -[2]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod1.1.jpg +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod1.1.jpg?w=800&ssl=1 [3]: https://github.com/z-------------/CPod [4]: https://github.com/z------------- [5]: https://electronjs.org/ -[6]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod2.1.png -[7]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod4.1.jpg +[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod2.1.png?w=800&ssl=1 +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod4.1.jpg?w=800&ssl=1 [8]: https://aur.archlinux.org/packages/?O=0&K=cpod [9]: https://latenightlinux.com/ [10]: https://itsfoss.com/what-is-zfs/ [11]: https://github.com/z-------------/CPod/releases [12]: https://snapcraft.io/cumulonimbus -[13]: https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/cpod3.1.jpg +[13]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/10/cpod3.1.jpg?w=800&ssl=1 [14]: http://reddit.com/r/linuxusersgroup diff --git a/translated/tech/20181105 Commandline quick tips- How to locate a file.md b/published/201811/20181105 Commandline quick tips- How to locate a file.md similarity index 67% rename from translated/tech/20181105 Commandline quick tips- How to locate a file.md rename to published/201811/20181105 Commandline quick tips- How to locate a 
file.md index 1a56fcfc20..6b8d9a1109 100644 --- a/translated/tech/20181105 Commandline quick tips- How to locate a file.md +++ b/published/201811/20181105 Commandline quick tips- How to locate a file.md @@ -1,24 +1,24 @@ -命令行快捷提示:如何定位一个文件 +命令行快速技巧:如何定位一个文件 ====== ![](https://fedoramagazine.org/wp-content/uploads/2018/10/commandlinequicktips-816x345.jpg) -我们都会有文件存储在电脑里 —— 目录,相片,源代码等等。它们是如此之多。也无疑超出了我的记忆范围。要是毫无目标,找到正确的那一个可能会很费时间。在这篇文章里我们来看一下如何在命令行里找到需要的文件,特别是快速找到你想要的那一个。 +我们都会有文件存储在电脑里 —— 目录、相片、源代码等等。它们是如此之多。也无疑超出了我的记忆范围。要是毫无目标,找到正确的那一个可能会很费时间。在这篇文章里我们来看一下如何在命令行里找到需要的文件,特别是快速找到你想要的那一个。 -好消息是 Linux 命令行专门设计了很多非常有用的命令行工具在你的电脑上查找文件。下面我们看一下它们其中三个:ls、tree 和 tree。 +好消息是 Linux 命令行专门设计了很多非常有用的命令行工具在你的电脑上查找文件。下面我们看一下它们其中三个:`ls`、`tree` 和 `find`。 ### ls -如果你知道文件在哪里,你只需要列出它们或者查看有关它们的信息,ls 就是为此而生的。 +如果你知道文件在哪里,你只需要列出它们或者查看有关它们的信息,`ls` 就是为此而生的。 -只需运行 ls 就可以列出当下目录中所有可见的文件和目录: +只需运行 `ls` 就可以列出当下目录中所有可见的文件和目录: ``` $ ls Documents Music Pictures Videos notes.txt ``` -添加 **-l** 选项可以查看文件的相关信息。同时再加上 **-h** 选项,就可以用一种人们易读的格式查看文件的大小: +添加 `-l` 选项可以查看文件的相关信息。同时再加上 `-h` 选项,就可以用一种人们易读的格式查看文件的大小: ``` $ ls -lh @@ -30,7 +30,7 @@ drwxr-xr-x 2 adam adam 4.0K Nov 2 13:07 Videos -rw-r--r-- 1 adam adam 43K Nov 2 13:12 notes.txt ``` -**ls** 也可以搜索一个指定位置: +`ls` 也可以搜索一个指定位置: ``` $ ls Pictures/ @@ -44,7 +44,7 @@ $ ls *.txt notes.txt ``` -少了点什么?想要查看一个隐藏文件?没问题,使用 **-a** 选项: +少了点什么?想要查看一个隐藏文件?没问题,使用 `-a` 选项: ``` $ ls -a @@ -52,7 +52,7 @@ $ ls -a .. .bash_profile .vimrc Music Videos ``` -**ls** 还有很多其他有用的选项,你可以把它们组合在一起获得你想要的效果。可以使用以下命令了解更多: +`ls` 还有很多其他有用的选项,你可以把它们组合在一起获得你想要的效果。可以使用以下命令了解更多: ``` $ man ls @@ -60,13 +60,13 @@ $ man ls ### tree -如果你想查看你的文件的树状结构,tree 是一个不错的选择。可能你的系统上没有默认安装它,你可以使用包管理 DNF 手动安装: +如果你想查看你的文件的树状结构,`tree` 是一个不错的选择。可能你的系统上没有默认安装它,你可以使用包管理 DNF 手动安装: ``` $ sudo dnf install tree ``` -如果不带任何选项或者参数地运行 tree,将会以当前目录开始,显示出包含其下所有目录和文件的一个树状图。提醒一下,这个输出可能会非常大,因为它包含了这个目录下的所有目录和文件: +如果不带任何选项或者参数地运行 `tree`,将会以当前目录开始,显示出包含其下所有目录和文件的一个树状图。提醒一下,这个输出可能会非常大,因为它包含了这个目录下的所有目录和文件: ``` $ tree @@ -89,7 +89,7 @@ $ tree `-- notes.txt ``` -如果列出的太多了,使用 -L 选项,并在其后加上你想查看的层级数,可以限制列出文件的层级: +如果列出的太多了,使用 `-L` 选项,并在其后加上你想查看的层级数,可以限制列出文件的层级: ``` $ tree -L 2 @@ -118,13 +118,13 @@ Documents/work/ `-- status-reports.txt ``` -如果使用 tree 列出的是一个很大的树状图,你可以把它跟 less 组合使用: +如果使用 `tree` 列出的是一个很大的树状图,你可以把它跟 `less` 组合使用: ``` $ tree | less ``` -再一次,tree 有很多其他的选项可以使用,你可以把他们组合在一起发挥更强大的作用。man 手册页有所有这些选项: +再一次,`tree` 有很多其他的选项可以使用,你可以把他们组合在一起发挥更强大的作用。man 手册页有所有这些选项: ``` $ man tree @@ -134,13 +134,13 @@ $ man tree 那么如果不知道文件在哪里呢?就让我们来找到它们吧! 
-要是你的系统中没有 find,你可以使用 DNF 安装它: +要是你的系统中没有 `find`,你可以使用 DNF 安装它: ``` $ sudo dnf install findutils ``` -运行 find 时如果没有添加任何选项或者参数,它将会递归列出当前目录下的所有文件和目录。 +运行 `find` 时如果没有添加任何选项或者参数,它将会递归列出当前目录下的所有文件和目录。 ``` $ find @@ -167,7 +167,7 @@ $ find ./Music ``` -但是 find 真正强大的是你可以使用文件名进行搜索: +但是 `find` 真正强大的是你可以使用文件名进行搜索: ``` $ find -name do-things.sh @@ -184,6 +184,7 @@ $ find -name "*.txt" ./Documents/work/project-abc/project-notes.txt ./notes.txt ``` + 你也可以根据大小寻找文件。如果你的空间不足的时候,这种方法也许特别有用。现在来列出所有大于 1 MB 的文件: ``` @@ -207,7 +208,7 @@ $ find Documents -name "*project*" -type f Documents/work/project-abc/project-notes.txt ``` -最后再一次,find 还有很多供你使用的选项,要是你想使用它们,man 手册页绝对可以帮到你: +最后再一次,`find` 还有很多供你使用的选项,要是你想使用它们,man 手册页绝对可以帮到你: ``` $ man find @@ -220,7 +221,7 @@ via: https://fedoramagazine.org/commandline-quick-tips-locate-file/ 作者:[Adam Šamalík][a] 选题:[lujun9972][b] 译者:[dianbanjiu](https://github.com/dianbanjiu) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/translated/tech/20181105 Introducing pydbgen- A random dataframe-database table generator.md b/published/201811/20181105 Introducing pydbgen- A random dataframe-database table generator.md similarity index 89% rename from translated/tech/20181105 Introducing pydbgen- A random dataframe-database table generator.md rename to published/201811/20181105 Introducing pydbgen- A random dataframe-database table generator.md index 233cef70b6..27bb64d37e 100644 --- a/translated/tech/20181105 Introducing pydbgen- A random dataframe-database table generator.md +++ b/published/201811/20181105 Introducing pydbgen- A random dataframe-database table generator.md @@ -1,6 +1,7 @@ pydbgen:一个数据库随机生成器 ====== -> 用这个简单的工具生成大型数据库,让你更好地研究数据科学。 + +> 用这个简单的工具生成带有多表的大型数据库,让你更好地用 SQL 研究数据科学。 ![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/features_solutions_command_data.png?itok=4_VQN3RK) @@ -38,7 +39,6 @@ from pydbgen import pydbgen myDB=pydbgen.pydb() ``` -Then you can access the various internal functions exposed by the **pydb** object. For example, to print random US cities, enter: 随后就可以调用 `pydb` 对象公开的各种内部函数了。可以按照下面的例子,输出随机的美国城市和车牌号码: ``` @@ -58,7 +58,7 @@ for _ in range(10): SZL-0934 ``` -另外,如果你输入的是 city 而不是 city_real,返回的将会是虚构的城市名。 +另外,如果你输入的是 `city()` 而不是 `city_real()`,返回的将会是虚构的城市名。 ``` print(myDB.gen_data_series(num=8,data_type='city')) @@ -97,11 +97,12 @@ fields=['name','city','street_address','email']) ``` 上面的例子种生成了一个能被 MySQL 和 SQLite 支持的 `.db` 文件。下图则显示了这个文件中的数据表在 SQLite 可视化客户端中打开的画面。 + ![](https://opensource.com/sites/default/files/uploads/pydbgen_db-browser-for-sqlite.png) ### 生成 Excel 文件 -和上面的其它示例类似,下面的代码可以生成一个具有随机数据的 Excel 文件。值得一提的是,通过将`phone_simple` 参数设为 `False` ,可以生成较长较复杂的电话号码。如果你想要提高自己在数据提取方面的能力,不妨尝试一下这个功能。 +和上面的其它示例类似,下面的代码可以生成一个具有随机数据的 Excel 文件。值得一提的是,通过将 `phone_simple` 参数设为 `False` ,可以生成较长较复杂的电话号码。如果你想要提高自己在数据提取方面的能力,不妨尝试一下这个功能。 ``` myDB.gen_excel(num=20,fields=['name','phone','time','country'], @@ -109,6 +110,7 @@ phone_simple=False,filename='TestExcel.xlsx') ``` 最终的结果类似下图所示: + ![](https://opensource.com/sites/default/files/uploads/pydbgen_excel.png) ### 生成随机电子邮箱地址 @@ -133,7 +135,7 @@ Tirtha.S@comcast.net ### 未来的改进和用户贡献 -目前的版本中并不完美。如果你发现了 pydbgen 的 bug 导致 pydbgen 在运行期间发生崩溃,请向我反馈。如果你打算对这个项目贡献代码,[也随时欢迎你][1]。当然现在也还有很多改进的方向: +目前的版本中并不完美。如果你发现了 pydbgen 的 bug 导致它在运行期间发生崩溃,请向我反馈。如果你打算对这个项目贡献代码,[也随时欢迎你][1]。当然现在也还有很多改进的方向: * pydbgen 作为随机数据生成器,可以集成一些机器学习或统计建模的功能吗? * pydbgen 是否会添加可视化功能? 
@@ -151,7 +153,7 @@ via: https://opensource.com/article/18/11/pydbgen-random-database-table-generato
 作者:[Tirthajyoti Sarkar][a]
 选题:[lujun9972][b]
 译者:[HankChow](https://github.com/HankChow)
-校对:[校对者ID](https://github.com/校对者ID)
+校对:[wxy](https://github.com/wxy)
 
 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
 
diff --git a/published/201811/20181105 Revisiting the Unix philosophy in 2018.md b/published/201811/20181105 Revisiting the Unix philosophy in 2018.md
new file mode 100644
index 0000000000..7c9931e601
--- /dev/null
+++ b/published/201811/20181105 Revisiting the Unix philosophy in 2018.md
@@ -0,0 +1,102 @@
+2018 重温 Unix 哲学
+======
+> 在现代微服务环境中,构建小型、单一的应用程序的旧策略又再一次流行了起来。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X)
+
+1984 年,Rob Pike 和 Brian W. Kernighan 在 AT&T 贝尔实验室技术期刊上发表了名为 “[Unix 环境编程][1]” 的文章,其中他们用 BSD 的 `cat -v` 作为例子来论证 Unix 哲学。简而言之,Unix 哲学是:构建小型、单一的应用程序 —— 不管用什么语言 —— 只做一件小而美的事情,用 `stdin` / `stdout` 进行通信,并通过管道进行连接。
+
+听起来是不是有点耳熟?
+
+是的,我也这么认为。这就是 James Lewis 和 Martin Fowler 给出的[微服务的定义][2]。
+
+> 简单来说,微服务架构的风格是将单个应用程序开发为一套小型服务的方法,每个服务都运行在自己的进程中,并用轻量级机制进行通信,通常是 HTTP 资源 API。
+
+虽然一个 *nix 程序或者一个微服务本身可能非常局限甚至不是很有用,但是当这些独立工作的单元组合在一起的时候,就显示出了它们真正的好处和强大。
+
+### *nix 程序 vs 微服务
+
+下面的表格对比了 *nix 环境中的程序(例如 `cat` 或 `lsof`)与微服务环境中的程序。
+
+| | *nix 程序 | 微服务 |
+| ------------- | ------------------------- | ----------------------- |
+| 执行单元 | 程序使用 `stdin`/`stdout` | 使用 HTTP 或 gRPC API |
+| 数据流 | 管道 | ? |
+| 可配置和参数化 | 命令行参数、环境变量和配置文件 | JSON/YAML 文档 |
+| 发现 | 包管理器、man、make | DNS、环境变量、OpenAPI |
+
+让我们详细地看看每一行。
+
+#### 执行单元
+
+*nix 系统(如 Linux)中的执行单元是一个可执行的文件(二进制或者脚本),理想情况下,它们从 `stdin` 读取输入并将输出写入 `stdout`。而微服务通过暴露一个或多个通信接口来提供服务,比如 HTTP 和 gRPC API。在这两种情况下,你都会发现无状态的例子(本质上是纯函数行为),也会发现有状态的例子:后者除了输入之外,还有一些内部(持久)状态来决定发生什么。
+
+#### 数据流
+
+传统上,*nix 程序能够通过管道进行通信。换句话说,我们要感谢 [Doug McIlroy][3],你不需要创建临时文件来传递数据,而可以在每个进程之间处理无穷无尽的数据流。据我所知,除了我在 [2017 年做的基于 Apache Kafka 的小实验][4],没有什么能比得上管道化的微服务了。
+
+#### 可配置和参数化
+
+你是如何配置程序或者服务的,无论是永久性的服务还是即时的服务?是的,在 *nix 系统上,你通常有三种方法:命令行参数、环境变量,或全面的配置文件。在微服务架构中,典型的做法是用 YAML(或者甚至是 JSON)文档,定制好一个服务的布局和配置以及依赖的组件和通信、存储和运行时配置。例如 [Kubernetes 资源定义][5]、[Nomad 工作规范][6] 或 [Docker 编排][7] 文档。这些可能参数化也可能不参数化;也就是说,除非你会用某种模板语言,像 Kubernetes 中的 [Helm][8],否则你会发现自己要敲很多 `sed -i` 这样的命令。
+
+#### 发现
+
+你怎么知道有哪些程序和服务可用,以及如何使用它们?在 *nix 系统中通常都有一个包管理器和一份很好用的 man 页面;使用它们,应该能够回答你所有的问题。在微服务的环境中,寻找一个服务的过程会相对更自动化一些。除了像 [Airbnb 的 SmartStack][9] 或 [Netflix 的 Eureka][10] 等定制方案以外,通常还有基于环境变量或基于 DNS 的[方法][11],允许你动态地发现服务。同样重要的是,[OpenAPI][12] 事实上为 HTTP API 提供了一套标准文档和设计模式,[gRPC][13] 为一些耦合性强的高性能项目也做了同样的事情。最后非常重要的一点是,考虑到开发者体验(DX),应该从写一份好的 [Makefile][14] 开始,并以编写符合[风格][15]的文档结束。
+
+### 优点和缺点
+
+*nix 系统和微服务都提供了许多挑战和机遇。
+
+#### 模块性
+
+要设计一个简洁、目的清晰,并且能够很好地和其它模块配合的东西是很困难的。甚至要在不同版本之间保持实现一致、并引入相应的异常处理流程,都是很困难的。在微服务中,这意味着重试逻辑和超时机制,或者,将这些功能外包给服务网格(service mesh)会不会是一个更好的选择呢?这确实比较难,可如果你做好了,那它的可重用性是巨大的。
+
+#### 可观测性
+
+在一个独石应用(monolith,2018 年)或是一个什么都想做的大型程序(1984 年)里,当情况恶化的时候,应当能够直接地找到问题的根源。但是对于一条像
+
+```
+yes | tr \\n x | head -c 450m | grep n
+```
+
+这样的管道,或者一个在微服务环境中涉及(比如说)20 个服务的请求路径,你怎么弄清楚是哪个环节出了问题?幸运的是,我们有很多标准,特别是 [OpenCensus][16] 和 [OpenTracing][17]。如果你希望转向微服务,可观测性仍然可能是你最大的挑战。
+
+#### 全局状态
+
+对于 *nix 程序来说这可能不是一个大问题,但在微服务中,全局状态仍然是一个需要讨论的问题。也就是说,如何确保有效地管理本地化的(持久性)状态,以及尽可能在少做变更的情况下使全局保持一致。
+
+### 总结一下
+
+最后,问题仍然是:你是否在使用适合特定工作的工具?也就是说,就像在某些时候、某些场合实现一个特定的 *nix 程序会是更好的选择一样,在你的组织或工作流程中,一个[架构良好的单体][18]也可能就是最好的选择。无论如何,我希望这篇文章可以让你看到 Unix 哲学和微服务之间许多强有力的相似之处。也许我们可以从前者那里学到一些东西使后者受益。
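+
+作为一个小小的注脚:如果想直观感受上文 “数据流” 一节所说的组合成本有多低,可以在 shell 里随手拼一条管道。下面这条命令只是一个假设性的示意(命令组合并非原文内容,仅用来演示 “小程序 + 管道” 的思路):统计当前目录下最常见的文件扩展名。
+
+```
+# 列出文件名 -> 取最后的扩展名 -> 排序计数 -> 按出现次数取前 5 名
+$ ls | awk -F. 'NF>1 {print $NF}' | sort | uniq -c | sort -rn | head -5
+```
+
+换作微服务,要把五个这样的 “单元” 组合起来,就得考虑接口、部署和服务发现,这也正是上表中 “数据流” 一行的那个问号想提醒我们的差距。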
+-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/revisiting-unix-philosophy-2018 + +作者:[Michael Hausenblas][a] +选题:[lujun9972][b] +译者:[Jamskr](https://github.com/Jamskr) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mhausenblas +[b]: https://github.com/lujun9972 +[1]: http://harmful.cat-v.org/cat-v/ +[2]: https://martinfowler.com/articles/microservices.html +[3]: https://en.wikipedia.org/wiki/Douglas_McIlroy +[4]: https://speakerdeck.com/mhausenblas/distributed-named-pipes-and-other-inter-services-communication +[5]: http://kubernetesbyexample.com/ +[6]: https://www.nomadproject.io/docs/job-specification/index.html +[7]: https://docs.docker.com/compose/overview/ +[8]: https://helm.sh/ +[9]: https://github.com/airbnb/smartstack-cookbook +[10]: https://github.com/Netflix/eureka +[11]: https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services +[12]: https://www.openapis.org/ +[13]: https://grpc.io/ +[14]: https://suva.sh/posts/well-documented-makefiles/ +[15]: https://www.linux.com/news/improve-your-writing-gnu-style-checkers +[16]: https://opencensus.io/ +[17]: https://opentracing.io/ +[18]: https://robertnorthard.com/devops-days-well-architected-monoliths-are-okay/ diff --git a/translated/tech/20181105 Some Good Alternatives To ‘du- Command.md b/published/201811/20181105 Some Good Alternatives To ‘du- Command.md similarity index 85% rename from translated/tech/20181105 Some Good Alternatives To ‘du- Command.md rename to published/201811/20181105 Some Good Alternatives To ‘du- Command.md index 960551ebcd..cd08bac2a2 100644 --- a/translated/tech/20181105 Some Good Alternatives To ‘du- Command.md +++ b/published/201811/20181105 Some Good Alternatives To ‘du- Command.md @@ -11,9 +11,9 @@ ### tin-summer -`tin-summer` 是使用 Rust 语言编写的免费开源工具,它可以用于查找占用磁盘空间的文件,它也是 `du` 命令的另一个替代品。由于使用了多线程,因此 `tin-summer` 在计算大目录的大小时会比 `du` 命令快得多。`tin-summer` 与 `du` 命令之间的区别是前者读取文件的大小,而后者则读取磁盘使用情况。 +tin-summer 是使用 Rust 语言编写的自由开源工具,它可以用于查找占用磁盘空间的文件,它也是 `du` 命令的另一个替代品。由于使用了多线程,因此 tin-summer 在计算大目录的大小时会比 `du` 命令快得多。tin-summer 与 `du` 命令之间的区别是前者读取文件的大小,而后者则读取磁盘使用情况。 -`tin-summer` 的开发者认为它可以替代 `du`,因为它具有以下优势: +tin-summer 的开发者认为它可以替代 `du`,因为它具有以下优势: * 在大目录的操作速度上比 `du` 更快; * 在显示结果上默认采用易读格式; @@ -21,26 +21,26 @@ * 可以对输出进行排序和着色处理; * 可扩展,等等。 - - **安装 tin-summer** -要安装 `tin-summer`,只需要在终端中执行以下命令: +要安装 tin-summer,只需要在终端中执行以下命令: ``` $ curl -LSfs https://japaric.github.io/trust/install.sh | sh -s -- --git vmchale/tin-summer ``` -你也可以使用 `cargo` 软件包管理器安装 `tin-summer`,但你需要在系统上先安装 Rust。在 Rust 已经安装好的情况下,执行以下命令: +你也可以使用 `cargo` 软件包管理器安装 tin-summer,但你需要在系统上先安装 Rust。在 Rust 已经安装好的情况下,执行以下命令: ``` $ cargo install tin-summer ``` -如果上面提到的这两种方法都不能成功安装 `tin-summer`,还可以从它的[软件发布页][1]下载最新版本的二进制文件编译,进行手动安装。 +如果上面提到的这两种方法都不能成功安装 tin-summer,还可以从它的[软件发布页][1]下载最新版本的二进制文件编译,进行手动安装。 **用法** +(LCTT 译注:tin-summer 的命令名为 `sn`) + 如果需要查看当前工作目录的文件大小,可以执行以下命令: ``` @@ -80,13 +80,13 @@ $ sn sort /home/sk/ -n5 $ sn ar ``` -`tin-summer` 同样支持查找指定大小的带有构建工程的目录。例如执行以下命令可以查找到大小在 100 MB 以上的带有构建工程的目录: +tin-summer 同样支持查找指定大小的带有构建工程的目录。例如执行以下命令可以查找到大小在 100 MB 以上的带有构建工程的目录: ``` $ sn ar -t100M ``` -如上文所说,`tin-summer` 在操作大目录的时候速度比较快,因此在操作小目录的时候,速度会相对比较慢一些。不过它的开发者已经表示,将会在以后的版本中优化这个缺陷。 +如上文所说,tin-summer 在操作大目录的时候速度比较快,因此在操作小目录的时候,速度会相对比较慢一些。不过它的开发者已经表示,将会在以后的版本中优化这个缺陷。 要获取相关的帮助,可以执行以下命令: @@ -98,7 +98,7 @@ $ sn --help ### dust -`dust` (含义是 `du` + `rust` = `dust`)使用 
Rust 编写,是一个免费、开源的更直观的 `du` 工具。它可以在不需要 `head` 或`sort` 命令的情况下即时显示目录占用的磁盘空间。与 `tin-summer` 一样,它会默认情况以易读的格式显示每个目录的大小。 +`dust` (含义是 `du` + `rust` = `dust`)使用 Rust 编写,是一个免费、开源的更直观的 `du` 工具。它可以在不需要 `head` 或`sort` 命令的情况下即时显示目录占用的磁盘空间。与 tin-summer 一样,它会默认情况以易读的格式显示每个目录的大小。 **安装 dust** @@ -114,7 +114,7 @@ $ cargo install du-dust $ wget https://github.com/bootandy/dust/releases/download/v0.3.1/dust-v0.3.1-x86_64-unknown-linux-gnu.tar.gz ``` -抽取文件: +抽取文件: ``` $ tar -xvf dust-v0.3.1-x86_64-unknown-linux-gnu.tar.gz @@ -283,7 +283,7 @@ via: https://www.ostechnix.com/some-good-alternatives-to-du-command/ 作者:[SK][a] 选题:[lujun9972][b] 译者:[HankChow](https://github.com/HankChow) -校对:[校对者ID](https://github.com/校对者ID) +校对:[wxy](https://github.com/wxy) 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 diff --git a/published/201811/20181107 Gitbase- Exploring git repos with SQL.md b/published/201811/20181107 Gitbase- Exploring git repos with SQL.md new file mode 100644 index 0000000000..994474d949 --- /dev/null +++ b/published/201811/20181107 Gitbase- Exploring git repos with SQL.md @@ -0,0 +1,92 @@ +gitbase:用 SQL 查询 Git 仓库 +====== + +> gitbase 是一个使用 go 开发的的开源项目,它实现了在 Git 仓库上执行 SQL 查询。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg) + +Git 已经成为了代码版本控制的事实标准,但尽管 Git 相当普及,对代码仓库的深入分析的工作难度却没有因此而下降;而 SQL 在大型代码库的查询方面则已经是一种久经考验的语言,因此诸如 Spark 和 BigQuery 这样的项目都采用了它。 + +所以,source{d} 很顺理成章地将这两种技术结合起来,就产生了 gitbase(LCTT 译注:source{d} 是一家开源公司,本文作者是该公司开发者关系副总裁)。gitbase 是一个代码即数据code-as-data的解决方案,可以使用 SQL 对 git 仓库进行大规模分析。 + +[gitbase][1] 是一个完全开源的项目。它站在了很多巨人的肩上,因此得到了足够的发展竞争力。下面就来介绍一下其中的一些“巨人”。 + +![](https://opensource.com/sites/default/files/uploads/gitbase.png) + +*[gitbase playground][2] 为 gitbase 提供了一个可视化的操作环境。* + +### 用 Vitess 解析 SQL + +gitbase 通过 SQL 与用户进行交互,因此需要能够遵循 MySQL 协议来对通过网络传入的 SQL 请求作出解析和理解,万幸由 YouTube 建立的 [Vitess][3] 项目已经在这一方面给出了解决方案。Vitess 是一个横向扩展的 MySQL 数据库集群系统。 + +我们只是使用了这个项目中的部分重要代码,并将其转化为一个可以让任何人在数分钟以内编写出一个 MySQL 服务器的[开源程序][4],就像我在 [justforfunc][5] 视频系列中展示的 [CSVQL][6] 一样,它可以使用 SQL 操作 CSV 文件。 + +### 用 go-git 读取 git 仓库 + +在成功解析 SQL 请求之后,还需要对数据集中的 git 仓库进行查询才能返回结果。因此,我们还结合使用了 source{d} 最成功的 [go-git][7] 仓库。go-git 是使用纯 go 语言编写的具有高度可扩展性的 git 实现。 + +借此我们就可以很方便地将存储在磁盘上的代码仓库保存为 [siva][8] 文件格式(这同样是 source{d} 的一个开源项目),也可以通过 `git clone` 来对代码仓库进行复制。 + +### 使用 enry 检测语言、使用 babelfish 解析文件 + +gitbase 集成了我们开源的语言检测项目 [enry][9] 以及代码解析项目 [babelfish][10],因此在分析 git 仓库历史代码的能力也相当强大。babelfish 是一个自托管服务,普适于各种源代码解析,并将代码文件转换为通用抽象语法树Universal Abstract Syntax Tree(UAST)。 + +这两个功能在 gitbase 中可以被用户以函数 `LANGUAGE` 和 `UAST` 调用,诸如“查找上个月最常被修改的函数的名称”这样的请求就需要通过这两个功能实现。 + +### 提高性能 + +gitbase 可以对非常大的数据集进行分析,例如来自 GitHub 高达 3 TB 源代码的 Public Git Archive([公告][11])。面临的工作量如此巨大,因此每一点性能都必须运用到极致。于是,我们也使用到了 Rubex 和 Pilosa 这两个项目。 + +#### 使用 Rubex 和 Oniguruma 优化正则表达式速度 + +[Rubex][12] 是 go 的正则表达式标准库包的一个准替代品。之所以说它是准替代品,是因为它没有在 `regexp.Regexp` 类中实现 `LiteralPrefix` 方法,直到现在都还没有。 + +Rubex 的高性能是由于使用 [cgo][14] 调用了 [Oniguruma][13],它是一个高度优化的 C 代码库。 + +#### 使用 Pilosa 索引优化查询速度 + +索引几乎是每个关系型数据库都拥有的特性,但 Vitess 由于不需要用到索引,因此并没有进行实现。 + +于是我们引入了 [Pilosa][15] 这个开源项目。Pilosa 是一个使用 go 实现的分布式位图索引,可以显著提升跨多个大型数据集的查询的速度。通过 Pilosa,gitbase 才得以在巨大的数据集中进行查询。 + +### 总结 + +我想用这一篇文章来对开源社区表达我衷心的感谢,让我们能够不负众望的在短时间内完成 gitbase 的开发。我们 source{d} 的每一位成员都是开源的拥护者,github.com/src-d 下的每一行代码都是见证。 + +你想使用 gitbase 吗?最简单快捷的方式是从 sourced.tech/engine 下载 source{d} 引擎,就可以通过单个命令运行 gitbase 了。 + +想要了解更多,可以听听我在 [Go SF 大会][16]上的演讲录音。 + +本文在 [Medium][17] 首发,并经许可在此发布。 + 
+-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/gitbase + +作者:[Francesc Campoy][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/francesc +[b]: https://github.com/lujun9972 +[1]: https://github.com/src-d/gitbase +[2]: https://github.com/src-d/gitbase-web +[3]: https://github.com/vitessio/vitess +[4]: https://github.com/src-d/go-mysql-server +[5]: http://justforfunc.com/ +[6]: https://youtu.be/bcRDXAraprk +[7]: https://github.com/src-d/go-git +[8]: https://github.com/src-d/siva +[9]: https://github.com/src-d/enry +[10]: https://github.com/bblfsh/bblfshd +[11]: https://blog.sourced.tech/post/announcing-pga/ +[12]: https://github.com/moovweb/rubex +[13]: https://github.com/kkos/oniguruma +[14]: https://golang.org/cmd/cgo/ +[15]: https://github.com/pilosa/pilosa +[16]: https://www.meetup.com/golangsf/events/251690574/ +[17]: https://medium.com/sourcedtech/gitbase-exploring-git-repos-with-sql-95ec0986386c + diff --git a/published/201811/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md b/published/201811/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md new file mode 100644 index 0000000000..4d7112d397 --- /dev/null +++ b/published/201811/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md @@ -0,0 +1,186 @@ +在 Linux 中如何查找一个命令或进程的执行时间 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/time-command-720x340.png) + +在类 Unix 系统中,你可能知道一个命令或进程开始执行的时间,以及[一个进程运行了多久][1]。 但是,你如何知道这个命令或进程何时结束或者它完成运行所花费的总时长呢? 在类 Unix 系统中,这是非常容易的! 有一个专门为此设计的程序名叫 **GNU time**。 使用 `time` 程序,我们可以轻松地测量 Linux 操作系统中命令或程序的总执行时间。 `time` 命令在大多数 Linux 发行版中都有预装,所以你不必去安装它。 + +### 在 Linux 中查找一个命令或进程的执行时间 + +要测量一个命令或程序的执行时间,运行: + +``` +$ /usr/bin/time -p ls +``` + +或者, + +``` +$ time ls +``` + +输出样例: + +``` +dir1 dir2 file1 file2 mcelog + +real 0m0.007s +user 0m0.001s +sys 0m0.004s +``` + +``` +$ time ls -a +. .bash_logout dir1 file2 mcelog .sudo_as_admin_successful +.. .bashrc dir2 .gnupg .profile .wget-hsts +.bash_history .cache file1 .local .stack + +real 0m0.008s +user 0m0.001s +sys 0m0.005s +``` + +以上命令显示出了 `ls` 命令的总执行时间。 你可以将 `ls` 替换为任何命令或进程,以查找总的执行时间。 + +输出详解: + + 1. `real` —— 指的是命令或程序所花费的总时间 + 2. `user` —— 指的是在用户模式下程序所花费的时间 + 3. `sys` —— 指的是在内核模式下程序所花费的时间 + + + +我们也可以将命令限制为仅运行一段时间。参考如下教程了解更多细节: + +- [在 Linux 中如何让一个命令运行特定的时长](https://www.ostechnix.com/run-command-specific-time-linux/) + +### time 与 /usr/bin/time + +你可能注意到了, 我们在上面的例子中使用了两个命令 `time` 和 `/usr/bin/time` 。 所以,你可能会想知道他们的不同。 + +首先, 让我们使用 `type` 命令看看 `time` 命令到底是什么。对于那些我们不了解的 Linux 命令,`type` 命令用于查找相关命令的信息。 更多详细信息,[请参阅本指南][2]。 + +``` +$ type -a time +time is a shell keyword +time is /usr/bin/time +``` + +正如你在上面的输出中看到的一样,`time` 是两个东西: + + * 一个是 BASH shell 中内建的关键字 + * 一个是可执行文件,如 `/usr/bin/time` + +由于 shell 关键字的优先级高于可执行文件,当你没有给出完整路径只运行 `time` 命令时,你运行的是 shell 内建的命令。 但是,当你运行 `/usr/bin/time` 时,你运行的是真正的 **GNU time** 命令。 因此,为了执行真正的命令你可能需要给出完整路径。 + +在大多数 shell 中如 BASH、ZSH、CSH、KSH、TCSH 等,内建的关键字 `time` 是可用的。 `time` 关键字的选项少于该可执行文件,你可以使用的唯一选项是 `-p`。 + +你现在知道了如何使用 `time` 命令查找给定命令或进程的总执行时间。 想进一步了解 GNU time 工具吗? 继续阅读吧! + +### 关于 GNU time 程序的简要介绍 + +GNU time 程序运行带有给定参数的命令或程序,并在命令完成后将系统资源使用情况汇总到标准输出。 与 `time` 关键字不同,GNU time 程序不仅显示命令或进程的执行时间,还显示内存、I/O 和 IPC 调用等其他资源。 + +`time` 命令的语法是: + +``` +/usr/bin/time [options] command [arguments...] 
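+# 一个假设性的示意(-v 选项的含义见下文的选项列表):
+#   /usr/bin/time -v sleep 1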
+```
+
+上述语法中的 `options` 是指一组可以与 `time` 命令一起使用、以执行特定功能的选项。下面给出了可用的选项:
+
+ * `-f`、`--format` —— 使用此选项可以根据需求指定输出格式。
+ * `-p`、`--portability` —— 使用简要的输出格式。
+ * `-o file`、`--output=FILE` —— 将输出写到指定文件中而不是标准输出。
+ * `-a`、`--append` —— 将输出追加到文件中而不是覆盖它。
+ * `-v`、`--verbose` —— 此选项显示 `time` 命令输出的详细信息。
+ * `--quiet` —— 此选项可以防止 `time` 命令报告程序的状态。
+
+当不带任何选项使用 GNU time 命令时,你将看到以下输出。
+
+```
+$ /usr/bin/time wc /etc/hosts
+9 28 273 /etc/hosts
+0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2024maxresident)k
+0inputs+0outputs (0major+73minor)pagefaults 0swaps
+```
+
+如果你用 shell 关键字 `time` 运行相同的命令,输出会有一点儿不同:
+
+```
+$ time wc /etc/hosts
+9 28 273 /etc/hosts
+
+real 0m0.006s
+user 0m0.001s
+sys 0m0.004s
+```
+
+有时,你可能希望将系统资源使用情况输出到文件中而不是终端上。为此,你可以使用 `-o` 选项,如下所示。
+
+```
+$ /usr/bin/time -o file.txt ls
+dir1 dir2 file1 file2 file.txt mcelog
+```
+
+正如你看到的,`time` 命令的结果没有显示在终端上,因为我们把输出写到了 `file.txt` 文件中。让我们看一下这个文件的内容:
+
+```
+$ cat file.txt
+0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2512maxresident)k
+0inputs+0outputs (0major+106minor)pagefaults 0swaps
+```
+
+当你使用 `-o` 选项时,如果没有名为 `file.txt` 的文件,它会创建一个并把输出写进去。如果文件存在,它会覆盖文件原来的内容。
+
+你可以把 `-a` 选项和 `-o` 选项一起使用,将输出追加到文件后面,而不是覆盖它的内容。
+
+```
+$ /usr/bin/time -a -o file.txt ls
+```
+
+`-f` 选项允许用户根据自己的喜好控制输出格式。比如说,以下命令的输出仅显示用户时间、系统时间和总耗时。
+
+```
+$ /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" ls
+dir1 dir2 file1 file2 mcelog
+0:00.00 real, 0.00 user, 0.00 sys
+```
+
+请注意 shell 中内建的 `time` 命令并不具有 GNU time 程序的所有功能。
+
+有关 GNU time 程序的详细说明可以使用 `man` 命令来查看。
+
+```
+$ man time
+```
+
+想要了解有关 Bash 内建 `time` 关键字的更多信息,请运行:
+
+```
+$ help time
+```
+
+就到这里吧。希望对你有所帮助。
+
+会有更多好东西分享哦。请关注我们!
+
+加油哦!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/how-to-find-the-execution-time-of-a-command-or-process-in-linux/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[caixiangyue](https://github.com/caixiangyue)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/find-long-process-running-linux/
+[2]: https://www.ostechnix.com/the-type-command-tutorial-with-examples-for-beginners/
diff --git a/published/201811/20181108 Choosing a printer for Linux.md b/published/201811/20181108 Choosing a printer for Linux.md
new file mode 100644
index 0000000000..0d13ffd990
--- /dev/null
+++ b/published/201811/20181108 Choosing a printer for Linux.md
@@ -0,0 +1,79 @@
+为 Linux 选择打印机
+======
+
+> Linux 为打印机提供了广泛的支持。学习如何利用它。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
+
+我们在传闻已久的无纸化社会方面取得了重大进展,但我们仍需要不时打印文件。如果你是 Linux 用户,并有一台没有 Linux 安装盘的打印机,或者你正准备在市场上购买新设备,那么你很幸运。因为大多数 Linux 发行版(以及 MacOS)都使用通用 Unix 打印系统([CUPS][1]),它包含了当今大多数打印机的驱动程序。这意味着 Linux 为打印机提供了比 Windows 更广泛的支持。
+
+### 选择打印机
+
+如果你需要购买新打印机,了解它是否支持 Linux 的最佳方法是查看包装盒或制造商网站上的文档。你也可以搜索 [Open Printing][2] 数据库。它是检查各种打印机与 Linux 兼容性的绝佳资源。
+
+以下是与 Linux 兼容的佳能打印机的一些 Open Printing 结果。
+
+![](https://opensource.com/sites/default/files/uploads/linux-printer_2-openprinting.png)
+
+下面的截图是 Open Printing 的 Hewlett-Packard LaserJet 4050 的结果 —— 根据数据库,它应该可以“完美”工作。这里列出了建议驱动以及通用说明,让我了解它适用于 CUPS、行式打印守护程序(LPD)、LPRng 等。
+
+![](https://opensource.com/sites/default/files/uploads/linux-printer_3-hplaserjet.png)
+
+在任何情况下,最好在购买打印机之前检查制造商的网站并询问其他 Linux 用户。
+
+### 检查你的连接
+
+有几种方法可以将打印机连接到计算机。如果你的打印机是通过 USB 连接的,那么可以在 Bash 提示符下输入 
`lsusb` 来轻松检查连接。 + +``` +$ lsusb +``` + +该命令返回 “Bus 002 Device 004: ID 03f0:ad2a Hewlett-Packard” —— 这没有太多价值,但可以得知打印机已连接。我可以通过输入以下命令获得有关打印机的更多信息: + +``` +$ dmesg | grep -i usb +``` + +结果更加详细。 + +![](https://opensource.com/sites/default/files/uploads/linux-printer_1-dmesg.png) + +如果你尝试将打印机连接到并口(假设你的计算机有并口 —— 如今很少见),你可以使用此命令检查连接: + +``` +$ dmesg | grep -i parport +``` + +返回的信息可以帮助我为我的打印机选择正确的驱动程序。我发现,如果我坚持使用流行的名牌打印机,大部分时间我都能获得良好的效果。 + +### 设置你的打印机软件 + +Fedora Linux 和 Ubuntu Linux 都包含简单的打印机设置工具。[Fedora][3] 为打印问题的答案维护了一个出色的 wiki。可以在 GUI 中的设置轻松启动这些工具,也可以在命令行上调用 `system-config-printer`。 + +![](https://opensource.com/sites/default/files/uploads/linux-printer_4-printersetup.png) + +HP 支持 Linux 打印的 [HP Linux 成像和打印][4] (HPLIP) 软件可能已安装在你的 Linux 系统上。如果没有,你可以为你的发行版[下载][5]最新版本。打印机制造商 [Epson][6] 和 [Brother][7] 也有带有 Linux 打印机驱动程序和信息的网页。 + +你最喜欢的 Linux 打印机是什么?请在评论中分享你的意见。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/choosing-printer-linux + +作者:[Don Watkins][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/don-watkins +[b]: https://github.com/lujun9972 +[1]: https://www.cups.org/ +[2]: http://www.openprinting.org/printers +[3]: https://fedoraproject.org/wiki/Printing +[4]: https://developers.hp.com/hp-linux-imaging-and-printing +[5]: https://developers.hp.com/hp-linux-imaging-and-printing/gethplip +[6]: https://epson.com/Support/wa00821 +[7]: https://support.brother.com/g/s/id/linux/en/index.html?c=us_ot&lang=en&comple=on&redirect=on diff --git a/published/201811/20181108 The Difference Between more, less And most Commands.md b/published/201811/20181108 The Difference Between more, less And most Commands.md new file mode 100644 index 0000000000..14e1fc87fd --- /dev/null +++ b/published/201811/20181108 The Difference Between more, less And most Commands.md @@ -0,0 +1,221 @@ +more、less 和 most 的区别 +====== +![](https://www.ostechnix.com/wp-content/uploads/2018/11/more-less-and-most-commands-720x340.png) + +如果你是一个 Linux 方面的新手,你可能会在 `more`、`less`、`most` 这三个命令行工具之间产生疑惑。在本文当中,我会对这三个命令行工具进行对比,以及展示它们各自在 Linux 中的一些使用例子。总的来说,这几个命令行工具之间都有相通和差异,而且它们在大部分 Linux 发行版上都有自带。 + +我们首先来看看 `more` 命令。 + +### more 命令 + +`more` 是一个老式的、基础的终端分页阅读器,它可以用于打开指定的文件并进行交互式阅读。如果文件的内容太长,在一屏以内无法完整显示,就会逐页显示文件内容。使用回车键或者空格键可以滚动浏览文件的内容,但有一个限制,就是只能够单向滚动。也就是说只能按顺序往下翻页,而不能进行回看。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/more-command-demo.gif) + +**更正** + +有的 Linux 用户向我指出,在 `more` 当中是可以向上翻页的。不过,最原始版本的 `more` 确实只允许向下翻页,在后续出现的较新的版本中也允许了有限次数的向上翻页,只需要在浏览过程中按 `b` 键即可向上翻页。唯一的限制是 `more` 不能搭配管道使用(如 `ls | more`)。(LCTT 译注:此处原作者疑似有误,译者使用 `more` 是可以搭配管道使用的,或许与不同 `more` 版本有关) + +按 `q` 即可退出 `more`。 + +**更多示例** + +打开 `ostechnix.txt` 文件进行交互式阅读,可以执行以下命令: + +``` +$ more ostechnix.txt +``` + +在阅读过程中,如果需要查找某个字符串,只需要像下面这样输入斜杠(`/`)之后接着输入需要查找的内容: + +``` +/linux +``` + +按 `n` 键可以跳转到下一个匹配的字符串。 + +如果需要在文件的第 `10` 行开始阅读,只需要执行: + +``` +$ more +10 file +``` + +就可以从文件的第 `10` 行开始显示文件的内容了。 + +如果你需要让 `more` 提示你按空格键来翻页,可以加上 `-d` 参数: + +``` +$ more -d ostechnix.txt +``` + +![][2] + +如上图所示,`more` 会提示你可以按空格键翻页。 + +如果需要查看所有选项以及对应的按键,可以按 `h` 键。 + +要查看 `more` 的更多详细信息,可以参考手册: + +``` +$ man more +``` + +### less 命令 + +`less` 命令也是用于打开指定的文件并进行交互式阅读,它也支持翻页和搜索。如果文件的内容太长,也会对输出进行分页,因此也可以翻页阅读。比 `more` 命令更好的一点是,`less` 支持向上翻页和向下翻页,也就是可以在整个文件中任意阅读。 + +![][4] + +在使用功能方面,`less` 比 `more` 命令具有更多优点,以下列出其中几个: + + * 支持向上翻页和向下翻页 + * 
支持向上搜索和向下搜索 + * 可以跳转到文件的末尾并立即从文件的开头开始阅读 + * 在编辑器中打开指定的文件 + +**更多示例** + +打开文件: + +``` +$ less ostechnix.txt +``` + +按空格键或回车键可以向下翻页,按 `b` 键可以向上翻页。 + +如果需要向下搜索,在输入斜杠(`/`)之后接着输入需要搜索的内容: + +``` +/linux +``` + +按 `n` 键可以跳转到下一个匹配的字符串,如果需要跳转到上一个匹配的字符串,可以按 `N` 键。 + +如果需要向上搜索,在输入问号(`?`)之后接着输入需要搜索的内容: + +``` +?linux +``` + +同样是按 `n` 键或 `N` 键跳转到下一个或上一个匹配的字符串。 + +只需要按 `v` 键,就会将正在阅读的文件在默认编辑器中打开,然后就可以对文件进行各种编辑操作了。 + +按 `h` 键可以查看 `less` 工具的选项和对应的按键。 + +按 `q` 键可以退出阅读。 + +要查看 `less` 的更多详细信息,可以参考手册: + +``` +$ man less +``` + +### most 命令 + +`most` 同样是一个终端阅读工具,而且比 `more` 和 `less` 的功能更为丰富。`most` 支持同时打开多个文件。你可以在打开的文件之间切换、编辑当前打开的文件、迅速跳转到文件中的某一行、分屏阅读、同时锁定或滚动多个屏幕等等功能。在默认情况下,对于较长的行,`most` 不会将其截断成多行显示,而是提供了左右滚动功能以在同一行内显示。 + +**更多示例** + +打开文件: + +``` +$ most ostechnix1.txt +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/most-command.png) + +按 `e` 键可以编辑当前文件。 + +如果需要向下搜索,在斜杠(`/`)或 `S` 或 `f` 之后输入需要搜索的内容,按 `n` 键就可以跳转到下一个匹配的字符串。 + +![][3] + +如果需要向上搜索,在问号(`?`)之后输入需要搜索的内容,也是通过按 `n` 键跳转到下一个匹配的字符串。 + +同时打开多个文件: + +``` +$ most ostechnix1.txt ostechnix2.txt ostechnix3.txt +``` + +在打开了多个文件的状态下,可以输入 `:n` 切换到下一个文件,使用 `↑` 或 `↓` 键选择需要切换到的文件,按回车键就可以查看对应的文件。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/most-2.gif) + +要打开文件并跳转到某个字符串首次出现的位置(例如 linux),可以执行以下命令: + +``` +$ most file +/linux +``` + +按 `h` 键可以查看帮助。 + +**按键操作列表** + +移动: + + * 空格键或 `D` 键 – 向下滚动一屏 + * `DELETE` 键或 `U` 键 – 向上滚动一屏 + * `↓` 键 – 向下移动一行 + * `↑` 键 – 向上移动一行 + * `T` 键 – 移动到文件开头 + * `B` 键 – 移动到文件末尾 + * `>` 键或 `TAB` 键 – 向右滚动屏幕 + * `<` 键 – 向左滚动屏幕 + * `→` 键 – 向右移动一列 + * `←` 键 – 向左移动一列 + * `J` 键或 `G` 键 – 移动到某一行,例如 `10j` 可以移动到第 10 行 + * `%` 键 – 移动到文件长度某个百分比的位置 + +窗口命令: + + * `Ctrl-X 2`、`Ctrl-W 2` – 分屏 + * `Ctrl-X 1`、`Ctrl-W 1` – 只显示一个窗口 + * `O` 键、`Ctrl-X O` – 切换到另一个窗口 + * `Ctrl-X 0` – 删除窗口 + +文件内搜索: + + * `S` 键或 `f` 键或 `/` 键 – 向下搜索 + * `?` 键 – 向上搜索 + * `n` 键 – 跳转到下一个匹配的字符串 + +退出: + + * `q` 键 – 退出 `most` ,且所有打开的文件都会被关闭 + * `:N`、`:n` – 退出当前文件并查看下一个文件(使用 `↑` 键、`↓` 键选择下一个文件) + +要查看 `most` 的更多详细信息,可以参考手册: + +``` +$ man most +``` + +### 总结 + +`more` – 传统且基础的分页阅读工具,仅支持向下翻页和有限次数的向上翻页。 + +`less` – 比 `more` 功能丰富,支持向下翻页和向上翻页,也支持文本搜索。在打开大文件的时候,比 `vi` 这类文本编辑器启动得更快。 + +`most` – 在上述两个工具功能的基础上,还加入了同时打开多个文件、同时锁定或滚动多个屏幕、分屏等等大量功能。 + +以上就是我的介绍,希望能让你通过我的文章对这三个工具有一定的认识。如果想了解这篇文章以外的关于这几个工具的详细功能,请参阅它们的 `man` 手册。 + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/the-difference-between-more-less-and-most-commands/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: http://www.ostechnix.com/wp-content/uploads/2018/11/more-1.png +[3]: http://www.ostechnix.com/wp-content/uploads/2018/11/most-1-1.gif +[4]: https://www.ostechnix.com/wp-content/uploads/2018/11/less-command-demo.gif diff --git a/published/201811/20181109 7 reasons I love open source.md b/published/201811/20181109 7 reasons I love open source.md new file mode 100644 index 0000000000..f45dfa2e86 --- /dev/null +++ b/published/201811/20181109 7 reasons I love open source.md @@ -0,0 +1,41 @@ +我爱开源的 7 个理由 +====== + +> 成为开源社区的一员绝对是一个明智之举,原因有很多。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BUSINESS_lovework.png?itok=gmj9tqiG) + +这就是我为什么包括晚上和周末在内花费非常多的时间待在 [GitHub][1] 上,成为开源社区的一个活跃成员。 + 
+我参加过各种规模的项目,从个人项目到几个人的协作项目,乃至有数百位贡献者的项目,每一个项目都让我有新的收获。
+
+![](https://opensource.com/sites/default/files/uploads/open_source_contributions.gif)
+
+也就是说,这里有七个原因让我为开源做出贡献:
+
+ * **它让我的技能与时俱进。** 在咨询公司的管理职位工作,有时我觉得自己与创建软件的实际过程越来越远。参与开源项目使我可以重新回到我最热爱的编程之中,也使我能够体验新技术、学习新技术和语言,并且不被酷酷的孩子们落下。
+ * **它教我如何与人打交道。** 与一群素未谋面的人合作开源项目,在与人交往方面能够教会你很多。你很快会发现每个人有自己的压力、自己的义务,以及不同的时间表。学习如何与一群陌生人合作是一种很好的生活技能。
+ * **它使我成为一个更好的沟通者。** 开源项目的维护者的时间有限。你很快就知道,要成功地做出贡献,你必须能够清楚、简明地表达你所做的改变、添加或修复,最重要的是,你为什么要这么做。
+ * **它使我成为一个更好的开发者。** 没有什么能像成百上千的其他开发者依赖你的代码一样,敦促你更加专注于软件设计、测试和文档。
+ * **它使我的作品变得更好。** 可能开源背后最强大的观念是它允许你驾驭一个由有创造力、有智慧、有知识的个人组成的全球网络。我知道我自己一个人的能力是有限的,我不可能什么都知道,但与开源社区的合作有助于我改进我的创作。
+ * **它告诉我小事物的价值。** 如果一个项目的文档不清楚或不完整,我会毫不犹豫地把它做得更好。一个小小的更新或修复可能只节省开发人员几分钟的时间,但是随着用户数量的增加,你的一个小小的更改可能产生巨大的价值。
+ * **它使我更善于营销。** 好吧,这是一个奇怪的理由。开源世界里有这么多伟大的项目,感觉像一场争夺关注的拼搏。参与开源让我学到了很多营销的价值:重要的不是讲故事或做一个华丽的网站,而是如何清楚地传达你创造了什么、它怎么用,以及它能带来什么好处。
+
+我可以继续讨论开源是如何帮助你发展伙伴、关系和朋友的,不过你应该都明白了。有许多原因让我乐于成为开源社区的一员。
+
+你可能想知道这些如何用于大型金融服务机构的 IT 战略。简单来说:谁不想要一个擅长与人交流和工作、具有尖端的技能,并且能够推销自己成果的开发团队呢?
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/11/reasons-love-open-source
+
+作者:[Colin Eberhardt][a]
+选题:[lujun9972][b]
+译者:[ChiZelin](https://github.com/ChiZelin)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/colineberhardt
+[b]: https://github.com/lujun9972
+[1]: https://github.com/ColinEberhardt/
diff --git a/published/201811/20181113 4 tips for learning Golang.md b/published/201811/20181113 4 tips for learning Golang.md
new file mode 100644
index 0000000000..ed80a40ded
--- /dev/null
+++ b/published/201811/20181113 4 tips for learning Golang.md
@@ -0,0 +1,80 @@
+学习 Golang 的 4 个技巧
+======
+
+> 到达 Golang 大陆:一位资深开发者之旅。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_laptop_code_programming_mountain_view.jpg?itok=yx5buqkr)
+
+2014 年夏天……
+
+> IBM:“我们需要你弄清楚这个 Docker。”
+
+> 我:“没问题。”
+
+> IBM:“那就开始吧。”
+
+> 我:“好的。”(内心声音):“Docker 是用 Go 编写的。是吗?”(Google 一下)“哦,一门编程语言。我在我的岗位上已经学过很多编程语言了。这应该不会太难。”
+
+我的大学新生编程课是使用 VAX 汇编程序教授的。在数据结构课上,我们使用 Pascal —— 在图书馆计算机中心的旧电脑上使用软盘加载。在一门更高一级的课程中,我的教授喜欢用 ADA 去展示所有的例子。在我们的 Sun 工作站上,我通过各种 UNIX 实用工具的源代码学到了一点 C。在 IBM,OS/2 源代码中我们使用了 C 和一些 x86 汇编程序;在一个与 Apple 合作的项目中我们大量使用 C++ 的面向对象功能。不久后我学到了 shell 脚本,开始是 csh,但是在 90 年代中期发现 Linux 后就转到了 Bash。在 90 年代后期,我在将 IBM 定制的 JVM 代码中的即时(JIT)编译器移植到 Linux 时,不得不开始学习 m4(与其说是编程语言,不如说是一种宏处理器)。
+
+一晃 20 年……我从未因为学习一门新的编程语言而焦灼。但是 [Go][1] 让我感觉有些不同。我打算公开贡献,上传到 GitHub,让任何有兴趣的人都可以看到!作为一个 40 多岁的资深开发者,同时又是 Go 新手,我不想成为一个笑话。我们都知道程序员的骄傲:不论你的经验水平如何,都不想丢人。
+
+我早期的调研显示,Go 似乎比某些语言更 “地道”。它不仅仅是让代码可以编译;也需要让代码可以 “Go Go Go”。
+
+现在,四年的个人 Go 之旅已经积累了几百个拉取请求(PR),我不敢自称专家,但是现在我觉得贡献和编写代码比我在 2014 年的时候舒服多了。所以,你该怎么教一个老人新的技能或者一门编程语言呢?以下是我自己在前往 Golang 大陆之旅的四个步骤。
+
+### 1、不要跳过基础
+
+虽然你可以通过复制代码来进行你早期的学习(谁还有时间阅读手册!?),Go 有一个非常易读的[语言规范][2],它写得很易于理解,即便你在语言或者编译理论方面没有取得硕士学位。鉴于 Go 的 **参数:类型** 顺序的特有习惯,以及一些有趣的语言功能,例如通道和 go 协程,搞定这些新概念是非常重要的事情。阅读这个附属的文档 [高效 Go 编程][3],这是 Golang 创造者提供的另一个重要资源,它将为你提供有效和正确使用语言的准备。
+
+### 2、从最好的中学习
+
+有许多宝贵的资源可供挖掘,可以将你的 Go 知识提升到下一个等级。最近在 [GopherCon][4] 上的所有讲演都可以在网上找到,如这个 [GopherCon US 2018][5] 的详尽列表。这些讲演的专业知识和技术水平各不相同,但是你可以通过它们轻松地找到一些你所不了解的事情。[Francesc Campoy][6] 创建了一个名叫 [JustForFunc][7] 的 Go 编程视频系列,其不断增多的剧集可以用来拓宽你的 Go 知识和理解。直接搜索 “Golang” 可以为那些想要了解更多信息的人们展示许多其它视频和在线资源。
+
+想要看代码?在 GitHub 上许多受欢迎的云原生项目都是用 Go 
写的:[Docker/Moby][8]、[Kubernetes][9]、[Istio][10]、[containerd][11]、[CoreDNS][12],以及许多其它的。语言纯粹主义者可能会认为一些项目比另外一些更地道,但这些都是很好的起点,可以看到在高度活跃的项目的大型代码库中使用 Go 的程度。 + +### 3、使用优秀的语言工具 + +你会很快了解到 [gofmt][13] 的宝贵之处。Go 最漂亮的一个地方就在于没有关于每个项目代码格式的争论 —— **gofmt** 内置在语言的运行环境中,并且根据一系列可靠的、易于理解的语言规则对 Go 代码进行格式化。我不知道有哪个基于 Golang 的项目会在持续集成中不坚持使用 **gofmt** 检查拉取请求。 + +除了直接构建于运行环境和 SDK 中的一系列有价值的工具之外,我强烈建议使用一个对 Golang 的特性有良好支持的编辑器或者 IDE。由于我经常在命令行中进行工作,我依赖于 Vim 加上强大的 [vim-go][14] 插件。我也喜欢微软提供的 [VS Code][15],特别是它的 [Go 语言][16] 插件。 + +想要一个调试器?[Delve][17] 项目在不断的改进和成熟,它是在 Go 二进制文件上进行 [gdb][18] 式调试的强有力的竞争者。 + +### 4、写一些代码 + +你要是不开始尝试使用 Go 写代码,你永远不知道它有什么好的地方。找一个有 “需要帮助” 问题标签的项目,然后开始贡献代码。如果你已经使用了一个用 Go 编写的开源项目,找出它是否有一些可以用初学者方式解决的 Bug,然后开始你的第一个拉取请求。与生活中的大多数事情一样,实践出真知,所以开始吧。 + +事实证明,你可以教会一个资深的老开发者一门新的技能甚至编程语言。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/learning-golang + +作者:[Phill Estes][a] +选题:[lujun9972][b] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/estesp +[b]: https://github.com/lujun9972 +[1]: https://golang.org/ +[2]: https://golang.org/ref/spec +[3]: https://golang.org/doc/effective_go.html +[4]: https://www.gophercon.com/ +[5]: https://tqdev.com/2018-gophercon-2018-videos-online +[6]: https://twitter.com/francesc +[7]: https://www.youtube.com/channel/UC_BzFbxG2za3bp5NRRRXJSw +[8]: https://github.com/moby/moby +[9]: https://github.com/kubernetes/kubernetes +[10]: https://github.com/istio/istio +[11]: https://github.com/containerd/containerd +[12]: https://github.com/coredns/coredns +[13]: https://blog.golang.org/go-fmt-your-code +[14]: https://github.com/fatih/vim-go +[15]: https://code.visualstudio.com/ +[16]: https://code.visualstudio.com/docs/languages/go +[17]: https://github.com/derekparker/delve +[18]: https://www.gnu.org/software/gdb/ diff --git a/published/201811/20181113 The alias And unalias Commands Explained With Examples.md b/published/201811/20181113 The alias And unalias Commands Explained With Examples.md new file mode 100644 index 0000000000..1448918a1e --- /dev/null +++ b/published/201811/20181113 The alias And unalias Commands Explained With Examples.md @@ -0,0 +1,156 @@ +举例说明 alias 和 unalias 命令 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/alias-command-720x340.png) + +如果不是一个命令行重度用户的话,过了一段时间之后,你就可能已经忘记了这些复杂且冗长的 Linux 命令了。当然,有很多方法可以让你 [回想起遗忘的命令][1]。你可以简单的 [保存常用的命令][2] 然后按需使用。也可以在终端里 [标记重要的命令][3],然后在任何时候你想要的时间使用它们。而且,Linux 有一个内建命令 `history` 可以帮助你记忆这些命令。另外一个记住这些如此长的命令的简便方式就是为这些命令创建一个别名。你可以为任何经常重复调用的常用命令创建别名,而不仅仅是长命令。通过这种方法,你不必再过多地记忆这些命令。这篇文章中,我们将会在 Linux 环境下举例说明 `alias` 和 `unalias` 命令。 + +### alias 命令 + +`alias` 使用一个用户自定义的字符串来代替一个或者一串命令(包括多个选项、参数)。这个字符串可以是一个简单的名字或者缩写,不管这个命令原来多么复杂。`alias` 命令已经预装在 shell(包括 BASH、Csh、Ksh 和 Zsh 等) 当中。 + +`alias` 的通用语法是: + +``` +alias [alias-name[=string]...] 
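+# 一个假设性的示意(别名 ll 是随意取的):
+#   alias ll='ls -alF'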
+``` + +接下来看几个例子。 + +#### 列出别名 + +可能在你的系统中已经设置了一些别名。有些应用在你安装它们的时候可能已经自动创建了别名。要查看已经存在的别名,运行: + +``` +$ alias +``` + +或者, + +``` +$ alias -p +``` + +在我的 Arch Linux 系统中已经设置了下面这些别名。 + +``` +alias betty='/home/sk/betty/main.rb' +alias ls='ls --color=auto' +alias pbcopy='xclip -selection clipboard' +alias pbpaste='xclip -selection clipboard -o' +alias update='newsbeuter -r && sudo pacman -Syu' +``` + +#### 创建一个新的别名 + +像我之前说的,你不必去记忆这些又臭又长的命令。你甚至不必一遍一遍的运行长命令。只需要为这些命令创建一个简单易懂的别名,然后在任何你想使用的时候运行这些别名就可以了。这种方式会让你爱上命令行。 + +``` +$ du -h --max-depth=1 | sort -hr +``` + +这个命令将会查找当前工作目录下的各个子目录占用的磁盘大小,并按照从大到小的顺序进行排序。这个命令有点长。我们可以像下面这样轻易地为其创建一个 别名: + +``` +$ alias du='du -h --max-depth=1 | sort -hr' +``` + +这里的 `du` 就是这条命令的别名。这个别名可以被设置为任何名字,主要便于记忆和区别。 + +在创建一个别名的时候,使用单引号或者双引号都是可以的。这两种方法最后的结果没有任何区别。 + +现在你可以运行这个别名(例如我们这个例子中的 `du` )。它和上面的原命令将会产生相同的结果。 + +这个别名仅限于当前 shell 会话中。一旦你退出了当前 shell 会话,别名也就失效了。为了让这些别名长久有效,你需要把它们添加到你 shell 的配置文件当中。 + +BASH,编辑 `~/.bashrc` 文件: + +``` +$ nano ~/.bashrc +``` + +一行添加一个别名: + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/alias.png) + +保存并退出这个文件。然后运行以下命令更新修改: + +``` +$ source ~/.bashrc +``` + +现在,这些别名在所有会话中都可以永久使用了。 + +ZSH,你需要添加这些别名到 `~/.zshrc`文件中。Fish,跟上面的类似,添加这些别名到 `~/.config/fish/config.fish` 文件中。 + +#### 查看某个特定的命令别名 + +像我上面提到的,你可以使用 `alias` 命令列出你系统中所有的别名。如果你想查看跟给定的别名有关的命令,例如 `du`,只需要运行: + +``` +$ alias du +alias du='du -h --max-depth=1 | sort -hr' +``` + +像你看到的那样,上面的命令可以显示与单词 `du` 有关的命令。 + +关于 `alias` 命令更多的细节,参阅 man 手册页: + +``` +$ man alias +``` + +### unalias 命令 + +跟它的名字说的一样,`unalias` 命令可以很轻松地从你的系统当中移除别名。`unalias` 命令的通用语法是: + +``` +unalias +``` + +要移除命令的别名,像我们之前创建的 `du`,只需要运行: + +``` +$ unalias du +``` + +`unalias` 命令不仅会从当前会话中移除别名,也会从你的 shell 配置文件中永久地移除别名。 + +还有一种移除别名的方法,是创建具有相同名称的新别名。 + +要从当前会话中移除所有的别名,使用 `-a` 选项: + +``` +$ unalias -a +``` + +更多细节,参阅 man 手册页。 + +``` +$ man unalias +``` + +如果你经常一遍又一遍的运行这些繁杂又冗长的命令,给它们创建别名可以节省你的时间。现在是你为常用命令创建别名的时候了。 + +这就是所有的内容了。希望可以帮到你。还有更多的干货即将到来,敬请期待! + +祝近祺! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[dianbanjiu](https://github.com/dianbanjiu) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/easily-recall-forgotten-linux-commands/ +[2]: https://www.ostechnix.com/save-commands-terminal-use-demand/ +[3]: https://www.ostechnix.com/bookmark-linux-commands-easier-repeated-invocation/ diff --git a/published/201811/20181113 What you need to know about the GPL Cooperation Commitment.md b/published/201811/20181113 What you need to know about the GPL Cooperation Commitment.md new file mode 100644 index 0000000000..2218dfcd2c --- /dev/null +++ b/published/201811/20181113 What you need to know about the GPL Cooperation Commitment.md @@ -0,0 +1,55 @@ +GPL 合作承诺的发展历程 +====== + +> GPL 合作承诺GPL Cooperation Commitment消除了开发者对许可证失效的顾虑,从而达到促进技术创新的目的。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Law_balance_open_source.png?itok=5c4JhuEY) + +假如能免于顾虑,技术创新和发展将会让世界发生天翻地覆的改变。[GPL 合作承诺][1]GPL Cooperation Commitment就这样应运而生,只为通过公平、一致、可预测的许可证来让科技创新无后顾之忧。 + +去年,我曾经写过一篇文章,讨论了许可证对开源软件下游用户的影响。在进行研究的时候,我就发现许可证的约束力并不强,而且很多情况下是不可预测的。因此,我在文章中提出了一个能使开源许可证具有一致性和可预测性的潜在解决方案。但我只考虑到了诸如通过法律系统立法的“传统”方法。 + +2017 年 11 月,RedHat、IBM、Google 和 Facebook 提出了这种我从未考虑过的非传统的解决方案:GPL 合作承诺。GPL 合作承诺规定了 GPL 公平一致执行的方式。我认为,GPL 合作承诺之所以有这么深刻的意义,有以下两个原因:一是许可证的公平性和一致性对于开源社区的发展来说至关重要,二是法律对不可预测性并不容忍。 + +### 了解 GPL + +要了解 GPL 合作承诺,首先要了解什么是 GPL。GPL 是 [GNU 通用许可证][2]GNU General Public License的缩写,它是一个公共版权的开源许可证,这就意味着开源软件的分发者必须向下游用户公开源代码。GPL 还禁止对下游的使用作出限制,要求个人用户不得拒绝他人对开源软件的使用自由、研究自由、共享自由和改进自由。GPL 规定,只要下游用户满足了许可证的要求和条件,就可以使用该许可证。如果被许可人出现了不符合许可证的情况,则视为违规。 + +按照第二版 GPL(GPLv2)的描述,许可证会在任何违规的情况下自动终止,这就导致了部分开发者对 GPL 有所抗拒。而在第三版 GPL(GPLv3)中则引入了“[治愈条款][3]cure provision”,这一条款规定,被许可人可以在 30 天内对违反 GPL 的行为进行改正,如果在这个缓冲期内改正完成,许可证就不会被终止。 + +这一规定消除了许可证被无故终止的顾虑,从而让软件的开发者和用户专注于开发和创新。 + +### GPL 合作承诺做了什么 + +GPL 合作承诺将 GPLv3 的治愈条款应用于使用 GPLv2 的软件上,让使用 GPLv2 许可证的开发者避免许可证无故终止的窘境,并与 GPLv3 许可证保持一致。 + +很多软件开发者都希望正确合规地做好一件事情,但有时候却不了解具体的实施细节。因此,GPL 合作承诺的重要性就在于能够对软件开发者们做出一些引导,让他们避免因一些简单的错误导致许可证违规终止。 + +Linux 基金会技术顾问委员会在 2017 年宣布,Linux 内核项目将会[采用 GPLv3 的治愈条款][4]。在 GPL 合作承诺的推动下,很多大型科技公司和个人开发者都做出了相同的承诺,会将该条款扩展应用于他们采用 GPLv2(或 LGPLv2.1)许可证的所有软件,而不仅仅是对 Linux 内核的贡献。 + +GPL 合作承诺的广泛采用将会对开源社区产生非常积极的影响。如果更多的公司和个人开始采用 GPL 合作承诺,就能让大量正在使用 GPLv2 或 LGPLv2.1 许可证的软件以更公平和更可预测的形式履行许可证中的条款。 + +截至 2018 年 11 月,包括 IBM、Google、亚马逊、微软、腾讯、英特尔、RedHat 在内的 40 余家行业巨头公司都已经[签署了 GPL 合作承诺][5],以期为开源社区创立公平的标准以及提供可预测的执行力。GPL 合作承诺是开源社区齐心协力引领开源未来发展方向的一个成功例子。 + +GPL 合作承诺能够让下游用户了解到开发者对他们的尊重,同时也表示了开发者使用了 GPLv2 许可证的代码是安全的。如果你想查阅更多信息,包括如何将自己的名字添加到 GPL 合作承诺中,可以访问 [GPL 合作承诺的网站][6]。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/gpl-cooperation-commitment + +作者:[Brooke Driver][a] +选题:[lujun9972][b] +译者:[HankChow](https://github.com/HankChow) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bdriver +[b]: https://github.com/lujun9972 +[1]: https://gplcc.github.io/gplcc/ +[2]: https://www.gnu.org/licenses/licenses.en.html +[3]: https://opensource.com/article/18/6/gplv3-anniversary +[4]: 
https://www.kernel.org/doc/html/v4.16/process/kernel-enforcement-statement.html +[5]: https://gplcc.github.io/gplcc/Company/Company-List.html +[6]: http://gplcc.github.io/gplcc + diff --git a/published/201811/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md b/published/201811/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md new file mode 100644 index 0000000000..99a92d917b --- /dev/null +++ b/published/201811/20181114 ProtectedText - A Free Encrypted Notepad To Save Your Notes Online.md @@ -0,0 +1,79 @@ +ProtectedText:一个免费的在线加密笔记 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/protected-text-720x340.png) + +记录笔记是我们每个人必备的重要技能,它可以帮助我们把自己听到、读到、学到的内容长期地保留下来,也有很多的应用和工具都能让我们更好地记录笔记。下面我要介绍一个叫做 **ProtectedText** 的应用,这是一个可以将你的笔记在线上保存起来的免费的加密笔记。它是一个免费的 web 服务,在上面记录文本以后,它将会对文本进行加密,只需要一台支持连接到互联网并且拥有 web 浏览器的设备,就可以访问到记录的内容。 + +ProtectedText 不会向你询问任何个人信息,也不会保存任何密码,没有广告,没有 Cookies,更没有用户跟踪和注册流程。除了拥有密码能够解密文本的人,任何人都无法查看到笔记的内容。而且,使用前不需要在网站上注册账号,写完笔记之后,直接关闭浏览器,你的笔记也就保存好了。 + +### 在加密笔记本上记录笔记 + +访问 这个链接,就可以打开 ProtectedText 页面了(LCTT 译注:如果访问不了,你知道的)。这个时候你将进入网站主页,接下来需要在页面上的输入框输入一个你想用的名称,或者在地址栏后面直接加上想用的名称。这个名称是一个自定义的名称(例如 ),是你查看自己保存的笔记的专有入口。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-1.png) + +如果你选用的名称还没有被占用,你就会看到下图中的提示信息。点击 “Create” 键就可以创建你的个人笔记页了。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-2.png) + +至此你已经创建好了你自己的笔记页面,可以开始记录笔记了。目前每个笔记页的最大容量是每页 750000+ 个字符。 + +ProtectedText 使用 AES 算法对你的笔记内容进行加密和解密,而计算散列则使用了 SHA512 算法。 + +笔记记录完毕以后,点击顶部的 “Save” 键保存。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-3.png) + +按下保存键之后,ProtectedText 会提示你输入密码以加密你的笔记内容。按照它的要求输入两次密码,然后点击 “Save” 键。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-4.png) + +尽管 ProtectedText 对你使用的密码没有太多要求,但毕竟密码总是一寸长一寸强,所以还是最好使用长且复杂的密码(用到数字和特殊字符)以避免暴力破解。由于 ProtectedText 不会保存你的密码,一旦密码丢失,密码和笔记内容就都找不回来了。因此,请牢记你的密码,或者使用诸如 [Buttercup][3]、[KeeWeb][4] 这样的密码管理器来存储你的密码。 + +在使用其它设备时,可以通过访问之前创建的 URL 就可以访问你的笔记了。届时会出现如下的提示信息,只需要输入正确的密码,就可以查看和编辑你的笔记。 + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-5.png) + +一般情况下,只有知道密码的人才能正常访问笔记的内容。如果你希望将自己的笔记公开,只需要以 的形式访问就可以了,ProtectedText 将会自动使用 `yourPassword` 字符串解密你的笔记。 + +ProtectedText 还有配套的 [Android 应用][6] 可以让你在移动设备上进行同步笔记、离线工作、备份笔记、锁定/解锁笔记等等操作。 + +**优点** + + * 简单、易用、快速、免费 + * ProtectedText.com 的客户端代码可以在[这里][7]免费获取,如果你想了解它的底层实现,可以自行学习它的源代码 + * 存储的内容没有到期时间,只要你愿意,笔记内容可以一直保存在服务器上 + * 可以让你的数据限制为私有或公开开放 + +**缺点** + + * 尽管客户端代码是公开的,但服务端代码并没有公开,因此你无法自行搭建一个类似的服务。如果你不信任这个网站,请不要使用。 + * 由于网站不存储你的任何个人信息,包括你的密码,因此如果你丢失了密码,数据将永远无法恢复。网站方还声称他们并不清楚谁拥有了哪些数据,所以一定要牢记密码。 + + +如果你想通过一种简单的方式将笔记保存到线上,并且需要在不需要安装任何工具的情况下访问,那么 ProtectedText 会是一个好的选择。如果你还知道其它类似的应用程序,欢迎在评论区留言! 
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/protectedtext-a-free-encrypted-notepad-to-save-your-notes-online/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[HankChow](https://github.com/HankChow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[2]: http://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-4.png
+[3]: https://www.ostechnix.com/buttercup-a-free-secure-and-cross-platform-password-manager/
+[4]: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/
+[5]: http://www.ostechnix.com/wp-content/uploads/2018/11/Protected-Text-5.png
+[6]: https://play.google.com/store/apps/details?id=com.protectedtext.android
+[7]: https://www.protectedtext.com/js/main.js
+
diff --git a/published/201811/20181115 How to install a device driver on Linux.md b/published/201811/20181115 How to install a device driver on Linux.md
new file mode 100644
index 0000000000..bd1c3fd353
--- /dev/null
+++ b/published/201811/20181115 How to install a device driver on Linux.md
@@ -0,0 +1,144 @@
+如何在 Linux 上安装设备驱动程序
+======
+
+> 学习 Linux 设备驱动如何工作,并知道如何使用它们。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/car-penguin-drive-linux-yellow.png?itok=twWGlYAc)
+
+对于一个熟悉 Windows 或者 MacOS、想要切换到 Linux 的人来说,他们都会面临一个艰巨的问题:怎么安装和配置设备驱动。这是可以理解的,因为 Windows 和 MacOS 都有一套机制把这个过程做得非常友好。比如说,当你插入一个新的硬件设备,Windows 能够自动检测并会弹出一个窗口询问你是否要继续驱动程序的安装。你也可以从网络上下载驱动程序,仅仅需要双击解压,或者是通过设备管理器导入驱动程序即可。
+
+而这在 Linux 操作系统上并非这么简单。第一个原因是,Linux 是一个开源的操作系统,所以有[数百种 Linux 发行版的变体][1]。也就是说不可能做一个指南来适应所有的 Linux 发行版,因为每种 Linux 发行版安装驱动程序的过程都有差异。
+
+第二,大多数默认的 Linux 驱动程序也都是开源的,并被集成到了系统中,这使得安装一些并未包含的驱动程序变得非常复杂,即使系统已经可以检测到大多数的硬件设备。第三,不同发行版的许可也有差异。例如,[Fedora 禁止事项][2] 禁止包含专有的、受法律保护,或者是违反美国法律的驱动程序。而 Ubuntu 则让用户[避免使用受法律保护或闭源的硬件设备][3]。
+
+为了更好地学习 Linux 驱动程序是如何工作的,我建议阅读《Linux 设备驱动程序》一书中的 [设备驱动程序简介][4]。
+
+### 两种方式来寻找驱动程序
+
+#### 1、 用户界面
+
+如果是一个刚从 Windows 或 MacOS 转过来的 Linux 新手,那你会很高兴知道 Linux 也提供了一个通过向导式的程序来查看驱动程序是否可用的方法。 Ubuntu 提供了一个 [附加驱动程序][5] 选项。其它的 Linux 发行版也提供了帮助程序,像 [GNOME 的包管理器][6],你可以使用它来检查驱动程序是否可用。
+
+#### 2、 命令行
+
+如果你通过漂亮的用户界面没有找到驱动程序,那又该怎么办呢?或许你只能通过没有任何图形界面的 shell?甚至你可以使用控制台来展现你的技能。你有两个选择:
+
+1. **通过一个仓库**
+
+   这和 MacOS 中的 [homebrew][7] 命令行很像:使用 `yum`、`dnf`、`apt-get` 等工具添加仓库并更新包缓存,基本就能完成安装。
+2. **下载、编译,然后自己构建**
+
+   这通常包括直接从网络,或通过 `wget` 命令下载源码包,然后运行配置和编译、安装。这超出了本文的范围,但是如果你选择的是这条路,你可以在网络上找到很多在线指南。
+
+### 检查是否已经安装了这个驱动程序
+
+在进一步学习安装 Linux 驱动程序之前,让我们来学习几条命令,用来检测驱动程序是否已经在你的系统上可用。
+
+[lspci][8] 命令显示了系统上所有 PCI 总线和设备驱动程序的详细信息。
+
+```
+$ lspci
+```
+
+或者使用 `grep`:
+
+```
+$ lspci | grep SOME_DRIVER_KEYWORD
+```
+
+例如,你可以使用 `lspci | grep SAMSUNG` 命令,如果你想知道是否安装过三星的驱动。
+
+[dmesg][9] 命令显示了所有内核识别的驱动程序。
+
+```
+$ dmesg
+```
+
+或配合 `grep` 使用:
+
+```
+$ dmesg | grep SOME_DRIVER_KEYWORD
+```
+
+任何识别到的驱动程序都会显示在结果中。
+
+如果通过 `dmesg` 或者 `lspci` 命令没有识别到任何驱动程序,尝试下这两个命令,看看驱动程序是否至少已经存在于硬盘上。
+
+```
+$ /sbin/lsmod
+```
+
+和
+
+```
+$ find /lib/modules
+```
+
+技巧:和 `lspci` 或 `dmesg` 一样,通过在上面的命令后面加上 `| grep` 来过滤结果。
+
+如果一个驱动程序已经被识别到了,但是通过 `lspci` 或 `dmesg` 并没有找到,这意味着驱动程序已经存在于硬盘上,但是并没有加载到内核中,这种情况,你可以通过 `modprobe` 命令来加载这个模块。
+
+```
+$ sudo modprobe MODULE_NAME
+```
+
+使用 `sudo` 来运行这个命令,因为这个模块要使用 root 权限来安装。
+
+### 添加仓库并安装
+
+可以通过 `yum`、`dnf` 和 `apt-get` 几种不同的方式来添加一个仓库;一个个介绍完它们并不在本文的范围。简单一点来说,这个示例将会使用 `apt-get`,但是这个命令和其它的几个都是很类似的。
+
+#### 1、删除已有的仓库(如果它存在)
+
+```
+$ sudo apt-get purge NAME_OF_DRIVER*
+```
+
+其中 `NAME_OF_DRIVER` 是你的驱动程序的可能的名称。你还可以将模式匹配加到正则表达式中来进一步过滤。
+
+#### 2、将仓库加入到仓库表中,这应该在驱动程序指南中有指定
+
+```
+$ sudo add-apt-repository REPOLIST_OF_DRIVER
+```

+其中 `REPOLIST_OF_DRIVER` 应该在驱动程序文档中有所指定(例如:`epel-list`)。
+
+#### 3、更新仓库列表
+
+```
+$ sudo apt-get update
+```
+
+#### 4、安装驱动程序
+
+```
+$ sudo apt-get install NAME_OF_DRIVER
+```
+
+#### 5、检查安装状态
+
+像上面说的一样,通过 `lspci` 命令来检查驱动程序是否已经安装成功。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/11/how-install-device-driver-linux
+
+作者:[Bryant Son][a]
+选题:[lujun9972][b]
+译者:[Jamskr](https://github.com/Jamskr)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/brson
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/List_of_Linux_distributions
+[2]: https://fedoraproject.org/wiki/Forbidden_items?rd=ForbiddenItems
+[3]: https://www.ubuntu.com/licensing
+[4]: https://www.xml.com/ldd/chapter/book/ch01.html
+[5]: https://askubuntu.com/questions/47506/how-do-i-install-additional-drivers
+[6]: https://help.gnome.org/users/gnome-packagekit/stable/add-remove.html.en
+[7]: https://brew.sh/
+[8]: https://en.wikipedia.org/wiki/Lspci
+[9]: https://en.wikipedia.org/wiki/Dmesg
diff --git a/published/201811/20181116 Akash Angle- How do you Fedora.md b/published/201811/20181116 Akash Angle- How do you Fedora.md
new file mode 100644
index 0000000000..ccd764f8aa
--- /dev/null
+++ b/published/201811/20181116 Akash Angle- How do you Fedora.md
@@ -0,0 +1,62 @@
+Akash Angle:你如何使用 Fedora?
+======
+
+![](https://fedoramagazine.org/wp-content/uploads/2018/11/akash-angle-816x345.jpg)
+
+我们最近采访了 Akash Angle 来了解他如何使用 Fedora。这是 Fedora Magazine 上 Fedora [系列的一部分][1]。该系列介绍 Fedora 用户以及他们如何使用 Fedora 完成工作。如果你有兴趣成为受访者,请通过[反馈表单][2]与我们联系。
+
+### Akash Angle 是谁?
+
+Akash 是一位不久前抛弃 Windows 的 Linux 用户。作为一名过去 9 年的狂热 Fedora 用户,他已经尝试了几乎所有的 Fedora 定制版和桌面环境来完成他的日常任务。是一位校友给他介绍了 Fedora。
+
+### 使用什么硬件?
+
+Akash 在工作时使用联想 B490。它配备了英特尔酷睿 i3-3310 处理器和 240GB 金士顿 SSD。Akash 说:“这台笔记本电脑非常适合一些日常任务,如上网、写博客,以及一些照片编辑和视频编辑。虽然不是专业的笔记本电脑,而且规格并不是那么高端,但它完美地完成了工作。”
+
+他使用一个入门级的罗技无线鼠标,并希望能有一个机械键盘。他的 PC 是一台定制桌面电脑,拥有最新的第 7 代 Intel i5 7400 处理器和 8GB Corsair Vengeance 内存。
+
+![][3]
+
+### 使用什么软件?
+ +Akash 是 GNOME 3 桌面环境的粉丝。他喜欢该操作系统为完成基本任务而加入的华丽功能。 + +出于实际原因,他更喜欢全新安来升级到最新 Fedora 版本。他认为 Fedora 29 可以说是最好的工作站。Akash 说这种说法得到了各种科技传播网站和开源新闻网站评论的支持。 + +为了播放视频,他的首选是打包为 [Flatpak][4] 的 VLC 视频播放器 ,它提供了最新的稳定版本。当 Akash 想截图时,他的终极工具是 [Shutter,Magazine 曾介绍过][5]。对于图形处理,GIMP 是他不能离开的工具。 + +Google Chrome 稳定版和开发版是他最常用的网络浏览器。他还使用 Chromium 和 Firefox 的默认版本,有时甚至会使用 Opera。 + +由于他是一名资深用户,所以 Akash 其余时候都使用终端。GNOME Terminal 是他使用的一个终端。 + +#### 最喜欢的壁纸 + +他最喜欢的壁纸之一是下面最初来自 Fedora 16 的壁纸: + +![][6] + +这是他目前在 Fedora 29 工作站上使用的壁纸之一: + +![][7] + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/akash-angle-how-do-you-fedora/ + +作者:[Adam Šamalík][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/asamalik/ +[b]: https://github.com/lujun9972 +[1]: https://fedoramagazine.org/tag/how-do-you-fedora/ +[2]: https://fedoramagazine.org/submit-an-idea-or-tip/ +[3]: https://fedoramagazine.org/wp-content/uploads/2018/11/akash-angle-desktop-300x259.png +[4]: https://fedoramagazine.org/getting-started-flatpak/ +[5]: https://fedoramagazine.org/screenshot-everything-shutter-fedora/ +[6]: https://fedoramagazine.org/wp-content/uploads/2018/11/Fedora-16-300x188.png +[7]: https://fedoramagazine.org/wp-content/uploads/2018/11/wallpaper2you_72588-300x169.jpg diff --git a/published/201811/20181117 How to enter single user mode in SUSE 12 Linux.md b/published/201811/20181117 How to enter single user mode in SUSE 12 Linux.md new file mode 100644 index 0000000000..333beaad19 --- /dev/null +++ b/published/201811/20181117 How to enter single user mode in SUSE 12 Linux.md @@ -0,0 +1,55 @@ +如何在 SUSE 12 Linux 中进入单用户模式? 
+====== + +> 一篇了解如何在 SUSE 12 Linux 服务器中进入单用户模式的简短文章。 + +![How to enter single user mode in SUSE 12 Linux][1] + +在这篇简短的文章中,我们将向你介绍在 SUSE 12 Linux 中进入单用户模式的步骤。在排除系统主要问题时,单用户模式始终是首选。单用户模式禁用网络并且没有其他用户登录,你可以排除许多多用户系统的情况,可以帮助你快速排除故障。单用户模式最常见的一种用处是[重置忘记的 root 密码][2]。 + +### 1、暂停启动过程 + +首先,你需要拥有机器的控制台才能进入单用户模式。如果它是虚拟机那就是虚拟机控制台,如果它是物理机那么你需要连接它的 iLO/串口控制台。重启系统并在 GRUB 启动菜单中按任意键停止内核的自动启动。 + +![Kernel selection menu at boot in SUSE 12][3] + +### 2、编辑内核的启动选项 + +进入上面的页面后,在所选内核(通常是你首选的最新内核)上按 `e` 更新其启动选项。你会看到下面的页面。 + +![grub2 edits in SUSE 12][4] + +现在,向下滚动到内核引导行,并在行尾添加 `init=/bin/bash`,如下所示。 + +![Edit to boot in single user shell][5] + +### 3、引导编辑后的内核 + +现在按 `Ctrl-x` 或 `F10` 来启动这个编辑过的内核。内核将以单用户模式启动,你将看到 `#` 号提示符,即有服务器的 root 访问权限。此时,根文件系统以只读模式挂载。因此,你对系统所做的任何更改都不会被保存。 + +运行以下命令以将根文件系统重新挂载为可重写入的。 + +``` +kerneltalks:/ # mount -o remount,rw / +``` + +这就完成了!继续在单用户模式中做你必要的事情吧。完成后不要忘了重启服务器引导到普通多用户模式。 + +-------------------------------------------------------------------------------- + +via: https://kerneltalks.com/howto/how-to-enter-single-user-mode-in-suse-12-linux/ + +作者:[kerneltalks][a] +选题:[lujun9972][b] +译者:[geekpi](https://github.com/geekpi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://kerneltalks.com +[b]: https://github.com/lujun9972 +[1]: https://a4.kerneltalks.com/wp-content/uploads/2018/11/How-to-enter-single-user-mode-in-SUSE-12-Linux.png +[2]: https://kerneltalks.com/linux/recover-forgotten-root-password-rhel/ +[3]: https://a1.kerneltalks.com/wp-content/uploads/2018/11/Grub-menu-in-SUSE-12.png +[4]: https://a3.kerneltalks.com/wp-content/uploads/2018/11/grub2-editor.png +[5]: https://a4.kerneltalks.com/wp-content/uploads/2018/11/Edit-to-boot-in-single-user-shell.png diff --git a/published/201811/20181119 How To Customize Bash Prompt In Linux.md b/published/201811/20181119 How To Customize Bash Prompt In Linux.md new file mode 100644 index 0000000000..190fdb914b --- /dev/null +++ b/published/201811/20181119 How To Customize Bash Prompt In Linux.md @@ -0,0 +1,313 @@ +在 Linux 上自定义 bash 命令提示符 +====== + +![](https://www.ostechnix.com/wp-content/uploads/2017/10/BASH-720x340.jpg) + +众所周知,**bash**(the **B**ourne-**A**gain **Sh**ell)是目前绝大多数 Linux 发行版使用的默认 shell。本文将会介绍如何通过添加颜色和样式来自定义 bash 命令提示符的显示。尽管很多插件或工具都可以很轻易地满足这一需求,但我们也可以不使用插件和工具,自己手动自定义一些基本的显示方式,例如添加或者修改某些元素、更改前景色、更改背景色等等。 + +### 在 Linux 中自定义 bash 命令提示符 + +在 bash 中,我们可以通过更改 `$PS1` 环境变量的值来自定义 bash 命令提示符。 + +一般情况下,bash 命令提示符会是以下这样的形式: + +![](https://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal.png) + +在上图这种默认显示形式当中,“sk” 是我的用户名,而 “ubuntuserver” 是我的主机名。 + +只要插入一些以反斜杠开头的特殊转义字符串,就可以按照你的喜好修改命令提示符了。下面我来举几个例子。 + +在开始之前,我强烈建议你预先备份 `~/.bashrc` 文件。 + +``` +$ cp ~/.bashrc ~/.bashrc.bak +``` + +#### 更改 bash 命令提示符中的 username@hostname 部分 + +如上所示,bash 命令提示符一般都带有 “username@hostname” 部分,这个部分是可以修改的。 + +只需要编辑 `~/.bashrc` 文件: + +``` +$ vi ~/.bashrc +``` + +在文件的最后添加一行: + +``` +PS1="ostechnix> " +``` + +将上面的 “ostechnix” 替换为任意一个你想使用的单词,然后按 `ESC` 并输入 `:wq` 保存、退出文件。 + +执行以下命令使刚才的修改生效: + +``` +$ source ~/.bashrc +``` + +你就可以看见 bash 命令提示符中出现刚才添加的 “ostechnix” 了。 + +![][3] + +再来看看另一个例子,比如将 “username@hostname” 替换为 “Hello@welcome>”。 + +同样是像刚才那样修改 `~/.bashrc` 文件。 + +``` +export PS1="Hello@welcome> " +``` + +然后执行 `source ~/.bashrc` 让修改结果立即生效。 + +以下是我在 Ubuntu 18.04 LTS 上修改后的效果。 + +![](https://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-1.png) + +#### 仅显示用户名 + +如果需要仅显示用户名,只需要在 `~/.bashrc` 文件中加入以下这一行。 + +``` +export PS1="\u " +``` + +这里的 `\u` 就是一个转义字符串。 + 
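+顺便一提,在把修改写进 `~/.bashrc` 之前,你也可以直接在当前终端里给 `PS1` 赋值来临时试验效果,关掉终端就会失效。下面是一个示意(提示符内容是随意举的,用户名和主机名沿用上文的 sk 和 ubuntuserver):
+
+```
+$ PS1="\u@\h> "
+sk@ubuntuserver>
+```
+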
+下面提供了一些可以添加到 `$PS1` 环境变量中的用以改变 bash 命令提示符样式的转义字符串。每次修改之后,都需要执行 `source ~/.bashrc` 命令才能立即生效。 + +#### 显示用户名和主机名 + +``` +export PS1="\u\h " +``` + +命令提示符会这样显示: + +``` +skubuntuserver +``` + +#### 显示用户名和完全限定域名 + +``` +export PS1="\u\H " +``` + +#### 在用户名和主机名之间显示其它字符 + +如果你还需要在用户名和主机名之间显示其它字符(例如 `@`),可以使用以下格式: + +``` +export PS1="\u@\h " +``` + +命令提示符会这样显示: + +``` +sk@ubuntuserver +``` + +#### 显示用户名、主机名,并在末尾添加 $ 符号 + +``` +export PS1="\u@\h\\$ " +``` + +#### 综合以上两种显示方式 + +``` +export PS1="\u@\h> " +``` + +命令提示符最终会这样显示: + +``` +sk@ubuntuserver> +``` + +相似地,还可以添加其它特殊字符,例如冒号、分号、星号、下划线、空格等等。 + +#### 显示用户名、主机名、shell 名称 + +``` +export PS1="\u@\h>\s " +``` + +#### 显示用户名、主机名、shell 名称以及 shell 版本 + +``` +export PS1="\u@\h>\s\v " +``` + +bash 命令提示符显示样式: + +![][4] + +#### 显示用户名、主机名、当前目录 + +``` +export PS1="\u@\h\w " +``` + +如果当前目录是 `$HOME` ,会以一个波浪线(`~`)显示。 + +#### 在 bash 命令提示符中显示日期 + +除了用户名和主机名,如果还想在 bash 命令提示符中显示日期,可以在 `~/.bashrc` 文件中添加以下内容: + +``` +export PS1="\u@\h>\d " +``` + +![][5] + +#### 在 bash 命令提示符中显示日期及 12 小时制时间 + +``` +export PS1="\u@\h>\d\@ " +``` + +#### 显示日期及 hh:mm:ss 格式时间 + +``` +export PS1="\u@\h>\d\T " +``` + +#### 显示日期及 24 小时制时间 + +``` +export PS1="\u@\h>\d\A " +``` + +#### 显示日期及 24 小时制 hh:mm:ss 格式时间 + +``` +export PS1="\u@\h>\d\t " +``` + +以上是一些常见的可以改变 bash 命令提示符的转义字符串。除此以外的其它转义字符串,可以在 bash 的 man 手册 PROMPTING 章节中查阅。 + +你也可以随时执行以下命令查看当前的命令提示符样式。 + +``` +$ echo $PS1 +``` + +#### 在 bash 命令提示符中去掉 username@hostname 部分 + +如果我不想做任何调整,直接把 username@hostname 部分整个去掉可以吗?答案是肯定的。 + +如果你是一个技术方面的博主,你有可能会需要在网站或者博客中上传自己的 Linux 终端截图。或许你的用户名和主机名太拉风、太另类,不想让别人看到,在这种情况下,你就需要隐藏命令提示符中的 “username@hostname” 部分。 + +如果你不想暴露自己的用户名和主机名,只需要按照以下步骤操作。 + +编辑 `~/.bashrc` 文件: + +``` +$ vi ~/.bashrc +``` + +在文件末尾添加这一行: + +``` +PS1="\W> " +``` + +输入 `:wq` 保存并关闭文件。 + +执行以下命令让修改立即生效。 + +``` +$ source ~/.bashrc +``` + +现在看一下你的终端,“username@hostname” 部分已经消失了,只保留了一个 `~>` 标记。 + +![][6] + +如果你想要尽可能简单的操作,又不想弄乱你的 `~/.bashrc` 文件,最好的办法就是在系统中创建另一个用户(例如 “user@example”、“admin@demo”)。用带有这样的命令提示符的用户去截图或者录屏,就不需要顾虑自己的用户名或主机名被别人看见了。 + +**警告:**在某些情况下,这种做法并不推荐。例如像 zsh 这种 shell 会继承当前 shell 的设置,这个时候可能会出现一些意想不到的问题。这个技巧只用于隐藏命令提示符中的 “username@hostname” 部分,仅此而已,如果把这个技巧挪作他用,也可能会出现异常。 + +### 为 bash 命令提示符着色 + +目前我们也只是变更了 bash 命令提示符中的内容,下面介绍一下如何对命令提示符进行着色。 + +通过向 `~/.bashrc` 文件写入一些配置,可以修改 bash 命令提示符的前景色(也就是文本的颜色)和背景色。 + +例如,下面这一行配置可以令某些文本的颜色变成红色: + +``` +export PS1="\u@\[\e[31m\]\h\[\e[m\] " +``` + +添加配置后,执行 `source ~/.bashrc` 立即生效。 + +你的 bash 命令提示符就会变成这样: + +![][7] + +类似地,可以用这样的配置来改变背景色: + +``` +export PS1="\u@\[\e[31;46m\]\h\[\e[m\] " +``` + +![][8] + +### 添加 emoji + +大家都喜欢 emoji。还可以按照以下配置把 emoji 插入到命令提示符中。 + +``` +PS1="\W 🔥 >" +``` + +需要注意的是,emoji 的显示取决于使用的字体,因此某些终端可能会无法正常显示 emoji,取而代之的是一些乱码或者单色表情符号。 + +### 自定义 bash 命令提示符有点难,有更简单的方法吗? + +如果你是一个新手,编辑 `$PS1` 环境变量的过程可能会有些困难,因为命令提示符中的大量转义字符串可能会让你有点晕头转向。但不要担心,有一个在线的 bash `$PS1` 生成器可以帮助你轻松生成各种 `$PS1` 环境变量值。 + +就是这个[网站][9]: + +[![EzPrompt](https://www.ostechnix.com/wp-content/uploads/2017/10/EzPrompt.png)][9] + +只需要直接选择你想要的 bash 命令提示符样式,添加颜色、设计排序,然后就完成了。你可以预览输出,并将配置代码复制粘贴到 `~/.bashrc` 文件中。就这么简单。顺便一提,本文中大部分的示例都是通过这个网站制作的。 + +### 我把我的 ~/.bashrc 文件弄乱了,该如何恢复? + +正如我在上面提到的,强烈建议在更改 `~/.bashrc` 文件前做好备份(在更改其它重要的配置文件之前也一定要记得备份)。这样一旦出现任何问题,你都可以很方便地恢复到更改之前的配置状态。当然,如果你忘记了备份,还可以按照下面这篇文章中介绍的方法恢复为默认配置。 + +- [如何将 `~/.bashrc` 文件恢复到默认配置][10] + +这篇文章是基于 ubuntu 的,但也适用于其它的 Linux 发行版。不过事先声明,这篇文章的方法会将 `~/.bashrc` 文件恢复到系统最初时的状态,你对这个文件做过的任何修改都将丢失。 + +感谢阅读! 
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[HankChow](https://github.com/HankChow)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: https://www.ostechnix.com/cdn-cgi/l/email-protection
+[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]: http://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal-2.png
+[4]: http://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-2.png
+[5]: http://www.ostechnix.com/wp-content/uploads/2017/10/bash-prompt-3.png
+[6]: http://www.ostechnix.com/wp-content/uploads/2017/10/Linux-Terminal-1.png
+[7]: http://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/bash-prompt-4/
+[8]: http://www.ostechnix.com/hide-modify-usernamelocalhost-part-terminal/bash-prompt-5/
+[9]: http://ezprompt.net/
+[10]: https://www.ostechnix.com/restore-bashrc-file-default-settings-ubuntu/
+
diff --git a/published/201811/20181120 How To Change GDM Login Screen Background In Ubuntu.md b/published/201811/20181120 How To Change GDM Login Screen Background In Ubuntu.md
new file mode 100644
index 0000000000..9fbf743381
--- /dev/null
+++ b/published/201811/20181120 How To Change GDM Login Screen Background In Ubuntu.md
@@ -0,0 +1,86 @@
+如何更换 Ubuntu 系统的 GDM 登录界面背景
+======
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-3.png)
+
+Ubuntu 18.04 LTS 桌面系统在登录、锁屏和解锁状态下,我们会看到一个纯紫色的背景。它是 GDM(GNOME 显示管理器GNOME Display Manager)从 Ubuntu 17.04 版本开始使用的默认背景。有一些人可能会不喜欢这个纯色的背景,想换一个酷一点、更吸睛的!如果是这样,你找对地方了。这篇短文将会告诉你如何更换 Ubuntu 18.04 LTS 的 GDM 登录界面的背景。
+
+### 更换 Ubuntu 的登录界面背景
+
+这是 Ubuntu 18.04 LTS 桌面系统默认的登录界面。
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-1.png)
+
+不管你喜欢与否,你总是会不经意地在登录、解屏/锁屏的时候面对它。别担心!你可以随便更换一个你喜欢的图片。
+
+在 Ubuntu 上更换桌面壁纸和用户的资料图像不难,点击几下鼠标就能搞定。但更换解屏/锁屏的背景则需要修改文件 `ubuntu.css`,它位于 `/usr/share/gnome-shell/theme`。
+
+修改这个文件之前,最好备份一下它。这样一旦出现问题,我们还可以恢复它。
+
+```
+$ sudo cp /usr/share/gnome-shell/theme/ubuntu.css /usr/share/gnome-shell/theme/ubuntu.css.bak
+```
+
+修改文件 `ubuntu.css`:
+
+```
+$ sudo nano /usr/share/gnome-shell/theme/ubuntu.css
+```
+
+在文件中找到关键字 `lockDialogGroup`,如下行:
+
+```
+#lockDialogGroup {
+  background: #2c001e url(resource:///org/gnome/shell/theme/noise-texture.png);
+  background-repeat: repeat;
+}
+```
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/11/ubuntu_css.png)
+
+可以看到,GDM 默认登录的背景图片是 `noise-texture.png`。
+
+现在修改为你自己的图片路径。也可以选择 .jpg 或 .png 格式的文件,两种格式的图片文件都是支持的。修改完成后的文件内容如下:
+
+```
+#lockDialogGroup {
+  background: #2c001e url(file:///home/sk/image.png);
+  background-repeat: no-repeat;
+  background-size: cover;
+  background-position: center;
+}
+```
+
+请注意 `ubuntu.css` 文件里这个关键字的修改,我把修改点加粗了。
+
+你可能注意到,我把原来的 `... url(resource:///org/gnome/shell/theme/noise-texture.png);` 修改为 `... url(file:///home/sk/image.png);`。也就是说,你可以把 `... url(resource ...` 修改为 `... url(file ...`。
+
+同时,你可以把参数 `background-repeat:` 的值 `repeat` 修改为 `no-repeat`,并增加另外两行。你可以直接复制上面几行的修改到你的 `ubuntu.css` 文件,并对应地修改为你自己的图片路径。
+
+修改完成后,保存并关闭此文件,然后重启系统使其生效。
+
+下面是 GDM 登录界面的最新背景图片:
+
+![](https://www.ostechnix.com/wp-content/uploads/2018/11/GDM-login-screen-2.png)
+
+是不是很酷,你都看到了,更换 GDM 登录的默认背景很简单。你只需要修改 `ubuntu.css` 文件中图片的路径然后重启系统。是不是很简单也很有意思。
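+
+如果你想减少手工编辑的步骤,也可以用一条 `sed` 命令完成图片路径的替换。下面只是一个示意,其中 `/home/sk/image.png` 是示例路径,请换成你自己的图片;执行前请确认已按前文备份好 `ubuntu.css`:
+
+```
+# 将默认背景图片路径替换为自己的图片(路径仅为示例)
+sudo sed -i 's|resource:///org/gnome/shell/theme/noise-texture.png|file:///home/sk/image.png|' /usr/share/gnome-shell/theme/ubuntu.css
+```
+
+注意:这条命令只替换了图片路径,`background-repeat` 等参数仍需按照前文手工修改。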
+ +你可以修改 `/usr/share/gnome-shell/theme` 目录下的文件 `gdm3.css` ,具体修改内容和修改结果和上面一样。同时记得修改前备份要修改的文件。 + +就这些了。如果有好的东东再分享了,请大家关注! + +后会有期。 + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-change-gdm-login-screen-background-in-ubuntu/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[Guevaraya](https://github.com/guevaraya) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 diff --git a/published/201811/20181126 How to use multiple programming languages without losing your mind.md b/published/201811/20181126 How to use multiple programming languages without losing your mind.md new file mode 100644 index 0000000000..bbb310fa4e --- /dev/null +++ b/published/201811/20181126 How to use multiple programming languages without losing your mind.md @@ -0,0 +1,71 @@ +[#]: collector: (lujun9972) +[#]: translator: (heguangzhi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: subject: (How to use multiple programming languages without losing your mind) +[#]: via: (https://opensource.com/article/18/11/multiple-programming-languages) +[#]: author: (Bart Copeland https://opensource.com/users/bartcopeland) +[#]: url: (https://linux.cn/article-10291-1.html) + +如何使用多种编程语言而又不失理智 +====== + +> 多语言编程环境是一把双刃剑,既带来好处,也带来可能威胁组织的复杂性。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/books_programming_languages.jpg?itok=KJcdnXM2) + +如今,随着各种不同的编程语言的出现,许多组织已经变成了数字多语种组织digital polyglots。开源打开了一个语言和技术堆栈的世界,开发人员可以使用这些语言和技术堆栈来完成他们的任务,包括开发、支持过时的和现代的软件应用。 + +与那些只说母语的人相比,通晓多种语言的人可以与数百万人交谈。在软件环境中,开发人员不会引入新的语言来达到特定的目的,也不会更好地交流。一些语言对于一项任务来说很棒,但是对于另一项任务来说却不行,因此使用多种编程语言可以让开发人员使用合适的工具来完成这项任务。这样,所有的开发都是多语种的;这只是野兽的本性。 + +多语种环境的创建通常是渐进的和情景化的。例如,当一家企业收购一家公司时,它就承担了该公司的技术堆栈 —— 包括其编程语言。或者,随着技术领导的改变,新的领导者可能会将不同的技术纳入其中。技术也有过时的时候,随着时间的推移,增加了组织必须维护的编程语言和技术的数量。 + +多语言环境对企业来说是一把双刃剑,既带来好处,也带来复杂性和挑战。最终,如果这种情况得不到控制,多语言将会扼杀你的企业。 + +### 棘手的技术绕口令 + +如果有多种不同的技术 —— 编程语言、过时的工具和新兴的技术堆栈 —— 就有复杂性。工程师团队花更多的时间努力改进编程语言,包括许可证、安全性和依赖性。与此同时,管理层缺乏对代码合规性的监督,无法衡量风险。 + +发生的情况是,企业具有不同程度的编程语言质量和工具支持的高度可变性。当你需要和十几个人一起工作时,很难成为一种语言的专家。一个能流利地说法语和意大利语的人和一个能用八种语言串成几个句子的人在技能水平上有很大差异。开发人员和编程语言也是如此。 + +随着更多编程语言的加入,困难只会增加,导致数字巴别塔的出现。 + +答案是不要拿走开发人员工作所需的工具。添加新的编程语言可以建立他们的技能基础,并为他们提供合适的设备来完成他们的工作。所以,你想对你的开发者说“是”,但是随着越来越多的编程语言被添加到企业中,它们会拖累你的软件开发生命周期(SDLC)。在规模上,所有这些语言和工具都可能扼杀企业。 + +企业应注意三个主要问题: + +1. **可见性:** 团队聚在一起执行项目,然后解散。应用程序已经发布,但从未更新 —— 为什么要修复那些没有被破坏的东西?因此,当发现一个关键漏洞时,企业可能无法了解哪些应用程序受到影响,这些应用程序包含哪些库,甚至无法了解它们是用什么语言构建的。这可能导致成本高昂的“勘探项目”,以确保漏洞得到适当解决。 + +2. **更新或编码:** 一些企业将更新和修复功能集中在一个团队中。其他人要求每个“比萨团队”管理自己的开发工具。无论是哪种情况,工程团队和管理层都要付出机会成本:这些团队没有编码新特性,而是不断更新和修复开源工具中的库,因为它们移动得如此之快。 + +3. **重新发明轮子:** 由于代码依赖性和库版本不断更新,当发现漏洞时,与应用程序原始版本相关联的工件可能不再可用。因此,许多开发周期都被浪费在试图重新创建一个可以修复漏洞的环境上。 + +将你组织中的每种编程语言乘以这三个问题,开始时被认为是分子一样小的东西突然看起来像珠穆朗玛峰。就像登山者一样,没有合适的设备和工具,你将无法生存。 + +### 找到你的罗塞塔石碑 + +一个全面的解决方案可以满足 SDLC 中企业及其个人利益相关者的需求。企业可以使用以下最佳实践创建解决方案: + + 1. 监控生产中运行的代码,并根据应用程序中使用的标记组件(例如,常见漏洞和暴露组件)的风险做出响应。 + 2. 定期接收更新以保持代码的最新和无错误。 + 3. 使用商业开源支持来获得编程语言版本和平台的帮助,这些版本和平台已经接近尾声,并且不受社区支持。 + 4. 标准化整个企业中的特定编程语言构建,以实现跨团队的一致环境,并最大限度地减少依赖性。 + 5. 根据相关性设置何时触发更新、警报或其他类型事件的阈值。 + 6. 为您的包管理创建一个单一的可信来源;这可能需要知识渊博的技术提供商的帮助。 + 7. 
根据您的特定标准,只使用您需要的软件包获得较小的构建版本。 + +使用这些最佳实践,开发人员可以最大限度地利用他们的时间为企业创造更多价值,而不是执行基本的工具或构建工程任务。这将在软件开发生命周期(SDLC)的所有环境中创建代码一致性。由于维护编程语言和软件包分发所需的资源更少,这也将提高效率和节约成本。这种新的操作方式将使技术人员和管理人员的生活更加轻松。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/multiple-programming-languages + +作者:[Bart Copeland][a] +选题:[lujun9972][b] +译者:[heguangzhi](https://github.com/heguangzhi) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/bartcopeland +[b]: https://github.com/lujun9972 diff --git a/published/20181128 Arch-Audit - A Tool To Check Vulnerable Packages In Arch Linux.md b/published/20181128 Arch-Audit - A Tool To Check Vulnerable Packages In Arch Linux.md new file mode 100644 index 0000000000..e31694679f --- /dev/null +++ b/published/20181128 Arch-Audit - A Tool To Check Vulnerable Packages In Arch Linux.md @@ -0,0 +1,122 @@ +[#]: collector: "lujun9972" +[#]: translator: "Auk7F7" +[#]: reviewer: "wxy" +[#]: publisher: "wxy" +[#]: subject: "Arch-Audit : A Tool To Check Vulnerable Packages In Arch Linux" +[#]: via: "https://www.2daygeek.com/arch-audit-a-tool-to-check-vulnerable-packages-in-arch-linux/" +[#]: author: "Prakash Subramanian https://www.2daygeek.com/author/prakash/" +[#]: url: "https://linux.cn/article-10473-1.html" + +Arch-Audit:一款在 Arch Linux 上检查易受攻击的软件包的工具 +====== + +我们必须经常更新我们的系统以减少宕机时间和问题。每月给系统打一次补丁,60 天一次或者最多 90 天一次,这是 Linux 管理员的例行任务之一。这是忙碌的工作计划,我们不能在不到一个月内做到这一点,因为它涉及到多种活动和环境。 + +基本上,基础设施会一同提供测试、开发、 QA 环境(即各个分段和产品)。 + +最初,我们会在测试环境中部署补丁,相应的团队将监视系统一周,然后他们将给出一份或好或坏的状态的报告。如果成功的话,我们将会在其他环境中继续测试,若正常运行,那么最后我们会给生产服务器打上补丁。 + +许多组织会对整个系统打上补丁,我的意思是全系统更新,对于典型基础设施这是一种常规修补计划。 + +某些基础设施中可能只有生产环境,因此,我们不应该做全系统更新,而是应该使用安全修补程序来使系统更加稳定和安全。 + +由于 Arch Linux 及其衍生的发行版属于滚动更新版本,因此可以认为它们始终是最新的,因为它使用上游软件包的最新版本。 + +在某些情况下,如果要单独更新安全修补程序,则必须使用 arch-audit 工具来标识和修复安全修补程序。 + +### 漏洞是什么? + +漏洞是软件程序或硬件组件(固件)中的安全漏洞。这是一个可以让它容易受到攻击的缺陷。 + +为了缓解这种情况,我们需要相应地修补漏洞,就像应用程序/硬件一样,它可能是代码更改或配置更改或参数更改。 + +### Arch-Audit 工具是什么? + +[Arch-audit][1] 是一个类似于 Arch Linux 的 pkg-audit 工具。它使用了令人称赞的 Arch 安全小组收集的数据。它不会扫描以发现系统中易受攻击的包(就像 `yum –security check-update & yum updateinfo` 一样列出可用的软件包),它只需解析 页面并在终端中显示结果,因此,它将显示准确的数据。(LCTT 译注:此处原作者叙述不清晰。该功能虽然不会像病毒扫描软件一样扫描系统上的文件,但是会读取已安装的软件列表,并据此查询上述网址列出风险报告。) + +Arch 安全小组是一群以跟踪 Arch Linux 软件包的安全问题为目的的志愿者。所有问题都在 Arch 安全追踪者的监视下。 + +该小组以前被称为 Arch CVE 监测小组,Arch 安全小组的使命是为提高 Arch Linux 的安全性做出贡献。 + +### 如何在 Arch Linux 上安装 Arch-Audit 工具 + +Arch-audit 工具已经存在社区的仓库中,所以你可以使用 Pacman 包管理器来安装它。 + +``` +$ sudo pacman -S arch-audit +``` + +运行 `arch-audit` 工具以查找在基于 Arch 的发行版本上的存在缺陷的包。 + +``` +$ arch-audit +Package cairo is affected by CVE-2017-7475. Low risk! +Package exiv2 is affected by CVE-2017-11592, CVE-2017-11591, CVE-2017-11553, CVE-2017-17725, CVE-2017-17724, CVE-2017-17723, CVE-2017-17722. Medium risk! +Package libtiff is affected by CVE-2018-18661, CVE-2018-18557, CVE-2017-9935, CVE-2017-11613. High risk!. Update to 4.0.10-1! +Package openssl is affected by CVE-2018-0735, CVE-2018-0734. Low risk! +Package openssl-1.0 is affected by CVE-2018-5407, CVE-2018-0734. Low risk! +Package patch is affected by CVE-2018-6952, CVE-2018-1000156. High risk!. Update to 2.7.6-7! +Package pcre is affected by CVE-2017-11164. Low risk! +Package systemd is affected by CVE-2018-6954, CVE-2018-15688, CVE-2018-15687, CVE-2018-15686. Critical risk!. Update to 239.300-1! +Package unzip is affected by CVE-2018-1000035. Medium risk! 
+Package webkit2gtk is affected by CVE-2018-4372. Critical risk!. Update to 2.22.4-1! +``` + +上述结果显示了系统的脆弱性风险状况,比如:低、中和严重三种情况。 + +若要仅显示易受攻击的包及其版本,请执行以下操作。 + +``` +$ arch-audit -q +cairo +exiv2 +libtiff>=4.0.10-1 +openssl +openssl-1.0 +patch>=2.7.6-7 +pcre +systemd>=239.300-1 +unzip +webkit2gtk>=2.22.4-1 +``` + +仅显示已修复的包。 + +``` +$ arch-audit --upgradable --quiet +libtiff>=4.0.10-1 +patch>=2.7.6-7 +systemd>=239.300-1 +webkit2gtk>=2.22.4-1 +``` + +为了交叉检查上述结果,我将测试在 列出的一个包以确认漏洞是否仍处于开放状态或已修复。是的,它已经被修复了,并于昨天在社区仓库中发布了更新后的包。 + +![][3] + +仅打印包名称和其相关的 CVE。 + +``` +$ arch-audit -uf "%n|%c" +libtiff|CVE-2018-18661,CVE-2018-18557,CVE-2017-9935,CVE-2017-11613 +patch|CVE-2018-6952,CVE-2018-1000156 +systemd|CVE-2018-6954,CVE-2018-15688,CVE-2018-15687,CVE-2018-15686 +webkit2gtk|CVE-2018-4372 +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/arch-audit-a-tool-to-check-vulnerable-packages-in-arch-linux/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[Auk7F7](https://github.com/Auk7F7) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://github.com/ilpianista/arch-audit +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: https://www.2daygeek.com/wp-content/uploads/2018/11/A-Tool-To-Check-Vulnerable-Packages-In-Arch-Linux.png diff --git a/published/20181128 Turn an old Linux desktop into a home media center.md b/published/20181128 Turn an old Linux desktop into a home media center.md new file mode 100644 index 0000000000..e1acc79691 --- /dev/null +++ b/published/20181128 Turn an old Linux desktop into a home media center.md @@ -0,0 +1,89 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: subject: (Turn an old Linux desktop into a home media center) +[#]: via: (https://opensource.com/article/18/11/old-linux-desktop-new-home-media-center) +[#]: author: ([Alan Formy-Duval](https://opensource.com/users/alanfdoss)) +[#]: url: (https://linux.cn/article-10446-1.html) + +将旧的 Linux 台式机变成家庭媒体中心 +====== + +> 重新利用过时的计算机来浏览互联网并在大屏电视上观看视频。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q) + +我第一次尝试搭建一台“娱乐电脑”是在 20 世纪 90 年代后期,使用了一台带 Trident ProVidia 9685 PCI 显卡的普通旧台式电脑。我使用了所谓的“电视输出”卡,它有一个额外的输出可以连接到标准电视端子上。屏幕显示看起来不太好,而且没有音频输出。并且外观很丑:有一条 S-Video 线穿过了客厅地板连接到我的 19 英寸 Sony Trinitron CRT 电视机上。 + +我在 Linux 和 Windows 98 上得到了同样令人遗憾的结果。在和那些看起来不对劲的系统挣扎之后,我放弃了几年。值得庆幸的是,如今的 HDMI 拥有更好的性能和标准化的分辨率,这使得廉价的家庭媒体中心成为现实。 + +我的新媒体中心娱乐电脑实际上是我的旧 Ubuntu Linux 桌面,最近我用更快的电脑替换了它。这台电脑在工作中太慢,但是它的 3.4GHz 的 AMD Phenom II X4 965 处理器和 8GB 的 RAM 足以满足一般浏览和视频流的要求。 + +以下是我让旧系统在新角色中发挥最佳性能所采取的步骤。 + +### 硬件 + +首先,我移除了不必要的设备,包括读卡器、硬盘驱动器、DVD 驱动器和后置 USB 卡,我添加了一块 PCI-Express 无线网卡。我将 Ubuntu 安装到单个固态硬盘 (SSD) 上,这可以切实提高任何旧系统的性能。 + +### BIOS + +在 BIOS 中,我禁用了所有未使用的设备,例如软盘和 IDE 驱动器控制器。我禁用了板载显卡,因为我安装了带 HDMI 输出的 NVidia GeForce GTX 650 PCI Express 显卡。我还禁用了板载声卡,因为 NVidia 显卡芯片组提供音频。 + +### 音频 + +Nvidia GeForce GTX 音频设备在 GNOME 控制中心的声音设置中被显示为 GK107 HDMI Audio Controller,因此单条 HDMI 线缆可同时处理音频和视频。无需将音频线连接到板载声卡的输出插孔。 + +![Sound settings screenshot][2] + +*GNOME 音频设置中的 HDMI 音频控制器。* + +### 键盘和鼠标 + +我有罗技的无线键盘和鼠标。当我安装它们时,我插入了两个外置 USB 接收器,它们可以使用,但我经常遇到信号反应问题。接着我发现其中一个被标记为联合接收器,这意味着它可以自己处理多个罗技输入设备。罗技不提供在 Linux 中配置联合接收器的软件。但幸运的是,有个开源程序 [Solaar][3] 
能够做到。使用单个接收器解决了我的输入性能问题。
+
+![Solaar][5]
+
+*Solaar 联合接收器界面。*
+
+### 视频
+
+最初很难在我的 47 英寸平板电视上阅读文字,所以我在 Universal Access 下启用了“大文字”。我下载了一些与电视 1920x1080 分辨率相匹配的壁纸,这看起来很棒!
+
+### 最后处理
+
+我需要在电脑的冷却需求和我对不受阻碍的娱乐的渴望之间取得平衡。由于这是一台标准的 ATX 微型塔式计算机,我确保我有足够的风扇转速,以及在 BIOS 中精心配置过的温度以减少噪音。我还把电脑放在我的娱乐控制台后面,以进一步减少风扇噪音,但同时我可以按到电源按钮。
+
+最后得到一台简单的、没有巨大噪音的机器,而且只使用了两根线缆:交流电源线和 HDMI。它应该能够运行任何主流或专门的媒体中心 Linux 发行版。我不期望去玩高端的游戏,因为这可能需要更多的处理能力。
+
+![Showing Ubuntu Linux About page onscreen][7]
+
+*Ubuntu Linux 的关于页面。*
+
+![YouTube on the big screen][9]
+
+*在大屏幕上测试 YouTube 视频。*
+
+我还没安装像 [Kodi][10] 这样专门的媒体中心发行版。截至目前,它运行的是 Ubuntu Linux 18.04.1 LTS,而且很稳定。
+
+这是一个有趣的挑战,可以充分利用我已经拥有的东西,而不是购买新的硬件。这只是开源软件的一个好处。最终,我可能会用一个更小、更安静的带有媒体中心的系统或其他小机顶盒替换它,但是现在,它很好地满足了我的需求。
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/11/old-linux-desktop-new-home-media-center
+
+作者:[Alan Formy-Duval][a]
+选题:[lujun9972][b]
+译者:[geekpi](https://github.com/geekpi)
+校对:[wxy](https://github.com/wxy)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://opensource.com/users/alanfdoss
+[b]: https://github.com/lujun9972
+[2]: https://opensource.com/sites/default/files/uploads/soundsettings.png (Sound settings screenshot)
+[3]: https://pwr.github.io/Solaar/
+[5]: https://opensource.com/sites/default/files/uploads/solaar_interface.png (Solaar)
+[7]: https://opensource.com/sites/default/files/uploads/finalresult1.png (Showing Ubuntu Linux About page onscreen)
+[9]: https://opensource.com/sites/default/files/uploads/finalresult2.png (YouTube on the big screen)
+[10]: https://kodi.tv/
diff --git a/published/201812/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md b/published/201812/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md
new file mode 100644
index 0000000000..0f51e0e7a9
--- /dev/null
+++ b/published/201812/20111221 30 Best Sources For Linux - -BSD - Unix Documentation On the Web.md
@@ -0,0 +1,401 @@
+学习 Linux/*BSD/Unix 的 30 个最佳在线文档
+======
+
+手册页(man)是由系统管理员和 IT 技术开发人员写的,更多的是作为参考而不是教你如何使用。手册页对于已经熟悉使用 Linux、Unix 和 BSD 操作系统的人来说是非常有用的。如果你仅仅需要知道某个命令或者某个配置文件的格式,那么你可以使用手册页,但是手册页对于 Linux 新手来说并没有太大的帮助。想要通过使用手册页来学习一些新东西不是一个好的选择。这里将提供 30 个学习 Linux 和 Unix 操作系统的最佳在线网页文档。
+
+![Dennis Ritchie and Ken Thompson working with UNIX PDP11][1]
+
+值得一提的是,相对于 Linux,BSD 的手册页更好。
+
+### #1:Red Hat Enterprise Linux(RHEL)
+
+![Red hat Enterprise Linux 文档][2]
+
+RHEL 是由红帽公司开发的面向商业市场的 Linux 发行版。红帽的文档是最好的文档之一,涵盖从 RHEL 的基础到一些高级主题比如安全、SELinux、虚拟化、目录服务器、服务器集群、JBOSS 应用程序服务器、高可用性集群(HPC)等。红帽的文档已经被翻译成 22 种语言,发布成多页面 HTML、单页面 HTML、PDF、EPUB 等文件格式。好消息是,同样的文档你可以用于 Centos 和 Scientific Linux(社区企业发行版)。这些文档随操作系统一起下载提供,也就是说当你没有网络的时候,你也可以使用它们。RHEL 的文档**涵盖从安装到配置集群的所有内容**。唯一的缺点是你需要成为付费用户。当然这对于企业公司来说是一件完美的事。
+
+1. RHEL 文档:[HTML/PDF格式][3](LCTT 译注:**此链接**需要付费用户才可以访问)
+2. 是否支持论坛:只能通过红帽公司的用户网站提交支持案例。
+
+#### 关于 CentOS Wiki 和论坛的说明
+
+![Centos Linux Wiki][4]
+
+CentOS(社区企业操作系统Community ENTerprise Operating System)是由 RHEL 提供的自由源码包免费重建的。它为个人电脑或其它用途提供了可靠的、免费的企业级 Linux。你可以不用付出任何支持和认证费用就可以获得 RHEL 的稳定性。CentOS 的 wiki 分为 Howto、技巧等等部分,链接如下:
+
+1. 文档:[wiki 格式][87]
+2. 是否支持论坛:[是][88]
+
+### #2:Arch 的 Wiki 和论坛
+
+![Arch Linux wiki 和教程][5]
+
+Arch Linux 是一个独立开发的 Linux 操作系统,它有基于 wiki 网站形式的非常不错的文档。它是由 Arch 社区的一些用户共同协作开发出来的,并且允许任何用户添加或修改内容。这些文档教程被分为几类比如说优化、软件包管理、系统管理、X window 系统还有获取安装 Arch Linux 等。它的[官方论坛][7]在解决许多问题的时候也非常有用。它总共有 4 万多个注册用户、超过 1 百万个帖子。 该 wiki 包含一些 **其它 Linux 发行版也适用的通用信息**。
+
+1. Arch 社区文档:[Wiki 格式][8]
+2. 
是否支持论坛:[是][7] + +### #3:Gentoo Linux Wiki 和论坛 + +![Gentoo Linux 手册和 Wiki][9] + +Gentoo Linux 基于 Portage 包管理系统。Gentoo Linux 用户根据它们选择的配置在本地编译源代码。多数 Gentoo Linux 用户都会定制自己独有的程序集。 Gentoo Linux 的文档会给你一些有关 Gentoo Linux 操作系统的说明和一些有关安装、软件包、网络和其它等主要出现的问题的解决方法。Gentoo 有对你来说 **非常有用的论坛**,论坛中有超过 13 万 4 千的用户,总共发了有 5442416 个文章。 + +1. Gentoo 社区文档:[手册][10] 和 [Wiki 格式][11] +2. 是否支持论坛:[是][12] + +### #4:Ubuntu Wiki 和文档 + +![Ubuntu Linux Wiki 和论坛][14] + +Ubuntu 是领先的台式机和笔记本电脑发行版之一。其官方文档由 Ubuntu 文档工程开发维护。你可以在从官方文档中查看大量的信息,比如如何开始使用 Ubuntu 的教程。最好的是,此处包含的这些信息也可用于基于 Debian 的其它系统。你可能会找到由 Ubuntu 的用户们创建的社区文档,这是一份有关 Ubuntu 的使用教程和技巧等。Ubuntu Linux 有着网络上最大的 Linux 社区的操作系统,它对新用户和有经验的用户均有助益。 + +1. Ubuntu 社区文档:[wiki 格式][15] +2. Ubuntu 官方文档:[wiki 格式][16] +3. 是否支持论坛:[是][17] + +### #5:IBM Developer Works + +![IBM: Linux 程序员和系统管理员用到的技术][18] + +IBM Developer Works 为 Linux 程序员和系统管理员提供技术资源,其中包含数以百计的文章、教程和技巧来协助 Linux 程序员的编程工作和应用开发还有系统管理员的日常工作。 + +1. IBM 开发者项目文档:[HTML 格式][19] +2. 是否支持论坛:[是][20] + +### #6:FreeBSD 文档和手册 + +![Freebsd Documentation][21] + +FreeBSD 的手册是由 FreeBSD 文档项目FreeBSD Documentation Project所创建的,它介绍了 FreeBSD 操作系统的安装、管理和一些日常使用技巧等内容。FreeBSD 的手册页通常比 GNU Linux 的手册页要好一点。FreeBSD **附带有全部最新手册页的文档**。 FreeBSD 手册涵盖任何你想要的内容。手册包含一些通用的 Unix 资料,这些资料同样适用于其它的 Linux 发行版。FreeBSD 官方论坛会在你遇到棘手问题时给予帮助。 + +1. FreeBSD 文档:[HTML/PDF 格式][90] +2. 是否支持论坛:[是][91] + +### #7:Bash Hackers Wiki + +![Bash Hackers wiki][22] + +这是一个对于 bash 使用者来说非常好的资源。Bash 使用者的 wiki 是为了归纳所有类型的 GNU Bash 文档。这个项目的动力是为了提供可阅读的文档和资料来避免用户被迫一点一点阅读 Bash 的手册,有时候这是非常麻烦的。Bash Hackers Wiki 分为各个类,比如说脚本和通用资料、如何使用、代码风格、bash 命令格式和其它。 + +1. Bash 用户教程:[wiki 格式][23] + +### #8:Bash 常见问题 + +![Bash 常见问题:一些有关 GNU/BASH 常见问题的解决方法][24] + +这是一个为 bash 新手设计的一个 wiki。它收集了 IRC 网络的 #bash 频道里常见问题的解决方法,这些解决方法是由该频道的普通成员提供。当你遇到问题的时候不要忘了在 [BashPitfalls][25] 部分检索查找答案。这些常见问题的解决方法可能会倾向于 Bash,或者偏向于最基本的 Bourne Shell,这决定于是谁给出的答案。大多数情况会尽力提供可移植的(Bourne)和高效的(Bash,在适当情况下)的两类答案。 + +1. Bash 常见问题:[wiki 格式][26] + +### #9: Howtoforge - Linux 教程 + +![Howtoforge][27] + +博客作者 Falko 在 Howtoforge 上有一些非常不错的东西。这个网站提供了 Linux 关于各种各样主题的教程,比如说其著名的“最佳服务器系列”,网站将主题分为几类,比如说 web 服务器、linux 发行版、DNS 服务器、虚拟化、高可用性、电子邮件和反垃圾邮件、FTP 服务器、编程主题还有一些其它的内容。这个网站也支持德语。 + +1. Howtoforge: [html 格式][28] +2. 是否支持论坛:是 + +### #10:OpenBSD 常见问题和文档 + +![OpenBSD 文档][29] + +OpenBSD 是另一个基于 BSD 的类 Unix 计算机操作系统。OpenBSD 是由 NetBSD 项目分支而来。OpenBSD 因高质量的代码和文档、对软件许可协议的坚定立场和强烈关注安全问题而闻名。OpenBSD 的文档分为多个主题类别,比如说安装、包管理、防火墙设置、用户管理、网络、磁盘和磁盘阵列管理等。 + +1. OpenBSD:[html 格式][30] +2. 是否支持论坛:否,但是可以通过 [邮件列表][31] 来咨询 + +### #11: Calomel - 开源研究和参考文档 + +![开源研究和参考文档][32] + +这个极好的网站是专门作为开源软件和那些特别专注于 OpenBSD 的软件的文档来使用的。这是最简洁的引导网站之一,专注于高质量的内容。网站内容分为多个类,比如说 DNS、OpenBSD、安全、web 服务器、Samba 文件服务器、各种工具等。 + +1. Calomel 官网:[html 格式][33] +2. 是否支持论坛:否 + +### #12:Slackware 书籍项目 + +![Slackware Linux 手册和文档][34] + +Slackware Linux 是我的第一个 Linux 发行版。Slackware 是基于 Linux 内核的最早的发行版之一,也是当前正在维护的最古老的 Linux 发行版。 这个发行版面向专注于稳定性的高级用户。 Slackware 也是很少有的的“类 Unix” 的 Linux 发行版之一。官方的 Slackware 手册是为了让用户快速开始了解 Slackware 操作系统的使用方法而设计的。 这不是说它将包含发行版的每一个方面,而是为了说明它的实用性和给使用者一些有关系统的基础工作使用方法。手册分为多个主题,比如说安装、网络和系统配置、系统管理、包管理等。 + +1. Slackware Linux 手册:[html 格式][35]、pdf 和其它格式 +2. 是否支持论坛:是 + +### #13:Linux 文档项目(TLDP) + +![Linux 学习网站和文档][36] + +Linux 文档项目Linux Documentation Project旨在给 Linux 操作系统提供自由、高质量文档。网站是由志愿者创建和维护的。网站分为具体主题的帮助、由浅入深的指南等。在此我想推荐一个非常好的[文档][37],这个文档既是一个教程也是一个 shell 脚本编程的参考文档,对于新用户来说这个 HOWTO 的[列表][38]也是一个不错的开始。 + +1. Linux [文档工程][39] 支持多种查阅格式 +2. 
是否支持论坛:否 + +### #14:Linux Home Networking + +![Linux Home Networking][40] + +Linux Home Networking 是学习 linux 的另一个比较好的资源,这个网站包含了 Linux 软件认证考试的内容比如 RHCE,还有一些计算机培训课程。网站包含了许多主题,比如说网络、Samba 文件服务器、无线网络、web 服务器等。 + +1. Linux [home networking][41] 可通过 html 格式和 PDF(少量费用)格式查阅 +2. 是否支持论坛:是 + +### #15:Linux Action Show + +![Linux 播客][42] + +Linux Action Show(LAS) 是一个关于 Linux 的播客。这个网站是由 Bryan Lunduke、Allan Jude 和 Chris Fisher 共同管理的。它包含了 FOSS 的最新消息。网站内容主要是评论一些应用程序和 Linux 发行版。有时候也会发布一些和开源项目著名人物的采访视频。 + +1. Linux [action show][43] 支持音频和视频格式 +2. 是否支持论坛:是 + +### #16:Commandlinefu + +![Commandlinefu 的最优 Unix / Linux 命令][45] + +Commandlinefu 列出了各种有用或有趣的 shell 命令。这里所有命令都可以评论、讨论和投票(支持或反对)。对于所有 Unix 命令行用户来说是一个极好的资源。不要忘了查看[评选出来的最佳命令][44]。 + + 1. [Commandlinefu][46] 支持 html 格式 + 2. 是否支持论坛:否 + +### #17:Debian 管理技巧和资源 + +![Debian Linux 管理: 系统管理员技巧和教程][48] + +这个网站包含一些只和 Debian GNU/Linux 相关的主题、技巧和教程,特别是包含了关于系统管理的有趣和有用的信息。你可以在上面贡献文章、建议和问题。提交了之后不要忘记查看[最佳文章列表][47]里有没有你的文章。 + +1. Debian [系统管理][49] 支持 html 格式 +2. 是否支持论坛:否 + +### #18: Catonmat - Sed、Awk、Perl 教程 + +![Sed 流编辑器、 Awk 文本处理工具、 Perl 语言教程][50] + +这个网站是由博客作者 Peteris Krumins 维护的。主要关注命令行和 Unix 编程主题,比如说 sed 流编辑器、perl 语言、AWK 文本处理工具等。不要忘了查看 [sed 介绍][51]、sed 含义解释,还有命令行历史的[权威介绍][53]。 + +1. [catonmat][55] 支持 html 格式 +2. 是否支持论坛:否 + +### #19:Debian GNU/Linux 文档和 Wiki + +![Debian Linux 教程和 Wiki][56] + +Debian 是另外一个 Linux 操作系统,其主要使用的软件以 GNU 许可证发布。Debian 因严格坚持 Unix 和自由软件的理念而闻名,它也是很受欢迎并且有一定影响力的 Linux 发行版本之一。 Ubuntu 等发行版本都是基于 Debian 的。Debian 项目以一种易于访问的形式提供给用户合适的文档。这个网站分为 Wiki、安装指导、常见问题、支持论坛几个模块。 + +1. Debian GNU/Linux [文档][57] 支持 html 和其它格式访问 +2. Debian GNU/Linux [wiki][58] +3. 是否支持论坛:[是][59] + +### #20:Linux Sea + +Linux Sea 这本书提供了比较通俗易懂但充满技术(从最终用户角度来看)的 Linux 操作系统的介绍,使用 Gentoo Linux 作为例子。它既没有谈论 Linux 内核或 Linux 发行版的历史,也没有谈到 Linux 用户不那么感兴趣的细节。 + +1. Linux [sea][60] 支持 html 格式访问 +2. 是否支持论坛: 否 + +### #21:O'reilly Commons + +![免费 Linux / Unix / Php / Javascript / Ubuntu 学习笔记][61] + +O'reilly 出版社发布了不少 wiki 格式的文章。这个网站主要是为了给那些喜欢创作、参考、使用、修改、更新和修订来自 O'Reilly 或者其它来源的素材的社区提供资料。这个网站包含关于 Ubuntu、PHP、Spamassassin、Linux 等的免费书籍。 + +1. Oreilly [commons][62] 支持 Wiki 格式 +2. 是否支持论坛:否 + +### #22:Ubuntu 袖珍指南 + +![Ubuntu 新手书籍][63] + +这本书的作者是 Keir Thomas。这本指南(或者说是书籍)对于所有 ubuntu 用户来说都值得一读。这本书旨在向用户介绍 Ubuntu 操作系统和其所依赖的理念。你可以从官网下载这本书的 PDF 版本,也可以在亚马逊买印刷版。 + +1. Ubuntu [pocket guide][64] 支持 PDF 和印刷版本. +2. 是否支持论坛:否 + +### #23: Linux: Rute User's Tutorial and Exposition + +![GNU/LINUX system administration book][65] + +这本书涵盖了 GNU/LINUX 系统管理,主要是对主流的发布版本比如红帽和 Debian 的说明,可以作为新用户的教程和高级管理员的参考。这本书旨在给出 Unix 系统的每个面的简明彻底的解释和实践性的例子。想要全面了解 Linux 的人都不需要再看了 —— 这里没有涉及的内容。 + +1. Linux: [Rute User's Tutorial and Exposition][66] 支持印刷版和 html 格式 +2. 是否支持论坛:否 + +### #24:高级 Linux 编程 + +![高级 Linux 编程][67] + +这本书是写给那些已经熟悉了 C 语言编程的程序员的。这本书采取一种教程式的方式来讲述大多数在 GNU/Linux 系统应用编程中重要的概念和功能特性。如果你是一个已经对 GNU/Linux 系统编程有一定经验的开发者,或者是对其它类 Unix 系统编程有一定经验的开发者,或者对 GNU/Linux 软件开发有兴趣,或者想要从非 Unix 系统环境转换到 Unix 平台并且已经熟悉了优秀软件的开发原则,那你很适合读这本书。另外,你会发现这本书同样适合于 C 和 C++ 编程。 + +1. [高级 Linux 编程][68] 支持印刷版和 PDF 格式 +2. 是否支持论坛:否 + +### #25: LPI 101 Course Notes + +![Linux 国际专业协会认证书籍][69] + +LPIC 1、2、3 级是用于 Linux 系统管理员认证的。这个网站提供了 LPI 101 和 LPI 102 的测试训练。这些是根据 GNU 自由文档协议GNU Free Documentation Licence(FDL)发布的。这些课程材料基于 Linux 国际专业协会的 LPI 101 和 102 考试的目标。这个课程是为了提供给你一些必备的 Linux 系统的操作和管理的技能。 + +1. LPI [训练手册][70] 支持 PDF 格式 +2. 
是否支持论坛:否 + +### #26: FLOSS 手册 + +![FLOSS Manuals is a collection of manuals about free and open source software][72] + +FLOSS 手册是一系列关于自由和开源软件以及用于创建它们的工具和使用这些工具的社区的手册。社区的成员包含作者、编辑、设计师、软件开发者、积极分子等。这些手册中说明了怎样安装使用一些自由和开源软件,如何操作(比如设计和维持在线安全)开源软件,这其中也包含如何使用或支持自由软件和格式的自由文化服务手册。你也会发现关于一些像 VLC、 [Linux 视频编辑][71]、 Linux、 OLPC / SUGAR、 GRAPHICS 等软件的手册。 + +1. 你可以浏览 [FOSS 手册][73] 支持 Wiki 格式 +2. 是否支持论坛:否 + +### #27:Linux 入门包 + +![Linux 入门包][74] + +刚接触 Linux 这个美好世界?想找一个简单的入门方式?你可以下载一个 130 页的指南来入门。这个指南会向你展示如何在你的个人电脑上安装 Linux,如何浏览桌面,掌握最主流行的 Linux 程序和修复可能出现的问题的方法。 + +1. [Linux 入门包][75]支持 PDF 格式 +2. 是否支持论坛:否 + +### #28:Linux.com - Linux 信息来源 + +Linux.com 是 Linux 基金会的一个产品。这个网站上提供一些新闻、指南、教程和一些关于 Linux 的其它信息。利用全球 Linux 用户的力量来通知、写作、连接 Linux 的事务。 + +1. 在线访问 [Linux.com][76] +2. 是否支持论坛:是 + +### #29: LWN + +LWN 是一个注重自由软件及用于 Linux 和其它类 Unix 操作系统的软件的网站。这个网站有周刊、基本上每天发布的单独文章和文章的讨论对话。该网站提供有关 Linux 和 FOSS 相关的开发、法律、商业和安全问题的全面报道。 + +1. 在线访问 [lwn.net][77] +2. 是否支持论坛:否 + +### #30:Mac OS X 相关网站 + +与 Mac OS X 相关网站的快速链接: + +* [Mac OS X 提示][78] —— 这个网站专用于苹果的 Mac OS X Unix 操作系统。网站有很多有关 Bash 和 Mac OS X 的使用建议、技巧和教程 +* [Mac OS 开发库][79] —— 苹果拥有大量和 OS X 开发相关的优秀系列内容。不要忘了看一看 [bash shell 脚本入门][80] +* [Apple 知识库][81] - 这个有点像 RHN 的知识库。这个网站提供了所有苹果产品包括 OS X 相关的指南和故障报修建议。 + +### #30: NetBSD + +(LCTT 译注:没错,又一个 30) + +NetBSD 是另一个基于 BSD Unix 操作系统的自由开源操作系统。NetBSD 项目专注于系统的高质量设计、稳定性和性能。由于 NetBSD 的可移植性和伯克利式的许可证,NetBSD 常用于嵌入式系统。这个网站提供了一些 NetBSD 官方文档和各种第三方文档的链接。 + +1. 在线访问 [netbsd][82] 文档,支持 html、PDF 格式 +2. 是否支持论坛:否 + +### 你要做的事 + +这是我的个人列表,这可能并不完全是权威的,因此如果你有你自己喜欢的独特 Unix/Linux 网站,可以在下方参与评论分享。 + +// 图片来源: [Flickr photo][83] PanelSwitchman。一些连接是用户在我们的 Facebook 粉丝页面上建议添加的。 + +### 关于作者 + +作者是 nixCraft 的创建者和经验丰富的系统管理员以及 Linux 操作系统 / Unix shell 脚本的培训师。它曾与全球客户及各行各业合作,包括 IT、教育,国防和空间研究以及一些非营利部门。可以关注作者的 [Twitter][84]、[Facebook][85]、[Google+][86]。 + +-------------------------------------------------------------------------------- + +via: https://www.cyberciti.biz/tips/linux-unix-bsd-documentations.html + +作者:[Vivek Gite][a] +译者:[ScarboroughCoral](https://github.com/ScarboroughCoral) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://www.cyberciti.biz +[1]:https://www.cyberciti.biz/media/new/tips/2011/12/unix-pdp11.jpg "Dennis Ritchie and Ken Thompson working with UNIX PDP11" +[2]:https://www.cyberciti.biz/media/new/tips/2011/12/redhat-enterprise-linux-docs.png "Red hat Enterprise Linux Docs" +[3]:https://access.redhat.com/documentation/en-us/ +[4]:https://www.cyberciti.biz/media/new/tips/2011/12/centos-linux-wiki.png "Centos Linux Wiki, Support, Documents" +[5]:https://www.cyberciti.biz/media/new/tips/2011/12/arch-linux-wiki.png "Arch Linux wiki and tutorials " +[6]:https://wiki.archlinux.org/index.php/Category:Networking_%28English%29 +[7]:https://bbs.archlinux.org/ +[8]:https://wiki.archlinux.org/ +[9]:https://www.cyberciti.biz/media/new/tips/2011/12/gentoo-linux-wiki1.png "Gentoo Linux Handbook and Wiki" +[10]:http://www.gentoo.org/doc/en/handbook/ +[11]:https://wiki.gentoo.org +[12]:https://forums.gentoo.org/ +[13]:http://gentoo-wiki.com +[14]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-linux-wiki.png "Ubuntu Linux Wiki and Forums" +[15]:https://help.ubuntu.com/community +[16]:https://help.ubuntu.com/ +[17]:https://ubuntuforums.org/ +[18]:https://www.cyberciti.biz/media/new/tips/2011/12/ibm-devel.png "IBM: Technical for Linux programmers and system administrators" +[19]:https://www.ibm.com/developerworks/learn/linux/index.html 
+[20]:https://www.ibm.com/developerworks/community/forums/html/public?lang=en +[21]:https://www.cyberciti.biz/media/new/tips/2011/12/freebsd-docs.png "Freebsd Documentation" +[22]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-hackers-wiki.png "Bash hackers wiki for bash users" +[23]:http://wiki.bash-hackers.org/doku.php +[24]:https://www.cyberciti.biz/media/new/tips/2011/12/bash-faq.png "Bash FAQ: Answers to frequently asked questions about GNU/BASH" +[25]:http://mywiki.wooledge.org/BashPitfalls +[26]:https://mywiki.wooledge.org/BashFAQ +[27]:https://www.cyberciti.biz/media/new/tips/2011/12/howtoforge.png "Howtoforge tutorials" +[28]:https://howtoforge.com/ +[29]:https://www.cyberciti.biz/media/new/tips/2011/12/openbsd-faq.png "OpenBSD Documenation" +[30]:https://www.openbsd.org/faq/index.html +[31]:https://www.openbsd.org/mail.html +[32]:https://www.cyberciti.biz/media/new/tips/2011/12/calomel_org.png "Open Source Research and Reference Documentation" +[33]:https://calomel.org +[34]:https://www.cyberciti.biz/media/new/tips/2011/12/slackware-linux-book.png "Slackware Linux Book and Documentation " +[35]:http://www.slackbook.org/ +[36]:https://www.cyberciti.biz/media/new/tips/2011/12/tldp.png "Linux Learning Site and Documentation " +[37]:http://tldp.org/LDP/abs/html/index.html +[38]:http://tldp.org/HOWTO/HOWTO-INDEX/howtos.html +[39]:http://tldp.org/ +[40]:https://www.cyberciti.biz/media/new/tips/2011/12/linuxhomenetworking.png "Linux Home Networking " +[41]:http://www.linuxhomenetworking.com/ +[42]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-action-show.png "Linux Podcast " +[43]:http://www.jupiterbroadcasting.com/show/linuxactionshow/ +[44]:https://www.commandlinefu.com/commands/browse/sort-by-votes +[45]:https://www.cyberciti.biz/media/new/tips/2011/12/commandlinefu.png "The best Unix / Linux Commands " +[46]:https://commandlinefu.com/ +[47]:https://www.debian-administration.org/hof +[48]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-admin.png "Debian Linux Adminstration: Tips and Tutorial For Sys Admin" +[49]:https://www.debian-administration.org/ +[50]:https://www.cyberciti.biz/media/new/tips/2011/12/catonmat.png "Sed, Awk, Perl Tutorials" +[51]:http://www.catonmat.net/blog/worlds-best-introduction-to-sed/ +[52]:https://www.catonmat.net/blog/sed-one-liners-explained-part-one/ +[53]:https://www.catonmat.net/blog/the-definitive-guide-to-bash-command-line-history/ +[54]:https://www.catonmat.net/blog/awk-one-liners-explained-part-one/ +[55]:https://catonmat.net/ +[56]:https://www.cyberciti.biz/media/new/tips/2011/12/debian-wiki.png "Debian Linux Tutorials and Wiki" +[57]:https://www.debian.org/doc/ +[58]:https://wiki.debian.org/ +[59]:https://www.debian.org/support +[60]:http://swift.siphos.be/linux_sea/ +[61]:https://www.cyberciti.biz/media/new/tips/2011/12/orelly.png "Oreilly Free Linux / Unix / Php / Javascript / Ubuntu Books" +[62]:http://commons.oreilly.com/wiki/index.php/O%27Reilly_Commons +[63]:https://www.cyberciti.biz/media/new/tips/2011/12/ubuntu-guide.png "Ubuntu Book For New Users" +[64]:http://ubuntupocketguide.com/ +[65]:https://www.cyberciti.biz/media/new/tips/2011/12/rute.png "GNU/LINUX system administration free book" +[66]:https://web.archive.org/web/20160204213406/http://rute.2038bug.com/rute.html.gz +[67]:https://www.cyberciti.biz/media/new/tips/2011/12/advanced-linux-programming.png "Download Advanced Linux Programming PDF version" +[68]:https://github.com/MentorEmbedded/advancedlinuxprogramming 
+[69]:https://www.cyberciti.biz/media/new/tips/2011/12/lpic.png "Download Linux Professional Institute Certification PDF Book" +[70]:http://academy.delmar.edu/Courses/ITSC1358/eBooks/LPI-101.LinuxTrainingCourseNotes.pdf +[71]://www.cyberciti.biz/faq/top5-linux-video-editing-system-software/ +[72]:https://www.cyberciti.biz/media/new/tips/2011/12/floss-manuals.png "Download manuals about free and open source software" +[73]:https://flossmanuals.net/ +[74]:https://www.cyberciti.biz/media/new/tips/2011/12/linux-starter.png "New to Linux? Start Linux starter book [ PDF version ]" +[75]:http://www.tuxradar.com/linuxstarterpack +[76]:https://linux.com +[77]:https://lwn.net/ +[78]:http://hints.macworld.com/ +[79]:https://developer.apple.com/library/mac/navigation/ +[80]:https://developer.apple.com/library/mac/#documentation/OpenSource/Conceptual/ShellScripting/Introduction/Introduction.html +[81]:https://support.apple.com/kb/index?page=search&locale=en_US&q= +[82]:https://www.netbsd.org/docs/ +[83]:https://www.flickr.com/photos/9479603@N02/3311745151/in/set-72157614479572582/ +[84]:https://twitter.com/nixcraft +[85]:https://facebook.com/nixcraft +[86]:https://plus.google.com/+CybercitiBiz +[87]:https://wiki.centos.org/ +[88]:https://www.centos.org/forums/ +[90]: https://www.freebsd.org/docs.html +[91]: https://forums.freebsd.org/ diff --git a/published/201812/20171012 7 Best eBook Readers for Linux.md b/published/201812/20171012 7 Best eBook Readers for Linux.md new file mode 100644 index 0000000000..346eed6bb6 --- /dev/null +++ b/published/201812/20171012 7 Best eBook Readers for Linux.md @@ -0,0 +1,188 @@ +7 个最佳 Linux 电子书阅读器 +====== + +**摘要:** 本文中我们涉及一些 Linux 最佳电子书阅读器。这些应用提供更佳的阅读体验甚至可以管理你的电子书。 + +![最佳 Linux 电子书阅读器][1] + +最近,随着人们发现在手持设备、Kindle 或者 PC 上阅读更加舒适,对电子图书的需求有所增加。至于 Linux 用户,也有各种电子书应用满足你阅读和整理电子书的需求。 + +在本文中,我们选出了七个最佳 Linux 电子书阅读器。这些电子书阅读器最适合 pdf、epub 和其他电子书格式。 + +我提供的是 Ubuntu 安装说明,因为我现在使用它。如果你使用的是[非 Ubuntu 发行版][2],你能在你的发行版软件仓库中找到大多数这些电子书应用。 + +### 1. Calibre + +[Calibre][3] 是 Linux 最受欢迎的电子书应用。老实说,这不仅仅是一个简单的电子书阅读器。它是一个完整的电子书解决方案。你甚至能[通过 Calibre 创建专业的电子书][4]。 + +通过强大的电子书管理和易用的界面,它提供了创建和编辑电子书的功能。Calibre 支持多种格式和与其它电子书阅读器同步。它也可以让你轻松转换一种电子书格式到另一种。 + +Calibre 最大的缺点是,资源消耗太多,因此作为一个独立的电子阅读器来说是一个艰难的选择。 + +![Calibre][5] + +#### 特性 + + * 管理电子书:Calibre 通过管理元数据来排序和分组电子书。你能从各种来源下载一本电子书的元数据或创建和编辑现有的字段。 + * 支持所有主流电子书格式:Calibre 支持所有主流电子书格式并兼容多种电子阅读器。 + * 文件转换:在转换时,你能通过改变电子书风格,创建内容表和调整边距的选项来转换任何一种电子书格式到另一种。你也能转换个人文档为电子书。 + * 从 web 下载杂志期刊:Calibre 能从各种新闻源或者通过 RSS 订阅源传递故事。 + * 分享和备份你的电子图书馆:它提供了一个选项,可以托管你电子书集合到它的服务端,从而你能与好友共享或用任何设备从任何地方访问。备份和导入/导出特性可以确保你的收藏安全和方便携带。 + +#### 安装 + +你能在主流 Linux 发行版的软件库中找到它。对于 Ubuntu,在软件中心搜索它或者使用下面的命令: + +``` +sudo apt-get install calibre +``` + +### 2. FBReader + +![FBReader: Linux 电子书阅读器][6] + +[FBReader][7] 是一个开源的轻量级多平台电子书阅读器,它支持多种格式,比如 ePub、fb2、mobi、rtf、html 等。它包括了一些可以访问的流行网络电子图书馆,那里你能免费或付费下载电子书。 + +#### 特性 + + * 支持多种文件格式和设备比如 Android、iOS、Windows、Mac 和更多。 + * 同步书集、阅读位置和书签。 + * 在线管理你图书馆,可以从你的 Linux 桌面添加任何书到所有设备。 + * 支持 Web 浏览器访问你的书集。 + * 支持将书籍存储在 Google Drive ,可以通过作者,系列或其他属性整理书籍。 + +#### 安装 + +你能从官方库或者在终端中输入以下命令安装 FBReader 电子阅读器。 + +``` +sudo apt-get install fbreader +``` + +或者你能从[这里][8]抓取一个以 .deb 包,并在你的基于 Debian 发行版的系统上安装它。 + +### 3. 
Okular + +[Okular][9] 是另一个开源的基于 KDE 开发的跨平台文档查看器,它已经作为 KDE 应用发布的一部分了。 + +![Okular][10] + +#### 特性 + + * Okular 支持多种文档格式像 PDF、Postscript、DjVu、CHM、XPS、ePub 和其他。 + * 支持在 PDF 文档中评论、高亮和绘制不同的形状等。 + * 无需修改原始 PDF 文件,分别保存上述这些更改。 + * 电子书中的文本能被提取到一个文本文件,并且有个名为 Jovie 的内置文本阅读服务。 + +备注:查看这个应用的时候,我发现这个应用在 Ubuntu 和它的衍生系统中不支持 ePub 文件格式。其他发行版用户仍然可以发挥它全部的潜力。 + +#### 安装 + +Ubuntu 用户可以在终端中键入下面的命令来安装它: + +``` +sudo apt-get install okular +``` + +### 4. Lucidor + +Lucidor 是一个易用的、支持 epub 文件格式和在 OPDS 格式中编目的电子阅读器。它也具有在本地书架里组织电子书集、从互联网搜索和下载,和将 Web 订阅和网页转换成电子书的功能。 + +Lucidor 是 XULRunner 应用程序,它向您展示了具有类似火狐的选项卡式布局,和存储数据和配置时的行为。它是这个列表中最简单的电子阅读器,包括诸如文本说明和滚动选项之类的配置。 + +![lucidor][11] + +你可以通过选择单词并右击“查找单词”来查找该单词在 Wiktionary.org 的定义。它也包含 web 订阅或 web 页面作为电子书的选项。 + +你能从[这里][12]下载和安装 deb 或者 RPM 包。 + +### 5. Bookworm + +![Bookworm Linux 电子阅读器][13] + +Bookworm 是另一个支持多种文件格式诸如 epub、pdf、mobi、cbr 和 cbz 的自由开源的电子阅读器。我写了一篇关于 Bookworm 应用程序的特性和安装的专题文章,到这里阅读:[Bookworm:一个简单而强大的 Linux 电子阅读器][14] + +#### 安装 + +``` +sudo apt-add-repository ppa:bookworm-team/bookworm +sudo apt-get update +sudo apt-get install bookworm +``` + +### 6. Easy Ebook Viewer + +[Easy Ebook Viewer][15] 是又一个用于读取 ePub 文件的很棒的 GTK Python 应用。具有基本章节导航、从上次阅读位置继续、从其他电子书文件格式导入、章节跳转等功能,Easy Ebook Viewer 是一个简单而简约的 ePub 阅读器. + +![Easy-Ebook-Viewer][16] + +这个应用仍然处于初始阶段,只支持 ePub 文件。 + +#### 安装 + +你可以从 [GitHub][17] 下载源代码,并自己编译它及依赖项来安装 Easy Ebook Viewer。或者,以下终端命令将执行完全相同的工作。 + +``` +sudo apt install git gir1.2-webkit-3.0 libwebkitgtk-3.0-0 gir1.2-gtk-3.0 python3-gi +git clone https://github.com/michaldaniel/Ebook-Viewer.git +cd Ebook-Viewer/ +sudo make install +``` + +成功完成上述步骤后,你可以从 Dash 启动它。 + +### 7. Buka + +Buka 主要是一个具有简单而清爽的用户界面的电子书管理器。它目前支持 PDF 格式,旨在帮助用户更加关注内容。拥有 PDF 阅读器的所有基本特性,Buka 允许你通过箭头键导航,具有缩放选项,并且能并排查看两页。 + +你可以创建单独的 PDF 文件列表并轻松地在它们之间切换。Buka 也提供了一个内置翻译工具,但是你需要有效的互联网连接来使用这个特性。 + +![Buka][19] + +#### 安装 + +你能从[官方下载页面][20]下载一个 AppImage。如果你不知道如何做,请阅读[如何在 Linux 下使用 AppImage][21]。或者,你可以通过命令行安装它: + +``` +sudo snap install buka +``` + +### 结束语 + +就我个人而言,我发现 Calibre 最适合我的需要。当然,Bookworm 看起来很有前途,这几天我经常使用它。不过,电子书应用的选择完全取决于你的喜好。 + +你使用哪个电子书应用呢?在下面的评论中让我们知道。 + + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-ebook-readers-linux/ + +作者:[Ambarish Kumar][a] +译者:[zjon](https://github.com/zjon) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://itsfoss.com/author/ambarish/ +[1]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/10/best-ebook-readers-linux.png +[2]:https://itsfoss.com/non-ubuntu-beginner-linux/ +[3]:https://www.calibre-ebook.com +[4]:https://itsfoss.com/create-ebook-calibre-linux/ +[5]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Calibre-800x603.jpeg +[6]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/10/fbreader-800x624.jpeg +[7]:https://fbreader.org +[8]:https://fbreader.org/content/fbreader-beta-linux-desktop +[9]:https://okular.kde.org/ +[10]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Okular-800x435.jpg +[11]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/lucidor-2.png +[12]:http://lucidor.org/lucidor/download.php +[13]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/08/bookworm-ebook-reader-linux-800x450.jpeg +[14]:https://itsfoss.com/bookworm-ebook-reader-linux/ +[15]:https://github.com/michaldaniel/Ebook-Viewer +[16]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Easy-Ebook-Viewer.jpg +[17]:https://github.com/michaldaniel/Ebook-Viewer.git 
[18]:https://github.com/oguzhaninan/Buka
+[19]:https://i0.wp.com/itsfoss.com/wp-content/uploads/2017/09/Buka2-800x555.png
+[20]:https://github.com/oguzhaninan/Buka/releases
+[21]:https://itsfoss.com/use-appimage-linux/
diff --git a/published/201812/20171108 Continuous infrastructure- The other CI.md b/published/201812/20171108 Continuous infrastructure- The other CI.md
new file mode 100644
index 0000000000..67a35c7c3d
--- /dev/null
+++ b/published/201812/20171108 Continuous infrastructure- The other CI.md
@@ -0,0 +1,108 @@
+持续基础设施:另一个 CI
+======
+
+> 想要提升你的 DevOps 效率吗?将基础设施当成你的 CI 流程中的重要的一环。
+
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/BIZ_darwincloud_520x292_0311LL.png?itok=74DLgd8Q)
+
+持续交付(CD)和持续集成(CI)是 DevOps 的两个众所周知的方面。但在 CI 大肆流行的今天却忽略了另一个关键性的 “I”:基础设施infrastructure。
+
+曾经有一段时间 “基础设施”就意味着无头headless的黑盒子、庞大的服务器和高耸的机架 —— 更不用说漫长的采购流程和对盈余负载的错误估计。后来到了虚拟机时代,基础设施被打理得很好:虚拟化带来了以前的世界从未有过的便利。我们不再需要管理实体的服务器。仅仅是简单的点击,我们就可以创建和销毁、启动和停止、升级和降级我们的服务器。
+
+有一个关于银行的流行故事:它们实现了数字化,并且引入了在线表格,用户需要手动填写表格、打印,然后邮寄回银行(LCTT 译注:我真的遇到过有人问我这样的需求怎么办)。这就是我们今天基础设施遇到的情况:使用新技术来做和以前一样的事情。
+
+在这篇文章中,我们会看到在基础设施管理方面的进步,将基础设施视为一个版本化的组件并试着探索不可变服务器immutable server的概念。在后面的文章中,我们将了解如何使用开源工具来实现持续的基础设施。
+
+![continuous infrastructure pipeline][2]
+
+*实践中的持续集成流程*
+
+这是我们熟悉的 CI,尽早发布、经常发布的循环流程。这个流程缺少一个关键的组件:基础设施。
+
+突击小测试:
+
+* 你怎样创建和升级你的基础设施?
+* 你怎样控制和追溯基础设施的改变?
+* 你的基础设施是如何与你的业务进行匹配的?
+* 你是如何确保在正确的基础设施配置上进行测试的?
+
+要回答这些问题,就要了解持续基础设施continuous infrastructure。把 CI 构建流程分为代码持续集成continuous integration code(CIc)和基础设施持续集成continuous integration infrastructure(CIi)来并行开发和构建代码和基础设施,再将两者融合到一起进行测试。把基础设施构建视为 CI 流程中的重要的一环。

+![pipeline with infrastructure][4]
+
+*包含持续基础设施的 CI 流程*
+
+关于 CIi 定义的几个方面:
+
+1. 代码
+
+   通过代码来创建基础设施架构,而不是通过安装。基础设施如代码Infrastructure as code(IaC)是使用配置脚本创建基础设施的现代最流行的方法。这些脚本遵循典型的编码和单元测试周期(请参阅下面关于 Terraform 脚本的示例)。
+2. 版本
+
+   IaC 组件在源码仓库中进行版本管理。这让基础设施拥有了版本控制的所有好处:一致性、可追溯性、分支和标记。
+3. 管理
+
+   通过编码和版本化的基础设施管理,你可以使用你所熟悉的测试和发布流程来管理基础设施的开发。
+
+CIi 提供了下面的这些优势:
+
+1. 一致性Consistency
+
+   版本化和标记化的基础设施意味着你可以清楚地知道你的系统使用了哪些组件和配置。这建立了一个非常好的 DevOps 实践,用来鉴别和管理基础设施的一致性。
+2. 可重现性Reproducibility
+
+   通过基础设施的标记和基线,重建基础设施变得非常容易。想想你是否经常听到这个:“但是它在我的机器上可以运行!”现在,你可以在本地的测试平台中快速重现类似生产环境,从而在调试过程中把环境这个变量排除在外。
+3. 可追溯性Traceability
+
+   你是否还记得曾经有过多少次寻找到底是谁更改了文件夹权限的经历,或者是谁升级了 `ssh` 包?代码化的、版本化的,发布的基础设施消除了临时性变更,为基础设施的管理带来了可追踪性和可预测性。
+4. 自动化Automation
+
+   借助脚本化的基础架构,自动化是下一个合乎逻辑的步骤。自动化允许你按需创建基础设施,并在使用完成后销毁它,所以你可以将更多宝贵的时间和精力用在更重要的任务上。
+5. 不变性Immutability
+
+   CIi 带来了不可变基础设施等创新。你可以创建一个新的基础设施组件而不是通过升级(请参阅下面有关不可变设施的说明)。
+
+持续基础设施是从运行基础环境到运行基础组件的进化。像处理代码一样,通过经过验证的 DevOps 流程来完成。对传统的 CI 的重新定义包含了缺少的那个 “i”,从而形成了连贯的 CD。
+
+**(CIc + CIi) = CI -> CD**
+
+### 基础设施如代码 (IaC)
+
+CIi 流程的一个关键推动因素是基础设施如代码infrastructure as code(IaC)。IaC 是一种使用配置文件进行基础设施创建和升级的机制。这些配置文件像其他的代码一样进行开发,并且使用版本管理系统进行管理。这些文件遵循一般的代码开发流程:单元测试、提交、构建和发布。IaC 流程拥有版本控制带给基础设施开发的所有好处,如标记、版本一致性和修改可追溯。
+
+这有一个简单的 Terraform 脚本用来在 AWS 上创建一个双层基础设施的简单示例,包括虚拟私有云(VPC)、弹性负载均衡(ELB)、安全组和一个 NGINX 服务器。[Terraform][5] 是一个通过脚本创建和更改基础设施架构的开源工具。
+
+![terraform script][7]
+
+*Terraform 脚本创建双层架构设施的简单示例*
+
+完整的脚本请参见 [GitHub][8]。
+
+### 不可变基础设施
+
+你有几个正在运行的虚拟机,需要更新安全补丁。一个常见的做法是推送一个远程脚本单独更新每个系统。
+
+要是不更新旧系统,如何才能直接丢弃它们并部署安装了新安全补丁的新系统呢?这就是不可变基础设施immutable infrastructure。因为之前的基础设施是版本化的、标签化的,所以安装补丁就只是更新该脚本并将其推送到发布流程而已。
+
+现在你知道为什么要说基础设施在 CI 流程中特别重要了吗?
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/17/11/continuous-infrastructure-other-ci + +作者:[Girish Managoli][a] +选题:[lujun9972](https://github.com/lujun9972) +译者:[Jamskr](https://github.com/Jamskr) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/gammay +[1]:/file/376916 +[2]:https://opensource.com/sites/default/files/images/life-uploads/figure1.jpg (continuous infrastructure pipeline in use) +[3]:/file/376921 +[4]:https://opensource.com/sites/default/files/images/life-uploads/figure2.jpg (CI pipeline with infrastructure) +[5]:https://github.com/hashicorp/terraform +[6]:/file/376926 +[7]:https://opensource.com/sites/default/files/images/life-uploads/figure3_0.png (sample terraform script) +[8]:https://github.com/terraform-providers/terraform-provider-aws/tree/master/examples/two-tier diff --git a/published/201812/20171111 A CEOs Guide to Emacs.md b/published/201812/20171111 A CEOs Guide to Emacs.md new file mode 100644 index 0000000000..4a92e5710b --- /dev/null +++ b/published/201812/20171111 A CEOs Guide to Emacs.md @@ -0,0 +1,286 @@ +CEO 的 Emacs 秘籍 +=========== + +几年前,不,是几十年前,我就在用 Emacs。不论是码代码、编写文档,还是管理邮件和日程,我都用这个编辑器,或者是说操作系统,而且我还乐此不疲。许多年过去了,我也转向了其他更新、更好的工具。结果,就连最基本的文件浏览,我都已经忘了在不用鼠标的情况下该怎么操作。大约三个月前,我意识到我在应用程序和计算机之间切换上耗费了大量的时间,于是我决定再次使用 Emacs。这是个很正确的决定,原因有以下几个。其中包括用 `.emacs` 和 Dropbox 来搭建一个良好的、可移植的环境的一些技巧。 + +对于那些还没用过 Emacs 的人来说,Emacs 会让你爱恨交加。它有点像一个房子大小的鲁布·戈德堡机械Rube Goldberg machine,乍一看,它具备烤面包机的所有功能。这听起来不像是一种认可,但关键词是“乍一看”。一旦你了解了 Emacs,你就会意识到它其实是一台可以当发动机用的热核烤面包机……好吧,只是指文本处理的所有事情。当考虑到你计算机的使用周期在很大程度上都是与文本有关时,这是一个相当大胆的声明。大胆,但却是真的。 + +也许对我来说更重要的是,Emacs 是我曾经使用过的一个应用,并让我觉得我真正的拥有它,而不是把我塑造成一个匿名的“用户”,就好像位于 [Soma][30](LCTT 译注:旧金山的一个街区)或雷蒙德(LCTT 译注:微软总部所在地)附近某个高档办公室的产品营销部门把钱作为明确的目标一样。现代生产力和创作应用程序(如 Pages 或 IDE)就像碳纤维赛车,它们装备得很好,也很齐全。而 Emacs 就像一盒经典的 [Campagnolo][31] (LCTT 译注:世界上最好的三个公路自行车套件系统品牌之一)零件和一个漂亮的自行车牵引式钢框架,但缺少曲柄臂和刹车杆,你必须在网上某个小众文化中找到它们。前者更快而且很完整,后者是无尽的快乐或烦恼的源泉,当然这取决于你自己,而且这种快乐或烦恼会伴随到你死。我就是那种在找到一堆老古董或用 `Emacs Lisp` 配置编辑器时会感到高兴的人,具体情况因人而异。 + +![1933 steel bicycle](https://www.fugue.co/hubfs/Imported_Blog_Media/bicycle-1.jpg) + +*一辆我还在骑的 1933 年产的钢制自行车。你可以看看框架管差别: [https://www.youtube.com/watch?v=khJQgRLKMU0][6]* + +这可能给人一种 Emacs 已经过气或过时的印象。然而并不是,Emacs 是强大和永恒的,只要你耐心地去理解它的一些规则。Emacs 的规则很另类,也很奇怪,但其中的逻辑却引人注目,且魅力十足。对于我来说, Emacs 更像是未来而不是过去。就像牵引式钢框架在未来几十年里将会变得好用和舒适,而神奇的碳纤维自行车将会被扔进垃圾场,在撞击中粉碎一样,Emacs 也将会作为一种在最新的流行应用早已被遗忘的时候的好用的工具继续存在这里。 + +使用 Lisp 代码来构建个人工作环境,并将这个恰到好处的环境移植到任何计算机,如果这种想法打动了你,那么你将会爱上 Emacs。如果你喜欢很潮、很炫的,又不想投入太多时间和精力的情况下就能直接工作的话,那么 Emacs 可能不适合你。我已经不再写代码了(除了 Ludwig 和 Emacs Lisp),但是 Fugue 公司的很多工程师都使用 Emacs 来提高码代码的效率。我公司有 30% 的工程师用 Emacs,40% 用 IDE 和 30% 的用 vim。但这篇文章是写给 CEO 和其他[精英][32]Pointy-Haired Bosses(PHB[^1] )(以及对 Emacs 感兴趣的人)的,所以我将解释或者说辩解我为什么喜欢它以及我如何使用它。同时我也希望我能介绍清楚,从而让你有个良好的体验,而不是花上几个小时去 Google。 + +### 恒久优势 + +使用 Emacs 带来的长期优势是让生活更轻松。与最后的收获相比,最开始的付出完全值得。想想这些: + +#### 简单高效 + +Org 模式本身就值得花时间,但如果你像我一样,你通常要处理十几份左右的文件 —— 从博客帖子到会议事务清单,再到员工评估。在现代计算世界中,这通常意味着要使用多个应用程序,所有这些程序都有不同的用户界面、保存方式、排序和搜索方式。结果就是你需要不断转换思维环境,记住各种细节。我讨厌在程序间切换,这是在强人所难,因为这是个不完整界面模型[^2] ,并且我讨厌记住本该由计算机记住的东西。在单个环境下,Emacs 对 PHB 甚至比对于程序员更高效,因为程序员更多时候只需要专注于一个程序。转换思维环境的成本比表面上的要更高。操作系统和应用程序厂商已经构建了各种界面,以分散我们对这一现实的注意力。如果你是技术人员,通过快捷键(`M-:`)来访问功能强大的[语言解释器][33]会方便的多[^3] 。 + +许多应用程序可以全天全屏地用于编辑文本。但Emacs 是唯一的,因为它既是编辑器也是 Emacs Lisp 解释器。从本质上来说,你工作时只要用电脑上的一两个键就能完成。如果你略懂编程的话,就会发现这代表着你可以在 Emacs 中做 _任何事情_。一旦你在内存中存储了这些指令,你的电脑就可以在工作时几乎实时地为你提供高效的运转。你不会想用 Emacs Lisp 
来重建 Excel,因为只要用简单的一两行代码就能实现 Excel 中大多数的功能。比如说我要处理数字,我更有可能转到 scratch 缓冲区,编写一些代码,而不是打开电子表格。即便是要写一封比较大的邮件时,我通常也会先在 Emacs 中写完,然后再复制粘贴到邮件客户端中。当你可以流畅的书写时,为什么要去切换呢?你可以先从一两个简单的算术开始,随着时间的推移,你可以很容易的在 Emacs 中添加你所需要处理的计算。这在应用程序中可能是独一无二的,同时还提供了让为其他的人创造的丰富特性。还记得艾萨克·阿西莫夫书中那些神奇的终端吗? Emacs 是我所遇到的最接近它们的东西[^4] 。我决定不再用什么应用程序来做这个或那个。相反,我只是工作。拥有一个伟大的工具并致力于此,这才是真正的动力和效率。 + +#### 静中造物 + +拥有所发现的最好的文本编辑功能的最终结果是什么?有一群人在做各种各样有用的补充吗?发挥了 Lisp 键盘的全部威力了吗?我用 Emacs 来完成所有的创作性工作,音乐和图片除外。 + +我办公桌上有两个显示器。其中一块竖屏是将 Emacs 全天全屏显示,另一个显示浏览器,用来搜索和阅读,我通常也会打开一个终端。我将日历、邮件等放在 OS X 的另一个桌面上,当我使用 Emacs 时,这个桌面会隐藏起来,同时我也会关掉所有通知。这样就能让我专注于我手头上在做的事了。我发现,越是先进的 UI 应用程序,消除干扰越是不可能,因为这些应用程序致力于提供帮助和易用性。我不需要经常被提醒该如何操作,我已经做了成千上万次了,我真正需要的是一张干净整洁的白纸用来思考。也许因为年龄和自己的“恶习”,我不太喜欢处在嘈杂的环境中,但我认为这值得一试。看看在你电脑环境中有一些真正的宁静是怎样的。当然,现在很多应用程序都有隐藏界面的模式,谢天谢地,苹果和微软现在都有了真正意义上的全屏模式。但是,没有并没有应用程序可以强大到足以“处理”大多数事务。除非你整天写代码,或者像出书一样,处理很长的文档,否则你仍然会面临其他应用程序的干扰。而且,大多数现代应用程序似乎同时显得自视甚高,缺乏功能和可用性[^5] 。比起 office 桌面版,我更讨厌它的在线版。 + +![](https://www.fugue.co/hubfs/Imported_Blog_Media/desktop-1.jpg) + +*我的桌面布局, Emacs 在左边* + +但是沟通呢?创造和沟通之间的差别很大。当我将这两件事在不同时间段处理时,我的效率会更高。我们 Fugue 公司使用 Slack,痛并快乐着。我把 Slack 和我的日历、电子邮件放在一个即时通讯的桌面上,这样,当我正在做事时,我就能够忽略所有的聊天信息了。虽然只要一个 Slackstorm 或一封风投或董事会董事的电子邮件,就能让我立刻丢掉手头工作。但是,大多数事情通常可以等上一两个小时。 + +#### 普适恒久 + +第三个原因是,我发现 Emacs 比其它的环境更有优势的是,你可以很容易地用它来处理事务。我的意思是,你所需要的只是通过类似于 Dropbox 的网站同步一两个目录,而不是让大量的应用程序以它们自己的方式进行交互和同步。然后,你可以在任何你已经精心打造了适合你的目的的套件的环境中工作了。我在 OS X、Windows,或有时在 Linux 都是这样做的。它非常简单可靠。这种功能很有用,以至于我害怕处理 Pages、Google Docs、Office 或其他类型的文件和应用程序,这些文件和应用程序会迫使我回到文件系统或云中的某个地方去寻找。 + +限制在计算机上永久存储的因素是文件格式。假设人类已经解决了存储问题[^6] ,随着时间的推移,我们面临的问题是我们能否够继续访问我们创建的信息。文本文件是保存时间最久的格式。你可以用 Emacs 轻松地打开 1970 年的文本文件。然而对于 Office 应用程序却并非如此。同时文本文件要比 Office 应用程序数据文件小得多,也要好的多。作为一个数码背包迷,作为一个在脑子里一闪而过就会做很多小笔记的人,拥有一个简单、轻便、永久、随时可用的东西对我来说很重要。 + +如果你准备尝试 Emacs,请继续读下去!下面的部分不是完整的教程,但是在读完后,就可以动手操作了。 + +### 驾驭之道 —— 专业定制 + +所有这些强大、精神上的平静和安宁的代价是,Emacs 有一个陡峭的学习曲线,它的一切都与你以前所习惯的不同。一开始,这会让你觉得你是在浪费时间在一个过时和奇怪的应用程序上,就好像穿越到过去。这有点像你只开过车,却要你去学骑自行车[^7] 。 + +#### 类型抉择 + +我用的是来自 GNU 的 OS X 和 Windows 的通用版本的 Emacs。你可以在 [http://emacsformacos.com/][35] 获取 OS X 版本,在 [http://www.gnu.org/software/emacs/][37] 获取 Windows 版本。市面上还有很多其他版本,尤其是 Mac 版本,但我发现,要做一些功能强大的东西(涉及到 Lisp 和许多模式),学习曲线要比实际操作低得多。下载,然后我们就可以开始了[^8] ! 
+ +#### 驾驭之始 + +在本文中,我将使用 Emacs 的按键和组合键约定。`C` 表示 `Control` 键,`M` 表示 `meta`(通常是 `Alt` 或 `Option` 键),以及用于组合键的连字符。因此,`C-h t` 表示同时按下 `Control` 和 `h` 键,然后释放,再按下 `t`。这个组合快捷键会指向一个教程,这是你首先要做的一件事。 + +不要使用方向键或鼠标。它们可以工作,但是你应该给自己一周的时间来使用 Emacs 教程中的原生的导航命令。一旦你这些命令变为了肌肉记忆,你可能就会乐在其中,无论到哪里,你都会非常想念它们。这个 Emacs 教程在介绍它们方面做得很好,但是我将进行总结,所以你不需要阅读全部内容。最无聊的是,不用方向键,用 `C-b` 向前移动,用 `C-f` 向后移动,上一行用 `C-p`,下一行用 `C-n`。你可能会想:“我用方向键就很好,为什么还要这样做?” 有几个原因。首先,你不需要从主键盘区将你的手移开。第二,使用 `Alt`(或用 Emacs 的说法 `Meta`)键来向前或向后在单词间移动。显而易见这样更方便。第三,如果想重复某个命令,可以在命令前面加上一个数字。在编辑文档时,我经常使用这种方法,通过估计向后移动多少个单词或向上或向下移动多少行,然后按下 `C-9 C-p` 或 `M-5 M-b` 之类的快捷键。其它真正重要的导航命令基于开头用 `a` 和结尾用 `e`。在行中使用 `C-a|e`,在句中使用 `M-a|e`。为了让句中的命令正常工作,需要在句号后增加两个空格,这同时提供了一个有用的特性,并消除了脑中一个过时的[观点][38]。如果需要将文档导出到单个空间[发布环境][39],可以编写一个宏来执行此操作。 + +Emacs 所附带的教程很值得去看。对于真正缺乏耐心的人,我将介绍一些重要的命令,但那个教程非常有用。记住:用 `C-h t` 进入教程。 + +#### 驾驭之复制粘贴 + +你可以把 Emacs 设为 CUA 模式,这将会以熟悉的方式工作来操作复制粘贴,但是原生的 Emacs 方法更好,而且你一旦学会了它,就很容易。你可以使用 `Shift` 和导航命令来标记区域(如同选择)。所以 `C-F` 是选中光标前的一个字符,等等。亦可以用 `M-w` 来复制,用 `C-w` 剪切,然后用 `C-y` 粘贴。这些实际上叫做删除killing召回yanking,但它非常类似于剪切和粘贴。在删除中还有一些小技巧,但是现在,你只需要关注剪切、复制和粘贴。如果你开始尝试了,那么 `C-x u` 是撤销。 + +#### 驾驭之 Ido 模式 + +相信我,Ido 会让文件的工作变得很简单。通常,你在 Emacs 中处理文件不需要使用一个单独的访达或文件资源管理器的窗口。相反,你可以用编辑器的命令来创建、打开和保存文件。如果没有 Ido 的话,这将有点麻烦,所以我建议你在学习其他之前安装好它。 Ido 是 Emacs 的 22 版时开始出现的,但是需要对你的 `.emacs` 文件做一些调整,来确保它一直开启着。这是个配置环境的好理由。 + +Emacs 中的大多数功能都表现在模式上。要安装指定的模式,需要做两件事。嗯,一开始你需要做一些额外的事情,但这些只需要做一次,然后再做这两件事。那么,这件额外的事情是你需要一个单独的位置来放置所有 Emacs Lisp 文件,并且你需要告诉 Emacs 这个位置在哪。我建议你在 Dropbox 上创建一个单独的目录,那是你 Emacs 主目录。在这里,你需要创建一个 `.emacs` 文件和 `.emacs.d` 目录。在 `.emacs.d` 目录下,创建一个 `lisp` 的目录。就像这样: + +``` +home +| ++.emacs +| +-.emacs.d + | + -lisp +``` + +你可以将 `.el` 文件,比如说模式文件,放到 `home/.emacs.d/lisp` 目录下,然后在你的 `.emacs` 文件中添加以下代码来指明该路径: + +``` +(add-to-list 'load-path "~/.emacs.d/lisp/") +``` + +Ido 模式是 Emacs 自带的,所以你不需要在你的 `lisp` 目录中放这个 `.el` 文件,但你仍然需要添加上面代码,因为下面的介绍会使用到它. 
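+
+顺带一提,前面描述的目录结构也可以在终端里一次性建好。这里假设你把 Dropbox 中的 Emacs 主目录放在 `~/Dropbox/emacs`,目录名只是示例,请按你的实际情况调整:
+
+```
+mkdir -p ~/Dropbox/emacs/.emacs.d/lisp   # 一并创建 .emacs.d 及其下的 lisp 目录
+touch ~/Dropbox/emacs/.emacs             # 创建空的 .emacs 配置文件
+```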
+ +#### 驾驭之符号链接 + +等等,这里写的 `.emacs` 和 `.emacs.d` 都是存放在你的主目录下,但我们把它们放到了 Dropbox 的某些愚蠢的文件夹!对,这就让你的环境在任何地方都很容易使用。把所有东西都保存在 Dropbox 上,并做符号链接到 `~` 下的 `.emacs` 、`.emacs.d` 和你的主要存放文档的目录。在 OS X 上,使用 `ln -s` 命令非常简单,但在 Windows 上却很麻烦。幸运的是,Emacs 提供了一种简单的方法来替代 Windows 上的符号链接,Windows 的 `HOME` 环境变量。转到 Windows 的环境变量(Windows 10,你可以按 Windows 键然后输入 “环境变量” 来搜索,这是 Windows 10 最好的地方了),在你的帐户下创建一个指向你在 Dropbox 中 Emacs 的文件夹的 `HOME` 环境变量。如果你想方便地浏览 Dropbox 之外的本地文件,你可能想在你的实际主目录下建立一个到 Dropbox 下 Emacs 主目录的符号链接。 + +至此,你已经完成了在任意机器上指向你的 Emacs 配置和文件所需的技巧。如果你买了一台新电脑,或者用别人的电脑一小时或一天,你就可以得到你的整个工作环境。第一次操作起来似乎有点难,但是一旦你知道你在做什么,就(最多)只需要 10 分钟。 + +但我们现在是在配置 Ido …… + +按下 `C-x` `C-f` 然后输入 `~/.emacs` 和两次回车来创建 `.emacs` 文件,将下面几行添加进去: + +``` +;; set up ido mode +(require `ido) +(setq ido-enable-flex-matching t) +(setq ido-everywhere t) +(ido-mode 1) +``` + +在 `.emacs` 窗口开着的时候,执行 `M-x evaluate-buffer` 命令。如果某处弄错了的话,将得到一个错误,或者你将得到 Ido。Ido 改变了在 minibuffer 中操作文件操方式。关于这个有一篇比较好的文档,但是我也会指出一些技巧。有效地使用 `~/`;你可以在 minibuffer 的任何地方输入 `~/`,它就会跳转到主目录。这就意味着,你应该让你的大部分东西就近的放在主目录下。我用 `~/org` 目录来保存所有非代码的东西,用 `~/code` 保存代码。一旦你进入到正确的目录,通常会拥有一组具有不同扩展名的文件,特别是当你使用 Org 模式并从中发布的话。你可以输入 `.` 和想要的扩展名,无论你的在文件名的什么位置,Ido 都会将选择限制在具有该扩展名的文件中。例如,我在 Org 模式下写这篇博客,所以该文件是: + +``` +~/org/blog/emacs.org +``` + +我偶尔也会用 Org 模式发布成 HTML 格式,所以我将在同一目录下得到 `emacs.html` 文件。当我想打开该 Org 文件时,我会输入: + +``` +C-x C-f ~/o[RET]/bl[RET].or[RET] +``` + +其中 `[RET]` 是我使用 `Ido` 模式的自动补全而按下的回车键。所以,这只需要按 12 个键,如果你习惯了的话, 这将比打开访达或文件资源管理器再用鼠标点要节省 _很_ 多时间。 Ido 模式很有用,而这只是操作 Emacs 的一种实用模式而已。下面让我们去探索一些其它对完成工作很有帮助的模式吧。 + +#### 驾驭之字体风格 + +我推荐在 Emacs 中使用漂亮的字体族。它们可以使用不同的括号、0 和其他字符进行自定义。你可以在字体文件本身中构建额外的行间距。我推荐 1.5 倍的行间距,并在代码和数据中使用不等宽字体。写作中我用 `Serif` 字体,它有一种紧凑但时髦的感觉。你可以在 [http://input.fontbureau.com/][40] 上找到它们,在那里你可以根据自己的喜好进行定制。你可以使用 Emacs 中的菜单手动设置字体,但这会将代码保存到你的 `.emacs` 文件中,如果你使用多个设备,你可能需要一些不同的设置。我将我的 `.emacs` 设置为根据使用的机器的名称来相应配置屏幕。代码如下: + +``` +;; set up fonts for different OSes. OSX toggles to full screen. 
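+;; 注:下面的 cond 会按 (system-name) 的返回值逐台机器选择配置;
+;; 其中的机器名和 :height 字号都是作者机器上的值,请换成你自己的。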
+(setq myfont "InputSerif") +(cond +((string-equal system-name "Sampo.local") + (set-face-attribute 'default nil :font myfont :height 144) + (toggle-frame-fullscreen)) +((string-equal system-name "Morpheus.local") + (set-face-attribute 'default nil :font myfont :height 144)) +((string-equal system-name "ILMARINEN") + (set-face-attribute 'default nil :font myfont :height 106)) +((string-equal system-name "UKKO") + (set-face-attribute 'default nil :font myfont :height 104))) +``` + +你应该将 Emacs 中的 `system-name` 的值替换成你通过 `(system-name)` 得到的值。注意,在 Sampo (我的 MacBook)上,我还将 Emacs 设置为全屏。我也想在 Windows 实现这个功能,但是 Windows 和 Emacs 好像互相嫌弃对方,当我尝试配置时,它总是不稳定。相反,我只能在启动后手动全屏。 + +我还建议去掉 Emacs 中的上世纪 90 年代出现的难看工具栏,当时比较流行在应用程序中使用工具栏。我还去掉了一些其它的“电镀层”,这样我就有了一个简单、高效的界面。把这些加到你的 `.emacs` 的文件中来去掉工具栏和滚动条,但要保留菜单(在 OS X 上,它将被隐藏,除非你将鼠标到屏幕顶部): + +``` +(if (fboundp 'scroll-bar-mode) (scroll-bar-mode -1)) +(if (fboundp 'tool-bar-mode) (tool-bar-mode -1)) +(if (fboundp 'menu-bar-mode) (menu-bar-mode 1)) +``` + +#### 驾驭之 Org 模式 + +我基本上是在 Org 模式下处理工作的。它是我创作文档、记笔记、列任务清单以及 90% 其他工作的首选环境。Org 模式是笔记和待办事项列表的组合工具,最初是由一个在会议中使用笔记本电脑的人构想出来的。我反对在会议中使用笔记本电脑,自己也不使用,所以我的用法与他的有些不同。对我来说,Org 模式主要是一种处理结构中内容的方式。在 Org 模式中有标题和副标题等,它们的作用就像一个大纲。Org 模式允许你展开或隐藏大纲树,还可以重新排列该树。这正合我意,并且我发现用这种方式使用它是一种乐趣。 + +Org 模式也有很多让生活愉快的小功能。例如,脚注处理非常好,LaTeX/PDF 输出也很好。Org 模式能够根据所有文档中的待办事项生成议程,并能很好地将它们与日期/时间联系起来。我不把它用在任何形式的外部任务上,这些任务都是在一个共享的日历上处理的,但是在创建事物和跟踪我未来需要创建的东西时,它是无价的。安装它,你只要将 `org-mode.el` 放到你的 `lisp` 目录下。如果你想要它基于文档的结构进行缩进并在打开时全部展开的话,在你的 `.emacs` 文件中添加如下代码: + +``` +;; set up org mode +(setq org-startup-indented t) +(setq org-startup-folded "showall") +(setq org-directory "~/org") +``` + +最后一行是让 Org 模式知道在哪里查找要包含在议程和其他事情中的文件。我把 Org 模式保存在我的主目录中,也就是说,像前面介绍的一样,它是 Dropbox 目录的一个符号链接。 + +我有一个总是在缓冲区中打开的 `stuff.org` 文件。我把它当作记事本。Org 模式使得提取待办事项和有期限的事情变得很容易。当你能够内联 Lisp 代码并在需要计算它时,它特别有用。拥有包含内容的代码非常方便。同样,你可以使用 Emacs 访问实际的计算机,这是一种解放。 + +##### 用 Org 模式进行发布 + +我关心的是文档的外观及格式。我刚开始工作时是个设计师,而且我认为信息可以,也应该表现得清晰和美丽。Org 模式对将 LaTeX 生成 PDF 支持的很好,LaTeX 虽然也有学习曲线,但是很容易处理一些简单的事务。 + +如果你想使用字体和样式,而不是典型的 LaTeX 字体和样式,你需要做些事。首先,你要用到 XeLaTeX,这样就可以使用普通的系统字体,而不是 LaTeX 的特殊字体。接下来,你需要将以下代码添加到 `.emacs` 中: + +``` +(setq org-latex-pdf-process + '("xelatex -interaction nonstopmode %f" + "xelatex -interaction nonstopmode %f")) +``` + +我把这个放在 `.emacs` 中 Org 模式配置部分的末尾,使文档变得更整洁。这让你在从 Org 模式发布时可以使用更多格式化选项。例如,我经常使用: + +``` +#+LaTeX_HEADER: \usepackage{fontspec} +#+LATEX_HEADER: \setmonofont[Scale=0.9]{Input Mono} +#+LATEX_HEADER: \setromanfont{Maison Neue} +#+LATEX_HEADER: \linespread{1.5} +#+LATEX_HEADER: \usepackage[margin=1.25in]{geometry} + +#+TITLE: Document Title Here +``` + +这些都可以在 `.org` 文件中找到。我们的公司规定的正文字体是 `Maison Neue`,但你也可以在这写上任何适当的东西。我很是抵制 `Maison Neue`,因为这是一种糟糕的字体,任何人都不应该使用它。 + +这个文件是一个使用该配置输出为 PDF 的实例。这就是开箱即用的 LaTeX 一样。在我看来这还不错,但是字体很平淡,而且有点奇怪。此外,如果你使用标准格式,人们会觉得他们正在阅读的东西是、或者假装是一篇学术论文。别怪我没提醒你。 + +#### 驾驭之 Ace Jump 模式 + +这只是一个辅助模式,而不是一个主模式,但是你也需要它。其工作原理有点像之前提到的 Jef Raskin 的 Leap 功能[^9] 。 按下 `C-c C-SPC`,然后输入要跳转到单词的第一个字母。它会高亮显示所有以该字母开头的单词,并将其替换为字母表中的字母。你只需键入所需位置的字母,光标就会跳转到该位置。我常将它作为导航键或是用来检索。将 `.el` 文件下到你的 `lisp` 目录下,并在 `.emacs` 文件添加如下代码: + +``` +;; set up ace-jump-mode +(add-to-list 'load-path "which-folder-ace-jump-mode-file-in/") +(require 'ace-jump-mode) +(define-key global-map (kbd "C-c C-SPC" ) 'ace-jump-mode) +``` + +### 待续 + +本文已经够详细了,你能在其中得到你所想要的。我很想知道除编程之外(或用于编程)Emacs 的使用情况,及其是否高效。在我使用 Emacs 的过程中,可能存在一些自作聪明的老板式想法,如果你能指出来,我将不胜感激。之后,我可能会写一些更新来介绍其它特性或模式。我很确定我将会向你展示如何在 Emacs 和 Ludwig 模式下使用 Fugue,因为我会将它发展成比代码高亮更有用的东西。更多想法,请在 Twitter 上 [@fugueHQ][41] 。 + +### 脚注 + +[^1]: 如果你是位精英,但从没涉及过技术方面,那么 Emacs 
并不适合你。对于少数的人来说,Emacs 可能会为他们开辟一条通往计算机技术方面的道路,但这只是极少数。如果你知道怎么使用 Unix 或 Windows 的终端,或者曾编辑过 dotfile,或者说你曾写过一点代码的话,这对使用 Emacs 有很大的帮助。 + +[^2]: 参考链接: http://archive.wired.com/wired/archive/2.08/tufte.html + +[^3]: 我主要是在写作时使用这个模式来进行一些运算。比如说,当我在给一个新雇员写一封入职信时,我想要算这封入职信中有多少个选项。由于我在我的 `.emacs` 为 outstanding-shares 定义了一个变量,所以我只要按下 `M-:` 然后输入 `(* .001 outstanding-shares)` 就能再无需打开计算器或电子表格的情况下得到精度为 0.001 的结果。我使用了 _大量_ 的变量来避免程序间切换。 + +[^4]: 缺少的部分是 web。有个名为 eww 的 Emacs 网页浏览器能够让你在 Emacs 中浏览网页。我用的就是这个,因为它既能拦截广告(LCTT 译注:实质上是无法显示,/laugh),同时也在可读性方面为 web 开发者消除了大多数差劲的选项。这个其实有点类似于 Safari 的阅读模式。不幸的是,大部分网站都有很多令人讨厌的繁琐的东西以及难以转换为文本的导航, + +[^5]: 易用性和易学性这两者经常容易被搞混。易学性是指学习使用工具的难易程度。而易用性是指工具高效的程度。通常来说,这是要差别的,就想鼠标和菜单栏的差别一样。菜单栏很容易学会,但是却不怎么高效,以致于早期会存在一些键盘的快捷键。除了在 GUI 方面上,Raskin 在很多方面上的观点都很正确。如今,操作系统正在将一些合适的搜索映射到键盘的快捷键上。比如说在 OS X 和 Windows 上,我默认的导航方式就是搜索。Ubuntu 的搜索做的很差劲,如同它的 GUI 一样差劲。 + +[^6]: 在有网的情况下,[AWS S3][42] 是解决文件存储问题的有效方案。数万亿个对象存在 S3 中,但是从来没有遗失过。大部分提供云存储的服务都是在 S3 上或是模拟 S3 构建的。没人能够拥有 S3 一样的规模,所以我将重要的文件通过 Dropbox 存储在上面。 + +[^7]: 目前,你可能会想:“这个人和自行车有什么关系?”……我在各个层面上都喜欢自行车。自行车是迄今为止发明的最具机械效率的交通工具。自行车可以是真正美丽的事物。而且,只要注意点的话,自行车可以用一辈子。早在 2001 年,我曾向 Rivendell Bicycle Works 订购了一辆自行车,现在我每次看到那辆自行车依然很高兴,自行车和 Unix 是我接触过的最好的两个发明。对了,还有 Emacs。 + +[^8]: 这个网站有一个很棒的 Emacs 教程,但不是这个。当我浏览这个页面时,我确实得到了一些对获取高效的 Emacs 配置很重要的知识,但无论怎么说,这都不是个替代品。 + +[^9]: 20 世纪 80 年代,Jef Raskin 与 Steve Jobs 在 Macintosh 项目上闹翻后, Jef Raskin 又设计了 [Canon Cat 计算机][43]。这台 Cat 计算机是以文档为中心的界面(所有的计算机都应如此),并以一种全新的方式使用键盘,你现在可以用 Emacs 来模仿这种键盘。如果现在有一台现代的,功能强大的 Cat 计算机并配有一个高分辨的显示器和 Unix 系统的话,我立马会用 Mac 来换。[https://youtu.be/o_TlE_U_X3c?t=19s][28] + +-------------------------------------------------------------------------------- + +via: https://blog.fugue.co/2015-11-11-guide-to-emacs.html + +作者:[Josh Stella][a] +译者:[oneforalone](https://github.com/oneforalone) +校对:[wxy](https://github.com/wxy), [oneforalone](https://github.com/oneforalone) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://blog.fugue.co/authors/josh.html +[1]:https://blog.fugue.co/2013-10-16-vpc-on-aws-part3.html +[2]:https://blog.fugue.co/2013-10-02-vpc-on-aws-part2.html +[3]:http://ww2.fugue.co/2017-05-25_OS_AR_GartnerCoolVendor2017_01-LP-Registration.html +[4]:https://blog.fugue.co/authors/josh.html +[5]:https://twitter.com/joshstella +[6]:https://www.youtube.com/watch?v=khJQgRLKMU0 +[7]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#phb +[8]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#tufte +[9]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#interpreter +[10]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#eww +[11]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#usability +[12]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#s3 +[13]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#bicycles +[14]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#nottutorial +[15]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#canoncat 
+[16]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#phbOrigin +[17]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#tufteOrigin +[18]:http://archive.wired.com/wired/archive/2.08/tufte.html +[19]:http://archive.wired.com/wired/archive/2.08/tufte.html +[20]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#interpreterOrigin +[21]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#ewwOrigin +[22]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#usabilityOrigin +[23]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#s3Origin +[24]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#bicyclesOrigin +[25]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#nottutorialOrigin +[26]:https://blog.fugue.co/2015-11-11-guide-to-emacs.html?utm_source=wanqu.co&utm_campaign=Wanqu+Daily&utm_medium=website#canoncatOrigin +[27]:https://youtu.be/o_TlE_U_X3c?t=19s +[28]:https://youtu.be/o_TlE_U_X3c?t=19s +[29]:https://blog.fugue.co/authors/josh.html +[30]:http://www.huffingtonpost.com/zachary-ehren/soma-isnt-a-drug-san-fran_b_987841.html +[31]:http://www.campagnolo.com/US/en +[32]:http://www.businessinsider.com/best-pointy-haired-boss-moments-from-dilbert-2013-10 +[33]:http://www.webopedia.com/TERM/I/interpreter.html +[34]:http://emacsformacosx.com/ +[35]:http://emacsformacosx.com/ +[36]:http://www.gnu.org/software/emacs/ +[37]:http://www.gnu.org/software/emacs/ +[38]:http://www.huffingtonpost.com/2015/05/29/two-spaces-after-period-debate_n_7455660.html +[39]:http://practicaltypography.com/one-space-between-sentences.html +[40]:http://input.fontbureau.com/ +[41]:https://twitter.com/fugueHQ +[42]:https://baike.baidu.com/item/amazon%20s3/10809744?fr=aladdin +[43]:https://en.wikipedia.org/wiki/Canon_Cat diff --git a/published/201812/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md b/published/201812/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md new file mode 100644 index 0000000000..bca48001bf --- /dev/null +++ b/published/201812/20171129 TLDR pages Simplified Alternative To Linux Man Pages.md @@ -0,0 +1,106 @@ +TLDR 页:Linux 手册页的简化替代品 +============== + +[![](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu-640x360.jpg "tldr page ubuntu")][22] + +在终端上使用各种命令执行重要任务是 Linux 桌面体验中不可或缺的一部分。Linux 这个开源操作系统拥有[丰富的命令][23],任何用户都无法全部记住所有这些命令。而使事情变得更复杂的是,每个命令都有自己的一组带来丰富的功能的选项。 + +为了解决这个问题,人们创建了[手册页][12]man page,(手册 —— man 是 manual 的缩写)。首先,它是用英文写成的,包含了大量关于不同命令的深入信息。有时候,当你在寻找命令的基本信息时,它就会显得有点庞杂。为了解决这个问题,人们创建了[TLDR 页][13]。 + +### 什么是 TLDR 页? 
+ +TLDR 页的 GitHub 仓库将其描述为简化的、社区驱动的手册页集合。在实际示例的帮助下,努力让使用手册页的体验变得更简单。如果还不知道,TLDR 取自互联网的常见俚语:太长没读Too Long Didn’t Read。 + +如果你想比较一下,让我们以 `tar` 命令为例。 通常,手册页的篇幅会超过 1000 行。`tar` 是一个归档实用程序,经常与 `bzip` 或 `gzip` 等压缩方法结合使用。看一下它的手册页: + +[![tar man page](https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg)][14] + +而另一方面,TLDR 页面让你只是浏览一下命令,看看它是如何工作的。 `tar` 的 TLDR 页面看起来像这样,并带有一些方便的例子 —— 你可以使用此实用程序完成的最常见任务: + +[![tar tldr page](https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg)][15] + +让我们再举一个例子,向你展示 TLDR 页面为 `apt` 提供的内容: + +[![tldr-page-of-apt](https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg)][16] + +如上,它向你展示了 TLDR 如何工作并使你的生活更轻松,下面让我们告诉你如何在基于 Linux 的操作系统上安装它。 + +### 如何在 Linux 上安装和使用 TLDR 页? + +最成熟的 TLDR 客户端是基于 Node.js 的,你可以使用 NPM 包管理器轻松安装它。如果你的系统上没有 Node 和 NPM,请运行以下命令: + +``` +sudo apt-get install nodejs +sudo apt-get install npm +``` + +如果你使用的是 Debian、Ubuntu 或 Ubuntu 衍生发行版以外的操作系统,你可以根据自己的情况使用`yum`、`dnf` 或 `pacman`包管理器。 + +现在,通过在终端中运行以下命令,在 Linux 机器上安装 TLDR 客户端: + +``` +sudo npm install -g tldr +``` + +一旦安装了此终端实用程序,最好在尝试之前更新其缓存。 为此,请运行以下命令: + +``` +tldr --update +``` + +执行此操作后,就可以阅读任何 Linux 命令的 TLDR 页面了。 为此,只需键入: + +``` +tldr +``` + +[![tldr kill command](https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg)][17] + +你还可以运行其[帮助命令](https://github.com/tldr-pages/tldr-node-client),以查看可与 TLDR 一起使用的各种参数,以获取所需输出。 像往常一样,这个帮助页面也附有例子。 + +### TLDR 的 web、Android 和 iOS 版本 + +你会惊喜地发现 TLDR 页不仅限于你的 Linux 桌面。 相反,它也可以在你的 Web 浏览器中使用,可以从任何计算机访问。 + +要使用 TLDR Web 版本,请访问 [tldr.ostera.io][18] 并执行所需的搜索操作。 + +或者,你也可以下载 [iOS][19] 和 [Android][20] 应用程序,并随时随地学习新命令。 + +[![tldr app ios](https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg)][21] + +你觉得这个很酷的 Linux 终端技巧很有意思吗? 请尝试一下,让我们知道您的反馈。 + +-------------------------------------------------------------------------------- + +via: https://fossbytes.com/tldr-pages-linux-man-pages-alternative/ + +作者:[Adarsh Verma][a] +译者:[wxy](https://github.com/wxy) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://fossbytes.com/author/adarsh/ +[1]:https://fossbytes.com/watch-star-wars-command-prompt-via-telnet/ +[2]:https://fossbytes.com/use-stackoverflow-linux-terminal-mac/ +[3]:https://fossbytes.com/single-command-curl-wttr-terminal-weather-report/ +[4]:https://fossbytes.com/how-to-google-search-in-command-line-using-googler/ +[5]:https://fossbytes.com/check-bitcoin-cryptocurrency-prices-command-line-coinmon/ +[6]:https://fossbytes.com/review-torrench-download-torrents-using-terminal-linux/ +[7]:https://fossbytes.com/use-wikipedia-termnianl-wikit/ +[8]:http://www.facebook.com/sharer.php?u=https%3A%2F%2Ffossbytes.com%2Ftldr-pages-linux-man-pages-alternative%2F +[9]:https://twitter.com/intent/tweet?text=TLDR+pages%3A+Simplified+Alternative+To+Linux+Man+Pages&url=https%3A%2F%2Ffossbytes.com%2Ftldr-pages-linux-man-pages-alternative%2F&via=%40fossbytes14 +[10]:http://plus.google.com/share?url=https://fossbytes.com/tldr-pages-linux-man-pages-alternative/ +[11]:http://pinterest.com/pin/create/button/?url=https://fossbytes.com/tldr-pages-linux-man-pages-alternative/&media=https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg +[12]:https://fossbytes.com/linux-lexicon-man-pages-navigation/ +[13]:https://github.com/tldr-pages/tldr +[14]:https://fossbytes.com/wp-content/uploads/2017/11/tar-man-page.jpg +[15]:https://fossbytes.com/wp-content/uploads/2017/11/tar-tldr-page.jpg 
+[16]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-of-apt.jpg +[17]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-kill-command.jpg +[18]:https://tldr.ostera.io/ +[19]:https://itunes.apple.com/us/app/tldt-pages/id1071725095?ls=1&mt=8 +[20]:https://play.google.com/store/apps/details?id=io.github.hidroh.tldroid +[21]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-app-ios.jpg +[22]:https://fossbytes.com/wp-content/uploads/2017/11/tldr-page-ubuntu.jpg +[23]:https://fossbytes.com/a-z-list-linux-command-line-reference/ diff --git a/published/201812/20171223 Celebrate Christmas In Linux Way With These Wallpapers.md b/published/201812/20171223 Celebrate Christmas In Linux Way With These Wallpapers.md new file mode 100644 index 0000000000..3aa2e6f3ea --- /dev/null +++ b/published/201812/20171223 Celebrate Christmas In Linux Way With These Wallpapers.md @@ -0,0 +1,224 @@ +[#]: collector: (lujun9972) +[#]: translator: (jlztan) +[#]: reviewer: (wxy) +[#]: publisher: (wxy) +[#]: subject: (Celebrate Christmas In Linux Way With These Wallpapers) +[#]: via: (https://itsfoss.com/christmas-linux-wallpaper/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) +[#]: url: (https://linux.cn/article-10381-1.html) + +以 Linux 的方式庆祝圣诞节 +====== + +当前正是假日季,很多人可能已经在庆祝圣诞节了。祝你圣诞快乐,新年快乐。 + +为了延续节日氛围,我将向你展示一些非常棒的圣诞主题的 [Linux 壁纸][1]。在呈现这些壁纸之前,先来看一棵 Linux 终端下的圣诞树。 + +### 让你的桌面飘雪(针对 GNOME 用户) + +- [Let it Snow on Your Linux Desktop](https://youtu.be/1QI1ludzZuA) + +如果您在 Ubuntu 18.04 或任何其他 Linux 发行版中使用 GNOME 桌面,您可以使用一个小的 [GNOME 扩展][55]并在桌面上飘雪。 + +您可以从软件中心或 GNOME 扩展网站获取此 gsnow 扩展。我建议您阅读一些关于[使用 GNOME 扩展][55]的内容。 + +安装此扩展程序后,您会在顶部面板上看到一个小雪花图标。 如果您单击一次,您会看到桌面屏幕上的小絮状物掉落。 + +![](https://itsfoss.com/wp-content/uploads/2018/12/snowfall-on-linux-desktop-1.webm) + +你可以再次点击该图标来禁止雪花落下。 + +### 在 Linux 终端下显示圣诞树 + +![Display Christmas Tree in Linux Terminal](https://i.giphy.com/xUNda6KphvbpYxL3tm.gif) + +如果你想要在终端里显示一个动画的圣诞树,你可以使用如下命令: + +``` +curl https://raw.githubusercontent.com/sergiolepore/ChristBASHTree/master/tree-EN.sh | bash +``` + +要是不想一直从互联网上获取这棵圣诞树,也可以从它的 [GitHub 仓库][2] 中获取对应的 shell 脚本,更改权限之后按照运行普通 shell 脚本的方式运行它。 + +### 使用 Perl 在 Linux 终端下显示圣诞树 + +[![Christmas Tree in Linux terminal by NixCraft][3]][4] + +这个技巧最初由 [NixCraft][5] 分享,你需要为此安装 Perl 模块。 + +说实话,我不喜欢使用 Perl 模块,因为卸载它们真的很痛苦。所以使用这个 Perl 模块时需谨记,你必须手动移除它。 + +``` +perl -MCPAN -e 'install Acme::POE::Tree' +``` + +你可以阅读 [原文][5] 来了解更多信息。 + +### 下载 Linux 圣诞主题壁纸 + +所有这些 Linux 圣诞主题壁纸都是由 Mark Riedesel 制作的,你可以在 [他的网站][6] 上找到很多其他艺术品。 + +自 2002 年以来,他几乎每年都在制作这样的壁纸。可以理解的是,最早的一些壁纸不具有现代的宽高比。我把它们按时间倒序排列。 + +注意一个小地方,这里显示的图片都是高度压缩的,因此你要通过图片下方提供的链接进行下载。 + +![Christmas Linux Wallpaper][56] + +*[下载此壁纸][57]* + +![Christmas Linux Wallpaper][7] + +*[下载此壁纸][8]* + +[![Christmas Linux Wallpapers][9]][10] + +*[下载此壁纸][11]* + +[![Christmas Linux Wallpapers][12]][13] + +*[下载此壁纸][14]* + +[![Christmas Linux Wallpapers][15]][16] + +*[下载此壁纸][17]* + +[![Christmas Linux Wallpapers][18]][19] + +*[下载此壁纸][20]* + +[![Christmas Linux Wallpapers][21]][22] + +*[下载此壁纸][23]* + +[![Christmas Linux Wallpapers][24]][25] + +*[下载此壁纸][26]* + +[![Christmas Linux Wallpapers][27]][28] + +*[下载此壁纸][29]* + +[![Christmas Linux Wallpapers][30]][31] + +*[下载此壁纸][32]* + +[![Christmas Linux Wallpapers][33]][34] + +*[下载此壁纸][35]* + +[![Christmas Linux Wallpapers][36]][37] + +*[下载此壁纸][38]* + +[![Christmas Linux Wallpapers][39]][40] + +*[下载此壁纸][41]* + +[![Christmas Linux Wallpapers][42]][43] + +*[下载此壁纸][44]* + +[![Christmas Linux Wallpapers][45]][46] + +*[下载此壁纸][47]* + +[![Christmas Linux 
Wallpapers][48]][49] + +*[下载此壁纸][50]* + +### 福利:Linux 圣诞颂歌 + +这是给你的一份福利,给像我们一样的 Linux 爱好者的关于 Linux 的圣诞颂歌。 + +在 [《计算机世界》的一篇文章][51] 中,[Sandra Henry-Stocker][52] 分享了这些圣诞颂歌。摘录片段如下: + +这一段用的 [Chestnuts Roasting on an Open Fire][53] 的曲调: + +> Running merrily on open source +> +> With users happy as can be +> +> We’re using Linux and getting lots done + +> And happy everything is free + +这一段用的 [The Twelve Days of Christmas][54] 的曲调: + +> On my first day with Linux, my admin gave to me a password and a login ID +> +> On my second day with Linux my admin gave to me two new commands and a password and a login ID + +在 [这里][51] 阅读完整的颂歌。 + +Linux 快乐! + +------ + +via: https://itsfoss.com/christmas-linux-wallpaper/ + +作者:[Abhishek Prakash][a] +选题:[lujun9972][b] +译者:[jlztan](https://github.com/jlztan) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/abhishek/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/beautiful-linux-wallpapers/ +[2]: https://github.com/sergiolepore/ChristBASHTree +[3]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/12/perl-tree.gif?resize=600%2C622&ssl=1 +[4]: https://itsfoss.com/christmas-linux-wallpaper/perl-tree/ +[5]: https://www.cyberciti.biz/open-source/command-line-hacks/linux-unix-desktop-fun-christmas-tree-for-your-terminal/ +[6]: http://www.klowner.com/ +[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmas-linux-wallpaper-featured.jpeg?resize=800%2C450&ssl=1 +[8]: http://klowner.com/wallery/christmas_tux_2017/download/ChristmasTux2017_3840x2160.png +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2016_3840x2160_result.jpg?resize=800%2C450&ssl=1 +[10]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2016_3840x2160_result/ +[11]: http://www.klowner.com/wallpaper/christmas_tux_2016/ +[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2015_2560x1920_result.jpg?resize=800%2C600&ssl=1 +[13]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2015_2560x1920_result/ +[14]: http://www.klowner.com/wallpaper/christmas_tux_2015/ +[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2014_2560x1440_result.jpg?resize=800%2C450&ssl=1 +[16]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2014_2560x1440_result/ +[17]: http://www.klowner.com/wallpaper/christmas_tux_2014/ +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmastux2013_result.jpg?resize=800%2C450&ssl=1 +[19]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2013_result/ +[20]: http://www.klowner.com/wallpaper/christmas_tux_2013/ +[21]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2012_2560x1440_result.jpg?resize=800%2C450&ssl=1 +[22]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2012_2560x1440_result/ +[23]: http://www.klowner.com/wallpaper/christmas_tux_2012/ +[24]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmastux2011_2560x1440_result.jpg?resize=800%2C450&ssl=1 +[25]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2011_2560x1440_result/ +[26]: http://www.klowner.com/wallpaper/christmas_tux_2011/ +[27]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/12/christmastux2010_5120x2880_result.jpg?resize=800%2C450&ssl=1 +[28]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2010_5120x2880_result/ +[29]: http://www.klowner.com/wallpaper/christmas_tux_2010/ +[30]: 
https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2009_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[31]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2009_1600x1200_result/ +[32]: http://www.klowner.com/wallpaper/christmas_tux_2009/ +[33]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2008_2560x1600_result.jpg?resize=800%2C500&ssl=1 +[34]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2008_2560x1600_result/ +[35]: http://www.klowner.com/wallpaper/christmas_tux_2008/ +[36]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2007_2560x1600_result.jpg?resize=800%2C500&ssl=1 +[37]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2007_2560x1600_result/ +[38]: http://www.klowner.com/wallpaper/christmas_tux_2007/ +[39]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2006_1024x768_result.jpg?resize=800%2C600&ssl=1 +[40]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2006_1024x768_result/ +[41]: http://www.klowner.com/wallpaper/christmas_tux_2006/ +[42]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2005_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[43]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2005_1600x1200_result/ +[44]: http://www.klowner.com/wallpaper/christmas_tux_2005/ +[45]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2004_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[46]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2004_1600x1200_result/ +[47]: http://www.klowner.com/wallpaper/christmas_tux_2004/ +[48]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2002_1600x1200_result.jpg?resize=800%2C600&ssl=1 +[49]: https://itsfoss.com/christmas-linux-wallpaper/christmastux2002_1600x1200_result/ +[50]: http://www.klowner.com/wallpaper/christmas_tux_2002/ +[51]: http://www.computerworld.com/article/3151076/linux/merry-linux-to-you.html +[52]: https://twitter.com/bugfarm +[53]: https://www.youtube.com/watch?v=dhzxQCTCI3E +[54]: https://www.youtube.com/watch?v=oyEyMjdD2uk +[55]: https://itsfoss.com/gnome-shell-extensions/ +[56]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2016/12/ChristmasTux2018.jpeg?w=800&ssl=1 +[57]: http://www.klowner.com/wallery/christmas_tux_2018/download/ChristmasTux2018_4K_3840x2160.png diff --git a/published/201812/20171223 My personal Email setup - Notmuch, mbsync, postfix and dovecot.md b/published/201812/20171223 My personal Email setup - Notmuch, mbsync, postfix and dovecot.md new file mode 100644 index 0000000000..12f45713d4 --- /dev/null +++ b/published/201812/20171223 My personal Email setup - Notmuch, mbsync, postfix and dovecot.md @@ -0,0 +1,240 @@ +我的个人电子邮件系统设置:notmuch、mbsync、Postfix 和 dovecot +====== + +我使用个人电子邮件系统已经相当长的时间了,但是一直没有记录过文档。最近我换了我的笔记本电脑(职业变更导致的变动),我在试图重新创建本地邮件系统时迷茫了。所以这篇文章是一个给自己看的文档,这样我就不用费劲就能再次搭建出来。 + +### 服务器端 + +我运行自己的邮件服务器,并使用 Postfix 作为 SMTP 服务器,用 Dovecot 实现 IMAP。我不打算详细介绍如何配置这些设置,因为我的设置主要是通过使用 Jonas 为 Redpill 基础架构创建的脚本完成的。什么是 Redpill?(用 Jonas 自己的话说): + +> \ Redpill 是一个概念:一种设置 Debian hosts 去跨组织协作的方式 +> +> \ 我发展了这个概念,并将其首次用于 Redpill 网中网:redpill.dk,其中涉及到了我自己的网络(jones.dk),我的主要客户的网络(homebase.dk),一个包括 Skolelinux Germany(free-owl.de)的在德国的网络,和 Vasudev 的网络(copyninja.info) + +除此之外, 我还有一个 dovecot sieve 过滤,根据邮件的来源,对邮件进行高级分类,将其放到各种文件夹中。所有的规则都存在于每个有邮件地址的账户下的 `~/dovecot.sieve` 文件中。 + +再次,我不会详细介绍如何设置这些东西,因为这不是我这个帖子的目标。 + +### 在我的笔记本电脑上 + +在我的笔记本电脑上,我已经按照 4 个部分设置 + + 1. 邮件同步:使用 `mbsync` 命令完成 + 2. 分类:使用 notmuch 完成 + 3. 
阅读:使用 notmuch-emacs 完成 + 4. 邮件发送:使用作为中继服务器和 SMTP 客户端运行的 Postfix 完成。 + +### 邮件同步 + +邮件同步是使用 `mbsync` 工具完成的, 我以前是 OfflineIMAP 的用户,最近切换到 `mbsync`,因为我觉得它比 OfflineIMAP 的配置更轻量、更简单。该命令是由 isync 包提供的。 + +配置文件是 `~/.mbsyncrc`。下面是我的例子与一些个人设置。 + +``` +IMAPAccount copyninja +Host imap.copyninja.info +User vasudev +PassCmd "gpg -q --for-your-eyes-only --no-tty --exit-on-status-write-error --batch --passphrase-file ~/path/to/passphrase.txt -d ~/path/to/mailpass.gpg" +SSLType IMAPS +SSLVersion TLSv1.2 +CertificateFile /etc/ssl/certs/ca-certificates.crt + + +IMAPAccount gmail-kamathvasudev +Host imap.gmail.com +User kamathvasudev@gmail.com +PassCmd "gpg -q --for-your-eyes-only --no-tty --exit-on-status-write-error --batch --passphrase-file ~/path/to/passphrase.txt -d ~/path/to/mailpass.gpg" +SSLType IMAPS +SSLVersion TLSv1.2 +CertificateFile /etc/ssl/certs/ca-certificates.crt + +IMAPStore copyninja-remote +Account copyninja + +IMAPStore gmail-kamathvasudev-remote +Account gmail-kamathvasudev + +MaildirStore copyninja-local +Path ~/Mail/vasudev-copyninja.info/ +Inbox ~/Mail/vasudev-copyninja.info/INBOX + +MaildirStore gmail-kamathvasudev-local +Path ~/Mail/Gmail-1/ +Inbox ~/Mail/Gmail-1/INBOX + +Channel copyninja +Master :copyninja-remote: +Slave :copyninja-local: +Patterns * +Create Both +SyncState * +Sync All + +Channel gmail-kamathvasudev +Master :gmail-kamathvasudev-remote: +Slave :gmail-kamathvasudev-local: +# Exclude everything under the internal [Gmail] folder, except the interesting folders +Patterns * ![Gmail]* +Create Both +SyncState * +Sync All +``` + +对上述配置中的一些有趣部分进行一下说明。一个是 PassCmd,它允许你提供 shell 命令来获取帐户的密码。这样可以避免在配置文件中填写密码。我使用 gpg 的对称加密,并在我的磁盘上存储密码。这当然是由 Unix ACL 保护安全的。 + +实际上,我想使用我的公钥来加密文件,但当脚本在后台或通过 systemd 运行时,解锁文件看起来很困难 (或者说几乎不可能)。如果你有更好的建议,我洗耳恭听:-)。 + +下一个指令部分是 Patterns。这使你可以有选择地同步来自邮件服务器的邮件。这对我来说真的很有帮助,可以排除所有的 “[Gmail]/ folders” 垃圾目录。 + +### 邮件分类 + +一旦邮件到达你的本地设备,我们需要一种方法来轻松地在邮件读取器中读取邮件。我最初的设置使用本地 dovecot 实例提供同步的 Maildir,并在 Gnus 中阅读。这种设置相比于设置所有的服务器软件是有点大题小作,但 Gnus 无法很好地应付 Maildir 格式,这是最好的方法。这个设置也有一个缺点,那就是在你快速搜索邮件时,要搜索大量邮件。而这就是 notmuch 的用武之地。 + +notmuch 允许我轻松索引上千兆字节的邮件档案而找到我需要的东西。我已经创建了一个小脚本,它结合了执行 `mbsync` 和 `notmuch`。我使用 dovecot sieve 来基于实际上创建在服务器端的 Maildirs 标记邮件。下面是我的完整 shell 脚本,它执行同步分类和删除垃圾邮件的任务。 + +``` +#!/bin/sh + +MBSYNC=$(pgrep mbsync) +NOTMUCH=$(pgrep notmuch) + +if [ -n "$MBSYNC" -o -n "$NOTMUCH" ]; then + echo "Already running one instance of mail-sync. Exiting..." 
+ exit 0 +fi + +echo "Deleting messages tagged as *deleted*" +notmuch search --format=text0 --output=files tag:deleted |xargs -0 --no-run-if-empty rm -v + +echo "Moving spam to Spam folder" +notmuch search --format=text0 --output=files tag:Spam and \ + to:vasudev@copyninja.info | \ + xargs -0 -I {} --no-run-if-empty mv -v {} ~/Mail/vasudev-copyninja.info/Spam/cur +notmuch search --format=text0 --output=files tag:Spam and + to:vasudev-debian@copyninja.info | \ + xargs -0 -I {} --no-run-if-empty mv -v {} ~/Mail/vasudev-copyninja.info/Spam/cur + + +MDIR="vasudev-copyninja.info vasudev-debian Gmail-1" +mbsync -Va +notmuch new + +for mdir in $MDIR; do + echo "Processing $mdir" + for fdir in $(ls -d /home/vasudev/Mail/$mdir/*); do + if [ $(basename $fdir) != "INBOX" ]; then + echo "Tagging for $(basename $fdir)" + notmuch tag +$(basename $fdir) -inbox -- folder:$mdir/$(basename $fdir) + fi + done +done +``` + +因此,在运行 `mbsync` 之前,我搜索所有标记为“deleted”的邮件,并将其从系统中删除。接下来,我在我的帐户上查找标记为“Spam”的邮件,并将其移动到“Spam”文件夹。你没看错,这些邮件逃脱了垃圾邮件过滤器进入到我的收件箱,并被我亲自标记为垃圾邮件。 + +运行 `mbsync` 后,我基于它们的文件夹标记邮件(搜索字符串 `folder:`)。这让我可以很容易地得到一个邮件列表的内容,而不需要记住列表地址。 + +### 阅读邮件 + +现在,我们已经实现同步和分类邮件,是时候来设置阅读部分。我使用 notmuch-emacs 界面来阅读邮件。我使用 emacs 的 Spacemacs 风格,所以我花了一些时间写了一个私有层,它将我所有的快捷键和分类集中在一个地方,而不会扰乱我的整个 `.spacemacs` 文件。你可以在 [notmuch-emacs-layer 仓库][1] 找到我的私有层的代码。 + +### 发送邮件 + +能阅读邮件这还不够,我们也需要能够回复邮件。而这是最近是我感到迷茫的一个略显棘手的部分,以至于不得不写这篇文章,这样我就不会再忘记了。(当然也不必在网络上参考一些过时的帖子。) + +我的系统发送邮件使用 Postfix 作为 SMTP 客户端,使用我自己的 SMTP 服务器作为它的中继主机。中继的问题是,它不能是具有动态 IP 的主机。有两种方法可以允许具有动态 IP 的主机使用中继服务器, 一种是将邮件来源的 IP 地址放入 `my_network` 或第二个使用 SASL 身份验证。 + +我的首选方法是使用 SASL 身份验证。为此,我首先要为每台机器创建一个单独的账户,它将把邮件中继到我的主服务器上。想法是不使用我的主帐户 SASL 进行身份验证。(最初我使用的是主账户,但 Jonas 给出了可行的按账户的想法) + +``` +adduser _relay +``` + +这里替换 `` 为你的笔记本电脑的名称或任何你正在使用的设备。现在我们需要调整 Postfix 作为中继服务器。因此,在 Postfix 配置中添加以下行: + +``` +# SASL authentication +smtp_sasl_auth_enable = yes +smtp_tls_security_level = encrypt +smtp_sasl_tls_security_options = noanonymous +relayhost = [smtp.copyninja.info]:submission +smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd +``` + +因此, 这里的 `relayhost` 是用于将邮件转发到互联网的 Postfix 实例的服务器名称。`submission` 的部分 Postfix 将邮件转发到端口 587(安全端口)。`smtp_sasl_tls_security_options` 设置为不允许匿名连接。这是必须的,以便中继服务器信任你的移动主机,并同意为你转发邮件。 + +`/etc/postfix/sasl_passwd` 是你需要存储用于服务器 SASL 身份验证的帐户密码的文件。将以下内容放入其中。 + +``` +[smtp.example.com]:submission user:password +``` + +用你已放入 `relayhost` 配置的 SMTP 服务器名称替换 `smtp.example.com`。用你创建的 `_relay` 用户及其密码替换 `user` 和 `passwd`。 + +若要保护 `sasl_passwd` 文件,并为 Postfix 创建它的哈希文件,使用以下命令。 + +``` +chown root:root /etc/postfix/sasl_passwd +chmod 0600 /etc/postfix/sasl_passwd +postmap /etc/postfix/sasl_passwd +``` + +最后一条命令将创建 `/etc/postfix/sasl_passwd.db` 文件,它是你的文件的 `/etc/postfix/sasl_passwd` 的哈希文件,具有相同的所有者和权限。现在重新加载 Postfix,并使用 `mail` 命令检查邮件是否从你的系统中发出。 + +### Bonus 的部分 + +好吧,因为我有一个脚本创建以上结合了邮件的同步和分类。我继续创建了一个 systemd 计时器,以定期同步后台的邮件。就我而言,每 10 分钟一次。下面是 `mailsync.timer` 文件。 + +``` +[Unit] +Description=Check Mail Every 10 minutes +RefuseManualStart=no +RefuseManualStop=no + +[Timer] +Persistent=false +OnBootSec=5min +OnUnitActiveSec=10min +Unit=mailsync.service + +[Install] +WantedBy=default.target +``` + +下面是 mailsync.service 服务,这是 mailsync.timer 执行我们的脚本所需要的。 + +``` +[Unit] +Description=Check Mail +RefuseManualStart=no +RefuseManualStop=yes + +[Service] +Type=oneshot +ExecStart=/usr/local/bin/mail-sync +StandardOutput=syslog +StandardError=syslog +``` + +将这些文件置于 `/etc/systemd/user` 目录下并运行以下代码去开启它们: + +``` +systemctl enable --user mailsync.timer +systemctl enable --user mailsync.service +systemctl 
start --user mailsync.timer +``` + +这就是我从系统同步和发送邮件的方式。我从 Jonas Smedegaard 那里了解到了 afew,他审阅了这篇帖子。因此, 下一步, 我将尝试使用 afew 改进我的 notmuch 配置,当然还会有一个后续的帖子:-)。 + +-------------------------------------------------------------------------------- + +via: https://copyninja.info/blog/email_setup.html + +作者:[copyninja][a] +译者:[lixinyuxx](https://github.com/lixinyuxx) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://copyninja.info +[1]:https://source.copyninja.info/notmuch-emacs-layer.git/ diff --git a/published/201812/20180101 27 open solutions to everything in education.md b/published/201812/20180101 27 open solutions to everything in education.md new file mode 100644 index 0000000000..48a4f3fa3c --- /dev/null +++ b/published/201812/20180101 27 open solutions to everything in education.md @@ -0,0 +1,91 @@ +27 个全方位的开放式教育解决方案 +====== + +> 阅读这些 2017 年 Opensource.com 发布的开放如何改进教育和学习的好文章。 + +![27 open solutions to everything in education](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/EDU_OpenEducationResources_520x292_cm.png?itok=9y4FGgRo) + +开放式理念 (从开源软件到开放硬件,再到开放原则) 正在改变教育的范式。因此,为了庆祝今年发生的一切,我收集了 2017 年(译注:本文原发布于 2018 年初)在 Opensource.com 上发表的 27 篇关于这个主题的最好的文章。我把它们分成明确的主题,而不是按人气来分类。而且,如果这 27 个故事不能满足你对教育方面开源信息的胃口,那就看看我们的合作文章吧 “[教育如何借助 Linux 和树莓派][30]”。 + +### 开放对每个人都有好处 + +1. [书评:《OPEN》探讨了开放性的广泛文化含义][1]:Scott Nesbitt 评价 David Price 的书 《OPEN》 ,该书探讨了 “开放” 不仅仅是技术转变的观点,而是 “我们未来将如何工作、生活和学习”。 +2. [通过开源技能快速开始您的职业生涯][2]: VM (Vicky) Brasseur 指出了如何借助学习开源在工作群体中脱颖而出。这个建议不仅仅是针对程序员的;设计师、作家、营销人员和其他创意专业人士也对开源的成功至关重要。 +3. [研究生学位可以让你跳槽到开源职位][3]:引用的研究表明会 Linux 技能会带来更高的薪水, Joshua Pearce 说对开源的熟练和研究生学位是无与伦比的职业技能组合。 +4. [彻底改变了宾夕法尼亚的学校文化的三种实践][4]:Charlie Reisinger 向我们展示了开放式实践是如何在宾夕法尼亚州的一个学区创造一种更具包容性、敏捷性和开放性的文化的。Charlie 说,这不仅仅是为了省钱;该区还受益于 “开放式领导原则,促进师生创新,帮助更好地吸引社区,创造一个更有活力和包容性的学习社区”。 +5. [使用开源工具促使学生进步的 15 种方法][5]:我写了开源是如何让学生自由探索、补拙和学习的,不管他们是在学习基本的数字化素养,还是通过有趣的项目来扩展这些技能。 +6. [开发人员有机会编写好的代码][6]:开源往往是对社会有益的项目的支柱。正如 Benetech Labs 副总裁 Ahn Bui 在这次采访中指出的那样:“建立开放数据标准是打破数据孤岛不可或缺的一步。这些开放标准将为互操作性提供基础,进而转化为更多的组织共同建设,往往更具成本效益。最终目标是以同样的成本甚至更低的成本为更多的人服务。” + +### 用于再融合和再利用的开放式教育资源 + +1. [学术教员可以和维基百科一起教学吗?][7]:Wiki Ed 的项目总监 LiAnna Davis 讨论开放式教育资源open educational resources (OER) ,如 Wiki Ed,是如何提供高质量且经济实惠的开源学习资源作为课堂教学工具。 +2. [书本内外?开放教育资源的状态][8]:Cable Green 是 Creative Common 开放教育主管,分享了高等教育中教育面貌是如何变化的,以及 Creative Common 正在采取哪些措施来促进教育。 +3. [急需符合标准的课程的学校系统找到了希望][9]:Karen Vaites 是 Open Up Resources 社区布道师和首席营销官,谈论了非营利组织努力为 K-12 学校提供开放的、标准一致的课程。 +4. [夏威夷大学如何解决当今高等教育的问题][10]:夏威夷大学 Manoa 分校的教育技术专家 Billy Meinke 表示,在大学课程中过渡到 ORE 将 “使教师能够控制他们教授的内容,我们预计这将为他们节省学生的费用。” +5. [开放式课程如何削减高等教育成本][11]:塞勒学院的教育总监 Devon Ritter 报告了塞勒学院是如何建立以公开许可内容为基础的大学学分课程,目的是使更多的人能够负担得起和获得高等教育。 +6. [开放教育资源运动在提速][12]:Alexis Clifton 是纽约州立大学的 OER 服务的执行董事,描述了纽约 800 万美元的投资如何刺激开放教育的增长,并使大学更实惠。 +7. [开放项目合作,从小学到大学教室][13]:来自杜克大学的 Aria F. Chernik 探索 OSPRI (开源教育学的研究与创新), 这是杜克大学和红帽的合作,旨在建立一个 21 世纪的,开放设计的 preK-12 学习生态系统。 +8. [Perma.cc 将阻止学术链接腐烂][14]::弗吉尼亚理工大学的 Phillip Young 写的关于 Perma.cc 的文章,这种一种“链接腐烂”的解决方案,在学术论文中的超链接随着时间的推移而消失或变化的概览很高。 +9. [开放教育:学生如何通过创建开放教科书来节省资金][15]:OER 先驱 Robin DeRosa 谈到 “引入公开许可教科书的自由,以及教育和学习应结合包容性生态系统,以增强公益的总体理念”。 + +### 课堂上的开源工具 + +1. [开源棋盘游戏如何拯救地球][16]:Joshua Pearce 写的关于拯救地球的一个棋盘游戏,这是一款让学生在玩乐和为创客社区做出贡献的同时解决环境问题的棋盘游戏。 +2. [一个教孩子们如何阅读的新 Android 应用程序][17]:Michael Hall 谈到了他在儿子被诊断为自闭症后为他开发的儿童识字应用 Phoenicia,以及良好编码的价值,和为什么用户测试比你想象的更重要。 +3. [8 个用于教育的开源 Android 应用程序][18]:Joshua Allen Holm 推荐了 8 个来自 F-Droid 软件库的开源应用,使我们可以将智能手机用作学习工具。 +4. 
[MATLA B的 3 种开源替代方案][19]:Jason Baker 更新了他 2016 年的开源数学计算软件调查报告,提供了 MATLAB 的替代方案,这是数学、物理科学、工程和经济学中几乎无处不在的昂贵的专用解决方案。 +5. [SVG 与教孩子编码有什么关系?][20]:退休工程师 Jay Nick 谈论他如何使用艺术作为一种创新的方式,向学生介绍编码。他在学校做志愿者,使用 SVG 来教授一种结合数学和艺术原理的编码方法。 +6. [5 个破灭的神话:在高等教育中使用开源][21]: 拥有德克萨斯理工大学美术博士学位的 Kyle Conway 分享他在一个由专有解决方案统治的世界中使用开源工具的经验。 Kyle 说有一种偏见,反对在计算机科学以外的学科中使用开源:“很多人认为非技术专业的学生不能使用 Linux,他们对在高级学位课程中使用 Linux 的人做出了很多假设……嗯,这是有可能的,我就是证明。” +7. [大学开源工具列表][22]:Aaron Cocker 概述了他在攻读计算机科学本科学位时使用的开源工具 (包括演示、备份和编程软件)。 +8. [5 个可帮助您学习优秀 KDE 应用程序][23]:Zsolt Szakács 提供五个 KDE 应用程序,可以帮助任何想要学习新技能或培养现有技能的人。 + +### 在教室编码 + +1. [如何尽早让下一代编码][24]:Bryson Payne 说我们需要在高中前教孩子们学会编码: 到了九年级,80% 的女孩和 60% 的男孩已经从 STEM 职业中自选。但他建议,这不仅仅是就业和缩小 IT 技能差距的问题。“教一个年轻人编写代码可能是你能给他们的最改变生活的技能。而且这不仅仅是一个职业提升者。编码是关于解决问题,它是关于创造力,更重要的是,它是提升能力”。 +2. [孩子们无法在没有计算机的情况下编码][25]:Patrick Masson 推出了 FLOSS 儿童桌面计划, 该计划教授服务不足学校的学生使用开源软件 (如 Linux、LibreOffice 和 GIMP) 重新利用较旧的计算机。该计划不仅为破旧或退役的硬件注入新的生命,还为学生提供了重要的技能,而且还为学生提供了可能转化为未来职业生涯的重要技能。 +3. [如今 Scratch 是否能像 80 年代的 LOGO 语言一样教孩子们编码?][26]:Anderson Silva 提出使用 [Scratch][27] 以激发孩子们对编程的兴趣,就像在 20 世纪 80 年代开始使用 LOGO 语言时一样。 +4. [通过这个拖放框架学习Android开发][28]:Eric Eslinger 介绍了 App Inventor,这是一个编程框架,用于构建 Android 应用程序使用可视块语言(类似 Scratch 或者 [Snap][29])。 + +在这一年里,我们了解到,教育领域的各个方面都有了开放的解决方案,我预计这一主题将在 2018 年及以后继续下去。在未来的一年里,你是否希望 Opensource.com 涵盖开放式的教育主题?如果是, 请在评论中分享你的想法。 + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/best-open-education + +作者:[Don Watkins][a] +译者:[lixinyuxx](https://github.com/lixinyuxx) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/don-watkins +[1]:https://opensource.com/article/17/7/book-review-open +[2]:https://opensource.com/article/17/8/jump-start-your-career +[3]:https://opensource.com/article/17/1/grad-school-open-source-academic-lab +[4]:https://opensource.com/article/17/7/open-school-leadership +[5]:https://opensource.com/article/17/7/empower-students-open-source-tools +[6]:https://opensource.com/article/17/3/interview-anh-bui-benetech-labs +[7]:https://opensource.com/article/17/1/Wiki-Education-Foundation +[8]:https://opensource.com/article/17/2/future-textbooks-cable-green-creative-commons +[9]:https://opensource.com/article/17/1/open-up-resources +[10]:https://opensource.com/article/17/2/interview-education-billy-meinke +[11]:https://opensource.com/article/17/7/college-alternatives +[12]:https://opensource.com/article/17/10/open-educational-resources-alexis-clifton +[13]:https://opensource.com/article/17/3/education-should-be-open-design +[14]:https://opensource.com/article/17/9/stop-link-rot-permacc +[15]:https://opensource.com/article/17/11/creating-open-textbooks +[16]:https://opensource.com/article/17/7/save-planet-board-game +[17]:https://opensource.com/article/17/4/phoenicia-education-software +[18]:https://opensource.com/article/17/8/8-open-source-android-apps-education +[19]:https://opensource.com/alternatives/matlab +[20]:https://opensource.com/article/17/5/coding-scalable-vector-graphics-make-steam +[21]:https://opensource.com/article/17/5/how-linux-higher-education +[22]:https://opensource.com/article/17/6/open-source-tools-university-student +[23]:https://opensource.com/article/17/6/kde-education-software +[24]:https://opensource.com/article/17/8/teach-kid-code-change-life +[25]:https://opensource.com/article/17/9/floss-desktops-kids +[26]:https://opensource.com/article/17/3/logo-scratch-teach-programming-kids 
+[27]:https://scratch.mit.edu/ +[28]:https://opensource.com/article/17/8/app-inventor-android-app-development +[29]:http://snap.berkeley.edu/ +[30]:https://opensource.com/article/17/12/best-opensourcecom-linux-and-raspberry-pi-education diff --git a/published/201812/20180104 How Creative Commons benefits artists and big business.md b/published/201812/20180104 How Creative Commons benefits artists and big business.md new file mode 100644 index 0000000000..aefc804479 --- /dev/null +++ b/published/201812/20180104 How Creative Commons benefits artists and big business.md @@ -0,0 +1,66 @@ +你所不知道的知识共享(CC) +====== + +> 知识共享为艺术家提供访问权限和原始素材。大公司也从中受益。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/CreativeCommons_ideas_520x292_1112JS.png?itok=otei0vKb) + +我毕业于电影学院,毕业后在一所电影学校教书,之后进入一家主流电影工作室,我一直在从事电影相关的工作。创意产业的方方面面面临着同一个问题:创作者需要原材料。有趣的是,自由文化运动提出了解决方案,具体来说是在自由文化运动中出现的知识共享Creative Commons组织。 + +### 知识共享能够为我们提供展示片段和小样 + +和其他事情一样,创造力也需要反复练习。幸运的是,在我刚开始接触电脑时,就在一本关于渲染工场的专业杂志中接触到了开源这个存在。当时我并不理解所谓的“开源”是什么,但我知道只有开源工具能帮助我在领域内稳定发展。对我来说,知识共享也是如此。知识共享可以为艺术家们提供充满丰富艺术资源的工作室。 + +我在电影学院任教时,经常需要给学生们准备练习编辑、录音、拟音、分级、评分的示例录像。在 Jim Munroe 的独立作品 [Infest Wisely][1] 中和 [Vimeo][2] 上的知识共享内容里我总能找到我想要的。这些逼真的镜头覆盖内容十分广泛,从独立制作到昂贵的高品质的升降镜头(一般都会用无人机代替)都有。 + +![](https://opensource.com/sites/default/files/u128651/bunny.png) + +对实验主义艺术来说,确有无尽可能。知识共享提供了丰富的素材,这些材料可以用来整合、混剪等等,可以满足一位视觉先锋能够想到的任何用途。 + +在接触知识共享之前,如果我想要使用写实镜头,如果在大学,只能用之前的学生和老师拍摄的或者直接使用版权库里的镜头,或者使用有受限的版权保护的镜头。 + +### 坚守版权的底线很重要 + +知识共享同样能够创造经济效益。在某大型计算机公司的渲染工场工作时,我负责在某些硬件设施上测试渲染的运行情况,而这个测试时刻面临着被搁置的风险。做这些测试时,我用的都是[大雄兔][3]的资源,因为这个电影和它的组件都是可以免费使用和分享的。如果没有这个小短片,在接触写实资源之前我都没法完成我的实验,因为对于一个计算机公司来说,雇佣一只 3D 艺术家来按需布景是不太现实的。 + +令我震惊的是,与开源类似,知识共享已经用我们难以想象的方式支撑起了大公司。知识共享的使用可能会也可能不会影响公司的日常流程,但它填补了不足,让工作流程顺利进行。我没见到谁在他们的书中将流畅工作归功于知识共享的应用,但它确实无处不在。 + +![](https://opensource.com/sites/default/files/u128651/sintel.png) + +我也见过一些开放版权的电影,比如[辛特尔][4],在最近的电视节目中播放了它的短片,电视的分辨率已经超过了标准媒体。 + +### 知识共享可以提供大量原材料 + +艺术家需要原材料。画家需要颜料、画笔和画布。雕塑家需要陶土和工具。数字内容编辑师需要数字内容,无论它是剪贴画还是音效或者是电子游戏里的现成的精灵。 + +数字媒介赋予了人们超能力,让一个人就能完成需要一组人员才能完成的工作。事实上,我们大部分都好高骛远。我们想做高大上的项目,想让我们的成果不论是视觉上还是听觉上都无与伦比。我们想塑造的是宏大的世界,紧张的情节,能引起共鸣的作品,但我们所拥有的时间精力和技能与之都不匹配,达不到想要的效果。 + +是知识共享再一次拯救了我们,在 [Freesound.org][5]、 [Openclipart.org][6]、 [OpenGameArt.org][7] 等等网站上都有大量的开放版权艺术材料。通过知识共享,艺术家可以使用各种他们自己没办法创造的原材料,来完成他们原本完不成的工作。 + +最神奇的是,不用自己投资,你放在网上给大家使用的原材料就能变成精美的作品,而这是你从没想过的。我在知识共享上面分享了很多音乐素材,它们现在用于无数的专辑和电子游戏里。有些人用了我的材料会通知我,有些是我自己发现的,所以这些材料的应用可能比我知道的还有多得多。有时我会偶然看到我亲手画的标志出现在我从没听说过的软件里。我见到过我为 [Opensource.com][8] 写的文章在别处发表,有的是论文的参考文献,白皮书或者参考资料中。 + +### 知识共享所代表的自由文化也是一种文化 + +“自由文化”这个说法过于累赘,文化,从概念上来说,是一个有机的整体。在这种文化中社会逐渐成长发展,从一个人到另一个。它是人与人之间的互动和思想交流。自由文化是自由缺失的现代世界里的特殊产物。 + +如果你也想对这样的局限进行反抗,想把你的思想、作品、你自己的文化分享给全世界的人,那么就来和我们一起,使用知识共享吧! 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/creative-commons-real-world + +作者:[Seth Kenlon][a] +译者:[Valoniakim](https://github.com/Valoniakim) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/seth +[1]:http://infestwisely.com +[2]:https://vimeo.com/creativecommons +[3]:https://peach.blender.org/ +[4]:https://durian.blender.org/ +[5]:http://freesound.org +[6]:http://openclipart.org +[7]:http://opengameart.org +[8]:https://opensource.com/ diff --git a/published/201812/20180128 Getting Linux Jobs.md b/published/201812/20180128 Getting Linux Jobs.md new file mode 100644 index 0000000000..9bfaf0e1e5 --- /dev/null +++ b/published/201812/20180128 Getting Linux Jobs.md @@ -0,0 +1,95 @@ +Linux 求职建议 +====== + +通过对招聘网站数据的仔细研究,我们发现,即使是非常有经验的 Linux 程序员,也会在面试中陷入困境。 + +这就导致了很多优秀并且有经验的人无缘无故地找不到合适的工作,因为如今的就业市场需要我们有一些手段来提高自己的竞争力。 + +我有两个同事和一个表哥,他们都有 RedHat 认证,管理过比较大的服务器机房,也都收到过前雇主的认真推荐。 + +可是,在他们应聘的时候,所有的这些证书、本身的能力、工作经验好像都没有起到任何作用,他们所面对的招聘广告是某人从技术词汇中临时挑选的一些“技能片段”所组成的。 + +现如今,礼貌变得过时了,**不回应**变成了发布招聘广告的公司的新沟通方式。 + +这同样也意味着大多公司的招聘或者人事可能会**错过**非常优秀的应聘者。 + +我之所以敢说的如此肯定,是因为现在招聘广告大多数看上去都非常的滑稽。 + +[Reallylinux.com][3] 另一位特约撰稿人 Walter ,发表过一篇关于 [招聘广告疯掉了][4] 的文章。 + +他说的也许是对的,可是我认为 Linux 工作应聘者可以通过注意招聘广告的**三个关键点**避免落入陷阱。 + +**首先**,很少会有 Linux 系统管理员的招聘广告只针对 Linux 有要求。 + +一定要注意很少有 Linux 系统管理员的职位是实际在服务器上跑 Linux的,反而,很多在搜索 “Linux 管理员” 得到的职位实际上是指大量的 *NX 操作系统的。 + +举个例子,有一则关于 **Linux 管理员** 的招聘广告: + +> 该职位需要为建立系统集成提供支持,尤其是 BSD 应用的系统安装... + +或者有一些其他的要求: + +> 有 Windows 系统管理经验的。 + +最为讽刺的是,如果你在应聘面试的时候表现出专注于 Linux 的话,你可能不会被聘用。 + +另外,如果你直接把 Linux 写在你的特长或者专业上,他们可能都不会仔细看你的简历,因为他们根本区分不了 UNIX、BSD、Linux。 + +最终的结果就是,如果你太老实,只在简历上写了 Linux,你可能会被直接过掉,但是如果你把 Linux 改成 UNIX/Linux 的话,可能会走得更远。 + +我有两个同事最后修改了他们的简历,然后获得了更好的面试机会,虽然依旧没有被聘用,因为大多数招聘广告其实已经内定人员了,这些招聘信息被放出来仅仅是为了表现出他们有招聘的想法。 + +**第二点**,公司里唯一在乎系统管理员职位的只有技术主管,其他人包括人事或管理层根本不关心这个。 + +我记得有一次开会旁听的时候,听见一个执行副总裁把服务器管理人员说成“一毛钱一打的人”,这种想法是多么的奇怪啊。 + +讽刺的是,等到邮件系统出故障,电话交换机连接时不时会断开,或者核心商业文件从企业内网中消失的时候,这些总裁又是最先打电话给系统管理员的。 + +或许如果他们不整天在电话留言中说那么多空话,或者不往邮件里塞满妻子的照片和旅行途中的照片的话,服务器可能就不会崩溃。 + +请注意,招聘 Linux 运维或者服务器管理员的广告被放出来是因为公司**技术层**认为有迫切的需求。你也不需要和人事或者公司高层聊什么,搞清楚谁是招聘的技术经理然后打电话给他们。 + +你需要直接联系他们因为“有些技术问题”是人事回答不了的,即使你只有 60 秒的时间可以和他们交流,你也必须抓住这个机会和真正有需求并且懂技术的人沟通。 + +那如果人事的漂亮 MM 不让你直接联系技术怎么办呢? 
+ +开始记得问人事一些技术性问题,比如说他们的 Linux 集群是如何建立的,它们运行在独立的虚拟机上吗?这些技术性的问题会让人事变得不耐烦,最后让你有机会问出“我能不能直接联系你们团队的技术人员”。 + +如果对方的回答是“应该可以”或者“稍后回复你”,那么他们可能已经在两周前就已经计划好了找一个人来填补这个空缺,比如说人事部员工的未婚夫。**他们只是不希望看起来太像裙带主义,而是带有一点利己主义的不确定主义。** + +所以一定要记得花点时间弄清楚到底谁是发布招聘广告的直接**技术**负责人,然后和他们聊一聊,这可能会让你少一番胡扯并且让你更有可能应聘成功。 + +**第三点**,现在的招聘广告很少有完全真实的内容了。 + +我以前见过一个招聘具有高级别专家也不会有的专门知识的初级系统管理员的广告,他们的计划是列出公司的发展计划蓝图,然后找到应聘者。 + +在这种情况下,你应聘 Linux 管理员职位应该提供几个关键性信息,例如工作经验和相关证书。 + +诀窍在于,用这些关键词尽量装点你的简历,以匹配他们的招聘信息,这样他们几乎不可能发现你缺失了哪个关键词。 + +这并不一定会让你成功找到一份工作,但它可以让你获得一次面试机会,这也算是一个巨大的进步。 + +通过理解和应用以上三点,或许可以让那些寻求 Linux 管理员工作的人能够比那些只有一线地狱机会的人领先一步。 + +即使这些建议不能让你马上得到面试机会,你也可以利用这些经验和意识去参加贸易展或公司主办的技术会议等活动。 + +我强烈建议你们也经常参加这种活动,尤其是当它们比较近的话,可以给你一个扩展人脉的机会。 + +请记住,如今的职业人脉已经失去了原来的意义了,现在只是可以用来获取“哪些公司实际上在招聘、哪些公司只是为了给股东带来增长的表象而在职位方面撒谎”的小道消息。 + + +-------------------------------------------------------------------------------- + +via: http://reallylinux.com/docs/gettinglinuxjobs.shtml + +作者:[Andrea W.Codingly][a] +译者:[Ryze-Borgia](https://github.com/Ryze-Borgia) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:http://reallylinux.com +[1]:http://www.reallylinux.com +[2]:http://reallylinux.com/docs/linuxrecessionproof.shtml +[3]:http://reallylinux.com +[4]:http://reallylinux.com/docs/wantadsmad.shtml diff --git a/published/201812/20180130 Graphics and music tools for game development.md b/published/201812/20180130 Graphics and music tools for game development.md new file mode 100644 index 0000000000..7e77e30d67 --- /dev/null +++ b/published/201812/20180130 Graphics and music tools for game development.md @@ -0,0 +1,179 @@ +用于游戏开发的图形和音乐工具 +====== +> 要在三天内打造一个可玩的游戏,你需要一些快速而稳定的好工具。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/OSDC_Life_opengame.png?itok=JPxruL3k) + +在十月初,我们的俱乐部马歇尔大学的 [Geeks and Gadgets][1] 参加了首次 [Open Jam][2],这是一个庆祝最佳开源工具的游戏 Jam。游戏 Jam 是一种活动,参与者以团队协作的方式来开发有趣的计算机游戏。Jam 一般都很短,仅有三天,并且非常累。Opensource.com 在八月下旬[发布了][3] Open Jam 活动,足有 [45 支游戏][4] 进入到了竞赛中。 + +我们的俱乐部希望在我们的项目中创建和使用开放源码软件,所以 Open Jam 自然是我们想要参与的 Jam 了。我们提交的游戏是一个实验性的游戏,名为 [Mark My Words][5]。我们使用了多种自由和开放源码 (FOSS) 工具来开发它;在这篇文章中,我们将讨论一些我们使用的工具和我们注意到可能有潜在阻碍的地方。 + +### 音频工具 + +#### MilkyTracker + +[MilkyTracker][6] 是一个可用于编曲老式视频游戏中的音乐的软件包。它是一种[音乐声道器][7]music tracker,是一个强大的 MOD 和 XM 文件创建器,带有基于特征网格的模式编辑器。在我们的游戏中,我们使用它来编曲大多数的音乐片段。这个程序最好的地方是,它比我们其它的大多数工具消耗更少的硬盘空间和内存。虽然如此,MilkyTracker 仍然非常强大。 + +![](https://opensource.com/sites/default/files/u128651/mtracker.png) + +其用户界面需要一会来习惯,这里有对一些想试用 MilkyTracker 的音乐家的一些提示: + + * 转到 “Config > Misc.” ,设置编辑模式的控制风格为 “MilkyTracker”,这将给你提供几乎全部现代键盘快捷方式。 + * 用 `Ctrl+Z` 撤销 + * 用 `Ctrl+Y` 重做 + * 用空格键切换模式编辑方式 + * 用退格键删除先前的音符 + * 用插入键来插入一行 + * 默认情况下,一个音符将持续作用,直到它在该频道上被替换。你可以明确地结束一个音符,通过使用一个反引号(`)键来插入一个 KeyOff 音符 + * 在你开始谱写乐曲前,你需要创建或查找采样。我们建议在诸如 [Freesound][9] 或 [ccMixter][10] 这样的网站上查找采用 [Creative Commons][8] 协议的采样, + +另外,把 [MilkyTracker 文档页面][11] 放在手边。它含有数不清的教程和手册的链接。一个好的起点是在该项目 wiki 上的 [MilkyTracker 指南][12]。 + +#### LMMS + +我们的两个音乐家使用多用途的现代音乐创建工具 [LMMS][13]。它带有一个绝妙的采样和效果库,以及多种多样的灵活的插件来生成独特的声音。LMMS 的学习曲线令人吃惊的低,在某种程度上是因为其好用的节拍/低音线编辑器。 + +![](https://opensource.com/sites/default/files/u128651/lmms_plugins.png) + +我们对于想试试 LMMS 的音乐家有一个建议:使用插件。对于 [chiptune][14]式音乐,我们推荐 [sfxr][15]、[BitInvader][16] 和 [FreeBoy][17]。对于其它风格,[ZynAddSubFX][18] 是一个好的选择。它配备了各种合成仪器,可以根据您的需要进行更改。 + +### 图形工具 + +#### Tiled + +在开放源码游戏开发中,[Tiled][19] 是一个流行的贴片地图编辑器。我们使用它为来为我们在游戏场景中组合连续的、复古式的背景。 + +![](https://opensource.com/sites/default/files/u128651/tiled.png) + +Tiled 可以导出地图为 
XML、JSON 或普通的图片。它是稳定的、跨平台的。 + +Tiled 的功能之一允许你在地图上定义和放置随意的游戏对象,例如硬币和提升道具,但在 jam 期间我们没有使用它。你需要做的全部是以贴片集的方式加载对象的图像,然后使用“插入平铺”来放置它们。 + +一般来说,对于需要一个地图编辑器的项目,Tiled 是我们所推荐的软件中一个不可或缺的部分。 + +#### Piskel + +[Piskel][20] 是一个像素艺术编辑器,它的源文件代码以 [Apache 2.0 协议][21] 发布。在这次 Jam 期间,们的大多数的图像资源都使用 Piskel 来处理,我们当然也将在未来的工程中使用它。 + +在这个 Jam 期间,Piskel 极大地帮助我们的两个功能是洋葱皮Onion skin精灵序列图spritesheet导出。 + +##### 洋葱皮 + +洋葱皮功能将使 Piskel 以虚影显示你编辑的动画的前一帧和后一帧的,像这样: + +![](https://opensource.com/sites/default/files/u128651/onionshow.gif) + +洋葱皮是很方便的,因为它适合作为一个绘制指引和帮助你在整个动画进程中保持角色的一致形状和体积。 要启用它,只需单击屏幕右上角预览窗口下方的洋葱形图标即可。 + +![](https://opensource.com/sites/default/files/u128651/onionenable.png) + +##### 精灵序列图导出 + +Piskel 将动画导出为精灵序列图的能力也非常有用。精灵序列图是一个包含动画所有帧的光栅图像。例如,这是我们从 Piskel 导出的精灵序列图: + +![](https://opensource.com/sites/default/files/u128651/sprite-artist.png) + +该精灵序列图包含两帧。一帧位于图像的上半部分,另一帧位于图像的下半部分。精灵序列图通过从单个文件加载整个动画,大大简化了游戏的代码。这是上面精灵序列图的动画版本: + +![](https://opensource.com/sites/default/files/u128651/sprite-artist-anim.gif) + +##### Unpiskel.py + +在 Jam 期间,我们很多次想批量转换 Piskel 文件到 PNG 文件。由于 Piskel 文件格式基于 JSON,我们写一个基于 GPLv3 协议的名为 [unpiskel.py][22] 的 Python 小脚本来做转换。 + +它像这样被调用的: + +``` +python unpiskel.py input.piskel +``` + +这个脚本将从一个 Piskel 文件(这里是 `input.piskel`)中提取 PNG 数据帧和图层,并将它们各自存储。这些文件采用模式 `NAME_XX_YY.png` 命名,在这里 `NAME` 是 Piskel 文件的缩减名称,`XX` 是帧的编号,`YY` 是层的编号。 + +因为脚本可以从一个 shell 中调用,它可以用在整个文件列表中。 + +``` +for f in *.piskel; do python unpiskel.py "$f"; done +``` + +### Python、Pygame 和 cx_Freeze + +#### Python 和 Pygame + +我们使用 [Python][23] 语言来制作我们的游戏。它是一个脚本语言,通常被用于文本处理和桌面应用程序开发。它也可以用于游戏开发,例如像 [Angry Drunken Dwarves][24] 和 [Ren'Py][25] 这样的项目所展示的。这两个项目都使用一个称为 [Pygame][26] 的 Python 库来显示图形和产生声音,所以我们也决定在 Open Jam 中使用这个库。 + +Pygame 被证明是既稳定又富有特色,并且它对我们创建的街机式游戏来说是很棒的。在低分辨率时,库的速度足够快的,但是在高分辨率时,它只用 CPU 的渲染开始变慢。这是因为 Pygame 不使用硬件加速渲染。然而,开发者可以充分利用 OpenGL 基础设施的优势。 + +如果你正在寻找一个好的 2D 游戏编程库,Pygame 是值得密切注意的一个。它的网站有 [一个好的教程][27] 可以作为起步。务必看看它! + +#### cx_Freeze + +准备发行我们的游戏是有趣的。我们知道,Windows 用户不喜欢装一套 Python,并且要求他们来安装它可能很过分。除此之外,他们也可能必须安装 Pygame,在 Windows 上,这不是一个简单的工作。 + +很显然:我们必须放置我们的游戏到一个更方便的格式中。很多其他的 Open Jam 参与者使用专有的游戏引擎 Unity,它能够使他们的游戏在网页浏览器中来玩。这使得它们非常方便地来玩。便利性是一个我们的游戏中根本不存在的东西。但是,感谢生机勃勃的 Python 生态系统,我们有选择。已有的工具可以帮助 Python 程序员将他们的游戏做成 Windows 上的发布版本。我们考虑过的两个工具是 [cx_Freeze][28] 和 [Pygame2exe][29](它使用 [py2exe][30])。我们最终决定用 cx_Freeze,因为它是跨平台的。 + +在 cx_Freeze 中,你可以把一个单脚本游戏打包成发布版本,只要在 shell 中运行一个命令,像这样: + +``` +cxfreeze main.py --target-dir dist +``` + +`cxfreeze` 的这个调用将把你的脚本(这里是 `main.py`)和在你系统上的 Python 解释器捆绑到到 `dist` 目录。一旦完成,你需要做的是手动复制你的游戏的数据文件到 `dist` 目录。你将看到,`dist` 目录包含一个可以运行来开始你的游戏的可执行文件。 + +这里有使用 cx_Freeze 的更复杂的方法,允许你自动地复制数据文件,但是我们发现简单的调用 `cxfreeze` 足够满足我们的需要。感谢这个工具,我们使我们的游戏玩起来更便利一些。 + +### 庆祝开源 + +Open Jam 是庆祝开源模式的软件开发的重要活动。这是一个分析开源工具的当前状态和我们在未来工作中需求的一个机会。对于游戏开发者探求其工具的使用极限,学习未来游戏开发所必须改进的地方,游戏 Jam 或许是最好的时机。 + +开源工具使人们能够在不损害自由的情况下探索自己的创造力,而无需预先投入资金。虽然我们可能不会成为专业的游戏开发者,但我们仍然能够通过我们的简短的实验性游戏 [Mark My Words][5] 获得一点点体验。它是一个以语言学为主题的游戏,描绘了虚构的书写系统在其历史中的演变。还有很多其他不错的作品提交给了 Open Jam,它们都值得一试。 真的,[去看看][31]! + +在本文结束前,我们想要感谢所有的 [参加俱乐部的成员][32],使得这次经历真正的有价值。我们也想要感谢 [Michael Clayton][33]、[Jared Sprague][34] 和 [Opensource.com][35] 主办 Open Jam。简直酷毙了。 + +现在,我们对读者提出了一些问题。你是一个 FOSS 游戏开发者吗?你选择的工具是什么?务必在下面留下一个评论! 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/1/graphics-music-tools-game-dev + +作者:[Charlie Murphy][a] +译者:[robsean](https://github.com/robsean) +校对:[wxy](https://github.com/wxy) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]:https://opensource.com/users/rsg167 +[1]:http://mugeeks.org/ +[2]:https://itch.io/jam/open-jam-1 +[3]:https://opensource.com/article/17/8/open-jam-announcement +[4]:https://opensource.com/article/17/11/open-jam +[5]:https://mugeeksalpha.itch.io/mark-omy-words +[6]:http://milkytracker.titandemo.org/ +[7]:https://en.wikipedia.org/wiki/Music_tracker +[8]:https://creativecommons.org/ +[9]:https://freesound.org/ +[10]:http://ccmixter.org/view/media/home +[11]:http://milkytracker.titandemo.org/documentation/ +[12]:https://github.com/milkytracker/MilkyTracker/wiki/MilkyTracker-Guide +[13]:https://lmms.io/ +[14]:https://en.wikipedia.org/wiki/Chiptune +[15]:https://github.com/grimfang4/sfxr +[16]:https://lmms.io/wiki/index.php?title=BitInvader +[17]:https://lmms.io/wiki/index.php?title=FreeBoy +[18]:http://zynaddsubfx.sourceforge.net/ +[19]:http://www.mapeditor.org/ +[20]:https://www.piskelapp.com/ +[21]:https://github.com/piskelapp/piskel/blob/master/LICENSE +[22]:https://raw.githubusercontent.com/MUGeeksandGadgets/MarkMyWords/master/tools/unpiskel.py +[23]:https://www.python.org/ +[24]:https://www.sacredchao.net/~piman/angrydd/ +[25]:https://renpy.org/ +[26]:https://www.Pygame.org/ +[27]:http://Pygame.org/docs/tut/PygameIntro.html +[28]:https://anthony-tuininga.github.io/cx_Freeze/ +[29]:https://Pygame.org/wiki/Pygame2exe +[30]:http://www.py2exe.org/ +[31]:https://itch.io/jam/open-jam-1/entries +[32]:https://github.com/MUGeeksandGadgets/MarkMyWords/blob/3e1e8aed12ebe13acccf0d87b06d4f3bd124b9db/README.md#credits +[33]:https://twitter.com/mwcz +[34]:https://twitter.com/caramelcode +[35]:https://opensource.com/ diff --git a/published/201812/20180131 For your first HTML code lets help Batman write a love letter.md b/published/201812/20180131 For your first HTML code lets help Batman write a love letter.md new file mode 100644 index 0000000000..4272904f5c --- /dev/null +++ b/published/201812/20180131 For your first HTML code lets help Batman write a love letter.md @@ -0,0 +1,869 @@ +编写你的第一行 HTML 代码,来帮助蝙蝠侠写一封情书 +====== + +![](https://cdn-images-1.medium.com/max/1000/1*kZxbQJTdb4jn_frfqpRg9g.jpeg) + +在一个美好的夜晚,你的肚子拒绝消化你在晚餐吃的大块披萨,所以你不得不在睡梦中冲进洗手间。 + +在浴室里,当你在思考为什么会发生这种情况时,你听到一个来自通风口的低沉声音:“嘿,我是蝙蝠侠。” + +这时,你会怎么做呢? 
+ +在你恐慌并处于关键时刻之前,蝙蝠侠说:“我需要你的帮助。我是一个超级极客,但我不懂 HTML。我需要用 HTML 写一封情书,你愿意帮助我吗?” + +谁会拒绝蝙蝠侠的请求呢,对吧?所以让我们用 HTML 来写一封蝙蝠侠的情书。 + +### 你的第一个 HTML 文件 + +HTML 网页与你电脑上的其它文件一样。就同一个 .doc 文件以 MS Word 打开,.jpg 文件在图像查看器中打开一样,一个 .html 文件在浏览器中打开。 + +那么,让我们来创建一个 .html 文件。你可以在 Notepad 或其它任何编辑器中完成此任务,但我建议使用 VS Code。[在这里下载并安装 VS Code][2]。它是免费的,也是我唯一喜欢的微软产品。 + +在系统中创建一个目录,将其命名为 “HTML Practice”(不带引号)。在这个目录中,再创建一个名为 “Batman's Love Letter”(不带引号)的目录,这将是我们的项目根目录。这意味着我们所有与这个项目相关的文件都会在这里。 + +打开 VS Code,按下 `ctrl+n` 创建一个新文件,按下 `ctrl+s` 保存文件。切换到 “Batman's Love Letter” 文件夹并将其命名为 “loveletter.html”,然后单击保存。 + +现在,如果你在文件资源管理器中双击它,它将在你的默认浏览器中打开。我建议使用 Firefox 来进行 web 开发,但 Chrome 也可以。 + +让我们将这个过程与我们已经熟悉的东西联系起来。还记得你第一次拿到电脑吗?我做的第一件事是打开 MS Paint 并绘制一些东西。你在 Paint 中绘制一些东西并将其另存为图像,然后你可以在图像查看器中查看该图像。之后,如果要再次编辑该图像,你在 Paint 中重新打开它,编辑并保存它。 + +我们目前的流程非常相似。正如我们使用 Paint 创建和编辑图像一样,我们使用 VS Code 来创建和编辑 HTML 文件。就像我们使用图像查看器查看图像一样,我们使用浏览器来查看我们的 HTML 页面。 + +### HTML 中的段落 + +我们有一个空的 HTML 文件,以下是蝙蝠侠想在他的情书中写的第一段。 + +“After all the battles we fought together, after all the difficult times we saw together, and after all the good and bad moments we’ve been through, I think it’s time I let you know how I feel about you.” + +复制这些到 VS Code 中的 loveletter.html。单击 “View -> Toggle Word Wrap (alt+z)” 自动换行。 + +保存并在浏览器中打开它。如果它已经打开,单击浏览器中的刷新按钮。 + +瞧!那是你的第一个网页! + +我们的第一段已准备就绪,但这不是在 HTML 中编写段落的推荐方法。我们有一种特定的方法让浏览器知道一个文本是一个段落。 + +如果你用 `
<p>` 和 `</p>` 来包裹文本,那么浏览器将识别 `<p>` 和 `</p>` 中的文本是一个段落。我们这样做:

```
<p>After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.</p>
```

通过在 `<p>` 和 `</p>` 中编写段落,你创建了一个 HTML 元素。一个网页就是 HTML 元素的集合。

让我们首先来认识一些术语:`<p>` 是开始标签,`</p>` 是结束标签,“p” 是标签名称。元素开始和结束标签之间的文本是元素的内容。

### “style” 属性

在上面,你将看到文本覆盖屏幕的整个宽度。

我们不希望这样。没有人想要阅读这么长的行。让我们设定段落宽度为 550px。

我们可以通过使用元素的 `style` 属性来实现。你可以在其 `style` 属性中定义元素的样式(例如,在我们的示例中为宽度)。以下行将在 `p` 元素上创建一个空样式属性:

```
<p style="">...</p>
```

你看到那个空的 `""` 了吗?这就是我们定义元素外观的地方。现在我们要将宽度设置为 550px。我们这样做:

```
<p style="width:550px;">
    After all the battles we fought together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```

我们将 `width` 属性设置为 `550px`,用冒号 `:` 分隔,以分号 `;` 结束。

另外,注意我们如何将 `<p>` 和 `</p>` 放在单独的行中,文本内容用一个制表符缩进。像这样设置代码使其更具可读性。
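(LCTT 译注:下面这个小示例是补充的,原文没有这一段;`color` 的具体取值只是随意假设。)`style` 属性里其实可以同时写多条样式声明,每条之间用分号隔开,浏览器会依次应用它们:

```
<!-- 同时设置宽度和文字颜色,两条声明之间用分号分隔 -->
<p style="width:550px; color:#333333;">
    After all the battles we fought together...
</p>
```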
### HTML 中的列表

接下来,蝙蝠侠希望列出他所钦佩的人的一些优点,例如:

```
You complete my darkness with your light. I love:
- the way you see good in the worst things
- the way you handle emotionally difficult situations
- the way you look at Justice
I have learned a lot from you. You have occupied a special place in my heart over time.
```

这看起来很简单。

让我们继续,在 `</p>` 下面复制所需的文本:

```
<p style="width:550px;">
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<p style="width:550px;">
    You complete my darkness with your light. I love:
    - the way you see good in the worse
    - the way you handle emotionally difficult situations
    - the way you look at Justice
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
```

保存并刷新浏览器。

![](https://cdn-images-1.medium.com/max/1000/1*M0Ae5ZpRTucNyucfaaz4uw.jpeg)

哇!这里发生了什么,我们的列表在哪里?

如果你仔细观察,你会发现没有显示换行符。在代码中我们在新的一行中编写列表项,但这些项在浏览器中显示在一行中。

如果你想在 HTML(新行)中插入换行符,你必须使用 `<br>`。让我们来使用 `<br>`,看看它长什么样:

```
<p style="width:550px;">
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<p style="width:550px;">
    You complete my darkness with your light. I love: <br>
    - the way you see good in the worse <br>
    - the way you handle emotionally difficult situations <br>
    - the way you look at Justice <br>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
```

保存并刷新:

![](https://cdn-images-1.medium.com/max/1000/1*Mj4Sr_jUliidxFpEtu0pXw.jpeg)

好的,现在它看起来就像我们想要的那样!

另外,注意我们没有写一个 `</br>`。有些标签不需要结束标签(它们被称为自闭合标签)。

还有一件事:我们没有在两个段落之间使用 `<br>`,但第二个段落仍然是从一个新行开始,这是因为 `<p>` 元素会自动插入换行符。
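(LCTT 译注:这是一条补充说明,原文并未提及。)自闭合标签也可以写成末尾带斜杠的形式,比如 `<br/>`,两种写法在 HTML5 中都是合法的,效果完全一样:

```
<!-- <br> 与 <br/> 等价,可以按个人习惯任选一种 -->
- the way you look at Justice <br/>
```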
我们使用纯文本编写列表,但是有两个标签可以供我们使用来达到相同的目的:`<ul>` 和 `<li>`。

让我们解释一下名字的意思:ul 代表无序列表(Unordered List),li 代表列表项目(List Item)。让我们使用它们来展示我们的列表:

```
<p style="width:550px;">
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```

```
<p style="width:550px;">
    You complete my darkness with your light. I love:
    <ul>
        <li>the way you see good in the worse</li>
        <li>the way you handle emotionally difficult situations</li>
        <li>the way you look at Justice</li>
    </ul>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
```

在复制代码之前,注意差异部分:

* 我们删除了所有的 `<br>`,因为每个 `<li>` 会自动显示在新行中
* 我们将每个列表项包含在 `<li>` 和 `</li>` 之间
* 我们将所有列表项的集合包裹在 `<ul>` 和 `</ul>` 之间
* 我们没有像 `<p>` 元素那样定义 `<ul>` 元素的宽度。这是因为 `<ul>` 是 `<p>` 的子节点,`<p>` 已经被约束到 550px,所以 `<ul>` 不会超出这个范围。

让我们保存文件并刷新浏览器以查看结果:

![](https://cdn-images-1.medium.com/max/1000/1*aPlMpYVZESPwgUO3Iv-qCA.jpeg)

你会立即注意到在每个列表项之前显示了重点标志。我们现在不需要在每个列表项之前写 “-”。

经过仔细检查,你会注意到最后一行超出 550px 宽度。这是为什么?因为 HTML 不允许 `<ul>` 元素出现在 `<p>` 元素中。让我们将第一行和最后一行放在单独的 `<p>` 元素中:

```
<p style="width:550px;">
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```

```
<p style="width:550px;">
    You complete my darkness with your light. I love:
</p>
```

```
<ul style="width:550px;">
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
```

```
<p style="width:550px;">
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
```

保存并刷新。

注意,这次我们还定义了 `<ul>` 元素的宽度。那是因为我们现在已经将 `<ul>` 元素放在了 `<p>` 元素之外。
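(LCTT 译注:原文只用到了无序列表,这里补充一个有序列表的示例。)如果你希望每个列表项前自动显示编号而不是圆点,只需把 `<ul>` 换成 `<ol>`(ordered list,有序列表),`<li>` 的写法完全不变:

```
<!-- 把 ul 换成 ol,列表项会自动编号为 1、2、3 -->
<ol style="width:550px;">
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ol>
```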
定义情书中所有元素的宽度会变得很麻烦。我们有一个特定的元素用于此目的:`<div>` 元素。一个 `<div>` 元素就是一个通用容器,用于对内容进行分组,以便轻松设置样式。

让我们用 `<div>` 元素包装整个情书,并为其赋予宽度:550px。

```
<div style="width:550px;">
    <p>
        After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
    </p>
    <p>
        You complete my darkness with your light. I love:
    </p>
    <ul>
        <li>the way you see good in the worse</li>
        <li>the way you handle emotionally difficult situations</li>
        <li>the way you look at Justice</li>
    </ul>
    <p>
        I have learned a lot from you. You have occupied a special place in my heart over the time.
    </p>
</div>
```

棒极了,我们的代码现在看起来简洁多了。
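(LCTT 译注:下面是一个补充的小示例,`background` 的取值只是示意。)把整封信包在一个 `<div>` 里还有一个好处:以后想统一调整整封信的外观,只需要修改外层这一处的 `style`,内部的元素一个都不用动。比如给整封信加上浅灰色背景:

```
<!-- 只改最外层 div 的 style,里面的段落、列表都会跟着变 -->
<div style="width:550px; background:#f5f5f5;">
    ...
</div>
```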
### HTML 中的标题

到目前为止,蝙蝠侠对结果很高兴,他希望给情书加上一个标题。他想写一个标题:“Bat Letter”。当然,你已经看到这个名字了,不是吗?:D

你可以使用 `<h1>`、`<h2>`、`<h3>`、`<h4>`、`<h5>` 和 `<h6>` 标签来添加标题,`<h1>` 是最大的标题和最主要的标题,`<h6>` 是最小的标题。

![](https://cdn-images-1.medium.com/max/1000/1*Ud-NzfT-SrMgur1WX4LCkQ.jpeg)

让我们在第二段之前使用 `<h1>` 做主标题和一个副标题:

```
<div style="width:550px;">
    <h1>Bat Letter</h1>
    <p>
        After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
    </p>
```

```
    <h2>You are the light of my life</h2>
    <p>
        You complete my darkness with your light. I love:
    </p>
    <ul>
        <li>the way you see good in the worse</li>
        <li>the way you handle emotionally difficult situations</li>
        <li>the way you look at Justice</li>
    </ul>
    <p>
        I have learned a lot from you. You have occupied a special place in my heart over the time.
    </p>
</div>
```

保存,刷新。

![](https://cdn-images-1.medium.com/max/1000/1*rzyIl-gHug3nQChqfscU3w.jpeg)
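(LCTT 译注:补充一个练习用的小片段,不属于情书本身。)如果想直观比较六级标题的大小,可以把下面几行粘贴进一个空白的 HTML 文件里看看效果:

```
<h1>最大的标题</h1>
<h2>第二级标题</h2>
<h3>第三级标题</h3>
<h4>第四级标题</h4>
<h5>第五级标题</h5>
<h6>最小的标题</h6>
```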
### HTML 中的图像

我们的情书尚未完成,但在继续之前,缺少一件大事:蝙蝠侠标志。你见过哪个蝙蝠侠的东西上没有蝙蝠侠的标志吗?

并没有。

所以,让我们在情书中添加一个蝙蝠侠标志。

在 HTML 中包含图像就像在一个 Word 文件中包含图像一样。在 MS Word 中,你到 “菜单 -> 插入 -> 图像 -> 然后导航到图像位置为止 -> 选择图像 -> 单击插入”。

在 HTML 中,我们使用 `<img>` 标签让浏览器知道我们需要加载的图像,而不是单击菜单。我们在 `src` 属性中写入文件的位置和名称。如果图像在项目根目录中,我们可以简单地在 `src` 属性中写入图像文件的名称。

在我们深入编码之前,从[这里][3]下载蝙蝠侠标志。你可能希望裁剪图像中的额外空白区域。复制项目根目录中的图像并将其重命名为 “bat-logo.jpeg”。
```
<div style="width:550px;">
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg">
<p>
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```

```
<h2>You are the light of my life</h2>
<p>
    You complete my darkness with your light. I love:
</p>
<ul>
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
<p>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
</div>
```
我们在第 3 行包含了 `<img>` 标签。这个标签也是一个自闭合的标签,所以我们不需要写 `</img>`。在 `src` 属性中,我们给出了图像文件的名称。这个名称应与图像文件名完全相同,包括扩展名(.jpeg)及其大小写。

保存并刷新,查看结果。

![](https://cdn-images-1.medium.com/max/1000/1*uMNWAISOACJlzDOONcrGXw.jpeg)

该死的!刚刚发生了什么?

当使用 `<img>` 标签包含图像时,默认情况下,图像将以其原始分辨率显示。在我们的例子中,图像比 550px 宽得多。让我们使用 `style` 属性定义它的宽度:
```
<div style="width:550px;">
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg" style="width:100%">
<p>
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```

```
<h2>You are the light of my life</h2>
<p>
    You complete my darkness with your light. I love:
</p>
<ul>
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
<p>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
</div>
```
你会注意到,这次我们定义宽度使用的是 "%" 而不是 "px"。当我们以 "%" 定义宽度时,元素将占据父元素宽度的相应百分比。因此,550px 的 100% 就是 550px。

保存并刷新,查看结果。

![](https://cdn-images-1.medium.com/max/1000/1*5c0ngx3BFVlyyP6UNtfYyg.jpeg)

太棒了!这让蝙蝠侠的脸露出了羞涩的微笑 :)。

### HTML 中的粗体和斜体

现在蝙蝠侠想在最后几段中承认他的爱。他有以下文本供你用 HTML 编写:

"I have a confession to make

It feels like my chest _does_ have a heart. You make my heart beat. Your smile brings a smile to my face, your pain brings pain to my heart.

I don't show my emotions, but I think this man behind the mask is falling for you."

当阅读到这里时,你会问蝙蝠侠:"等等,这是给谁的?"蝙蝠侠说:

"这是给超人的。"

![](https://cdn-images-1.medium.com/max/1000/1*UNDvfIZQJ1Q_goHc-F-IPA.jpeg)

你说:哦!我还以为是给神奇女侠的呢。

蝙蝠侠说:不,这是给超人的,请在最后写上 "I love you Superman."。

好的,我们来写:
```
<div style="width:550px;">
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg" style="width:100%">
<p>
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```

```
<h2>You are the light of my life</h2>
<p>
    You complete my darkness with your light. I love:
</p>
<ul>
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
<p>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
    It feels like my chest does have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
    I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p>
    I love you Superman.
</p>
<p>
    Your not-so-secret-lover, <br>
    Batman
</p>
</div>
```
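顺便留意签名部分用到的 `<br>` 标签:它也是一个自闭合标签,作用是强制换行。下面是一个独立的小示意:

```
<p>第一行<br>第二行<br>第三行</p>
```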
这封信差不多完成了,蝙蝠侠另外还想做两处改变。蝙蝠侠希望把倒数第三段第一句中的 "does" 一词变为斜体,并把 "I love you Superman." 这句话变为粗体。

我们使用 `<i>` 和 `<b>` 标签分别以斜体和粗体显示文本。让我们来更新这些更改:
```
<div style="width:550px;">
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg" style="width:100%">
<p>
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```

```
<h2>You are the light of my life</h2>
<p>
    You complete my darkness with your light. I love:
</p>
<ul>
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
<p>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
    It feels like my chest <i>does</i> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
    I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p>
    <b>I love you Superman.</b>
</p>
<p>
    Your not-so-secret-lover, <br>
    Batman
</p>
</div>
```
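作为一个小补充:`<b>` 和 `<i>` 还可以互相嵌套(示意片段,不必写进信里):

```
<p>
    <b>粗体</b>、<i>斜体</i>,以及 <b><i>又粗又斜</i></b>。
</p>
```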
![](https://cdn-images-1.medium.com/max/1000/1*6hZdQJglbHUcEEHzouk2eA.jpeg)

### HTML 中的样式

你可以通过三种方式设置样式或定义 HTML 元素的外观:

* 内联样式:我们使用元素的 `style` 属性来编写样式。这是我们迄今为止使用的方式,但这不是一个好的实践。
* 嵌入式样式:我们把所有样式写在由 `<style>` 和 `</style>` 包裹的 "style" 元素中。
* 链接样式表:我们在一个以 .css 为扩展名的单独文件中编写所有元素的样式。这个文件称为样式表。

让我们来看看如何定义 `<div>` 的内联样式:

```
<div style="width:550px;">
```
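内联样式中也可以写多条"属性:值"声明,用分号隔开(下面的背景色只是随意取的示例值):

```
<div style="width:550px; background-color:beige;">
```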
我们可以在 `<style>` 里面写同样的样式:

```
div{
 width:550px;
}
```

在嵌入式样式中,我们编写的样式是与元素分开的,所以我们需要一种方法来关联元素及其样式。第一个单词 "div" 起的就是这个作用:它让浏览器知道花括号 `{...}` 里面的所有样式都属于 "div" 元素。由于这种语法确定了要把样式应用到哪个元素上,它被称为一个选择器。

我们编写样式的方式保持不变:属性(`width`)和值(`550px`)用冒号(`:`)分隔,以分号(`;`)结束。

让我们从 `<div>` 和 `<img>` 元素中删除内联样式,把它们写到 `<style>` 元素中:

```
<style>
 div{
  width:550px;
 }
 img{
  width:100%;
 }
</style>
```
```
<div>
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg">
<p>
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
    You complete my darkness with your light. I love:
</p>
<ul>
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
<p>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
    It feels like my chest <i>does</i> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
    I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p>
    <b>I love you Superman.</b>
</p>
<p>
    Your not-so-secret-lover, <br>
    Batman
</p>
</div>
```
保存并刷新,结果应保持不变。

但是有一个大问题:如果我们的 HTML 文件中有多个 `<div>` 和 `<img>` 元素该怎么办?我们在 `<style>` 元素中为 div 和 img 定义的样式会应用到页面上的每一个 div 和 img。

如果你以后在代码中添加另一个 div,那么那个 div 也会变成 550px 宽。我们不希望这样。

我们想把样式只应用到现在正在使用的这个 div 和 img 上。为此,我们需要给 div 和 img 元素赋予唯一的 id。以下是用 `id` 属性给元素赋予 id 的方法:

```
<div id="letter-container">
```

以下是在嵌入式样式中把这个 id 用作选择器的方法:

```
#letter-container{
 width:550px;
}
```

注意 id 选择器前面的 `#` 符号。

让我们给 div 和 img 元素加上 id,并更新嵌入式样式来使用这些 id 选择器:

```
<style>
 #letter-container{
  width:550px;
 }
 #header-bat-logo{
  width:100%;
 }
</style>
```
```
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
    You complete my darkness with your light. I love:
</p>
<ul>
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
<p>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
    It feels like my chest <i>does</i> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
    I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p>
    <b>I love you Superman.</b>
</p>
<p>
    Your not-so-secret-lover, <br>
    Batman
</p>
</div>
```
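可以对比一下两种选择器的作用范围(示例样式,并不属于信件代码):

```
/* 标签选择器:命中页面上所有的 div */
div{
    width:550px;
}

/* id 选择器:只命中 id 为 letter-container 的那个元素 */
#letter-container{
    width:550px;
}
```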
HTML 已经准备好了嵌入式样式。

但是,你可以看到,随着我们包含的样式越来越多,`<style>` 元素会变得很大。这可能很快就会弄乱我们的主 HTML 文件。因此,让我们更进一步,使用链接样式表:把 `<style>` 元素里的样式内容复制到一个名为 "style.css" 的新文件中,放在项目根目录下。

我们需要使用 HTML 文件中的 `<link>` 标签来把新创建的 CSS 文件链接到 HTML 文件。以下是我们如何做到这一点:

```
<link rel="stylesheet" type="text/css" href="style.css">
```

我们使用 `<link>` 元素在 HTML 文档中包含外部资源,它主要用于链接样式表。我们使用的三个属性是:

* `rel`:关系。链接文件与文档的关系。具有 .css 扩展名的文件称为样式表,因此我们保留 rel="stylesheet"。
* `type`:链接文件的类型;对于一个 CSS 文件来说它是 "text/css"。
* `href`:超文本引用。链接文件的位置。

link 元素没有 `</link>` 结尾。因此,`<link>` 也是一个自闭合的标签。

```
<link rel="gf" type="cute" href="girl.next.door">
```

如果只是得到一个女朋友,那么很容易:D

可惜没有那么简单,让我们继续前进。

这是我们 "loveletter.html" 的内容:

```
<link rel="stylesheet" type="text/css" href="style.css">
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
    You complete my darkness with your light. I love:
</p>
<ul>
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
<p>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
    It feels like my chest <i>does</i> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
    I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p>
    <b>I love you Superman.</b>
</p>
<p>
    Your not-so-secret-lover, <br>
    Batman
</p>
</div>
```
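在看 "style.css" 的内容之前,可以核对一下项目根目录里的文件(最外层目录名只是示意):

```
love-letter/
├── loveletter.html
├── style.css
└── bat-logo.jpeg
```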
"style.css" 内容:

```
#letter-container{
 width:550px;
}
#header-bat-logo{
 width:100%;
}
```

保存文件并刷新,浏览器中的输出应保持不变。

### 一些手续

我们的情书已经准备好送给蝙蝠侠,但还有一些正式的细节要处理。

与其他任何编程语言一样,HTML 自诞生以来(1990 年)经历过许多版本,当前版本是 HTML5。

那么,浏览器如何知道你使用哪个版本的 HTML 来编写页面呢?要告诉浏览器你正在使用 HTML5,你需要在页面顶部包含 `<!DOCTYPE html>`。对于旧版本的 HTML,这一行会有所不同,但你不需要了解它们,因为我们已经不再使用它们了。

此外,在之前的 HTML 版本中,我们习惯把整个文档封装在 `<html>` 标签内。整个文件分为两个主要部分:头部在 `<head>` 里面,主体在 `<body>` 里面。这在 HTML5 中不是必须的,但出于兼容性的原因,我们仍然这样做。让我们用 `<!DOCTYPE html>`、`<html>`、`<head>` 和 `<body>` 更新我们的代码:

```
<!DOCTYPE html>
<html>
<head>
    <link rel="stylesheet" type="text/css" href="style.css">
</head>
<body>
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
    You complete my darkness with your light. I love:
</p>
<ul>
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
<p>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
    It feels like my chest <i>does</i> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
    I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p>
    <b>I love you Superman.</b>
</p>
<p>
    Your not-so-secret-lover, <br>
    Batman
</p>
</div>
</body>
</html>
```

主要内容在 `<body>` 里面,元信息在 `<head>` 里面。所以我们把 `<div id="letter-container">` 放在 `<body>` 里面,并在 `<head>` 里面加载样式表。

保存并刷新,你的 HTML 页面应显示与之前相同的内容。

### HTML 的标题

我发誓,这是最后一次改变。

你可能已经注意到,浏览器选项卡的标题显示的是 HTML 文件的路径:

![](https://cdn-images-1.medium.com/max/1000/1*PASKm4ji29hbcZXVSP8afg.jpeg)

我们可以使用 `<title>` 标签来定义 HTML 文件的标题。标题标签也像链接标签一样位于 `<head>` 内部。让我们在标题中加上 "Bat Letter":

```
<!DOCTYPE html>
<html>
<head>
    <title>Bat Letter</title>
    <link rel="stylesheet" type="text/css" href="style.css">
</head>
<body>
<div id="letter-container">
<h1>Bat Letter</h1>
<img id="header-bat-logo" src="bat-logo.jpeg">
<p>
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
<h2>You are the light of my life</h2>
<p>
    You complete my darkness with your light. I love:
</p>
<ul>
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
<p>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
    It feels like my chest <i>does</i> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
    I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p>
    <b>I love you Superman.</b>
</p>
<p>
    Your not-so-secret-lover, <br>
    Batman
</p>
</div>
</body>
</html>
```

保存并刷新,你将看到选项卡上显示的是 "Bat Letter" 而不是文件路径。

蝙蝠侠的情书现在已经完成。

恭喜!你用 HTML 制作了蝙蝠侠的情书。

![](https://cdn-images-1.medium.com/max/1000/1*qC8qtrYtxAB6cJfm9aVOOQ.jpeg)

### 我们学到了什么

我们学习了以下新概念:

 * 一个 HTML 文档的结构
 * 在 HTML 中如何写元素(`<p></p>`)
 * 如何使用 style 属性在元素内编写样式(这称为内联样式,尽可能避免这种情况)
 * 如何在 `<style>...</style>` 中编写元素的样式(这称为嵌入式样式)
 * 在 HTML 中如何使用 `<link>` 在单独的文件中编写样式并链接它(这称为链接样式表)
 * 什么是标签名称、属性、开始标签和结束标签
 * 如何使用 id 属性为一个元素赋予 id
 * CSS 中的标签选择器和 id 选择器

我们学习了以下 HTML 标签:

 * `<p>`:用于段落
 * `<br>`:用于换行
 * `<ul>`、`<li>`:显示列表
 * `<div>`:用于分组我们信件的元素
 * `<h1>`、`<h2>`:用于标题和副标题
 * `<img>`:用于插入图像
 * `<b>`、`<i>`:用于粗体和斜体文字样式
 * `<style>`:用于编写嵌入式样式
 * `<link>`:用于包含外部样式表
 * `<html>`:用于包裹整个 HTML 文档
 * `<!DOCTYPE html>`:让浏览器知道我们使用的是 HTML5
 * `<head>`:用于包裹元信息,如 `<link>` 和 `<title>`
 * `<body>`:用于实际显示的 HTML 页面的主体
 * `<title>`:用于 HTML 页面的标题

* Inline style: We write styles in the element's style attribute. This is what we have been doing until now, but it's not a good practice.

* Embedded style: We write all the styles within a "style" element.

* Linked stylesheet: We write styles of all the elements in a separate file with .css extension. This file is called Stylesheet.

Let's have a look at how we defined the inline style of the "div" until now:

```
<div style="width:550px;">
```

We can write this same style inside `<style></style>` like this:

```
div{
 width:550px;
}
```

In embedded styling, the styles we write are separate from the elements. So we need a way to relate the element and its style. The first word "div" does exactly that. It lets the browser know that whatever style is inside the curly braces `{…}` belongs to the "div" element. Since this phrase determines which element to apply the style to, it's called a selector.

The way we write style remains same: property(width) and value(550px) separated by a colon(:) and ended by a semicolon(;).

Let's remove inline style from our "div" and "img" element and write it inside the `<style>` element:

```
<style>
 div{
  width:550px;
 }
 img{
  width:100%;
 }
</style>
```

```
<div>
<h1>Bat Letter</h1>
<img src="bat-logo.jpeg">
<p>
    After all the battles we faught together, after all the difficult times we saw together, after all the good and bad moments we've been through, I think it's time I let you know how I feel about you.
</p>
```

```
<h2>You are the light of my life</h2>
<p>
    You complete my darkness with your light. I love:
</p>
<ul>
    <li>the way you see good in the worse</li>
    <li>the way you handle emotionally difficult situations</li>
    <li>the way you look at Justice</li>
</ul>
<p>
    I have learned a lot from you. You have occupied a special place in my heart over the time.
</p>
<h2>I have a confession to make</h2>
<p>
    It feels like my chest <i>does</i> have a heart. You make my heart beat. Your smile brings smile on my face, your pain brings pain to my heart.
</p>
<p>
    I don't show my emotions, but I think this man behind the mask is falling for you.
</p>
<p>
    <b>I love you Superman.</b>
</p>
<p>
    Your not-so-secret-lover, <br>
    Batman
</p>
</div>
```

Save and refresh, and the result should remain the same.

There is one big problem though — what if there is more than one "div" and "img" element in our HTML file? The styles that we defined for div and img inside the "style" element will apply to every div and img on the page.

If you add another div in your code in the future, then that div will also become 550px wide. We don't want that.

We want to apply our styles to the specific div and img that we are using right now. To do this, we need to give our div and img element unique ids. Here's how you can give an id to an element using its "id" attribute:

```
<div id="letter-container">
```

and here's how to use th
- --------------------------------------------------------------------------------- - -via: https://veronneau.org/downloading-all-the-critical-role-podcasts-in-one-batch.html - -作者:[Louis-Philippe Véronneau][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://veronneau.org/ -[1]:https://en.wikipedia.org/wiki/Critical_Role -[2]:http://criticalrolepodcast.geekandsundry.com/feed/ diff --git a/sources/tech/20180220 JSON vs XML vs TOML vs CSON vs YAML.md b/sources/tech/20180220 JSON vs XML vs TOML vs CSON vs YAML.md new file mode 100644 index 0000000000..eeb290c82b --- /dev/null +++ b/sources/tech/20180220 JSON vs XML vs TOML vs CSON vs YAML.md @@ -0,0 +1,212 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (JSON vs XML vs TOML vs CSON vs YAML) +[#]: via: (https://www.zionandzion.com/json-vs-xml-vs-toml-vs-cson-vs-yaml/) +[#]: author: (Tim Anderson https://www.zionandzion.com) + +JSON vs XML vs TOML vs CSON vs YAML +====== + + +### A Super Serious Segment About Sets, Subsets, and Supersets of Sample Serialization + +I’m a developer. I read code. I write code. I write code that writes code. I write code that writes code for other code to read. It’s all very mumbo-jumbo, but beautiful in its own way. However, that last bit, writing code that writes code for other code to read, can get more convoluted than this paragraph—quickly. There are a lot of ways to do it. One not-so-convoluted way and a favorite among the developer community is through data serialization. For those who aren’t savvy on the super buzzword I just threw at you, data serialization is the process of taking some information from one system, churning it into a format that other systems can read, and then passing it along to those other systems. + +While there are enough [data serialization formats][1] out there to bury the Burj Khalifa, they all mostly fall into two categories: + + * simplicity for humans to read and write, + * and simplicity for machines to read and write. + + + +It’s difficult to have both as we humans enjoy loosely typed, flexible formatting standards that allow us to be more expressive, whereas machines tend to enjoy being told exactly what everything is without doubt or lack of detail, and consider “strict specifications” to be their favorite flavor of Ben & Jerry’s. + +Since I’m a web developer and we’re an agency who creates websites, we’ll stick to those special formats that web systems can understand, or be made to understand without much effort, and that are particularly useful for human readability: XML, JSON, TOML, CSON, and YAML. Each has benefits, cons, and appropriate use cases. + +### Facts First + +Back in the early days of the interwebs, [some really smart fellows][2] decided to put together a standard language which every system could read and creatively named it Standard Generalized Markup Language, or SGML for short. SGML was incredibly flexible and well defined by its publishers. It became the father of languages such as XML, SVG, and HTML. All three fall under the SGML specification, but are subsets with stricter rules and shorter flexibility. + +Eventually, people started seeing a great deal of benefit in having very small, concise, easy to read, and easy to generate data that could be shared programmatically between systems with very little overhead. 
Around that time, JSON was born and was able to fulfil all requirements. In turn, other languages began popping up to deal with more specialized cases such as CSON, TOML, and YAML. + +### XML: Ixnayed + +Originally, the XML language was amazingly flexible and easy to write, but its drawback was that it was verbose, difficult for humans to read, really difficult for computers to read, and had a lot of syntax that wasn’t entirely necessary to communicate information. + +Today, it’s all but dead for data serialization purposes on the web. Unless you’re writing HTML or SVG, both siblings to XML, you probably aren’t going to see XML in too many other places. Some outdated systems still use it today, but using it to pass data around tends to be overkill for the web. + +I can already hear the XML greybeards beginning to scribble upon their stone tablets as to why XML is ah-may-zing, so I’ll provide a small addendum: XML can be easy to read and write by systems and people. However, it is really, and I mean ridiculously, hard to create a system that can read it to specification. Here’s a simple, beautiful example of XML: + +``` + +Gambardella, Matthew +XML Developer's Guide +Computer +44.95 +2000-10-01 +An in-depth look at creating applications +with XML. + +``` + +Wonderful. Easy to read, reason about, write, and code a system that can read and write. But consider this example: + +``` +b"> ]> + + +b b + d + +``` + +The above is 100% valid XML. Impossible to read, understand, or reason about. Writing code that can consume and understand this would cost at least 36 heads of hair and 248 pounds of coffee grounds. We don’t have that kind of time nor coffee, and most of us greybeards are balding nowadays. So let’s let it live only in our memory alongside [css hacks][3], [internet explorer 6][4], and [vacuum tubes][5]. + +### JSON: Juxtaposition Jamboree + +Okay, we’re all in agreement. XML = bad. So, what’s a good alternative? JavaScript Object Notation, or JSON for short. JSON (read like the name Jason) was invented by Brendan Eich, and made popular by the great and powerful Douglas Crockford, the [Dutch Uncle of JavaScript][6]. It’s used just about everywhere nowadays. The format is easy to write by both human and machine, fairly easy to [parse][7] with strict rules in the specification, and flexible—allowing deep nesting of data, all of the primitive data types, and interpretation of collections as either arrays or objects. JSON became the de facto standard for transferring data from one system to another. Nearly every language out there has built-in functionality for reading and writing it. + +JSON syntax is straightforward. Square brackets denote arrays, curly braces denote records, and two values separated by semicolons denote properties (or ‘keys’) on the left, and values on the right. All keys must be wrapped in double quotes: + +``` +{ +"books": [ +{ +"id": "bk102", +"author": "Crockford, Douglas", +"title": "JavaScript: The Good Parts", +"genre": "Computer", +"price": 29.99, +"publish_date": "2008-05-01", +"description": "Unearthing the Excellence in JavaScript" +} +] +} +``` + +This should make complete sense to you. It’s nice and concise, and has stripped much of the extra nonsense from XML to convey the same amount of information. JSON is king right now, and the rest of this article will go into other language formats that are nothing more than JSON boiled down in an attempt to be either more concise or more readable by humans, but follow very similar structure. 
+ +### TOML: Truncated to Total Altruism + +TOML (Tom’s Obvious, Minimal Language) allows for defining deeply-nested data structures rather quickly and succinctly. The name-in-the-name refers to the inventor, [Tom Preston-Werner][8], an inventor and software developer who’s active in our industry. The syntax is a bit awkward when compared to JSON, and is more akin to an [ini file][9]. It’s not a bad syntax, but could take some getting used to: + +``` +[[books]] +id = 'bk101' +author = 'Crockford, Douglas' +title = 'JavaScript: The Good Parts' +genre = 'Computer' +price = 29.99 +publish_date = 2008-05-01T00:00:00+00:00 +description = 'Unearthing the Excellence in JavaScript' +``` + +A couple great features have been integrated into TOML, such as multiline strings, auto-escaping of reserved characters, datatypes such as dates, time, integers, floats, scientific notation, and “table expansion”. That last bit is special, and is what makes TOML so concise: + +``` +[a.b.c] +d = 'Hello' +e = 'World' +``` + +The above expands to the following: + +``` +{ +"a": { +"b": { +"c": { +"d": "Hello" +"e": "World" +} +} +} +} +``` + +You can definitely see how much you can save in both time and file length using TOML. There are few systems which use it or something very similar for configuration, and that is its biggest con. There simply aren’t very many languages or libraries out there written to interpret TOML. + +### CSON: Simple Samples Enslaved by Specific Systems + +First off, there are two CSON specifications. One stands for CoffeeScript Object Notation, the other stands for Cursive Script Object Notation. The latter isn’t used too often, so we won’t be getting into it. Let’s just focus on the CoffeeScript one. + +[CSON][10] will take a bit of intro. First, let’s talk about CoffeeScript. [CoffeeScript][11] is a language that runs through a compiler to generate JavaScript. It allows you to write JavaScript in a more syntactically concise way, and have it [transcompiled][12] into actual JavaScript, which you would then use in your web application. CoffeeScript makes writing JavaScript easier by removing a lot of the extra syntax necessary in JavaScript. A big one that CoffeeScript gets rid of is curly braces—no need for them. In that same token, CSON is JSON without the curly braces. It instead relies on indentation to determine hierarchy of your data. CSON is very easy to read and write and usually requires fewer lines of code than JSON because there are no brackets. + +CSON also offers up some extra niceties that JSON doesn’t have to offer. Multiline strings are incredibly easy to write, you can enter [comments][13] by starting a line with a hash, and there’s no need for separating key-value pairs with commas. + +``` +books: [ +id: 'bk102' +author: 'Crockford, Douglas' +title: 'JavaScript: The Good Parts' +genre: 'Computer' +price: 29.99 +publish_date: '2008-05-01' +description: 'Unearthing the Excellence in JavaScript' +] +``` + +Here’s the big issue with CSON. It’s **CoffeeScript** Object Notation. Meaning CoffeeScript is what you use to parse/tokenize/lex/transcompile or otherwise use CSON. CoffeeScript is the system that reads the data. If the intent of data serialization is to allow data to be passed from one system to another, and here we have a data serialization format that’s only read by a single system, well that makes it about as useful as a fireproof match, or a waterproof sponge, or that annoyingly flimsy fork part of a spork. 
+ +If this format is adopted by other systems, it could be pretty useful in the developer world. Thus far that hasn’t happened in a comprehensive manner, so using it in alternative languages such as PHP or JAVA are a no-go. + +### YAML: Yielding Yips from Youngsters + +Developers rejoice, as YAML comes into the scene from [one of the contributors to Python][14]. YAML has the same feature set and similar syntax as CSON, a boatload of new features, and parsers available in just about every web programming language there is. It also has some extra features, like circular referencing, soft-wraps, multi-line keys, typecasting tags, binary data, object merging, and [set maps][15]. It has incredibly good human readability and writability, and is a superset of JSON, so you can use fully qualified JSON syntax inside YAML and all will work well. You almost never need quotes, and it can interpret most of your base data types (strings, integers, floats, booleans, etc.). + +``` +books: +- id: bk102 +author: Crockford, Douglas +title: 'JavaScript: The Good Parts' +genre: Computer +price: 29.99 +publish_date: !!str 2008-05-01 +description: Unearthing the Excellence in JavaScript +``` + +The younglings of the industry are rapidly adopting YAML as their preferred data serialization and system configuration format. They are smart to do so. YAML has all the benefits of being as terse as CSON, and all the features of datatype interpretation as JSON. YAML is as easy to read as Canadians are to hang out with. + +There are two issues with YAML that stick out to me, and the first is a big one. At the time of this writing, YAML parsers haven’t yet been built into very many languages, so you’ll need to use a third-party library or extension for your chosen language to parse .yaml files. This wouldn’t be a big deal, however it seems most developers who’ve created parsers for YAML have chosen to throw “additional features” into their parsers at random. Some allow [tokenization][16], some allow [chain referencing][17], some even allow inline calculations. This is all well and good (sort of), except that none of these features are part of the specification, and so are difficult to find amongst other parsers in other languages. This results in system-locking; you end up with the same issue that CSON is subject to. If you use a feature found in only one parser, other parsers won’t be able to interpret the input. Most of these features are nonsense that don’t belong in a dataset, but rather in your application logic, so it’s best to simply ignore them and write your YAML to specification. + +The second issue is there are few parsers that yet completely implement the specification. All the basics are there, but it can be difficult to find some of the more complex and newer things like soft-wraps, document markers, and circular references in your preferred language. I have yet to see an absolute need for these things, so hopefully they shouldn’t slow you down too much. With the above considered, I tend to keep to the more matured feature set presented in the [1.1 specification][18], and avoid the newer stuff found in the [1.2 specification][19]. However, programming is an ever-evolving monster, so by the time you finish reading this article, you’re likely to be able to use the 1.2 spec. + +### Final Philosophy + +The final word here is that each serialization language should be treated with a case-by-case reverence. 
Some are the bee’s knees when it comes to machine readability, some are the cat’s meow for human readability, and some are simply gilded turds. Here’s the ultimate breakdown: If you are writing code for other code to read, use YAML. If you are writing code that writes code for other code to read, use JSON. Finally, if you are writing code that transcompiles code into code that other code will read, rethink your life choices. + +-------------------------------------------------------------------------------- + +via: https://www.zionandzion.com/json-vs-xml-vs-toml-vs-cson-vs-yaml/ + +作者:[Tim Anderson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.zionandzion.com +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Comparison_of_data_serialization_formats +[2]: https://en.wikipedia.org/wiki/Standard_Generalized_Markup_Language#History +[3]: https://www.quirksmode.org/css/csshacks.html +[4]: http://www.ie6death.com/ +[5]: https://en.wikipedia.org/wiki/Vacuum_tube +[6]: https://twitter.com/BrendanEich/status/773403975865470976 +[7]: https://en.wikipedia.org/wiki/Parsing#Parser +[8]: https://en.wikipedia.org/wiki/Tom_Preston-Werner +[9]: https://en.wikipedia.org/wiki/INI_file +[10]: https://github.com/bevry/cson#what-is-cson +[11]: http://coffeescript.org/ +[12]: https://en.wikipedia.org/wiki/Source-to-source_compiler +[13]: https://en.wikipedia.org/wiki/Comment_(computer_programming) +[14]: http://clarkevans.com/ +[15]: http://exploringjs.com/es6/ch_maps-sets.html +[16]: https://www.tutorialspoint.com/compiler_design/compiler_design_lexical_analysis.htm +[17]: https://en.wikipedia.org/wiki/Fluent_interface +[18]: http://yaml.org/spec/1.1/current.html +[19]: http://www.yaml.org/spec/1.2/spec.html diff --git a/sources/tech/20180226 -Getting to Done- on the Linux command line.md b/sources/tech/20180226 -Getting to Done- on the Linux command line.md deleted file mode 100644 index c325a0d884..0000000000 --- a/sources/tech/20180226 -Getting to Done- on the Linux command line.md +++ /dev/null @@ -1,126 +0,0 @@ -'Getting to Done' on the Linux command line -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_terminals.png?itok=CfBqYBah) -There is a lot of talk about getting things done at the command line. How many articles are there about using obscure flags with `ls`, nifty regular expressions with Sed and Awk, and how to parse out lots of text with Perl? That isn't what this is about. - -This is about [Getting _to_ Done][1], making sure that the stuff we have to do actually gets tracked and done using tools that don't require a graphical desktop, a web browser, or an internet connection. To do this, we'll look at four ways of tracking your to-do list: plaintext files, Todo.txt, TaskWarrior, and Org-mode. - -### Plain (and simple) text - - -![plaintext][3] - -I like to use Vim, but you can use Nano too. - -The most straightforward way to manage your to-do list is using a plaintext file in your editor of choice. Just open an empty file and add tasks, one per line. When you are done, delete the line. Simple, effective, and it doesn't matter what you use to do it. There are a couple of drawbacks to this method, though. Once you delete a line and save the file, it is gone forever. That can be a problem if you have to report on what you have done this week or last week. 
And while using a simple file is flexible, it can also get cluttered really easily. - -### Todo.txt: Plaintext leveled up - - -![todo.txt screen][5] - -Neat, organized, and easy to use - -That leads us to the [Todo.txt][6] file format and application. Installation is simple—[download][7] the latest release from GitHub and run `sudo make install` from the unpacked archive. - - -![Installing todo.txt][9] - -It works from a Git clone as well. - -Todo.txt makes it very easy to add tasks, list tasks, and mark them as done: - -| `todo.sh add "Some Task"` | add "Some Task" to my todo list | -| `todo.sh ls` | list all my tasks | -| `todo.sh ls due:2018-02-15` | list all tasks due on February 15, 2018 | -| `todo.sh do 3` | mark task number 3 as "done" | - -The actual list is still in plaintext, and you can edit it with your favorite text editor as long as you follow the [correct format][10]. - -There is also a very robust help built into the application. - - -![Syntax highlighting in todo.txt][12] - -You can even get syntax highlighting. - -There is also a large selection of add-ons, as well as specifications for writing your own. There are even browser extensions, mobile apps, and desktop apps that support the Todo.txt format. - - -![GNOME extensions in todo.txt][14] - -Even GNOME extensions. - -The biggest drawback to Todo.txt is the lack of an automatic or built-in synchronization mechanism. Most (if not all) of the browser extensions and mobile apps require Dropbox to perform synchronization between the app and the copy on your desktop. If you would like something with sync built-in, we have... - -### Taskwarrior: Now we're cooking with Python - -[Taskwarrior][15] is a Python application with many of the same features as Todo.txt. However, it stores the data in a database and has built-in synchronization capabilities. It also keeps track of what is next, notes how old tasks are, and will warn you if you have something more important to do than what you just did. - -[Installation][16] of Taskwarrior can be done either with your distribution's package manager, through Python's `pip` utility, or built from source. Using it is also pretty straightforward, with commands similar to Todo.txt: - -| `task add "Some Task"` | Add "Some Task" to the list | -| `task list` | List all tasks | -| `task list due ``:today` | List all tasks due on today's date | -| `task do 3` | Complete task number 3 | - -Taskwarrior also has some pretty nice text user interfaces. - -![Taskwarrior in Vit][18] - -I like Vit, which was inspired by Vim. - -Unlike Todo.txt, Taskwarrior can synchronize with a local or remote server. A very basic synchronization server called `taskd` is available if you wish to run your own, and there are several services available if you do not. - -Taskwarrior also has a thriving and extensive ecosystem of add-ons and extensions, as well as mobile and desktop apps. - -![Taskwarrior on GNOME][20] - -Taskwarrior looks really nice on GNOME. - -The only disadvantage to Taskwarrior is that, unlike the other programs listed here, you cannot directly modify the to-do list itself. You can export the task list to various formats, modify the export, and then re-import the files, but it is a lot clunkier than just opening the file directly in a text editor. - -Which brings us to the most powerful of them all... - -### Emacs Org-mode: Hulk smash tasks - -![Org-mode][22] - -Emacs has everything. - -Emacs [Org-mode][23] is by far the most powerful, most flexible open source to-do list manager out there. 
It supports multiple files, uses plaintext, is almost infinitely customizable, and understands calendars, due dates, and schedules. It is also significantly more complicated to set up than the other applications listed here. But once it is set up, it does everything the other applications do and more. If you are familiar with or a fan of [Bullet Journals][24], Org-mode is possibly the closest you can get on a computer. - -Org-mode will run anywhere Emacs runs, and there are a few mobile applications that can interact with it as well. Unfortunately, there are no desktop apps or browser extensions that support Org. Despite all that, Org-mode is still one of the best applications for tracking your to-do list, since it is so very powerful. - -### Choose your tool - -In the end, the goal of all these programs is to help you track what you need to do and make sure you don't forget to do something. While they all have the same basic functions, choosing which one is right for you depends on a lot of factors. Do you want synchronization built-in or not? Do you need a mobile app? Do any of the add-ons include a "must have" feature? Whatever your choice, remember that the program alone cannot make you more organized, but it can help. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/2/getting-to-done-agile-linux-command-line - -作者:[Kevin Sonney][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[1]:https://www.scruminc.com/getting-done/ -[3]:https://opensource.com/sites/default/files/u128651/plain-text.png (plaintext) -[5]:https://opensource.com/sites/default/files/u128651/todo-txt.png (todo.txt screen) -[6]:http://todotxt.org/ -[7]:https://github.com/todotxt/todo.txt-cli/releases -[9]:https://opensource.com/sites/default/files/u128651/todo-txt-install.png (Installing todo.txt) -[10]:https://github.com/todotxt/todo.txt -[12]:https://opensource.com/sites/default/files/u128651/todo-txt-vim.png (Syntax highlighting in todo.txt) -[14]:https://opensource.com/sites/default/files/u128651/tod-txt-gnome.png (GNOME extensions in todo.txt) -[15]:https://taskwarrior.org/ -[16]:https://taskwarrior.org/download/ -[18]:https://opensource.com/sites/default/files/u128651/taskwarrior-vit.png (Taskwarrior in Vit) -[20]:https://opensource.com/sites/default/files/u128651/taskwarrior-gnome.png (Taskwarrior on GNOME) -[22]:https://opensource.com/sites/default/files/u128651/emacs-org-mode.png (Org-mode) -[23]:https://orgmode.org/ -[24]:http://bulletjournal.com/ diff --git a/sources/tech/20180302 How to manage your workstation configuration with Ansible.md b/sources/tech/20180302 How to manage your workstation configuration with Ansible.md deleted file mode 100644 index fd24cd48ed..0000000000 --- a/sources/tech/20180302 How to manage your workstation configuration with Ansible.md +++ /dev/null @@ -1,170 +0,0 @@ -How to manage your workstation configuration with Ansible -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb) - -Configuration management is a very important aspect of both server administration and DevOps. The "infrastructure as code" methodology makes it easy to deploy servers in various configurations and dynamically scale an organization's resources to keep up with user demands. 
But less attention is paid to individual administrators who want to automate the setup of their own laptops and desktops (workstations). - -In this series, I'll show you how to automate your workstation setup via [Ansible][1] , which will allow you to easily restore your entire configuration if you want or need to reload your machine. In addition, if you have multiple workstations, you can use this same approach to make the configuration identical on each. In this first article, we'll set up basic configuration management for our personal or work computers and set the foundation for the rest of the series. By the end of this article, you'll have a working setup to benefit from right away. Each article will automate more things and grow in complexity. - -### Why Ansible? - -Many configuration management solutions are available, including Salt Stack, Chef, and Puppet. I prefer Ansible because it's lighter in terms of resource utilization, its syntax is easier to read, and when harnessed properly it can revolutionize your configuration management. Ansible's lightweight nature is especially relevant to the topic at hand, because we may not want to run an entire server just to automate the setup of our laptops and desktops. Ideally, we want something fast; something we can use to get up and running quickly should we need to restore our workstations or synchronize our configuration between multiple machines. My specific method for Ansible (which I'll demonstrate in this article) is perfect for this—there's no server to maintain. You just download your configuration and run it. - -### My approach - -Typically, Ansible is run from a central server. It utilizes an inventory file, which is a text file that contains a list of all the hosts and their IP addresses or domain names we want Ansible to manage. This is great for static environments, but it is not ideal for workstations. The reason being we really don't know what the status of our workstations will be at any one moment. Perhaps I powered down my desktop or my laptop may be suspended and stowed in my bag. In either case, the Ansible server would complain, as it can't reach my machines if they are offline. We need something that's more of an on-demand approach, and the way we'll accomplish that is by utilizing `ansible-pull`. The `ansible-pull` command, which is part of Ansible, allows you to download your configuration from a Git repository and apply it immediately. You won't need to maintain a server or an inventory list; you simply run the `ansible-pull` command, feed it a Git repository URL, and it will do the rest for you. - -### Getting started - -First, install Ansible on the computer you want it to manage. One problem is that a lot of distributions ship with an older version. I can tell you from experience you'll definitely want the latest version available. New features are introduced into Ansible quite frequently, and if you're running an older version, example syntax you find online may not be functional because it's using features that aren't implemented in the version you have installed. Even point releases have quite a few new features. One example of this is the `dconf` module, which is new to Ansible as of 2.4. If you try to utilize syntax that makes use of this module, unless you have 2.4 or newer it will fail. In Ubuntu and its derivatives, we can easily install the latest version of Ansible with the official personal package archive ([PPA][2]). 
The following commands will do the trick: -``` -sudo apt-get install software-properties-common - -sudo apt-add-repository ppa:ansible/ansible - -sudo apt-get update - -sudo apt-get install ansible - -``` - -If you're not using Ubuntu, [consult Ansible's documentation][3] on how to obtain it for your platform. - -Next, we'll need a Git repository to hold our configuration. The easiest way to satisfy this requirement is to create an empty repository on GitHub, or you can utilize your own Git server if you have one. To keep things simple, I'll assume you're using GitHub, so adjust the commands if you're using something else. Create a repository in GitHub; you'll end up with a repository URL that will be similar to this: -``` -git@github.com:/ansible.git - -``` - -Clone that repository to your local working directory (ignore any message that complains that the repository is empty): -``` -git clone git@github.com:/ansible.git - -``` - -Now we have an empty repository we can work with. Change your working directory to be inside the repository (`cd ./ansible` for example) and create a file named `local.yml` in your favorite text editor. Place the following configuration in that file: -``` -- hosts: localhost - -  become: true - -  tasks: - -  - name: Install htop - -    apt: name=htop - -``` - -The file you just created is known as a **playbook** , and the instruction to install `htop` (a package I arbitrarily picked to serve as an example) is known as a **play**. The playbook itself is a file in the YAML format, which is a simple to read markup language. A full walkthrough of YAML is beyond the scope of this article, but you don't need to have an expert understanding of it to be proficient with Ansible. The configuration is easy to read; by simply looking at this file, you can easily glean that we're installing the `htop` package. Pay special attention to the `apt` module on the last line, which will only work on Debian-based systems. You can change this to `yum` instead of `apt` if you're using a Red Hat platform or change it to `dnf` if you're using Fedora. The `name` line simply gives information regarding our task and will be shown in the output. Therefore, you'll want to make sure the name is descriptive so it's easy to find if you need to troubleshoot multiple plays. - -Next, let's commit our new file to our repository: -``` -git add local.yml - -git commit -m "initial commit" - -git push origin master - -``` - -Now our new playbook should be present in our repository on GitHub. We can apply the playbook we created with the following command: -``` -sudo ansible-pull -U https://github.com//ansible.git - -``` - -If executed properly, the `htop` package should be installed on your system. You might've seen some warnings near the beginning that complain about the lack of an inventory file. This is fine, as we're not using an inventory file (nor do we need to for this use). At the end of the output, it will give you an overview of what it did. If `htop` was installed properly, you should see `changed=1` on the last line of the output. - -How did this work? The `ansible-pull` command uses the `-U` option, which expects a repository URL. I gave it the `https` version of the repository URL for security purposes because I don't want any hosts to have write access back to the repository (`https` is read-only by default). 
The `local.yml` playbook name is assumed, so we didn't need to provide a filename for the playbook—it will automatically run a playbook named `local.yml` if it finds it in the repository's root. Next, we used `sudo` in front of the command since we are modifying the system. - -Let's go ahead and add additional packages to our playbook. I'll add two additional packages so that it looks like this: -``` -- hosts: localhost - -  become: true - -  tasks: - -  - name: Install htop - -    apt: name=htop - - - -  - name: Install mc - -    apt: name=mc - -    - -  - name: Install tmux - -    apt: name=tmux - -``` - -I added additional plays (tasks) for installing two other packages, `mc` and `tmux`. It doesn't matter what packages you choose to have this playbook install; I just picked these arbitrarily. You should install whichever packages you want all your systems to have. The only caveat is that you have to know that the packages exist in the repository for your distribution ahead of time. - -Before we commit and apply this updated playbook, we should clean it up. It will work fine as it is, but (to be honest) it looks kind of messy. Let's try installing all three packages in just one play. Replace the contents of your `local.yml` with this: -``` -- hosts: localhost - -  become: true - -  tasks: - -  - name: Install packages - -    apt: name={{item}} - -    with_items: - -      - htop - -      - mc - -      - tmux - -``` - -Now that looks cleaner and more efficient. We used `with_items` to consolidate our package list into one play. If we want to add additional packages, we simply add another line with a hyphen and a package name. Consider `with_items` to be similar to a `for` loop. Every package we list will be installed. - -Commit our new changes back to the repository: -``` -git add local.yml - -git commit -m "added additional packages, cleaned up formatting" - -git push origin master - -``` - -Now we can run our playbook to benefit from the new configuration: -``` -sudo ansible-pull -U https://github.com//ansible.git - -``` - -Admittedly, this example doesn't do much yet; all it does is install a few packages. You could've installed these packages much faster just using your package manager. However, as this series continues, these examples will become more complex and we'll automate more things. By the end, the Ansible configuration you'll create will automate more and more tasks. For example, the one I use automates the installation of hundreds of packages, sets up `cron` jobs, handles desktop configuration, and more. - -From what we've accomplished so far, you can probably already see the big picture. All we had to do was create a repository, put a playbook in that repository, then utilize the `ansible-pull` command to pull down that repository and apply it to our machine. We didn't need to set up a server. In the future, if we want to change our config, we can pull down the repo, update it, then push it back to our repository and apply it. If we're setting up a new machine, we only need to install Ansible and apply the configuration. - -In the next article, we'll automate this even further via `cron` and some additional items. In the meantime, I've copied the code for this article into [my GitHub repository][4] so you can check your syntax against mine. I'll update the code as we go along. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/manage-workstation-ansible - -作者:[Jay LaCroix][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jlacroix -[1]:https://www.ansible.com/ -[2]:https://launchpad.net/ubuntu/+ppas -[3]:http://docs.ansible.com/ansible/latest/intro_installation.html -[4]:https://github.com/jlacroix82/ansible_article diff --git a/sources/tech/20180307 Protecting Code Integrity with PGP - Part 4- Moving Your Master Key to Offline Storage.md b/sources/tech/20180307 Protecting Code Integrity with PGP - Part 4- Moving Your Master Key to Offline Storage.md deleted file mode 100644 index df00e7e05e..0000000000 --- a/sources/tech/20180307 Protecting Code Integrity with PGP - Part 4- Moving Your Master Key to Offline Storage.md +++ /dev/null @@ -1,167 +0,0 @@ -Protecting Code Integrity with PGP — Part 4: Moving Your Master Key to Offline Storage -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/industry-1920.jpg?itok=gI3QraS8) -In this tutorial series, we're providing practical guidelines for using PGP. You can catch up on previous articles here: - -[Part 1: Basic Concepts and Tools][1] - -[Part 2: Generating Your Master Key][2] - -[Part 3: Generating PGP Subkeys][3] - -Here in part 4, we continue the series with a look at how and why to move your master key from your home directory to offline storage. Let's get started. - -### Checklist - - * Prepare encrypted detachable storage (ESSENTIAL) - - * Back up your GnuPG directory (ESSENTIAL) - - * Remove the master key from your home directory (NICE) - - * Remove the revocation certificate from your home directory (NICE) - - - - -#### Considerations - -Why would you want to remove your master [C] key from your home directory? This is generally done to prevent your master key from being stolen or accidentally leaked. Private keys are tasty targets for malicious actors -- we know this from several successful malware attacks that scanned users' home directories and uploaded any private key content found there. - -It would be very damaging for any developer to have their PGP keys stolen -- in the Free Software world, this is often tantamount to identity theft. Removing private keys from your home directory helps protect you from such events. - -##### Back up your GnuPG directory - -**!!!Do not skip this step!!!** - -It is important to have a readily available backup of your PGP keys should you need to recover them (this is different from the disaster-level preparedness we did with paperkey). - -##### Prepare detachable encrypted storage - -Start by getting a small USB "thumb" drive (preferably two!) that you will use for backup purposes. You will first need to encrypt them: - -For the encryption passphrase, you can use the same one as on your master key. - -##### Back up your GnuPG directory - -Once the encryption process is over, re-insert the USB drive and make sure it gets properly mounted. Find out the full mount point of the device, for example by running the mount command (under Linux, external media usually gets mounted under /media/disk, under Mac it's /Volumes). 
- -Once you know the full mount path, copy your entire GnuPG directory there: -``` -$ cp -rp ~/.gnupg [/media/disk/name]/gnupg-backup - -``` - -(Note: If you get any Operation not supported on socket errors, those are benign and you can ignore them.) - -You should now test to make sure everything still works: -``` -$ gpg --homedir=[/media/disk/name]/gnupg-backup --list-key [fpr] - -``` - -If you don't get any errors, then you should be good to go. Unmount the USB drive and distinctly label it, so you don't blow it away next time you need to use a random USB drive. Then, put in a safe place -- but not too far away, because you'll need to use it every now and again for things like editing identities, adding or revoking subkeys, or signing other people's keys. - -##### Remove the master key - -The files in our home directory are not as well protected as we like to think. They can be leaked or stolen via many different means: - - * By accident when making quick homedir copies to set up a new workstation - - * By systems administrator negligence or malice - - * Via poorly secured backups - - * Via malware in desktop apps (browsers, pdf viewers, etc) - - * Via coercion when crossing international borders - - - - -Protecting your key with a good passphrase greatly helps reduce the risk of any of the above, but passphrases can be discovered via keyloggers, shoulder-surfing, or any number of other means. For this reason, the recommended setup is to remove your master key from your home directory and store it on offline storage. - -###### Removing your master key - -Please see the previous section and make sure you have backed up your GnuPG directory in its entirety. What we are about to do will render your key useless if you do not have a usable backup! - -First, identify the keygrip of your master key: -``` -$ gpg --with-keygrip --list-key [fpr] - -``` - -The output will be something like this: -``` -pub rsa4096 2017-12-06 [C] [expires: 2019-12-06] - 111122223333444455556666AAAABBBBCCCCDDDD - Keygrip = AAAA999988887777666655554444333322221111 -uid [ultimate] Alice Engineer -uid [ultimate] Alice Engineer -sub rsa2048 2017-12-06 [E] - Keygrip = BBBB999988887777666655554444333322221111 -sub rsa2048 2017-12-06 [S] - Keygrip = CCCC999988887777666655554444333322221111 - -``` - -Find the keygrip entry that is beneath the pub line (right under the master key fingerprint). This will correspond directly to a file in your home .gnupg directory: -``` -$ cd ~/.gnupg/private-keys-v1.d -$ ls -AAAA999988887777666655554444333322221111.key -BBBB999988887777666655554444333322221111.key -CCCC999988887777666655554444333322221111.key - -``` - -All you have to do is simply remove the .key file that corresponds to the master keygrip: -``` -$ cd ~/.gnupg/private-keys-v1.d -$ rm AAAA999988887777666655554444333322221111.key - -``` - -Now, if you issue the --list-secret-keys command, it will show that the master key is missing (the # indicates it is not available): -``` -$ gpg --list-secret-keys -sec# rsa4096 2017-12-06 [C] [expires: 2019-12-06] - 111122223333444455556666AAAABBBBCCCCDDDD -uid [ultimate] Alice Engineer -uid [ultimate] Alice Engineer -ssb rsa2048 2017-12-06 [E] -ssb rsa2048 2017-12-06 [S] - -``` - -##### Remove the revocation certificate - -Another file you should remove (but keep in backups) is the revocation certificate that was automatically created with your master key. 
A revocation certificate allows someone to permanently mark your key as revoked, meaning it can no longer be used or trusted for any purpose. You would normally use it to revoke a key that, for some reason, you can no longer control -- for example, if you had lost the key passphrase. - -Just as with the master key, if a revocation certificate leaks into malicious hands, it can be used to destroy your developer digital identity, so it's better to remove it from your home directory. -``` -cd ~/.gnupg/openpgp-revocs.d -rm [fpr].rev - -``` - -Next time, you'll learn how to secure your subkeys as well. Stay tuned. - -Learn more about Linux through the free ["Introduction to Linux" ][4]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-4-moving-your-master-key-offline-storage - -作者:[Konstantin Ryabitsev][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/mricon -[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools -[2]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key -[3]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-3-generating-pgp-subkeys -[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180314 Protecting Code Integrity with PGP - Part 5- Moving Subkeys to a Hardware Device.md b/sources/tech/20180314 Protecting Code Integrity with PGP - Part 5- Moving Subkeys to a Hardware Device.md deleted file mode 100644 index ac862ff64e..0000000000 --- a/sources/tech/20180314 Protecting Code Integrity with PGP - Part 5- Moving Subkeys to a Hardware Device.md +++ /dev/null @@ -1,303 +0,0 @@ -Protecting Code Integrity with PGP — Part 5: Moving Subkeys to a Hardware Device -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/pgp-keys.jpg?itok=aS6IWGpq) - -In this tutorial series, we're providing practical guidelines for using PGP. If you missed the previous article, you can catch up with the links below. But, in this article, we'll continue our discussion about securing your keys and look at some tips for moving your subkeys to a specialized hardware device. - -[Part 1: Basic Concepts and Tools][1] - -[Part 2: Generating Your Master Key][2] - -[Part 3: Generating PGP Subkeys][3] - -[Part 4: Moving Your Master Key to Offline Storage][4] - -### Checklist - - * Get a GnuPG-compatible hardware device (NICE) - - * Configure the device to work with GnuPG (NICE) - - * Set the user and admin PINs (NICE) - - * Move your subkeys to the device (NICE) - - - - -### Considerations - -Even though the master key is now safe from being leaked or stolen, the subkeys are still in your home directory. Anyone who manages to get their hands on those will be able to decrypt your communication or fake your signatures (if they know the passphrase). Furthermore, each time a GnuPG operation is performed, the keys are loaded into system memory and can be stolen from there by sufficiently advanced malware (think Meltdown and Spectre). 
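Before moving your keys anywhere, it is worth auditing what is currently exposed on disk. As a quick sketch (the listing below is illustrative, reusing the example key from earlier in this series), a `#` after `sec` means the master secret key is already safely offline, while plain `ssb` entries are subkeys still stored in your home directory:
```
$ gpg --list-secret-keys
sec#  rsa4096 2017-12-06 [C] [expires: 2019-12-06]
      111122223333444455556666AAAABBBBCCCCDDDD
ssb   rsa2048 2017-12-06 [E]
ssb   rsa2048 2017-12-06 [S]
```

Those remaining `ssb` entries are exactly what the steps below will move to a smartcard.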
- -The best way to completely protect your keys is to move them to a specialized hardware device that is capable of smartcard operations. - -#### The benefits of smartcards - -A smartcard contains a cryptographic chip that is capable of storing private keys and performing crypto operations directly on the card itself. Because the key contents never leave the smartcard, the operating system of the computer into which you plug in the hardware device is not able to retrieve the private keys themselves. This is very different from the encrypted USB storage device we used earlier for backup purposes -- while that USB device is plugged in and decrypted, the operating system is still able to access the private key contents. Using external encrypted USB media is not a substitute to having a smartcard-capable device. - -Some other benefits of smartcards: - - * They are relatively cheap and easy to obtain - - * They are small and easy to carry with you - - * They can be used with multiple devices - - * Many of them are tamper-resistant (depends on manufacturer) - - - - -#### Available smartcard devices - -Smartcards started out embedded into actual wallet-sized cards, which earned them their name. You can still buy and use GnuPG-capable smartcards, and they remain one of the cheapest available devices you can get. However, actual smartcards have one important downside: they require a smartcard reader, and very few laptops come with one. - -For this reason, manufacturers have started providing small USB devices, the size of a USB thumb drive or smaller, that either have the microsim-sized smartcard pre-inserted, or that simply implement the smartcard protocol features on the internal chip. Here are a few recommendations: - - * [Nitrokey Start][5]: Open hardware and Free Software: one of the cheapest options for GnuPG use, but with fewest extra security features - - * [Nitrokey Pro][6]: Similar to the Nitrokey Start, but is tamper-resistant and offers more security features (but not U2F, see the Fido U2F section of the guide) - - * [Yubikey 4][7]: Proprietary hardware and software, but cheaper than Nitrokey Pro and comes available in the USB-C form that is more useful with newer laptops; also offers additional security features such as U2F - - - - -Our recommendation is to pick a device that is capable of both smartcard functionality and U2F, which, at the time of writing, means a Yubikey 4. - -#### Configuring your smartcard device - -Your smartcard device should Just Work (TM) the moment you plug it into any modern Linux or Mac workstation. You can verify it by running: -``` -$ gpg --card-status - -``` - -If you didn't get an error, but a full listing of the card details, then you are good to go. Unfortunately, troubleshooting all possible reasons why things may not be working for you is way beyond the scope of this guide. If you are having trouble getting the card to work with GnuPG, please seek support via your operating system's usual support channels. - -##### PINs don't have to be numbers - -Note, that despite having the name "PIN" (and implying that it must be a "number"), neither the user PIN nor the admin PIN on the card need to be numbers. - -Your device will probably have default user and admin PINs set up when it arrives. For Yubikeys, these are 123456 and 12345678, respectively. If those don't work for you, please check any accompanying documentation that came with your device. 
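One more troubleshooting hint before the setup steps: if `gpg --card-status` reports no card at all, first confirm that the operating system even sees the hardware. A sketch for Linux (vendor strings vary by device):
```
# Look for the token among connected USB devices
lsusb | grep -i -e yubi -e nitro
```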
- -##### Quick setup - -To configure your smartcard, you will need to use the GnuPG menu system, as there are no convenient command-line switches: -``` -$ gpg --card-edit -[...omitted...] -gpg/card> admin -Admin commands are allowed -gpg/card> passwd - -``` - -You should set the user PIN (1), Admin PIN (3), and the Reset Code (4). Please make sure to record and store these in a safe place -- especially the Admin PIN and the Reset Code (which allows you to completely wipe the smartcard). You so rarely need to use the Admin PIN, that you will inevitably forget what it is if you do not record it. - -Getting back to the main card menu, you can also set other values (such as name, sex, login data, etc), but it's not necessary and will additionally leak information about your smartcard should you lose it. - -#### Moving the subkeys to your smartcard - -Exit the card menu (using "q") and save all changes. Next, let's move your subkeys onto the smartcard. You will need both your PGP key passphrase and the admin PIN of the card for most operations. Remember, that [fpr] stands for the full 40-character fingerprint of your key. -``` -$ gpg --edit-key [fpr] - -Secret subkeys are available. - -pub rsa4096/AAAABBBBCCCCDDDD - created: 2017-12-07 expires: 2019-12-07 usage: C - trust: ultimate validity: ultimate -ssb rsa2048/1111222233334444 - created: 2017-12-07 expires: never usage: E -ssb rsa2048/5555666677778888 - created: 2017-12-07 expires: never usage: S -[ultimate] (1). Alice Engineer -[ultimate] (2) Alice Engineer - -gpg> - -``` - -Using --edit-key puts us into the menu mode again, and you will notice that the key listing is a little different. From here on, all commands are done from inside this menu mode, as indicated by gpg>. - -First, let's select the key we'll be putting onto the card -- you do this by typing key 1 (it's the first one in the listing, our [E] subkey): -``` -gpg> key 1 - -``` - -The output should be subtly different: -``` -pub rsa4096/AAAABBBBCCCCDDDD - created: 2017-12-07 expires: 2019-12-07 usage: C - trust: ultimate validity: ultimate -ssb* rsa2048/1111222233334444 - created: 2017-12-07 expires: never usage: E -ssb rsa2048/5555666677778888 - created: 2017-12-07 expires: never usage: S -[ultimate] (1). Alice Engineer -[ultimate] (2) Alice Engineer - -``` - -Notice the * that is next to the ssb line corresponding to the key -- it indicates that the key is currently "selected." It works as a toggle, meaning that if you type key 1 again, the * will disappear and the key will not be selected any more. - -Now, let's move that key onto the smartcard: -``` -gpg> keytocard -Please select where to store the key: - (2) Encryption key -Your selection? 2 - -``` - -Since it's our [E] key, it makes sense to put it into the Encryption slot. When you submit your selection, you will be prompted first for your PGP key passphrase, and then for the admin PIN. If the command returns without an error, your key has been moved. - -**Important:** Now type key 1 again to unselect the first key, and key 2 to select the [S] key: -``` -gpg> key 1 -gpg> key 2 -gpg> keytocard -Please select where to store the key: - (1) Signature key - (3) Authentication key -Your selection? 1 - -``` - -You can use the [S] key both for Signature and Authentication, but we want to make sure it's in the Signature slot, so choose (1). Once again, if your command returns without an error, then the operation was successful. 
- -Finally, if you created an [A] key, you can move it to the card as well, making sure first to unselect key 2. Once you're done, choose "q": -``` -gpg> q -Save changes? (y/N) y - -``` - -Saving the changes will delete the keys you moved to the card from your home directory (but it's okay, because we have them in our backups should we need to do this again for a replacement smartcard). - -##### Verifying that the keys were moved - -If you perform --list-secret-keys now, you will see a subtle difference in the output: -``` -$ gpg --list-secret-keys -sec# rsa4096 2017-12-06 [C] [expires: 2019-12-06] - 111122223333444455556666AAAABBBBCCCCDDDD -uid [ultimate] Alice Engineer -uid [ultimate] Alice Engineer -ssb> rsa2048 2017-12-06 [E] -ssb> rsa2048 2017-12-06 [S] - -``` - -The > in the ssb> output indicates that the subkey is only available on the smartcard. If you go back into your secret keys directory and look at the contents there, you will notice that the .key files there have been replaced with stubs: -``` -$ cd ~/.gnupg/private-keys-v1.d -$ strings *.key - -``` - -The output should contain shadowed-private-key to indicate that these files are only stubs and the actual content is on the smartcard. - -#### Verifying that the smartcard is functioning - -To verify that the smartcard is working as intended, you can create a signature: -``` -$ echo "Hello world" | gpg --clearsign > /tmp/test.asc -$ gpg --verify /tmp/test.asc - -``` - -This should ask for your smartcard PIN on your first command, and then show "Good signature" after you run gpg --verify. - -Congratulations, you have successfully made it extremely difficult to steal your digital developer identity! - -### Other common GnuPG operations - -Here is a quick reference for some common operations you'll need to do with your PGP key. - -In all of the below commands, the [fpr] is your key fingerprint. - -#### Mounting your master key offline storage - -You will need your master key for any of the operations below, so you will first need to mount your backup offline storage and tell GnuPG to use it. First, find out where the media got mounted, for example, by looking at the output of the mount command. Then, locate the directory with the backup of your GnuPG directory and tell GnuPG to use that as its home: -``` -$ export GNUPGHOME=/media/disk/name/gnupg-backup -$ gpg --list-secret-keys - -``` - -You want to make sure that you see sec and not sec# in the output (the # means the key is not available and you're still using your regular home directory location). - -##### Updating your regular GnuPG working directory - -After you make any changes to your key using the offline storage, you will want to import these changes back into your regular working directory: -``` -$ gpg --export | gpg --homedir ~/.gnupg --import -$ unset GNUPGHOME - -``` - -#### Extending key expiration date - -The master key we created has the default expiration date of 2 years from the date of creation. This is done both for security reasons and to make obsolete keys eventually disappear from keyservers. - -To extend the expiration on your key by a year from current date, just run: -``` -$ gpg --quick-set-expire [fpr] 1y - -``` - -You can also use a specific date if that is easier to remember (e.g. 
your birthday, January 1st, or Canada Day): -``` -$ gpg --quick-set-expire [fpr] 2020-07-01 - -``` - -Remember to send the updated key back to keyservers: -``` -$ gpg --send-key [fpr] - -``` - -#### Revoking identities - -If you need to revoke an identity (e.g., you changed employers and your old email address is no longer valid), you can use a one-liner: -``` -$ gpg --quick-revoke-uid [fpr] 'Alice Engineer ' - -``` - -You can also do the same with the menu mode using gpg --edit-key [fpr]. - -Once you are done, remember to send the updated key back to keyservers: -``` -$ gpg --send-key [fpr] - -``` - -Next time, we'll look at how Git supports multiple levels of integration with PGP. - -Learn more about Linux through the free ["Introduction to Linux" ][8]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-5-moving-subkeys-hardware-device - -作者:[KONSTANTIN RYABITSEV][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/mricon -[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools -[2]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key -[3]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-3-generating-pgp-subkeys -[4]:https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-4-moving-your-master-key-offline-storage -[5]:https://shop.nitrokey.com/shop/product/nitrokey-start-6 -[6]:https://shop.nitrokey.com/shop/product/nitrokey-pro-3 -[7]:https://www.yubico.com/product/yubikey-4-series/ -[8]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180321 Protecting Code Integrity with PGP - Part 6- Using PGP with Git.md b/sources/tech/20180321 Protecting Code Integrity with PGP - Part 6- Using PGP with Git.md deleted file mode 100644 index 0169d96ad6..0000000000 --- a/sources/tech/20180321 Protecting Code Integrity with PGP - Part 6- Using PGP with Git.md +++ /dev/null @@ -1,318 +0,0 @@ -Protecting Code Integrity with PGP — Part 6: Using PGP with Git -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/global-network.jpg?itok=h_hhZc36) -In this tutorial series, we're providing practical guidelines for using PGP, including basic concepts and generating and protecting your keys. If you missed the previous articles, you can catch up below. In this article, we look at Git's integration with PGP, starting with signed tags, then introducing signed commits, and finally adding support for signed pushes. - -[Part 1: Basic Concepts and Tools][1] - -[Part 2: Generating Your Master Key][2] - -[Part 3: Generating PGP Subkeys][3] - -[Part 4: Moving Your Master Key to Offline Storage][4] - -[Part 5: Moving Subkeys to a Hardware Device][5] - -One of the core features of Git is its decentralized nature -- once a repository is cloned to your system, you have full history of the project, including all of its tags, commits and branches. However, with hundreds of cloned repositories floating around, how does anyone verify that the repository you downloaded has not been tampered with by a malicious third party? 
You may have cloned it from GitHub or some other official-looking location, but what if someone had managed to trick you? - -Or what happens if a backdoor is discovered in one of the projects you've worked on, and the "Author" line in the commit says it was done by you, while you're pretty sure you had [nothing to do with it][6]? - -To address both of these issues, Git introduced PGP integration. Signed tags prove the repository integrity by assuring that its contents are exactly the same as on the workstation of the developer who created the tag, while signed commits make it nearly impossible for someone to impersonate you without having access to your PGP keys. - -### Checklist - - * Understand signed tags, commits, and pushes (ESSENTIAL) - - * Configure git to use your key (ESSENTIAL) - - * Learn how tag signing and verification works (ESSENTIAL) - - * Configure git to always sign annotated tags (NICE) - - * Learn how commit signing and verification works (ESSENTIAL) - - * Configure git to always sign commits (NICE) - - * Configure gpg-agent options (ESSENTIAL) - - - - -### Considerations - -Git implements multiple levels of integration with PGP, first starting with signed tags, then introducing signed commits, and finally adding support for signed pushes. - -#### Understanding Git Hashes - -Git is a complicated beast, but you need to know what a "hash" is in order to have a good grasp on how PGP integrates with it. We'll narrow it down to two kinds of hashes: tree hashes and commit hashes. - -##### Tree hashes - -Every time you commit a change to a repository, git records checksum hashes of all objects in it -- contents (blobs), directories (trees), file names and permissions, etc, for each subdirectory in the repository. It only does this for trees and blobs that have changed with each commit, so as not to re-checksum the entire tree unnecessarily if only a small part of it was touched. - -Then it calculates and stores the checksum of the toplevel tree, which will inevitably be different if any part of the repository has changed. - -##### Commit hashes - -Once the tree hash has been created, git will calculate the commit hash, which will include the following information about the repository and the change being made: - - * The checksum hash of the tree - - * The checksum hash of the tree before the change (parent) - - * Information about the author (name, email, time of authorship) - - * Information about the committer (name, email, time of commit) - - * The commit message - - - - -##### Hashing function - -At the time of writing, git still uses the SHA1 hashing mechanism to calculate checksums, though work is under way to transition to a stronger algorithm that is more resistant to collisions. Note, that git already includes collision avoidance routines, so it is believed that a successful collision attack against git remains impractical. - -#### Annotated tags and tag signatures - -Git tags allow developers to mark specific commits in the history of each git repository. Tags can be "lightweight" \-- more or less just a pointer at a specific commit, or they can be "annotated," which becomes its own object in the git tree. An annotated tag object contains all of the following information: - - * The checksum hash of the commit being tagged - - * The tag name - - * Information about the tagger (name, email, time of tagging) - - * The tag message - - - - -A PGP-signed tag is simply an annotated tag with all these entries wrapped around in a PGP signature. 
When a developer signs their git tag, they effectively assure you of the following: - - * Who they are (and why you should trust them) - - * What the state of their repository was at the time of signing: - - * The tag includes the hash of the commit - - * The commit hash includes the hash of the toplevel tree - - * Which includes hashes of all files, contents, and subtrees - * It also includes all information about authorship - - * Including exact times when changes were made - - - - -When you clone a git repository and verify a signed tag, that gives you cryptographic assurance that all contents in the repository, including all of its history, are exactly the same as the contents of the repository on the developer's computer at the time of signing. - -#### Signed commits - -Signed commits are very similar to signed tags -- the contents of the commit object are PGP-signed instead of the contents of the tag object. A commit signature also gives you full verifiable information about the state of the developer's tree at the time the signature was made. Tag signatures and commit PGP signatures provide exact same security assurances about the repository and its entire history. - -#### Signed pushes - -This is included here for completeness' sake, since this functionality needs to be enabled on the server receiving the push before it does anything useful. As we saw above, PGP-signing a git object gives verifiable information about the developer's git tree, but not about their intent for that tree. - -For example, you can be working on an experimental branch in your own git fork trying out a promising cool feature, but after you submit your work for review, someone finds a nasty bug in your code. Since your commits are properly signed, someone can take the branch containing your nasty bug and push it into master, introducing a vulnerability that was never intended to go into production. Since the commit is properly signed with your key, everything looks legitimate and your reputation is questioned when the bug is discovered. - -Ability to require PGP-signatures during git push was added in order to certify the intent of the commit, and not merely verify its contents. - -#### Configure git to use your PGP key - -If you only have one secret key in your keyring, then you don't really need to do anything extra, as it becomes your default key. - -However, if you happen to have multiple secret keys, you can tell git which key should be used ([fpr] is the fingerprint of your key): -``` -$ git config --global user.signingKey [fpr] - -``` - -NOTE: If you have a distinct gpg2 command, then you should tell git to always use it instead of the legacy gpg from version 1: -``` -$ git config --global gpg.program gpg2 - -``` - -#### How to work with signed tags - -To create a signed tag, simply pass the -s switch to the tag command: -``` -$ git tag -s [tagname] - -``` - -Our recommendation is to always sign git tags, as this allows other developers to ensure that the git repository they are working with has not been maliciously altered (e.g. in order to introduce backdoors). - -##### How to verify signed tags - -To verify a signed tag, simply use the verify-tag command: -``` -$ git verify-tag [tagname] - -``` - -If you are verifying someone else's git tag, then you will need to import their PGP key. Please refer to the "Trusted Team communication" document in the same repository for guidance on this topic. 
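As a sketch, verifying a tag made by another maintainer might look like this (the key ID and tag name are placeholders, and fetching a key from a keyserver does not by itself establish trust in it):
```
# Import the tagger's public key from a keyserver
gpg --recv-keys [keyid]

# Check the signature on the tag
git verify-tag [tagname]
```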
- -##### Verifying at pull time - -If you are pulling a tag from another fork of the project repository, git should automatically verify the signature at the tip you're pulling and show you the results during the merge operation: -``` -$ git pull [url] tags/sometag - -``` - -The merge message will contain something like this: -``` -Merge tag 'sometag' of [url] - -[Tag message] - -# gpg: Signature made [...] -# gpg: Good signature from [...] - -``` - -#### Configure git to always sign annotated tags - -Chances are, if you're creating an annotated tag, you'll want to sign it. To force git to always sign annotated tags, you can set a global configuration option: -``` -$ git config --global tag.forceSignAnnotated true - -``` - -Alternatively, you can just train your muscle memory to always pass the -s switch: -``` -$ git tag -asm "Tag message" tagname - -``` - -#### How to work with signed commits - -It is easy to create signed commits, but it is much more difficult to incorporate them into your workflow. Many projects use signed commits as a sort of "Committed-by:" line equivalent that records code provenance -- the signatures are rarely verified by others except when tracking down project history. In a sense, signed commits are used for "tamper evidence," and not to "tamper-proof" the git workflow. - -To create a signed commit, you just need to pass the -S flag to the git commit command (it's capital -S due to collision with another flag): -``` -$ git commit -S - -``` - -Our recommendation is to always sign commits and to require them of all project members, regardless of whether anyone is verifying them (that can always come at a later time). - -##### How to verify signed commits - -To verify a single commit you can use verify-commit: -``` -$ git verify-commit [hash] - -``` - -You can also look at repository logs and request that all commit signatures are verified and shown: -``` -$ git log --pretty=short --show-signature - -``` - -##### Verifying commits during git merge - -If all members of your project sign their commits, you can enforce signature checking at merge time (and then sign the resulting merge commit itself using the -S flag): -``` -$ git merge --verify-signatures -S merged-branch - -``` - -Note, that the merge will fail if there is even one commit that is not signed or does not pass verification. As it is often the case, technology is the easy part -- the human side of the equation is what makes adopting strict commit signing for your project difficult. - -##### If your project uses mailing lists for patch management - -If your project uses a mailing list for submitting and processing patches, then there is little use in signing commits, because all signature information will be lost when sent through that medium. It is still useful to sign your commits, just so others can refer to your publicly hosted git trees for reference, but the upstream project receiving your patches will not be able to verify them directly with git. - -You can still sign the emails containing the patches, though. - -#### Configure git to always sign commits - -You can tell git to always sign commits: -``` -git config --global commit.gpgSign true - -``` - -Or you can train your muscle memory to always pass the -S flag to all git commit operations (this includes --amend). - -#### Configure gpg-agent options - -The GnuPG agent is a helper tool that will start automatically whenever you use the gpg command and run in the background with the purpose of caching the private key passphrase. 
This way you only have to unlock your key once to use it repeatedly (very handy if you need to sign a bunch of git operations in an automated script without having to continuously retype your passphrase). - -There are two options you should know in order to tweak when the passphrase should be expired from cache: - - * default-cache-ttl (seconds): If you use the same key again before the time-to-live expires, the countdown will reset for another period. The default is 600 (10 minutes). - - * max-cache-ttl (seconds): Regardless of how recently you've used the key since initial passphrase entry, if the maximum time-to-live countdown expires, you'll have to enter the passphrase again. The default is 30 minutes. - - - - -If you find either of these defaults too short (or too long), you can edit your ~/.gnupg/gpg-agent.conf file to set your own values: -``` -# set to 30 minutes for regular ttl, and 2 hours for max ttl -default-cache-ttl 1800 -max-cache-ttl 7200 - -``` - -##### Bonus: Using gpg-agent with ssh - -If you've created an [A] (Authentication) key and moved it to the smartcard, you can use it with ssh for adding 2-factor authentication for your ssh sessions. You just need to tell your environment to use the correct socket file for talking to the agent. - -First, add the following to your ~/.gnupg/gpg-agent.conf: -``` -enable-ssh-support - -``` - -Then, add this to your .bashrc: -``` -export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket) - -``` - -You will need to kill the existing gpg-agent process and start a new login session for the changes to take effect: -``` -$ killall gpg-agent -$ bash -$ ssh-add -L - -``` - -The last command should list the SSH representation of your PGP Auth key (the comment should say cardno:XXXXXXXX at the end to indicate it's coming from the smartcard). - -To enable key-based logins with ssh, just add the ssh-add -L output to ~/.ssh/authorized_keys on remote systems you log in to. Congratulations, you've just made your ssh credentials extremely difficult to steal. - -As a bonus, you can get other people's PGP-based ssh keys from public keyservers, should you need to grant them ssh access to anything: -``` -$ gpg --export-ssh-key [keyid] - -``` - -This can come in super handy if you need to allow developers access to git repositories over ssh. Next time, we'll provide tips for protecting your email accounts as well as your PGP keys. 
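As a closing sketch of that last trick (the key ID is a placeholder, and this assumes the key carries an authentication-capable subkey; run it in the account that should grant access):
```
# Fetch the developer's public key, then append its ssh form
gpg --recv-keys [keyid]
gpg --export-ssh-key [keyid] >> ~/.ssh/authorized_keys
```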
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-6-using-pgp-git - -作者:[KONSTANTIN RYABITSEV][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/mricon -[1]:https://www.linux.com/blog/learn/2018/2/protecting-code-integrity-pgp-part-1-basic-pgp-concepts-and-tools -[2]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-2-generating-and-protecting-your-master-pgp-key -[3]:https://www.linux.com/blog/learn/pgp/2018/2/protecting-code-integrity-pgp-part-3-generating-pgp-subkeys -[4]:https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-4-moving-your-master-key-offline-storage -[5]:https://www.linux.com/blog/learn/pgp/2018/3/protecting-code-integrity-pgp-part-5-moving-subkeys-hardware-device -[6]:https://github.com/jayphelps/git-blame-someone-else diff --git a/sources/tech/20180326 Manage your workstation with Ansible- Automating configuration.md b/sources/tech/20180326 Manage your workstation with Ansible- Automating configuration.md deleted file mode 100644 index 21821a070c..0000000000 --- a/sources/tech/20180326 Manage your workstation with Ansible- Automating configuration.md +++ /dev/null @@ -1,236 +0,0 @@ -Manage your workstation with Ansible: Automating configuration -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/robot_arm_artificial_ai.png?itok=8CUU3U_7) -Ansible is an amazing automation and configuration management tool. It is mainly used for servers and cloud deployments, and it gets far less attention for its use in workstations, both desktops and laptops, which is the focus of this series. - -In the [first part][1] of this series, I showed you basic usage of the `ansible-pull` command, and we created a playbook that installs a handful of packages. That wasn't extremely useful by itself, but it set the stage for further automation. - -In this article, everything comes together full circle, and by the end we will have a fully working solution for automating workstation configuration. This time, we'll set up our Ansible configuration such that future changes we make will automatically be applied to our workstations. At this point, I'm assuming you already worked through part one. If you haven't, feel free to do that now and then return to this article when you're done. You should already have a GitHub repository with the code from the first article inside it. We're going to build directly on what we did before. - -First, we need to do some reorganization because we're going to do more than just install packages. At this point, we currently have a playbook named `local.yml` with the following content: -``` -- hosts: localhost - -  become: true - -  tasks: - -  - name: Install packages - -    apt: name={{item}} - -    with_items: - -      - htop - -      - mc - -      - tmux - -``` - -That's all well and good if we only want to perform one task. As we add new things to our configuration, this file will become quite large and get very cluttered. It's better to organize our plays into individual files with each responsible for a different type of configuration. To achieve this, create what's called a taskbook, which is very much like a playbook but the contents are more streamlined. 
Let's create a directory for our taskbooks inside our Git repository: -``` -mkdir tasks - -``` - -The code inside our current `local.yml` playbook lends itself well to become a taskbook for installing packages. Let's move this file into the `tasks` directory we just created with a new name: -``` -mv local.yml tasks/packages.yml - -``` - -Now, we can edit our `packages.yml` taskbook and strip it down quite a bit. In fact, we can strip everything except for the individual task itself. Let's make `packages.yml` look like this: -``` -- name: Install packages - -  apt: name={{item}} - -  with_items: - -    - htop - -    - mc - -    - tmux - -``` - -As you can see, it uses the same syntax, but we stripped out everything that isn't necessary to the task it's performing. Now we have a dedicated taskbook for installing packages. However, we still need a file named `local.yml`, since `ansible-pull` still expects to find a file with that name. So we'll create a fresh one with this content in the root of our repository (not in the `tasks` directory): -``` -- hosts: localhost - -  become: true - -  pre_tasks: - -    - name: update repositories - -      apt: update_cache=yes - -      changed_when: False - - - -  tasks: - -    - include: tasks/packages.yml - -``` - -This new `local.yml` acts as an index that will import all our taskbooks. I've added a few new things to this file that you haven't seen yet in this series. First, at the beginning of the file, I added `pre_tasks`, which allows us to have Ansible perform a task before all the other tasks run. In this case, we're telling Ansible to update our distribution's repository index. This line does that for us: -``` -apt: update_cache=yes - -``` - -Normally the `apt` module allows us to install packages, but we can also tell it to update our repository index. The idea is that we want all our individual plays to work with a fresh index each time Ansible runs. This will help ensure we don't have an issue with a stale index while attempting to install a package. Note that the `apt` module works only with Debian, Ubuntu, and their derivatives. If you're running a different distribution, you'll want to use a module specific to your distribution rather than `apt`. See the documentation for Ansible if you need to use a different module. - -The following line is also worth further explanation: -``` -changed_when: False - -``` - -This line on an individual task stops Ansible from reporting the results of the play as changed even when it results in a change in the system. In this case, we don't care if the repository index contains new data; it almost always will, since repositories are always changing. We don't care about changes to `apt` repositories, as index changes are par for the course. If we omit this line, we'll see the summary at the end of the process report that something has changed, even if it was merely about the repository being updated. It's better to ignore these types of changes. - -Next is our normal tasks section, and we import the taskbook we created. Each time we add another taskbook, we add another line here: -``` -tasks: - -  - include: tasks/packages.yml - -``` - -If you were to run the `ansible-pull` command here, it should essentially do the same thing as it did in the last article. The difference is that we have improved our organization and we can more efficiently expand on it. 
The `ansible-pull` command syntax, to save you from finding the previous article, is this: -``` -sudo ansible-pull -U https://github.com//ansible.git - -``` - -If you recall, the `ansible-pull` command pulls down a Git repository and applies the configuration it contains. - -Now that our foundation is in place, we can expand upon our Ansible config and add features. Specifically, we'll add configuration to automate the deployment of future changes to our workstations. To support this goal, the first thing we should do is to create a user specifically to apply our Ansible configuration. This isn't required—we can continue to run our Ansible configuration under our own user. But using a separate user segregates this to a system process that will run in the background, without our involvement. - -We could create this user with the normal method, but since we're using Ansible, we should shy away from manual changes. Instead, we'll create a taskbook to handle user creation. This taskbook will create just one user for now, but you can always add additional plays to this taskbook to add additional users. I'll call this user `ansible`, but you can name it something else if you wish (if you do, make sure to update all occurrences). Let's create a taskbook named `users.yml` and place this code inside of it: -``` -- name: create ansible user - -  user: name=ansible uid=900 - -``` - -Next, we need to edit our `local.yml` file and append this new taskbook to the file, so it will look like this: -``` -- hosts: localhost - -  become: true - -  pre_tasks: - -    - name: update repositories - -      apt: update_cache=yes - -      changed_when: False - - - -  tasks: - -    - include: tasks/users.yml - -    - include: tasks/packages.yml - -``` - -Now when we run our `ansible-pull` command, a user named `ansible` will be created on the system. Note that I specifically declared `User ID 900` for this user by using the `UID` option. This isn't required, but it's recommended. The reason is that UIDs under 1,000 are typically not shown on the login screen, which is great because there's no reason we would need to log into a desktop session as our `ansible` user. UID 900 is arbitrary; it should be any number under 1,000 that's not already in use. You can find out if UID 900 is in use on your system with the following command: -``` -cat /etc/passwd |grep 900 - -``` - -However, you shouldn't run into a problem with this UID because I've never seen it used by default in any distribution I've used so far. - -Now, we have an `ansible` user that will later be used to apply our Ansible configuration automatically. Next, we can create the actual cron job that will be used to automate this. Rather than place this in the `users.yml` taskbook we just created, we should separate this into its own file. Create a taskbook named `cron.yml` in the tasks directory and place the following code inside: -``` -- name: install cron job (ansible-pull) - -  cron: user="ansible" name="ansible provision" minute="*/10" job="/usr/bin/ansible-pull -o -U https://github.com//ansible.git > /dev/null" - -``` - -The syntax for the cron module should be fairly self-explanatory. With this play, we're creating a cron job to be run as the `ansible` user. The job will execute every 10 minutes, and the command it will execute is this: -``` -/usr/bin/ansible-pull -o -U https://github.com//ansible.git > /dev/null - -``` - -Also, we can put additional cron jobs we want all our workstations to have into this one file. 
We just need to add additional plays to add new cron jobs. However, simply adding a new taskbook for cron isn't enough—we'll also need to add it to our `local.yml` file so it will be called. Place the following line with the other includes: -``` -- include: tasks/cron.yml - -``` - -Now when `ansible-pull` is run, it will set up a new cron job that will be run as the `ansible` user every 10 minutes. But, having an Ansible job running every 10 minutes isn't ideal because it will take considerable CPU power. It really doesn't make sense for Ansible to run every 10 minutes unless we've changed something in the Git repository. - -However, we've already solved this problem. Notice the `-o` option I added to the `ansible-pull` command in the cron job that we've never used before. This option tells Ansible to run only if the repository has changed since the last time `ansible-pull` was called. If the repository hasn't changed, it won't do anything. This way, you're not wasting valuable CPU for no reason. Sure, some CPU will be used when it pulls down the repository, but not nearly as much as it would use if it were applying your entire configuration all over again. When `ansible-pull` does run, it will go through all the tasks in the Playbook and taskbooks, but at least it won't run for no purpose. - -Although we've added all the required components to automate `ansible-pull`, it actually won't work properly yet. The `ansible-pull` command will run with `sudo`, which would give it access to perform system-level commands. However, our `ansible` user is not set up to perform tasks as `sudo`. Therefore, when the cron job triggers, it will fail. Normally we could just use `visudo` and manually set the `ansible` user up to have this access. However, we should do things the Ansible way from now on, and this is a great opportunity to show you how the `copy` module works. The `copy` module allows you to copy a file from your Ansible repository to somewhere else in the filesystem. In our case, we'll copy a config file for `sudo` to `/etc/sudoers.d/` so that the `ansible` user can perform administrative tasks. - -Open up the `users.yml` taskbook, and add the following play to the bottom: -``` -- name: copy sudoers_ansible - -  copy: src=files/sudoers_ansible dest=/etc/sudoers.d/ansible owner=root group=root mode=0440 - -``` - -The `copy` module, as we can see, copies a file from our repository to somewhere else. In this case, we're grabbing a file named `sudoers_ansible` (which we will create shortly) and copying it to `/etc/sudoers.d/ansible` with `root` as the owner. - -Next, we need to create the file that we'll be copying. In the root of your Ansible repository, create a `files` directory:​ -``` -mkdir files - -``` - -Then, in the `files` directory we just created, create the `sudoers_ansible` file with the following content: -``` -ansible ALL=(ALL) NOPASSWD: ALL - -``` - -Creating a file in `/etc/sudoers.d`, like we're doing here, allows us to configure `sudo` for a specific user. Here we're allowing the `ansible` user full access to the system via `sudo` without a password prompt. This will allow `ansible-pull` to run as a background task without us needing to run it manually. - -Now, you can run `ansible-pull` again to pull down the latest changes: -``` -sudo ansible-pull -U https://github.com//ansible.git - -``` - -From this point forward, the cron job for `ansible-pull` will run every 10 minutes in the background and check your repository for changes. 
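To double-check that the automation really is in place, here is a quick sketch (it assumes the `ansible` user created earlier and a Debian/Ubuntu system where cron logs to syslog):
```
# List the cron jobs installed for the ansible user
sudo crontab -u ansible -l

# Watch the next scheduled run in the system log
sudo tail -f /var/log/syslog | grep -i cron
```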
If it finds changes, it will run your playbook and apply your taskbooks. - -So now we have a fully working solution. When you first set up a new laptop or desktop, you'll run the `ansible-pull` command manually, but only the first time. From that point forward, the `ansible` user will take care of subsequent runs in the background. When you want to make a change to your workstation machines, you simply pull down your Git repository, make the changes, then push those changes back to the repository. Then, the next time the cron job fires on each machine, it will pull down those changes and apply them. You now only have to make changes once, and all your workstations will follow suit. This method may be a bit unconventional though. Normally, you'd have an `inventory` file with your machines listed and several roles each machine could be a member of. However, the `ansible-pull` method, as described in this article, is a very efficient way of managing workstation configuration. - -I have updated the code in my [GitHub repository][2] for this article, so feel free to browse the code there and check your syntax against mine. Also, I moved the code from the previous article into its own directory in that repository. - -In part 3, we'll close out the series by using Ansible to configure GNOME desktop settings. I'll show you how to set your wallpaper and lock screen, apply a desktop theme, and more. - -In the meantime, it's time for a little homework assignment. Most of us have configuration files we like to maintain for various applications we use. This could be configuration files for Bash, Vim, or whatever tools you use. I challenge you now to automate copying these configuration files to your machines via the Ansible repository we've been working on. In this article, I've shown you how to copy a file, so take a look at that and see if you can apply that knowledge to your personal files. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/3/manage-your-workstation-configuration-ansible-part-2 - -作者:[Jay LaCroix][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jlacroix -[1]:https://opensource.com/article/18/3/manage-workstation-ansible -[2]:https://github.com/jlacroix82/ansible_article.git diff --git a/sources/tech/20180328 What NASA Has Been Doing About Open Science.md b/sources/tech/20180328 What NASA Has Been Doing About Open Science.md deleted file mode 100644 index 96d3aaa4a0..0000000000 --- a/sources/tech/20180328 What NASA Has Been Doing About Open Science.md +++ /dev/null @@ -1,114 +0,0 @@ -What NASA Has Been Doing About Open Science -====== -![][1] - -We have recently started a new [Science category][2] on It’s FOSS. We covered [how open source approach is impacting Science][3] in the last article. In this open science article, we discuss [NASA][4]‘s actively growing efforts that involve their dynamic role in boosting scientific research by encouraging open source practices. - -### How NASA is using Open Source approach to improve science - -It was a great [initiative][5] by NASA that they made their entire research library freely available on the public domain. - -Yes! Entire research library for everyone to access and get benefit from it in their research. 
Their open science resources can now be classified into these three main categories:

  * Open Source NASA
  * Open API
  * Open Data

#### 1\. Open Source NASA

Here's an interesting interview with [Chris Wanstrath][6], co-founder and CEO of [GitHub][7], about how it all began many years ago:

Uniquely named "[code.nasa.gov][8]", the portal lists precisely 365 pieces of scientific software available as [open source via GitHub][9] as of the time of this post. So if you are a developer who loves coding, you could study one of them every day for a whole year!

Even if you are not a developer, you can still browse through the fantastic collection of open source packages listed on the portal!

One of the interesting open source packages listed here is the source code of [Apollo 11][10]'s guidance computer. The Apollo 11 spaceflight took [the first two humans to the Moon][11], namely [Neil Armstrong][12] and [Edwin Aldrin][13]! If you want to know more about Edwin Aldrin, you might want to pay a visit [here][14].

##### Licenses being used by NASA's open source initiative

The projects on the portal are published under several different [open source licenses][15].

#### 2\. Open API

An open [Application Programming Interface][16], or API, plays a significant role in practicing open science. Just like [The Open Source Initiative][17], there is also one for APIs, called [The Open API Initiative][18]. Here is a simple illustration of how an API bridges an application with its developer:

![][19]

Do check out the link in the caption of the image above. The API is explained in a straightforward manner, and the piece concludes with five interesting takeaways.

![][20]

It makes one wonder how different [an open API and a proprietary API][21] would be.

![][22]

Targeted at application developers, [NASA's open API][23] is an initiative to significantly improve the accessibility of NASA's data, including image content. The site has a live editor that lets you try out the [API][16] behind the [Astronomy Picture of the Day (APOD)][24].

#### 3\. Open Data

![][25]

In [our first science article][3], we shared the open data models of three countries mentioned under the "Open Science" section, namely France, India, and the U.S. NASA takes a similar approach to the idea. Open data is an important ideology that is being adopted by [many countries][26].

[NASA's Open Data Portal][27] focuses on openness through an ever-growing catalog of datasets, freely available for anyone to access. The inclusion of datasets within this collection is an essential step for research of every kind. NASA has even taken the innovative step of asking the public to suggest datasets for submission to the portal, which fits nicely with the growing interest in [data science][28] and in [AI and deep learning][29].

The following video shows students and scientists coming up with their own definitions of data science based on their personal experience and research. That is really encouraging! [Dr. Murtaza Haider][30] of the Ted Rogers School of Management, Ryerson University, mentions the difference open source is making in the field of data science before the video ends. He explains in very simple terms how development models transitioned from a closed source approach to an open one. That vision has largely proved true today.

![][31]

Now anyone can suggest a dataset of any kind on NASA's portal.
Coming back to the video above, NASA's initiative ties in closely with submitting datasets and analyzing them for better data science!

![][32]

You just need to sign up for free. Considering the open discussions on the public forum and the significance of datasets in every kind of analytical field, this initiative should have a very positive effect in the future; statistical studies will improve significantly as well. We will talk about these concepts in detail in a future article, and also about their relevance to an open source model.

Thus concludes an exploration into NASA's open science model. See you soon in another open science article!

--------------------------------------------------------------------------------

via: https://itsfoss.com/nasa-open-science/

作者:[Avimanyu Bandyopadhyay][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/avimanyu/
[1]:https://itsfoss.com/wp-content/uploads/2018/03/tux-in-space.jpg
[2]:https://itsfoss.com/category/science/
[3]:https://itsfoss.com/open-source-impact-on-science/
[4]:https://www.nasa.gov/
[5]:https://futurism.com/free-science-nasa-just-opened-its-entire-research-library-to-the-public/
[6]:http://chriswanstrath.com/
[7]:https://github.com/
[8]:http://code.nasa.gov
[9]:https://github.com/open-source
[10]:https://www.nasa.gov/mission_pages/apollo/missions/apollo11.html
[11]:https://www.space.com/16758-apollo-11-first-moon-landing.html
[12]:https://www.jsc.nasa.gov/Bios/htmlbios/armstrong-na.html
[13]:https://www.jsc.nasa.gov/Bios/htmlbios/aldrin-b.html
[14]:https://buzzaldrin.com/the-man/
[15]:https://itsfoss.com/open-source-licenses-explained/
[16]:https://en.wikipedia.org/wiki/Application_programming_interface
[17]:https://opensource.org/
[18]:https://www.openapis.org/
[19]:https://itsfoss.com/wp-content/uploads/2018/03/api-bridge.jpeg
[20]:https://itsfoss.com/wp-content/uploads/2018/03/open-api-diagram.jpg
[21]:http://www.apiacademy.co/resources/api-strategy-lesson-201-private-apis-vs-open-apis/
[22]:https://itsfoss.com/wp-content/uploads/2018/03/nasa-open-api-live-example.jpg
[23]:https://api.nasa.gov/
[24]:https://apod.nasa.gov/apod/astropix.html
[25]:https://itsfoss.com/wp-content/uploads/2018/03/nasa-open-data-portal.jpg
[26]:https://www.xbrl.org/the-standard/why/ten-countries-with-open-data/
[27]:https://data.nasa.gov/
[28]:https://en.wikipedia.org/wiki/Data_science
[29]:https://www.kdnuggets.com/2017/07/ai-deep-learning-explained-simply.html
[30]:https://www.ryerson.ca/tedrogersschool/bm/programs/real-estate-management/murtaza-haider/
[31]:https://itsfoss.com/wp-content/uploads/2018/03/suggest-dataset-nasa-1.jpg
[32]:https://itsfoss.com/wp-content/uploads/2018/03/suggest-dataset-nasa-2-1.jpg

diff --git a/sources/tech/20180331 Emacs -4- Automated emails to org-mode and org-mode syncing.md b/sources/tech/20180331 Emacs -4- Automated emails to org-mode and org-mode syncing.md
deleted file mode 100644
index 4efe606f51..0000000000
--- a/sources/tech/20180331 Emacs -4- Automated emails to org-mode and org-mode syncing.md
+++ /dev/null
@@ -1,72 +0,0 @@

Emacs #4: Automated emails to org-mode and org-mode syncing
======
This is fourth in [a series on Emacs and org-mode][1].

Hopefully by now you've started to see how powerful and useful org-mode is.
If you’re like me, you’re thinking: - -“I’d really like to have this in sync across all my devices.” - -and, perhaps: - -“Can I forward emails into org-mode?” - -This being Emacs, the answers, of course, are “Yes.” - -### Syncing - -Since org-mode just uses text files, syncing is pretty easily accomplished using any number of tools. I use git with git-remote-gcrypt. Due to some limitations of git-remote-gcrypt, each machine tends to push to its own branch, and to master on command. Each machine merges from all the other branches and pushes the result to master after a merge. A cron job causes pushes to the machine’s branch to happen, and a bit of elisp coordinates it all — making sure to save buffers before a sync, refresh them from disk after, etc. - -The code for this post is somewhat more extended, so I will be linking to it on github rather than posting inline. - -I have a directory $HOME/org where all my org-stuff lives. In ~/org lives [a Makefile][2] that handles the syncing. It defines these targets: - - * push: adds, commits, and pushes to a branch named after the machine’s hostname - * fetch: does a simple git fetch - * sync: adds, commits, pulls remote changes, merges, and (assuming the merge was successful) pushes to the branch named after the machine’s hostname plus master - - - -Now, in my user’s crontab, I have this: -``` -*/15 * * * * make -C $HOME/org push fetch 2>&1 | logger --tag 'orgsync' - -``` - -The [accompanying elisp code][3] defines a shortcut (C-c s) to cause a sync to occur. Thanks to the cronjob, as long as files were saved — even if I didn’t explicitly sync on the other boxen — they’ll be pulled in. - -I have found this setup to work really well. - -### Emailing to org-mode - -Before going down this path, one should ask the question: do you really need it? I use org-mode with mu4e, and the integration is excellent; any org task can link to an email by message-id, and this is ideal — it lets a person do things like make a reminder to reply to a message in a week. - -However, org is not just about reminders. It’s also a knowledge base, authoring system, etc. And, not all of my mail clients use mu4e. (Note: things like MobileOrg exist for mobile devices). I don’t actually use this as much as I thought I would, but it has its uses and I thought I’d document it here too. - -Now I didn’t want to just be able to accept plain text email. I wanted to be able to handle attachments, HTML mail, etc. This quickly starts to sound problematic — but with tools like ripmime and pandoc, it’s not too bad. - -The first step is to set up some way to get mail into a specific folder. A plus-extension, special user, whatever. I then use a [fetchmail configuration][4] to pull it down and run my [insorgmail][5] script. - -This script is where all the interesting bits happen. It starts with ripmime to process the message. HTML bits are converted from HTML to org format using pandoc. an org hierarchy is made to represent the structure of the email as best as possible. emails can get pretty complicated, with HTML and the rest, but I have found this does an acceptable job with my use cases. - -### Up next… - -My last post on org-mode will talk about using it to write documents and prepare slides — a use for which I found myself surprisingly pleased with it, but which needed a bit of tweaking. 
--------------------------------------------------------------------------------

via: http://changelog.complete.org/archives/9898-emacs-4-automated-emails-to-org-mode-and-org-mode-syncing

作者:[John Goerzen][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://changelog.complete.org/
[1]:https://changelog.complete.org/archives/tag/emacs2018
[2]:https://github.com/jgoerzen/public-snippets/blob/master/emacs/org-tools/Makefile
[3]:https://github.com/jgoerzen/public-snippets/blob/master/emacs/org-tools/emacs-config.org
[4]:https://github.com/jgoerzen/public-snippets/blob/master/emacs/org-tools/fetchmailrc.orgmail
[5]:https://github.com/jgoerzen/public-snippets/blob/master/emacs/org-tools/insorgmail
diff --git a/sources/tech/20180402 An introduction to the Flask Python web app framework.md b/sources/tech/20180402 An introduction to the Flask Python web app framework.md
new file mode 100644
index 0000000000..ffb6e9c441
[#]: collector: (lujun9972)
[#]: translator: (fuowang)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: subject: (An introduction to the Flask Python web app framework)
[#]: via: (https://opensource.com/article/18/4/flask)
[#]: author: (Nicholas Hunt-Walker https://opensource.com/users/nhuntwalker)
[#]: url: ( )

An introduction to the Flask Python web app framework
======
In the first part of a series comparing Python frameworks, learn about Flask.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python-programming-code-keyboard.png?itok=fxiSpmnd)

If you're developing a web app in Python, chances are you're leveraging a framework. A [framework][1] "is a code library that makes a developer's life easier when building reliable, scalable, and maintainable web applications" by providing reusable code or extensions for common operations. There are a number of frameworks for Python, including [Flask][2], [Tornado][3], [Pyramid][4], and [Django][5]. New Python developers often ask: Which framework should I use?

This series is designed to help developers answer that question by comparing those four frameworks. To compare their features and operations, I'll take each one through the process of constructing an API for a simple To-Do List web application. The API is itself fairly straightforward:

  * New visitors to the site should be able to register new accounts.
  * Registered users can log in, log out, see information for their profiles, and edit their information.
  * Registered users can create new task items, see their existing tasks, and edit existing tasks.

All this rounds out to a compact set of API endpoints that each backend must implement, along with the allowed HTTP methods:

  * `GET /`
  * `POST /accounts`
  * `POST /accounts/login`
  * `GET /accounts/logout`
  * `GET, PUT, DELETE /accounts/<username>`
  * `GET, POST /accounts/<username>/tasks`
  * `GET, PUT, DELETE /accounts/<username>/tasks/<task_id>`

Each framework has a different way to put together its routes, models, views, database interaction, and overall application configuration. I'll describe those aspects of each framework in this series, which will begin with Flask.

### Flask startup and configuration

Like most widely used Python libraries, the Flask package is installable from the [Python Package Index][6] (PyPI).
First create a directory to work in (something like `flask_todo` is a fine directory name) then install the `flask` package. You'll also want to install `flask-sqlalchemy` so your Flask application has a simple way to talk to a SQL database. + +I like to do this type of work within a Python 3 virtual environment. To get there, enter the following on the command line: + +``` +$ mkdir flask_todo +$ cd flask_todo +$ pipenv install --python 3.6 +$ pipenv shell +(flask-someHash) $ pipenv install flask flask-sqlalchemy +``` + +If you want to turn this into a Git repository, this is a good place to run `git init`. It'll be the root of the project, and if you want to export the codebase to a different machine, it will help to have all the necessary setup files here. + +A good way to get moving is to turn the codebase into an installable Python distribution. At the project's root, create `setup.py` and a directory called `todo` to hold the source code. + +The `setup.py` should look something like this: + +``` +from setuptools import setup, find_packages + +requires = [ +    'flask', +    'flask-sqlalchemy', +    'psycopg2', +] + +setup( +    name='flask_todo', +    version='0.0', +    description='A To-Do List built with Flask', +    author='', +    author_email='', +    keywords='web flask', +    packages=find_packages(), +    include_package_data=True, +    install_requires=requires +) +``` + +This way, whenever you want to install or deploy your project, you'll have all the necessary packages in the `requires` list. You'll also have everything you need to set up and install the package in `site-packages`. For more information on how to write an installable Python distribution, check out [the docs on setup.py][7]. + +Within the `todo` directory containing your source code, create an `app.py` file and a blank `__init__.py` file. The `__init__.py` file allows you to import from `todo` as if it were an installed package. The `app.py` file will be the application's root. This is where all the `Flask` application goodness will go, and you'll create an environment variable that points to that file. If you're using `pipenv` (like I am), you can locate your virtual environment with `pipenv --venv` and set up that environment variable in your environment's `activate` script. + +``` +# in your activate script, probably at the bottom (but anywhere will do) + +export FLASK_APP=$VIRTUAL_ENV/../todo/app.py +export DEBUG='True' +``` + +When you installed `Flask`, you also installed the `flask` command-line script. Typing `flask run` will prompt the virtual environment's Flask package to run an HTTP server using the `app` object in whatever script the `FLASK_APP` environment variable points to. The script above also includes an environment variable named `DEBUG` that will be used a bit later. + +Let's talk about this `app` object. + +In `todo/app.py`, you'll create an `app` object, which is an instance of the `Flask` object. It'll act as the central configuration object for the entire application. It's used to set up pieces of the application required for extended functionality, e.g., a database connection and help with authentication. + +It's regularly used to set up the routes that will become the application's points of interaction. To explain what this means, let's look at the code it corresponds to. + +``` +from flask import Flask + +app = Flask(__name__) + +@app.route('/') +def hello_world(): +    """Print 'Hello, world!' as the response body.""" +    return 'Hello, world!' 
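
# (Illustrative addition, not part of the original article: a second
# route can be registered the same way; each decorated function
# becomes another view.)
@app.route('/ping')
def ping():
    """Confirm that the server is responding."""
    return 'pong'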
```

This is the most basic complete Flask application. `app` is an instance of `Flask`, taking in the `__name__` of the script file. This lets Python know how to import from files relative to this one. The `app.route` decorator decorates the first **view** function; it can specify one of the routes used to access the application. (We'll look at this later.)

Any view you specify must be decorated by `app.route` to be a functional part of the application. You can have as many functions as you want scattered across the application, but in order for that functionality to be accessible from anything external to the application, you must decorate that function and specify a route to make it into a view.

In the example above, when the app is running and accessed at `http://domainname/`, a user will receive `"Hello, World!"` as a response.

### Connecting the database in Flask

While the code example above represents a complete Flask application, it doesn't do anything interesting. One interesting thing a web application can do is persist user data, but it needs the help of, and a connection to, a database.

Flask is very much a "do it yourself" web framework. This means there's no built-in database interaction, but the `flask-sqlalchemy` package will connect a SQL database to a Flask application. The `flask-sqlalchemy` package needs just one thing to connect to a SQL database: the database URL.

Note that a wide variety of SQL database management systems can be used with `flask-sqlalchemy`, as long as the DBMS has an intermediary that follows the [DBAPI-2 standard][8]. In this example, I'll use PostgreSQL (mainly because I've used it a lot), so the intermediary to talk to the Postgres database is the `psycopg2` package. Make sure `psycopg2` is installed in your environment and include it in the list of required packages in `setup.py`. You don't have to do anything else with it; `flask-sqlalchemy` will recognize Postgres from the database URL.

Flask needs the database URL to be part of its central configuration through the `SQLALCHEMY_DATABASE_URI` attribute. A quick and dirty solution is to hardcode a database URL into the application.

```
# top of app.py
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgres://localhost:5432/flask_todo'
db = SQLAlchemy(app)
```

However, this is not a sustainable solution. If you change databases or don't want your database URL visible in source control, you'll have to take extra steps to ensure your information is appropriate for the environment.

You can make things simpler by using environment variables. They will ensure that, no matter what machine the code runs on, it always points at the right stuff if that stuff is configured in the running environment. It also ensures that, even though you need that information to run the application, it never shows up as a hardcoded value in source control.

In the same place you declared `FLASK_APP`, declare a `DATABASE_URL` pointing to the location of your Postgres database. Development tends to work locally, so just point to your local database.

```
# also in your activate script

export DATABASE_URL='postgres://localhost:5432/flask_todo'
```

Now in `app.py`, include the database URL in your app configuration.

```
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', '')
db = SQLAlchemy(app)
```

And just like that, your application has a database connection!
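One optional safeguard worth considering at this point (my own hedged sketch, not from the original setup): fail fast when the variable is missing, so a typo in the activate script doesn't silently hand SQLAlchemy an empty URL.

```
# Hedged sketch: guard against a missing DATABASE_URL before configuring the app.
import os

url = os.environ.get('DATABASE_URL')
if not url:
    raise RuntimeError('DATABASE_URL is not set; check your activate script')

app.config['SQLALCHEMY_DATABASE_URI'] = url
```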
+ +### Defining objects in Flask + +Having a database to talk to is a good first step. Now it's time to define some objects to fill that database. + +In application development, a "model" refers to the data representation of some real or conceptual object. For example, if you're building an application for a car dealership, you may define a `Car` model that encapsulates all of a car's attributes and behavior. + +In this case, you're building a To-Do List with Tasks, and each Task belongs to a User. Before you think too deeply about how they're related to each other, start by defining objects for Tasks and Users. + +The `flask-sqlalchemy` package leverages [SQLAlchemy][9] to set up and inform the database structure. You'll define a model that will live in the database by inheriting from the `db.Model` object and define the attributes of those models as `db.Column` instances. For each column, you must specify a data type, so you'll pass that data type into the call to `db.Column` as the first argument. + +Because the model definition occupies a different conceptual space than the application configuration, make `models.py` to hold model definitions separate from `app.py`. The Task model should be constructed to have the following attributes: + + * `id`: a value that's a unique identifier to pull from the database + * `name`: the name or title of the task that the user will see when the task is listed + * `note`: any extra comments that a person might want to leave with their task + * `creation_date`: the date and time the task was created + * `due_date`: the date and time the task is due to be completed (if at all) + * `completed`: a way to indicate whether or not the task has been completed + + + +Given this attribute list for Task objects, the application's `Task` object can be defined like so: + +``` +from .app import db +from datetime import datetime + +class Task(db.Model): +    """Tasks for the To Do list.""" +    id = db.Column(db.Integer, primary_key=True) +    name = db.Column(db.Unicode, nullable=False) +    note = db.Column(db.Unicode) +    creation_date = db.Column(db.DateTime, nullable=False) +    due_date = db.Column(db.DateTime) +    completed = db.Column(db.Boolean, default=False) + +    def __init__(self, *args, **kwargs): +        """On construction, set date of creation.""" +        super().__init__(*args, **kwargs) +        self.creation_date = datetime.now() +``` + +Note the extension of the class constructor method. At the end of the day, any model you construct is still a Python object and therefore must go through construction in order to be instantiated. It's important to ensure that the creation date of the model instance reflects its actual date of creation. You can explicitly set that relationship by effectively saying, "when an instance of this model is constructed, record the date and time and set it as the creation date." + +### Model relationships + +In a given web application, you may want to be able to express relationships between objects. In the To-Do List example, users own multiple tasks, and each task is owned by only one user. This is an example of a "many-to-one" relationship, also known as a foreign key relationship, where the tasks are the "many" and the user owning those tasks is the "one." + +In Flask, a many-to-one relationship can be specified using the `db.relationship` function. First, build the User object. 
+ +``` +class User(db.Model): +    """The User object that owns tasks.""" +    id = db.Column(db.Integer, primary_key=True) +    username = db.Column(db.Unicode, nullable=False) +    email = db.Column(db.Unicode, nullable=False) +    password = db.Column(db.Unicode, nullable=False) +    date_joined = db.Column(db.DateTime, nullable=False) +    token = db.Column(db.Unicode, nullable=False) + +    def __init__(self, *args, **kwargs): +        """On construction, set date of creation.""" +        super().__init__(*args, **kwargs) +        self.date_joined = datetime.now() +        self.token = secrets.token_urlsafe(64) +``` + +It looks very similar to the Task object; you'll find that most objects have the same basic format of class attributes as table columns. Every once in a while, you'll run into something a little different, including some multiple-inheritance magic, but this is the norm. + +Now that the `User` model is created, you can set up the foreign key relationship. For the "many," set fields for the `user_id` of the `User` that owns this task, as well as the `user` object with that ID. Also make sure to include a keyword argument (`back_populates`) that updates the User model when the task gets a user as an owner. + +For the "one," set a field for the `tasks` the specific user owns. Similar to maintaining the two-way relationship on the Task object, set a keyword argument on the User's relationship field to update the Task when it is assigned to a user. + +``` +# on the Task object +user_id = db.Column(db.Integer, db.ForeignKey('user.id'), nullable=False) +user = db.relationship("user", back_populates="tasks") + +# on the User object +tasks = db.relationship("Task", back_populates="user") +``` + +### Initializing the database + +Now that the models and model relationships are set, start setting up your database. Flask doesn't come with its own database-management utility, so you'll have to write your own (to some degree). You don't have to get fancy with it; you just need something to recognize what tables are to be created and some code to create them (or drop them should the need arise). If you need something more complex, like handling updates to database tables (i.e., database migrations), you'll want to look into a tool like [Flask-Migrate][10] or [Flask-Alembic][11]. + +Create a script called `initializedb.py` next to `setup.py` for managing the database. (Of course, it doesn't need to be called this, but why not give names that are appropriate to a file's function?) Within `initializedb.py`, import the `db` object from `app.py` and use it to create or drop tables. `initializedb.py` should end up looking something like this: + +``` +from todo.app import db +import os + +if bool(os.environ.get('DEBUG', '')): +    db.drop_all() +db.create_all() +``` + +If a `DEBUG` environment variable is set, drop tables and rebuild. Otherwise, just create the tables once and you're good to go. + +### Views and URL config + +The last bits needed to connect the entire application are the views and routes. In web development, a "view" (in concept) is functionality that runs when a specific access point (a "route") in your application is hit. These access points appear as URLs: paths to functionality in an application that return some data or handle some data that has been provided. The views will be logical structures that handle specific HTTP requests from a given client and return some HTTP response to that client. 
In Flask, views appear as functions; for example, see the `hello_world` view above. For simplicity, here it is again:

```
@app.route('/')
def hello_world():
    """Print 'Hello, world!' as the response body."""
    return 'Hello, world!'
```

When the route of `http://domainname/` is accessed, the client receives the response, "Hello, world!"

With Flask, a function is marked as a view when it is decorated by `app.route`. In turn, `app.route` adds to the application's central configuration a map from the specified route to the function that runs when that route is accessed. You can use this to start building out the rest of the API.

Start with a view that handles only `GET` requests, and respond with the JSON representing all the routes that will be accessible and the methods that can be used to access them.

```
from flask import jsonify

@app.route('/api/v1', methods=["GET"])
def info_view():
    """List of routes for this API."""
    output = {
        'info': 'GET /api/v1',
        'register': 'POST /api/v1/accounts',
        'single profile detail': 'GET /api/v1/accounts/<username>',
        'edit profile': 'PUT /api/v1/accounts/<username>',
        'delete profile': 'DELETE /api/v1/accounts/<username>',
        'login': 'POST /api/v1/accounts/login',
        'logout': 'GET /api/v1/accounts/logout',
        "user's tasks": 'GET /api/v1/accounts/<username>/tasks',
        "create task": 'POST /api/v1/accounts/<username>/tasks',
        "task detail": 'GET /api/v1/accounts/<username>/tasks/<task_id>',
        "task update": 'PUT /api/v1/accounts/<username>/tasks/<task_id>',
        "delete task": 'DELETE /api/v1/accounts/<username>/tasks/<task_id>'
    }
    return jsonify(output)
```

Since you want your view to handle one specific type of HTTP request, use `app.route` to add that restriction. The `methods` keyword argument will take a list of strings as a value, with each string a type of possible HTTP method. In practice, you can use `app.route` to restrict to one or more types of HTTP request or accept any by leaving the `methods` keyword argument alone.

Whatever you intend to return from your view function **must** be a string or an object that Flask turns into a string when constructing a properly formatted HTTP response. The exceptions to this rule are when you're trying to handle redirects and exceptions thrown by your application. What this means for you, the developer, is that you need to be able to encapsulate whatever response you're trying to send back to the client into something that can be interpreted as a string.

A good structure that contains complexity but can still be stringified is a Python dictionary. Therefore, I recommend that, whenever you want to send some data to the client, you choose a Python `dict` with whatever key-value pairs you need to convey information. To turn that dictionary into a properly formatted JSON response, headers and all, pass it as an argument to Flask's `jsonify` function (`from flask import jsonify`).

The view function above takes what is effectively a listing of every route that this API intends to handle and sends it to the client whenever the `http://domainname/api/v1` route is accessed. Note that, on its own, Flask supports routing to exactly matching URIs, so accessing that same route with a trailing `/` would create a 404 error.
If you wanted to handle both with the same view function, you'd need to stack decorators like so:

```
@app.route('/api/v1', methods=["GET"])
@app.route('/api/v1/', methods=["GET"])
def info_view():
    # blah blah blah more code
```

An interesting case is that if the defined route had a trailing slash and the client asked for the route without the slash, you wouldn't need to double up on decorators. Flask would redirect the client's request appropriately. It's odd that it doesn't work both ways.

### Flask requests and the DB

At its base, a web framework's job is to handle incoming HTTP requests and return HTTP responses. The previously written view doesn't really have much to do with HTTP requests aside from the URI that was accessed. It doesn't process any data. Let's look at how Flask behaves when data needs handling.

The first thing to know is that Flask doesn't provide a separate `request` object to each view function. It has **one** global request object that every view function can use, and that object is conveniently named `request` and is importable from the Flask package.

The next thing is that Flask's route patterns can have a bit more nuance. One scenario is a hardcoded route that must be matched perfectly to activate a view function. Another scenario is a route pattern that can handle a range of routes, all mapping to one view by allowing a part of that route to be variable. If the route in question has a variable, the corresponding value can be accessed from the same-named variable in the view's parameter list.

```
@app.route('/a/sample/<variable>/route')
def some_view(variable):
    # some code blah blah blah
```

To communicate with the database within a view, you must use the `db` object that was populated toward the top of the script. Its `session` attribute is your connection to the database when you want to make changes. If you just want to query for objects, the objects built from `db.Model` have their own database interaction layer through the `query` attribute.

Finally, any response you want from a view that's more complex than a string must be built deliberately. Previously you built a response using a "jsonified" dictionary, but certain assumptions were made (e.g., 200 status code, status message "OK," Content-Type of "text/plain"). Any special sauce you want in your HTTP response must be added deliberately.

Knowing these facts about working with Flask views allows you to construct a view whose job is to create new `Task` objects. Let's look at the code (below) and address it piece by piece.
+ +``` +from datetime import datetime +from flask import request, Response +from flask_sqlalchemy import SQLAlchemy +import json + +from .models import Task, User + +app = Flask(__name__) +app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get('DATABASE_URL', '') +db = SQLAlchemy(app) + +INCOMING_DATE_FMT = '%d/%m/%Y %H:%M:%S' + +@app.route('/api/v1/accounts//tasks', methods=['POST']) +def create_task(username): +    """Create a task for one user.""" +    user = User.query.filter_by(username=username).first() +    if user: +        task = Task( +            name=request.form['name'], +            note=request.form['note'], +            creation_date=datetime.now(), +            due_date=datetime.strptime(due_date, INCOMING_DATE_FMT) if due_date else None, +            completed=bool(request.form['completed']), +            user_id=user.id, +        ) +        db.session.add(task) +        db.session.commit() +        output = {'msg': 'posted'} +        response = Response( +            mimetype="application/json", +            response=json.dumps(output), +            status=201 +        ) +        return response +``` + +Let's start with the `@app.route` decorator. The route is `'/api/v1/accounts//tasks'`, where `` is a route variable. Put angle brackets around any part of the route you want to be variable, then include that part of the route on the next line in the parameter list **with the same name**. The only parameters that should be in the parameter list should be the variables in your route. + +Next comes the query: + +``` +user = User.query.filter_by(username=username).first() +``` + +To look for one user by username, conceptually you need to look at all the User objects stored in the database and find the users with the username matching the one that was requested. With Flask, you can ask the `User` object directly through the `query` attribute for the instance matching your criteria. This type of query would provide a list of objects (even if it's only one object or none at all), so to get the object you want, just call `first()`. + +``` +task = Task( +    name=request.form['name'], +    note=request.form['note'], +    creation_date=datetime.now(), +    due_date=datetime.strptime(due_date, INCOMING_DATE_FMT) if due_date else None, +    completed=bool(request.form['completed']), +    user_id=user.id, +) +``` + +Whenever data is sent to the application, regardless of the HTTP method used, that data is stored on the `form` attribute of the `request` object. The name of the field on the frontend will be the name of the key mapped to that data in the `form` dictionary. It'll always come in the form of a string, so if you want your data to be a specific data type, you'll have to make it explicit by casting it as the appropriate type. + +The other thing to note is the assignment of the current user's user ID to the newly instantiated `Task`. This is how that foreign key relationship is maintained. + +``` +db.session.add(task) +db.session.commit() +``` + +Creating a new `Task` instance is great, but its construction has no inherent connection to tables in the database. In order to insert a new row into the corresponding SQL table, you must use the `session` attached to the `db` object. The `db.session.add(task)` stages the new `Task` instance to be added to the table, but doesn't add it yet. While it's done only once here, you can add as many things as you want before committing. 
The `db.session.commit()` takes all the staged changes, or "commits," and applies them to the corresponding tables in the database. + +``` +output = {'msg': 'posted'} +response = Response( +    mimetype="application/json", +    response=json.dumps(output), +    status=201 +) +``` + +The response is an actual instance of a `Response` object with its `mimetype`, body, and `status` set deliberately. The goal for this view is to alert the user they created something new. Seeing how this view is supposed to be part of a backend API that sends and receives JSON, the response body must be JSON serializable. A dictionary with a simple string message should suffice. Ensure that it's ready for transmission by calling `json.dumps` on your dictionary, which will turn your Python object into valid JSON. This is used instead of `jsonify`, as `jsonify` constructs an actual response object using its input as the response body. In contrast, `json.dumps` just takes a given Python object and converts it into a valid JSON string if possible. + +By default, the status code of any response sent with Flask will be `200`. That will work for most circumstances, where you're not trying to send back a specific redirection-level or error-level message. Since this case explicitly lets the frontend know when a new item has been created, set the status code to be `201`, which corresponds to creating a new thing. + +And that's it! That's a basic view for creating a new `Task` object in Flask given the current setup of your To-Do List application. Similar views could be constructed for listing, editing, and deleting tasks, but this example offers an idea of how it could be done. + +### The bigger picture + +There is much more that goes into an application than one view for creating new things. While I haven't discussed anything about authorization/authentication systems, testing, database migration management, cross-origin resource sharing, etc., the details above should give you more than enough to start digging into building your own Flask applications. + +Learn more Python at [PyCon Cleveland 2018][12]. 
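As a closing aside (a hedged sketch of my own, not from the article): with the app running under `flask run`, the new endpoint can be exercised using the third-party `requests` package. The username, port, and form fields below are assumptions carried over from the examples above.

```
import requests

# Assumes a user named 'nick' already exists in the database.
resp = requests.post(
    'http://localhost:5000/api/v1/accounts/nick/tasks',
    data={'name': 'write tests', 'note': 'smoke test',
          'due_date': '', 'completed': ''},
)
print(resp.status_code)  # expect 201 when the task is created
```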
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/4/flask + +作者:[Nicholas Hunt-Walker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nhuntwalker +[b]: https://github.com/lujun9972 +[1]: https://www.fullstackpython.com/web-frameworks.html +[2]: http://flask.pocoo.org/ +[3]: http://www.tornadoweb.org/en/stable/ +[4]: https://trypyramid.com/ +[5]: https://www.djangoproject.com/ +[6]: https://pypi.python.org +[7]: https://docs.python.org/3/distutils/setupscript.html +[8]: https://www.python.org/dev/peps/pep-0249/ +[9]: https://www.sqlalchemy.org/ +[10]: https://flask-migrate.readthedocs.io/en/latest/ +[11]: https://flask-alembic.readthedocs.io/en/stable/ +[12]: https://us.pycon.org/2018/ diff --git a/sources/tech/20180404 Emacs -5- Documents and Presentations with org-mode.md b/sources/tech/20180404 Emacs -5- Documents and Presentations with org-mode.md deleted file mode 100644 index 06c8dc8856..0000000000 --- a/sources/tech/20180404 Emacs -5- Documents and Presentations with org-mode.md +++ /dev/null @@ -1,177 +0,0 @@ -Emacs #5: Documents and Presentations with org-mode -====== - -### 1 About org-mode exporting - -#### 1.1 Background - -org-mode isn't just an agenda-making program. It can also export to lots of formats: LaTeX, PDF, Beamer, iCalendar (agendas), HTML, Markdown, ODT, plain text, man pages, and more complicated formats such as a set of web pages. - -This isn't just some afterthought either; it's a core part of the system and integrates very well. - -One file can be source code, automatically-generated output, task list, documentation, and presentation, all at once. - -Some use org-mode as their preferred markup format, even for things like LaTeX documents. The org-mode manual has an extensive [section on exporting][13]. - -#### 1.2 Getting started - -From any org-mode document, just hit C-c C-e. From there will come up a menu, letting you choose various export formats and options. These are generally single-key options so it's easy to set and execute. For instance, to export a document to a PDF, use C-c C-e l p or for HTML export, C-c C-e h h. - -There are lots of settings available for all of these export options; see the manual. It is, in fact, quite possible to use LaTeX-format equations in both LaTeX and HTML modes, to insert arbitrary preambles and settings for different modes, etc. - -#### 1.3 Add-on packages - -ELPA containts many addition exporters for org-mode as well. Check there for details. - -### 2 Beamer slides with org-mode - -#### 2.1 About Beamer - -[Beamer][14] is a LaTeX environment for making presentations. Its features include: - -* Automated generating of structural elements in the presentation (see, for example, [the Marburg theme][1]). This provides a visual reference for the audience of where they are in the presentation. - -* Strong help for structuring the presentation - -* Themes - -* Full LaTeX available - -#### 2.2 Benefits of Beamer in org-mode - -org-mode has a lot of benefits for working with Beamer. Among them: - -* org-mode's very easy and strong support for visualizing and changing the structure makes it very quick to reorganize your material. - -* Combined with org-babel, live source code (with syntax highlighting) and results can be embedded. - -* The syntax is often easier to work with. 
- -I have completely replaced my usage of LibreOffice/Powerpoint/GoogleDocs with org-mode and beamer. It is, in fact, rather frustrating when I have to use one of those tools, as they are nowhere near as strong as org-mode for visualizing a presentation structure. - -#### 2.3 Headline Levels - -org-mode's Beamer export will convert sections of your document (defined by headings) into slides. The question, of course, is: which sections? This is governed by the H [export setting][15] (org-export-headline-levels). - -There are many ways to go, which suit people. I like to have my presentation like this: - -``` -#+OPTIONS: H:2 -#+BEAMER_HEADER: \AtBeginSection{\frame{\sectionpage}} -``` - -This gives a standalone section slide for each major topic, to highlight major transitions, and then takes the level 2 (two asterisks) headings to set the slide. Many Beamer themes expect a third level of indirection, so you would set H:3 for them. - -#### 2.4 Themes and settings - -You can configure many Beamer and LaTeX settings in your document by inserting lines at the top of your org file. This document, for instance, defines: - -``` -#+TITLE: Documents and presentations with org-mode -#+AUTHOR: John Goerzen -#+BEAMER_HEADER: \institute{The Changelog} -#+PROPERTY: comments yes -#+PROPERTY: header-args :exports both :eval never-export -#+OPTIONS: H:2 -#+BEAMER_THEME: CambridgeUS -#+BEAMER_COLOR_THEME: default -``` - -#### 2.5 Advanced settings - -I like to change some colors, bullet formatting, and the like. I round out my document with: - -``` -# We can't just +BEAMER_INNER_THEME: default because that picks the theme default. -# Override per https://tex.stackexchange.com/questions/11168/change-bullet-style-formatting-in-beamer -#+BEAMER_INNER_THEME: default -#+LaTeX_CLASS_OPTIONS: [aspectratio=169] -#+BEAMER_HEADER: \definecolor{links}{HTML}{0000A0} -#+BEAMER_HEADER: \hypersetup{colorlinks=,linkcolor=,urlcolor=links} -#+BEAMER_HEADER: \setbeamertemplate{itemize items}[default] -#+BEAMER_HEADER: \setbeamertemplate{enumerate items}[default] -#+BEAMER_HEADER: \setbeamertemplate{items}[default] -#+BEAMER_HEADER: \setbeamercolor*{local structure}{fg=darkred} -#+BEAMER_HEADER: \setbeamercolor{section in toc}{fg=darkred} -#+BEAMER_HEADER: \setlength{\parskip}{\smallskipamount} -``` - -Here, aspectratio=169 sets a 16:9 aspect ratio, and the remaining are standard LaTeX/Beamer configuration bits. - -#### 2.6 Shrink (to fit) - -Sometimes you've got some really large code examples and you might prefer to just shrink the slide to fit. - -Just type C-c C-x p, set the BEAMER_opt property to shrink=15\. - -(Or a larger value of shrink). The previous slide uses this here. - -#### 2.7 Result - -Here's the end result: - - [![screenshot1](https://farm1.staticflickr.com/889/26366340577_fbde8ff266_o.png)][16] - -### 3 Interactive Slides - -#### 3.1 Interactive Emacs Slideshows - -With the [org-tree-slide package][17], you can display your slideshow from right within Emacs. Just run M-x org-tree-slide-mode. Then, use C-> and C-< to move between slides. - -You might find C-c C-x C-v (which is org-toggle-inline-images) helpful to cause the system to display embedded images. - -#### 3.2 HTML Slideshows - -There are a lot of ways to export org-mode presentations to HTML, with various levels of JavaScript integration. See the [non-beamer presentations section][18] of the org-mode wiki for details. 
- -### 4 Miscellaneous - -#### 4.1 Additional resources to accompany this post - -* [orgmode.org beamer tutorial][2] - -* [LaTeX wiki][3] - -* [Generating section title slides][4] - -* [Shrinking content to fit on slide][5] - -* A great resource: refcard-org-beamer See its [Github repo][6] Make sure to check out both the PDF and the .org file - -* A nice [Theme matrix][7] - -#### 4.2 Up next in my Emacs series… - -mu4e for email! - - --------------------------------------------------------------------------------- - -via: http://changelog.complete.org/archives/9900-emacs-5-documents-and-presentations-with-org-mode - -作者:[John Goerzen][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://changelog.complete.org/archives/author/jgoerzen -[1]:https://hartwork.org/beamer-theme-matrix/all/beamer-albatross-Marburg-1.png -[2]:https://orgmode.org/worg/exporters/beamer/tutorial.html -[3]:https://en.wikibooks.org/wiki/LaTeX/Presentations -[4]:https://tex.stackexchange.com/questions/117658/automatically-generate-section-title-slides-in-beamer/117661 -[5]:https://tex.stackexchange.com/questions/78514/content-doesnt-fit-in-one-slide -[6]:https://github.com/fniessen/refcard-org-beamer -[7]:https://hartwork.org/beamer-theme-matrix/ -[8]:https://changelog.complete.org/archives/tag/emacs2018 -[9]:https://github.com/jgoerzen/public-snippets/blob/master/emacs/emacs-org-beamer/emacs-org-beamer.org -[10]:http://changelog.complete.org/archives/9900-emacs-5-documents-and-presentations-with-org-mode -[11]:https://github.com/jgoerzen/public-snippets/raw/master/emacs/emacs-org-beamer/emacs-org-beamer.pdf -[12]:https://github.com/jgoerzen/public-snippets/raw/master/emacs/emacs-org-beamer/emacs-org-beamer-document.pdf -[13]:https://orgmode.org/manual/Exporting.html#Exporting -[14]:https://en.wikipedia.org/wiki/Beamer_(LaTeX) -[15]:https://orgmode.org/manual/Export-settings.html#Export-settings -[16]:https://www.flickr.com/photos/jgoerzen/26366340577/in/dateposted/ -[17]:https://orgmode.org/worg/org-tutorials/non-beamer-presentations.html#org-tree-slide -[18]:https://orgmode.org/worg/org-tutorials/non-beamer-presentations.html diff --git a/sources/tech/20180409 How to create LaTeX documents with Emacs.md b/sources/tech/20180409 How to create LaTeX documents with Emacs.md deleted file mode 100644 index 7dc16bcf10..0000000000 --- a/sources/tech/20180409 How to create LaTeX documents with Emacs.md +++ /dev/null @@ -1,281 +0,0 @@ -How to create LaTeX documents with Emacs -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) - -In his excellent article, [An introduction to creating documents in LaTeX][1], author [Aaron Cocker][2] introduces the [LaTeX typesetting system][3] and explains how to create a LaTeX document using [TeXstudio][4]. He also lists a few LaTeX editors that many users find helpful in creating LaTeX documents. - -This comment on the article by [Greg Pittman][5] caught my attention: "LaTeX seems like an awful lot of typing when you first start...". This is true. LaTeX involves a lot of typing and debugging, if you missed a special character like an exclamation mark, which can discourage many users, especially beginners. In this article, I will introduce you to [GNU Emacs][6] and describe how to use it to create LaTeX documents. 
- -### Creating your first document - -Launch Emacs by typing: -``` -emacs -q --no-splash helloworld.org - -``` - -The `-q` flag ensures that no Emacs initializations will load. The `--no-splash-screen` flag prevents splash screens to ensure that only one window is open, with the file `helloworld.org`. - -![Emacs startup screen][8] - -GNU Emacs with the helloworld.org file opened in a buffer window - -Let's add some LaTeX headers the Emacs way: Go to **Org** in the menu bar and select **Export/Publish**. - -![template_flow.png][10] - -Inserting a default template - -In the next window, Emacs offers options to either export or insert a template. Insert the template by entering **#** ([#] Insert template). This will move a cursor to a mini-buffer, where the prompt reads **Options category:**. At this time you may not know the category names; press Tab to see possible completions. Type "default" and press Enter. The following content will be inserted: -``` -#+TITLE: helloworld - -#+DATE: <2018-03-12 Mon> - -#+AUTHOR: - -#+EMAIL: makerpm@nubia - -#+OPTIONS: ':nil *:t -:t ::t <:t H:3 \n:nil ^:t arch:headline - -#+OPTIONS: author:t c:nil creator:comment d:(not "LOGBOOK") date:t - -#+OPTIONS: e:t email:nil f:t inline:t num:t p:nil pri:nil stat:t - -#+OPTIONS: tags:t tasks:t tex:t timestamp:t toc:t todo:t |:t - -#+CREATOR: Emacs 25.3.1 (Org mode 8.2.10) - -#+DESCRIPTION: - -#+EXCLUDE_TAGS: noexport - -#+KEYWORDS: - -#+LANGUAGE: en - -#+SELECT_TAGS: export - -``` - -Change the title, date, author, and email as you wish. Mine looks like this: -``` -#+TITLE: Hello World! My first LaTeX document - -#+DATE: \today - -#+AUTHOR: Sachin Patil - -#+EMAIL: psachin@redhat.com - -``` - -We don't want to create a Table of Contents yet, so change the value of `toc` from `t` to `nil` inline, as shown below: -``` -#+OPTIONS: tags:t tasks:t tex:t timestamp:t toc:nil todo:t |:t - -``` - -Let's add a section and paragraphs. A section starts with an asterisk (*). We'll copy the content of some paragraphs from Aaron's post (from the [Lipsum Lorem Ipsum generator][11]): -``` -* Introduction - - - -  \paragraph{} - -  Lorem ipsum dolor sit amet, consectetur adipiscing elit. Cras lorem - -  nisi, tincidunt tempus sem nec, elementum feugiat ipsum. Nulla in - -  diam libero. Nunc tristique ex a nibh egestas sollicitudin. - - - -  \paragraph{} - -  Mauris efficitur vitae ex id egestas. Vestibulum ligula felis, - -  pulvinar a posuere id, luctus vitae leo. Sed ac imperdiet orci, non - -  elementum leo. Nullam molestie congue placerat. Phasellus tempor et - -  libero maximus commodo. - -``` - - -![helloworld_file.png][13] - -The helloworld.org file - -With the content in place, we'll export the content as a PDF. Select **Export/Publish** from the **Org** menu again, but this time, type **l** (export to LaTeX), followed by **o** (as PDF file and open). This not only opens PDF file for you to view, but also saves the file as `helloworld.pdf` in the same path as `helloworld.org`. - -![org_to_pdf.png][15] - -Exporting helloworld.org to helloworld.pdf - -![org_and_pdf_file.png][17] - -Opening the helloworld.pdf file - -You can also export org to PDF by pressing `Alt + x`, then typing "org-latex-export-to-pdf". Use Tab to auto-complete. - -Emacs also creates the `helloworld.tex` file to give you control over the content. 
- -![org_tex_pdf.png][19] - -Emacs with LaTeX, org, and PDF files open in three different windows - -You can compile the `.tex` file to `.pdf` using the command: -``` -pdflatex helloworld.tex - -``` - -You can also export the `.org` file to HTML or as a simple text file. What I like about .org files is they can be pushed to [GitHub][20], where they are rendered just like any other markdown formats. - -### Creating a LaTeX Beamer presentation - -Let's go a step further and create a LaTeX [Beamer][21] presentation using the same file with some modifications as shown below: -``` -#+TITLE: LaTeX Beamer presentation - -#+DATE: \today - -#+AUTHOR: Sachin Patil - -#+EMAIL: psachin@redhat.com - -#+OPTIONS: ':nil *:t -:t ::t <:t H:3 \n:nil ^:t arch:headline - -#+OPTIONS: author:t c:nil creator:comment d:(not "LOGBOOK") date:t - -#+OPTIONS: e:t email:nil f:t inline:t num:t p:nil pri:nil stat:t - -#+OPTIONS: tags:t tasks:t tex:t timestamp:t toc:nil todo:t |:t - -#+CREATOR: Emacs 25.3.1 (Org mode 8.2.10) - -#+DESCRIPTION: - -#+EXCLUDE_TAGS: noexport - -#+KEYWORDS: - -#+LANGUAGE: en - -#+SELECT_TAGS: export - -#+LATEX_CLASS: beamer - -#+BEAMER_THEME: Frankfurt - -#+BEAMER_INNER_THEME: rounded - - - - - -* Introduction - -*** Programming - -    - Python - -    - Ruby - - - -*** Paragraph one - - - -    Lorem ipsum dolor sit amet, consectetur adipiscing - -    elit. Cras lorem nisi, tincidunt tempus sem nec, elementum feugiat - -    ipsum. Nulla in diam libero. Nunc tristique ex a nibh egestas - -    sollicitudin. - - - -*** Paragraph two - - - -    Mauris efficitur vitae ex id egestas. Vestibulum - -    ligula felis, pulvinar a posuere id, luctus vitae leo. Sed ac - -    imperdiet orci, non elementum leo. Nullam molestie congue - -    placerat. Phasellus tempor et libero maximus commodo. - - - -* Thanks - -*** Links - -    - Link one - -    - Link two - -``` - -We have added three more lines to the header: -``` -#+LATEX_CLASS: beamer - -#+BEAMER_THEME: Frankfurt - -#+BEAMER_INNER_THEME: rounded - -``` - -To export to PDF, press `Alt + x` and type "org-beamer-export-to-pdf". - -![latex_beamer_presentation.png][23] - -The Latex Beamer presentation, created using Emacs and Org mode - -I hope you enjoyed creating this LaTeX and Beamer document using Emacs (note that it's faster to use keyboard shortcuts than a mouse). Emacs Org-mode offers much more than I can cover in this post; you can learn more at [orgmode.org][24]. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/how-create-latex-documents-emacs - -作者:[Sachin Patil][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/psachin -[1]:https://opensource.com/article/17/6/introduction-latex -[2]:https://opensource.com/users/aaroncocker -[3]:https://www.latex-project.org -[4]:http://www.texstudio.org/ -[5]:https://opensource.com/users/greg-p -[6]:https://www.gnu.org/software/emacs/ -[7]:/file/392261 -[8]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/emacs_startup.png?itok=UnT4PgK5 (Emacs startup screen) -[9]:/file/392266 -[10]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/insert_template_flow.png?itok=V_c2KipO (template_flow.png) -[11]:https://www.lipsum.com/feed/html -[12]:/file/392271 -[13]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/helloworld_file.png?itok=o8IX0TsJ (helloworld_file.png) -[14]:/file/392276 -[15]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/org_to_pdf.png?itok=fNnC1Y-L (org_to_pdf.png) -[16]:/file/392281 -[17]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/org_and_pdf_file.png?itok=HEhtw-cu (org_and_pdf_file.png) -[18]:/file/392286 -[19]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/org_tex_pdf.png?itok=poZZV_tj (org_tex_pdf.png) -[20]:https://github.com -[21]:https://www.sharelatex.com/learn/Beamer -[22]:/file/392291 -[23]:https://opensource.com/sites/default/files/styles/panopoly_image_original/public/images/life-uploads/latex_beamer_presentation.png?itok=rsPSeIuM (latex_beamer_presentation.png) -[24]:https://orgmode.org/worg/org-tutorials/org-latex-export.html diff --git a/sources/tech/20180411 How To Setup Static File Server Instantly.md b/sources/tech/20180411 How To Setup Static File Server Instantly.md deleted file mode 100644 index b388b389fa..0000000000 --- a/sources/tech/20180411 How To Setup Static File Server Instantly.md +++ /dev/null @@ -1,171 +0,0 @@ -How To Setup Static File Server Instantly -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/04/serve-720x340.png) -Ever wanted to share your files or project over network, but don’t know how to do? No worries! Here is a simple utility named **“serve”** to share your files instantly over network. This simple utility will instantly turn your system into a static file server, allowing you to serve your files over network. You can access the files from any devices regardless of their operating system. All you need is a web browser. This utility also can be used to serve static websites. It is formerly known as “list” and “micro-list”, but now the name has been changed to “serve”, which is much more suitable for the purpose of this utility. - -### Setup Static File Server Using Serve - -To install “serve”, you need to install NodeJS and NPM first. Refer the following link to install NodeJS and NPM in your Linux box. - -Once NodeJS and NPM installed, run the following command to install “serve”. -``` -$ npm install -g serve - -``` - -Done! 
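(A quick aside before we begin: if you only need a bare-bones static server and already have Python installed, the standard library offers a rough stand-in; the port below mirrors serve's default and is purely an assumption for illustration.)

```
# Rough stand-in using only the Python standard library: serve the
# current directory over HTTP on port 5000, much like 'serve' does.
from http.server import HTTPServer, SimpleHTTPRequestHandler

HTTPServer(('', 5000), SimpleHTTPRequestHandler).serve_forever()
```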
Now is the time to serve the files or folders.

The typical syntax to use "serve" is:
```
$ serve [options] <path-to-file-or-folder>

```

### Serve specific files or folders

For example, let us share the contents of the **Documents** directory. To do so, run:
```
$ serve Documents/

```

Sample output would be:

![][2]

As you can see in the above screenshot, the contents of the given directory have been served over the network via two URLs.

To access the contents from the local system itself, all you have to do is open your web browser and navigate to the **http://localhost:5000** URL.

![][3]

The Serve utility displays the contents of the given directory in a simple layout. You can download (right click on the files and choose "Save link as..") or just view them in the browser.

If you want the local address to open automatically in the browser, use the **-o** flag.
```
$ serve -o Documents/

```

Once you run the above command, the Serve utility will open your web browser automatically and display the contents of the shared item.

Similarly, to access the shared directory from a remote system over the network, type **http://192.168.43.192:5000** in the browser's address bar. Replace 192.168.43.192 with your system's IP.

**Serve contents via different port**

As you may have noticed, the serve utility uses port **5000** by default. So, make sure port 5000 is allowed in your firewall or router. If it is blocked for some reason, you can serve the contents on a different port using the **-p** flag.
```
$ serve -p 1234 Documents/

```

The above command will serve the contents of the Documents directory via port **1234**.

![][4]

To serve a file instead of a folder, just give its full path, as shown below.
```
$ serve Documents/Papers/notes.txt

```

The contents of the shared directory can be accessed by any user on the network as long as they know the path.

**Serve the entire $HOME directory**

Open your Terminal and type:
```
$ serve

```

This will share the contents of your entire $HOME directory over the network.

To stop the sharing, press **CTRL+C**.

**Serve selective files or folders**

You may not want to share all files or directories, but only a few in a directory. You can do this by excluding the files or directories using the **-i** flag.
```
$ serve -i Downloads/

```

The above command will serve the entire file system except the **Downloads** directory.

**Serve contents only on localhost**

Sometimes, you want to serve the contents only on the local system itself, not on the entire network. To do so, use the **-l** flag as shown below:
```
$ serve -l Documents/

```

This command will serve the **Documents** directory only on localhost.

![][5]

This can be useful when you're working on a shared server. All users on the system can access the share, but remote users cannot.

**Serve content using SSL**

Since we serve the contents over the local network, we don't necessarily need SSL. However, the Serve utility has the ability to share contents over SSL using the **--ssl** flag.
```
$ serve --ssl Documents/

```

![][6]

To access the shares via web browser, use **https://localhost:5000** or **https://192.168.43.192:5000**.

![][7]

**Serve contents with authentication**

In all the above examples, we served the contents without any authentication, so anyone on the network can access them. You might feel some contents should only be accessed with a username and password.
- -To do so, use: -``` -$ SERVE_USER=ostechnix SERVE_PASSWORD=123456 serve --auth - -``` - -Now the users need to enter the username (i.e **ostechnix** in our case) and password (123456) to access the shares. - -![][8] - -The Serve utility has some other features, such as disable [**Gzip compression**][9], setup * CORS headers to allow requests from any origin, prevent copy address automatically to clipboard etc. You can read the complete help section by running the following command: -``` -$ serve help - -``` - -And, that’s all for now. Hope this helps. More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-setup-static-file-server-instantly/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[2]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-1.png -[3]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-2.png -[4]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-4.png -[5]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-3.png -[6]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-6.png -[7]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-5-1.png -[8]:http://www.ostechnix.com/wp-content/uploads/2018/04/serve-7-1.png -[9]:https://www.ostechnix.com/how-to-compress-and-decompress-files-in-linux/ diff --git a/sources/tech/20180412 A new approach to security instrumentation.md b/sources/tech/20180412 A new approach to security instrumentation.md deleted file mode 100644 index 0a6a98c0f2..0000000000 --- a/sources/tech/20180412 A new approach to security instrumentation.md +++ /dev/null @@ -1,82 +0,0 @@ -Translating by hopefully2333 - -A new approach to security instrumentation -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security_privacy_lock.png?itok=ZWjrpFzx) - -How many of us have ever uttered the following phrase: “I hope this works!”? - -Without a doubt, most of us have, likely more than once. It’s not a phrase that inspires confidence, as it reveals doubts about our abilities or the functionality of whatever we are testing. Unfortunately, this very phrase defines our traditional security model all too well. We operate based on the assumption and the hope that the controls we put in place—from vulnerability scanning on web applications to anti-virus on endpoints—prevent malicious actors and software from entering our systems and damaging or stealing our information. - -Penetration testing took a step to combat relying on assumptions by actively trying to break into the network, inject malicious code into a web application, or spread “malware” by sending out phishing emails. Composed of finding and poking holes in our different security layers, pen testing fails to account for situations in which holes are actively opened. In security experimentation, we intentionally create chaos in the form of controlled, simulated incident behavior to objectively instrument our ability to detect and deter these types of activities. 
- -> “Security experimentation provides a methodology for the experimentation of the security of distributed systems to build confidence in the ability to withstand malicious conditions.” - -When it comes to security and complex distributed systems, a common adage in the chaos engineering community reiterates that “hope is not an effective strategy.” How often do we proactively instrument what we have designed or built to determine if the controls are failing? Most organizations do not discover that their security controls are failing until a security incident results from that failure. We believe that “Security incidents are not detective measures” and “Hope is not an effective strategy” should be the mantras of IT professionals operating effective security practices. - -The industry has traditionally emphasized preventative security measures and defense-in-depth, whereas our mission is to drive new knowledge and insights into the security toolchain through detective experimentation. With so much focus on the preventative mechanisms, we rarely attempt beyond one-time or annual pen testing requirements to validate whether or not those controls are performing as designed. - -With all of these constantly changing, stateless variables in modern distributed systems, it becomes next to impossible for humans to adequately understand how their systems behave, as this can change from moment to moment. One way to approach this problem is through robust systematic instrumentation and monitoring. For instrumentation in security, you can break down the domain into two primary buckets: **testing** , and what we call **experimentation**. Testing is the validation or assessment of a previously known outcome. In plain terms, we know what we are looking for before we go looking for it. On the other hand, experimentation seeks to derive new insights and information that was previously unknown. While testing is an important practice for mature security teams, the following example should help further illuminate the differences between the two, as well as provide a more tangible depiction of the added value of experimentation. - -### Example scenario: Craft beer delivery - -Consider a simple web service or web application that takes orders for craft beer deliveries. - -This is a critical service for this craft beer delivery company, whose orders come in from its customers' mobile devices, the web, and via its API from restaurants that serve its craft beer. This critical service runs in the company's AWS EC2 environment and is considered by the company to be secure. The company passed its PCI compliance with flying colors last year and annually performs third-party penetration tests, so it assumes that its systems are secure. - -This company also prides itself on its DevOps and continuous delivery practices by deploying sometimes twice in the same day. - -After learning about chaos engineering and security experimentation, the company's development teams want to determine, on a continuous basis, how resilient and effective its security systems are to real-world events, and furthermore, to ensure that they are not introducing new problems into the system that the security controls are not able to detect. - -The team wants to start small by evaluating port security and firewall configurations for their ability to detect, block, and alert on misconfigured changes to the port configurations on their EC2 security groups. - - * The team begins by performing a summary of their assumptions about the normal state. 
- * Develops a hypothesis for port security in their EC2 instances - * Selects and configures the YAML file for the Unauthorized Port Change experiment. - * This configuration would designate the objects to randomly select from for targeting, as well as the port ranges and number of ports that should be changed. - * The team also configures when to run the experiment and shrinks the scope of its blast radius to ensure minimal business impact. - * For this first test, the team has chosen to run the experiment in their stage environments and run a single run of the test. - * In true Game Day style, the team has elected a Master of Disaster to run the experiment during a predefined two-hour window. During that window of time, the Master of Disaster will execute the experiment on one of the EC2 Instance Security Groups. - * Once the Game Day has finished, the team begins to conduct a thorough, blameless post-mortem exercise where the focus is on the results of the experiment against the steady state and the original hypothesis. The questions would be something similar to the following: - - - -### Post-mortem questions - - * Did the firewall detect the unauthorized port change? - * If the change was detected, was it blocked? - * Did the firewall report log useful information to the log aggregation tool? - * Did the SIEM throw an alert on the unauthorized change? - * If the firewall did not detect the change, did the configuration management tool discover the change? - * Did the configuration management tool report good information to the log aggregation tool? - * Did the SIEM finally correlate an alert? - * If the SIEM threw an alert, did the Security Operations Center get the alert? - * Was the SOC analyst who got the alert able to take action on the alert, or was necessary information missing? - * If the SOC alert determined the alert to be credible, was Security Incident Response able to conduct triage activities easily from the data? - - - -The acknowledgment and anticipation of failure in our systems have already begun unraveling our assumptions about how our systems work. Our mission is to take what we have learned and apply it more broadly to begin to truly address security weaknesses proactively, going beyond the reactive processes that currently dominate traditional security models. - -As we continue to explore this new domain, we will be sure to post our findings. For those interested in learning more about the research or getting involved, please feel free to contact [Aaron Rinehart][1] or [Grayson Brewer][2]. - -Special thanks to Samuel Roden for the insights and thoughts provided in this article. 
- -**[See our related story,[Is the term DevSecOps necessary?][3]]** - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/4/new-approach-security-instrumentation - -作者:[Aaron Rinehart][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/aaronrinehart -[1]:https://twitter.com/aaronrinehart -[2]:https://twitter.com/BrewerSecurity -[3]:https://opensource.com/article/18/4/devsecops diff --git a/sources/tech/20180412 NGINX Unit 1.0 An App Server That Supports Go.md b/sources/tech/20180412 NGINX Unit 1.0 An App Server That Supports Go.md deleted file mode 100644 index 2f86fe2f01..0000000000 --- a/sources/tech/20180412 NGINX Unit 1.0 An App Server That Supports Go.md +++ /dev/null @@ -1,108 +0,0 @@ -Announcing NGINX Unit 1.0 -============================================================ - -Today, April 12, marks a significant milestone in the development of [NGINX Unit][8], our dynamic web and application server. Approximately six months after its [first public release][9], we’re now happy to announce that NGINX Unit is generally available and production‑ready. NGINX Unit is our new open source initiative led by Igor Sysoev, creator of the original NGINX Open Source software, which is now used by more than [409 million websites][10]. - -“I set out to make an application server which will be remotely and dynamically configured, and able to switch dynamically from one language or application version to another,” explains Igor. “Dynamic configuration and switching I saw as being certainly the main problem. People want to reconfigure servers without interrupting client processing.” - -NGINX Unit is dynamically configured using a REST API; there is no static configuration file. All configuration changes happen directly in memory. Configuration changes take effect without requiring process reloads or service interruptions. - -![NGINX runs Go, Perl, PHP, Python, and Ruby together on the same server](https://cdn-1.wp.nginx.com/wp-content/uploads/2017/09/dia-FM-2018-04-11-what-is-nginx-unit-01_1024x725-1024x725.png) -NGINX Unit runs multiple languages simultaneously - -“The dynamic switching requires that we can run different languages and language versions in one server,” continues Igor. - -As of Release 1.0, NGINX Unit supports Go, Perl, PHP, Python, and Ruby on the same server. Multiple language versions are also supported, so you can, for instance, run applications written for PHP 5 and PHP 7 on the same server. Support for additional languages, including Java, is planned for future NGINX Unit releases. - -Note: We have an additional blog post on [how to configure NGINX, NGINX Unit, and WordPress][11] to work together. - -Igor studied at Moscow State Technical University, which was a pioneer in the Russian space program, and April 12 has a special significance. “This is the anniversary of the first manned spaceflight in history, made by [Yuri Gagarin][12]. The first public version of NGINX [0.1.0] was released on [[October 4, 2004][7],] the anniversary of the [Sputnik][13] launch, and NGINX 1.0 was launched on April 12, 2011.” - -### What Is NGINX Unit? - -NGINX Unit is a dynamic web and application server, suitable for both stand‑alone applications and distributed, microservices application architectures. 
It launches and scales application processes on demand, executing each application instance in its own secure sandbox. - -NGINX Unit manages and routes all incoming network transactions to the application through a separate “router” process, so it can rapidly implement configuration changes without interrupting service. - -“The configuration is in JSON format, so users can edit it manually, and it’s very suitable for scripting. We hope to add capabilities to [NGINX Controller][14] and [NGINX Amplify][15] to work with Unit configuration too,” explains Igor. - -The NGINX Unit configuration process is described thoroughly in the [documentation][16]. - -“Now Unit can run Python, PHP, Ruby, Perl and Go – five languages. For example, during our beta, one of our users used Unit to run a number of different PHP platform versions on a single host,” says Igor. - -NGINX Unit’s ability to run multiple language runtimes is based on its internal separation between the router process, which terminates incoming HTTP requests, and groups of application processes, which implement the application runtime and execute application code. - -![NGINX Unit architecture](https://cdn-1.wp.nginx.com/wp-content/uploads/2018/04/dia-FM-2018-04-11-Unit-1.0.0-blog-router-process-01-horiz_1024x576-1024x576.png) -NGINX Unit architecture - -The router process is persistent – it never restarts – meaning that configuration updates can be implemented seamlessly, without any interruption in service. Each application process is deployed in its own sandbox (with support for [Linux control groups][17] [cgroups] under active development), so that NGINX Unit provides secure isolation for user code. - -### What’s Next for NGINX Unit? - -The next milestones for the NGINX Unit engineering team after Release 1.0 are concerned with HTTP maturity, serving static content, and additional language support. - -“We plan to add SSL and HTTP/2 capabilities in Unit,” says Igor. “Also, we plan to support routing in configurations; currently, we have direct mapping from one listen port to one application. We plan to add routing using URIs and hostnames, etc.” - -“In addition, we want to add more language support to Unit. We are completing the Ruby implementation, and next we will consider Node.js and Java. Java will be added in a Tomcat‑compatible fashion.” - -The end goal for NGINX Unit is to create an open source platform for distributed, polyglot applications which can run application code securely, reliably, and with the best possible performance. The platform will self‑manage, with capabilities such as autoscaling to meet SLAs within resource constraints, and service discovery and internal load balancing to make it easy to create a [service mesh][18]. - -### NGINX Unit and the NGINX Application Platform - -An NGINX Unit platform will typically be delivered with a front‑end tier of NGINX Open Source or NGINX Plus reverse proxies to provide ingress control, edge load balancing, and security. The joint platform (NGINX Unit and NGINX or NGINX Plus) can then be managed fully using NGINX Controller to monitor, configure, and control the entire platform. 
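-
-To make the dynamic-configuration story concrete, here is a hedged sketch of driving Unit's REST API with curl; the listener and application details are illustrative, and the control socket path varies by package and build, so treat this as a pattern rather than a recipe:
-```
-# Describe one application and a listener that routes to it
-cat > config.json <<'EOF'
-{
-    "listeners": {
-        "*:8400": { "application": "blogs" }
-    },
-    "applications": {
-        "blogs": {
-            "type": "php",
-            "processes": 20,
-            "root": "/www/blogs/scripts"
-        }
-    }
-}
-EOF
-
-# Push the whole configuration in one call - no process reload required
-curl -X PUT -d @config.json --unix-socket /var/run/control.unit.sock http://localhost/config
-
-# Read the active configuration back
-curl --unix-socket /var/run/control.unit.sock http://localhost/config
-```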
- -![NGINX Application Platform for microservices and monolithic applications with NGINX Controller, NGINX Plus, and NGINX Unit](https://cdn-1.wp.nginx.com/wp-content/uploads/2018/03/nginx.com-NAP-diagram-01ag_Main-Products-print-Roboto-white-1024x1008.png) -The NGINX Application Platform is our vision for building microservices - -Together, these three components – NGINX Plus, NGINX Unit, and NGINX Controller – make up the [NGINX Application Platform][19]. The NGINX Application Platform is a product suite that delivers load balancing, caching, API management, a WAF, and application serving, with rich management and control planes that simplify the tasks of operating monolithic, microservices, and transitional applications. - -### Getting Started with NGINX Unit - -NGINX Unit is free and open source. Please see the [installation instructions][20] to get started. We have prebuilt packages for most operating systems, including Ubuntu and Red Hat Enterprise Linux. We also make a [Docker container][21] available on Docker Hub. - -The source code is available in our [Mercurial repository][22] and [mirrored on GitHub][23]. The code is available under the Apache 2.0 license. You can compile NGINX Unit yourself on most popular Linux and Unix systems. - -If you have any questions, please use the [GitHub issues board][24] or the [NGINX Unit mailing list][25]. We’d love to hear how you are using NGINX Unit, and we welcome [code contributions][26] too. - -We’re also happy to extend technical support for NGINX Unit to NGINX Plus customers with Professional or Enterprise support contracts. Please refer to our [Support page][27] for details of the support services we can offer. - --------------------------------------------------------------------------------- - -via: https://www.nginx.com/blog/nginx-unit-1-0-released/ - -作者:[www.nginx.com ][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:www.nginx.com -[1]:https://twitter.com/intent/tweet?text=Announcing+NGINX+Unit+1.0+by+%40nginx+https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F -[2]:http://www.linkedin.com/shareArticle?mini=true&url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&title=Announcing+NGINX+Unit+1.0&summary=Today%2C+April+12%2C+marks+a+significant+milestone+in+the+development+of+NGINX%26nbsp%3BUnit%2C+our+dynamic+web+and+application+server.+Approximately+six+months+after+its+first+public+release%2C+we%E2%80%99re+now+happy+to+announce+that+NGINX%26nbsp%3BUnit+is+generally+available+and+production%26%238209%3Bready.+NGINX%26nbsp%3BUnit+is+our+new+open+source+initiative+led+by+Igor%26nbsp%3BSysoev%2C+creator+of+the+original+NGINX+Open+Source+%5B%26hellip%3B%5D -[3]:https://news.ycombinator.com/submitlink?u=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&t=Announcing%20NGINX%20Unit%201.0&text=Today,%20April%2012,%20marks%20a%20significant%20milestone%20in%20the%20development%20of%20NGINX%C2%A0Unit,%20our%20dynamic%20web%20and%20application%20server.%20Approximately%20six%20months%20after%20its%20first%20public%20release,%20we%E2%80%99re%20now%20happy%20to%20announce%20that%20NGINX%C2%A0Unit%20is%20generally%20available%20and%20production%E2%80%91ready.%20NGINX%C2%A0Unit%20is%20our%20new%20open%20source%20initiative%20led%20by%20Igor%C2%A0Sysoev,%20creator%20of%20the%20original%20NGINX%20Open%20Source%20[%E2%80%A6] 
-[4]:https://www.facebook.com/sharer/sharer.php?u=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F -[5]:https://plus.google.com/share?url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F -[6]:http://www.reddit.com/submit?url=https%3A%2F%2Fwww.nginx.com%2Fblog%2Fnginx-unit-1-0-released%2F&title=Announcing+NGINX+Unit+1.0&text=Today%2C+April+12%2C+marks+a+significant+milestone+in+the+development+of+NGINX%26nbsp%3BUnit%2C+our+dynamic+web+and+application+server.+Approximately+six+months+after+its+first+public+release%2C+we%E2%80%99re+now+happy+to+announce+that+NGINX%26nbsp%3BUnit+is+generally+available+and+production%26%238209%3Bready.+NGINX%26nbsp%3BUnit+is+our+new+open+source+initiative+led+by+Igor%26nbsp%3BSysoev%2C+creator+of+the+original+NGINX+Open+Source+%5B%26hellip%3B%5D -[7]:http://nginx.org/en/CHANGES -[8]:https://www.nginx.com/products/nginx-unit/ -[9]:https://www.nginx.com/blog/introducing-nginx-unit/ -[10]:https://news.netcraft.com/archives/2018/03/27/march-2018-web-server-survey.html -[11]:https://www.nginx.com/blog/installing-wordpress-with-nginx-unit/ -[12]:https://en.wikipedia.org/wiki/Yuri_Gagarin -[13]:https://en.wikipedia.org/wiki/Sputnik_1 -[14]:https://www.nginx.com/products/nginx-controller/ -[15]:https://www.nginx.com/products/nginx-amplify/ -[16]:http://unit.nginx.org/configuration/ -[17]:https://en.wikipedia.org/wiki/Cgroups -[18]:https://www.nginx.com/blog/what-is-a-service-mesh/ -[19]:https://www.nginx.com/products -[20]:http://unit.nginx.org/installation/ -[21]:https://hub.docker.com/r/nginx/unit/ -[22]:http://hg.nginx.org/unit -[23]:https://github.com/nginx/unit -[24]:https://github.com/nginx/unit/issues -[25]:http://mailman.nginx.org/mailman/listinfo/unit -[26]:https://unit.nginx.org/contribution/ -[27]:https://www.nginx.com/support -[28]:https://www.nginx.com/blog/tag/releases/ -[29]:https://www.nginx.com/blog/tag/nginx-unit/ diff --git a/sources/tech/20180416 Cgo and Python.md b/sources/tech/20180416 Cgo and Python.md index e5688d43c8..422f510949 100644 --- a/sources/tech/20180416 Cgo and Python.md +++ b/sources/tech/20180416 Cgo and Python.md @@ -1,3 +1,4 @@ +translating by name1e5s Cgo and Python ============================================================ diff --git a/sources/tech/20180417 How To Browse Stack Overflow From Terminal.md b/sources/tech/20180417 How To Browse Stack Overflow From Terminal.md deleted file mode 100644 index 1ebf17ef68..0000000000 --- a/sources/tech/20180417 How To Browse Stack Overflow From Terminal.md +++ /dev/null @@ -1,138 +0,0 @@ -How To Browse Stack Overflow From Terminal -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/04/how2-720x340.png) -A while ago, we have written about [**SoCLI**][1], a python script to search and browse Stack Overflow website from command line. Today, we will discuss about a similar tool named **“how2”**. It is a command line utility to browse Stack Overflow from Terminal. You can query in the plain English as the way you do in [**Google search**][2] and it uses Google and Stackoverflow APIs to search for the given queries. It is free and open source utility written using NodeJS. - -### Browse Stack Overflow From Terminal Using how2 - -Since how2 is a NodeJS package, we can install it using Npm package manager. If you haven’t installed Npm and NodeJS already, refer the following guide. - -After installing Npm and NodeJS, run the following command to install how2 utility. 
-``` -$ npm install -g how2 - -``` - -Now let us see how to browse Stack Overflow uisng this program. The typical usage to search through Stack Overflow site using “how2” utility is: -``` -$ how2 - -``` - -For example, I am going to search for how to create tgz archive. -``` -$ how2 create archive tgz - -``` - -Oops! I get the following error. -``` -/home/sk/.nvm/versions/node/v9.11.1/lib/node_modules/how2/node_modules/devnull/transports/transport.js:59 -Transport.prototype.__proto__ = EventEmitter.prototype; - ^ - - TypeError: Cannot read property 'prototype' of undefined - at Object. (/home/sk/.nvm/versions/node/v9.11.1/lib/node_modules/how2/node_modules/devnull/transports/transport.js:59:46) - at Module._compile (internal/modules/cjs/loader.js:654:30) - at Object.Module._extensions..js (internal/modules/cjs/loader.js:665:10) - at Module.load (internal/modules/cjs/loader.js:566:32) - at tryModuleLoad (internal/modules/cjs/loader.js:506:12) - at Function.Module._load (internal/modules/cjs/loader.js:498:3) - at Module.require (internal/modules/cjs/loader.js:598:17) - at require (internal/modules/cjs/helpers.js:11:18) - at Object. (/home/sk/.nvm/versions/node/v9.11.1/lib/node_modules/how2/node_modules/devnull/transports/stream.js:8:17) - at Module._compile (internal/modules/cjs/loader.js:654:30) - -``` - -I may be a bug. I hope it gets fixed in the future versions. However, I find a workaround posted [**here**][3]. - -To fix this error temporarily, you need to edit the **transport.js** file using command: -``` -$ vi /home/sk/.nvm/versions/node/v9.11.1/lib/node_modules/how2/node_modules/devnull/transports/transport.js - -``` - -The actual path of this file will be displayed in your error output. Replace the above file path with your own. Then find the following line: -``` -var EventEmitter = process.EventEmitter; - -``` - -and replace it with following line: -``` -var EventEmitter = require('events'); - -``` - -Press ESC and type **:wq** to save and quit the file. - -Now search again the query. -``` -$ how2 create archive tgz - -``` - -Here is the sample output from my Ubuntu system. - -[![][4]][5] - -If the answer you’re looking for is not displayed in the above output, press **SPACE BAR** key to start the interactive search where you can go through all suggested questions and answers from the Stack Overflow site. - -[![][4]][6] - -Use UP/DOWN arrows to move between the results. Once you got the right answer/question, hit SPACE BAR or ENTER key to open it in the Terminal. - -[![][4]][7] - -To go back and exit, press **ESC**. - -**Search answers for specific language** - -If you don’t specify a language it **defaults to Bash** unix command line and give you immediately the most likely answer as above. You can also narrow the results to a specific language, for example perl, python, c, Java etc. - -For instance, to search for queries related to “Python” language only using **-l** flag as shown below. -``` -$ how2 -l python linked list - -``` - -[![][4]][8] - -To get a quick help, type: -``` -$ how2 -h - -``` - -### Conclusion - -The how2 utility is a basic command line program to quickly search for questions and answers from Stack Overflow without leaving your Terminal and it does this job pretty well. However, it is just CLI browser for Stack overflow. For some advanced features such as searching most voted questions, searching queries using multiple tags, colored interface, submitting a new question and viewing questions stats etc., **SoCLI** is good to go. - -And, that’s all for now. 
Hope this was useful. I will be soon here with another useful guide. Until then, stay tuned with OSTechNix! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-browse-stack-overflow-from-terminal/ - -作者:[SK][a] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) -选题:[lujun9972](https://github.com/lujun9972) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/search-browse-stack-overflow-website-commandline/ -[2]:https://www.ostechnix.com/google-search-navigator-enhance-keyboard-navigation-in-google-search/ -[3]:https://github.com/santinic/how2/issues/79 -[4]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[5]:http://www.ostechnix.com/wp-content/uploads/2018/04/stack-overflow-1.png -[6]:http://www.ostechnix.com/wp-content/uploads/2018/04/stack-overflow-2.png -[7]:http://www.ostechnix.com/wp-content/uploads/2018/04/stack-overflow-3.png -[8]:http://www.ostechnix.com/wp-content/uploads/2018/04/stack-overflow-4.png diff --git a/sources/tech/20180419 Migrating to Linux- Network and System Settings.md b/sources/tech/20180419 Migrating to Linux- Network and System Settings.md deleted file mode 100644 index f3930f1777..0000000000 --- a/sources/tech/20180419 Migrating to Linux- Network and System Settings.md +++ /dev/null @@ -1,116 +0,0 @@ -Migrating to Linux: Network and System Settings -====== - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/animals-birds-flock-55832.jpg?itok=NUGAyhDO) -In this series, we provide an overview of fundamentals to help you successfully make the transition to Linux from another operating system. If you missed the earlier articles in the series, you can find them here: - -[Part 1 - An Introduction][1] - -[Part 2 - Disks, Files, and Filesystems][2] - -[Part 3 - Graphical Environments][3] - -[Part 4 - The Command Line][4] - -[Part 5 - Using sudo][5] - -[Part 6 - Installing Software][6] - -Linux gives you a lot of control over network and system settings. On your desktop, Linux lets you tweak just about anything on the system. Most of these settings are exposed in plain text files under the /etc directory. Here I describe some of the most common settings you’ll use on your desktop Linux system. - -A lot of settings can be found in the Settings program, and the available options will vary by Linux distribution. Usually, you can change the background, tweak sound volume, connect to printers, set up displays, and more. While I won't talk about all of the settings here, you can certainly explore what's in there. - -### Connect to the Internet - -Connecting to the Internet in Linux is often fairly straightforward. If you are wired through an Ethernet cable, Linux will usually get an IP address and connect automatically when the cable is plugged in or at startup if the cable is already connected. - -If you are using wireless, in most distributions there is a menu, either in the indicator panel or in settings (depending on your distribution), where you can select the SSID for your wireless network. If the network is password protected, it will usually prompt you for the password. Afterward, it connects, and the process is fairly smooth. - -You can adjust network settings in the graphical environment by going into settings. Sometimes this is called System Settings or just Settings. 
Often you can easily spot the settings program because its icon is a gear or a picture of tools (Figure 1). - - -![Network Settings][8] - -Figure 1: Gnome Desktop Network Settings Indicator Icon. - -[Used with permission][9] - -### Network Interface Names - -Under Linux, network devices have names. Historically, these are given names like eth0 and wlan0 -- or Ethernet and wireless, respectively. Newer Linux systems have been using different names that appear more esoteric, like enp4s0 and wlp5s0. If the name starts with en, it's a wired Ethernet interface. If it starts with wl, it's a wireless interface. The rest of the letters and numbers reflect how the device is connected to hardware. - -### Network Management from the Command Line - -If you want more control over your network settings, or if you are managing network connections without a graphical desktop, you can also manage the network from the command line. - -Note that the most common service used to manage networks in a graphical desktop is the Network Manager, and Network Manager will often override setting changes made on the command line. If you are using the Network Manager, it's best to change your settings in its interface so it doesn't undo the changes you make from the command line or someplace else. - -Changing settings in the graphical environment is very likely to be interacting with the Network Manager, and you can also change Network Manager settings from the command line using the tool called nmtui. The nmtui tool provides all the settings that you find in the graphical environment but gives it in a text-based semi-graphical interface that works on the command line (Figure 2). - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-2_0.png?itok=1QVjDdbJ) - -On the command line, there is an older tool called ifconfig to manage networks and a newer one called ip. On some distributions, ifconfig is considered to be deprecated and is not even installed by default. On other distributions, ifconfig is still in use. - -Here are some commands that will allow you to display and change network settings: - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screen_shot_2018-04-17_at_3.11.48_pm.png?itok=EZsjb-GQ) - -### Process and System Information - -In Windows, you can go into the Task Manager to see a list of the all the programs and services that are running. You can also stop programs from running. And you can view system performance in some of the tabs displayed there. - -You can do similar things in Linux both from the command line and from graphical tools. In Linux, there are a few graphical tools available depending on your distribution. The most common ones are System Monitor or KSysGuard. In these tools, you can see system performance, see a list of processes, and even kill processes (Figure 3). - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-3_2.png?itok=ePeXj9PA) - -In these tools, you can also view global network traffic on your system (Figure 4). - - -![System Monitor][11] - -Figure 4: Screenshot of Gnome System Monitor. - -[Used with permission][9] - -### Managing Process and System Usage - -There are also quite a few tools you can use from the command line. The command ps can be used to list processes on your system. By default, it will list processes running in your current terminal session. But you can list other processes by giving it various command line options. You can get more help on ps with the commands info ps, or man ps. 
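-
-For instance, here are a few common invocations - a quick sketch assuming the GNU procps version of ps that ships with most distributions:
-```
-# Show every process on the system, BSD style
-ps aux
-
-# Show the ten biggest memory consumers (11 lines = header + 10 processes)
-ps aux --sort=-%mem | head -n 11
-
-# Show processes as a parent/child tree
-ps -ef --forest
-```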
- -Most folks though want to get a list of processes because they would like to stop the one that is using up too much memory or CPU time. In this case, there are two commands that make this task much easier. These are top and htop (Figure 5). - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-5_0.png?itok=2nm5EmAl) - -The top and htop tools work very similarly to each other. These commands update their list every second or two and re-sort the list so that the task using the most CPU is at the top. You can also change the sorting to sort by other resources as well such as memory usage. - -In either of these programs (top and htop), you can type '?' to get help, and 'q' to quit. With top, you can press 'k' to kill a process and then type in the unique PID number for the process to kill it. - -With htop, you can highlight a task by pressing down arrow or up arrow to move the highlight bar, and then press F9 to kill the task followed by Enter to confirm. - -The information and tools provided in this series will help you get started with Linux. With a little time and patience, you'll feel right at home. - -Learn more about Linux through the free ["Introduction to Linux" ][12]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/2018/4/migrating-linux-network-and-system-settings - -作者:[John Bonesio][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/johnbonesio -[1]:https://www.linux.com/blog/learn/intro-to-linux/2017/10/migrating-linux-introduction -[2]:https://www.linux.com/blog/learn/intro-to-linux/2017/11/migrating-linux-disks-files-and-filesystems -[3]:https://www.linux.com/blog/learn/2017/12/migrating-linux-graphical-environments -[4]:https://www.linux.com/blog/learn/2018/1/migrating-linux-command-line -[5]:https://www.linux.com/blog/learn/2018/3/migrating-linux-using-sudo -[6]:https://www.linux.com/blog/learn/2018/3/migrating-linux-installing-software -[7]:https://www.linux.com/files/images/figure-1png-2 -[8]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-1_2.png?itok=J-C6q-t5 (Network Settings) -[9]:https://www.linux.com/licenses/category/used-permission -[10]:https://www.linux.com/files/images/figure-4png-1 -[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/figure-4_1.png?itok=boI-L1mF (System Monitor) -[12]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180420 How To Remove Password From A PDF File in Linux.md b/sources/tech/20180420 How To Remove Password From A PDF File in Linux.md deleted file mode 100644 index 0e4318c858..0000000000 --- a/sources/tech/20180420 How To Remove Password From A PDF File in Linux.md +++ /dev/null @@ -1,220 +0,0 @@ -How To Remove Password From A PDF File in Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/04/Remove-Password-From-A-PDF-File-720x340.png) -Today I happen to share a password protected PDF file to one of my friend. I knew the password of that PDF file, but I didn’t want to disclose it. Instead, I just wanted to remove the password and send the file to him. 
I started to looking for some easy ways to remove the password protection from the pdf files on Internet. After a quick google search, I came up with four methods to remove password from a PDF file in Linux. The funny thing is I had already done it few years ago and I almost forgot it. If you’re wondering how to remove password from a PDF file in Linux, read on! It is not that difficult. - -### Remove Password From A PDF File in Linux - -**Method 1 – Using Qpdf** - -The **Qpdf** is a PDF transformation software which is used to encrypt and decrypt PDF files, convert PDF files to another equivalent pdf files. Qpdf is available in the default repositories of most Linux distributions, so you can install it using the default package manager. - -For example, Qpdf can be installed on Arch Linux and its variants using [**pacman**][1] as shown below. -``` -$ sudo pacman -S qpdf - -``` - -On Debian, Ubuntu, Linux Mint: -``` -$ sudo apt-get install qpdf - -``` - -Now let us remove the password from a pdf file using qpdf. - -I have a password-protected PDF file named **“secure.pdf”**. Whenever I open this file, it prompts me to enter the password to display its contents. - -![][3] - -I know the password of the above pdf file. However, I don’t want to share the password with anyone. So what I am going to do is to simply remove the password of the PDF file using Qpdf utility with following command. -``` -$ qpdf --password='123456' --decrypt secure.pdf output.pdf - -``` - -Quite easy, isn’t it? Yes, it is! Here, **123456** is the password of the **secure.pdf** file. Replace the password with your own. - -**Method 2 – Using Pdftk** - -**Pdftk** is yet another great software for manipulating pdf documents. Pdftk can do almost all sort of pdf operations, such as; - - * Encrypt and decrypt pdf files. - * Merge PDF documents. - * Collate PDF page Scans. - * Split PDF pages. - * Rotate PDF files or pages. - * Fill PDF forms with X/FDF data and/or flatten forms. - * Generate FDF data stencils from PDF forms. - * Apply a background watermark or a foreground stamp. - * Report PDF metrics, bookmarks and metadata. - * Add/update PDF bookmarks or metadata. - * Attach files to PDF pages or the PDF document. - * Unpack PDF attachments. - * Burst a PDF file into single pages. - * Compress and decompress page streams. - * Repair corrupted PDF file. - - - -Pddftk is available in AUR, so you can install it using any AUR helper programs on Arch Linux its derivatives. - -Using [**Pacaur**][4]: -``` -$ pacaur -S pdftk - -``` - -Using [**Packer**][5]: -``` -$ packer -S pdftk - -``` - -Using [**Trizen**][6]: -``` -$ trizen -S pdftk - -``` - -Using [**Yay**][7]: -``` -$ yay -S pdftk - -``` - -Using [**Yaourt**][8]: -``` -$ yaourt -S pdftk - -``` - -On Debian, Ubuntu, Linux Mint, run: -``` -$ sudo apt-get instal pdftk - -``` - -On CentOS, Fedora, Red Hat: - -First, Install EPEL repository: -``` -$ sudo yum install epel-release - -``` - -Or -``` -$ sudo dnf install epel-release - -``` - -Then install PDFtk application using command: -``` -$ sudo yum install pdftk - -``` - -Or -``` -$ sudo dnf install pdftk - -``` - -Once pdftk installed, you can remove the password from a pdf document using command: -``` -$ pdftk secure.pdf input_pw 123456 output output.pdf - -``` - -Replace ‘123456’ with your correct password. This command decrypts the “secure.pdf” file and create an equivalent non-password protected file named “output.pdf”. 
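-
-If you have a whole folder of PDFs protected with the same password, a small shell loop around either tool will batch the job. Here is a sketch using qpdf - adjust the password and the output naming to your taste:
-```
-# Decrypt every PDF in the current directory into decrypted_<name>.pdf
-for f in *.pdf; do
-    qpdf --password='123456' --decrypt "$f" "decrypted_$f"
-done
-```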
- -**Also read:** - -**Method 3 – Using Poppler** - -**Poppler** is a PDF rendering library based on the xpdf-3.0 code base. It contains the following set of command line utilities for manipulating PDF documents. - - * **pdfdetach** – lists or extracts embedded files. - * **pdffonts** – font analyzer. - * **pdfimages** – image extractor. - * **pdfinfo** – document information. - * **pdfseparate** – page extraction tool. - * **pdfsig** – verifies digital signatures. - * **pdftocairo** – PDF to PNG/JPEG/PDF/PS/EPS/SVG converter using Cairo. - * **pdftohtml** – PDF to HTML converter. - * **pdftoppm** – PDF to PPM/PNG/JPEG image converter. - * **pdftops** – PDF to PostScript (PS) converter. - * **pdftotext** – text extraction. - * **pdfunite** – document merging tool. - - - -For the purpose of this guide, we only use the “pdftops” utility. - -To install Poppler on Arch Linux based distributions, run: -``` -$ sudo pacman -S poppler - -``` - -On Debian, Ubuntu, Linux Mint: -``` -$ sudo apt-get install poppler-utils - -``` - -On RHEL, CentOS, Fedora: -``` -$ sudo yum install poppler-utils - -``` - -Once Poppler installed, run the following command to decrypt the password protected pdf file and create a new equivalent file named output.pdf. -``` -$ pdftops -upw 123456 secure.pdf output.pdf - -``` - -Again, replace ‘123456’ with your pdf password. - -As you might noticed in all above methods, we just converted the password protected pdf file named “secure.pdf” to another equivalent pdf file named “output.pdf”. Technically speaking, we really didn’t remove the password from the source file, instead we decrypted it and saved it as another equivalent pdf file without password protection. - -**Method 4 – Print to a file -** - -This is the easiest method in all of the above methods. You can use your existing PDF viewer such as Atril document viewer, Evince etc., and print the password protected pdf file to another file. - -Open the password protected file in your PDF viewer application. Go to **File - > Print**. And save the pdf file in any location of your choice. - -![][9] - -And, that’s all. Hope this was useful. Do you know/use any other methods to remove the password protection from PDF files? Let us know in the comment section below. - -More good stuffs to come. Stay tuned! - -Cheers! 
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/how-to-remove-password-from-a-pdf-file-in-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:https://www.ostechnix.com/getting-started-pacman/ -[2]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[3]:http://www.ostechnix.com/wp-content/uploads/2018/04/Remove-Password-From-A-PDF-File-1.png -[4]:https://www.ostechnix.com/install-pacaur-arch-linux/ -[5]:https://www.ostechnix.com/install-packer-arch-linux-2/ -[6]:https://www.ostechnix.com/trizen-lightweight-aur-package-manager-arch-based-systems/ -[7]:https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[8]:https://www.ostechnix.com/install-yaourt-arch-linux/ diff --git a/sources/tech/20180422 Command Line Tricks For Data Scientists - kade killary.md b/sources/tech/20180422 Command Line Tricks For Data Scientists - kade killary.md deleted file mode 100644 index aea3b6b035..0000000000 --- a/sources/tech/20180422 Command Line Tricks For Data Scientists - kade killary.md +++ /dev/null @@ -1,526 +0,0 @@ -Command Line Tricks For Data Scientists • kade killary -====== - -![](https://i.imgur.com/0mzQMcB.png) - -For many data scientists, data manipulation begins and ends with Pandas or the Tidyverse. In theory, there is nothing wrong with this notion. It is, after all, why these tools exist in the first place. Yet, these options can often be overkill for simple tasks like delimiter conversion. - -Aspiring to master the command line should be on every developer’s list, especially data scientists. Learning the ins and outs of your shell will undeniably make you more productive. Beyond that, the command line serves as a great history lesson in computing. For instance, awk - a data-driven scripting language. Awk first appeared in 1977 with the help of [Brian Kernighan][1], the K in the legendary [K&R book][2]. Today, some near 50 years later, awk remains relevant with [new books][3] still appearing every year! Thus, it’s safe to assume that an investment in command line wizardry won’t depreciate any time soon. - -### What We’ll Cover - - * ICONV - * HEAD - * TR - * WC - * SPLIT - * SORT & UNIQ - * CUT - * PASTE - * JOIN - * GREP - * SED - * AWK - - - -### ICONV - -File encodings can be tricky. For the most part files these days are all UTF-8 encoded. To understand some of the magic behind UTF-8, check out this [excellent video][4]. Nonetheless, there are times where we receive a file that isn’t in this format. This can lead to some wonky attempts at swapping the encoding schema. Here, `iconv` is a life saver. Iconv is a simple program that will take text in one encoding and output the text in another. -``` -# Converting -f (from) latin1 (ISO-8859-1) -# -t (to) standard UTF_8 - -iconv -f ISO-8859-1 -t UTF-8 < input.txt > output.txt - -``` - - * Useful options: - - * `iconv -l` list all known encodings - * `iconv -c` silently discard characters that cannot be converted - - - -### HEAD - -If you are a frequent Pandas user then `head` will be familiar. Often when dealing with new data the first thing we want to do is get a sense of what exists. This leads to firing up Pandas, reading in the data and then calling `df.head()` \- strenuous, to say the least. 
Head, without any flags, will print out the first 10 lines of a file. The true power of `head` lies in testing out cleaning operations. For instance, if we wanted to change the delimiter of a file from commas to pipes, one quick test would be: `head mydata.csv | sed 's/,/|/g'`.
-```
-# Prints out first 10 lines

-head filename.csv

-# Print first 3 lines

-head -n 3 filename.csv

-```

- * Useful options:

- * `head -n` print a specific number of lines
- * `head -c` print a specific number of bytes

-

-### TR

-Tr is analogous to translate. This powerful utility is a workhorse for basic file cleaning. An ideal use case is swapping out the delimiters within a file.
-```
-# Converting a tab-delimited file into commas
-# (note the redirect: tr reads stdin and writes stdout; it takes no output file argument)

-cat tab_delimited.txt | tr "\t" "," > comma_delimited.csv

-```

-Another feature of `tr` is all the built-in `[:class:]` variables at your disposal. These include:
-```
-[:alnum:] all letters and digits
-[:alpha:] all letters
-[:blank:] all horizontal whitespace
-[:cntrl:] all control characters
-[:digit:] all digits
-[:graph:] all printable characters, not including space
-[:lower:] all lower case letters
-[:print:] all printable characters, including space
-[:punct:] all punctuation characters
-[:space:] all horizontal or vertical whitespace
-[:upper:] all upper case letters
-[:xdigit:] all hexadecimal digits

-```

-You can chain a variety of these together to compose powerful programs. The following is a basic word count program you could use to check your READMEs for overuse.
-```
-cat README.md | tr "[:punct:][:space:]" "\n" | tr "[:upper:]" "[:lower:]" | grep . | sort | uniq -c | sort -nr

-```

-Another example using basic regex:
-```
-# Converting all upper case letters to lower case

-cat filename.csv | tr '[A-Z]' '[a-z]'

-```

- * Useful options:

- * `tr -d` delete characters
- * `tr -s` squeeze characters
- * `\b` backspace
- * `\f` form feed
- * `\v` vertical tab
- * `\NNN` character with octal value NNN

-

-### WC

-Word count. Its value is primarily derived from the `-l` flag, which will give you the line count.
-```
-# Will return number of lines in CSV

-wc -l gigantic_comma.csv

-```

-This tool comes in handy to confirm the output of various commands. So, if we were to convert the delimiters within a file and then run `wc -l`, we would expect the total lines to be the same. If not, then we know something went wrong.

- * Useful options:

- * `wc -c` print the byte counts
- * `wc -m` print the character counts
- * `wc -L` print length of longest line
- * `wc -w` print word counts

-

-### SPLIT

-File sizes can range dramatically. And depending on the job, it could be beneficial to split up the file - thus `split`. The basic syntax for split is:
-```
-# We will split our CSV into new_filename_ chunks of 500 lines each

-split -l 500 filename.csv new_filename_

-# ls output
-# filename.csv
-# new_filename_aaa
-# new_filename_aab
-# new_filename_aac

-```

-Two quirks are the naming convention and the lack of file extensions. The suffix convention can be made numeric via the `-d` flag. To add file extensions, you'll need to run the following `find` command. It will change the names of ALL files within the current directory by appending `.csv`, so be careful.
-```
-# Append .csv to every file in the current directory
-find . -type f -exec mv '{}' '{}'.csv \;

-# ls output
-# filename.csv.csv
-# new_filename_aaa.csv
-# new_filename_aab.csv
-# new_filename_aac.csv

-```

- * Useful options:

- * `split -b` split by certain byte size
- * `split -a` generate suffixes of length N
- * `split -x` split using hex suffixes

-

-### SORT & UNIQ

-The preceding commands are obvious: they do what they say they do. These two provide the most punch in tandem (i.e. unique word counts). This is due to `uniq`, which only operates on duplicate adjacent lines. Thus the reason to `sort` before piping the output through. One interesting note is that `sort -u` will achieve the same results as the typical `sort file.txt | uniq` pattern.

-Sort does have a sneakily useful ability for data scientists: the ability to sort an entire CSV based on a particular column.
-```
-# Sorting a CSV file by the second column alphabetically

-sort -t"," -k2,2 filename.csv

-# Numerically

-sort -t"," -k2n,2 filename.csv

-# Reverse order

-sort -t"," -k2nr,2 filename.csv

-```

-The `-t` option here is to specify the comma as our delimiter. More often than not spaces or tabs are assumed. Furthermore, the `-k` flag is for specifying our key. The syntax for this is `-km,n`, with `m` being the starting field and `n` being the last.

- * Useful options:

- * `sort -f` ignore case
- * `sort -r` reverse sort order
- * `sort -R` scramble order
- * `uniq -c` count number of occurrences
- * `uniq -d` only print duplicate lines

-

-### CUT

-Cut is for selecting columns. To illustrate, if we only wanted the first and third columns:
-```
-cut -d, -f 1,3 filename.csv

-```

-To select every column other than the first:
-```
-cut -d, -f 2- filename.csv

-```

-In combination with other commands, `cut` serves as a filter.
-```
-# Print first 10 lines of column 1 and 3, where "some_string_value" is present

-head filename.csv | grep "some_string_value" | cut -d, -f 1,3

-```

-Finding the number of unique values within the second column:
-```
-cat filename.csv | cut -d, -f 2 | sort | uniq | wc -l

-# Count occurrences of unique values, limiting to first 10 results

-cat filename.csv | cut -d, -f 2 | sort | uniq -c | head

-```

-### PASTE

-Paste is a niche command with an interesting function. If you have two files that you need merged, and they are already sorted, `paste` has you covered.
-```
-# names.txt
-adam
-john
-zach

-# jobs.txt
-lawyer
-youtuber
-developer

-# Join the two into a CSV

-paste -d ',' names.txt jobs.txt > person_data.txt

-# Output
-adam,lawyer
-john,youtuber
-zach,developer

-```

-For a more SQL-esque variant, see below.

-### JOIN

-Join is simplistic, quasi-tangential SQL. The largest differences are that `join` will return all columns, and matches can only be on one field. By default, `join` will try to use the first column as the match key. For a different result, the following syntax is necessary:
-```
-# Join the first file (-1) by the second column
-# and the second file (-2) by the first

-join -t"," -1 2 -2 1 first_file.txt second_file.txt

-```

-The standard join is an inner join. However, an outer join is also viable through the `-a` flag. Another noteworthy quirk is the `-e` flag, which can be used to substitute a value if a missing field is found.
-```
-# Outer join, replace blanks with NULL in columns 1 and 2
-# -o which fields to substitute - 0 is key, 1.1 is first column, etc...
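-# (with GNU join, supplying both -a 1 and -a 2 keeps unpairable lines from each file - a full outer join)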
- -join -t"," -1 2 -a 1 -a2 -e ' NULL' -o '0,1.1,2.2' first_file.txt second_file.txt - -``` - -Not the most user-friendly command, but desperate times, desperate measures. - - * Useful options: - - * `join -a` print unpairable lines - * `join -e` replace missing input fields - * `join -j` equivalent to `-1 FIELD -2 FIELD` - - - -### GREP - -Global search for a regular expression and print, or `grep`; likely, the most well known command, and with good reason. Grep has a lot of power, especially for finding your way around large codebases. Within the realm of data science, it acts as a refining mechanism for other commands. Although its standard usage is valuable as well. -``` -# Recursively search and list all files in directory containing 'word' - -grep -lr 'word' . - -# List number of files containing word - -grep -lr 'word' . | wc -l - -``` - -Count total number of lines containing word / pattern. -``` -grep -c 'some_value' filename.csv - -# Same thing, but in all files in current directory by file name - -grep -c 'some_value' * - -``` - -Grep for multiple values using the or operator - `\|`. -``` -grep "first_value\|second_value" filename.csv - -``` - - * Useful options - - * `alias grep="grep --color=auto"` make grep colorful - * `grep -E` use extended regexps - * `grep -w` only match whole words - * `grep -l` print name of files with match - * `grep -v` inverted matching - - - -### THE BIG GUNS - -Sed and Awk are the two most powerful commands in this article. For brevity, I’m not going to go into exhausting detail about either. Instead, I will cover a variety of commands that prove their impressive might. If you want to know more, [there is a book][5] just for that. - -### SED - -At its core `sed` is a stream editor. It excels at substitutions, but can also be leveraged for all out refactoring. - -The most basic `sed` command consists of `s/old/new/g`. This translates to search for old value, replace with new globally. Without the `/g` our command would terminate after the first occurrence. - -To get a quick taste of the power lets dive into an example. In this scenario you’ve been given the following file: -``` -balance,name -$1,000,john -$2,000,jack - -``` - -The first thing we may want to do is remove the dollar signs. The `-i` flag indicates in-place. The `''` is to indicate a zero-length file extension, thus overwriting our initial file. Ideally, you would test each of these individually and then output to a new file. -``` -sed -i '' 's/\$//g' data.txt - -# balance,name -# 1,000,john -# 2,000,jack - -``` - -Next up, the commas in our `balance` column values. -``` -sed -i '' 's/\([0-9]\),\([0-9]\)/\1\2/g' data.txt - -# balance,name -# 1000,john -# 2000,jack - -``` - -Lastly, Jack up and decided to quit one day. So, au revoir, mon ami. -``` -sed -i '' '/jack/d' data.txt - -# balance,name -# 1000,john - -``` - -As you can see, `sed` packs quite a punch, but the fun doesn’t stop there. - -### AWK - -The best for last. Awk is much more than a simple command: it is a full-blown language. Of everything covered in this article, `awk` is by far the coolest. If you find yourself impressed there are loads of great resources - see [here][6], [here][7] and [here][8]. - -Common use cases for `awk` include: - - * Text processing - * Formatted text reports - * Performing arithmetic operations - * Performing string operations - - - -Awk can parallel `grep` in its most nascent form. -``` -awk '/word/' filename.csv - -``` - -Or with a little more magic the combination of `grep` and `cut`. 
Here, `awk` prints the third and fourth column, tab separated, for all lines containing our word. `-F,` merely changes our delimiter to a comma.
-```
-awk -F, '/word/ { print $3 "\t" $4 }' filename.csv

-```

-Awk comes with a lot of nifty variables built in. For instance, `NF` (number of fields) and `NR` (number of records). To get the fifty-third record in a file:
-```
-awk -F, 'NR == 53' filename.csv

-```

-An added wrinkle is the ability to filter based on one or more values. The first example, below, will print the line number and columns for records where the first column equals the string "string".
-```
-awk -F, ' $1 == "string" { print NR, $0 } ' filename.csv

-# Filter based on a numerical value in the second column

-awk -F, ' $2 == 1000 { print NR, $0 } ' filename.csv

-```

-Multiple numerical expressions:
-```
-# Print line number and columns where column three is greater
-# than 2005 and column five is less than one thousand

-awk -F, ' $3 >= 2005 && $5 <= 1000 { print NR, $0 } ' filename.csv

-```

-Sum the third column:
-```
-awk -F, '{ x+=$3 } END { print x }' filename.csv

-```

-The sum of the third column, for values where the first column equals "something":
-```
-awk -F, '$1 == "something" { x+=$3 } END { print x }' filename.csv

-```

-Get the dimensions of a file:
-```
-awk -F, 'END { print NF, NR }' filename.csv

-# Prettier version

-awk -F, 'BEGIN { print "COLUMNS", "ROWS" }; END { print NF, NR }' filename.csv

-```

-Print lines appearing twice:
-```
-awk -F, '++seen[$0] == 2' filename.csv

-```

-Remove duplicate lines:
-```
-# Consecutive lines (compares each line against the previous one)
-awk 'a !~ $0; {a=$0}' filename.csv

-# Nonconsecutive lines
-awk '! a[$0]++' filename.csv

-# More efficient
-awk '!($0 in a) {a[$0]; print}' filename.csv

-```

-Substitute multiple values using the built-in function `gsub()`:
-```
-awk '{gsub(/scarlet|ruby|puce/, "red"); print}' filename.csv

-```

-This `awk` command will combine multiple CSV files, keeping the header from the first file and skipping the header line of every file after it:
-```
-awk 'FNR==1 && NR!=1{next;}{print}' *.csv > final_file.csv

-```

-Need to downsize a massive file? Welp, `awk` can handle that with help from `sed`. Specifically, this command breaks one big file into multiple smaller ones based on a line count. This one-liner will also add an extension.
-```
-sed '1d;$d' filename.csv | awk 'NR%NUMBER_OF_LINES==1{x="filename-"++i".csv";}{print > x}'

-# Example: splitting big_data.csv into data_(n).csv every 100,000 lines

-sed '1d;$d' big_data.csv | awk 'NR%100000==1{x="data_"++i".csv";}{print > x}'

-```

-### CLOSING

-The command line boasts endless power. The commands covered in this article are enough to elevate you from zero to hero in no time. Beyond those covered, there are many utilities to consider for daily data operations. [Csvkit][9], [xsv][10] and [q][11] are three of note. If you're looking to take an even deeper dive into command line data science, then look no further than [this book][12]. It's also available online [for free][13]!
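-
-As a parting taste of those tools, here is the kind of quick inspection they make trivial - a sketch assuming `xsv` is installed, with subcommand names as documented in its README:
-```
-# List the column headers of a CSV
-xsv headers filename.csv

-# Summary statistics for every column, rendered as an aligned table
-xsv stats filename.csv | xsv table
-```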
- --------------------------------------------------------------------------------- - -via: http://kadekillary.work/post/cli-4-ds/ - -作者:[Kade Killary][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://kadekillary.work/authors/kadekillary -[1]:https://en.wikipedia.org/wiki/Brian_Kernighan -[2]:https://en.wikipedia.org/wiki/The_C_Programming_Language -[3]:https://www.amazon.com/Learning-AWK-Programming-cutting-edge-text-processing-ebook/dp/B07BT98HDS -[4]:https://www.youtube.com/watch?v=MijmeoH9LT4 -[5]:https://www.amazon.com/sed-awk-Dale-Dougherty/dp/1565922255/ref=sr_1_1?ie=UTF8&qid=1524381457&sr=8-1&keywords=sed+and+awk -[6]:https://www.amazon.com/AWK-Programming-Language-Alfred-Aho/dp/020107981X/ref=sr_1_1?ie=UTF8&qid=1524388936&sr=8-1&keywords=awk -[7]:http://www.grymoire.com/Unix/Awk.html -[8]:https://www.tutorialspoint.com/awk/index.htm -[9]:http://csvkit.readthedocs.io/en/1.0.3/ -[10]:https://github.com/BurntSushi/xsv -[11]:https://github.com/harelba/q -[12]:https://www.amazon.com/Data-Science-Command-Line-Time-Tested/dp/1491947853/ref=sr_1_1?ie=UTF8&qid=1524390894&sr=8-1&keywords=data+science+at+the+command+line -[13]:https://www.datascienceatthecommandline.com/ diff --git a/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md b/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md deleted file mode 100644 index 4e69d04837..0000000000 --- a/sources/tech/20180425 Things to do After Installing Ubuntu 18.04.md +++ /dev/null @@ -1,294 +0,0 @@ -Things to do After Installing Ubuntu 18.04 -====== -**Brief: This list of things to do after installing Ubuntu 18.04 helps you get started with Bionic Beaver for a smoother desktop experience.** - -[Ubuntu][1] 18.04 Bionic Beaver releases today. You are perhaps already aware of the [new features in Ubuntu 18.04 LTS][2] release. If not, here’s the video review of Ubuntu 18.04 LTS: - -[Subscribe to YouTube Channel for more Ubuntu Videos][3] - -If you opted to install Ubuntu 18.04, I have listed out a few recommended steps that you can follow to get started with it. - -### Things to do after installing Ubuntu 18.04 Bionic Beaver - -![Things to do after installing Ubuntu 18.04][4] - -I should mention that the list of things to do after installing Ubuntu 18.04 depends a lot on you and your interests and needs. If you are a programmer, you’ll focus on installing programming tools. If you are a graphic designer, you’ll focus on installing graphics tools. - -Still, there are a few things that should be applicable to most Ubuntu users. This list is composed of those things plus a few of my of my favorites. - -Also, this list is for the default [GNOME desktop][5]. If you are using some other flavor like [Kubuntu][6], Lubuntu etc then the GNOME-specific stuff won’t be applicable to your system. - -You don’t have to follow each and every point on the list blindly. You should see if the recommended action suits your requirements or not. - -With that said, let’s get started with this list of things to do after installing Ubuntu 18.04. - -#### 1\. Update the system - -This is the first thing you should do after installing Ubuntu. Update the system without fail. It may sound strange because you just installed a fresh OS but still, you must check for the updates. 
- -In my experience, if you don’t update the system right after installing Ubuntu, you might face issues while trying to install a new program. - -To update Ubuntu 18.04, press Super Key (Windows Key) to launch the Activity Overview and look for Software Updater. Run it to check for updates. - -![Software Updater in Ubuntu 17.10][7] - -**Alternatively** , you can use these famous commands in the terminal ( Use Ctrl+Alt+T): -``` -sudo apt update && sudo apt upgrade - -``` - -#### 2\. Enable additional repositories for more software - -[Ubuntu has several repositories][8] from where it provides software for your system. These repositories are: - - * Main – Free and open-source software supported by Ubuntu team - * Universe – Free and open-source software maintained by the community - * Restricted – Proprietary drivers for devices. - * Multiverse – Software restricted by copyright or legal issues. - * Canonical Partners – Software packaged by Ubuntu for their partners - - - -Enabling all these repositories will give you access to more software and proprietary drivers. - -Go to Activity Overview by pressing Super Key (Windows key), and search for Software & Updates: - -![Software and Updates in Ubuntu 17.10][9] - -Under the Ubuntu Software tab, make sure you have checked all of the Main, Universe, Restricted and Multiverse repository checked. - -![Setting repositories in Ubuntu 18.04][10] - -Now move to the **Other Software** tab, check the option of **Canonical Partners**. - -![Enable Canonical Partners repository in Ubuntu 17.10][11] - -You’ll have to enter your password in order to update the software sources. Once it completes, you’ll find more applications to install in the Software Center. - -#### 3\. Install media codecs - -In order to play media files like MP#, MPEG4, AVI etc, you’ll need to install media codecs. Ubuntu has them in their repository but doesn’t install it by default because of copyright issues in various countries. - -As an individual, you can install these media codecs easily using the Ubuntu Restricted Extra package. Click on the link below to install it from the Software Center. - -[Install Ubuntu Restricted Extras][12] - -Or alternatively, use the command below to install it: -``` -sudo apt install ubuntu-restricted-extras - -``` - -#### 4\. Install software from the Software Center - -Now that you have setup the repositories and installed the codecs, it is time to get software. If you are absolutely new to Ubuntu, please follow this [guide to installing software in Ubuntu][13]. - -There are several ways to install software. The most convenient way is to use the Software Center that has thousands of software available in various categories. You can install them in a few clicks from the software center. - -![Software Center in Ubuntu 17.10 ][14] - -It depends on you what kind of software you would like to install. I’ll suggest some of my favorites here. - - * **VLC** – media player for videos - * **GIMP** – Photoshop alternative for Linux - * **Pinta** – Paint alternative in Linux - * **Calibre** – eBook management tool - * **Chromium** – Open Source web browser - * **Kazam** – Screen recorder tool - * [**Gdebi**][15] – Lightweight package installer for .deb packages - * **Spotify** – For streaming music - * **Skype** – For video messaging - * **Kdenlive** – [Video editor for Linux][16] - * **Atom** – [Code editor][17] for programming - - - -You may also refer to this list of [must-have Linux applications][18] for more software recommendations. - -#### 5\. 
Install software from the Web - -Though Ubuntu has thousands of applications in the software center, you may not find some of your favorite applications despite the fact that they support Linux. - -Many software vendors provide ready to install .deb packages. You can download these .deb files from their website and install it by double-clicking on it. - -[Google Chrome][19] is one such software that you can download from the web and install it. - -#### 6\. Opt out of data collection in Ubuntu 18.04 (optional) - -Ubuntu 18.04 collects some harmless statistics about your system hardware and your system installation preference. It also collects crash reports. - -You’ll be given the option to not send this data to Ubuntu servers when you log in to Ubuntu 18.04 for the first time. - -![Opt out of data collection in Ubuntu 18.04][20] - -If you miss it that time, you can disable it by going to System Settings -> Privacy and then set the Problem Reporting to Manual. - -![Privacy settings in Ubuntu 18.04][21] - -#### 7\. Customize the GNOME desktop (Dock, themes, extensions and more) - -The GNOME desktop looks good in Ubuntu 18.04 but doesn’t mean you cannot change it. - -You can do a few visual changes from the System Settings. You can change the wallpaper of the desktop and the lock screen, you can change the position of the dock (launcher on the left side), change power settings, Bluetooth etc. In short, you can find many settings that you can change as per your need. - -![Ubuntu 17.10 System Settings][22] - -Changing themes and icons are the major way to change the looks of your system. I advise going through the list of [best GNOME themes][23] and [icons for Ubuntu][24]. Once you have found the theme and icon of your choice, you can use them with GNOME Tweaks tool. - -You can install GNOME Tweaks via the Software Center or you can use the command below to install it: -``` -sudo apt install gnome-tweak-tool - -``` - -Once it is installed, you can easily [install new themes and icons][25]. - -![Change theme is one of the must to do things after installing Ubuntu 17.10][26] - -You should also have a look at [use GNOME extensions][27] to further enhance the looks and capabilities of your system. I made this video about using GNOME extensions in 17.10 and you can follow the same for Ubuntu 18.04. - -If you are wondering which extension to use, do take a look at this list of [best GNOME extensions][28]. - -I also recommend reading this article on [GNOME customization in Ubuntu][29] so that you can know the GNOME desktop in detail. - -#### 8\. Prolong your battery and prevent overheating - -Let’s move on to [prevent overheating in Linux laptops][30]. TLP is a wonderful tool that controls CPU temperature and extends your laptops’ battery life in the long run. - -Make sure that you haven’t installed any other power saving application such as [Laptop Mode Tools][31]. You can install it using the command below in a terminal: -``` -sudo apt install tlp tlp-rdw - -``` - -Once installed, run the command below to start it: -``` -sudo tlp start - -``` - -#### 9\. Save your eyes with Nightlight - -Nightlight is my favorite feature in GNOME desktop. Keeping [your eyes safe at night][32] from the computer screen is very important. Reducing blue light helps reducing eye strain at night. - -![flux effect][33] - -GNOME provides a built-in Night Light option, which you can activate in the System Settings. - -Just go to System Settings-> Devices-> Displays and turn on the Night Light option. 
- -![Enabling night light is a must to do in Ubuntu 17.10][34] - -#### 9\. Disable automatic suspend for laptops - -Ubuntu 18.04 comes with a new automatic suspend feature for laptops. If the system is running on battery and is inactive for 20 minutes, it will go in suspend mode. - -I understand that the intention is to save battery life but it is an inconvenience as well. You can’t keep the power plugged in all the time because it’s not good for the battery life. And you may need the system to be running even when you are not using it. - -Thankfully, you can change this behavior. Go to System Settings -> Power. Under Suspend & Power Button section, either turn off the Automatic Suspend option or extend its time period. - -![Disable automatic suspend in Ubuntu 18.04][35] - -You can also change the screen dimming behavior in here. - -#### 10\. System cleaning - -I have written in detail about [how to clean up your Ubuntu system][36]. I recommend reading that article to know various ways to keep your system free of junk. - -Normally, you can use this little command to free up space from your system: -``` -sudo apt autoremove - -``` - -It’s a good idea to run this command every once a while. If you don’t like the command line, you can use a GUI tool like [Stacer][37] or [Bleach Bit][38]. - -#### 11\. Going back to Unity or Vanilla GNOME (not recommended) - -If you have been using Unity or GNOME in the past, you may not like the new customized GNOME desktop in Ubuntu 18.04. Ubuntu has customized GNOME so that it resembles Unity but at the end of the day, it is neither completely Unity nor completely GNOME. - -So if you are a hardcore Unity or GNOMEfan, you may want to use your favorite desktop in its ‘real’ form. I wouldn’t recommend but if you insist here are some tutorials for you: - -#### 12\. Can’t log in to Ubuntu 18.04 after incorrect password? Here’s a workaround - -I noticed a [little bug in Ubuntu 18.04][39] while trying to change the desktop session to Ubuntu Community theme. It seems if you try to change the sessions at the login screen, it rejects your password first and at the second attempt, the login gets stuck. You can wait for 5-10 minutes to get it back or force power it off. - -The workaround here is that after it displays the incorrect password message, click Cancel, then click your name, then enter your password again. - -#### 13\. Experience the Community theme (optional) - -Ubuntu 18.04 was supposed to have a dashing new theme developed by the community. The theme could not be completed so it could not become the default look of Bionic Beaver release. I am guessing that it will be the default theme in Ubuntu 18.10. - -![Ubuntu 18.04 Communitheme][40] - -You can try out the aesthetic theme even today. [Installing Ubuntu Community Theme][41] is very easy. Just look for it in the software center, install it, restart your system and then at the login choose the Communitheme session. - -#### 14\. Get Windows 10 in Virtual Box (if you need it) - -In a situation where you must use Windows for some reasons, you can [install Windows in virtual box inside Linux][42]. It will run as a regular Ubuntu application. - -It’s not the best way but it still gives you an option. You can also [use WINE to run Windows software on Linux][43]. In both cases, I suggest trying the alternative native Linux application first before jumping to virtual machine or WINE. - -#### What do you do after installing Ubuntu? - -Those were my suggestions for getting started with Ubuntu. 
There are many more tutorials that you can find under [Ubuntu 18.04][44] tag. You may go through them as well to see if there is something useful for you. - -Enough from myside. Your turn now. What are the items on your list of **things to do after installing Ubuntu 18.04**? The comment section is all yours. - --------------------------------------------------------------------------------- - -via: https://itsfoss.com/things-to-do-after-installing-ubuntu-18-04/ - -作者:[Abhishek Prakash][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:http://itsfoss.com/author/abhishek/ -[1]:https://www.ubuntu.com/ -[2]:https://itsfoss.com/ubuntu-18-04-release-features/ -[3]:https://www.youtube.com/c/itsfoss?sub_confirmation=1 -[4]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/things-to-after-installing-ubuntu-18-04-featured-800x450.jpeg -[5]:https://www.gnome.org/ -[6]:https://kubuntu.org/ -[7]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-update-ubuntu-17-10.jpg -[8]:https://help.ubuntu.com/community/Repositories/Ubuntu -[9]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-updates-ubuntu-17-10.jpg -[10]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/repositories-ubuntu-18.png -[11]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/software-repository-ubuntu-17-10.jpeg -[12]:apt://ubuntu-restricted-extras -[13]:https://itsfoss.com/remove-install-software-ubuntu/ -[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/Ubuntu-software-center-17-10-800x551.jpeg -[15]:https://itsfoss.com/gdebi-default-ubuntu-software-center/ -[16]:https://itsfoss.com/best-video-editing-software-linux/ -[17]:https://itsfoss.com/best-modern-open-source-code-editors-for-linux/ -[18]:https://itsfoss.com/essential-linux-applications/ -[19]:https://www.google.com/chrome/ -[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/opt-out-of-data-collection-ubuntu-18-800x492.jpeg -[21]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/privacy-ubuntu-18-04-800x417.png -[22]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/System-Settings-Ubuntu-17-10-800x573.jpeg -[23]:https://itsfoss.com/best-gtk-themes/ -[24]:https://itsfoss.com/best-icon-themes-ubuntu-16-04/ -[25]:https://itsfoss.com/install-themes-ubuntu/ -[26]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/GNOME-Tweak-Tool-Ubuntu-17-10.jpeg -[27]:https://itsfoss.com/gnome-shell-extensions/ -[28]:https://itsfoss.com/best-gnome-extensions/ -[29]:https://itsfoss.com/gnome-tricks-ubuntu/ -[30]:https://itsfoss.com/reduce-overheating-laptops-linux/ -[31]:https://wiki.archlinux.org/index.php/Laptop_Mode_Tools -[32]:https://itsfoss.com/night-shift-flux-ubuntu-linux/ -[33]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2016/03/flux-eyes-strain.jpg -[34]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2017/10/Enable-Night-Light-Feature-Ubuntu-17-10-800x396.jpeg -[35]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/disable-automatic-suspend-ubuntu-18-800x586.jpeg -[36]:https://itsfoss.com/free-up-space-ubuntu-linux/ -[37]:https://itsfoss.com/optimize-ubuntu-stacer/ -[38]:https://itsfoss.com/bleachbit-2-release/ 
-[39]:https://gitlab.gnome.org/GNOME/gnome-shell/issues/227 -[40]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/04/ubunt-18-theme.jpeg -[41]:https://itsfoss.com/ubuntu-community-theme/ -[42]:https://itsfoss.com/install-windows-10-virtualbox-linux/ -[43]:https://itsfoss.com/use-windows-applications-linux/ -[44]:https://itsfoss.com/tag/ubuntu-18-04/ diff --git a/sources/tech/20180428 A Beginners Guide To Flatpak.md b/sources/tech/20180428 A Beginners Guide To Flatpak.md deleted file mode 100644 index db1dfa8181..0000000000 --- a/sources/tech/20180428 A Beginners Guide To Flatpak.md +++ /dev/null @@ -1,311 +0,0 @@ -A Beginners Guide To Flatpak -====== - -![](https://www.ostechnix.com/wp-content/uploads/2016/06/flatpak-720x340.jpg) - -A while, we have written about [**Ubuntu’s Snaps**][1]. Snaps are introduced by Canonical for Ubuntu operating system, and later it was adopted by other Linux distributions such as Arch, Gentoo, and Fedora etc. A snap is a single binary package bundled with all required libraries and dependencies, and you can install it on any Linux distribution, regardless of its version and architecture. Similar to Snaps, there is also another tool called **Flatpak**. As you may already know, packaging distributed applications for different Linux distributions are quite time consuming and difficult process. Each distributed application has different set of libraries and dependencies for various Linux distributions. But, Flatpak, the new framework for desktop applications that completely reduces this burden. Now, you can build a single Flatpak app and install it on various operating systems. How cool, isn’t it? - -Also, the users don’t have to worry about the libraries and dependencies, everything is bundled within the app itself. Most importantly, Flaptpak apps are sandboxed and isolated from the rest of the host operating system, and other applications. Another notable feature is we can install multiple versions of the same application at the same time in the same system. For example, you can install VLC player version 2.1, 2.2, and 2.3 on the same system. So, the developers can test different versions of same application at a time. - -In this tutorial, we will see how to install Flatpak in GNU/Linux. - -### Install Flatpak - -Flatpak is available for many popular Linux distributions such as Arch Linux, Debian, Fedora, Gentoo, Red Hat, Linux Mint, openSUSE, Solus, Mageia and Ubuntu distributions. - -To install Flatpak on Arch Linux, run: -``` -$ sudo pacman -S flatpak - -``` - -Flatpak is available in the default repositories of Debian Stretch and newer. To install it, run: -``` -$ sudo apt install flatpak - -``` - -On Fedora, Flatpak is installed by default. All you have to do is enable enable Flathub as described in the next section. - -Just in case, it is not installed for any reason, run: -``` -$ sudo dnf install flatpak - -``` - -On RHEL 7, run: -``` -$ sudo yum install flatpak - -``` - -On Linux Mint 18.3, flatpak is installed by default. So, no setup required. - -On openSUSE Tumbleweed, Flatpak can also be installed using Zypper: -``` -$ sudo zypper install flatpak - -``` - -On Ubuntu, add the following repository and install Flatpak as shown below. -``` -$ sudo add-apt-repository ppa:alexlarsson/flatpak - -$ sudo apt update - -$ sudo apt install flatpak - -``` - -The Flatpak plugin for the Software app makes it possible to install apps without needing the command line. 
To install this plugin, run: -``` -$ sudo apt install gnome-software-plugin-flatpak - -``` - -For other Linux distributions, refer the official installation [**link**][2]. - -### Getting Started With Flatpak - -There are many popular applications such as Gimp, Kdenlive, Steam, Spotify, Visual studio code etc., available as flatpaks. - -Let us now see the basic usage of flatpak command. - -First of all, we need to add remote repositories. - -#### Adding Remote Repositories** - -**Enable Flathub Repository:** - -**Flathub** is nothing but a central repository where all flatpak applications available to users. To enable it, just run: -``` -$ sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo - -``` - -Flathub is enough to install most popular apps. Just in case you wanted to try some GNOME apps, add the GNOME repository. - -**Enable GNOME Repository:** - -The GNOME repository contains all GNOME core applications. GNOME flatpak repository itself is available as two versions, **stable** and **nightly**. - -To add GNOME stable repository, run the following commands: -``` -$ wget https://sdk.gnome.org/keys/gnome-sdk.gpg - -$ sudo flatpak remote-add --gpg-import=gnome-sdk.gpg --if-not-exists gnome-apps https://sdk.gnome.org/repo-apps/ - -``` - -Applications in this repository require the **3.20 version of the org.gnome.Platform runtime**. - -To install the stable runtimes, run: -``` -$ sudo flatpak remote-add --gpg-import=gnome-sdk.gpg gnome https://sdk.gnome.org/repo/ - -``` - -To add the GNOME nightly apps repository, run: -``` -$ wget https://sdk.gnome.org/nightly/keys/nightly.gpg - -$ sudo flatpak remote-add --gpg-import=nightly.gpg --if-not-exists gnome-nightly-apps https://sdk.gnome.org/nightly/repo-apps/ - -``` - -Applications in this repository require the **nightly version of the org.gnome.Platform runtime**. - -To install the nightly runtimes, run: -``` -$ sudo flatpak remote-add --gpg-import=nightly.gpg gnome-nightly https://sdk.gnome.org/nightly/repo/ - -``` - -#### Listing Remotes - -To list all configured remote repositories, run: -``` -$ flatpak remotes -Name Options -flathub system -gnome system -gnome-apps system -gnome-nightly system -gnome-nightly-apps system - -``` - -As you can see, the above command lists the remotes that you have added in your system. It also lists whether the remote has been added per-user or system-wide. - -#### Removing Remotes - -To remove a remote, for example flathub, simply do; -``` -$ sudo flatpak remote-delete flathub - -``` - -Here **flathub** is remote name. - -#### Installing Flatpak Applications - -In this section, we will see how to install flatpak apps. To install a flatpak application - -To install an application, simply do: -``` -$ sudo flatpak install flathub com.spotify.Client - -``` - -All the apps in the GNOME stable repository uses the version name of “stable”. - -To install any Stable GNOME applications, for example **Evince** , run: -``` -$ sudo flatpak install gnome-apps org.gnome.Evince stable - -``` - -All the apps in the GNOME nightly repository uses the version name of “master”. - -For example, to install gedit, run: -``` -$ sudo flatpak install gnome-nightly-apps org.gnome.gedit master - -``` - -If you don’t want to install apps system-wide, you also can install flatpak apps per-user like below. -``` -$ flatpak install --user - -``` - -All installed apps will be stored in **$HOME/.var/app/** location. 
-``` -$ ls $HOME/.var/app/ -com.spotify.Client - -``` - -#### Running Flatpak Applications - -You can launch the installed applications at any time from the application launcher. From command line, you can run it, for example Spotify, using command: -``` -$ flatpak run com.spotify.Client - -``` - -#### Listing Applications - -To view the installed applications and runtimes, run: -``` -$ flatpak list - -``` - -To view only the applications, not run times, use this command instead. -``` -$ flatpak list --app - -``` - -You can also view the list of available applications and runtimes from all remotes using command: -``` -$ flatpak remote-ls - -``` - -To list only applications not runtimes, run: -``` -$ flatpak remote-ls --app - -``` - -To list applications and runtimes from a specific repository, for example **gnome-apps** , run: -``` -$ flatpak remote-ls gnome-apps - -``` - -To list only the applications from a remote repository, run: -``` -$ flatpak remote-ls flathub --app - -``` - -#### Updating Applications - -To update all your flatpak applications, run: -``` -$ flatpak update - -``` - -To update a specific application, we do: -``` -$ flatpak update com.spotify.Client - -``` - -#### Getting Details Of Applications - -To display the details of a installed application, run: -``` -$ flatpak info io.github.mmstick.FontFinder - -``` - -Sample output: -``` -Ref: app/io.github.mmstick.FontFinder/x86_64/stable -ID: io.github.mmstick.FontFinder -Arch: x86_64 -Branch: stable -Origin: flathub -Date: 2018-04-11 15:10:31 +0000 -Subject: Workaround appstream issues (391ef7f5) -Commit: 07164e84148c9fc8b0a2a263c8a468a5355b89061b43e32d95008fc5dc4988f4 -Parent: dbff9150fce9fdfbc53d27e82965010805f16491ec7aa1aa76bf24ec1882d683 -Location: /var/lib/flatpak/app/io.github.mmstick.FontFinder/x86_64/stable/07164e84148c9fc8b0a2a263c8a468a5355b89061b43e32d95008fc5dc4988f4 -Installed size: 2.5 MB -Runtime: org.gnome.Platform/x86_64/3.28 - -``` - -#### Removing Applications - -To remove a flatpak application, run: -``` -$ sudo flatpak uninstall com.spotify.Client - -``` - -For details, refer flatpak help section. -``` -$ flatpak --help - -``` - -And, that’s all for now. Hope you had basic idea about Flatpak. - -If you find this guide useful, please share it on your social, professional networks and support OSTechNix. - -More good stuffs to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/flatpak-new-framework-desktop-applications-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.ostechnix.com/author/sk/ -[1]:http://www.ostechnix.com/introduction-ubuntus-snap-packages/ -[2]:https://flatpak.org/setup/ diff --git a/sources/tech/20180503 11 Methods To Find System-Server Uptime In Linux.md b/sources/tech/20180503 11 Methods To Find System-Server Uptime In Linux.md deleted file mode 100644 index 7a30127b07..0000000000 --- a/sources/tech/20180503 11 Methods To Find System-Server Uptime In Linux.md +++ /dev/null @@ -1,193 +0,0 @@ -11 Methods To Find System/Server Uptime In Linux -====== -Do you want to know, how long your Linux system has been running without downtime? when the system is up and what date. 
- -There are multiple commands is available in Linux to check server/system uptime and most of users prefer the standard and very famous command called `uptime` to get this details. - -Server uptime is not important for some people but it’s very important for server administrators when the server running with mission-critical applications such as online shopping portal, netbanking portal, etc,. - -It must be zero downtime because if there is a down time then it will impact badly to million users. - -As i told, many commands are available to check server uptime in Linux. In this tutorial we are going teach you how to check this using below 11 methods. - -Uptime means how long the server has been up since its last shutdown or reboot. - -The uptime command the fetch the details from `/proc` files and print the server uptime, the `/proc` file is not directly readable by humans. - -The below commands will print how long the system has been running and up. It also shows some additional information. - -### Method-1 : Using uptime Command - -uptime command will tell how long the system has been running. It gives a one line display of the following information. - -The current time, how long the system has been running, how many users are currently logged on, and the system load averages for the past 1, 5, and 15 minutes. -``` -# uptime - - 08:34:29 up 21 days, 5:46, 1 user, load average: 0.06, 0.04, 0.00 - -``` - -### Method-2 : Using w Command - -w command provides a quick summary of every user logged into a computer, what each user is currently doing, -and what load all the activity is imposing on the computer itself. The command is a one-command combination of several other Unix programs: who, uptime, and ps -a. -``` -# w - - 08:35:14 up 21 days, 5:47, 1 user, load average: 0.26, 0.09, 0.02 -USER TTY FROM [email protected] IDLE JCPU PCPU WHAT -root pts/1 103.5.134.167 08:34 0.00s 0.01s 0.00s w - -``` - -### Method-3 : Using top Command - -Top command is one of the basic command to monitor real-time system processes in Linux. It display system information and running processes information like uptime, average load, tasks running, number of users logged in, number of CPUs & cpu utilization, Memory & swap information. Run top command then hit E to bring the memory utilization in MB. - -**Suggested Read :** [TOP Command Examples to Monitor Server Performance][1] -``` -# top -c - -top - 08:36:01 up 21 days, 5:48, 1 user, load average: 0.12, 0.08, 0.02 -Tasks: 98 total, 1 running, 97 sleeping, 0 stopped, 0 zombie -Cpu(s): 0.0%us, 0.3%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st -Mem: 1872888k total, 1454644k used, 418244k free, 175804k buffers -Swap: 2097148k total, 0k used, 2097148k free, 1098140k cached - - PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND - 1 root 20 0 19340 1492 1172 S 0.0 0.1 0:01.04 /sbin/init - 2 root 20 0 0 0 0 S 0.0 0.0 0:00.00 [kthreadd] - 3 root RT 0 0 0 0 S 0.0 0.0 0:00.00 [migration/0] - 4 root 20 0 0 0 0 S 0.0 0.0 0:34.32 [ksoftirqd/0] - 5 root RT 0 0 0 0 S 0.0 0.0 0:00.00 [stopper/0] - -``` - -### Method-4 : Using who Command - -who command displays a list of users who are currently logged into the computer. The who command is related to the command w, which provides the same information but also displays additional data and statistics. -``` -# who -b - -system boot 2018-04-12 02:48 - -``` - -### Method-5 : Using last Command - -The last command displays a list of last logged in users. 
Last searches back through the file /var/log/wtmp and displays a list of all users logged in (and out) since that file was created.
-```
-# last reboot -F | head -1 | awk '{print $5,$6,$7,$8,$9}'
-
-Thu Apr 12 02:48:04 2018
-
-```
-
-### Method-6 : Using /proc/uptime File
-
-This file contains information detailing how long the system has been on since its last restart. The output of `/proc/uptime` is quite minimal.
-
-The first number is the total number of seconds the system has been up. The second number is how much of that time the machine has spent idle, in seconds.
-```
-# cat /proc/uptime
-
-1835457.68 1809207.16
-
-```
-
-### Method-7 : Using tuptime Command
-
-Tuptime is a tool for report the historical and statistical running time of the system, keeping it between restarts. Like uptime command but with more interesting output.
-```
-$ tuptime
-
-```
-
-### Method-8 : Using htop Command
-
-htop is an interactive process viewer for Linux which was developed by Hisham using ncurses library. Htop have many of features and options compared to top command.
-
-**Suggested Read :** [Monitor system resources using Htop command][2]
-```
-# htop
-
- CPU[| 0.5%] Tasks: 48, 5 thr; 1 running
- Mem[||||||||||||||||||||||||||||||||||||||||||||||||||| 165/1828MB] Load average: 0.10 0.05 0.01
- Swp[ 0/2047MB] Uptime: 21 days, 05:52:35
-
- PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
-29166 root 20 0 110M 2484 1240 R 0.0 0.1 0:00.03 htop
-29580 root 20 0 11464 3500 1032 S 0.0 0.2 55:15.97 /bin/sh ./OSWatcher.sh 10 1
- 1 root 20 0 19340 1492 1172 S 0.0 0.1 0:01.04 /sbin/init
- 486 root 16 -4 10780 900 348 S 0.0 0.0 0:00.07 /sbin/udevd -d
- 748 root 18 -2 10780 932 360 S 0.0 0.0 0:00.00 /sbin/udevd -d
-
-```
-
-### Method-9 : Using glances Command
-
-Glances is a cross-platform curses-based system monitoring tool written in Python. We can say all in one place, like maximum of information in a minimum of space. It uses psutil library to get information from your system.
-
-Glances capable to monitor CPU, Memory, Load, Process list, Network interface, Disk I/O, Raid, Sensors, Filesystem (and folders), Docker, Monitor, Alert, System info, Uptime, Quicklook (CPU, MEM, LOAD), etc,.
- -**Suggested Read :** [Glances (All in one Place)– An Advanced Real Time System Performance Monitoring Tool for Linux][3] -``` -glances - -ubuntu (Ubuntu 17.10 64bit / Linux 4.13.0-37-generic) - IP 192.168.1.6/24 Uptime: 21 days, 05:55:15 - -CPU [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 90.6%] CPU - 90.6% nice: 0.0% ctx_sw: 4K MEM \ 78.4% active: 942M SWAP - 5.9% LOAD 2-core -MEM [||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 78.0%] user: 55.1% irq: 0.0% inter: 1797 total: 1.95G inactive: 562M total: 12.4G 1 min: 4.35 -SWAP [|||| 5.9%] system: 32.4% iowait: 1.8% sw_int: 897 used: 1.53G buffers: 14.8M used: 749M 5 min: 4.38 - idle: 7.6% steal: 0.0% free: 431M cached: 273M free: 11.7G 15 min: 3.38 - -NETWORK Rx/s Tx/s TASKS 211 (735 thr), 4 run, 207 slp, 0 oth sorted automatically by memory_percent, flat view -docker0 0b 232b -enp0s3 12Kb 4Kb Systemd 7 Services loaded: 197 active: 196 failed: 1 -lo 616b 616b -_h478e48e 0b 232b CPU% MEM% VIRT RES PID USER NI S TIME+ R/s W/s Command - 63.8 18.9 2.33G 377M 2536 daygeek 0 R 5:57.78 0 0 /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 -DefaultGateway 83ms 78.5 10.9 3.46G 217M 2039 daygeek 0 S 21:07.46 0 0 /usr/bin/gnome-shell - 8.5 10.1 2.32G 201M 2464 daygeek 0 S 8:45.69 0 0 /usr/lib/firefox/firefox -new-window -DISK I/O R/s W/s 1.1 8.5 2.19G 170M 2653 daygeek 0 S 2:56.29 0 0 /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 -dm-0 0 0 1.7 7.2 2.15G 143M 2880 daygeek 0 S 7:10.46 0 0 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 -sda1 9.46M 12K 0.0 4.9 1.78G 97.2M 6125 daygeek 0 S 1:36.57 0 0 /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 - -``` - -### Method-10 : Using stat Command - -stat command displays the detailed status of a particular file or a file system. -``` -# stat /var/log/dmesg | grep Modify - -Modify: 2018-04-12 02:48:04.027999943 -0400 - -``` - -### Method-11 : Using procinfo Command - -procinfo gathers some system data from the /proc directory and prints it nicely formatted on the standard output device. 
-``` -# procinfo | grep Bootup - -Bootup: Fri Apr 20 19:40:14 2018 Load average: 0.16 0.05 0.06 1/138 16615 - -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/11-methods-to-find-check-system-server-uptime-in-linux/ - -作者:[Magesh Maruthamuthu][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/magesh/ -[1]:https://www.2daygeek.com/top-command-examples-to-monitor-server-performance/ -[2]:https://www.2daygeek.com/htop-command-examples-to-monitor-system-resources/ -[3]:https://www.2daygeek.com/install-glances-advanced-real-time-linux-system-performance-monitoring-tool-on-centos-fedora-ubuntu-debian-opensuse-arch-linux/ diff --git a/sources/tech/20180514 An introduction to the Pyramid web framework for Python.md b/sources/tech/20180514 An introduction to the Pyramid web framework for Python.md new file mode 100644 index 0000000000..03a6fa6494 --- /dev/null +++ b/sources/tech/20180514 An introduction to the Pyramid web framework for Python.md @@ -0,0 +1,617 @@ +[#]: collector: (lujun9972) +[#]: translator: (Flowsnow) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (An introduction to the Pyramid web framework for Python) +[#]: via: (https://opensource.com/article/18/5/pyramid-framework) +[#]: author: (Nicholas Hunt-Walker https://opensource.com/users/nhuntwalker) +[#]: url: ( ) + +An introduction to the Pyramid web framework for Python +====== +In the second part in a series comparing Python frameworks, learn about Pyramid. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/pyramid.png?itok=hX73LWtl) + +In the [first article][1] in this four-part series comparing different Python web frameworks, I explained how to create a To-Do List web application in the [Flask][2] web framework. In this second article, I'll do the same task with the [Pyramid][3] web framework. Future articles will look at [Tornado][4] and [Django][5]; as I go along, I'll explore more of the differences among them. + +### Installing, starting up, and doing configuration + +Self-described as "the start small, finish big, stay finished framework," Pyramid is much like Flask in that it takes very little effort to get it up and running. In fact, you'll recognize many of the same patterns as you build out this application. The major difference between the two, however, is that Pyramid comes with several useful utilities, which I'll describe shortly. + +To get started, create a virtual environment and install the package. + +``` +$ mkdir pyramid_todo +$ cd pyramid_todo +$ pipenv install --python 3.6 +$ pipenv shell +(pyramid-someHash) $ pipenv install pyramid +``` + +As with Flask, it's smart to create a `setup.py` file to make the app you build an easily installable Python distribution. 
+ +``` +# setup.py +from setuptools import setup, find_packages + +requires = [ +    'pyramid', +    'paster_pastedeploy', +    'pyramid-ipython', +    'waitress' +] + +setup( +    name='pyramid_todo', +    version='0.0', +    description='A To-Do List build with Pyramid', +    author='', +    author_email='', +    keywords='web pyramid pylons', +    packages=find_packages(), +    include_package_data=True, +    install_requires=requires, +    entry_points={ +        'paste.app_factory': [ +            'main = todo:main', +        ] +    } +) +``` + +`entry_points` section near the end sets up entry points into the application that other services can use. This allows the `plaster_pastedeploy` package to access what will be the `main` function in the application for building an application object and serving it. (I'll circle back to this in a bit.) + +Thesection near the end sets up entry points into the application that other services can use. This allows thepackage to access what will be thefunction in the application for building an application object and serving it. (I'll circle back to this in a bit.) + +When you installed `pyramid`, you also gained a few Pyramid-specific shell commands; the main ones to pay attention to are `pserve` and `pshell`. `pserve` will take an INI-style configuration file specified as an argument and serve the application locally. `pshell` will also take a configuration file as an argument, but instead of serving the application, it'll open up a Python shell that is aware of the application and its internal configuration. + +The configuration file is pretty important, so it's worth a closer look. Pyramid can take its configuration from environment variables or a configuration file. To avoid too much confusion around what is where, in this tutorial you'll write most of your configuration in the configuration file, with only a select few, sensitive configuration parameters set in the virtual environment. + +Create a file called `config.ini` + +``` +[app:main] +use = egg:todo +pyramid.default_locale_name = en + +[server:main] +use = egg:waitress#main +listen = localhost:6543 +``` + +This says a couple of things: + + * The actual application will come from the `main` function located in the `todo` package installed in the environment + * To serve this app, use the `waitress` package installed in the environment and serve on localhost port 6543 + + + +When serving an application and working in development, it helps to set up logging so you can see what's going on. The following configuration will handle logging for the application: + +``` +# continuing on... +[loggers] +keys = root, todo + +[handlers] +keys = console + +[formatters] +keys = generic + +[logger_root] +level = INFO +handlers = console + +[logger_todo] +level = DEBUG +handlers = +qualname = todo + +[handler_console] +class = StreamHandler +args = (sys.stderr,) +level = NOTSET +formatter = generic + +[formatter_generic] +format = %(asctime)s %(levelname)-5.5s [%(name)s:%(lineno)s][%(threadName)s] %(message)s +``` + +In short, this configuration asks to log everything to do with the application to the console. If you want less output, set the logging level to `WARN` so a message will fire only if there's a problem. + +Because Pyramid is meant for an application that grows, plan out a file structure that could support that growth. Web applications can, of course, be built however you want. 
Because Pyramid is meant for an application that grows, plan out a file structure that could support that growth. Web applications can, of course, be built however you want. In general, the conceptual blocks you'll want to cover will contain:

  * **Models** for containing the code and logic for dealing with data representations
  * **Views** for code and logic pertaining to the request-response cycle
  * **Routes** for the paths for access to the functionality of your application
  * **Scripts** for any code that might be used in configuration or management of the application itself

Given the above, the file structure can look like so:

```
setup.py
config.ini
todo/
    __init__.py
    models.py
    routes.py
    views.py
    scripts/
```

Much like Flask's `app` object, Pyramid has its own central configuration. It comes from its `config` module and is known as the `Configurator` object. This object will handle everything from route configuration to pointing to where models and views exist. All this is done in an inner directory called `todo` within an `__init__.py` file.

```
# todo/__init__.py

from pyramid.config import Configurator

def main(global_config, **settings):
    """Returns a Pyramid WSGI application."""
    config = Configurator(settings=settings)
    config.scan()
    return config.make_wsgi_app()
```

The `main` function looks for some global configuration from your environment as well as any settings that came through the particular configuration file you provide when you run the application. It takes those settings and uses them to build an instance of the `Configurator` object, which (for all intents and purposes) is the factory for your application. Finally, `config.scan()` looks for any views you'd like to attach to your application that are marked as Pyramid views.

Wow, that was a lot to configure.

### Using routes and views

Now that a chunk of the configuration is done, you can start adding functionality to the application. Functionality comes in the form of URL routes that external clients can hit, which then map to functions that Python can run.

With Pyramid, all functionality must be added to the `Configurator` in some way, shape, or form. For example, say you want to build the same simple `hello_world` view that you built with Flask, mapping to the route of `/`. With Pyramid, you can register the `/` route with the `Configurator` using the `.add_route()` method. This method takes as arguments the name of the route that you want to add as well as the actual pattern that must be matched to access that route. For this case, add the following to your `Configurator`:

```
config.add_route('hello', '/')
```

Until you create a view and attach it to that route, that path into your application sits open and alone. When you add the view, make sure to include the `request` object in the parameter list. Every Pyramid view must have the `request` object as its first parameter, as that's what's being passed as the first argument to the view when it's called by Pyramid.

One similarity that Pyramid views share with Flask is that you can mark a function as a view with a decorator. Specifically, the `@view_config` decorator from `pyramid.view`.

In `views.py`, build the view that you want to see in the world.

```
from pyramid.view import view_config

@view_config(route_name="hello", renderer="string")
def hello_world(request):
    """Print 'Hello, world!' as the response body."""
    return 'Hello, world!'
```

With the `@view_config` decorator, you have to at least specify the name of the route that will map to this particular view. You can stack `view_config` decorators on top of one another to map to multiple routes if you want, but you have to have at least one to connect the view at all, and each one must include the name of a route.

The other argument, `renderer`, is optional but not really. If you don't specify a renderer, you have to deliberately construct the HTTP response you want to send back to the client using the `Response` object from `pyramid.response`. By specifying the `renderer` as a string, Pyramid knows to take whatever is returned by this function and wrap it in that same `Response` object with the MIME type of `text/plain`. By default, Pyramid allows you to use `string` and `json` as renderers. If you've attached a templating engine to your application because you want to have Pyramid generate your HTML as well, you can point directly to your HTML template as your renderer.
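As a quick illustration of the `json` renderer, here's a hedged sketch of a view for the `info` route that gets registered later in this article; the view name and the payload are placeholders of my own:

```
# a sketch, not part of the article's final views.py
from pyramid.view import view_config

@view_config(route_name="info", request_method="GET", renderer="json")
def api_info(request):
    """Return a small JSON payload; the renderer handles serialization."""
    return {'name': 'To-Do API', 'version': 1}
```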
The first view is done. Here's what `__init__.py` looks like now with the attached route.

```
# in __init__.py
from pyramid.config import Configurator

def main(global_config, **settings):
    """Returns a Pyramid WSGI application."""
    config = Configurator(settings=settings)
    config.add_route('hello', '/')
    config.scan()
    return config.make_wsgi_app()
```

Spectacular! Getting here was no easy feat, but now that you're set up, you can add functionality with significantly less difficulty.

### Smoothing a rough edge

Right now the application only has one route, but it's easy to see that a large application can have many dozens or even hundreds of routes. Containing them all in the same `main` function with your central configuration isn't really the best idea, because it would become cluttered. Thankfully, it's fairly easy to include routes with a few tweaks to the application.

**One**: In the `routes.py` file, create a function called `includeme` (yes, it must actually be named this) that takes a configurator object as an argument.

```
# in routes.py
def includeme(config):
    """Include these routes within the application."""
```

**Two**: Move the `config.add_route` method call from `__init__.py` into the `includeme` function:

```
def includeme(config):
    """Include these routes within the application."""
    config.add_route('hello', '/')
```

**Three**: Alert the Configurator that you need to include this `routes.py` file as part of its configuration. Because it's in the same directory as `__init__.py`, you can get away with specifying the import path to this file as `.routes`.

```
# in __init__.py
from pyramid.config import Configurator

def main(global_config, **settings):
    """Returns a Pyramid WSGI application."""
    config = Configurator(settings=settings)
    config.include('.routes')
    config.scan()
    return config.make_wsgi_app()
```

### Connecting the database

As with Flask, you'll want to persist data by connecting a database. Pyramid will leverage [SQLAlchemy][6] directly instead of using a specially tailored package.

First get the easy part out of the way. `psycopg2` and `sqlalchemy` are required to talk to the Postgres database and manage the models, so add them to `setup.py`.

```
# in setup.py
requires = [
    'pyramid',
    'pyramid-ipython',
    'waitress',
    'sqlalchemy',
    'psycopg2'
]
# blah blah other code
```

Now, you have a decision to make about how you'll include the database's URL.
There's no wrong answer here; what you do will depend on the application you're building and how public your codebase needs to be.

The first option will keep as much configuration in one place as possible by hard-coding the database URL into the `config.ini` file. One drawback is this creates a security risk for applications with a public codebase. Anyone who can view the codebase will be able to see the full database URL, including username, password, database name, and port. Another is maintainability; if you needed to change environments or the application's database location, you'd have to modify the `config.ini` file directly. Either that or you'll have to maintain one configuration file for each new environment, which adds the potential for discontinuity and errors in the application. **If you choose this option**, modify the `config.ini` file under the `[app:main]` heading to include this key-value pair:

```
sqlalchemy.url = postgres://localhost:5432/pyramid_todo
```

The second option specifies the location of the database URL when you create the `Configurator`, pointing to an environment variable whose value can be set depending on the environment where you're working. One drawback is that you're further splintering the configuration, with some in the `config.ini` file and some directly in the Python codebase. Another drawback is that when you need to use the database URL anywhere else in the application (e.g., in a database management script), you have to code in a second reference to that same environment variable (or set up the variable in one place and import from that location). **If you choose this option**, add the following:

```
# in __init__.py
import os
from pyramid.config import Configurator

SQLALCHEMY_URL = os.environ.get('DATABASE_URL', '')

def main(global_config, **settings):
    """Returns a Pyramid WSGI application."""
    settings['sqlalchemy.url'] = SQLALCHEMY_URL # <-- important!
    config = Configurator(settings=settings)
    config.include('.routes')
    config.scan()
    return config.make_wsgi_app()
```

### Defining objects

OK, so now you have a database. Now you need `Task` and `User` objects.

Because it uses SQLAlchemy directly, Pyramid differs somewhat from Flask on how objects are built. First, every object you want to construct must inherit from SQLAlchemy's [declarative base class][7]. It'll keep track of everything that inherits from it, enabling simpler management of the database.

```
# in models.py
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Task(Base):
    pass

class User(Base):
    pass
```

The columns, data types for those columns, and model relationships will be declared in much the same way as with Flask, although they'll be imported directly from SQLAlchemy instead of some pre-constructed `db` object. Everything else is the same.

```
# in models.py
from datetime import datetime
import secrets

from sqlalchemy import (
    Column, Unicode, Integer, DateTime, Boolean, ForeignKey
)
from sqlalchemy.orm import relationship
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Task(Base):
    """Tasks for the To Do list."""
    __tablename__ = 'task'  # declarative models need a table name
    id = Column(Integer, primary_key=True)
    name = Column(Unicode, nullable=False)
    note = Column(Unicode)
    creation_date = Column(DateTime, nullable=False)
    due_date = Column(DateTime)
    completed = Column(Boolean, default=False)
    user_id = Column(Integer, ForeignKey('user.id'), nullable=False)
    user = relationship("User", back_populates="tasks")

    def __init__(self, *args, **kwargs):
        """On construction, set date of creation."""
        super().__init__(*args, **kwargs)
        self.creation_date = datetime.now()

class User(Base):
    """The User object that owns tasks."""
    __tablename__ = 'user'  # matches the ForeignKey('user.id') above
    id = Column(Integer, primary_key=True)
    username = Column(Unicode, nullable=False)
    email = Column(Unicode, nullable=False)
    password = Column(Unicode, nullable=False)
    date_joined = Column(DateTime, nullable=False)
    token = Column(Unicode, nullable=False)
    tasks = relationship("Task", back_populates="user")

    def __init__(self, *args, **kwargs):
        """On construction, set date of creation."""
        super().__init__(*args, **kwargs)
        self.date_joined = datetime.now()
        self.token = secrets.token_urlsafe(64)
```

Note that there's no `config.include` line for `models.py` anywhere because it's not needed. A `config.include` line is needed only if some part of the application's configuration needs to be changed. This has only created two objects, inheriting from some `Base` class that SQLAlchemy gave us.
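To see what those custom `__init__` methods do, here's an illustrative sketch of constructing both objects by hand (for instance, inside `pshell`); all the values are placeholders:

```
# illustrative only; try it in `pshell config.ini` or a scratch script
from todo.models import Task, User

user = User(username='bobdobson', email='bob@example.com', password='not-a-real-hash')
print(user.date_joined, user.token)   # both were filled in automatically on construction

task = Task(name='Write the views', note='Start with create_task', user_id=1)
print(task.creation_date)             # set the moment the object was built
```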
+ +``` +# in models.py +from datetime import datetime +import secrets + +from sqlalchemy import ( +    Column, Unicode, Integer, DateTime, Boolean, relationship +) +from sqlalchemy.ext.declarative import declarative_base + +Base = declarative_base() + +class Task(Base): +    """Tasks for the To Do list.""" +    id = Column(Integer, primary_key=True) +    name = Column(Unicode, nullable=False) +    note = Column(Unicode) +    creation_date = Column(DateTime, nullable=False) +    due_date = Column(DateTime) +    completed = Column(Boolean, default=False) +    user_id = Column(Integer, ForeignKey('user.id'), nullable=False) +    user = relationship("user", back_populates="tasks") + +    def __init__(self, *args, **kwargs): +        """On construction, set date of creation.""" +        super().__init__(*args, **kwargs) +        self.creation_date = datetime.now() + +class User(Base): +    """The User object that owns tasks.""" +    id = Column(Integer, primary_key=True) +    username = Column(Unicode, nullable=False) +    email = Column(Unicode, nullable=False) +    password = Column(Unicode, nullable=False) +    date_joined = Column(DateTime, nullable=False) +    token = Column(Unicode, nullable=False) +    tasks = relationship("Task", back_populates="user") + +    def __init__(self, *args, **kwargs): +        """On construction, set date of creation.""" +        super().__init__(*args, **kwargs) +        self.date_joined = datetime.now() +        self.token = secrets.token_urlsafe(64) +``` + +Note that there's no `config.include` line for `models.py` anywhere because it's not needed. A `config.include` line is needed only if some part of the application's configuration needs to be changed. This has only created two objects, inheriting from some `Base` class that SQLAlchemy gave us. + +### Initializing the database + +Now that the models are done, you can write a script to talk to and initialize the database. In the `scripts` directory, create two files: `__init__.py` and `initializedb.py`. The first is simply to turn the `scripts` directory into a Python package. The second is the script needed for database management. + +`initializedb.py` needs a function to set up the necessary tables in the database. Like with Flask, this script must be aware of the `Base` object, whose metadata keeps track of every class that inherits from it. The database URL is required to point to and modify its tables. + +As such, this database initialization script will work: + +``` +# initializedb.py +from sqlalchemy import engine_from_config +from todo import SQLALCHEMY_URL +from todo.models import Base + +def main(): +    settings = {'sqlalchemy.url': SQLALCHEMY_URL} +    engine = engine_from_config(settings, prefix='sqlalchemy.') +    if bool(os.environ.get('DEBUG', '')): +        Base.metadata.drop_all(engine) +    Base.metadata.create_all(engine) +``` + +**Important note:** This will work only if you include the database URL as an environment variable in `todo/__init__.py` (the second option above). If the database URL was stored in the configuration file, you'll have to include a few lines to read that file. 
It will look something like this:

```
# alternate initializedb.py
import os
import sys

from pyramid.paster import get_appsettings
from pyramid.scripts.common import parse_vars
from sqlalchemy import engine_from_config
from todo.models import Base

def main():
    config_uri = sys.argv[1]
    options = parse_vars(sys.argv[2:])
    settings = get_appsettings(config_uri, options=options)
    engine = engine_from_config(settings, prefix='sqlalchemy.')
    if bool(os.environ.get('DEBUG', '')):
        Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)
```

Either way, in `setup.py`, add a console script that will access and run this function.

```
# bottom of setup.py
setup(
    # ... other stuff
    entry_points={
        'paste.app_factory': [
            'main = todo:main',
        ],
        'console_scripts': [
            'initdb = todo.scripts.initializedb:main',
        ],
    }
)
```

When this package is installed, you'll have access to a new console script called `initdb`, which will construct the tables in your database. If the database URL is stored in the configuration file, you'll have to include the path to that file when you invoke the command. It'll look like `$ initdb /path/to/config.ini`.
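A quick sketch of running it, assuming the environment-variable option from earlier (the URL itself is a placeholder for your own database):

```
$ pip install -e .       # reinstall so the new initdb console script is exposed
$ export DATABASE_URL=postgres://localhost:5432/pyramid_todo
$ initdb                 # or `initdb config.ini` with the configuration-file variant
```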
+ +``` +# __init__.py +import os +from pyramid.config import Configurator +from sqlalchemy import engine_from_config +from sqlalchemy.orm import sessionmaker +import zope.sqlalchemy + +SQLALCHEMY_URL = os.environ.get('DATABASE_URL', '') + +def get_session_factory(engine): +    """Return a generator of database session objects.""" +    factory = sessionmaker() +    factory.configure(bind=engine) +    return factory + +def get_tm_session(session_factory, transaction_manager): +    """Build a session and register it as a transaction-managed session.""" +    dbsession = session_factory() +    zope.sqlalchemy.register(dbsession, transaction_manager=transaction_manager) +    return dbsession + +def main(global_config, **settings): +    """Returns a Pyramid WSGI application.""" +    settings['sqlalchemy.url'] = SQLALCHEMY_URL +    settings['tm.manager_hook'] = 'pyramid_tm.explicit_manager' +    config = Configurator(settings=settings) +    config.include('.routes') +    config.include('pyramid_tm') +    session_factory = get_session_factory(engine_from_config(settings, prefix='sqlalchemy.')) +    config.registry['dbsession_factory'] = session_factory +    config.add_request_method( +        lambda request: get_tm_session(session_factory, request.tm), +        'dbsession', +        reify=True +    ) + +    config.scan() +    return config.make_wsgi_app() +``` + +That looks like a lot, but it only did was what was explained above, plus it added an attribute to the `request` object called `request.dbsession`. + +A few new packages were included here, so update `setup.py` with those packages. + +``` +# in setup.py +requires = [ +    'pyramid', +    'pyramid-ipython', +    'waitress', +    'sqlalchemy', +    'psycopg2', +    'pyramid_tm', +    'transaction', +    'zope.sqlalchemy' +] +# blah blah other stuff +``` + +### Revisiting routes and views + +You need to make some real views that handle the data within the database and the routes that map to them. + +Start with the routes. You created the `routes.py` file to handle your routes but didn't do much beyond the basic `/` route. Let's fix that. + +``` +# routes.py +def includeme(config): +    config.add_route('info', '/api/v1/') +    config.add_route('register', '/api/v1/accounts') +    config.add_route('profile_detail', '/api/v1/accounts/{username}') +    config.add_route('login', '/api/v1/accounts/login') +    config.add_route('logout', '/api/v1/accounts/logout') +    config.add_route('tasks', '/api/v1/accounts/{username}/tasks') +    config.add_route('task_detail', '/api/v1/accounts/{username}/tasks/{id}') +``` + +Now, it not only has static URLs like `/api/v1/accounts`, but it can handle some variable URLs like `/api/v1/accounts/{username}/tasks/{id}` where any variable in a URL will be surrounded by curly braces. + +To create the view to create an individual task in your application (like in the Flash example), you can use the `@view_config` decorator to ensure that it only takes incoming `POST` requests and check out how Pyramid handles data from the client. + +Take a look at the code, then check out how it differs from Flask's version. 
+
+```
+# in views.py
+from datetime import datetime
+from pyramid.view import view_config
+from todo.models import Task, User
+
+INCOMING_DATE_FMT = '%d/%m/%Y %H:%M:%S'
+
+@view_config(route_name="tasks", request_method="POST", renderer='json')
+def create_task(request):
+    """Create a task for one user."""
+    response = request.response
+    response.headers.extend({'Content-Type': 'application/json'})
+    user = request.dbsession.query(User).filter_by(username=request.matchdict['username']).first()
+    if user:
+        due_date = request.json['due_date']
+        task = Task(
+            name=request.json['name'],
+            note=request.json['note'],
+            due_date=datetime.strptime(due_date, INCOMING_DATE_FMT) if due_date else None,
+            completed=bool(request.json['completed']),
+            user_id=user.id
+        )
+        request.dbsession.add(task)
+        response.status_code = 201
+        return {'msg': 'posted'}
+```
+
+To start, note that on the `@view_config` decorator, the only type of request this view will handle is a "POST" request. If you want to specify one type of request or one set of requests, provide either the string noting the request or a tuple/list of such strings.
+
+```
+response = request.response
+response.headers.extend({'Content-Type': 'application/json'})
+# ...other code...
+response.status_code = 201
+```
+
+The HTTP response sent to the client is generated based on `request.response`. Normally, you wouldn't have to worry about that object. It would just produce a properly formatted HTTP response and you'd never know the difference. However, because you want to do something specific, like modify the response's status code and headers, you need to access that response and its methods/attributes.
+
+Unlike with Flask, you don't need to modify the view function parameter list just because you have variables in the route URL. Instead, any time a variable exists in the route URL, it is collected in the `matchdict` attribute of the `request`. It will exist there as a key-value pair, where the key will be the variable (e.g., "username") and the value will be whatever value was specified in the route (e.g., "bobdobson"). Regardless of what value is passed in through the route URL, it'll always show up as a string in the `matchdict`. So, when you want to pull the username from the incoming request URL, access it with `request.matchdict['username']`.
+
+```
+user = request.dbsession.query(User).filter_by(username=request.matchdict['username']).first()
+```
+
+Querying for objects when using `sqlalchemy` directly differs significantly from what the `flask-sqlalchemy` package allows. Recall that when you used `flask-sqlalchemy` to build your models, the models inherited from the `db.Model` object. That `db` object already contained a connection to the database, so that connection could perform a straightforward operation like `User.query.all()`.
+
+That simple interface isn't present here, as the models in the Pyramid app inherit from `Base`, which is generated from `declarative_base()`, coming directly from the `sqlalchemy` package. It has no direct awareness of the database it'll be accessing. That awareness was attached to the `request` object via the app's central configuration as the `dbsession` attribute.
+Here's the code from above that did that:
+
+```
+config.add_request_method(
+    lambda request: get_tm_session(session_factory, request.tm),
+    'dbsession',
+    reify=True
+)
+```
+
+With all that said, whenever you want to query OR modify the database, you must work through `request.dbsession`. In this case, you want to query your "users" table for a specific user by using their username as their identifier. As such, the `User` object is provided as an argument to the `.query` method, then the normal SQLAlchemy operations are done from there.
+
+An interesting thing about this way of querying the database is that you can query for more than just one object or list of one type of object. You can query for:
+
+  * Object attributes on their own, e.g., `request.dbsession.query(User.username)` would query for usernames
+  * Tuples of object attributes, e.g., `request.dbsession.query(User.username, User.date_joined)`
+  * Tuples of multiple objects, e.g., `request.dbsession.query(User, Task)`
+
+The data sent along with the incoming request will be found within the `request.json` dictionary.
+
+The last major difference is, because of all the machinations necessary to attach the committing of a session's activity to Pyramid's request-response cycle, you don't have to call `request.dbsession.commit()` at the end of your view. It's convenient, but there is one thing to be aware of moving forward. If, instead of adding a new object to the database, you wanted to edit a pre-existing object, you couldn't use `request.dbsession.commit()`. Pyramid will throw an error, saying something along the lines of "commit behavior is being handled by the transaction manager, so you can't call it on your own." And if you don't do something that resembles committing your changes, your changes won't stick.
+
+The solution here is to use `request.dbsession.flush()`. The job of `.flush()` is to signal to the database that some changes have been made and need to be included with the next commit.
+
+### Planning for the future
+
+At this point, you've set up most of the important parts of Pyramid, analogous to what you constructed with Flask in part one. There's much more that goes into an application, but much of the meat is handled here. Other view functions will follow similar formatting, and of course, there's always the question of security (which Pyramid has built in!).
+
+One of the major differences I see in the setup of a Pyramid application is that it has a much more intense configuration step than there is with Flask. I broke down those configuration steps to explain more about what's going on when a Pyramid application is constructed. However, it'd be disingenuous to act like I've known all of this since I started programming. My first experience with the Pyramid framework was with Pyramid 1.7 and its scaffolding system of `pcreate`, which builds out most of the necessary configuration, so all you need to do is think about the functionality you want to build.
+
+As of Pyramid 1.8, `pcreate` has been deprecated in favor of [cookiecutter][9], which effectively does the same thing. The difference is that it's maintained by someone else, and there are cookiecutter templates for more than just Pyramid projects. Now that we've gone through the components of a Pyramid project, I'd never endorse building a Pyramid project from scratch again when a cookiecutter template is available. Why do the hard work if you don't have to?
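+
+For the curious, here's roughly what that looks like in practice. This is a minimal sketch, assuming you have `pip` available and that the template repository named below (the same one linked at the end of this article) is still current:
+
+```
+# install cookiecutter, then scaffold a new Pyramid project from a template
+pip install cookiecutter
+cookiecutter gh:Pylons/pyramid-cookiecutter-alchemy
+# answer the interactive prompts (project name, etc.) and cookiecutter
+# generates the kind of configuration walked through above
+```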
+In fact, the [pyramid-cookiecutter-alchemy][10] template would accomplish much of what I've written here (and a little bit more). It's actually similar to the `pcreate` scaffold I used when I first learned Pyramid.
+
+Learn more Python at [PyCon Cleveland 2018][11].
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/5/pyramid-framework
+
+作者:[Nicholas Hunt-Walker][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/nhuntwalker
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/18/4/flask
[2]: http://flask.pocoo.org/
[3]: https://trypyramid.com/
[4]: http://www.tornadoweb.org/en/stable/
[5]: https://www.djangoproject.com/
[6]: https://www.sqlalchemy.org/
[7]: http://docs.sqlalchemy.org/en/latest/orm/extensions/declarative/api.html#api-reference
[8]: http://zodb.readthedocs.io/en/latest/transactions.html
[9]: https://cookiecutter.readthedocs.io/en/latest/
[10]: https://github.com/Pylons/pyramid-cookiecutter-alchemy
[11]: https://us.pycon.org/2018/
diff --git a/sources/tech/20180518 How to Manage Fonts in Linux.md b/sources/tech/20180518 How to Manage Fonts in Linux.md
deleted file mode 100644
index 0faca7fa17..0000000000
--- a/sources/tech/20180518 How to Manage Fonts in Linux.md
+++ /dev/null
@@ -1,145 +0,0 @@
-How to Manage Fonts in Linux
-======
-
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_main.jpg?itok=qcJks7-c)
-
-Not only do I write technical documentation, I write novels. And because I'm comfortable with tools like GIMP, I also create my own book covers (and do graphic design for a few clients). That artistic endeavor depends upon a lot of pieces falling into place, including fonts.
-
-Although font rendering has come a long way over the past few years, it continues to be an issue in Linux. If you compare the look of the same fonts on Linux vs. macOS, the difference is stark. This is especially true when you're staring at a screen all day. But even though the rendering of fonts has yet to find perfection in Linux, one thing that the open source platform does well is allow users to easily manage their fonts. From selecting, adding, scaling, and adjusting, you can work with fonts fairly easily in Linux.
-
-Here, I'll share some of the tips I've depended on over the years to help extend my "font-ability" in Linux. These tips will especially help those who undertake artistic endeavors on the open source platform. Because there are so many desktop interfaces available for Linux (each of which deals with fonts in a different way), I'll be focusing primarily on GNOME and KDE, where the desktop environment is central to the management of fonts.
-
-With that said, let's get to work.
-
-### Adding new fonts
-
-For the longest time, I have been a collector of fonts. Some might say I have a bit of an obsession. And since my early days of using Linux, I've always used the same process for adding fonts to my desktops. There are two ways to do this:
-
-  * Make the fonts available on a per-user basis.
-
-  * Make the fonts available system-wide.
-
-Because my desktops never have other users (besides myself), I only ever work with fonts on a per-user basis. However, I will show you how to do both. First, let's see how to add fonts on a per-user basis.
-The first thing you must do is find fonts. Both True Type Fonts (TTF) and Open Type Fonts (OTF) can be added. I add fonts manually. To do this, I create a new hidden directory in ~/ called ~/.fonts. This can be done with the command:
-```
-mkdir ~/.fonts
-
-```
-
-With that folder created, I then move all of my TTF and OTF files into the directory. That's it. Every font you add into that directory will now be available to your installed apps. But remember, those fonts will only be available to that one user.
-
-If you want to make that collection of fonts available to all, here's what you do:
-
-  1. Open up a terminal window.
-
-  2. Change into the directory housing all of your fonts.
-
-  3. Copy all of those fonts with the commands sudo cp *.ttf *.TTF /usr/share/fonts/truetype/ and sudo cp *.otf *.OTF /usr/share/fonts/opentype
-
-The next time a user logs in, they'll have access to all those glorious fonts.
-
-### GUI Font Managers
-
-There are a few ways to manage your fonts in Linux, via GUI. How it's done will depend on your desktop environment. Let's examine KDE first. With the KDE that ships with Kubuntu 18.04, you'll find a Font Management tool pre-installed. Open that tool and you can easily add, remove, enable, and disable fonts (as well as get information about all of the installed fonts). This tool also makes it easy for you to add and remove fonts for personal and system-wide use. Let's say you want to add a particular font for personal usage. To do this, download your font and then open up the Font Management tool. In this tool (Figure 1), click on Personal Fonts and then click the + Add button.
-
-![adding fonts][2]
-
-Figure 1: Adding personal fonts in KDE.
-
-[Used with permission][3]
-
-Navigate to the location of your fonts, select them, and click Open. Your fonts will then be added to the Personal section and are immediately available for you to use (Figure 2).
-
-![KDE Font Manager][5]
-
-Figure 2: Fonts added with the KDE Font Manager.
-
-[Used with permission][3]
-
-To do the same thing in GNOME requires the installation of an application. Open up either GNOME Software or Ubuntu Software (depending upon the distribution you're using) and search for Font Manager. Select Font Manager and then click the Install button. Once the software is installed, launch it from the desktop menu. With the tool open, let's install fonts on a per-user basis. Here's how:
-
-  1. Select User from the left pane (Figure 3).
-
-  2. Click the + button at the top of the window.
-
-  3. Navigate to and select the downloaded fonts.
-
-  4. Click Open.
-
-![Adding fonts ][7]
-
-Figure 3: Adding fonts in GNOME.
-
-[Used with permission][3]
-
-### Tweaking fonts
-
-There are three concepts you must first understand:
-
-  * **Font Hinting:** The use of mathematical instructions to adjust the display of a font outline so that it lines up with a rasterized grid.
-
-  * **Anti-aliasing:** The technique used to add greater realism to a digital image by smoothing jagged edges on curved lines and diagonals.
-
-  * **Scaling factor:** A scalable unit that allows you to multiply the point size of a font. So if your font is 12pt and you have a scaling factor of 1, the font size will be 12pt. If your scaling factor is 2, the font size will be 24pt.
-
-Let's say you've installed your fonts, but they don't look quite as good as you'd like. How do you tweak the appearance of fonts? In both the KDE and GNOME desktops, you can make a few adjustments.
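-
-If you prefer to keep these tweaks desktop-agnostic, the same settings can also live in fontconfig's per-user configuration file. The snippet below is a minimal sketch rather than a definitive recipe: the values (hintslight, rgb, and so on) are common starting points you may want to adjust to taste, and fc-cache -f simply rebuilds the font cache afterward.
-```
-mkdir -p ~/.config/fontconfig
-cat > ~/.config/fontconfig/fonts.conf << 'EOF'
-<?xml version="1.0"?>
-<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
-<fontconfig>
-  <match target="font">
-    <edit name="antialias" mode="assign"><bool>true</bool></edit>
-    <edit name="hinting" mode="assign"><bool>true</bool></edit>
-    <edit name="hintstyle" mode="assign"><const>hintslight</const></edit>
-    <edit name="rgba" mode="assign"><const>rgb</const></edit>
-  </match>
-</fontconfig>
-EOF
-fc-cache -f
-
-```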
-One thing to consider with the tweaking of fonts is that taste is very much subjective. You might find yourself having to continually tweak until you get the fonts looking exactly how you like (dictated by your needs and particular taste). Let's first look at KDE.
-
-Open up the System Settings tool and click on Fonts. In this section, you can not only change various fonts, you can also enable and configure anti-aliasing and set a font scaling factor (Figure 4).
-
-![Configuring fonts][9]
-
-Figure 4: Configuring fonts in KDE.
-
-[Used with permission][3]
-
-To configure anti-aliasing, select Enabled from the drop-down and then click Configure. In the resulting window (Figure 5), you can configure an exclude range, sub-pixel rendering type, and hinting style.
-
-Once you've made your changes, click Apply. Restart any running applications and the new settings will take effect.
-
-To do this in GNOME, you need either Font Manager or GNOME Tweaks installed. For this, GNOME Tweaks is the better tool. If you open the GNOME Dash and cannot find Tweaks installed, open GNOME Software (or Ubuntu Software), and install GNOME Tweaks. Once installed, open it and click on the Fonts section. Here you can configure hinting, anti-aliasing, and scaling factor (Figure 6).
-
-![Tweaking fonts][11]
-
-Figure 6: Tweaking fonts in GNOME.
-
-[Used with permission][3]
-
-### Make your fonts beautiful
-
-And that's the gist of making your fonts look as beautiful as possible in Linux. You may not see a macOS-like rendering of fonts, but you can certainly improve the look. Finally, the fonts you choose will have a large impact on how things look. Make sure you're installing clean, well-designed fonts; otherwise, you're fighting a losing battle.
-
-Learn more about Linux through the free ["Introduction to Linux" ][12] course from The Linux Foundation and edX.
- --------------------------------------------------------------------------------- - -via: https://www.linux.com/learn/intro-to-linux/2018/5/how-manage-fonts-linux - -作者:[Jack Wallen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/jlwallen -[2]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_1.jpg?itok=7yTTe6o3 (adding fonts) -[3]:https://www.linux.com/licenses/category/used-permission -[5]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_2.jpg?itok=_g0dyVYq (KDE Font Manager) -[7]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_3.jpg?itok=8o884QKs (Adding fonts ) -[9]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_4.jpg?itok=QJpPzFED (Configuring fonts) -[11]:https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fonts_6.jpg?itok=4cQeIW9C (Tweaking fonts) -[12]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180523 How to dual-boot Linux and Windows.md b/sources/tech/20180523 How to dual-boot Linux and Windows.md deleted file mode 100644 index 372097c866..0000000000 --- a/sources/tech/20180523 How to dual-boot Linux and Windows.md +++ /dev/null @@ -1,224 +0,0 @@ -translating by Auk7F7 -How to dual-boot Linux and Windows -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/migration_innovation_computer_software.png?itok=VCFLtd0q) - -Even though Linux is a great operating system with widespread hardware and software support, the reality is that sometimes you have to use Windows, perhaps due to key apps that won't run under Linux. Thankfully, dual-booting Windows and Linux is very straightforward—and I'll show you how to set it up, with Windows 10 and Ubuntu 18.04, in this article. - -Before you get started, make sure you've backed up your computer. Although the dual-boot setup process is not very involved, accidents can still happen. So take the time to back up your important files in case chaos theory comes into play. In addition to backing up your files, consider taking an image backup of the disk as well, though that's not required and can be a more advanced process. - -### Prerequisites - -To get started, you will need the following five items: - -#### 1\. Two USB flash drives (or DVD-Rs) - -I recommend installing Windows and Ubuntu via flash drives since they're faster than DVDs. It probably goes without saying, but creating bootable media erases everything on the flash drive. Therefore, make sure the flash drives are empty or contain data you don't care about losing. - -If your machine doesn't support booting from USB, you can create DVD media instead. Unfortunately, because no two computers seem to have the same DVD-burning software, I can't walk you through that process. However, if your DVD-burning application has an option to burn from an ISO image, that's the option you need. - -#### 2\. A Windows 10 license - -If Windows 10 came with your PC, the license will be built into the computer, so you don't need to worry about entering it during installation. If you bought the retail edition, you should have a product key, which you will need to enter during the installation process. - -#### 3\. 
Windows 10 Media Creation Tool - -Download and launch the Windows 10 [Media Creation Tool][1]. Once you launch the tool, it will walk you through the steps required to create the Windows media on a USB or DVD-R. Note: Even if you already have Windows 10 installed, it's a good idea to create bootable media anyway, just in case something goes wrong and you need to reinstall it. - -#### 4\. Ubuntu 18.04 installation media - -Download the [Ubuntu 18.04][2] ISO image. - -#### 5\. Etcher software (for making a bootable Ubuntu USB drive) - -For creating bootable media for any Linux distribution, I recommend [Etcher][3]. Etcher works on all three major operating systems (Linux, MacOS, and Windows) and is careful not to let you overwrite your current operating system partition. - -Once you have downloaded and launched Etcher, click Select image, and point it to the Ubuntu ISO you downloaded in step 4. Next, click Select drive to choose your flash drive, and click Flash! to start the process of turning a flash drive into an Ubuntu installer. (If you're using a DVD-R, use your computer's DVD-burning software instead.) - -### Install Windows and Ubuntu - -You should be ready to begin. At this point, you should have accomplished the following: - - * Backed up your important files - * Created Windows installation media - * Created Ubuntu installation media - - - -There are two ways of going about the installation. First, if you already have Windows 10 installed, you can have the Ubuntu installer resize the partition, and the installation will proceed in the empty space. Or, if you haven't installed Windows 10, install it on a smaller partition you can set up during the installation process. (I'll describe how to do that below.) The second way is preferred and less error-prone. There's a good chance you won't have any issues either way, but installing Windows manually and giving it a smaller partition, then installing Ubuntu, is the easiest way to go. - -If you already have Windows 10 on your computer, skip the following Windows installation instructions and proceed to Installing Ubuntu. - -#### Installing Windows - -Insert the Windows installation media you created into your computer and boot from it. How you do this depends on your computer, but most have a key you can press to initiate the boot menu. On a Dell PC for example, that key is F12. If the flash drive doesn't show up as an option, you may need to restart the computer. Sometimes it will show up only if you've inserted the media before turning on the computer. If you see a message like, "press any key to boot from the installation media," press a key. You should see the following screen. Select your language and keyboard style and click Next. - -![Windows setup][5] - -Click on Install now to start the Windows installer. - -On the next screen, it will ask for your product key. If you don't have one because Windows 10 came with your PC, select "I don't have a product key." It should automatically activate after the installation once it catches up with updates. If you do have a product key, type that in and click Next. - - -![Enter product key][7] - - -Select which version of Windows you want to install. If you have a retail copy, the label will tell you what version you have. Otherwise, it is typically located with the documentation that came with your computer. In most cases, it's going to be either Windows 10 Home or Windows 10 Pro. 
Most PCs that come with the Home edition have a label that simply reads "Windows 10," while Pro is clearly marked. - - -![Select Windows version][10] - - -Accept the license agreement by checking the box, then click Next. - - -![Accept license terms][12] - - -After accepting the agreement, you have two installation options available. Choose the second option, Custom: Install Windows only (advanced). - - -![Select type of Windows installation][14] - - -The next screen should show your current hard disk configuration. - - -![Hard drive configuration][16] - - -Your results will probably look different than mine. I have never used this hard disk before, so it's completely unallocated. You will probably see one or more partitions for your current operating system. Highlight each partition and remove it. - -At this point, your screen will show your entire disk as unallocated. To continue, create a new partition. - - -![Create a new partition][18] - - -Here you can see that I divided the drive in half (or close enough) by creating a partition of 81,920MB (which is close to half of 160GB). Give Windows at least 40GB, preferably 64GB or more. Leave the rest of the drive unallocated, as that's where you'll install Ubuntu later. - -Your results will look similar to this: - - -![Leaving a partition with unallocated space][20] - - -Confirm the partitioning looks good to you and click Next. Windows will begin installing. - - -![Installing Windows][22] - - -If your computer successfully boots into Windows, you're all set to move on to the next step. - -![Windows desktop][24] - - -#### Installing Ubuntu - -Whether it was already there or you worked through the steps above, at this point you should have Windows installed. Now use the Ubuntu installation media you created earlier to boot into Ubuntu. Go ahead and insert the media and boot your computer from it. Again, the exact sequence of keys to access the boot menu varies from one computer to another, so check your documentation if you're not sure. If all goes well, you see the following screen once the media finishes loading: - - -![Ubuntu installation welcome screen][26] - - -Here, you can select between Try Ubuntu or Install Ubuntu. Don't install just yet; instead, click Try Ubuntu. After it finishes loading, you should see the Ubuntu desktop. - - -![Ubuntu desktop][28] - -By clicking Try Ubuntu, you have opted to try out Ubuntu before you install it. Here, in Live mode, you can play around with Ubuntu and make sure everything works before you commit to the installation. Ubuntu works with most PC hardware, but it's always better to test it out beforehand. Make sure you can access the internet and get audio and video playback. Going to YouTube and playing a video is a good way of doing all of that at once. If you need to connect to a wireless network, click on the networking icon at the top-right of the screen. There, you can find a list of wireless networks and connect to yours. - -Once you're ready to go, double-click on the Install Ubuntu 18.04 LTS icon on the desktop to launch the installer. - -Choose the language you want to use for the installation process, then click Continue. - - -![Select language in Ubuntu][30] - - -Next, choose the keyboard layout. Once you've made your selection, click Continue. - - -![Select keyboard in Ubuntu][32] - -You have a few options on the screen below. One, you can choose a Normal or a Minimal installation. For most people, the Normal installation is ideal. 
Advanced users may want to do a Minimal install instead, which has fewer software applications installed by default. In addition, you can choose to download updates and whether or not to include third-party software and drivers. I recommend checking both of those boxes. When done, click Continue.
-
-
-![Choose Ubuntu installation options][34]
-
-The next screen asks whether you want to erase the disk or set up a dual-boot. Since you're dual-booting, choose Install Ubuntu alongside Windows 10. Click Install Now.
-
-
-![install Ubuntu alongside Windows][36]
-
-
-The following screen may appear. If you installed Windows from scratch and left unallocated space on the disk, Ubuntu will automatically set itself up in the empty space, so you won't see this screen. If you already had Windows 10 installed and it's taking up the entire drive, this screen will appear and give you an option to select a disk at the top. If you have just one disk, you can choose how much space to steal from Windows and apply to Ubuntu. You can drag the vertical line in the middle left and right with your mouse to take space away from one and give it to the other. Adjust this exactly the way you want it, then click Install Now.
-
-
-![Allocate drive space][38]
-
-
-You should see a confirmation screen indicating what Ubuntu plans on doing. If everything looks right, click Continue.
-
-Ubuntu is now installing in the background. You still have some configuration to do, though. While Ubuntu tries its best to figure out your location, you can click on the map to narrow it down to ensure your time zone and other things are set correctly.
-
-Next, fill in the user account information: your name, computer name, username, and password. Click Continue when you're done.
-
-There you have it! The installation is complete. Go ahead and reboot the PC.
-
-If all went according to plan, you should see a screen similar to this when your computer restarts. Choose Ubuntu or Windows 10; the other options are for troubleshooting, so I won't go into them.
-
-Try booting into both Ubuntu and Windows to test them out and make sure everything works as expected. If it does, you now have both Windows and Ubuntu installed on your computer.
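-
-One last tip, offered with a caveat: a future Windows update can occasionally overwrite the boot menu so that the machine starts straight into Windows. If that ever happens, get back into Ubuntu (your firmware's one-time boot menu will usually still list it) and regenerate the GRUB configuration. This is the common first remedy rather than a guaranteed fix:
-
-```
-sudo update-grub
-```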
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/5/dual-boot-linux - -作者:[Jay LaCroix][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/jlacroix -[1]:https://www.microsoft.com/en-us/software-download/windows10 -[2]:https://www.ubuntu.com/download/desktop -[3]:http://www.etcher.io -[4]:/file/397066 -[5]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_01.png (Windows setup) -[6]:/file/397076 -[7]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_03.png (Enter product key) -[8]:data:image/gif;base64,R0lGODlhAQABAPABAP///wAAACH5BAEKAAAALAAAAAABAAEAAAICRAEAOw== (Click and drag to move) -[9]:/file/397081 -[10]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_04.png (Select Windows version) -[11]:/file/397086 -[12]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_05.png (Accept license terms) -[13]:/file/397091 -[14]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_06.png (Select type of Windows installation) -[15]:/file/397096 -[16]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_07.png (Hard drive configuration) -[17]:/file/397101 -[18]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_08.png (Create a new partition) -[19]:/file/397106 -[20]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_09.png (Leaving a partition with unallocated space) -[21]:/file/397111 -[22]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_10.png (Installing Windows) -[23]:/file/397116 -[24]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_11.png (Windows desktop) -[25]:/file/397121 -[26]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_12.png (Ubuntu installation welcome screen) -[27]:/file/397126 -[28]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_13.png (Ubuntu desktop) -[29]:/file/397131 -[30]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_15.png (Select language in Ubuntu) -[31]:/file/397136 -[32]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_16.png (Select keyboard in Ubuntu) -[33]:/file/397141 -[34]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_17.png (Choose Ubuntu installation options) -[35]:/file/397146 -[36]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_18.png (Install Ubuntu alongside Windows) -[37]:/file/397151 -[38]:https://opensource.com/sites/default/files/uploads/linux-dual-boot_18b.png (Allocate drive space) diff --git a/sources/tech/20180525 How to Set Different Wallpaper for Each Monitor in Linux.md b/sources/tech/20180525 How to Set Different Wallpaper for Each Monitor in Linux.md deleted file mode 100644 index 386149400c..0000000000 --- a/sources/tech/20180525 How to Set Different Wallpaper for Each Monitor in Linux.md +++ /dev/null @@ -1,89 +0,0 @@ -How to Set Different Wallpaper for Each Monitor in Linux -====== -**Brief: If you want to display different wallpapers on multiple monitors on Ubuntu 18.04 or any other Linux distribution with GNOME, MATE or Budgie desktop environment, this nifty tool will help you achieve this.** - -Multi-monitor setup often leads to multiple issues on Linux but I am not going to discuss those 
issues in this article. In fact, I have a rather positive article on multiple-monitor support on Linux.
-
-If you are using multiple monitors, perhaps you would like to set up a different wallpaper for each monitor. I am not sure about other Linux distributions and desktop environments, but Ubuntu with [GNOME desktop][1] doesn't provide this functionality on its own.
-
-Fret not! In this quick tutorial, I'll show you how to set a different wallpaper for each monitor on Linux distributions with GNOME desktop environment.
-
-### Setting up different wallpaper for each monitor on Ubuntu 18.04 and other Linux distributions
-
-![Different wallaper on each monitor in Ubuntu][2]
-
-I am going to use a nifty tool called [HydraPaper][3] for setting different backgrounds on different monitors. HydraPaper is a [GTK][4]-based application to set different backgrounds for each monitor in [GNOME desktop environment][5].
-
-It also works on the [MATE][6] and [Budgie][7] desktop environments, which means Ubuntu MATE and [Ubuntu Budgie][8] users can also benefit from this application.
-
-#### Install HydraPaper on Linux using FlatPak
-
-HydraPaper can be installed easily using [FlatPak][9]. Ubuntu 18.04 already provides support for FlatPaks, so all you need to do is download the application file and double-click on it to open it with the GNOME Software Center.
-
-You can refer to this article to learn [how to enable FlatPak support][10] on your distribution. Once you have the FlatPak support enabled, just download it from [FlatHub][11] and install it.
-
-[Download HydraPaper][12]
-
-#### Using HydraPaper for setting different background on different monitors
-
-Once installed, just look for HydraPaper in the application menu and start the application. You'll see images from your Pictures folder here because by default the application takes images from the Pictures folder of the user.
-
-You can add your own folder(s) where you keep your wallpapers. Do note that it doesn't find images recursively. If you have nested folders, it will only show images from the top folder.
-
-![Setting up different wallpaper for each monitor on Linux][13]
-
-Using HydraPaper is absolutely simple. Just select the wallpapers for each monitor and click on the apply button at the top. You can easily identify external monitor(s) termed with HDMI.
-
-![Setting up different wallpaper for each monitor on Linux][14]
-
-You can also add selected wallpapers to 'Favorites' for quick access. Doing this will move the 'favorite wallpapers' from the Wallpapers tab to the Favorites tab.
-
-![Setting up different wallpaper for each monitor on Linux][15]
-
-You don't need to start HydraPaper at each boot. Once you set a different wallpaper for each monitor, the settings are saved and you'll see your chosen wallpapers even after restart. This would be expected behavior of course, but I thought I would mention the obvious.
-
-One big downside of HydraPaper is in the way it is designed to work. You see, HydraPaper combines your selected wallpapers into one single image and stretches it across the screens, giving the impression of a different background on each display. And this becomes an issue when you remove the external display.
-
-For example, when I tried using my laptop without the external display, it showed me a background image like this.
-
-![Dual Monitor wallpaper HydraPaper][16]
-
-Quite obviously, this is not what I would expect.
-
-#### Did you like it?
-
-HydraPaper makes setting up different backgrounds on different monitors a painless task.
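-
-One more tip before you go: if you prefer the terminal to a software center, the same FlatHub package can be installed and launched from the command line. The application ID below is taken from the FlatHub link in the reference list at the end of this article, so double-check it there if the commands fail:
-```
-flatpak install flathub org.gabmus.hydrapaper
-flatpak run org.gabmus.hydrapaper
-```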
-HydraPaper supports more than two monitors and monitors with different orientations. Its simple interface with only the required features makes it an ideal application for those who always use dual monitors.
-
-How do you set different wallpapers for different monitors on Linux? Do you think HydraPaper is an application worth installing?
-
-Do share your views, and if you find this article useful, please share it on various social media channels such as Twitter and [Reddit][17].
-
---------------------------------------------------------------------------------
-
-via: https://itsfoss.com/wallpaper-multi-monitor/
-
-作者:[Abhishek Prakash][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://itsfoss.com/author/abhishek/
-[1]:https://www.gnome.org/
-[2]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/multi-monitor-wallpaper-setup-800x450.jpeg
-[3]:https://github.com/GabMus/HydraPaper
-[4]:https://www.gtk.org/
-[5]:https://itsfoss.com/gnome-tricks-ubuntu/
-[6]:https://mate-desktop.org/
-[7]:https://budgie-desktop.org/home/
-[8]:https://itsfoss.com/ubuntu-budgie-18-review/
-[9]:https://flatpak.org
-[10]:https://flatpak.org/setup/
-[11]:https://flathub.org
-[12]:https://flathub.org/apps/details/org.gabmus.hydrapaper
-[13]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/different-wallpaper-each-monitor-hydrapaper-2-800x631.jpeg
-[14]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/different-wallpaper-each-monitor-hydrapaper-1.jpeg
-[15]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/different-wallpaper-each-monitor-hydrapaper-3.jpeg
-[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/05/hydra-paper-dual-monitor-800x450.jpeg
-[17]:https://www.reddit.com/r/LinuxUsersGroup/
diff --git a/sources/tech/20180527 Streaming Australian TV Channels to a Raspberry Pi.md b/sources/tech/20180527 Streaming Australian TV Channels to a Raspberry Pi.md
new file mode 100644
index 0000000000..ac756223f1
--- /dev/null
+++ b/sources/tech/20180527 Streaming Australian TV Channels to a Raspberry Pi.md
@@ -0,0 +1,209 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Streaming Australian TV Channels to a Raspberry Pi)
+[#]: via: (https://blog.dxmtechsupport.com.au/streaming-australian-tv-channels-to-a-raspberry-pi/)
+[#]: author: (James Mawson https://blog.dxmtechsupport.com.au/author/james-mawson/)
+
+Streaming Australian TV Channels to a Raspberry Pi
+======
+
+If you're anything like me, it's been years since you've even thought about hooking an antenna to your television. With so much of the good stuff available by streaming and download, it's easy to go a very long time without even thinking about free-to-air TV.
+
+But every now and again, something comes up – perhaps the cricket, news and current affairs shows, the FIFA World Cup – where the easiest thing would be to just chuck on the telly.
+
+When I first started tinkering with the Raspberry Pi as a gaming and media centre platform, the standard advice for watching broadcast TV always seemed to involve an antenna and a USB TV tuner.
+
+Which I guess is fine if you can be arsed.
+
+But what if you utterly can't?
+
+What if you bitterly resent the idea of more clutter, more cords to add to the mess, more stuff to buy?
+What if every USB port is precious and jealously guarded for your keyboard, mouse, game controllers and removable storage? What if the wall port for your roof antenna is in a different room?
+
+That's all a bit of a hassle for a thing you might use only a few times a year.
+
+In 2018, shouldn't we just be able to stream free TV from the internet?
+
+It turns out that, yes, we can access legal and high quality TV streams from any Australian IP using [Freeview][1]. And thanks to a cool Kodi Add-on by [Matt Huisman][2], it's now really easy to access this service from a Raspberry Pi.
+
+I've tested this to work on a Model 3 B+ running Retropie 4.4 and Kodi 17.6. But it should work similarly for other models and operating systems, so long as you're using a reasonably up-to-date version of Kodi.
+
+Let's jump right in.
+
+### If You Already Have Kodi Installed
+
+If you're already using your Raspberry Pi to watch movies and TV shows, there's a good chance you've already installed Kodi.
+
+Most Raspberry Pi operating systems intended for media centre use – such as OSMC or Xbian – come with Kodi installed by default.
+
+It's fairly easy to get running on other Linux operating systems, and you might have already installed it there too.
+
+If your version of Kodi is more than a year or so old, it might be an idea to update it. The following instructions are written for the interface on Kodi 17 (Krypton).
+
+You can do that by typing the following commands at the command line:
+
+```
+sudo apt-get update
+sudo apt-get upgrade
+```
+
+And now you can skip ahead to the next section.
+
+### Installing Kodi
+
+Installing Kodi on Retropie and other versions of Raspbian is fairly simple. Other Linux operating systems should be able to run it, perhaps with a bit of coaxing.
+
+You will need to be connected to the internet to install it.
+
+If you're using something else, such as RISC OS, you probably can't install Kodi. You will need to either swap in another SD card, or use a boot loader to boot into a media centre OS for your TV viewing.
+
+#### Installing Kodi on Retropie
+
+It's really easy to install Kodi using the Retropie menu system.
+
+Here's how:
+
+  1. Navigate to the Retropie main screen – that's that horizontal menu where you can scroll left and right through all your different consoles
+  2. Select “Retropie”
+  3. Select “Retropie setup”
+  4. Select “Manage Packages”
+  5. Select “Manage Optional Packages”
+  6. Scroll down and select “Kodi”
+  7. Select “Install from Binary”
+
+This will take a minute or two to install. Once it's installed, you can exit out of the Retropie Setup screen. When you next restart Retropie, you will see Kodi under the “Ports” section of the Retropie main screen.
+
+#### Installing Kodi on Raspbian
+
+If you're running Raspbian without Retropie, you won't have that menu. But that's okay, because it's pretty easy to install Kodi from the command line.
+
+Just type:
+
+```
+sudo apt-get update
+sudo apt-get install kodi
+```
+
+At this point you have a vanilla installation of Kodi. You can run it by typing:
+
+```
+kodi
+```
+
+It's possible to delve a lot further into setting up Kodi from the command line. Check out [this guide][3] if you're interested.
+
+If not, what you've just installed will work just fine.
+
+#### Installing Kodi on Other Versions of Linux
+
+If you're using a different flavour of Linux, such as Pidora or Arch Linux ARM, then the above might or might not work – I'm not really sure, because I don't really use these operating systems.
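+
+For what it's worth, on Arch-based systems Kodi is usually a single package install away. Treat the package name as an assumption to check against your distribution's repositories, since Raspberry Pi images sometimes ship a platform-specific build (kodi-rbp, for instance):
+
+```
+sudo pacman -S kodi
+```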
+If you get stuck, it might be worth a look at the [how-to guide][4] on the Kodi wiki.
+
+#### Dual Booting a Media Centre OS
+
+If your operating system of choice isn't suitable for Kodi – or is just too confusing and difficult to figure out – it might be easiest to use a boot loader for multiple operating systems on the one SD card.
+
+You can set this up using an OS installer like [PINN][5].
+
+Using PINN, you can install a media centre OS like [OSMC][6] to use Kodi – it will be installed with the operating system – and then your preferred OS for your other uses.
+
+It's even possible to [move your existing OS over][7].
+
+### Adding Australian TV Channels to Kodi
+
+With Kodi installed and running, you've got a pretty good media player for the files on your network and hard drive.
+
+But we need to install an add-on if we want to use it to chuck on the telly. This only takes a minute or so.
+
+#### Installing Matt Huisman's Kodi Repository
+
+Ready? Let's get started.
+
+  1. Open Kodi
+  2. Click the cog icon at the top left to enter the settings
+  3. Click “System Settings”
+  4. Select “Add-ons”
+  5. Make sure that “Unknown Sources” is enabled
+  6. Right click anywhere on the screen to navigate back to the settings menu
+  7. Click “File Manager”
+  8. Click “Add Source”
+  9. Double-click “Add Source”
+  10. Select “”
+  11. Type in exactly ****
+  12. Select “OK”
+  13. Click the text input underneath the label “Enter a name for this media source.”
+  14. Type in exactly **MJH**
+  15. Click “OK”
+  16. Right click twice anywhere on the screen to navigate back to the main menu
+  17. Select “Add-ons”
+  18. Click “My Add-ons”
+  19. Click “..”
+  20. Click “Install from zip file”
+  21. Click “MJH”
+  22. Select “repository.matthuisman.zip”
+
+The repository is now installing.
+
+If you get stuck with any of this, here's a video from Matt that starts by installing the repository.
+
+
+
+#### Installing the Freeview Australia Add-On
+
+We're nearly there! Just a few more steps.
+
+  1. Right click anywhere on the screen a couple of times to navigate back to the main menu
+  2. Select “Add-ons”
+  3. Click “My add-ons”
+  4. Click “..”
+  5. Click “Install from repository”
+  6. Click “MattHuisman.nz Repository”
+  7. Click “Video add-ons”
+  8. Click “AU Freeview”
+  9. Click “Install”
+
+You now have every free-to-air TV channel under the Add-ons main menu item.
+
+### Watching TV
+
+When you want to chuck the telly on, all you need to do is click “AU Freeview” in the Add-ons main menu item. This will give you a list of channels to browse through and select.
+
+If you want, you can also add individual channels to your Favourites menu by right clicking them and selecting “Add to favourites”.
+
+By default you will be watching Melbourne television. You can change the region by right clicking on “AU Freeview” and clicking “settings”.
+
+When you first tune in, it sometimes jumps a bit for a few seconds, but after that it's pretty smooth.
+
+After spending a few minutes with this, you'll quickly realise that free-to-air TV hasn't improved in the years since you last looked at it. Unfortunately, I don't think there's a fix for that.
+
+But at least it's there now for when you want it.
+ +-------------------------------------------------------------------------------- + +via: https://blog.dxmtechsupport.com.au/streaming-australian-tv-channels-to-a-raspberry-pi/ + +作者:[James Mawson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.dxmtechsupport.com.au/author/james-mawson/ +[b]: https://github.com/lujun9972 +[1]: http://www.freeview.com.au/ +[2]: https://www.matthuisman.nz/ +[3]: https://www.raspberrypi.org/forums/viewtopic.php?t=192499 +[4]: https://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux +[5]: https://github.com/procount/pinn +[6]: https://osmc.tv/ +[7]: https://github.com/procount/pinn/wiki/How-to-Create-a-Multi-Boot-SD-card-out-of-2-existing-OSes-using-PINN diff --git a/sources/tech/20180604 4 Firefox extensions worth checking out.md b/sources/tech/20180604 4 Firefox extensions worth checking out.md deleted file mode 100644 index 2091afe6fe..0000000000 --- a/sources/tech/20180604 4 Firefox extensions worth checking out.md +++ /dev/null @@ -1,109 +0,0 @@ -4 Firefox extensions worth checking out -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/firefox_blue_lead.jpg?itok=gYaubJUv) - -I've been a Firefox user since v2.0 came out about 12 years ago. There were times when it wasn't the best web browser out there, but still, I kept going back to it for one reason: My favorite extensions wouldn't work with anything else. - -Today, I like the current state of Firefox itself for being fast, customizable, and open source, but I also appreciate extensions for manifesting ideas the original developers never thought of: What if you want to browse without a mouse? What if you don't like staring at bright light coming out of the monitor at night? What about using a dedicated media player for YouTube and other video hosting websites for better performance and extended playback controls? And what if you need a more sophisticated way to disable trackers and speed up loading pages? - -Fortunately, there's an answer for each of these questions, and I'm going to give them to you in the form of my favorite extensions—all of which are free software or open source (i.e., distributed under the [GNU GPL][1], [MPL][2], or [Apache][3] license) and make an excellent browser even better. - -Although the terms add-on and extension have slightly different meanings, I'll use them interchangeably in this article. - -### Tridactyl - -![Tridactyl screenshot][5] - -Tridactyl's new tab page, showcasing link hinting. - -[Tridactyl][6] enables you to use your keyboard for most of your browsing activities. It's inspired by the now-defunct [Vimperator][7] and [Pentadactyl][8], which were inspired by the default keybindings of [Vim][9]. Since I'm already used to Vim and other command-line applications, I find features like being able to navigate with the keys `h/j/k/l`, interact with hyperlinks with `f/F`, and create custom keybindings and commands very convenient. - -Tridactyl's optional native messenger (for now, available only for GNU/Linux and Mac OSX), which was implemented recently, offers even more cool features to boot. 
With it, for example, you can hide some elements of the GUI of Firefox (à la Vimperator and Pentadactyl), open a link or the current page in an external program (I often use [mpv][10] and [youtube-dl][11] for videos) and edit the content of text areas with your favorite text editor by pressing `Ctrl-I` (or any key combination of your choice).
-
-Having said that, keep in mind that it's a relatively young project and may still be rough around the edges. On the other hand, its development is very active, and when you look past its growing pains, it can be a pleasure to use.
-
-### Open With
-
-![Open With Screenshot][13]
-
-A context menu provided by Open With. I can open the current page with one of the external programs listed here.
-
-Speaking of interaction with external programs, sometimes it's nice to have the ability to do that with the mouse. That's where [Open With][14] comes in.
-
-Apart from the added context menu (shown in the screenshot), you can find your own defined commands by clicking on the extension's icon on the add-on bar. As its icon and the description on [its page on Mozilla Add-ons][14] suggest, it was primarily intended to work with other web browsers, but I can use it with mpv and youtube-dl with ease as well.
-
-Keyboard shortcuts are available here, too, but they're severely limited. There are no more than three different combinations that can be selected in a drop-down list in the extension's settings. In contrast, Tridactyl lets me assign commands to virtually anything that isn't blocked by Firefox. Open With is currently for the mouse, really.
-
-### Stylus
-
-![Stylus Screenshot][16]
-
-In this screenshot, I've just searched for and installed a dark theme for the site I'm currently on with Stylus. Even the popup has a custom style (called Deepdark Stylus)!
-
-[Stylus][17] is a userstyle manager, which means that by writing custom CSS rules and loading them with Stylus, you can change the appearance of any webpage. If you don't know CSS, there are a plethora of userstyles made by others on websites such as [userstyles.org][18].
-
-Now, you may be asking, "Isn't that exactly what [Stylish][19] does?" You would be correct! You see, Stylus is based on Stylish and provides additional improvements: It respects your privacy by not containing any telemetry, all development is done in the open (although Stylish is still actively developed, I haven't been able to find the source code for recent versions), and it supports [UserCSS][20], among other things.
-
-UserCSS is an interesting format, especially for developers. I've written several userstyles for various websites (mainly dark themes and tweaks for better readability), and while the internal editor of Stylus is excellent, I still prefer editing code with Neovim. For that, all I need to do is load a local file with its name ending with ".user.css" in Stylus, enable the option "Live Reload", and any changes will be applied as soon as I modify and save that file in Neovim. Remote UserCSS files are also supported, so whenever I push changes to GitHub or any git-based development platforms, they'll automatically become available for users. (I provide a link to the raw version of the file so that they can access it easily.)
-
-### uMatrix
-
-![uMatrix Screenshot][22]
-
-The user interface of uMatrix, showing the current rules for the currently visited webpage.
-
-Jeremy Garcia mentioned uBlock Origin in [his article][23] here on Opensource.com as an excellent blocker.
I'd like to draw attention to another extension made by [gorhill][24]: uMatrix. - -[uMatrix][25] allows you to set blocking rules for certain requests on a webpage, which can be toggled by clicking on the add-on's popup (seen in the screenshot above). These requests are distinguished by the categories of scripts, requests made by scripts, cookies, CSS rules, images, media content, frames, and anything else labeled as "other" by uMatrix. You can set up global rules to, for instance, allow all requests by default and add only particular ones to the blacklist (the more convenient approach), or block everything by default and whitelist certain requests manually (the safer approach). If you've been using NoScript or RequestPolicy, you can [import][26] your whitelist rules from them, too. - -In addition, uMatrix supports [hosts files][27], which can be used to block requests from certain domains. These are not to be confused with the filter lists used by uBlock Origin, which use the same syntax as the filters set by Adblock Plus. By default, uMatrix blocks domains of servers known to distribute ads, trackers, and malware with the help of a few hosts files, and you can add more external sources if you want to. - -So which one shall you choose—uBlock Origin or uMatrix? Personally, I use both on my desktop PC and only uMatrix on my Android phone. There's some overlap between the two, [according to gorhill][28], but they have a different target userbase and goals. If all you want is an easy way to block trackers and ads, uBlock Origin is a better choice. On the other hand, if you want granular control over what a webpage can or can't do inside your browser, even if it takes some time to configure and it can prevent sites from functioning as intended, uMatrix is the way to go. - -### Conclusion - -Currently, these are my favorite extensions for Firefox. Tridactyl is for speeding up browsing navigation by relying on the keyboard and interacting with external programs; Open With is there if I need to open something in another program with the mouse; Stylus is the definitive userstyle manager, appealing to both users and developers alike; and uMatrix is essentially a firewall within Firefox for filtering out requests on unknown territories. - -Even though I almost exclusively discussed the benefits of these add-ons, no software is ever perfect. If you like any of them and think they can be improved in any way, I recommend that you go to their GitHub page and look for their contribution guides. Usually, developers of free and open source software welcome bug reports and pull requests. Telling your friends about them or saying thanks are also excellent ways to help the developers, especially if they work on their projects in their spare time. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/6/firefox-open-source-extensions - -作者:[Zsolt Szakács][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/zsolt -[1]:https://www.gnu.org/licenses/gpl-3.0.en.html -[2]:https://www.mozilla.org/en-US/MPL/ -[3]:https://www.apache.org/licenses/LICENSE-2.0 -[4]:/file/398411 -[5]:https://opensource.com/sites/default/files/uploads/tridactyl.png (Tridactyl's new tab page, showcasing link hinting) -[6]:https://addons.mozilla.org/en-US/firefox/addon/tridactyl-vim/ -[7]:https://github.com/vimperator/vimperator-labs -[8]:https://addons.mozilla.org/en-US/firefox/addon/pentadactyl/ -[9]:https://www.vim.org/ -[10]:https://mpv.io/ -[11]:https://rg3.github.io/youtube-dl/index.html -[12]:/file/398416 -[13]:https://opensource.com/sites/default/files/uploads/openwith.png (A context menu provided by Open With. I can open the current page with one of the external programs listed here.) -[14]:https://addons.mozilla.org/en-US/firefox/addon/open-with/ -[15]:/file/398421 -[16]:https://opensource.com/sites/default/files/uploads/stylus.png (In this screenshot, I've just searched for and installed a dark theme for the site I'm currently on with Stylus. Even the popup has custom style (called Deepdark Stylus)!) -[17]:https://addons.mozilla.org/en-US/firefox/addon/styl-us/ -[18]:https://userstyles.org/ -[19]:https://addons.mozilla.org/en-US/firefox/addon/stylish/ -[20]:https://github.com/openstyles/stylus/wiki/Usercss -[21]:/file/398426 -[22]:https://opensource.com/sites/default/files/uploads/umatrix.png (The user interface of uMatrix, showing the current rules for the currently visited webpage.) -[23]:https://opensource.com/article/18/5/firefox-extensions -[24]:https://addons.mozilla.org/en-US/firefox/user/gorhill/ -[25]:https://addons.mozilla.org/en-US/firefox/addon/umatrix -[26]:https://github.com/gorhill/uMatrix/wiki/FAQ -[27]:https://en.wikipedia.org/wiki/Hosts_(file) -[28]:https://github.com/gorhill/uMatrix/issues/32#issuecomment-61372436 diff --git a/sources/tech/20180606 Working with modules in Fedora 28.md b/sources/tech/20180606 Working with modules in Fedora 28.md deleted file mode 100644 index 9a45d3367b..0000000000 --- a/sources/tech/20180606 Working with modules in Fedora 28.md +++ /dev/null @@ -1,139 +0,0 @@ -Working with modules in Fedora 28 -====== -![](https://fedoramagazine.org/wp-content/uploads/2018/05/modules-workingwith-816x345.jpg) -The recent Fedora Magazine article entitled [Modularity in Fedora 28 Server Edition][1] did a great job of explaining Modularity in Fedora 28. It also pointed out a few example modules and explained the problems they solve. This article puts one of those modules to practical use, covering installation and setup of Review Board 3.0 using modules. - -### Getting started - -To follow along with this article and use modules, you need a system running [Fedora 28 Server Edition][2] along with [sudo administrative privileges][3]. Also, run this command to make sure all the packages on the system are current: -``` -sudo dnf -y update - -``` - -While you can use modules on Fedora 28 non-server editions, be aware of the [caveats described in the comments of the previous article][4]. 
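-
-One practical note first: if the modular repositories turn out to be disabled on your edition, they can usually be switched on with dnf config-manager from the dnf-plugins-core package. The repository IDs below are an assumption to verify against your own dnf repolist output, which the next section covers:
-```
-sudo dnf -y install dnf-plugins-core
-sudo dnf config-manager --set-enabled fedora-modular updates-modular
-
-```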
- -### Examining modules - -First, take a look at what modules are available for Fedora 28. Run the following command: -``` -dnf module list - -``` - -The output lists a collection of modules that shows the associated stream, version, and available installation profiles for each. A [d] next to a particular module stream indicates the default stream used if the named module is installed. - -The output also shows most modules have a profile named default. That’s not a coincidence, since default is the name used for the default profile. - -To see where all those modules are coming from, run: -``` -dnf repolist - -``` - -Along with the usual [fedora and updates package repositories][5], the output shows the fedora-modular and updates-modular repositories. - -The introduction stated you’d be setting up Review Board 3.0. Perhaps a module named reviewboard caught your attention in the earlier output. Next, to get some details about that module, run this command: -``` -dnf module info reviewboard - -``` - -The description confirms it is the Review Board module, but also says it’s the 2.5 stream. However, you want 3.0. Look at the available reviewboard modules: -``` -dnf module list reviewboard - -``` - -The [d] next to the 2.5 stream means it is configured as the default stream for reviewboard. Therefore, be explicit about the stream you want: -``` -dnf module info reviewboard:3.0 - -``` - -Now for even more details about the reviewboard:3.0 module, add the verbose option: -``` -dnf module info reviewboard:3.0 -v - -``` - -### Installing the Review Board 3.0 module - -Now that you’ve tracked down the module you want, install it with this command: -``` -sudo dnf -y module install reviewboard:3.0 - -``` - -The output shows the ReviewBoard package was installed, along with several other dependent packages, including several from the django:1.6 module. The installation also enabled the reviewboard:3.0 module and the dependent django:1.6 module. - -Next, to see enabled modules, use this command: -``` -dnf module list --enabled - -``` - -The output shows [e] for enabled streams, and [i] for installed profiles. In the case of the reviewboard:3.0 module, the default profile was installed. You could have specified a different profile when installing the module. In fact, you still can — and this time you don’t need to specify the 3.0 stream since it was already enabled: -``` -sudo dnf -y module install reviewboard/server - -``` - -However, installation of the reviewboard:3.0/server profile is rather uneventful. The reviewboard:3.0 module’s server profile is the same as the default profile — so there’s nothing more to install. - -### Spin up a Review Board site - -Now that the Review Board 3.0 module and its dependent packages are installed, [create a Review Board site][6] running on the local system. 
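As one quick check first: the module install brings in the `rb-site` site-management tool that the commands below depend on, so you can confirm it is on your path (the location shown is what you would typically expect, not a guarantee):

```
$ which rb-site
/usr/bin/rb-site
```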
Without further ado or explanation, copy and paste the following commands to do that:

```
sudo rb-site install --noinput \
 --domain-name=localhost --db-type=sqlite3 \
 --db-name=/var/www/rev.local/data/reviewboard.db \
 --admin-user=rbadmin --admin-password=secret \
 /var/www/rev.local
sudo chown -R apache /var/www/rev.local/htdocs/media/uploaded \
 /var/www/rev.local/data
sudo ln -s /var/www/rev.local/conf/apache-wsgi.conf \
 /etc/httpd/conf.d/reviewboard-localhost.conf
sudo setsebool -P httpd_can_sendmail=1 httpd_can_network_connect=1 \
 httpd_can_network_memcache=1 httpd_unified=1
sudo systemctl enable --now httpd
```

Now fire up a web browser on the system, point it at `http://localhost`, and enjoy the shiny new Review Board site! To log in as the Review Board admin, use the userid and password seen in the rb-site command above.

### Module cleanup

It's good practice to clean up after yourself. To do that, remove the Review Board module and the site directory:

```
sudo dnf -y module remove reviewboard:3.0
sudo rm -rf /var/www/rev.local
```

### Closing remarks

Now that you've explored how to examine and administer the Review Board module, go experiment with the other modules available in Fedora 28.

Learn more about using modules in Fedora 28 on the [Fedora Modularity][7] web site. The dnf manual page's Module Command section also contains useful information.

--------------------------------------------------------------------------------

via: https://fedoramagazine.org/working-modules-fedora-28/

作者:[Merlin Mathesius][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://fedoramagazine.org/author/merlinm/
[1]:https://fedoramagazine.org/modularity-fedora-28-server-edition/
[2]:https://getfedora.org/server/
[3]:https://fedoramagazine.org/howto-use-sudo/
[4]:https://fedoramagazine.org/modularity-fedora-28-server-edition/#comment-476696
[5]:https://fedoraproject.org/wiki/Repositories
[6]:https://www.reviewboard.org/docs/manual/dev/admin/installation/creating-sites/
[7]:https://docs.pagure.org/modularity/

diff --git a/sources/talk/20180611 12 fiction books for Linux and open source types.md b/sources/tech/20180611 12 fiction books for Linux and open source types.md
similarity index 100%
rename from sources/talk/20180611 12 fiction books for Linux and open source types.md
rename to sources/tech/20180611 12 fiction books for Linux and open source types.md

diff --git a/sources/tech/20180625 8 reasons to use the Xfce Linux desktop environment.md b/sources/tech/20180625 8 reasons to use the Xfce Linux desktop environment.md
deleted file mode 100644
index 254f725a36..0000000000
--- a/sources/tech/20180625 8 reasons to use the Xfce Linux desktop environment.md
+++ /dev/null
@@ -1,85 +0,0 @@
8 reasons to use the Xfce Linux desktop environment
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/linux_penguin_green.png?itok=ENdVzW22)

For several reasons (including curiosity), a few weeks ago I started using [Xfce][1] as my Linux desktop. One reason was trouble with background daemons eating up all the CPU and I/O bandwidth on my very powerful main workstation. Of course, some of the instability may be due to my removal of some of the RPM packages that provide those background daemons.
However, even before I removed the RPMs, the fact is KDE was unstable and causing performance and stability issues. I needed to use a different desktop to avoid these problems. - -I realized in looking back over my series of articles on Linux desktops that I had neglected Xfce. This article is intended to rectify that oversight. I like Xfce a lot and am enjoying the speed and lightness of it more than I thought I would. - -As part of my research, I googled to try to learn what Xfce means. There is a historical reference to XForms Common Environment, but Xfce no longer uses the XForms tools. Some years ago, I found a reference to "Xtra fine computing environment," and I like that a lot. I will use that (despite not being able to find the page reference again). - -### Eight reasons for recommending Xfce - -#### 1\. Lightweight construction - -Xfce has a very small memory footprint and CPU usage compared to some other desktops, such as KDE and GNOME. On my system, the programs that make up the Xfce desktop take a tiny amount of memory for such a powerful desktop. Very low CPU usage is also a hallmark of the Xfce desktop. With such a small memory footprint, I am not especially surprised that Xfce is also very sparing of CPU cycles. - -#### 2\. Simplicity - -The Xfce desktop is simple and uncluttered with fluff. The basic desktop has two panels and a vertical line of icons on the left side. Panel 0 is at the bottom and consists of some basic application launchers, as well as the Applications icon, which provides access to all the applications on the system. Panel 1 is at the top and has an Applications launcher as well as a Workspace Switcher that allows the user to switch between multiple workspaces. The panels can be modified with additional items, such as new launchers, or by altering their height and width. - -The icons down the left side of the desktop consist of the Home directory and Trash icons. It can also display icons for the complete filesystem directory tree and any connected pluggable USB storage devices. These icons can be used to mount and unmount the device, as well as to open the default file manager. They can also be hidden if you prefer, and the Filesystem, Trash, and Home directory icons are separately controllable. The removable drives can be hidden or displayed as a group. - -#### 3\. File management - -Thunar, Xfce's default file manager, is simple, easy to use and configure, and very easy to learn. While not as fancy as file managers like Konqueror or Dolphin, it is quite capable and very fast. Thunar can't create multiple panes in its window, but it does provide tabs so multiple directories can be open at the same time. Thunar also has a very nice sidebar that, like the desktop, shows the same icons for the complete filesystem directory tree and any connected USB storage devices. Devices can be mounted and unmounted, and removable media such as CDs can be ejected. Thunar can also use helper applications such as Ark to open archive files when they are clicked. Archives, such as ZIP, TAR, and RPM files, can be viewed, and individual files can be copied out of them. - - -![Xfce desktop][3] - -The Xfce desktop with Thunar and the Xfce terminal emulator. - -Having used many different applications for my [series on file managers][4], I must say that I like Thunar for its simplicity and ease of use. It is easy to navigate the filesystem using the sidebar. - -#### 4\. Stability - -The Xfce desktop is very stable. 
New releases seem to be on a three-year cycle, although updates are provided as necessary. The current version is 4.12, which was released in February 2015. The rock-solid nature of the Xfce desktop is very reassuring after having issues with KDE. The Xfce desktop has never crashed for me, and it has never spawned daemons that gobbled up system resources. It just sits there and works—which is what I want.

#### 5\. Elegance

Xfce is simply elegant. In my new book, The Linux Philosophy for SysAdmins, which will be available this fall, I talk about the many advantages of simplicity, including the fact that simplicity is one of the hallmarks of elegance. Clearly, the programmers who write and maintain Xfce and its component applications are great fans of simplicity. This simplicity is very likely the reason that Xfce is so stable, but it also results in a clean look, a responsive interface, an easily navigable structure that feels natural, and an overall elegance that makes it a pleasure to use.

#### 6\. Terminal emulation

The Xfce4 terminal emulator is a powerful emulator that uses tabs to allow multiple terminals in a single window, like many other terminal emulators. This terminal emulator is simple compared to emulators like Tilix, Terminator, and Konsole, but it gets the job done. The tab names can be changed, and the tabs can be rearranged by drag and drop, using the arrow icons on the toolbar, or selecting the options on the menu bar. One thing I especially like about the tabs on the Xfce terminal emulator is that they display the name of the host to which they are connected, no matter how many intermediate hosts the connection passes through, e.g., `host1==>host2==>host3==>host4` properly shows `host4` in the tab. Other emulators show `host2` at best.

Other aspects of its function and appearance can be easily configured to suit your needs. Like other Xfce components, this terminal emulator uses very little in the way of system resources.

#### 7\. Configurability

Within its limits, Xfce is very configurable. While not offering as much configurability as a desktop like KDE, it is far more configurable (and more easily so) than GNOME, for example. I found that the Settings Manager is the doorway to everything needed to configure Xfce. The individual configuration apps are separately available, but the Settings Manager collects them all into one window for ease of access. All the important aspects of the desktop can be configured to meet my needs and preferences.

#### 8\. Modularity

Xfce has a number of individual projects that make up the whole, and not all parts of Xfce are necessarily installed by your distro. [Xfce's projects][5] page lists the main projects, so you can find additional parts you might want to install. The items that weren't installed on my Fedora 28 workstation when I installed the Xfce group were mostly the applications at the bottom of that page.

There is also a [documentation page][6], and a wiki called [Xfce Goodies Project][7] lists other Xfce-related projects that provide applications, artwork, and plugins for Thunar and the Xfce panels.

### Conclusions

The Xfce desktop is thin and fast with an overall elegance that makes it easy to figure out how to do things. Its lightweight construction conserves both memory and CPU cycles. This makes it ideal for older hosts with few resources to spare for a desktop. However, Xfce is flexible and powerful enough to satisfy my needs as a power user.
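Incidentally, if this list has you curious enough to try it, installing the Xfce group mentioned under Modularity above is a one-liner on Fedora. Treat this as a sketch, since group names can vary between releases; check `dnf group list` first:

```
sudo dnf group install "Xfce Desktop"
```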
I've learned that changing to a new Linux desktop can take some work to configure it as I want—with all of my favorite application launchers on the panel, my preferred wallpaper, and much more. I have changed to new desktops or updates of old ones many times over the years. It takes some time and a bit of patience.

I think of it like when I've moved cubicles or offices at work. Someone carries my stuff from the old office to the new one, and I connect my computer, unpack the boxes, and place their contents in appropriate locations in my new office. Moving into the Xfce desktop was the easiest move I have ever made.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/xfce-desktop

作者:[David Both][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/dboth
[1]:https://xfce.org/
[2]:/file/401856
[3]:https://opensource.com/sites/default/files/uploads/xfce-desktop-01.png (Xfce desktop)
[4]:https://opensource.com/sitewide-search?search_api_views_fulltext=David%20Both%20File%20managers
[5]:https://xfce.org/projects
[6]:https://docs.xfce.org/
[7]:https://goodies.xfce.org/

diff --git a/sources/tech/20180625 Checking out the notebookbar and other improvements in LibreOffice 6.0 - FOSS adventures.md b/sources/tech/20180625 Checking out the notebookbar and other improvements in LibreOffice 6.0 - FOSS adventures.md
deleted file mode 100644
index 0321080a4b..0000000000
--- a/sources/tech/20180625 Checking out the notebookbar and other improvements in LibreOffice 6.0 - FOSS adventures.md
+++ /dev/null
@@ -1,111 +0,0 @@
Checking out the notebookbar and other improvements in LibreOffice 6.0 – FOSS adventures
======

With any new openSUSE release, I am interested in the improvements that the big applications have made. One of these big applications is LibreOffice. Ever since LibreOffice forked from OpenOffice.org, there has been a constant delivery of new features and new fixes every 6 months. openSUSE Leap 15 brought us the upgrade from LibreOffice 5.3.3 to LibreOffice 6.0.4. In this post, I will highlight the improvements that I found most newsworthy.

### Notebookbar

One of the experimental features of LibreOffice 5.3 was the Notebookbar. In LibreOffice 6.0 this feature has matured a lot and has gained a new form: the groupedbar. Let's take a look at the 3 variants. You can enable the Notebookbar by clicking on View –> Toolbar Layout and then Notebookbar.

![][1]

Please be aware that switching back to the Default Toolbar Layout is a bit of a hassle. To list the tricks:

 * The contextual groups notebookbar shows the menubar by default. Make sure that you don't hide it. Change the Layout via the View menu in the menubar.
 * The tabbed notebookbar has a hamburger menu on the upper right side. Select menubar. Then change the Layout via the View menu in the menubar.
 * The groupedbar notebookbar has a dropdown menu on the upper right side. Make sure to maximize the window. Otherwise it might be hidden.

The most talked about version of the notebookbar is the tabbed version. This looks similar to the Microsoft Office 2007 ribbon. That fact alone is enough to ruffle some feathers in the open source community. In comparison to the ribbon, the tabs (other than Home) can feel rather empty.
The reason is that the icons are not designed to be big and bold. Another reason is that there are no sub-sections in the tabs. In the Microsoft version of the ribbon, you have names of the sub-sections underneath the icons. This helps to fill the empty space. However, in terms of ease of use, this design does the job. It provides you with a lot of functions in an easy to understand interface.

![][2]

The most successful version of the notebookbar is in my opinion the groupedbar. It gives you all of the most needed functions in a single overview. And the dropdown menus (File / Edit / Styles / Format / Paragraph / Insert / Reference) all show useful functions that are not so often used.

![][3]

By the way, it also works great for Calc (Spreadsheets) and Impress (Presentations).

![][4]

![][5]

Finally there is the contextual groups version. The "groups" version is not very helpful. It shows a very limited number of basic functions. And it takes up a lot of space. If you want to use more advanced functions, you need to use the traditional menubar. The traditional menubar works perfectly, but in that case I rather combine it with the Default toolbar layout.

![][6]

The contextual single version is the better version. If you compare it to the "normal" single toolbar, it contains more functions and the order in which the functions are arranged is easier to use.

![][7]

There is no real need to make the switch to the notebookbar. But it provides you with choice. One of these user interfaces might just suit your taste.

### Microsoft Office compatibility

Microsoft Office compatibility (especially .docx, .xlsx and .pptx) is one of the things that I find very important. As a former Business Consultant I have created a lot of documents in the past. I have created 200+ page reports. They need to work flawlessly, including getting the page breaks right, which is incredibly difficult as the margins are never the same. Also the index, headers, footers, grouped drawings and SmartArt drawings need to display as originally composed. I have created large PowerPoint presentations with branded slides with +30 layouts, grouped drawings and SmartArt drawings. I need these to render perfectly in the slideshow. Furthermore, I have created large multi-tabbed Excel sheets with filters, pivot tables, graphs and conditional formatting. All of these need to be preserved when I open these files in LibreOffice.

And no, LibreOffice is still not perfect. But damn, it is close. This time I have seen no major problems when opening older documents. Which means that LibreOffice finally gets SmartArt drawings right. In Writer, pages break in different places compared to Microsoft Word. That has always been an issue. But I don't see many other issues. In Calc, the rendering of the graphs is less beautiful. But it's similar enough to Excel. In Impress, presentations can look strange, because sometimes you see bigger/smaller fonts in the same slide (and that is not on purpose). But I was very impressed to see branded slides with multiple sections render correctly. If I needed to score it, I would give LibreOffice a 7 out of 10 for Microsoft Office compatibility. A very solid score. Below are some examples of compatibility done right.

![][8]

![][9]

![][10]

### Noteworthy features

Finally, there are the noteworthy features. I will only highlight the ones that I find cool. The first one is the ability to rotate images by any angle. Below is an example of me rotating a Gecko.
![][11]

The second cool feature is that the old collection of autoformat table styles is now replaced with a new collection of table styles. You can access these styles via the menubar: Table –> AutoFormat Styles. In the screenshots below, I show how to change a table from the Box List Green to the Box List Red format.

![][12]

![][13]

The third feature is the ability to copy-paste unformatted text in Calc. This is something I will use a lot, making it a cool feature.

![][14]

The final feature is the new and improved LibreOffice Online help. This is not the same as the LibreOffice help (press F1 to see what I mean). That is still there (and as far as I know unchanged). But this is the online wiki that you will find on the LibreOffice.org website. Some contributors obviously put a lot of effort into this feature. It looks good, now also on a mobile device. Kudos!

![][15]

If you want to learn about all of the other introduced features, read the [release notes][16]. They are really well written.

### And that's not all folks

I discussed LibreOffice on openSUSE Leap 15. However, LibreOffice is also available on Android and in the Cloud. You can get the Android version from the [Google Play Store][17]. And you can see the Cloud version in action if you go to the [Collabora website][18]. Check them out for yourselves.

--------------------------------------------------------------------------------

via: https://www.fossadventures.com/checking-out-the-notebookbar-and-other-improvements-in-libreoffice-6-0/

作者:[Martin De Boer][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://www.fossadventures.com/author/martin_de_boer/
[1]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice06.jpeg
[2]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice09.jpeg
[3]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice11.jpeg
[4]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice10.jpeg
[5]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice08.jpeg
[6]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice07.jpeg
[7]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice12.jpeg
[8]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice14.jpeg
[9]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice15.jpeg
[10]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice16.jpeg
[11]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice01.jpeg
[12]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice02.jpeg
[13]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice03.jpeg
[14]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice04.jpeg
[15]:https://www.fossadventures.com/wp-content/uploads/2018/06/LibreOffice05.jpeg
[16]:https://wiki.documentfoundation.org/ReleaseNotes/6.0
[17]:https://play.google.com/store/apps/details?id=org.documentfoundation.libreoffice&hl=en
[18]:https://www.collaboraoffice.com/press-releases/collabora-office-6-0-released/

diff --git a/sources/tech/20180625 The life cycle of a software bug.md b/sources/tech/20180625 The life cycle of a software bug.md
deleted file mode 100644
--- a/sources/tech/20180625 The life cycle of a software bug.md
+++ /dev/null
@@ -1,67 +0,0 @@
The life cycle of a software bug
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bug_software_issue_tracking_computer_screen.jpg?itok=6qfIHR5y)

In 1947, the first computer bug was found—a moth trapped in a computer relay.

If only all bugs were as simple to uncover. As software has become more complex, so too has the process of testing and debugging. Today, the life cycle of a software bug can be lengthy—though the right technology and business processes can help. For open source software, developers use rigorous ticketing services and collaboration to find and mitigate bugs.

### Confirming a computer bug

During the process of testing, bugs are reported to the development team. Quality assurance testers describe the bug in as much detail as possible, reporting on their system state, the processes they were undertaking, and how the bug manifested itself.

Despite this, some bugs are never confirmed; they may be reported in testing but can never be reproduced in a controlled environment. In such cases they may not be resolved but are instead closed.

It can be difficult to confirm a computer bug due to the wide array of platforms in use and the many different types of user behavior. Some bugs only occur intermittently or under very specific situations, and others may occur seemingly at random.

Many people use and interact with open source software, and many bugs and issues may be non-repeatable or may not be adequately described. Still, because every user and developer also plays the role of quality assurance tester, at least in part, there is a good chance that bugs will be revealed.

When a bug is confirmed, work begins.

### Assigning a bug to be fixed

A confirmed bug is assigned to a developer or a development team to be addressed. At this stage, the bug needs to be reproduced, the issue uncovered, and the associated code fixed. Developers may categorize this bug as an issue to be fixed later if the bug is low-priority, or they may assign someone directly if it is high-priority. Either way, a ticket is opened during the process of development, and the bug becomes a known issue.

In open source solutions, developers may select from the bugs that they want to tackle, either choosing the areas of the program with which they are most familiar or working from the top priorities. Consolidated solutions such as [GitHub][1] make it easy for multiple developers to work on solutions without interfering with each other's work.

When assigning bugs to be fixed, reporters may also select a priority level for the bug. Major bugs may have a high priority level, whereas bugs related to appearance only, for example, may have a lower level. This priority level determines how and when the development team is assigned to resolve these issues. Either way, all bugs need to be resolved before a product can be considered complete. Using proper traceability back to prioritized requirements can also be helpful in this regard.

### Resolving the bug

Once a bug has been fixed, it is usually sent back to Quality Assurance as a resolved bug. Quality Assurance then puts the product through its paces again to reproduce the bug. If the bug cannot be reproduced, Quality Assurance will assume that it has been properly resolved.

In open source situations, any changes are distributed—often as a tentative release that is being tested.
This test release is distributed to users, who again fulfill the role of Quality Assurance and test the product.

If the bug occurs again, the issue is sent back to the development team. At this stage, the bug is reopened, and it is up to the development team to repeat the cycle of resolving the bug. This may occur multiple times, especially if the bug is unpredictable or intermittent. Intermittent bugs are notoriously difficult to resolve.

If the bug does not occur again, the issue will be marked as resolved. In some cases, the initial bug is resolved, but other bugs emerge as a result of the changes made. When this happens, new bug reports may need to be initiated, starting the process over again.

### Closing the bug

After a bug has been identified, addressed, and resolved, the bug is closed and developers can move on to other areas of software development and testing. A bug will also be closed if it was never found or if developers were never able to reproduce it—either way, the next stage of development and testing will begin.

Any changes made to the solution in the testing version will be rolled into the next release of the product. If the bug was a serious one, a patch or a hotfix may be provided for current users until the release of the next version. This is common for security issues.

Software bugs can be difficult to find, but by following set processes and procedures, developers can make the process faster, easier, and more consistent. Quality Assurance is an important part of this process, as QA testers must find and identify bugs and help developers reproduce them. Bugs cannot be closed and resolved until the error no longer occurs.

Open source solutions distribute the burden of quality assurance testing, development, and mitigation, which often leads to bugs being discovered and mitigated more quickly and comprehensively. However, because of the nature of open source technology, the speed and accuracy of this process often depends upon the popularity of the solution and the dedication of its maintenance and development team.

_Rich Butkevic is a PMP certified project manager, certified scrum master, and runs [Project Zendo][2], a website for project management professionals to discover strategies to simplify and improve their project results. Connect with Rich at [Richbutkevic.com][3] or on [LinkedIn][4]._

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/life-cycle-software-bug

作者:[Rich Butkevic][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/rich-butkevic
[1]:https://github.com/
[2]:https://projectzendo.com
[3]:https://richbutkevic.com
[4]:https://www.linkedin.com/in/richbutkevic

diff --git a/sources/tech/20180626 8 great pytest plugins.md b/sources/tech/20180626 8 great pytest plugins.md
deleted file mode 100644
index c2c6c2bab7..0000000000
--- a/sources/tech/20180626 8 great pytest plugins.md
+++ /dev/null
@@ -1,70 +0,0 @@
translating---geekpi

8 great pytest plugins
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keyboard_coding.png?itok=E0Vvam7A)

We are big fans of [pytest][1] and use it as our default Python testing tool for work and open source projects.
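If you have never seen pytest in action, here is a minimal sketch of what a test file looks like. The naming conventions it relies on are explained in the next section, and the `add()` function is just a made-up example defined inline to keep the file self-contained:

```python
# test_example.py -- pytest discovers files named test_*.py
# and functions named test_*, with no base class or registration needed.

def add(a, b):
    # A trivial function to test.
    return a + b

def test_add():
    # Plain assert statements are all pytest needs.
    assert add(2, 3) == 5

def test_add_negative():
    assert add(-1, 1) == 0
```

Running `pytest` in the directory containing this file finds and runs both tests automatically.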
For this month's Python column, we're sharing why we love pytest and some of the plugins that make testing with pytest so much fun.

### What is pytest?

As the tool's website says, "The pytest framework makes it easy to write small tests, yet scales to support complex functional testing for applications and libraries."

Pytest allows you to define your tests in any file called `test_*.py` and as functions that begin with `test_*`. Pytest will then find all your tests, across your whole project, and run them automatically when you run `pytest` in your console. Pytest accepts [flags and arguments][2] that can change when the testrunner stops, how it outputs results, which tests are run, and what information is included in the output. It also includes a `set_trace()` function that can be entered into your test; this will pause your tests and allow you to interact with your variables and otherwise "poke around" in the console to debug your project.

One of the best aspects of pytest is its robust plugin ecosystem. Because pytest is such a popular testing library, over the years many plugins have been created to extend, customize, and enhance its capabilities. These eight plugins are among our favorites.

### Great 8

**1.[pytest-sugar][3]**
`pytest-sugar` changes the default look and feel of pytest, adds a progress bar, and shows failing tests instantly. It requires no configuration; just `pip install pytest-sugar`, run your tests with `pytest`, and enjoy the prettier, more useful output.

**2.[pytest-cov][4]**
`pytest-cov` adds coverage support for pytest to show which lines of code have been tested and which have not. It will also include the percentage of test coverage for your project.

**3.[pytest-picked][5]**
`pytest-picked` runs tests based on code that you have modified but not committed to `git` yet. Install the library and run your tests with `pytest --picked` to test only files that have been changed since your last commit.

**4.[pytest-instafail][6]**
`pytest-instafail` modifies pytest's default behavior to show failures and errors immediately instead of waiting until pytest has finished running every test.

**5.[pytest-tldr][7]**
A brand-new pytest plugin that limits the output to just the things you need. `pytest-tldr` (the `tldr` stands for "too long, didn't read"), like `pytest-sugar`, requires no configuration other than basic installation. Instead of pytest's default output, which is pretty verbose, `pytest-tldr`'s default limits the output to only tracebacks for failing tests and omits the color-coding that some find annoying. Adding a `-v` flag returns the more verbose output for those who prefer it.

**6.[pytest-xdist][8]**
`pytest-xdist` allows you to run multiple tests in parallel via the `-n` flag: `pytest -n 2`, for example, would run your tests on two CPUs. This can significantly speed up your tests. It also includes the `--looponfail` flag, which will automatically re-run your failing tests.

**7.[pytest-django][9]**
`pytest-django` adds pytest support to Django applications and projects.
Specifically, `pytest-django` introduces the ability to test Django projects using pytest fixtures, omits the need to import `unittest` and copy/paste other boilerplate testing code, and runs faster than the standard Django test suite.

**8.[django-test-plus][10]**
`django-test-plus` isn't specific to pytest, but it now supports pytest. It includes its own `TestCase` class that your tests can inherit from and enables you to use fewer keystrokes to type out frequent test cases, like checking for specific HTTP error codes.

The libraries we mentioned above are by no means your only options for extending your pytest usage. The landscape for useful pytest plugins is vast. Check out the [Pytest Plugins Compatibility][11] page to explore on your own. Which ones are your favorites?

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/6/pytest-plugins

作者:[Jeff Triplett;Lacey Williams Henschel][a]
选题:[lujun9972](https://github.com/lujun9972)
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/sites/default/files/styles/byline_thumbnail/public/pictures/dcus-2017-bw.jpg?itok=s8PhD7Ok
[1]:https://docs.pytest.org/en/latest/
[2]:https://docs.pytest.org/en/latest/usage.html
[3]:https://github.com/Frozenball/pytest-sugar
[4]:https://github.com/pytest-dev/pytest-cov
[5]:https://github.com/anapaulagomes/pytest-picked
[6]:https://github.com/pytest-dev/pytest-instafail
[7]:https://github.com/freakboy3742/pytest-tldr
[8]:https://github.com/pytest-dev/pytest-xdist
[9]:https://pytest-django.readthedocs.io/en/latest/
[10]:https://django-test-plus.readthedocs.io/en/latest/
[11]:https://plugincompat.herokuapp.com/

diff --git a/sources/tech/20180626 Playing Badass Acorn Archimedes Games on a Raspberry Pi.md b/sources/tech/20180626 Playing Badass Acorn Archimedes Games on a Raspberry Pi.md
new file mode 100644
index 0000000000..b1f8d97305
--- /dev/null
+++ b/sources/tech/20180626 Playing Badass Acorn Archimedes Games on a Raspberry Pi.md
@@ -0,0 +1,539 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Playing Badass Acorn Archimedes Games on a Raspberry Pi)
[#]: via: (https://blog.dxmtechsupport.com.au/playing-badass-acorn-archimedes-games-on-a-raspberry-pi/)
[#]: author: (James Mawson https://blog.dxmtechsupport.com.au/author/james-mawson/)

Playing Badass Acorn Archimedes Games on a Raspberry Pi
======

![Cannon Fodder on the Raspberry Pi][1]

The Acorn Archimedes was an excellent machine and years ahead of its time.

Debuting in 1987, it featured a point and click graphic interface not so different to Windows 95, 32 bit processing, and enough 3D graphics power to portal you to a new decade.

These days, it's best remembered for launching the Acorn RISC Machines processor. ARM processors went on to rule the world. You almost certainly keep one in your pocket.

What's less well appreciated is that the Archimedes was rad for games. For a few years, it was the most powerful desktop in the world and developers were eager to show what they could do with it.

But with such power came a great price tag. The Archimedes was never going to be in as many homes to make as many memories as Sega or Nintendo.
But now, the Raspberry Pi's ARM chip makes it cheap and easy to play these games on the same operating system and CPU architecture they were written for.

Even better, the rights holders to much of this machine's gaming catalogue have been generous enough to allow hobbyists to legally download their work for free.

This is a cheap and easy project. In fact, if you already run a Raspberry Pi home theatre or retro gaming rig, all you really need is a spare SD card.

### Introduction

None of this will be on the exam, so if you already know the story of the Acorn Archimedes – or just want to get straight into gaming – feel free to skip ahead to the next section.

But if you're wondering what we're even talking about, here it is:

#### What on Earth is an Acorn Archimedes?

Me and Acorn computers go way back.

For the earliest part of my life that I can remember, Dad ran his business from home, writing timetabling software for schools. This was the early 80s, before the desktop computer market had been whittled down to Mac and PC. There were Amstrad CPCs, Sinclairs, Commodores, Ataris, TRSs, the list goes on.

They all had their operating systems and ran their own software. If you wanted to port your software over to a new platform, you had to buy it.

So, at a time when it was somewhat novel for a family to have even one computer, we had about a dozen, many of them already quite antique. There was a Microbee, an Apple IIc, an IBM XT, all sorts of stuff.

The ones Dad liked most though were the BBC machines by [Acorn Computers][2]. He had several. There was a Model B, a Master 128 and a Master Compact.

They were named that way because the British Broadcasting Corporation were developing a course to teach children how to program, and they needed a computer to go with it. Because of this educational focus, they'd found their way into a lot of schools – exactly the market he was selling to.

At some point, I figured out you could play games on these things. I was straight away hooked like it was crack. All I cared about was games games games. It must have taken me several years to figure out that computers had a different use, because I can vividly recall how annoyed I was to be starting school while Dad got to stay home and play games all day.

On my 7th birthday I got a second hand BBC Master Compact all of my own. This was probably as much to keep me away from his work computers as it was to introduce me to computing. I started learning to program in BASIC and Logo. I also played epic amounts of space shooters, 2D platformers and adventure games.

Being obsessed with these things, I tagged along to the local BBC Users Group. This was a monthly get-together where enthusiasts would discuss what was new, bring their machines to show off what they were doing and engage in some casual software piracy. Back before internet forums and torrents, people did this in person.

This was where I first saw an Archimedes. I can't really remember the exact year or the exact model – I just remember my jaw dropping to the floor at the 3D graphics and the 8 channel stereo sound. It would be about a decade before I saw anything similar on a gaming console.

#### The Birth of a Legend

Looking back, this has a very good claim to be the first modern desktop computer. It was a 32-bit machine, an interface that looks more like what we use today than anything built in the 1980s, a palette of 4096 colours, and more horsepower than a lot of people knew what to do with.
Now, don't get me wrong: the 8-bit BBC machines were loads of fun and great for what they were – but what they were was fairly primitive. It was basically just a big box you typed commands on to make it beep and stuff. In theory it had 8 colours, but when you saw one in the wild it was usually hooked up to a monochrome screen and you didn't feel like you were missing out on too much because of it.

In 1984, Apple launched their Macintosh, featuring the first Graphical User Interface available on the mass market. Acorn knew they'd need a point and click graphic interface to stay in the game. And they knew the aging MOS 6502 they'd used in all their machines so far was just not going to be the CPU of the future.

So, what to replace it with?

The Acorn engineers looked at the available processors and found that none of them could quite do what they wanted. They decided to build their own – and it would be radically different.

Up until that point, chip makers took a bit of a Swiss Army Knife approach to processor design – to compete, you added more and more useful stuff to the instruction set.

There was a certain logic to this – hardware could be mass produced, while good software engineers were expensive. It made sense to handle as much as possible in the hardware. For device manufacturers with bills to pay, it was a real selling point.

But this came at a cost – more and more complex instructions required more and more clock cycles to complete. Often there was a whole extra layer of processing to convert the complex machine code instructions into smaller instructions. As RAM became bigger and faster, CPUs were struggling to keep pace with the available memory bandwidth.

Acorn turned this idea on its head, with a stripped-back approach in the great British tradition of the [de Havilland Mosquito][3]: Every instruction could be completed in a single cycle.

While testing the prototype CPU, the engineers noticed something weird: they'd disconnected the power and yet the chip was running. What they'd built was so power-efficient that it kept running on residual power from nearby components.

It was also 25 times faster than the 6502 CPU they used in the old BBC machines. Even better, it was several times more powerful than the Motorola 68000 found in the Apple Macintosh, Atari ST and Commodore Amiga – and several times more powerful than the 386 in the new Compaq too.

With such radically new hardware, they needed a new operating system. What they came up with was Risc OS, and it was operated entirely through a graphic point-and-click desktop interface with a pinboard and an icon bar. This was pretty much Windows 95, 8 years before it happened.

In a single step, Acorn had gone from producing some perfectly serviceable 8-bit box that kids could learn to code on, to having the most powerful desktop computer in the world. I mean, it was technically possible to get something more powerful – but it would have been some kind of server or mainframe. As far as something that could sit on your desk, this was top of the pile.

It sold well in Acorn's traditional education market in the UK. The sheer grunt also made it popular for certain power-hungry business tasks, like desktop publishing.

#### The Crucifixion

It wasn't too long before Dad got an Archimedes – I can't remember exactly which model. By this time, he'd moved his business out of home to an office.
When school holidays rolled around, I'd sometimes have to spend the day at his work, where I had all the time in the world to fiddle around on it.

The software it came with was enough to keep a child entertained for a while. It came with a demo game called Lander – this was more about showing off the machine's 3D graphics power than providing any lasting value. There was a card game, and also some drawing programs.

I played with the demo disc until I got bored – which I think was the most use that this particular machine ever got. For all the power under the hood, all the applications Dad used to actually run his business ran on DOS and Windows.

He'd spent more than $4000 in today's money for the most sophisticated and advanced piece of computing technology for a mile in any direction and it just sat there.

He might have at least salvaged some beneficial utility out of it if he'd followed my advice of getting some games for it and letting me take it home.

He never got around to writing any software on it. The Archimedes was apparently a big hit with British schools, but never really got popular enough with his Australian customer base to be worth coding for.

Which I guess kind of sums up where it all ultimately went wrong for the Acorn desktop.

As the 80s wore on to the '90s, Compaq reverse engineered the BIOS on the IBM PC to release their own fully compatible PC, and big players like Amstrad left their proprietary platforms to produce their own compatible MS-DOS machines. It also became increasingly easy for just about anyone with a slight technical bent to build their own PC-compatible clone from off-the-shelf parts – and to upgrade old PCs with new hard drives, sound cards, and the latest 386 and 486 processors.

Small, independent computer shops and other local businesses started building their own PCs and hardware manufacturers competed to sell parts to them. This was a computing platform that could serve all price points.

With so much of the user base now on MS-DOS, software developers followed. Which only reinforced the idea that this was the obvious system to buy, which in turn reinforced that it was the system to code for.

The days when just any computer maker could make a go of it with their own proprietary hardware and operating system had passed. Third-party support was everything. It didn't actually matter how good your technology was if nothing would run on it. Even Apple nearly went to the wall.

Acorn hung on through the 90s, and there was even a successor to the Archimedes called the [RiscPC][4]. But while the technology itself was again very good, these things were relatively marginal affairs in the marketplace. The era of the Acorn desktop had passed.

#### The Resurrection

It was definitely good for our family business when the market consolidated to Mac and PC. We didn't need to maintain so many versions of the same software.

But the Acorn machines had so much sentimental value. We both liked them and were sad to see them go. I've never been that into sport, but watching them slowly disappear might have been a bit like watching your football team lose match after match before finally going broke.

We totally had no idea that they were, very quietly, on a path to total domination.

The ARM was originally only built to go in the Archimedes.
But it turned out that having a massively powerful processor with a simple instruction set and very little heat emission was useful for all sorts of stuff: DVD players, set top boxes, photocopiers, televisions, vending machines, home and small business routers, you name it.

The ARM's low power consumption made it especially useful for portable devices like PDAs, digital cameras, GPS navigators and – eventually – tablets and smartphones. Intel tried to compete in the smartphone market, but was [eventually forced to admit that this technology was just better for phones][5].

So in the end, Dad's old BBC machines went on to conquer the world.

### The Acorn Archimedes as a Gaming Platform

While Microsoft operating systems were ultimately to become the only real choice for the serious desktop gamer, for a while the Archimedes was the most powerful desktop computer in the world. This attracted a lot of games developers, eager to show what they could do with it.

This would have been about more than just selling to a well moneyed section of the desktop computer market that was clearly quite willing to throw cash at shiny things. It would have been a chance to make your reputation in the industry with a flagship product that just wasn't possible on lesser hardware.

So it is that you see Star Fighter 3000, Chocks Away and Zarch all charting new territory in what was possible on a desktop computer.

But while the 3D graphics power was this system's headline feature, the late 80s and early 90s were really the era of Sonic and Mario: the heyday of 2D platform games. Here, the Archimedes also excels, with offerings like Mad Professor Mariarti, Bug Hunter, F.R.E.D. and Hamsters, all of which are massively playable, have vibrant graphics and a boatload of personality.

As you dig further into the library, you also find a few games that show that not every developer really knew what to do with this machine. Some games – like Repton 3 – are just old BBC micro games given the most meagre of facelifts.

Many of the games in the Archimedes catalogue you'll recognise from other platforms: Populous, Lemmings, James Pond, Battle Chess, the list goes on.

Here, the massive hardware advantage of the Archimedes means that it usually had the best version of the game to play. You're not getting a whole new game here: but you do get noticeably smoother graphics and gameplay, especially compared to the console releases.

All in all, the Archimedes never had a catalogue as expansive as MS-DOS, the Commodore Amiga, or the Sega and Nintendo consoles. But there are enough truly excellent games to make it worth an SD card.

### Configuring Your Raspberry Pi

This is a bit different to other retro gaming options on the Raspberry Pi – we're not running an emulator. The ARM chip in the Pi is a direct descendant of the one in the Archimedes, and there's an [open source version of Risc OS][6] we can install on it.

For the most hardcore retro gaming purist, nothing less than the original hardware will do. For everyone else, using the same operating system from back in the day to load up your games means that your retro gaming rig becomes just that little bit more of a time machine.

But even with all these similarities, there's still going to be a few things that change in 30 years of computing.

The most visible difference is that our Raspberry Pi doesn't come with an internal 3.5″ floppy disk drive.
You might be able to hook up a USB one, but most of us don't have this lying around and don't really want one. So we're going to need a different way to boot floppy images.

The more major difference is how much RAM the operating system is written to handle. The earliest versions of Risc OS made efficient use of the ARM's 32-bit register by using 26 bits for the memory address and the remaining 6 bits for status flags. A 26-bit scheme gives you enough addressing space for up to 64 megabytes of RAM.

When this was first devised, the fact that an Archimedes came with a whole megabyte of RAM was considered incredibly luxurious by the standards of the day. By contrast, the first Commodore Amiga had 256kb of RAM. The Sega Mega Drive had 72kb.

But as time wore on, later versions of Risc OS moved to a 32-bit addressing scheme. This is what we have on our Raspberry Pi. A few games have been [recompiled to run on 32 bit addressing][7], but most have not.

The Archimedes also used different display drivers for different screens. These days, our GPU can handle all of this for us. We just need to install a patch to get that working.

There are free utilities you can download to handle all of these things.

#### Hardware Requirements

I've tested this to work with a Raspberry Pi Model 3 B, but I expect that any Pi from the Model A onwards should manage this. The ARM processor on the slowest Pi is a great many times more powerful than the one in the fastest Archimedes.

The lack of ports on a Raspberry Pi Zero means it's probably not the most convenient choice, but if you can navigate around this, then it should be powerful enough.

In addition to the board, you'll need something to put it in, a micro SD card, a USB SD card adapter, a power supply, a screen with an HDMI input, a USB keyboard and a 3 button mouse – a clickable scroll wheel works fine for your middle button.

If you already have a Raspberry Pi home theatre or retro gaming rig, then you've already got all this, so all you really need is a new micro SD card to swap in for Risc OS.

#### Installing Risc OS Open

When I first wrote this guide, Risc OS wasn't an available option for the Raspberry Pi 3 on the NOOBS and PINN installers. That meant you had to download the image from the [Risc OS Open downloads page][8] and burn it to a new micro SD card.

You can still do this if you like, and if you can't connect your Raspberry Pi to the internet via Wi-Fi or Ethernet then that's still your best option. If you're not sure how to write an image to an SD card, here are some good guides for [Windows][9] and for [Mac][10].

For everyone else, now that Risc OS is available in the [NOOBS installer][11] again, I recommend using that. What's really cool about NOOBS is that it makes it super simple to dual boot with something like [Retropie][12] or [Recalbox][13] for the ultimate all-in-one retro gaming device.

Risc OS is an extremely good option for a dual boot machine because it only uses a few gigabytes – a small fraction of even the smallest SD cards around these days. This leaves most of it available for other operating systems and saves you having to swap cards, which can be a right pain if you have to unscrew the case.

This image requires a card with at least 2 gigabytes, which for what we're doing is plenty. Don't worry about tracking down the largest SD card you can find.
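One aside on the image-burning route described above: if the other computer you're using happens to run Linux, you don't need a separate guide at all, since `dd` does the job. A sketch with placeholder names (the image filename will differ, and you must replace `/dev/sdX` with your card's actual device; check with `lsblk` first, because writing to the wrong device will destroy its contents):

```
# Placeholder names: adjust riscos.img and /dev/sdX for your system.
sudo dd if=riscos.img of=/dev/sdX bs=4M status=progress conv=fsync
```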
The operating system is extremely lightweight and the games themselves vary from about 300kb to about 5 megabytes at the very largest. Even a very small card offers enough space for hundreds of games – more than you will ever play. + +If you’re unsure how to use the NOOBS installer, please [click here for instructions][14]. + +#### Navigating Risc OS + +Compared to your first Linux experience, using Risc OS for the first time is, in my opinion, far more gentle. This is in large part thanks to a graphical interface that’s both fairly intuitive and actually very useful for configuring things. + +The command line is there if you want it, but we won’t need it just to play some games. You can kind of tell that this was first built with a mass-market audience in mind. + +So let’s jump right in. + +Insert your card into your Pi, hook it up to your screen, keyboard, mouse and power supply. It shouldn’t take all that long before you’re at the desktop. + +###### The Risc OS Mouse + +Risc OS uses a three button mouse. + +You use the left button – or “Select” button – in much the same way as you’re already used to: one click to select icons and a double click to open folders and execute programs. + +The middle button – ie, your scroll wheel – is used to open menus in much the same way as the right mouse button is used in Windows. We’ll be using this a lot. + +The right button – or “Adjust” button – has behaviours that vary between applications. + +###### Browsing Directories + +Ok, so let’s start taking a look around. At the bottom of the screen you’ll see an icon bar. In the bottom left are icons for storage devices. + +Click once with the left mouse button on the SD card and a window will open to show you what’s on it. You can take a look inside the directories by double clicking. + +###### The Pling + +As you start to browse Risc OS, you’ll notice that a lot of what you see inside the directories has exclamation marks at the beginning. This is said out aloud as a “pling”, and it’s used to mark an application that you can execute. + +One of the quirks of Risc OS is that these executables aren’t really files – they’re directories. If you hold shift while double clicking, you can open them and start looking inside, same as any other directory – but be careful with this, it’s a good way to break stuff. + +Risc OS comes with some applications installed – including a web browser called !NetSurf. We’ll be using that soon to download our games and some small utilities we need to get them running. + +###### Creating Directories + +Risc OS comes with a basic directory structure, but you’ll probably want to create some new ones to put your floppy images and .zip files. + +To do this, click with the middle button inside the folder where you want to place your new directory. This will open up a menu. Move your mouse over the arrow next to “New Directory” and a prompt will open where you can name it and press OK. + +###### Copying Files and Directories + +To copy a file or directory somewhere else, just drag and drop it with the left mouse button to the new location. + +###### Forcing Applications to Quit + +Sometimes, if you haven’t configured something right, if you’ve downloaded something that just doesn’t work, or if you plain forgot to look up the controls in the manual, you might find yourself stuck inside an application that has a blank screen or isn’t responding. + +Here, you can press Ctrl-Shift-F12 to quit back to the desktop. 
+
+###### Shutting Down and Rebooting
+
+If you want to power down or reboot your Pi, just click the middle button on the raspberry icon in the bottom right corner and select “Shutdown”. This will give you an option to reboot the Pi or you can just remove the power cable.
+
+#### Connecting to the Internet
+
+Okay, so I’ve got good news and bad news. I’ll get the bad news right out of the way first:
+
+Risc OS Open doesn’t yet support wireless networking through either the onboard wireless or a wireless dongle in the USB port. It’s on the [to-do list][15].
+
+In the meantime, if you can find a way to connect to the internet through the Ethernet port, it makes the rest of this project a lot easier. If you were going to use an Ethernet cable anyway, this will be no big deal. And if you have a wireless bridge handy, you can just use that.
+
+If you don’t have a wireless bridge, but do have a second Raspberry Pi board lying around (hey, who doesn’t these days :p), you can [set it up as a wireless bridge][16]. This is what I did and it’s actually pretty easy if you just follow the steps.
+
+Another option might be to set up a temporary tinkering area next to your router so that you can plug straight in to get everything configured.
+
+Ok, so what’s the good news?
+
+It’s this: once you’ve got the internet in your front hole, the rest is rather easy. In fact, the only bit that’s not done for you is configuring name servers.
+
+So let’s get to it.
+
+Double-click on !Configure, click once on Network, click on Internet and then on Host Names. Then enter the IPs of your name servers in the name server fields. If you’re not sure what IP to put in here, just use Google’s publicly available name servers – 8.8.8.8 and 8.8.4.4.
+
+When you click Set, it will ask you if you want to reboot. Click yes.
+
+Now double-click on !NetSurf. You’ll see the logo is now added to the bottom right corner. Click on this to open a new browser window.
+
+Compared to Chrome, Firefox, et al, !NetSurf is a primitive web browser. I do not recommend it as a daily driver. But to download Risc OS software directly to the Pi, it’s actually pretty damn convenient.
+
+###### Short Link to This Guide
+
+As you go through the rest of this guide, it’s going to get annoying copying by hand all the URLs you’ll want to visit.
+
+To save you this trouble, type bit.do/riscpi into the browser bar to load this page. With this loaded, you can follow the links.
+
+###### If You’re Still Getting Host Name Error Messages
+
+One little quirk of Risc OS is that it seems to check for name servers as part of the boot process. If it doesn’t find them, it then assumes they’re not there for the rest of the session.
+
+This means that if you connect your Pi to the internet when it’s already booted, you will get an error message when you try to browse the internet with !NetSurf.
+
+To fix this, just double check that your wireless bridge is switched on or that your Pi is plugged into the router, reboot, and the OS should find the name servers.
+
+###### If You Can’t Connect to the Internet
+
+If this is all too hard and you absolutely can’t connect to the internet, there’s always sneakernet – downloading files to another machine and then transferring by USB stick.
+
+This is what I tried at first; it does work, but I found it terribly annoying.
+
+One frustration is that using a Windows 10 machine to download Risc OS software seems to strip out the filetype information – even when you aren’t unzipping the archives.
It’s not that difficult to repair this, it’s just tedious when you have to do it all the time. + +The other problem is that running USB sticks from computer to computer all the time just got a bit old. + +Still, if you have to do it, it’s an option. + +#### Unzipping Compressed Files + +Most of the files we’ll be downloading will come in .zip format – this is a good thing, because it preserves the file type information. But we’ll need a way to uncompress these files. + +For this we’ll use a program called !SparkFS. This is proprietary software, but you can download a read-only version for free. This will let us extract files from .zip archives. + +To download and install !SparkFS, click [this link][17] and follow the instructions. You want the version of this software for machines with more than 2MB of RAM. + +#### Installing ADFFS and Anymode + +Now we need to install ADFFS, a floppy disk imaging program written by Jon Abbot of the [Archimedes Software Preservation Project][18]. + +This gives us a virtual floppy drive we can use to boot floppy images. It also takes care of the 26 bit memory addressing issues. + +To get your copy, browse to the [ADFFS subforum][19] and click the thread for the latest public release – at the time of writing that’s 2.64. + +Download the .zip file, open it and then drag and drop !ADFFS to somewhere on your SD card where it’s conveniently accessible – we’ll be using it a lot. + +###### Configuring Boot Scripts + +For ADFFS to work properly, we’re going to need to add a little boot script. + +Follow these instructions carefully – if you do the wrong thing here you can really mess up your OS, or even brick your Pi. + +###### Creating !Boot.Loader.CMDLINE/TXT + +Remember how I showed you that you could open up applications as directories by holding down shift when you double-click? We can also do this to get inside the Risc OS boot process. We’ll need to do this now to add our boot script. + +Start by left clicking once on the SD card icon on the icon bar, then hold down shift and double-click !Boot with your left mouse button. Then double click the folder labeled Loader to open it. This is where we’re going to put our script. + +To write our script, click Apps on the icon bar, then double-click !StrongEd. Now click on the fist icon that appeared on the bottom right of the icon bar to open a text editor window, and type: + +``` +disable_mode_changes +``` + +Computers are fussy so take a moment to double-check your spelling. + +To save this file, click the middle button on the text editor and hover your cursor over the arrow next to Save. Then delete the existing text in the Name field and replace it with: + +``` +CMDLINE/TXT +``` + +Now, see that button marked Save? It’s a trap! Instead, drag and drop the pen and paper icon to the Loader folder. + +We’re now finished with this folder, so you can close it and also close the text editor. + +###### Installing Anymode + +Now we need to install the Anymode module. This is to make the screen modes on our software play nice with our GPU and HDMI output. + +Download Anymode from [here,][20] copy the .zip file to somewhere temporary and open it. + +Now go back to the root directory on your SD card, double-click on !Boot again, then open the folders marked Choices, Boot and Predesk. + +Then use your left mouse button to drag and drop the Anymode module from your .zip file to the Predesk folder. 
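+
+As an aside, if you find !StrongEd fiddly, the boot flag from the CMDLINE/TXT step above can also be prepared from a Linux machine before the Pi is ever booted. Treat the following as a sketch rather than gospel: it assumes that the Loader directory corresponds to the SD card’s FAT partition (Risc OS shows a FAT file named CMDLINE.TXT as CMDLINE/TXT, since it swaps the roles of the dot and the slash), and `/dev/sdX1` is a placeholder you must replace with your card’s real FAT partition:
+
+```
+# Check which device is your SD card first, e.g. with: lsblk
+sudo mkdir -p /mnt/loader
+sudo mount /dev/sdX1 /mnt/loader
+# Append the flag instead of overwriting, in case the file already exists
+echo "disable_mode_changes" | sudo tee -a /mnt/loader/CMDLINE.TXT
+sudo umount /mnt/loader
+```
+
+If anything on your image doesn’t match these assumptions, stick with the !StrongEd method described above.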
+
+#### Configuring USB Joysticks and Gamepads
+
+Over at the Archimedes Software Preservation Project, there’s a [USB joystick driver][21] in development.
+
+This module is still in alpha testing, and you’ll need to use the command line to configure it, but it’s there if you’d like to give it a try.
+
+If you can’t get this working, don’t worry too much. It was actually pretty normal back in the day for people either to not have a joystick, or not to be able to get it to work. So pretty much every game can be played with keyboard and mouse.
+
+I’ll be updating this section as this project develops.
+
+#### Setting File Types
+
+Risc OS uses an [ADFS][22] filesystem, different to anything used on Windows or Linux.
+
+Instead of using a filename extension, ADFS files have a “file type” associated with them to show what kind of file they are. When these files pass through a different operating system, this information can get stripped from the file.
+
+In theory, if we don’t open our .zip archives until they reach our Risc OS machine, the file type should be preserved. Usually this works, but sometimes you unzip the archive and find files with no file type attached. You’ll be able to tell when this has happened because you’ll be faced with a green icon labeled “data”.
+
+Fortunately, this is pretty easy to fix.
+
+Just click on the file with your middle button. A menu will appear.
+
+The second item on this menu will include the name of the file and it will have an arrow next to it. Hover your cursor over the arrow and a second menu will appear.
+
+Near the bottom of this menu will be an option marked “Set Type”, and it will also have an arrow next to it. Hover your cursor over this arrow and a field will appear where you can enter the file type.
+
+To set the file type on a floppy image, type:
+
+```
+&FCE
+```
+
+[Click here for more file type codes.][23]
+
+### Finding, Loading and Playing Games
+
+The best place to start looking for floppy images is in the [Games subforum][24] at the Archimedes Software Preservation Project.
+
+There’s also a [Risc OS downloads section at Acorn Arcade][25].
+
+There are definitely other websites that host Archimedes games, but I have no idea how legal these copies are.
+
+###### Booting and Switching Floppy Disc Images
+
+Some games might have specific instructions for how to boot the floppy. If so, then follow them.
+
+Generally, though, you drag and drop the image file onto ADFFS, then click on it with the middle button and press “boot floppy”. Your game should start straight away.
+
+Many of the games use more than one floppy disc. To play these, boot disc 1. When you’re asked to switch floppy discs, press control and shift and the function key corresponding to the disc you want to change to.
+
+### Which Games Should You Play?
+
+This is a matter of opinion really and everyone’s taste differs.
+
+Still, if you’re wondering what to try, here are my recommendations.
+
+This is still a work in progress. I’ll be adding more games as I find what I like.
+
+#### Cannon Fodder
+
+
+
+This is a top-down action/strategy game that’s extremely playable and wickedly funny.
+
+You control a team of soldiers indirectly by clicking on areas of the screen to tell them where to move and who to kill. Everyone dies with a single shot.
+
+At the start your enemies are all very easy to beat but the game progresses in difficulty. As you go, you’ll need to start dividing your team up into squads to command separately.
+ +I used to play this on the Mega Drive back in the day, but it’s so much more playable with an actual mouse. + +[Click here to get Cannon Fodder.][26] + +#### Star Fighter 3000 + + + +This is a 3D space shooter that really laid down the gauntlet for what the Archimedes could do. + +You fly around and blast stuff with lasers and missiles. It’s pretty awesome. It’s kind of a forerunner to Terminal Velocity, if you ever played that. + +It was later ported to the 3D0, Sega Saturn and Playstation, but they could never render the 3D graphics to the same distance. + +[Click here to get Star Fighter 3000.][27] + +You want the download marked “Star Fighter 3000 version 3.20”. This one doesn’t use a floppy image, so don’t use ADFFS to run this file. Just double click the program and go. + +#### Aggressor + + + +This is a side-scrolling run-and-gun where you have unlimited ammo and a whole lot of aliens and robots to kill. Badass. + +#### Bug Hunter + + + +This is a really unique puzzle/platform game – you’re a robot with sticky legs who can walk up and down walls and across the ceiling, and your job is to squash bugs by dropping objects lying around. + +Which is harder than it sounds, because you can easily get yourself into situations where you dropped something in the wrong place, making it impossible to complete your objective, so your only way out is to initiate your self destruct sequence in futility and shame. Which I guess is kinda rather dark, if you dwell on it. + +It’s fun though. + +[Click here to get Bug Hunter.][28] + +#### Mad Professor Mariarti + + + +This is a platformer where you’re a mad scientist who shoots spanners and other weapons at bad guys. It has good music and gameplay and an immersive puzzle element as well. + +[Click here to get Mad Professor Mariarti.][29] + +#### Chuckie Egg + +Ok, now we’re getting really retro. + +Strictly speaking, this doesn’t really belong in this list, because it’s not even an Archimedes game – it’s an old BBC Micro game that I played the hell out of back in the day that some nice chap has ported to Risc OS. + +But there’s a version that runs and it’s awesome so you should play it. + +Basically you’re just this guy who goes around stealing eggs. That’s it. That’s all you do. + +It’s absolutely amazing. + +If you’ve never played it, you really should check it out. + +You can [get Chuckie Egg here][30]. + +This isn’t a floppy image, so you don’t need ADFFS to run it. Just double click on the program and go. + +### Over to You + +Got any favourite Acorn Archimedes games? + +Got any tips for getting them running on the Pi? 
+ +Please let me know in the comments section 🙂 + +-------------------------------------------------------------------------------- + +via: https://blog.dxmtechsupport.com.au/playing-badass-acorn-archimedes-games-on-a-raspberry-pi/ + +作者:[James Mawson][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.dxmtechsupport.com.au/author/james-mawson/ +[b]: https://github.com/lujun9972 +[1]: https://blog.dxmtechsupport.com.au/wp-content/uploads/2018/06/cannonfodder-1024x768.jpg +[2]: http://www.computinghistory.org.uk/det/897/Acorn-Computers/ +[3]: http://davetrott.co.uk/2017/03/strategy-is-sacrifice/ +[4]: http://www.old-computers.com/museum/computer.asp?c=1015 +[5]: https://www.theverge.com/2016/8/16/12507568/intel-arm-mobile-chips-licensing-deal-idf-2016 +[6]: https://www.riscosopen.org/content/ +[7]: https://www.riscosopen.org/wiki/documentation/show/ARMv6%2Fv7%20software%20compatibility%20list#games +[8]: https://www.riscosopen.org/content/downloads/raspberry-pi +[9]: http://www.raspberry-projects.com/pi/pi-operating-systems/win32diskimager +[10]: http://osxdaily.com/2018/01/11/write-img-to-sd-card-mac-etcher/ +[11]: https://www.raspberrypi.org/downloads/noobs/ +[12]: https://retropie.org.uk/ +[13]: https://www.recalbox.com/ +[14]: https://www.raspberrypi.org/documentation/installation/noobs.md +[15]: https://www.riscosopen.org/wiki/documentation/show/RISC%20OS%20Roadmap +[16]: https://pimylifeup.com/raspberry-pi-wifi-bridge/ +[17]: http://www.riscos.com/ftp_space/generic/sparkfs/index.htm +[18]: https://forums.jaspp.org.uk/forum/index.php +[19]: https://forums.jaspp.org.uk/forum/viewforum.php?f=14&sid=d0f037e95c560144f3910503b776aef5 +[20]: http://www.pi-star.co.uk/anymode/ +[21]: https://forums.jaspp.org.uk/forum/viewtopic.php?f=8&t=396 +[22]: https://en.wikipedia.org/wiki/Advanced_Disc_Filing_System +[23]: https://www.riscosopen.org/wiki/documentation/show/File%20Types +[24]: https://forums.jaspp.org.uk/forum/viewforum.php?f=25 +[25]: http://www.acornarcade.com/downloads/ +[26]: https://forums.jaspp.org.uk/forum/viewtopic.php?f=25&t=188 +[27]: http://starfighter.acornarcade.com/ +[28]: https://forums.jaspp.org.uk/forum/viewtopic.php?f=25&t=330 +[29]: https://forums.jaspp.org.uk/forum/viewtopic.php?f=25&t=148 +[30]: http://homepages.paradise.net.nz/mjfoot/riscos.htm diff --git a/sources/tech/20180703 10 killer tools for the admin in a hurry.md b/sources/tech/20180703 10 killer tools for the admin in a hurry.md deleted file mode 100644 index 363f401709..0000000000 --- a/sources/tech/20180703 10 killer tools for the admin in a hurry.md +++ /dev/null @@ -1,87 +0,0 @@ -10 killer tools for the admin in a hurry -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud_tools_hardware.png?itok=PGjJenqT) - -Administering networks and systems can get very stressful when the workload piles up. Nobody really appreciates how long anything takes, and everyone wants their specific thing done yesterday. - -So it's no wonder so many of us are drawn to the open source spirit of figuring out what works and sharing it with everyone. Because, when deadlines are looming, and there just aren't enough hours in the day, it really helps if you can just find free answers you can implement immediately. 
- -So, without further ado, here's my Swiss Army Knife of stuff to get you out of the office before dinner time. - -### Server configuration and scripting - -Let's jump right in. - -**[NixCraft][1]** -Use the site's internal search function. With more than a decade of regular updates, there's gold to be found here—useful scripts and handy hints that can solve your problem straight away. This is often the second place I look after Google. - -**[Webmin][2]** -This gives you a nice web interface to remotely edit your configuration files. It cuts down on a lot of time spent having to juggle directory paths and `sudo nano`, which is handy when you're handling several customers. - -**[Windows Subsystem for Linux][3]** -The reality of the modern workplace is that most employees are on Windows, while the grown-up gear in the server room is on Linux. So sometimes you find yourself trying to do admin tasks from (gasp) a Windows desktop. - -What do you do? Install a virtual machine? It's actually much faster and far less work to configure if you install the Windows Subsystem for Linux compatibility layer, now available at no cost on Windows 10. - -This gives you a Bash terminal in a window where you can run Bash scripts and Linux binaries on the local machine, have full access to both Windows and Linux filesystems, and mount network drives. It's available in Ubuntu, OpenSUSE, SLES, Debian, and Kali flavors. - -**[mRemoteNG][4]** -This is an excellent SSH and remote desktop client for when you have 100+ servers to manage. - -### Setting up a network so you don't have to do it again - -A poorly planned network is the sworn enemy of the admin who hates working overtime. - -**[IP Addressing Schemes that Scale][5]** -The diabolical thing about running out of IP addresses is that, when it happens, the network's grown large enough that a new addressing scheme is an expensive, time-consuming pain in the proverbial. - -Ain't nobody got time for that! - -At some point, IPv6 will finally arrive to save the day. Until then, these one-size-fits-most IP addressing schemes should keep you going, no matter how many network-connected wearables, tablets, smart locks, lights, security cameras, VoIP headsets, and espresso machines the world throws at us. - -**[Linux Chmod Permissions Cheat Sheet][6]** -A short but sweet cheat sheet of Bash commands to set permissions across the network. This is so when Bill from Customer Service falls for that ransomware scam, you're recovering just his files and not the entire company's. - -**[VLSM Subnet Calculator][7]** -Just put in the number of networks you want to create from an address space and the number of hosts you want per network, and it calculates what the subnet mask should be for everything. - -### Single-purpose Linux distributions - -Need a Linux box that does just one thing? It helps if someone else has already sweated the small stuff on an operating system you can install and have ready immediately. - -Each of these has, at one point, made my work day so much easier. - -**[Porteus Kiosk][8]** -This is for when you want a computer totally locked down to just a web browser. With a little tweaking, you can even lock the browser down to just one website. This is great for public access machines. It works with touchscreens or with a keyboard and mouse. - -**[Parted Magic][9]** -This is an operating system you can boot from a USB drive to partition hard drives, recover data, and run benchmarking tools. 
- -**[IPFire][10]** -Hahahaha, I still can't believe someone called a router/firewall/proxy combo "I pee fire." That's my second favorite thing about this Linux distribution. My favorite is that it's a seriously solid software suite. It's so easy to set up and configure, and there is a heap of plugins available to extend it. - -So, how about you? What tools, resources, and cheat sheets have you found to make the workday easier? I'd love to know. Please share in the comments. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/7/tools-admin - -作者:[Grant Hamono][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/grantdxm -[1]:https://www.cyberciti.biz/ -[2]:http://www.webmin.com/ -[3]:http://wsl-guide.org/en/latest/ -[4]:https://mremoteng.org/ -[5]:https://blog.dxmtechsupport.com.au/ip-addressing-for-a-small-business-that-might-grow/ -[6]:https://isabelcastillo.com/linux-chmod-permissions-cheat-sheet -[7]:http://www.vlsm-calc.net/ -[8]:http://porteus-kiosk.org/ -[9]:https://partedmagic.com/ -[10]:https://www.ipfire.org/ diff --git a/sources/tech/20180707 Version Control Before Git with CVS.md b/sources/tech/20180707 Version Control Before Git with CVS.md deleted file mode 100644 index f1c34177a6..0000000000 --- a/sources/tech/20180707 Version Control Before Git with CVS.md +++ /dev/null @@ -1,312 +0,0 @@ -Version Control Before Git with CVS -====== -Github was launched in 2008. If your software engineering career, like mine, is no older than Github, then Git may be the only version control software you have ever used. While people sometimes grouse about its steep learning curve or unintuitive interface, Git has become everyone’s go-to for version control. In Stack Overflow’s 2015 developer survey, 69.3% of respondents used Git, almost twice as many as used the second-most-popular version control system, Subversion. After 2015, Stack Overflow stopped asking developers about the version control systems they use, perhaps because Git had become so popular that the question was uninteresting. - -Git itself is not much older than Github. Linus Torvalds released the first version of Git in 2005. Though today younger developers might have a hard time conceiving of a world where the term “version control software” didn’t more or less just mean Git, such a world existed not so long ago. There were lots of alternatives to choose from. Open source developers preferred Subversion, enterprises and video game companies used Perforce (some still do), while the Linux kernel project famously relied on a version control system called BitKeeper. - -Some of these systems, particularly BitKeeper, might feel familiar to a young Git user transported back in time. Most would not. BitKeeper aside, the version control systems that came before Git worked according to a fundamentally different paradigm. In a taxonomy offered by Eric Sink, author of Version Control by Example, Git is a third-generation version control system, while most of Git’s predecessors, the systems popular in the 1990s and early 2000s, are second-generation version control systems. Where third-generation version control systems are distributed, second-generation version control systems are centralized. 
You have almost certainly heard Git described as a “distributed” version control system before. I never quite understood the distributed/centralized distinction, at least not until I installed and experimented with a centralized second-generation version control system myself. - -The system I installed was CVS. CVS, short for Concurrent Versions System, was the very first second-generation version control system. It was also the most popular version control system for about a decade until it was replaced in 2000 by Subversion. Even then, Subversion was supposed to be “CVS but better,” which only underscores how dominant CVS had become throughout the 1990s. - -CVS was first developed in 1986 by a Dutch computer scientist named Dick Grune, who was looking for a way to collaborate with his students on a compiler project. CVS was initially little more than a collection of shell scripts wrapping RCS (Revision Control System), a first-generation version control system that Grune wanted to improve. RCS works according to a pessimistic locking model, meaning that no two programmers can work on a single file at once. In order to edit a file, you have to first ask RCS for an exclusive lock on the file, which you keep until you are finished editing. If someone else is already editing a file you need to edit, you have to wait. CVS improved on RCS and ushered in the second generation of version control systems by trading the pessimistic locking model for an optimistic one. Programmers could now edit the same file at the same time, merging their edits and resolving any conflicts later. (Brian Berliner, an engineer who later took over the CVS project, wrote a very readable [paper][1] about CVS’ innovations in 1990.) - -In that sense, CVS wasn’t all that different from Git, which also works according to an optimistic model. But that’s where the similarities end. In fact, when Linus Torvalds was developing Git, one of his guiding principles was WWCVSND, or “What Would CVS Not Do.” Whenever he was in doubt about a decision, he strove to choose the option that had not been chosen in the design of CVS. So even though CVS predates Git by over a decade, it influenced Git as a kind of negative template. - -I’ve really enjoyed playing around with CVS. I think there’s no better way to understand why Git’s distributed nature is such an improvement on what came before. So I invite you to come along with me on an exciting journey and spend the next ten minutes of your life learning about a piece of software nobody has used in the last decade. (See correction.) - -### Getting Started with CVS - -Instructions for installing CVS can be found on the [project’s homepage][2]. On MacOS, you can install CVS using Homebrew. - -Since CVS is centralized, it distinguishes between the client-side universe and the server-side universe in a way that something like Git does not. The distinction is not so pronounced that there are different executables. But in order to start using CVS, even on your own machine, you’ll have to set up the CVS backend. - -The CVS backend, the central store for all your code, is called the repository. Whereas in Git you would typically have a repository for every project, in CVS the repository holds all of your projects. There is one central repository for everything, though there are ways to work with only a project at a time. - -To create a local repository, you run the `init` command. You would do this somewhere global like your home directory. 
- -``` -$ cvs -d ~/sandbox init -``` - -CVS allows you to pass options to either the `cvs` command itself or to the `init` subcommand. Options that appear after the `cvs` command are global in nature, while options that appear after the subcommand are specific to the subcommand. In this case, the `-d` flag is global. Here it happens to tell CVS where we want to create our repository, but in general the `-d` flag points to the location of the repository we want to use for any given action. It can be tedious to supply the `-d` flag all the time, so the `CVSROOT` environment variable can be set instead. - -Since we’re working locally, we’ve just passed a path for our `-d` argument, but we could also have included a hostname. - -The command creates a directory called `sandbox` in your home directory. If you list the contents of `sandbox`, you’ll find that it contains another directory called `CVSROOT`. This directory, not to be confused with the environment variable, holds administrative files for the repository. - -Congratulations! You’ve just created your first CVS repository. - -### Checking In Code - -Let’s say that you’ve decided to keep a list of your favorite colors. You are an artistically inclined but extremely forgetful person. You type up your list of colors and save it as a file called `favorites.txt`: - -``` -blue -orange -green - -definitely not yellow -``` - -Let’s also assume that you’ve saved your file in a new directory called `colors`. Now you’d like to put your favorite color list under version control, because fifty years from now it will be interesting to look back and see how your tastes changed through time. - -In order to do that, you will have to import your directory as a new CVS project. You can do that using the `import` command: - -``` -$ cvs -d ~/sandbox import -m "" colors colors initial -N colors/favorites.txt - -No conflicts created by this import -``` - -Here we are specifying the location of our repository with the `-d` flag again. The remaining arguments are passed to the `import` subcommand. We have to provide a message, but here we don’t really need one, so we’ve left it blank. The next argument, `colors`, specifies the name of our new directory in the repository; here we’ve just used the same name as the directory we are in. The last two arguments specify the vendor tag and the release tag respectively. We’ll talk more about tags in a minute. - -You’ve just pulled your “colors” project into the CVS repository. There are a couple different ways to go about bringing code into CVS, but this is the method recommended by [Pragmatic Version Control Using CVS][3], the Pragmatic Programmer book about CVS. What makes this method a little awkward is that you then have to check out your work fresh, even though you’ve already got an existing `colors` directory. Instead of using that directory, you’re going to delete it and then check out the version that CVS already knows about: - -``` -$ cvs -d ~/sandbox co colors -cvs checkout: Updating colors -U colors/favorites.txt -``` - -This will create a new directory, also called `colors`. In this directory you will find your original `favorites.txt` file along with a directory called `CVS`. The `CVS` directory is basically CVS’ equivalent of the `.git` directory in every Git repository. - -### Making Changes - -Get ready for a trip. - -Just like Git, CVS has a `status` subcommand: - -``` -$ cvs status -cvs status: Examining . 
-=================================================================== -File: favorites.txt Status: Up-to-date - - Working revision: 1.1.1.1 2018-07-06 19:27:54 -0400 - Repository revision: 1.1.1.1 /Users/sinclairtarget/sandbox/colors/favorites.txt,v - Commit Identifier: fD7GYxt035GNg8JA - Sticky Tag: (none) - Sticky Date: (none) - Sticky Options: (none) -``` - -This is where things start to look alien. CVS doesn’t have commit objects. In the above, there is something called a “Commit Identifier,” but this might be only a relatively recent edition—no mention of a “Commit Identifier” appears in Pragmatic Version Control Using CVS, which was published in 2003. (The last update to CVS was released in 2008.) - -Whereas with Git you’d talk about the version of a file associated with commit `45de392`, in CVS files are versioned separately. The first version of your file is version 1.1, the next version is 1.2, and so on. When branches are involved, extra numbers are appended, so you might end up with something like the `1.1.1.1` above, which appears to be the default in our case even though we haven’t created any branches. - -If you were to run `cvs log` (equivalent to `git log`) in a project with lots of files and commits, you’d see an individual history for each file. You might have a file at version 1.2 and a file at version 1.14 in the same project. - -Let’s go ahead and make a change to version 1.1 of our `favorites.txt` file: - -``` - blue - orange - green -+cyan - - definitely not yellow -``` - -Once we’ve made the change, we can run `cvs diff` to see what CVS thinks we’ve done: - -``` -$ cvs diff -cvs diff: Diffing . -Index: favorites.txt -=================================================================== -RCS file: /Users/sinclairtarget/sandbox/colors/favorites.txt,v -retrieving revision 1.1.1.1 -diff -r1.1.1.1 favorites.txt -3a4 -> cyan -``` - -CVS recognizes that we added a new line containing the color “cyan” to the file. (Actually, it says we’ve made changes to the “RCS” file; you can see that CVS never fully escaped its original association with RCS.) The diff we are being shown is the diff between the copy of `favorites.txt` in our working directory and the 1.1.1.1 version stored in the repository. - -In order to update the version stored in the repository, we have to commit the change. In Git, this would be a multi-step process. We’d have to stage the change so that it appears in our index. Then we’d commit the change. Finally, to make the change visible to anyone else, we’d have to push the commit up to the origin repository. - -In CVS, all of these things happen when you run `cvs commit`. CVS just bundles up all the changes it can find and puts them in the repository: - -``` -$ cvs commit -m "Add cyan to favorites." -cvs commit: Examining . -/Users/sinclairtarget/sandbox/colors/favorites.txt,v <-- favorites.txt -new revision: 1.2; previous revision: 1.1 -``` - -I’m so used to Git that this strikes me as terrifying. Without an opportunity to stage changes, any old thing that you’ve touched in your working directory might end up as part of the public repository. Did you passive-aggressively rewrite a coworker’s poorly implemented function out of cathartic necessity, never intending for him to know? Too bad, he now thinks you’re a dick. You also can’t edit your commits before pushing them, since a commit is a push. Do you enjoy spending 40 minutes repeatedly running `git rebase -i` until your local commit history flows like the derivation of a mathematical proof? 
Sorry, you can’t do that here, and everyone is going to find out that you don’t actually write your tests first. - -But I also now understand why so many people find Git needlessly complicated. If `cvs commit` is what you were used to, then I’m sure staging and pushing changes would strike you as a pointless chore. - -When people talk about Git being a “distributed” system, this is primarily the difference they mean. In CVS, you can’t make commits locally. A commit is a submission of code to the central repository, so it’s not something you can do without a connection. All you’ve got locally is your working directory. In Git, you have a full-fledged local repository, so you can make commits all day long even while disconnected. And you can edit those commits, revert, branch, and cherry pick as much as you want, without anybody else having to know. - -Since commits were a bigger deal, CVS users often made them infrequently. Commits would contain as many changes as today we might expect to see in a ten-commit pull request. This was especially true if commits triggered a CI build and an automated test suite. - -If we now run `cvs status`, we can see that we have a new version of our file: - -``` -$ cvs status -cvs status: Examining . -=================================================================== -File: favorites.txt Status: Up-to-date - - Working revision: 1.2 2018-07-06 21:18:59 -0400 - Repository revision: 1.2 /Users/sinclairtarget/sandbox/colors/favorites.txt,v - Commit Identifier: pQx5ooyNk90wW8JA - Sticky Tag: (none) - Sticky Date: (none) - Sticky Options: (none) -``` - -### Merging - -As mentioned above, in CVS you can edit a file that someone else is already editing. That was CVS’ big improvement on RCS. What happens when you need to bring your changes back together? - -Let’s say that you have invited some friends to add their favorite colors to your list. While they are adding their colors, you decide that you no longer like the color green and remove it from the list. - -When you go to commit your changes, you might discover that CVS notices a problem: - -``` -$ cvs commit -m "Remove green" -cvs commit: Examining . -cvs commit: Up-to-date check failed for `favorites.txt' -cvs [commit aborted]: correct above errors first! -``` - -It looks like your friends committed their changes first. So your version of `favorites.txt` is not up-to-date with the version in the repository. If you run `cvs status`, you’ll see that your local copy of `favorites.txt` is version 1.2 with some local changes, but the repository version is 1.3: - -``` -$ cvs status -cvs status: Examining . -=================================================================== -File: favorites.txt Status: Needs Merge - - Working revision: 1.2 2018-07-07 10:42:43 -0400 - Repository revision: 1.3 /Users/sinclairtarget/sandbox/colors/favorites.txt,v - Commit Identifier: 2oZ6n0G13bDaldJA - Sticky Tag: (none) - Sticky Date: (none) - Sticky Options: (none) -``` - -You can run `cvs diff` to see exactly what the differences between 1.2 and 1.3 are: - -``` -$ cvs diff -r HEAD favorites.txt -Index: favorites.txt -=================================================================== -RCS file: /Users/sinclairtarget/sandbox/colors/favorites.txt,v -retrieving revision 1.3 -diff -r1.3 favorites.txt -3d2 -< green -7,10d5 -< -< pink -< hot pink -< bubblegum pink -``` - -It seems that our friends really like pink. In any case, they’ve edited a different part of the file than we have, so the changes are easy to merge. 
CVS can do that for us when we run `cvs update`, which is similar to `git pull`: - -``` -$ cvs update -cvs update: Updating . -RCS file: /Users/sinclairtarget/sandbox/colors/favorites.txt,v -retrieving revision 1.2 -retrieving revision 1.3 -Merging differences between 1.2 and 1.3 into favorites.txt -M favorites.txt -``` - -If you now take a look at `favorites.txt`, you’ll find that it has been modified to include the changes that your friends made to the file. Your changes are still there too. Now you are free to commit the file: - -``` -$ cvs commit -cvs commit: Examining . -/Users/sinclairtarget/sandbox/colors/favorites.txt,v <-- favorites.txt -new revision: 1.4; previous revision: 1.3 -``` - -The end result is what you’d get in Git by running `git pull --rebase`. Your changes have been added on top of your friends’ changes. There is no “merge commit.” - -Sometimes, changes to the same file might be incompatible. If your friends had changed “green” to “olive,” for example, that would have conflicted with your change removing “green” altogether. In the early days of CVS, this was exactly the kind of case that caused people to worry that CVS wasn’t safe; RCS’ pessimistic locking ensured that such a case could never arise. But CVS guarantees safety by making sure that nobody’s changes get overwritten automatically. You have to tell CVS which change you want to keep going forward, so when you run `cvs update`, CVS marks up the file with both changes in the same way that Git does when Git detects a merge conflict. You then have to manually edit the file and pick the change you want to keep. - -The interesting thing to note here is that merge conflicts have to be fixed before you can commit. This is another consequence of CVS’ centralized nature. In Git, you don’t have to worry about resolving merges until you push the commits you’ve got locally. - -Since CVS doesn’t have easily addressable commit objects, the only way to group a collection of changes is to mark a particular working directory state with a tag. - -Creating a tag is easy: - -``` -$ cvs tag VERSION_1_0 -cvs tag: Tagging . -T favorites.txt -``` - -You’ll later be able to return files to this state by running `cvs update` and passing the tag to the `-r` flag: - -``` -$ cvs update -r VERSION_1_0 -cvs update: Updating . -U favorites.txt -``` - -Because you need a tag to rewind to an earlier working directory state, CVS encourages a lot of preemptive tagging. Before major refactors, for example, you might create a `BEFORE_REFACTOR_01` tag that you could later use if the refactor went wrong. People also used tags if they wanted to generate project-wide diffs. Basically, all the things we routinely do today with commit hashes have to be anticipated and planned for with CVS, since you needed to have the tags available already. - -Branches can be created in CVS, sort of. Branches are just a special kind of tag: - -``` -$ cvs rtag -b TRY_EXPERIMENTAL_THING colors -cvs rtag: Tagging colors -``` - -That only creates the branch (in full view of everyone, by the way), so you still need to switch to it using `cvs update`: - -``` -$ cvs update -r TRY_EXPERIMENTAL_THING -``` - -The above commands switch onto the new branch in your current working directory, but Pragmatic Version Control Using CVS actually advises that you create a new directory to hold your new branch. Presumably its authors found switching directories easier than switching branches in CVS. 
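-
-A minimal sketch of that directory-per-branch workflow, reusing the `colors` project and the `TRY_EXPERIMENTAL_THING` branch from above, might look like this:
-
-```
-$ mkdir ~/colors-experimental
-$ cd ~/colors-experimental
-$ cvs -d ~/sandbox checkout -r TRY_EXPERIMENTAL_THING colors
-```
-
-Each working directory remembers its own sticky tag, so you can then move between the mainline and the branch simply by changing directories.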
- -Pragmatic Version Control Using CVS also advises against creating branches off of an existing branch. They recommend only creating branches off of the mainline branch, which in Git is known as `master`. In general, branching was considered an “advanced” CVS skill. In Git, you might start a new branch for almost any trivial reason, but in CVS branching was typically used only when really necessary, such as for releases. - -A branch could later be merged back into the mainline using `cvs update` and the `-j` flag: - -``` -$ cvs update -j TRY_EXPERIMENTAL_THING -``` - -### Thanks for the Commit Histories - -In 2007, Linus Torvalds gave [a talk][4] about Git at Google. Git was very new then, so the talk was basically an attempt to persuade a roomful of skeptical programmers that they should use Git, even though Git was so different from anything then available. If you haven’t already seen the talk, I highly encourage you to watch it. Linus is an entertaining speaker, even if he never fails to be his brash self. He does an excellent job of explaining why the distributed model of version control is better than the centralized one. A lot of his criticism is reserved for CVS in particular. - -Git is a [complex tool][5]. Learning it can be a frustrating experience. But I’m also continually amazed at the things that Git can do. In comparison, CVS is simple and straightforward, though often unable to do many of the operations we now take for granted. Going back and using CVS for a while is an excellent way to find yourself with a new appreciation for Git’s power and flexibility. It illustrates well why understanding the history of software development can be so beneficial—picking up and re-examining obsolete tools will teach you volumes about the why behind the tools we use today. - -If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][6] on Twitter or subscribe to the [RSS feed][7] to make sure you know when a new post is out. - -#### Correction - -I’ve been told that there are many organizations, particularly risk-adverse organizations that do things like make medical device software, that still use CVS. Programmers in these organizations have developed little tricks for working around CVS’ limitations, such as making a new branch for almost every change to avoid committing directly to `HEAD`. (Thanks to Michael Kohne for pointing this out.) 
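-
-Using the commands covered above, that branch-per-change trick might look something like the following sketch. The tag name here is invented, and the one new flag is `-A`, which clears the sticky tag and returns your working directory to the mainline:
-
-```
-$ cvs rtag -b CHANGE_FIX_TYPO colors
-$ cvs update -r CHANGE_FIX_TYPO
-$ cvs commit -m "Fix typo."
-$ cvs update -A
-$ cvs update -j CHANGE_FIX_TYPO
-$ cvs commit -m "Merge the typo fix."
-```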
- --------------------------------------------------------------------------------- - -via: https://twobithistory.org/2018/07/07/cvs.html - -作者:[Two-Bit History][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twobithistory.org -[b]: https://github.com/lujun9972 -[1]: https://docs.freebsd.org/44doc/psd/28.cvs/paper.pdf -[2]: https://www.nongnu.org/cvs/ -[3]: http://shop.oreilly.com/product/9780974514000.do -[4]: https://www.youtube.com/watch?v=4XpnKHJAok8 -[5]: https://xkcd.com/1597/ -[6]: https://twitter.com/TwoBitHistory -[7]: https://twobithistory.org/feed.xml diff --git a/sources/tech/20180709 5 Firefox extensions to protect your privacy.md b/sources/tech/20180709 5 Firefox extensions to protect your privacy.md deleted file mode 100644 index 848856fe07..0000000000 --- a/sources/tech/20180709 5 Firefox extensions to protect your privacy.md +++ /dev/null @@ -1,54 +0,0 @@ -5 Firefox extensions to protect your privacy -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/biz_cinderblock_cloud_yellowhat.jpg?itok=sJdlsYTF) - -In the wake of the Cambridge Analytica story, I took a hard look at how far I had let Facebook penetrate my online presence. As I'm generally concerned about single points of failure (or compromise), I am not one to use social logins. I use a password manager and create unique logins for every site (and you should, too). - -What I was most perturbed about was the pervasive intrusion Facebook was having on my digital life. I uninstalled the Facebook mobile app almost immediately after diving into the Cambridge Analytica story. I also [disconnected all apps, games, and websites][1] from Facebook. Yes, this will change your experience on Facebook, but it will also protect your privacy. As a veteran with friends spread out across the globe, maintaining the social connectivity of Facebook is important to me. - -I went about the task of scrutinizing other services as well. I checked Google, Twitter, GitHub, and more for any unused connected applications. But I know that's not enough. I need my browser to be proactive in preventing behavior that violates my privacy. I began the task of figuring out how best to do that. Sure, I can lock down a browser, but I need to make the sites and tools I use work while trying to keep them from leaking data. - -Following are five tools that will protect your privacy while using your browser. The first three extensions are available for Firefox and Chrome, while the latter two are only available for Firefox. - -### Privacy Badger - -[Privacy Badger][2] has been my go-to extension for quite some time. Do other content or ad blockers do a better job? Maybe. The problem with a lot of content blockers is that they are "pay for play." Meaning they have "partners" that get whitelisted for a fee. That is the antithesis of why content blockers exist. Privacy Badger is made by the Electronic Frontier Foundation (EFF), a nonprofit entity with a donation-based business model. Privacy Badger promises to learn from your browsing habits and requires minimal tuning. For example, I have only had to whitelist a handful of sites. Privacy Badger also allows granular controls of exactly which trackers are enabled on what sites. It's my #1, must-install extension, no matter the browser. 
- -### DuckDuckGo Privacy Essentials - -The search engine DuckDuckGo has typically been privacy-conscious. [DuckDuckGo Privacy Essentials][3] works across major mobile devices and browsers. It's unique in the sense that it grades sites based on the settings you give them. For example, Facebook gets a D, even with Privacy Protection enabled. Meanwhile, [chrisshort.net][4] gets a B with Privacy Protection enabled and a C with it disabled. If you're not keen on EFF or Privacy Badger for whatever reason, I would recommend DuckDuckGo Privacy Essentials (choose one, not both, as they essentially do the same thing). - -### HTTPS Everywhere - -[HTTPS Everywhere][5] is another extension from the EFF. According to HTTPS Everywhere, "Many sites on the web offer some limited support for encryption over HTTPS, but make it difficult to use. For instance, they may default to unencrypted HTTP or fill encrypted pages with links that go back to the unencrypted site. The HTTPS Everywhere extension fixes these problems by using clever technology to rewrite requests to these sites to HTTPS." While a lot of sites and browsers are getting better about implementing HTTPS, there are a lot of sites that still need help. HTTPS Everywhere will try its best to make sure your traffic is encrypted. - -### NoScript Security Suite - -[NoScript Security Suite][6] is not for the faint of heart. While the Firefox-only extension "allows JavaScript, Java, Flash, and other plugins to be executed only by trusted websites of your choice," it doesn't do a great job at figuring out what your choices are. But, make no mistake, a surefire way to prevent leaking data is not executing code that could leak it. NoScript enables that via its "whitelist-based preemptive script blocking." This means you will need to build the whitelist as you go for sites not already on it. Note that NoScript is only available for Firefox. - -### Facebook Container - -[Facebook Container][7] makes Firefox the only browser where I will use Facebook. "Facebook Container works by isolating your Facebook identity into a separate container that makes it harder for Facebook to track your visits to other websites with third-party cookies." This means Facebook cannot snoop on activity happening elsewhere in your browser. Suddenly those creepy ads will stop appearing so frequently (assuming you uninstalled the Facebook app from your mobile devices). Using Facebook in an isolated space will prevent any additional collection of data. Remember, you've given Facebook data already, and Facebook Container can't prevent that data from being shared. - -These are my go-to extensions for browser privacy. What are yours? Please share them in the comments. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/7/firefox-extensions-protect-privacy - -作者:[Chris Short][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/chrisshort -[1]:https://www.facebook.com/help/211829542181913 -[2]:https://www.eff.org/privacybadger -[3]:https://duckduckgo.com/app -[4]:https://chrisshort.net -[5]:https://www.eff.org/https-everywhere -[6]:https://noscript.net/ -[7]:https://addons.mozilla.org/en-US/firefox/addon/facebook-container/ diff --git a/sources/tech/20180709 Anbox- How To Install Google Play Store And Enable ARM (libhoudini) Support, The Easy Way.md b/sources/tech/20180709 Anbox- How To Install Google Play Store And Enable ARM (libhoudini) Support, The Easy Way.md deleted file mode 100644 index f390109123..0000000000 --- a/sources/tech/20180709 Anbox- How To Install Google Play Store And Enable ARM (libhoudini) Support, The Easy Way.md +++ /dev/null @@ -1,101 +0,0 @@ -Anbox: How To Install Google Play Store And Enable ARM (libhoudini) Support, The Easy Way -====== -**[Anbox][1], or Android in a Box, is a free and open source tool that allows running Android applications on Linux.** It works by running the Android runtime environment in an LXC container, recreating the directory structure of Android as a mountable loop image, while using the native Linux kernel to execute applications. - -Its key features are security, performance, integration and convergence (scales across different form factors), according to its website. - -**Using Anbox, each Android application or game is launched in a separate window, just like system applications** , and they behave more or less like regular windows, showing up in the launcher, can be tiled, etc. - -By default, Anbox doesn't ship with the Google Play Store or support for ARM applications. To install applications you must download each app APK and install it manually using adb. Also, installing ARM applications or games doesn't work by default with Anbox - trying to install ARM apps results in the following error being displayed: -``` -Failed to install PACKAGE.NAME.apk: Failure [INSTALL_FAILED_NO_MATCHING_ABIS: Failed to extract native libraries, res=-113] - -``` - -You can set up both Google Play Store and support for ARM applications (through libhoudini) manually for Android in a Box, but it's a quite complicated process. **To make it easier to install Google Play Store and Google Play Services on Anbox, and get it to support ARM applications and games (using libhoudini), the folks at[geeks-r-us.de][2] (linked article is in German) have created a [script][3] that automates these tasks.** - -Before using this, I'd like to make it clear that not all Android applications and games work in Anbox, even after integrating libhoudini for ARM support. Some Android applications and games may not show up in the Google Play Store at all, while others may be available for installation but will not work. Also, some features may not be available in some applications. - -### Install Google Play Store and enable ARM applications / games support on Anbox (Android in a Box) - -These instructions will obviously not work if Anbox is not already installed on your Linux desktop. 
If you haven't already, install Anbox by following the installation instructions found [here][5]. Also, you should run `anbox.appmgr` at least once after installing Anbox and before using this script, to avoid running into issues.
-
-1\. Install the required dependencies (`wget`, `lzip`, `unzip` and `squashfs-tools`).
-
-In Debian, Ubuntu or Linux Mint, use this command to install the required dependencies:
-```
-sudo apt install wget lzip unzip squashfs-tools
-
-```
-
-2\. Download and run the script that automatically downloads and installs Google Play Store (and Google Play Services) and libhoudini (for ARM apps / games support) on your Android in a Box installation.
-
-**Warning: never run a script you didn't write without knowing what it does. Before running this script, check out its [code][4].**
-
-To download the script, make it executable and run it on your Linux desktop, use these commands in a terminal:
-```
-wget https://raw.githubusercontent.com/geeks-r-us/anbox-playstore-installer/master/install-playstore.sh
-chmod +x install-playstore.sh
-sudo ./install-playstore.sh
-
-```
-
-3\. To get Google Play Store to work in Anbox, you need to enable all the permissions for both Google Play Store and Google Play Services.
-
-To do this, run Anbox:
-```
-anbox.appmgr
-
-```
-
-Then go to `Settings > Apps > Google Play Services > Permissions` and enable all available permissions. Do the same for Google Play Store!
-
-You should now be able to log in to Google Play Store using a Google account.
-
-Without enabling all permissions for Google Play Store and Google Play Services, you may encounter an issue when trying to log in to your Google account, with the following error message: " _Couldn't sign in. There was a problem communicating with Google servers. Try again later_ ", as you can see in this screenshot:
-
-After logging in, you can disable some of the Google Play Store / Google Play Services permissions.
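-
-Keep in mind that Google Play is not the only way to get software into Anbox. If an app you want isn't listed there or won't install, you can still sideload its APK manually with adb, as mentioned at the beginning of this article (the path below is just a placeholder):
-```
-adb install /path/to/application.apk
-
-```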
-
-**If you're encountering some connectivity issues when logging in to your Google account on Anbox,** make sure the `anbox-bridge.sh` script is running:
-
- * to start it:
-
-
-```
-sudo /snap/anbox/current/bin/anbox-bridge.sh start
-
-```
-
- * to restart it:
-
-
-```
-sudo /snap/anbox/current/bin/anbox-bridge.sh restart
-
-```
-
-You may also need to install the dnsmasq package if you continue to have connectivity issues with Anbox, according to [this comment on the Anbox issue tracker][6].
-
--------------------------------------------------------------------------------
-
-via: https://www.linuxuprising.com/2018/07/anbox-how-to-install-google-play-store.html
-
-作者:[Logix][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://plus.google.com/118280394805678839070
-[1]:https://anbox.io/
-[2]:https://geeks-r-us.de/2017/08/26/android-apps-auf-dem-linux-desktop/
-[3]:https://github.com/geeks-r-us/anbox-playstore-installer/
-[4]:https://github.com/geeks-r-us/anbox-playstore-installer/blob/master/install-playstore.sh
-[5]:https://docs.anbox.io/userguide/install.html
-[6]:https://github.com/anbox/anbox/issues/118#issuecomment-295270113
diff --git a/sources/tech/20180710 Users, Groups, and Other Linux Beasts.md b/sources/tech/20180710 Users, Groups, and Other Linux Beasts.md
deleted file mode 100644
index 6083111a32..0000000000
--- a/sources/tech/20180710 Users, Groups, and Other Linux Beasts.md
+++ /dev/null
@@ -1,153 +0,0 @@
-Users, Groups, and Other Linux Beasts
-======
-
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/flamingo-2458782_1920.jpg?itok=_gkzGGx5)
-
-Having reached this stage, [after seeing how to manipulate folders/directories][1], but before flinging ourselves headlong into fiddling with files, we have to brush up on the matter of _permissions_ , _users_ and _groups_. Luckily, [there is already an excellent and comprehensive tutorial on this site that covers permissions][2], so you should go and read that right now. In a nutshell: you use permissions to establish who can do stuff to files and directories and what they can do with each file and directory -- read from it, write to it, move it, erase it, etc.
-
-To try everything this tutorial covers, you'll need to create a new user on your system. Let's be practical and make a user for anybody who needs to borrow your computer, that is, what we call a _guest account_.
-
-**WARNING:** _Creating and especially deleting users, along with home directories, can seriously damage your system if, for example, you remove your own user and files by mistake. You may want to practice on another machine which is not your main work machine or on a virtual machine. Regardless of whether you want to play it safe, or not, it is always a good idea to back up your stuff frequently, check the backups have worked correctly, and save yourself a lot of gnashing of teeth later on._
-
-### A New User
-
-You can create a new user with the `useradd` command. Run `useradd` with superuser/root privileges, that is, using `sudo` or `su` depending on your system. With `sudo`, you can do:
-```
-sudo useradd -m guest
-
-```
-
-... and input your password. Or do:
-```
-su -c "useradd -m guest"
-
-```
-
-... and input the password of root/the superuser.
-
-( _For the sake of brevity, we'll assume from now on that you get superuser/root privileges by using `sudo`_ ).
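-
-If you want to check that the new account really exists, `getent` (a command we will meet again further down) can read the entry back from the user database. The exact UID, GID and shell will vary from system to system, but you should see something along these lines:
-```
-getent passwd guest
-guest:x:1001:1001::/home/guest:/bin/bash
-
-```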
-
-By including the `-m` argument, `useradd` will create a home directory for the new user. You can see its contents by listing _/home/guest_.
-
-Next you can set up a password for the new user with:
-```
-sudo passwd guest
-
-```
-
-Or you could use `adduser` instead, which is interactive and asks you a bunch of questions, including what shell you want to assign the user (yes, there is more than one), where you want their home directory to be, what groups you want them to belong to (more about that in a second) and so on. At the end of running `adduser`, you get to set the password. Note that `adduser` is not installed by default on many distributions, while `useradd` is.
-
-Incidentally, you can get rid of a user with `userdel`:
-```
-sudo userdel -r guest
-
-```
-
-With the `-r` option, `userdel` not only removes the _guest_ user, but also deletes their home directory and removes their entry in the mail spool, if they had one.
-
-### Skeletons at Home
-
-Talking of users' home directories, depending on what distro you're on, you may have noticed that when you use the `-m` option, `useradd` populates a user's directory with subdirectories for music, documents, and whatnot, as well as an assortment of hidden files. To see everything in your guest's home directory, run `sudo ls -la /home/guest`.
-
-What goes into a new user's directory is determined by a skeleton directory, which is usually _/etc/skel_. Sometimes it may be a different directory, though. To check which directory is being used, run:
-```
-useradd -D
-GROUP=100
-HOME=/home
-INACTIVE=-1
-EXPIRE=
-SHELL=/bin/bash
-SKEL=/etc/skel
-CREATE_MAIL_SPOOL=no
-
-```
-
-This gives you some extra interesting information, but what you're interested in right now is the `SKEL=/etc/skel` line. In this case, and as is customary, it is pointing to _/etc/skel/_.
-
-As everything is customizable in Linux, you can, of course, change what gets put into a newly created user directory. Try this: Create a new directory in _/etc/skel/_:
-```
-sudo mkdir /etc/skel/Documents
-
-```
-
-And create a file containing a welcome text and copy it over:
-```
-sudo cp welcome.txt /etc/skel/Documents
-
-```
-
-Now delete the guest account:
-```
-sudo userdel -r guest
-
-```
-
-And create it again:
-```
-sudo useradd -m guest
-
-```
-
-Hey presto! Your _Documents/_ directory and _welcome.txt_ file magically appear in the guest's home directory.
-
-You can also modify other things when you create a user by editing _/etc/default/useradd_. Mine looks like this:
-```
-GROUP=users
-HOME=/home
-INACTIVE=-1
-EXPIRE=
-SHELL=/bin/bash
-SKEL=/etc/skel
-CREATE_MAIL_SPOOL=no
-
-```
-
-Most of these options are self-explanatory, but let's take a closer look at the `GROUP` option.
-
-### Herd Mentality
-
-Instead of assigning permissions and privileges to users one by one, Linux and other Unix-like operating systems rely on _groups_. A group is what you imagine it to be: a bunch of users that are related in some way. On your system you may have a group of users that are allowed to use the printer. They would belong to the _lp_ (for "_line printer_") group. The members of the _wheel_ group were traditionally the only ones who could become superuser/root by using _su_. The _network_ group of users can bring up and power down the network. And so on and so forth.
-
-Different distributions have different groups, and groups with the same or similar names may have different privileges, depending on the distribution you are using.
So don't be surprised if what you read in the prior paragraph doesn't match what is going on in your system.
-
-Either way, to see which groups are on your system you can use:
-```
-getent group
-
-```
-
-The `getent` command lists the contents of some of the system's databases.
-
-To find out which groups your current user belongs to, try:
-```
-groups
-
-```
-
-When you create a new user with `useradd`, unless you specify otherwise, the user will only belong to one group: their own. A _guest_ user will belong to a _guest_ group, and the group gives the user the power to administer their own stuff, and that is about it.
-
-You can create new groups and then add users to them at will with the `groupadd` command:
-```
-sudo groupadd photos
-
-```
-
-will create the _photos_ group, for example. Next time, we’ll use this to build a shared directory all members of the group can read from and write to, and we'll learn even more about permissions and privileges. Stay tuned!
-
-Learn more about Linux through the free ["Introduction to Linux"][3] course from The Linux Foundation and edX.
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts
-
-作者:[Paul Brown][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/bro66
-[1]:https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux
-[2]:https://www.linux.com/learn/understanding-linux-file-permissions
-[3]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20180716 Users, Groups and Other Linux Beasts- Part 2.md b/sources/tech/20180716 Users, Groups and Other Linux Beasts- Part 2.md
deleted file mode 100644
index b164bb6cf5..0000000000
--- a/sources/tech/20180716 Users, Groups and Other Linux Beasts- Part 2.md
+++ /dev/null
@@ -1,110 +0,0 @@
-Users, Groups and Other Linux Beasts: Part 2
-======
-![](https://www.linux.com/blog/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts-part-2)
-In this ongoing tour of Linux, we've looked at [how to manipulate folders/directories][1], and now we're continuing our discussion of _permissions_, _users_ and _groups_, which are necessary to establish who can manipulate which files and directories. [Last time,][2] we showed how to create new users, and now we're going to dive right back in:
-
-You can create new groups and then add users to them at will with the `groupadd` command. For example, using:
-```
-sudo groupadd photos
-
-```
-
-will create the _photos_ group.
-
-You'll need to [create a directory][1] hanging off the root directory:
-```
-sudo mkdir /photos
-
-```
-
-If you run `ls -l /`, one of the lines will be:
-```
-drwxr-xr-x 1 root root 0 jun 26 21:14 photos
-
-```
-
-The first _root_ in the output is the user owner and the second _root_ is the group owner.
-
-To transfer the ownership of the _/photos_ directory to the _photos_ group, use:
-```
-sudo chgrp photos /photos
-
-```
-
-The `chgrp` command typically takes two parameters: the first is the group that will take ownership of the file or directory, and the second is the file or directory you want to give over to the group.
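-
-Incidentally, the same handover can be scripted. Here is a minimal Python sketch using the standard `grp` and `os` modules (run with root privileges, and assuming the `photos` group and `/photos` directory created above):
-```
-import grp
-import os
-
-gid = grp.getgrnam("photos").gr_gid      # numeric id of the photos group
-os.chown("/photos", -1, gid)             # -1 leaves the user owner unchanged
-print(os.stat("/photos").st_gid == gid)  # True once the group owner is set
-
-```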
-
-Next, run `ls -l /` and you'll see the line has changed to:
-```
-drwxr-xr-x 1 root photos 0 jun 26 21:14 photos
-
-```
-
-You have successfully transferred the ownership of your new directory over to the _photos_ group.
-
-Then, add your own user and the _guest_ user to the _photos_ group (substitute your own login for `your_username`):
-```
-sudo usermod your_username -a -G photos
-sudo usermod guest -a -G photos
-
-```
-
-You may have to log out and log back in to see the changes, but, when you do, running `groups` will show _photos_ as one of the groups you belong to.
-
-There are a couple of things to point out about the `usermod` command shown above. First: Be careful not to use the `-g` option instead of `-G`. The `-g` option changes your primary group and could lock you out of your stuff if you use it by accident. `-G`, on the other hand, _adds_ you to the groups listed and doesn't mess with the primary group. If you want to add your user to more than one group, list them one after another, separated by commas, no spaces, after `-G`:
-```
-sudo usermod your_username -a -G photos,pizza,spaceforce
-
-```
-
-Second: Be careful not to forget the `-a` parameter. The `-a` parameter stands for _append_ and attaches the list of groups you pass to `-G` to the ones you already belong to. This means that, if you don't include `-a`, the list of groups you already belong to will be overwritten, again locking you out from stuff you need.
-
-Neither of these are catastrophic problems, but it will mean you will have to add your user back manually to all the groups you belonged to, which can be a pain, especially if you have lost access to the _sudo_ and _wheel_ group.
-
-### Permits, Please!
-
-There is still one more thing to do before you can copy images to the _/photos_ directory. Notice how, when you did `ls -l /` above, permissions for that folder came back as _drwxr-xr-x_.
-
-If you read [the article I recommended at the beginning of this post][3], you'll know that the first _d_ indicates that the entry in the file system is a directory, and then you have three sets of three characters (_rwx_, _r-x_, _r-x_) that indicate the permissions for the user owner (_rwx_) of the directory, then the group owner (_r-x_), and finally the rest of the users (_r-x_). This means that the only person who has write permissions so far, that is, the only person who can copy or create files in the _/photos_ directory, is the _root_ user.
-
-But [that article I mentioned also tells you how to change the permissions for a directory or file][3]:
-```
-sudo chmod g+w /photos
-
-```
-
-Running `ls -l /` after that will give you _/photos_ permissions as _drwxrwxr-x_, which is what you want: group members can now write into the directory.
-
-Now you can try and copy an image or, indeed, any other file to the directory and it should go through without a problem:
-```
-cp image.jpg /photos
-
-```
-
-The _guest_ user will also be able to read from and write to the directory, and even move or delete files created by other users within the shared directory.
-
-### Conclusion
-
-The permissions and privileges system in Linux has been honed over decades, inherited as it is from the old Unix systems of yore. As such, it works very well and is well thought out. Becoming familiar with it is essential for any Linux sysadmin. In fact, you can't do much admining at all unless you understand it. But, it's not that hard.
-
-Next time, we'll dive into files and see the different ways of creating, manipulating, and destroying them in creative ways.
Always fun, that last one. - -See you then! - -Learn more about Linux through the free ["Introduction to Linux" ][4]course from The Linux Foundation and edX. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts-part-2 - -作者:[Paul Brown][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.linux.com/users/bro66 -[1]:https://www.linux.com/blog/learn/2018/5/manipulating-directories-linux -[2]:https://www.linux.com/learn/intro-to-linux/2018/7/users-groups-and-other-linux-beasts -[3]:https://www.linux.com/learn/understanding-linux-file-permissions -[4]:https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20180725 Build an interactive CLI with Node.js.md b/sources/tech/20180725 Build an interactive CLI with Node.js.md deleted file mode 100644 index f240e51efd..0000000000 --- a/sources/tech/20180725 Build an interactive CLI with Node.js.md +++ /dev/null @@ -1,533 +0,0 @@ -translating by Flowsnow - -Build an interactive CLI with Node.js -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming_keyboard_coding.png?itok=E0Vvam7A) - -Node.js can be very useful when it comes to building command-line interfaces (CLIs). In this post, I'll teach you how to use [Node.js][1] to build a CLI that asks some questions and creates a file based on the answers. - -### Get started - -Let's start by creating a brand new [npm][2] package. (Npm is the JavaScript package manager.) -``` -mkdir my-script - -cd my-script - -npm init - -``` - -Npm will ask some questions. After that, we need to install some packages. -``` -npm install --save chalk figlet inquirer shelljs - -``` - -Here's what these packages do: - - * **Chalk:** Terminal string styling done right - * **Figlet:** A program for making large letters out of ordinary text - * **Inquirer:** A collection of common interactive command-line user interfaces - * **ShellJS:** Portable Unix shell commands for Node.js - - - -### Make an index.js file - -Now we'll create an `index.js` file with the following content: -``` -#!/usr/bin/env node - - - -const inquirer = require("inquirer"); - -const chalk = require("chalk"); - -const figlet = require("figlet"); - -const shell = require("shelljs"); - -``` - -### Plan the CLI - -It's always good to plan what a CLI needs to do before writing any code. This CLI will do just one thing: **create a file**. - -The CLI will ask two questions—what is the filename and what is the extension?—then create the file, and show a success message with the created file path. -``` -// index.js - - - -const run = async () => { - -  // show script introduction - -  // ask questions - -  // create the file - -  // show success message - -}; - - - -run(); - -``` - -The first function is the script introduction. Let's use `chalk` and `figlet` to get the job done. 
-``` -const init = () => { - -  console.log( - -    chalk.green( - -      figlet.textSync("Node JS CLI", { - -        font: "Ghost", - -        horizontalLayout: "default", - -        verticalLayout: "default" - -      }) - -    ) - -  ); - -} - - - -const run = async () => { - -  // show script introduction - -  init(); - - - -  // ask questions - -  // create the file - -  // show success message - -}; - - - -run(); - -``` - -Second, we'll write a function that asks the questions. -``` -const askQuestions = () => { - -  const questions = [ - -    { - -      name: "FILENAME", - -      type: "input", - -      message: "What is the name of the file without extension?" - -    }, - -    { - -      type: "list", - -      name: "EXTENSION", - -      message: "What is the file extension?", - -      choices: [".rb", ".js", ".php", ".css"], - -      filter: function(val) { - -        return val.split(".")[1]; - -      } - -    } - -  ]; - -  return inquirer.prompt(questions); - -}; - - - -// ... - - - -const run = async () => { - -  // show script introduction - -  init(); - - - -  // ask questions - -  const answers = await askQuestions(); - -  const { FILENAME, EXTENSION } = answers; - - - -  // create the file - -  // show success message - -}; - -``` - -Notice the constants FILENAME and EXTENSIONS that came from `inquirer`. - -The next step will create the file. -``` -const createFile = (filename, extension) => { - -  const filePath = `${process.cwd()}/${filename}.${extension}` - -  shell.touch(filePath); - -  return filePath; - -}; - - - -// ... - - - -const run = async () => { - -  // show script introduction - -  init(); - - - -  // ask questions - -  const answers = await askQuestions(); - -  const { FILENAME, EXTENSION } = answers; - - - -  // create the file - -  const filePath = createFile(FILENAME, EXTENSION); - - - -  // show success message - -}; - -``` - -And last but not least, we'll show the success message along with the file path. -``` -const success = (filepath) => { - -  console.log( - -    chalk.white.bgGreen.bold(`Done! File created at ${filepath}`) - -  ); - -}; - - - -// ... - - - -const run = async () => { - -  // show script introduction - -  init(); - - - -  // ask questions - -  const answers = await askQuestions(); - -  const { FILENAME, EXTENSION } = answers; - - - -  // create the file - -  const filePath = createFile(FILENAME, EXTENSION); - - - -  // show success message - -  success(filePath); - -}; - -``` - -Let's test the script by running `node index.js`. Here's what we get: - -### The full code - -Here is the final code: -``` -#!/usr/bin/env node - - - -const inquirer = require("inquirer"); - -const chalk = require("chalk"); - -const figlet = require("figlet"); - -const shell = require("shelljs"); - - - -const init = () => { - -  console.log( - -    chalk.green( - -      figlet.textSync("Node JS CLI", { - -        font: "Ghost", - -        horizontalLayout: "default", - -        verticalLayout: "default" - -      }) - -    ) - -  ); - -}; - - - -const askQuestions = () => { - -  const questions = [ - -    { - -      name: "FILENAME", - -      type: "input", - -      message: "What is the name of the file without extension?" 
- -    }, - -    { - -      type: "list", - -      name: "EXTENSION", - -      message: "What is the file extension?", - -      choices: [".rb", ".js", ".php", ".css"], - -      filter: function(val) { - -        return val.split(".")[1]; - -      } - -    } - -  ]; - -  return inquirer.prompt(questions); - -}; - - - -const createFile = (filename, extension) => { - -  const filePath = `${process.cwd()}/${filename}.${extension}` - -  shell.touch(filePath); - -  return filePath; - -}; - - - -const success = filepath => { - -  console.log( - -    chalk.white.bgGreen.bold(`Done! File created at ${filepath}`) - -  ); - -}; - - - -const run = async () => { - -  // show script introduction - -  init(); - - - -  // ask questions - -  const answers = await askQuestions(); - -  const { FILENAME, EXTENSION } = answers; - - - -  // create the file - -  const filePath = createFile(FILENAME, EXTENSION); - - - -  // show success message - -  success(filePath); - -}; - - - -run(); - -``` - -### Use the script anywhere - -To execute this script anywhere, add a `bin` section in your `package.json` file and run `npm link`. -``` -{ - -  "name": "creator", - -  "version": "1.0.0", - -  "description": "", - -  "main": "index.js", - -  "scripts": { - -    "test": "echo \"Error: no test specified\" && exit 1", - -    "start": "node index.js" - -  }, - -  "author": "", - -  "license": "ISC", - -  "dependencies": { - -    "chalk": "^2.4.1", - -    "figlet": "^1.2.0", - -    "inquirer": "^6.0.0", - -    "shelljs": "^0.8.2" - -  }, - -  "bin": { - -    "creator": "./index.js" - -  } - -} - -``` - -Running `npm link` makes this script available anywhere. - -That's what happens when you run this command: -``` -/usr/bin/creator -> /usr/lib/node_modules/creator/index.js - -/usr/lib/node_modules/creator -> /home/hugo/code/creator - -``` - -It links the `index.js` file as an executable. This is only possible because of the first line of the CLI script: `#!/usr/bin/env node`. - -Now we can run this script by calling: -``` -$ creator - -``` - -### Wrapping up - -As you can see, Node.js makes it very easy to build nice command-line tools! If you want to go even further, check this other packages: - - * [meow][3] – a simple command-line helper - * [yargs][4] – a command-line opt-string parser - * [pkg][5] – package your Node.js project into an executable - - - -Tell us about your experience building a CLI in the comments. 
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/7/node-js-interactive-cli - -作者:[Hugo Dias][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/hugodias -[1]:https://nodejs.org/en/ -[2]:https://www.npmjs.com/ -[3]:https://github.com/sindresorhus/meow -[4]:https://github.com/yargs/yargs -[5]:https://github.com/zeit/pkg diff --git a/sources/tech/20180727 How to analyze your system with perf and Python.md b/sources/tech/20180727 How to analyze your system with perf and Python.md index 4573ed1ee0..c1be98cc0e 100644 --- a/sources/tech/20180727 How to analyze your system with perf and Python.md +++ b/sources/tech/20180727 How to analyze your system with perf and Python.md @@ -1,7 +1,3 @@ -**translating by [erlinux](https://github.com/erlinux)** -**PROJECT MANAGEMENT TOOL called [gn2.sh](https://github.com/lctt/lctt-cli)** - - How to analyze your system with perf and Python ====== diff --git a/sources/tech/20180806 GPaste Is A Great Clipboard Manager For Gnome Shell.md b/sources/tech/20180806 GPaste Is A Great Clipboard Manager For Gnome Shell.md deleted file mode 100644 index c3b2d2b77e..0000000000 --- a/sources/tech/20180806 GPaste Is A Great Clipboard Manager For Gnome Shell.md +++ /dev/null @@ -1,96 +0,0 @@ -GPaste Is A Great Clipboard Manager For Gnome Shell -====== -**[GPaste][1] is a clipboard management system that consists of a library, daemon, and interfaces for the command line and Gnome (using a native Gnome Shell extension).** - -A clipboard manager allows keeping track of what you're copying and pasting, providing access to previously copied items. GPaste, with its native Gnome Shell extension, makes the perfect addition for those looking for a Gnome clipboard manager. - -[![GPaste Gnome Shell extension Ubuntu 18.04][2]][3] -GPaste Gnome Shell extension -**Using GPaste in Gnome, you get a configurable, searchable clipboard history, available with a click on the top panel. GPaste remembers not only the text you copy, but also file paths and images** (the latter needs to be enabled from its settings as it's disabled by default). - -What's more, GPaste can detect growing lines, meaning it can detect when a new text copy is an extension of another and replaces it if it's true, useful for keeping your clipboard clean. - -From the extension menu you can pause GPaste from tracking the clipboard, and remove items from the clipboard history or the whole history. You'll also find a button that launches the GPaste user interface window. - -**If you prefer to use the keyboard, you can use a key shortcut to open the GPaste history from the top bar** (`Ctrl + Alt + H`), **or open the full GPaste GUI** (`Ctrl + Alt + G`). 
-
-The tool also incorporates keyboard shortcuts (which can be changed) to:
-
-  * delete the active item from history: `Ctrl + Alt + V`
-
-  * **mark the active item as being a password (which obfuscates the clipboard entry in GPaste):** `Ctrl + Alt + S`
-
-  * sync the clipboard to the primary selection: `Ctrl + Alt + O`
-
-  * sync the primary selection to the clipboard: `Ctrl + Alt + P`
-
-  * upload the active item to a pastebin service: `Ctrl + Alt + U`
-
-[![][4]][5]
-GPaste GUI
-
-The GPaste interface window provides access to the searchable clipboard history (with options to clear, edit, or upload items), an option to pause GPaste from tracking the clipboard, a way to restart the GPaste daemon and back up the current clipboard history, as well as access to its settings.
-
-[![][6]][7]
-GPaste GUI
-
-From the GPaste UI you can change settings like:
-
-  * Enable or disable the Gnome Shell extension
-  * Sync the daemon state with the extension's one
-  * Primary selection affects history
-  * Synchronize clipboard with primary selection
-  * Image support
-  * Trim items
-  * Detect growing lines
-  * Save history
-  * History settings like max history size, memory usage, max text item length, and more
-  * Keyboard shortcuts
-
-
-
-### Download GPaste
-
-[Download GPaste](https://github.com/Keruspe/GPaste)
-
-The GPaste project page does not link to any GPaste binaries, and only provides source installation instructions. Users running Linux distributions other than Debian or Ubuntu (for which you'll find GPaste installation instructions below) can search their distro repositories for GPaste.
-
-Do not confuse GPaste with the GPaste Integration extension posted on the Gnome Shell extensions website. That is a separate Gnome Shell extension that uses the GPaste daemon, and it is no longer maintained. The native Gnome Shell extension built into GPaste is still maintained.
-
-#### Install GPaste in Ubuntu (18.04, 16.04) or Debian (Jessie and newer)
-
-**For Debian, GPaste is available for Jessie and newer, while for Ubuntu, GPaste is in the repositories for 16.04 and newer (so it's available in the Ubuntu 18.04 Bionic Beaver).**
-
-**You can install GPaste (the daemon and the Gnome Shell extension) in Debian or Ubuntu using this command:**
-```
-sudo apt install gnome-shell-extensions-gpaste gpaste
-
-```
-
-After the installation completes, restart Gnome Shell by pressing `Alt + F2` and typing `r`, then pressing the `Enter` key. The GPaste Gnome Shell extension should now be enabled and its icon should show up on the top Gnome Shell panel. If it's not, use Gnome Tweaks (Gnome Tweak Tool) to enable the extension.
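-
-GPaste also ships a command-line client, which makes the clipboard history scriptable. As a rough sketch in Python (this assumes the client is installed as `gpaste-client` and supports `add` and `history` subcommands; run `gpaste-client help` to confirm the exact names on your system):
-```
-import subprocess
-
-def gpaste(*args):
-    # Thin wrapper around the gpaste-client binary (assumed to be on PATH).
-    result = subprocess.run(["gpaste-client", *args],
-                            capture_output=True, text=True, check=True)
-    return result.stdout
-
-gpaste("add", "copied from a script")  # push a new entry into the history
-print(gpaste("history"))               # print the current clipboard history
-
-```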
-
-**The GPaste 3.28.0 package from [Debian][8] and [Ubuntu][9] has a bug that makes it crash if the image support option is enabled, so do not enable this feature for now.** This was marked as [fixed in GPaste 3.28.2][10], but the updated package had not yet reached Debian and Ubuntu at the time of writing.
-
---------------------------------------------------------------------------------
-
-via: https://www.linuxuprising.com/2018/08/gpaste-is-great-clipboard-manager-for.html
-
-作者:[Logix][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://plus.google.com/118280394805678839070
-[1]:https://github.com/Keruspe/GPaste
-[2]:https://2.bp.blogspot.com/-2ndArDBcrwY/W2gyhMc1kEI/AAAAAAAABS0/ZAe_onuGCacMblF733QGBX3XqyZd--WuACLcBGAs/s400/gpaste-gnome-shell-extension-ubuntu1804.png (Gpaste Gnome Shell)
-[3]:https://2.bp.blogspot.com/-2ndArDBcrwY/W2gyhMc1kEI/AAAAAAAABS0/ZAe_onuGCacMblF733QGBX3XqyZd--WuACLcBGAs/s1600/gpaste-gnome-shell-extension-ubuntu1804.png
-[4]:https://2.bp.blogspot.com/-7FBRsZJvYek/W2gyvzmeRxI/AAAAAAAABS4/LhokMFSn8_kZndrNB-BTP4W3e9IUuz9BgCLcBGAs/s640/gpaste-gui_1.png
-[5]:https://2.bp.blogspot.com/-7FBRsZJvYek/W2gyvzmeRxI/AAAAAAAABS4/LhokMFSn8_kZndrNB-BTP4W3e9IUuz9BgCLcBGAs/s1600/gpaste-gui_1.png
-[6]:https://4.bp.blogspot.com/-047ShYc6RrQ/W2gyz5FCf_I/AAAAAAAABTA/-o6jaWzwNpsSjG0QRwRJ5Xurq_A6dQ0sQCLcBGAs/s640/gpaste-gui_2.png
-[7]:https://4.bp.blogspot.com/-047ShYc6RrQ/W2gyz5FCf_I/AAAAAAAABTA/-o6jaWzwNpsSjG0QRwRJ5Xurq_A6dQ0sQCLcBGAs/s1600/gpaste-gui_2.png
-[8]:https://packages.debian.org/buster/gpaste
-[9]:https://launchpad.net/ubuntu/+source/gpaste
-[10]:https://www.imagination-land.org/posts/2018-04-13-gpaste-3.28.2-released.html
diff --git a/sources/tech/20180807 5 reasons the i3 window manager makes Linux better.md b/sources/tech/20180807 5 reasons the i3 window manager makes Linux better.md
deleted file mode 100644
index 8ad6a4ac7d..0000000000
--- a/sources/tech/20180807 5 reasons the i3 window manager makes Linux better.md
+++ /dev/null
@@ -1,111 +0,0 @@
-5 reasons the i3 window manager makes Linux better
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/cloud-windows.png?itok=jd5sBNQH)
-
-One of the nicest things about Linux (and open source software in general) is the freedom to choose among different alternatives to address our needs.
-
-I've been using Linux for a long time, but I was never entirely happy with the desktop environment options available. Until last year, [Xfce][1] was the closest to what I consider a good compromise between features and performance. Then I found [i3][2], an amazing piece of software that changed my life.
-
-I3 is a tiling window manager. The goal of a window manager is to control the appearance and placement of windows in a windowing system. Window managers are often used as part of a full-featured desktop environment (such as GNOME or Xfce), but some can also be used as standalone applications.
-
-A tiling window manager automatically arranges the windows to occupy the whole screen in a non-overlapping way. Other popular tiling window managers include [wmii][3] and [xmonad][4].
-
-![i3 tiled window manager screenshot][6]
-
-Screenshot of i3 with three tiled windows
-
-Following are the top five reasons I use the i3 window manager and recommend it for a better Linux desktop experience.
-
-### 1\. Minimalism
-
-I3 is fast. It is neither bloated nor fancy. It is designed to be simple and efficient.
As a developer, I value these features, since I can use the extra capacity to power my favorite development tools or test stuff locally using containers or virtual machines.
-
-In addition, i3 is a window manager and, unlike full-featured desktop environments, it does not dictate the applications you should use. Do you want to use Thunar from Xfce as your file manager? GNOME's gedit to edit text? I3 does not care. Pick the tools that make the most sense for your workflow, and i3 will manage them all in the same way.
-
-### 2\. Screen real estate
-
-As a tiling window manager, i3 will automatically "tile" or position the windows in a non-overlapping way, similar to laying tiles on a wall. Since you don't need to worry about window positioning, i3 generally makes better use of your screen real estate. It also allows you to get to what you need faster.
-
-There are many useful cases for this. For example, system administrators can open several terminals to monitor or work on different remote systems simultaneously, and developers can use their favorite IDE or editor and a few terminals to test their programs.
-
-In addition, i3 is flexible. If you need more space for a particular window, enable full-screen mode or switch to a different layout, such as stacked or tabbed.
-
-### 3\. Keyboard-driven workflow
-
-I3 makes extensive use of keyboard shortcuts to control different aspects of your environment. These include opening the terminal and other programs, resizing and positioning windows, changing layouts, and even exiting i3. When you start using i3, you need to memorize a few of those shortcuts to get around and, with time, you'll use more of them.
-
-The main benefit is that you don't often need to switch contexts from the keyboard to the mouse. With practice, it means you'll improve the speed and efficiency of your workflow.
-
-For example, to open a new terminal, press `Super+Enter`. Since the windows are automatically positioned, you can start typing your commands right away. Combine that with a nice terminal-driven text editor (e.g., Vim) and a keyboard-focused browser for a fully keyboard-driven workflow.
-
-In i3, you can define shortcuts for everything. Here are some examples:
-
-  * Open terminal
-  * Open browser
-  * Change layouts
-  * Resize windows
-  * Control music player
-  * Switch workspaces
-
-
-
-Now that I am used to this workflow, I can't see myself going back to a regular desktop environment.
-
-### 4\. Flexibility
-
-I3 strives to be minimal and use few system resources, but that does not mean it can't be pretty. I3 is flexible and can be customized in several ways to improve the visual experience. Because i3 is a window manager, it doesn't provide tools to enable customizations; you need external tools for that. Some examples:
-
-  * Use `feh` to define a background picture for your desktop.
-  * Use a compositor manager such as `compton` to enable effects like window fading and transparency.
-  * Use `dmenu` or `rofi` to enable customizable menus that can be launched from a keyboard shortcut.
-  * Use `dunst` for desktop notifications.
-
-
-
-I3 is fully configurable, and you can control every aspect of it by updating the default configuration file. From changing all keyboard shortcuts, to redefining the names of the workspaces, to modifying the status bar, you can make i3 behave in any way that makes the most sense for your needs.
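-
-That control even reaches beyond the configuration file: i3 exposes an interprocess communication (IPC) socket that scripts can drive (more on that just below). As a small taste, here is a sketch using the third-party `i3ipc` Python package (an assumption: it is not part of i3 itself and must be installed separately):
-```
-import i3ipc
-
-i3 = i3ipc.Connection()  # connect to the running i3 instance
-
-for ws in i3.get_workspaces():  # inspect the current workspaces
-    print(ws.num, ws.name, "(focused)" if ws.focused else "")
-
-i3.command("workspace 2")  # same effect as the switch-workspace shortcut
-
-```
-
-Anything you can bind to a key in the config file can also be triggered this way, which is what makes i3 so scriptable.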
-
-![i3 with rofi menu and dunst desktop notifications][8]
-
-i3 with `rofi` menu and `dunst` desktop notifications
-
-Finally, for more advanced users, i3 provides a full interprocess communication ([IPC][9]) interface that allows you to use your favorite language to develop scripts or programs for even more customization options.
-
-### 5\. Workspaces
-
-In i3, a workspace is an easy way to group windows. You can group them in different ways according to your workflow. For example, you can put the browser on one workspace, the terminal on another, an email client on a third, etc. You can even change i3's configuration to always assign specific applications to their own workspaces.
-
-Switching workspaces is quick and easy. As usual in i3, do it with a keyboard shortcut. Press `Super+num` to switch to workspace `num`. If you get into the habit of always assigning applications/groups of windows to the same workspace, you can quickly switch between them, which makes workspaces a very useful feature.
-
-In addition, you can use workspaces to control multi-monitor setups, where each monitor gets an initial workspace. If you switch to that workspace, you switch to that monitor—without moving your hand off the keyboard.
-
-Finally, there is another, special type of workspace in i3: the scratchpad. It is an invisible workspace that shows up in the middle of the other workspaces by pressing a shortcut. This is a convenient way to access windows or programs that you frequently use, such as an email client or your music player.
-
-### Give it a try
-
-If you value simplicity and efficiency and are not afraid of working with the keyboard, i3 is the window manager for you. Some say it is for advanced users, but that is not necessarily the case. You need to learn a few basic shortcuts to get around at the beginning, but they'll soon feel natural and you'll start using them without thinking.
-
-This article just scratches the surface of what i3 can do. For more details, consult [i3's documentation][10].
-
---------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/8/i3-tiling-window-manager
-
-作者:[Ricardo Gerardi][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://opensource.com/users/rgerardi
-[1]:https://xfce.org/
-[2]:https://i3wm.org/
-[3]:https://code.google.com/archive/p/wmii/
-[4]:https://xmonad.org/
-[5]:/file/406476
-[6]:https://opensource.com/sites/default/files/uploads/i3_screenshot.png (i3 tiled window manager screenshot)
-[7]:/file/405161
-[8]:https://opensource.com/sites/default/files/uploads/rofi_dunst.png (i3 with rofi menu and dunst desktop notifications)
-[9]:https://i3wm.org/docs/ipc.html
-[10]:https://i3wm.org/docs/userguide.html
diff --git a/sources/tech/20180809 Perform robust unit tests with PyHamcrest.md b/sources/tech/20180809 Perform robust unit tests with PyHamcrest.md
deleted file mode 100644
index 1c7d7e9226..0000000000
--- a/sources/tech/20180809 Perform robust unit tests with PyHamcrest.md
+++ /dev/null
@@ -1,176 +0,0 @@
-Perform robust unit tests with PyHamcrest
-======
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh)
-
-At the base of the [testing pyramid][1] are unit tests.
Unit tests test one unit of code at a time—usually one function or method.
-
-Often, a single unit test is designed to test one particular flow through a function, or a specific branch choice. This enables easy mapping of a unit test that fails and the bug that made it fail.
-
-Ideally, unit tests use few or no external resources, isolating them and making them faster.
-
-Unit test suites help maintain high-quality products by signaling problems early in the development process. An effective unit test catches bugs before the code has left the developer machine, or at least in a continuous integration environment on a dedicated branch. This marks the difference between good and bad unit tests: _good_ tests increase developer productivity by catching bugs early and making testing faster, while _bad_ tests decrease developer productivity.
-
-Productivity usually decreases when testing _incidental features_. The test fails when the code changes, even if it is still correct. This happens because the output is different, but in a way that is not part of the function's contract.
-
-A good unit test, therefore, is one that helps enforce the contract to which the function is committed.
-
-If a unit test breaks, the contract is violated and should be either explicitly amended (by changing the documentation and tests), or fixed (by fixing the code and leaving the tests as is).
-
-While limiting tests to enforce only the public contract is a complicated skill to learn, there are tools that can help.
-
-One of these tools is [Hamcrest][2], a framework for writing assertions. Originally invented for Java-based unit tests, today the Hamcrest framework supports several languages, including [Python][3].
-
-Hamcrest is designed to make test assertions easier to write and more precise.
-```
-def add(a, b):
-    return a + b
-
-from hamcrest import assert_that, equal_to
-
-def test_add():
-    assert_that(add(2, 2), equal_to(4))
-
-```
-
-This is a simple assertion, for simple functionality. What if we wanted to assert something more complicated?
-```
-from hamcrest import assert_that, contains_inanyorder, is_not, has_item
-
-def test_set_removal():
-    my_set = {1, 2, 3, 4}
-    my_set.remove(3)
-    assert_that(my_set, contains_inanyorder(1, 2, 4))
-    assert_that(my_set, is_not(has_item(3)))
-
-```
-
-Note that we can succinctly assert that the result has `1`, `2`, and `4` in any order since sets do not guarantee order.
-
-We also easily negate assertions with `is_not`. This helps us write _precise assertions_, which allow us to limit ourselves to enforcing public contracts of functions.
-
-Sometimes, however, none of the built-in functionality is _precisely_ what we need. In those cases, Hamcrest allows us to write our own matchers.
-
-Imagine the following function:
-```
-import random
-
-def scale_one(a, b):
-    scale = random.randint(0, 5)
-    pick = random.choice([a, b])
-    return scale * pick
-
-```
-
-We can confidently assert that the result is evenly divisible by at least one of the inputs.
-
-A matcher inherits from `hamcrest.core.base_matcher.BaseMatcher` and overrides two methods:
-```
-class DivisibleBy(hamcrest.core.base_matcher.BaseMatcher):
-
-    def __init__(self, factor):
-        self.factor = factor
-
-    def _matches(self, item):
-        return (item % self.factor) == 0
-
-    def describe_to(self, description):
-        description.append_text('number divisible by ')
-        description.append_text(repr(self.factor))
-
-```
-
-Writing high-quality `describe_to` methods is important, since this is part of the message that will show up if the test fails.
-```
-def divisible_by(num):
-    return DivisibleBy(num)
-
-```
-
-By convention, we wrap matchers in a function. Sometimes this gives us a chance to further process the inputs, but in this case, no further processing is needed.
-```
-def test_scale():
-    result = scale_one(3, 7)
-    assert_that(result,
-                any_of(divisible_by(3),
-                       divisible_by(7)))
-
-```
-
-Note that we combined our `divisible_by` matcher with the built-in `any_of` matcher to ensure that we test only what the contract commits to.
-
-While editing this article, I heard a rumor that the name "Hamcrest" was chosen as an anagram for "matches". Hrm...
-```
->>> assert_that("matches", contains_inanyorder(*"hamcrest"))
-Traceback (most recent call last):
-  File "<stdin>", line 1, in <module>
-  File "/home/moshez/src/devops-python/build/devops/lib/python3.6/site-packages/hamcrest/core/assert_that.py", line 43, in assert_that
-    _assert_match(actual=arg1, matcher=arg2, reason=arg3)
-  File "/home/moshez/src/devops-python/build/devops/lib/python3.6/site-packages/hamcrest/core/assert_that.py", line 57, in _assert_match
-    raise AssertionError(description)
-AssertionError:
-Expected: a sequence over ['h', 'a', 'm', 'c', 'r', 'e', 's', 't'] in any order
-      but: no item matches: 'r' in ['m', 'a', 't', 'c', 'h', 'e', 's']
-
-```
-
-Researching more, I found the source of the rumor: It is an anagram for "matchers".
-```
->>> assert_that("matchers", contains_inanyorder(*"hamcrest"))
->>>
-
-```
-
-If you are not yet writing unit tests for your Python code, now is a good time to start. If you are writing unit tests for your Python code, using Hamcrest will allow you to make your assertions _precise_: neither more nor less than what you intend to test. This will lead to fewer false positives when modifying code and less time spent modifying tests for working code.
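-
-As a parting illustration of that precision, matchers compose, so a contract can be pinned down exactly: no tighter and no looser. A short sketch reusing `scale_one` and `divisible_by` from above (`all_of` and `greater_than_or_equal_to` are PyHamcrest built-ins):
-```
-from hamcrest import assert_that, all_of, any_of, greater_than_or_equal_to
-
-def test_scale_contract():
-    result = scale_one(3, 7)
-    # Assert exactly what the contract promises, and nothing more:
-    # a non-negative multiple of one of the (positive) inputs.
-    assert_that(result, all_of(
-        any_of(divisible_by(3), divisible_by(7)),
-        greater_than_or_equal_to(0),
-    ))
-
-```
-
-If the contract ever tightens or loosens, this composed assertion is the one place that should change.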
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/robust-unit-tests-hamcrest - -作者:[Moshe Zadka][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/moshez -[1]:https://martinfowler.com/bliki/TestPyramid.html -[2]:http://hamcrest.org/ -[3]:https://www.python.org/ diff --git a/sources/tech/20180814 5 open source strategy and simulation games for Linux.md b/sources/tech/20180814 5 open source strategy and simulation games for Linux.md deleted file mode 100644 index 1f7e94c22f..0000000000 --- a/sources/tech/20180814 5 open source strategy and simulation games for Linux.md +++ /dev/null @@ -1,111 +0,0 @@ -5 open source strategy and simulation games for Linux -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/arcade_game_gaming.jpg?itok=84Rjk_32) - -Gaming has traditionally been one of Linux's weak points. That has changed somewhat in recent years thanks to Steam, GOG, and other efforts to bring commercial games to multiple operating systems, but those games are often not open source. Sure, the games can be played on an open source operating system, but that is not good enough for an open source purist. - -So, can someone who only uses free and open source software find games that are polished enough to present a solid gaming experience without compromising their open source ideals? Absolutely. While open source games are unlikely ever to rival some of the AAA commercial games developed with massive budgets, there are plenty of open source games, in many genres, that are fun to play and can be installed from the repositories of most major Linux distributions. Even if a particular game is not packaged for a particular distribution, it is usually easy to download the game from the project's website to install and play it. - -This article looks at strategy and simulation games. I have already written about [arcade-style games][1], [board & card games][2], [puzzle games][3], [racing & flying games][4], and [role-playing games][5]. - -### Freeciv - -![](https://opensource.com/sites/default/files/uploads/freeciv.png) - -[Freeciv][6] is an open source version of the [Civilization series][7] of computer games. Gameplay is most similar to the earlier games in the Civilization series, and Freeciv even has options to use Civilization 1 and Civilization 2 rule sets. Freeciv involves building cities, exploring the world map, developing technologies, and competing with other civilizations trying to do the same. Victory conditions include defeating all the other civilizations, developing a space colony, or hitting deadline if neither of the first two conditions are met. The game can be played against AI opponents or other human players. Different tile-sets are available to change the look of the game's map. - -To install Freeciv, run the following command: - - * On Fedora: `dnf install freeciv` - * On Debian/Ubuntu: `apt install freeciv` - - - -### MegaGlest - -![](https://opensource.com/sites/default/files/uploads/megaglest.png) - -[MegaGlest][8] is an open source real-time strategy game in the style of Blizzard Entertainment's [Warcraft][9] and [StarCraft][10] games. 
Players control one of several different factions, building structures and recruiting units to explore the map and battle their opponents. At the beginning of the match, a player can build only the most basic buildings and recruit the weakest units. To build and recruit better things, players must work their way up their factions technology tree by building structures and recruiting units that unlock more advanced options. Combat units will attack when enemy units come into range, but for optimal strategy, it is best to manage the battle directly by controlling the units. Simultaneously managing the construction of new structures, recruiting new units, and managing battles can be a challenge, but that is the point of a real-time strategy game. MegaGlest provides a nice variety of factions, so there are plenty of reasons to try new and different strategies. - -To install MegaGlest, run the following command: - - * On Fedora: `dnf install megaglest` - * On Debian/Ubuntu: `apt install megaglest` - - - -### OpenTTD - -![](https://opensource.com/sites/default/files/uploads/openttd.png) - -[OpenTTD][11] (see also [our review][12]) is an open source implementation of [Transport Tycoon Deluxe][13]. The object of the game is to create a transportation network and earn money, which allows the player to build an even bigger transportation network. The network can include boats, buses, trains, trucks, and planes. By default, gameplay takes place between 1950 and 2050, with players aiming to get the highest performance rating possible before time runs out. The performance rating is based on things like the amount of cargo delivered, the number of vehicles they have, and how much money they earned. - -To install OpenTTD, run the following command: - - * On Fedora: `dnf install openttd` - * On Debian/Ubuntu: `apt install openttd` - - - -### The Battle for Wesnoth - -![](https://opensource.com/sites/default/files/uploads/the_battle_for_wesnoth.png) - -[The Battle for Wesnoth][14] is one of the most polished open source games available. This turn-based strategy game has a fantasy setting. Play takes place on a hexagonal grid, where individual units battle each other for control. Each type of unit has unique strengths and weaknesses, which requires players to plan their attacks accordingly. There are many different campaigns available for The Battle for Wesnoth, each with different objectives and storylines. The Battle for Wesnoth also comes with a map editor for players interested in creating their own maps or campaigns. - -To install The Battle for Wesnoth, run the following command: - - * On Fedora: `dnf install wesnoth` - * On Debian/Ubuntu: `apt install wesnoth` - - - -### UFO: Alien Invasion - -![](https://opensource.com/sites/default/files/uploads/ufo_alien_invasion.png) - -[UFO: Alien Invasion][15] is an open source tactical strategy game inspired by the [X-COM series][20]. There are two distinct gameplay modes: geoscape and tactical. In geoscape mode, the player takes control of the big picture and deals with managing their bases, researching new technologies, and controlling overall strategy. In tactical mode, the player controls a squad of soldiers and directly confronts the alien invaders in a turn-based battle. Both modes provide different gameplay styles, but both require complex strategy and tactics. - -To install UFO: Alien Invasion, run the following command: - - * On Debian/Ubuntu: `apt install ufoai` - - - -Unfortunately, UFO: Alien Invasion is not packaged for Fedora. 
- -Did I miss one of your favorite open source strategy or simulation games? Share it in the comments below. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/strategy-simulation-games-linux - -作者:[Joshua Allen Holm][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/holmja -[1]:https://opensource.com/article/18/1/arcade-games-linux -[2]:https://opensource.com/article/18/3/card-board-games-linux -[3]:https://opensource.com/article/18/6/puzzle-games-linux -[4]:https://opensource.com/article/18/7/racing-flying-games-linux -[5]:https://opensource.com/article/18/8/role-playing-games-linux -[6]:http://www.freeciv.org/ -[7]:https://en.wikipedia.org/wiki/Civilization_(series) -[8]:https://megaglest.org/ -[9]:https://en.wikipedia.org/wiki/Warcraft -[10]:https://en.wikipedia.org/wiki/StarCraft -[11]:https://www.openttd.org/ -[12]:https://opensource.com/life/15/7/linux-game-review-openttd -[13]:https://en.wikipedia.org/wiki/Transport_Tycoon#Transport_Tycoon_Deluxe -[14]:https://www.wesnoth.org/ -[15]:https://ufoai.org/ -[16]:https://opensource.com/downloads/cheat-sheets?intcmp=7016000000127cYAAQ -[17]:https://opensource.com/alternatives?intcmp=7016000000127cYAAQ -[18]:https://opensource.com/tags/linux?intcmp=7016000000127cYAAQ -[19]:https://developers.redhat.com/cheat-sheets/advanced-linux-commands/?intcmp=7016000000127cYAAQ -[20]:https://en.wikipedia.org/wiki/X-COM diff --git a/sources/tech/20180814 HTTP request routing and validation with gorilla-mux.md b/sources/tech/20180814 HTTP request routing and validation with gorilla-mux.md deleted file mode 100644 index 410f692ad9..0000000000 --- a/sources/tech/20180814 HTTP request routing and validation with gorilla-mux.md +++ /dev/null @@ -1,674 +0,0 @@ -HTTP request routing and validation with gorilla/mux -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/data_metrics_analytics_desktop_laptop.png?itok=9QXd7AUr) - -The Go networking library includes the `http.ServeMux` structure type, which supports HTTP request multiplexing (routing): A web server routes an HTTP request for a hosted resource, with a URI such as /sales4today, to a code handler; the handler performs the appropriate logic before sending an HTTP response, typically an HTML page. Here’s a sketch of the architecture: -``` -                 +------------+     +--------+     +---------+ -HTTP request---->| web server |---->| router |---->| handler | -                 +------------+     +--------+     +---------+ -``` - -In a call to the `ListenAndServe` method to start an HTTP server -``` -http.ListenAndServe(":8888", nil) // args: port & router -``` - -a second argument of `nil` means that the `DefaultServeMux` is used for request routing. - -The `gorilla/mux` package has a `mux.Router` type as an alternative to either the `DefaultServeMux` or a customized request multiplexer. In the `ListenAndServe` call, a `mux.Router` instance would replace `nil` as the second argument. What makes the `mux.Router` so appealing is best shown through a code example: - -### 1\. A sample crud web app - -The crud web application (see below) supports the four CRUD (Create Read Update Delete) operations, which match four HTTP request methods: POST, GET, PUT, and DELETE, respectively. 
In the crud app, the hosted resource is a list of cliche pairs, each a cliche and a conflicting cliche such as this pair: -``` -Out of sight, out of mind. Absence makes the heart grow fonder. - -``` - -New cliche pairs can be added, and existing ones can be edited or deleted. - -**The crud web app** -``` -package main - -import ( -   "gorilla/mux" -   "net/http" -   "fmt" -   "strconv" -) - -const GETALL string = "GETALL" -const GETONE string = "GETONE" -const POST string   = "POST" -const PUT string    = "PUT" -const DELETE string = "DELETE" - -type clichePair struct { -   Id      int -   Cliche  string -   Counter string -} - -// Message sent to goroutine that accesses the requested resource. -type crudRequest struct { -   verb     string -   cp       *clichePair -   id       int -   cliche   string -   counter  string -   confirm  chan string -} - -var clichesList = []*clichePair{} -var masterId = 1 -var crudRequests chan *crudRequest - -// GET / -// GET /cliches -func ClichesAll(res http.ResponseWriter, req *http.Request) { -   cr := &crudRequest{verb: GETALL, confirm: make(chan string)} -   completeRequest(cr, res, "read all") -} - -// GET /cliches/id -func ClichesOne(res http.ResponseWriter, req *http.Request) { -   id := getIdFromRequest(req) -   cr := &crudRequest{verb: GETONE, id: id, confirm: make(chan string)} -   completeRequest(cr, res, "read one") -} - -// POST /cliches - -func ClichesCreate(res http.ResponseWriter, req *http.Request) { - -   cliche, counter := getDataFromRequest(req) - -   cp := new(clichePair) - -   cp.Cliche = cliche - -   cp.Counter = counter - -   cr := &crudRequest{verb: POST, cp: cp, confirm: make(chan string)} - -   completeRequest(cr, res, "create") - -} - - - -// PUT /cliches/id - -func ClichesEdit(res http.ResponseWriter, req *http.Request) { - -   id := getIdFromRequest(req) - -   cliche, counter := getDataFromRequest(req) - -   cr := &crudRequest{verb: PUT, id: id, cliche: cliche, counter: counter, confirm: make(chan string)} - -   completeRequest(cr, res, "edit") - -} - - - -// DELETE /cliches/id - -func ClichesDelete(res http.ResponseWriter, req *http.Request) { - -   id := getIdFromRequest(req) - -   cr := &crudRequest{verb: DELETE, id: id, confirm: make(chan string)} - -   completeRequest(cr, res, "delete") - -} - - - -func completeRequest(cr *crudRequest, res http.ResponseWriter, logMsg string) { - -   crudRequests<-cr - -   msg := <-cr.confirm - -   res.Write([]byte(msg)) - -   logIt(logMsg) - -} - - - -func main() { - -   populateClichesList() - - - -   // From now on, this gorountine alone accesses the clichesList. - -   crudRequests = make(chan *crudRequest, 8) - -   go func() { // resource manager - -      for { - -         select { - -         case req := <-crudRequests: - -         if req.verb == GETALL { - -            req.confirm<-readAll() - -         } else if req.verb == GETONE { - -            req.confirm<-readOne(req.id) - -         } else if req.verb == POST { - -            req.confirm<-addPair(req.cp) - -         } else if req.verb == PUT { - -            req.confirm<-editPair(req.id, req.cliche, req.counter) - -         } else if req.verb == DELETE { - -            req.confirm<-deletePair(req.id) - -         } - -      } - -   }() - -   startServer() - -} - - - -func startServer() { - -   router := mux.NewRouter() - - - -   // Dispatch map for CRUD operations. 
- -   router.HandleFunc("/", ClichesAll).Methods("GET") - -   router.HandleFunc("/cliches", ClichesAll).Methods("GET") - -   router.HandleFunc("/cliches/{id:[0-9]+}", ClichesOne).Methods("GET") - - - -   router.HandleFunc("/cliches", ClichesCreate).Methods("POST") - -   router.HandleFunc("/cliches/{id:[0-9]+}", ClichesEdit).Methods("PUT") - -   router.HandleFunc("/cliches/{id:[0-9]+}", ClichesDelete).Methods("DELETE") - - - -   http.Handle("/", router) // enable the router - - - -   // Start the server. - -   port := ":8888" - -   fmt.Println("\nListening on port " + port) - -   http.ListenAndServe(port, router); // mux.Router now in play - -} - - - -// Return entire list to requester. - -func readAll() string { - -   msg := "\n" - -   for _, cliche := range clichesList { - -      next := strconv.Itoa(cliche.Id) + ": " + cliche.Cliche + "  " + cliche.Counter + "\n" - -      msg += next - -   } - -   return msg - -} - - - -// Return specified clichePair to requester. - -func readOne(id int) string { - -   msg := "\n" + "Bad Id: " + strconv.Itoa(id) + "\n" - - - -   index := findCliche(id) - -   if index >= 0 { - -      cliche := clichesList[index] - -      msg = "\n" + strconv.Itoa(id) + ": " + cliche.Cliche + "  " + cliche.Counter + "\n" - -   } - -   return msg - -} - - - -// Create a new clichePair and add to list - -func addPair(cp *clichePair) string { - -   cp.Id = masterId - -   masterId++ - -   clichesList = append(clichesList, cp) - -   return "\nCreated: " + cp.Cliche + " " + cp.Counter + "\n" - -} - - - -// Edit an existing clichePair - -func editPair(id int, cliche string, counter string) string { - -   msg := "\n" + "Bad Id: " + strconv.Itoa(id) + "\n" - -   index := findCliche(id) - -   if index >= 0 { - -      clichesList[index].Cliche = cliche - -      clichesList[index].Counter = counter - -      msg = "\nCliche edited: " + cliche + " " + counter + "\n" - -   } - -   return msg - -} - - - -// Delete a clichePair - -func deletePair(id int) string { - -   idStr := strconv.Itoa(id) - -   msg := "\n" + "Bad Id: " + idStr + "\n" - -   index := findCliche(id) - -   if index >= 0 { - -      clichesList = append(clichesList[:index], clichesList[index + 1:]...) 
- -      msg = "\nCliche " + idStr + " deleted\n" - -   } - -   return msg - -} - - - -//*** utility functions - -func findCliche(id int) int { - -   for i := 0; i < len(clichesList); i++ { - -      if id == clichesList[i].Id { - -         return i; - -      } - -   } - -   return -1 // not found - -} - - - -func getIdFromRequest(req *http.Request) int { - -   vars := mux.Vars(req) - -   id, _ := strconv.Atoi(vars["id"]) - -   return id - -} - - - -func getDataFromRequest(req *http.Request) (string, string) { - -   // Extract the user-provided data for the new clichePair - -   req.ParseForm() - -   form := req.Form - -   cliche := form["cliche"][0]    // 1st and only member of a list - -   counter := form["counter"][0]  // ditto - -   return cliche, counter - -} - - - -func logIt(msg string) { - -   fmt.Println(msg) - -} - - - -func populateClichesList() { - -   var cliches = []string { - -      "Out of sight, out of mind.", - -      "A penny saved is a penny earned.", - -      "He who hesitates is lost.", - -   } - -   var counterCliches = []string { - -      "Absence makes the heart grow fonder.", - -      "Penny-wise and dollar-foolish.", - -      "Look before you leap.", - -   } - - - -   for i := 0; i < len(cliches); i++ { - -      cp := new(clichePair) - -      cp.Id = masterId - -      masterId++ - -      cp.Cliche = cliches[i] - -      cp.Counter = counterCliches[i] - -      clichesList = append(clichesList, cp) - -   } - -} - -``` - -To focus on request routing and validation, the crud app does not use HTML pages as responses to requests. Instead, requests result in plaintext response messages: A list of the cliche pairs is the response to a GET request, confirmation that a new cliche pair has been added to the list is a response to a POST request, and so on. This simplification makes it easy to test the app, in particular, the `gorilla/mux` components, with a command-line utility such as [curl][1]. - -The `gorilla/mux` package can be installed from [GitHub][2]. The crud app runs indefinitely; hence, it should be terminated with a Control-C or equivalent. The code for the crud app, together with a README and sample curl tests, is available on [my website][3]. - -### 2\. Request routing - -The `mux.Router` extends REST-style routing, which gives equal weight to the HTTP method (e.g., GET) and the URI or path at the end of a URL (e.g., /cliches). The URI serves as the noun for the HTTP verb (method). For example, in an HTTP request a startline such as -``` -GET /cliches - -``` - -means get all of the cliche pairs, whereas a startline such as -``` -POST /cliches - -``` - -means create a cliche pair from data in the HTTP body. - -In the crud web app, there are five functions that act as request handlers for five variations of an HTTP request: -``` -ClichesAll(...)    # GET: get all of the cliche pairs - -ClichesOne(...)    # GET: get a specified cliche pair - -ClichesCreate(...) # POST: create a new cliche pair - -ClichesEdit(...)   # PUT: edit an existing cliche pair - -ClichesDelete(...) # DELETE: delete a specified cliche pair - -``` - -Each function takes two arguments: an `http.ResponseWriter` for sending a response back to the requester, and a pointer to an `http.Request`, which encapsulates information from the underlying HTTP request. The `gorilla/mux` package makes it easy to register these request handlers with the web server, and to perform regex-based validation. - -The `startServer` function in the crud app registers the request handlers. 
Consider this pair of registrations, with `router` as a `mux.Router` instance: -``` -router.HandleFunc("/", ClichesAll).Methods("GET") - -router.HandleFunc("/cliches", ClichesAll).Methods("GET") - -``` - -These statements mean that a GET request for either the single slash / or /cliches should be routed to the `ClichesAll` function, which then handles the request. For example, the curl request (with % as the command-line prompt) -``` -% curl --request GET localhost:8888/ - -``` - -produces this response: -``` -1: Out of sight, out of mind.  Absence makes the heart grow fonder. - -2: A penny saved is a penny earned.  Penny-wise and dollar-foolish. - -3: He who hesitates is lost.  Look before you leap. - -``` - -The three cliche pairs are the initial data in the crud app. - -In this pair of registration statements -``` -router.HandleFunc("/cliches", ClichesAll).Methods("GET") - -router.HandleFunc("/cliches", ClichesCreate).Methods("POST") - -``` - -the URI is the same (/cliches) but the verbs differ: GET in the first case, and POST in the second. This registration exemplifies REST-style routing because the difference in the verbs alone suffices to dispatch the requests to two different handlers. - -More than one HTTP method is allowed in a registration, although this strains the spirit of REST-style routing: -``` -router.HandleFunc("/cliches", DoItAll).Methods("POST", "GET") - -``` - -HTTP requests can be routed on features besides the verb and the URI. For example, the registration -``` -router.HandleFunc("/cliches", ClichesCreate).Schemes("https").Methods("POST") - -``` - -requires HTTPS access for a POST request to create a new cliche pair. In similar fashion, a registration might require a request to have a specified HTTP header element (e.g., an authentication credential). - -### 3\. Request validation - -The `gorilla/mux` package takes an easy, intuitive approach to request validation through regular expressions. Consider this request handler for a get one operation: -``` -router.HandleFunc("/cliches/{id:[0-9]+}", ClichesOne).Methods("GET") - -``` - -This registration rules out HTTP requests such as -``` -% curl --request GET localhost:8888/cliches/foo - -``` - -because foo is not a decimal numeral. The request results in the familiar 404 (Not Found) status code. Including the regex pattern in this handler registration ensures that the `ClichesOne` function is called to handle a request only if the request URI ends with a decimal integer value: -``` -% curl --request GET localhost:8888/cliches/3  # ok - -``` - -As a second example, consider the request -``` -% curl --request PUT --data "..." localhost:8888/cliches - -``` - -This request results in a status code of 405 (Bad Method) because the /cliches URI is registered, in the crud app, only for GET and POST requests. A PUT request, like a GET one request, must include a numeric id at the end of the URI: -``` -router.HandleFunc("/cliches/{id:[0-9]+}", ClichesEdit).Methods("PUT") - -``` - -### 4\. Concurrency issues - -The `gorilla/mux` router executes each call to a registered request handler as a separate goroutine, which means that concurrency is baked into the package. For example, if there are ten simultaneous requests such as -``` -% curl --request POST --data "..." localhost:8888/cliches - -``` - -then the `mux.Router` launches ten goroutines to execute the `ClichesCreate` handler. 
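-
-You can approximate such a burst from the shell. The loop below is a sketch of ours rather than a test from the article; it assumes the server from the listing above is running on localhost:8888 and uses the `cliche` and `counter` form fields that `getDataFromRequest` expects:
-```
-for i in $(seq 1 10); do
-  curl --request POST \
-       --data "cliche=Time heals all wounds.&counter=Time wounds all heels." \
-       localhost:8888/cliches &
-done
-wait   # let all ten background requests finish
-```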
-
-Of the five request operations GET all, GET one, POST, PUT, and DELETE, the last three alter the requested resource, the shared `clichesList` that houses the cliche pairs. Accordingly, the crud app needs to guarantee safe concurrency by coordinating access to the `clichesList`. In different but equivalent terms, the crud app must prevent a race condition on the `clichesList`. In a production environment, a database system might be used to store a resource such as the `clichesList`, and safe concurrency then could be managed through database transactions.
-
-The crud app takes the recommended Go approach to safe concurrency:
-
-  * Only a single goroutine, the resource manager started in the crud app `startServer` function, has access to the `clichesList` once the web server starts listening for requests.
-  * The request handlers such as `ClichesCreate` and `ClichesAll` send a (pointer to a) `crudRequest` instance to a Go channel (thread-safe by default), and the resource manager alone reads from this channel. The resource manager then performs the requested operation on the `clichesList`.
-
-The safe-concurrency architecture can be sketched as follows:
-```
-                 crudRequest                   read/write
-request handlers------------->resource manager------------>clichesList
-```
-
-With this architecture, no explicit locking of the `clichesList` is needed because only one goroutine, the resource manager, accesses the `clichesList` once CRUD requests start coming in.
-
-To keep the crud app as concurrent as possible, it's essential to have an efficient division of labor between the request handlers, on the one side, and the single resource manager, on the other side. Here, for review, is the `ClichesCreate` request handler:
-```
-func ClichesCreate(res http.ResponseWriter, req *http.Request) {
-   cliche, counter := getDataFromRequest(req)
-   cp := new(clichePair)
-   cp.Cliche = cliche
-   cp.Counter = counter
-   cr := &crudRequest{verb: POST, cp: cp, confirm: make(chan string)}
-   completeRequest(cr, res, "create")
-}
-```
-
-`ClichesCreate` calls the utility function `getDataFromRequest`, which extracts the new cliche and counter-cliche from the POST request. The `ClichesCreate` function then creates a new `clichePair`, sets two fields, and creates a `crudRequest` to be sent to the single resource manager. This request includes a confirmation channel, which the resource manager uses to return information back to the request handler. All of the setup work can be done without involving the resource manager because the `clichesList` is not being accessed yet.
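-
-For contrast, here is roughly what the locking alternative would look like. This sketch is ours, not code from the crud app; it shows the discipline that the channel design avoids, since every function touching the list would need the same locking protocol:
-```
-import "sync"
-
-var mu sync.Mutex // would guard clichesList in a locking design
-
-func addPairLocked(cp *clichePair) string {
-   mu.Lock()
-   defer mu.Unlock() // release the lock even on early return or panic
-   cp.Id = masterId
-   masterId++
-   clichesList = append(clichesList, cp)
-   return "\nCreated: " + cp.Cliche + " " + cp.Counter + "\n"
-}
-```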
- -The `completeRequest` utility function called at the end of the `ClichesCreate` function and the other request handlers -``` -completeRequest(cr, res, "create") // shown above - -``` - -brings the resource manager into play by putting a `crudRequest` into the `crudRequests` channel: -``` -func completeRequest(cr *crudRequest, res http.ResponseWriter, logMsg string) { - -   crudRequests<-cr          // send request to resource manager - -   msg := <-cr.confirm       // await confirmation string - -   res.Write([]byte(msg))    // send confirmation back to requester - -   logIt(logMsg)             // print to the standard output - -} - -``` - -For a POST request, the resource manager calls the utility function `addPair`, which changes the `clichesList` resource: -``` -func addPair(cp *clichePair) string { - -   cp.Id = masterId  // assign a unique ID - -   masterId++        // update the ID counter - -   clichesList = append(clichesList, cp) // update the list - -   return "\nCreated: " + cp.Cliche + " " + cp.Counter + "\n" - -} - -``` - -The resource manager calls similar utility functions for the other CRUD operations. It’s worth repeating that the resource manager is the only goroutine to read or write the `clichesList` once the web server starts accepting requests. - -For web applications of any type, the `gorilla/mux` package provides request routing, request validation, and related services in a straightforward, intuitive API. The crud web app highlights the package’s main features. Give the package a test drive, and you’ll likely be a buyer. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/8/http-request-routing-validation-gorillamux - -作者:[Marty Kalin][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/mkalindepauledu -[1]:https://curl.haxx.se/ -[2]:https://github.com/gorilla/mux -[3]:http://condor.depaul.edu/mkalin diff --git a/sources/tech/20180828 An Introduction to Quantum Computing with Open Source Cirq Framework.md b/sources/tech/20180828 An Introduction to Quantum Computing with Open Source Cirq Framework.md deleted file mode 100644 index 8cec20916d..0000000000 --- a/sources/tech/20180828 An Introduction to Quantum Computing with Open Source Cirq Framework.md +++ /dev/null @@ -1,228 +0,0 @@ -An Introduction to Quantum Computing with Open Source Cirq Framework -====== -As the title suggests what we are about to begin discussing, this article is an effort to understand how far we have come in Quantum Computing and where we are headed in the field in order to accelerate scientific and technological research, through an Open Source perspective with Cirq. - -First, we will introduce you to the world of Quantum Computing. We will try our best to explain the basic idea behind the same before we look into how Cirq would be playing a significant role in the future of Quantum Computing. Cirq, as you might have heard of recently, has been breaking news in the field and in this Open Science article, we will try to find out why. - - - -Before we start with what Quantum Computing is, it is essential to get to know about the term Quantum, that is, a [subatomic particle][1] referring to the smallest known entity. 
The word [Quantum][2] is based on the Latin word Quantus, meaning “how little”, as described in this short video:
-
-It will be easier for us to understand Quantum Computing by comparing it first to Classical Computing. Classical Computing refers to how today’s conventional computers are designed to work. The device with which you are reading this article right now can also be referred to as a Classical Computing device.
-
-### Classical Computing
-
-Classical Computing is just another way to describe how a conventional computer works. Such computers work via a binary system, i.e., information is stored using either 1 or 0. Our Classical computers cannot understand any other form.
-
-In literal terms, inside the computer a transistor can be either on (1) or off (0). Whatever information we provide as input is translated into 0s and 1s, so that the computer can understand and store that information. Everything is represented only with the help of a combination of 0s and 1s.
-
-### Quantum Computing
-
-Quantum Computing, on the other hand, does not follow an “on or off” model like Classical Computing. Instead, it can simultaneously handle multiple states of information with the help of two phenomena called [superposition and entanglement][3], thus accelerating computing at a much faster rate and also allowing information to be stored more densely.
-
-Please note that superposition and entanglement are [not the same phenomena][4].
-
-![][5]
-
-So, if we have bits in Classical Computing, then in the case of Quantum Computing we have qubits (or Quantum bits) instead. To learn more about the vast difference between the two, check this [page][6], from which the above picture was obtained.
-
-Quantum Computers are not going to replace our Classical Computers. But there are certain humongous tasks that our Classical Computers will never be able to accomplish, and that is when Quantum Computers would prove extremely resourceful. The following video describes the same in detail while also describing how Quantum Computers work:
-
-A comprehensive video on the progress in Quantum Computing so far:
-
-### Noisy Intermediate Scale Quantum
-
-According to the recently updated research paper (31st July 2018), the term “Noisy” refers to inaccuracy arising from imperfect control over qubits. This inaccuracy is why there will be serious limitations on what Quantum devices can achieve in the near term.
-
-“Intermediate Scale” refers to the size of Quantum Computers which will be available in the next few years, where the number of qubits can range from 50 to a few hundred. 50 qubits is a significant milestone because that’s beyond what can be simulated by [brute force][7] using the most powerful existing digital [supercomputers][8]. Read more in the paper [here][9].
-
-With the advent of Cirq, a lot is about to change.
-
-### What is Cirq?
-
-Cirq is a Python framework for creating, editing, and invoking the Noisy Intermediate Scale Quantum (NISQ) circuits we just talked about. In other words, Cirq can address challenges to improve accuracy and reduce noise in Quantum Computing.
-
-Cirq does not necessarily require an actual Quantum Computer for execution. It can also use a simulator-like interface to perform Quantum circuit simulations.
-
-Cirq is gradually gaining momentum, with one of its first users being [Zapata][10], formed last year by a [group of scientists][11] from Harvard University focused on Quantum Computing.
-
### Getting started with Cirq on Linux
-
-The developers of the Open Source [Cirq library][12] recommend installing it in a [virtual Python environment][13] like [virtualenv][14]. The developers’ installation guide for Linux can be found [here][15].
-
-However, we successfully installed and tested Cirq directly for Python3 on an Ubuntu 16.04 system via the following steps:
-
-#### Installing Cirq on Ubuntu
-
-![Cirq Framework for Quantum Computing in Linux][16]
-
-First, we require pip or pip3 to install Cirq. [Pip][17] is a tool recommended for installing and managing Python packages.
-
-For Python 3.x versions, Pip can be installed with:
-```
-sudo apt-get install python3-pip
-```
-
-Python3 packages can be installed via:
-```
-pip3 install <package-name>
-```
-
-We went ahead and installed the Cirq library with Pip3 for Python3:
-```
-pip3 install cirq
-```
-
-#### Enabling Plot and PDF generation (optional)
-
-Optional system dependencies not installable with pip can be installed with:
-```
-sudo apt-get install python3-tk texlive-latex-base latexmk
-```
-
-  * python3-tk is Python’s own graphics library which enables plotting functionality.
-  * texlive-latex-base and latexmk enable PDF writing functionality.
-
-Later, we successfully tested Cirq with the following command and code:
-```
-python3 -c 'import cirq; print(cirq.google.Foxtail)'
-```
-
-We got the resulting output as:
-
-![][18]
-
-#### Configuring Pycharm IDE for Cirq
-
-We also configured a Python IDE [PyCharm on Ubuntu][19] to test the same results:
-
-Since we installed Cirq for Python3 on our Linux system, we set the path to the project interpreter in the IDE settings to be:
-```
-/usr/bin/python3
-```
-
-![][20]
-
-In the output above, you can note that the path to the project interpreter that we just set is shown along with the path to the test program file (test.py). An exit code of 0 shows that the program has finished executing successfully without errors.
-
-So, that’s a ready-to-use IDE environment where you can import the Cirq library to start programming with Python and simulate Quantum circuits.
-
-#### Get started with Cirq
-
-A good place to start is the [examples][21] that have been made available on Cirq’s GitHub page.
-
-The developers have included this [tutorial][22] on GitHub to get started with learning Cirq. If you are serious about learning Quantum Computing, they recommend an excellent book called [“Quantum Computation and Quantum Information” by Nielsen and Chuang][23].
-
-#### OpenFermion-Cirq
-
-[OpenFermion][24] is an open source library for obtaining and manipulating representations of fermionic systems (including Quantum Chemistry) for simulation on Quantum Computers. Fermionic systems are related to the generation of [fermions][25], which, according to [particle physics][26], follow [Fermi-Dirac statistics][27].
-
-OpenFermion has been hailed as [a great practice tool][28] for chemists and researchers involved with [Quantum Chemistry][29]. The main focus of Quantum Chemistry is the application of [Quantum Mechanics][30] in physical models and experiments of chemical systems. Quantum Chemistry is also referred to as [Molecular Quantum Mechanics][31].
-
-The advent of Cirq has now made it possible for OpenFermion to extend its functionality by providing routines and tools for using Cirq to compile and compose circuits for Quantum simulation algorithms.
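-
-To give a feel for what Cirq code looks like, here is a minimal circuit in the spirit of the upstream examples. Treat it as a sketch against the 2018-era API (note `Circuit.from_ops` in particular); later releases may differ:
-```
-import cirq
-
-# Pick a qubit on a 2D grid, the layout used by Google's chips.
-qubit = cirq.GridQubit(0, 0)
-
-# Apply a square root of NOT, then measure the qubit.
-circuit = cirq.Circuit.from_ops(
-    cirq.X(qubit)**0.5,
-    cirq.measure(qubit, key='m')
-)
-print(circuit)
-
-# Run the circuit on the built-in simulator.
-simulator = cirq.Simulator()
-result = simulator.run(circuit, repetitions=20)
-print(result)
-```
-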
#### Google Bristlecone
-
-On March 5, 2018, Google presented [Bristlecone][32], their new Quantum processor, at the annual [American Physical Society meeting][33] in Los Angeles. The [gate-based superconducting system][34] provides a test platform for research into [system error rates][35] and [scalability][36] of Google’s [qubit technology][37], along with applications in Quantum [simulation][38], [optimization][39], and [machine learning][40].
-
-In the near future, Google wants to make its 72-qubit Bristlecone Quantum processor [cloud accessible][41]. Bristlecone will gradually become quite capable of performing tasks that a Classical Supercomputer would not be able to complete in a reasonable amount of time.
-
-Cirq would make it easier for researchers to directly write programs for Bristlecone on the cloud, serving as a very convenient interface for real-time Quantum programming and testing.
-
-Cirq will allow us to:
-
-  * Fine-tune control over Quantum circuits,
-  * Specify [gate][42] behavior using native gates,
-  * Place gates appropriately on the device, and
-  * Schedule the timing of these gates.
-
-### The Open Science Perspective on Cirq
-
-As we all know, Cirq is Open Source on GitHub, and its availability means that the Open Source scientific communities, especially those focused on Quantum research, can now collaborate efficiently to solve the current challenges in Quantum Computing by developing new ways to reduce error rates and improve accuracy in existing Quantum models.
-
-Had Cirq not followed an Open Source model, things would definitely have been a lot more challenging. A great initiative would have been missed, and we would not have been one step closer in the field of Quantum Computing.
-
-### Summary
-
-To summarize, we first introduced you to the concept of Quantum Computing by comparing it to existing Classical Computing techniques, followed by a very important video on recent developmental updates in Quantum Computing since last year. We then briefly discussed Noisy Intermediate Scale Quantum, which is what Cirq is specifically built for.
-
-We saw how to install and test Cirq on an Ubuntu system. We also tested the installation for usability in an IDE environment, with some resources to get started learning the concept.
-
-Finally, we saw two examples of how Cirq would be an essential advantage in the development of research in Quantum Computing, namely OpenFermion and Bristlecone. We concluded the discussion by highlighting some thoughts on Cirq from an Open Science perspective.
-
-We hope we were able to introduce you to Quantum Computing with Cirq in an easy-to-understand manner. If you have any feedback, please let us know in the comments section. Thank you for reading, and we look forward to seeing you in our next Open Science article.
-
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/qunatum-computing-cirq-framework/ - -作者:[Avimanyu Bandyopadhyay][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://itsfoss.com/author/avimanyu/ -[1]:https://en.wikipedia.org/wiki/Subatomic_particle -[2]:https://en.wikipedia.org/wiki/Quantum -[3]:https://www.clerro.com/guide/491/quantum-superposition-and-entanglement-explained -[4]:https://physics.stackexchange.com/questions/148131/can-quantum-entanglement-and-quantum-superposition-be-considered-the-same-phenom -[5]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/bit-vs-qubit.jpg -[6]:http://www.rfwireless-world.com/Terminology/Difference-between-Bit-and-Qubit.html -[7]:https://en.wikipedia.org/wiki/Proof_by_exhaustion -[8]:https://www.explainthatstuff.com/how-supercomputers-work.html -[9]:https://arxiv.org/abs/1801.00862 -[10]:https://www.xconomy.com/san-francisco/2018/07/19/google-partners-with-zapata-on-open-source-quantum-computing-effort/ -[11]:https://www.zapatacomputing.com/about/ -[12]:https://github.com/quantumlib/Cirq -[13]:https://itsfoss.com/python-setup-linux/ -[14]:https://virtualenv.pypa.io -[15]:https://cirq.readthedocs.io/en/latest/install.html#installing-on-linux -[16]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cirq-framework-linux.jpeg -[17]:https://pypi.org/project/pip/ -[18]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cirq-test-output.jpg -[19]:https://itsfoss.com/install-pycharm-ubuntu/ -[20]:https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/08/cirq-tested-on-pycharm.jpg -[21]:https://github.com/quantumlib/Cirq/tree/master/examples -[22]:https://github.com/quantumlib/Cirq/blob/master/docs/tutorial.md -[23]:http://mmrc.amss.cas.cn/tlb/201702/W020170224608149940643.pdf -[24]:http://openfermion.org -[25]:https://en.wikipedia.org/wiki/Fermion -[26]:https://en.wikipedia.org/wiki/Particle_physics -[27]:https://en.wikipedia.org/wiki/Fermi-Dirac_statistics -[28]:https://phys.org/news/2018-03-openfermion-tool-quantum-coding.html -[29]:https://en.wikipedia.org/wiki/Quantum_chemistry -[30]:https://en.wikipedia.org/wiki/Quantum_mechanics -[31]:https://ocw.mit.edu/courses/chemical-engineering/10-675j-computational-quantum-mechanics-of-molecular-and-extended-systems-fall-2004/lecture-notes/ -[32]:https://techcrunch.com/2018/03/05/googles-new-bristlecone-processor-brings-it-one-step-closer-to-quantum-supremacy/ -[33]:http://meetings.aps.org/Meeting/MAR18/Content/3475 -[34]:https://en.wikipedia.org/wiki/Superconducting_quantum_computing -[35]:https://en.wikipedia.org/wiki/Quantum_error_correction -[36]:https://en.wikipedia.org/wiki/Scalability -[37]:https://research.googleblog.com/2015/03/a-step-closer-to-quantum-computation.html -[38]:https://research.googleblog.com/2017/10/announcing-openfermion-open-source.html -[39]:https://research.googleblog.com/2016/06/quantum-annealing-with-digital-twist.html -[40]:https://arxiv.org/abs/1802.06002 -[41]:https://www.computerworld.com.au/article/644051/google-launches-quantum-framework-cirq-plans-bristlecone-cloud-move/ -[42]:https://en.wikipedia.org/wiki/Logic_gate diff --git a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md b/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md 
deleted file mode 100644 index c25239b7ba..0000000000 --- a/sources/tech/20180831 Publishing Markdown to HTML with MDwiki.md +++ /dev/null @@ -1,73 +0,0 @@ -Publishing Markdown to HTML with MDwiki -====== - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o) - -There are plenty of reasons to like Markdown, a simple language with an easy-to-learn syntax that can be used with any text editor. Using tools like [Pandoc][1], you can convert Markdown text to [a variety of popular formats][2], including HTML. You can also automate that conversion process in a web server. An HTML5 and JavaScript application called [MDwiki][3], created by Timo Dörr, can take a stack of Markdown files and turn them into a website when requested from a browser. The MDwiki site includes a how-to guide and other information to help you get started: - -![MDwiki site getting started][5] - -What an Mdwiki site looks like. - -Inside the web server, a basic MDwiki site looks like this: - -![MDwiki site inside web server][7] - -What the webserver folder for that site looks like. - -I renamed the MDwiki HTML file `START.HTML` for this project. There is also one Markdown file that deals with navigation and a JSON file to hold a few configuration settings. Everything else is site content. - -While the overall website design is pretty much fixed by MDwiki, the content, styling, and number of pages are not. You can view a selection of different sites generated by MDwiki at [the MDwiki site][8]. It is fair to say that MDwiki sites lack the visual appeal that a web designer could achieve—but they are functional, and users should balance their simple appearance against the speed and ease of creating and editing them. - -Markdown comes in various flavors that extend a stable core functionality for different specific purposes. MDwiki uses GitHub flavor [Markdown][9], which adds features such as formatted code blocks and syntax highlighting for popular programming languages, making it well-suited for producing program documentation and tutorials. - -MDwiki also supports what it calls "gimmicks," which add extra functionality such as embedding YouTube video content and displaying mathematical formulas. These are worth exploring if you need them for specific projects. I find MDwiki an ideal tool for creating technical documentation and educational resources. I have also discovered some tricks and hacks that might not be immediately apparent. - -MDwiki works with any modern web browser when deployed in a web server; however, you do not need a web server if you access MDwiki with Mozilla Firefox. Most MDwiki users will opt to deploy completed projects on a web server to avoid excluding potential users, but development and testing can be done with just a text editor and Firefox. Completed MDwiki projects that are loaded into a Moodle Virtual Learning Environment (VLE) can be read by any modern browser, which could be useful in educational contexts. (This is probably also true for other VLE software, but you should test that.) - -MDwiki's default color scheme is not ideal for all projects, but you can replace it with another theme downloaded from [Bootswatch.com][10]. To do this, simply open the MDwiki HTML file in an editor, take out the `extlib/css/bootstrap-3.0.0.min.css` code, and insert the downloaded Bootswatch theme. 
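-
-If your copy of MDwiki references the stylesheet with a link tag (the exact structure depends on the MDwiki build you downloaded), the edit is a one-line swap. The Bootswatch file name below is illustrative:
-```
-<!-- before: MDwiki's bundled Bootstrap stylesheet -->
-<link rel="stylesheet" href="extlib/css/bootstrap-3.0.0.min.css">
-
-<!-- after: a downloaded Bootswatch theme (illustrative file name) -->
-<link rel="stylesheet" href="bootswatch-superhero.min.css">
-```
-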
There is also an MDwiki gimmick that lets users choose a Bootswatch theme to replace the default after MDwiki loads in their browser. I often work with users who have visual impairments, and they tend to prefer high-contrast themes, with white text on a dark background.
-
-![MDwiki screen with Bootswatch Superhero theme][12]
-
-MDwiki screen using the Bootswatch Superhero theme
-
-MDwiki, Markdown files, and static images are fine for many purposes. However, you might sometimes want to include, say, a JavaScript slideshow or a feedback form. Markdown files can include HTML code, but mixing Markdown with HTML can get confusing. One solution is to create the feature you want in a separate HTML file and display it inside a Markdown file with an iframe tag. I took this idea from the [Twine Cookbook][13], a support site for the Twine interactive fiction engine. The Twine Cookbook doesn’t actually use MDwiki, but combining Markdown and iframe tags opens up a wide range of creative possibilities.
-
-Here is an example:
-
-This HTML will display an HTML page created by the Twine interactive fiction engine inside a Markdown file.
-```
-<!-- The original tag was lost in conversion; this src path is illustrative. -->
-<iframe src="twine/my_story.html" width="100%" height="600"></iframe>
-```
-
-The result in an MDwiki-generated site looks like this:
-
-![](https://opensource.com/sites/default/files/uploads/4_-_mdwiki_site_summary.png)
-
-In short, MDwiki is an excellent small application that achieves its purpose extremely well.
-
--------------------------------------------------------------------------------
-
-via: https://opensource.com/article/18/8/markdown-html-publishing
-
-作者:[Peter Cheer][a]
-选题:[lujun9972](https://github.com/lujun9972)
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://opensource.com/users/petercheer
-[1]: https://pandoc.org/
-[2]: https://opensource.com/downloads/pandoc-cheat-sheet
-[3]: http://dynalon.github.io/mdwiki/#!index.md
-[4]: https://opensource.com/file/407306
-[5]: https://opensource.com/sites/default/files/uploads/1_-_mdwiki_screenshot.png (MDwiki site getting started)
-[6]: https://opensource.com/file/407311
-[7]: https://opensource.com/sites/default/files/uploads/2_-_mdwiki_inside_web_server.png (MDwiki site inside web server)
-[8]: http://dynalon.github.io/mdwiki/#!examples.md
-[9]: https://guides.github.com/features/mastering-markdown/
-[10]: https://bootswatch.com/
-[11]: https://opensource.com/file/407316
-[12]: https://opensource.com/sites/default/files/uploads/3_-_mdwiki_bootswatch_superhero.png (MDwiki screen with Bootswatch Superhero theme)
-[13]: https://github.com/iftechfoundation/twine-cookbook
diff --git a/sources/tech/20180911 Know Your Storage- Block, File - Object.md b/sources/tech/20180911 Know Your Storage- Block, File - Object.md
deleted file mode 100644
index 24f179d9d5..0000000000
--- a/sources/tech/20180911 Know Your Storage- Block, File - Object.md
+++ /dev/null
@@ -1,62 +0,0 @@
-Know Your Storage: Block, File & Object
-======
-
-![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/block2_1920.jpg?itok=s1y6RLhT)
-
-Dealing with the tremendous amount of data generated today presents a big challenge for companies that create or consume such data. It’s a challenge for tech companies that are dealing with related storage issues.
-
-“Data is growing exponentially each year, and we find that the majority of data growth is due to increased consumption and industries adopting transformational projects to expand value.
Certainly, the Internet of Things (IoT) has contributed greatly to data growth, but the key challenge for software-defined storage is how to address the use cases associated with data growth,” said Michael St. Jean, principal product marketing manager, Red Hat Storage. - -Every challenge is an opportunity. “The deluge of data being generated by old and new sources today is certainly presenting us with opportunities to meet our customers escalating needs in the areas of scale, performance, resiliency, and governance,” said Tad Brockway, General Manager for Azure Storage, Media and Edge. - -### Trinity of modern software-defined storage - -There are three different kinds of storage solutions -- block, file, and object -- each serving a different purpose while working with the others. - -Block storage is the oldest form of data storage, where data is stored in fixed-length blocks or chunks of data. Block storage is used in enterprise storage environments and usually is accessed using Fibre Channel or iSCSI interface. “Block storage requires an application to map where the data is stored on the storage device,” according to SUSE’s Larry Morris, Sr. Product Manager, Software Defined Storage. - -Block storage is virtualized in storage area network and software defined storage systems, which are abstracted logical devices that reside on a shared hardware infrastructure and are created and presented to the host operating system of a server, virtual server, or hypervisor via protocols like SCSI, SATA, SAS, FCP, FCoE, or iSCSI. - -“Block storage splits a single storage volume (like a virtual or cloud storage node, or a good old fashioned hard disk) into individual instances known as blocks,” said St. Jean. - -Each block exists independently and can be formatted with its own data transfer protocol and operating system — giving users complete configuration autonomy. Because block storage systems aren’t burdened with the same investigative file-finding duties as the file storage systems, block storage is a faster storage system. Pairing that speed with configuration flexibility makes block storage ideal for raw server storage or rich media databases. - -Block storage can be used to host operating systems, applications, databases, entire virtual machines and containers. Traditionally, block storage can only be accessed by individual machine, or machines in a cluster, to which it has been presented. - -### File-based storage - -File-based storage uses a filesystem to map where the data is stored on the storage device. It’s a dominant technology used on direct- and networked-attached storage system, and it takes care of two things: organizing data and representing it to users. “With file storage, data is arranged on the server side in the exact same format as the clients see it. This allows the user to request a file by some unique identifier — like a name, location, or URL — which is communicated to the storage system using specific data transfer protocols,” said St. Jean. - -The result is a type of hierarchical file structure that can be navigated from top to bottom. File storage is layered on top of block storage, allowing users to see and access data as files and folders, but restricting access to the blocks that stand up those files and folders. - -“File storage is typically represented by shared filesystems like NFS and CIFS/SMB that can be accessed by many servers over an IP network. Access can be controlled at a file, directory, and export level via user and group permissions. 
File storage can be used to store files needed by multiple users and machines, application binaries, databases, virtual machines, and can be used by containers,” explained Brockway. - -### Object storage - -Object storage is the newest form of data storage, and it provides a repository for unstructured data which separates the content from the indexing and allows the concatenation of multiple files into an object. An object is a piece of data paired with any associated metadata that provides context about the bytes contained within the object (things like how old or big the data is). Those two things together — the data and metadata — make an object. - -One advantage of object storage is the unique identifier associated with each piece of data. Accessing the data involves using the unique identifier and does not require the application or user to know where the data is actually stored. Object data is accessed through APIs. - -“The data stored in objects is uncompressed and unencrypted, and the objects themselves are arranged in object stores (a central repository filled with many other objects) or containers (a package that contains all of the files an application needs to run). Objects, object stores, and containers are very flat in nature — compared to the hierarchical structure of file storage systems — which allow them to be accessed very quickly at huge scale,” explained St. Jean. - -Object stores can scale to many petabytes to accommodate the largest datasets and are a great choice for images, audio, video, logs, backups, and data used by analytics services. - -### Conclusion - -Now you know about the various types of storage and how they are used. Stay tuned to learn more about software-defined storage as we examine the topic in the future. - -Join us at [Open Source Summit + Embedded Linux Conference Europe][1] in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more. - --------------------------------------------------------------------------------- - -via: https://www.linux.com/blog/2018/9/know-your-storage-block-file-object - -作者:[Swapnil Bhartiya][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.linux.com/users/arnieswap -[1]: https://events.linuxfoundation.org/events/elc-openiot-europe-2018/ diff --git a/sources/tech/20180912 How to turn on an LED with Fedora IoT.md b/sources/tech/20180912 How to turn on an LED with Fedora IoT.md deleted file mode 100644 index 007cfc27ab..0000000000 --- a/sources/tech/20180912 How to turn on an LED with Fedora IoT.md +++ /dev/null @@ -1,201 +0,0 @@ -How to turn on an LED with Fedora IoT -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/08/LED-IoT-816x345.jpg) - -Do you enjoy running Fedora, containers, and have a Raspberry Pi? What about using all three together to play with LEDs? This article introduces Fedora IoT and shows you how to install a preview image on a Raspberry Pi. You’ll also learn how to interact with GPIO in order to light up an LED. - -### What is Fedora IoT? - -Fedora IoT is one of the current Fedora Project objectives, with a plan to become a full Fedora Edition. The result will be a system that runs on ARM (aarch64 only at the moment) devices such as the Raspberry Pi, as well as on the x86_64 architecture. 
- -![][1] - -Fedora IoT is based on OSTree, like [Fedora Silverblue][2] and the former [Atomic Host][3]. - -### Download and install Fedora IoT - -The official Fedora IoT images are coming with the Fedora 29 release. However, in the meantime you can download a [Fedora 28-based image][4] for this experiment. - -You have two options to install the system: either flash the SD card using a dd command, or use a fedora-arm-installer tool. The Fedora Wiki offers more information about [setting up a physical device][5] for IoT. Also, remember that you might need to resize the third partition. - -Once you insert the SD card into the device, you’ll need to complete the installation by creating a user. This step requires either a serial connection, or a HDMI display with a keyboard to interact with the device. - -When the system is installed and ready, the next step is to configure a network connection. Log in to the system with the user you have just created choose one of the following options: - - * If you need to configure your network manually, run a command similar to the following. Remember to use the right addresses for your network: -``` - $ nmcli connection add con-name cable ipv4.addresses \ - 192.168.0.10/24 ipv4.gateway 192.168.0.1 \ - connection.autoconnect true ipv4.dns "8.8.8.8,1.1.1.1" \ - type ethernet ifname eth0 ipv4.method manual - -``` - - * If there’s a DHCP service on your network, run a command like this: - -``` - $ nmcli con add type ethernet con-name cable ifname eth0 -``` - - - - -### **The GPIO interface in Fedora** - -Many tutorials about GPIO on Linux focus on a legacy GPIO sysfis interface. This interface is deprecated, and the upstream Linux kernel community plan to remove it completely, due to security and other issues. - -The Fedora kernel is already compiled without this legacy interface, so there’s no /sys/class/gpio on the system. This tutorial uses a new character device /dev/gpiochipN provided by the upstream kernel. This is the current way of interacting with GPIO. - -To interact with this new device, you need to use a library and a set of command line interface tools. The common command line tools such as echo or cat won’t work with this device. - -You can install the CLI tools by installing the libgpiod-utils package. A corresponding Python library is provided by the python3-libgpiod package. - -### **Creating a container with Podman** - -[Podman][6] is a container runtime with a command line interface similar to Docker. The big advantage of Podman is it doesn’t run any daemon in the background. That’s especially useful for devices with limited resources. Podman also allows you to start containerized services with systemd unit files. Plus, it has many additional features. - -We’ll create a container in these two steps: - - 1. Create a layered image containing the required packages. - 2. Create a new container starting from our image. - - - -First, create a file Dockerfile with the content below. This tells podman to build an image based on the latest Fedora image available in the registry. Then it updates the system inside and installs some packages: - -``` -FROM fedora:latest -RUN  dnf -y update -RUN  dnf -y install libgpiod-utils python3-libgpiod - -``` - -You have created a build recipe of a container image based on the latest Fedora with updates, plus packages to interact with GPIO. 
- -Now, run the following command to build your base image: - -``` -$ sudo podman build --tag fedora:gpiobase -f ./Dockerfile - -``` - -You have just created your custom image with all the bits in place. You can play with this base container images as many times as you want without installing the packages every time you run it. - -### Working with Podman - -To verify the image is present, run the following command: - -``` -$ sudo podman images -REPOSITORY                 TAG        IMAGE ID       CREATED          SIZE -localhost/fedora           gpiobase   67a2b2b93b4b   10 minutes ago  488MB -docker.io/library/fedora   latest     c18042d7fac6   2 days ago     300MB - -``` - -Now, start the container and do some actual experiments. Containers are normally isolated and don’t have an access to the host system, including the GPIO interface. Therefore, you need to mount it inside while starting the container. To do this, use the –device option in the following command: - -``` -$ sudo podman run -it --name gpioexperiment --device=/dev/gpiochip0 localhost/fedora:gpiobase /bin/bash - -``` - -You are now inside the running container. Before you move on, here are some more container commands. For now, exit the container by typing exit or pressing **Ctrl+D**. - -To list the the existing containers, including those not currently running, such as the one you just created, run: - -``` -$ sudo podman container ls -a -CONTAINER ID   IMAGE             COMMAND     CREATED          STATUS                              PORTS   NAMES -64e661d5d4e8   localhost/fedora:gpiobase   /bin/bash 37 seconds ago Exited (0) Less than a second ago           gpioexperiment - -``` - -To create a new container, run this command: - -``` -$ sudo podman run -it --name newexperiment --device=/dev/gpiochip0 localhost/fedora:gpiobase /bin/bash - -``` - -Delete it with the following command: - -``` -$ sudo podman rm newexperiment - -``` - -### **Turn on an LED** - -Now you can use the container you already created. If you exited from the container, start it again with this command: - -``` -$ sudo podman start -ia gpioexperiment - -``` - -As already discussed, you can use the CLI tools provided by the libgpiod-utils package in Fedora. To list the available GPIO chips, run: - -``` -$ gpiodetect -gpiochip0 [pinctrl-bcm2835] (54 lines) - -``` - -To get the list of the lines exposed by a specific chip, run: - -``` -$ gpioinfo gpiochip0 - -``` - -Notice there’s no correlation between the number of physical pins and the number of lines printed by the previous command. What’s important is the BCM number, as shown on [pinout.xyz][7]. It is not advised to play with the lines that don’t have a corresponding BCM number. - -Now, connect an LED to the physical pin 40, that is BCM 21. Remember: the shorter leg of the LED (the negative leg, called the cathode) must be connected to a GND pin of the Raspberry Pi with a 330 ohm resistor, and the long leg (the anode) to the physical pin 40. - -To turn the LED on, run the following command. It will stay on until you press **Ctrl+C** : - -``` -$ gpioset --mode=wait gpiochip0 21=1 - -``` - -To light it up for a certain period of time, add the -b (run in the background) and -s NUM (how many seconds) parameters, as shown below. For example, to light the LED for 5 seconds, run: - -``` -$ gpioset -b -s 5 --mode=time gpiochip0 21=1 - -``` - -Another useful command is gpioget. It gets the status of a pin (high or low), and can be useful to detect buttons and switches. 
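-
-For example, with a push button wired to a free line, say BCM 20 (a hypothetical wiring, not part of this article's circuit), you could poll its state like this, with 1 or 0 printed depending on whether the contact is closed:
-```
-$ gpioget gpiochip0 20
-0
-```
-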
- -![Closeup of LED connection with GPIO][8] - -### **Conclusion** - -You can also play with LEDs using Python — [there are some examples here][9]. And you can also use the i2c devices inside the container as well. In addition, Podman is not strictly related to this Fedora edition. You can install it on any existing Fedora Edition, or try it on the two new OSTree-based systems in Fedora: [Fedora Silverblue][2] and [Fedora CoreOS][10]. - - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/turnon-led-fedora-iot/ - -作者:[Alessio Ciregia][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://alciregi.id.fedoraproject.org/ -[1]: https://fedoramagazine.org/wp-content/uploads/2018/08/oled-1024x768.png -[2]: https://teamsilverblue.org/ -[3]: https://www.projectatomic.io/ -[4]: https://kojipkgs.fedoraproject.org/compose/iot/latest-Fedora-IoT-28/compose/IoT/ -[5]: https://fedoraproject.org/wiki/InternetOfThings/GettingStarted#Setting_up_a_Physical_Device -[6]: https://github.com/containers/libpod -[7]: https://pinout.xyz/ -[8]: https://fedoramagazine.org/wp-content/uploads/2018/08/breadboard-1024x768.png -[9]: https://github.com/brgl/libgpiod/tree/master/bindings/python/examples -[10]: https://coreos.fedoraproject.org/ diff --git a/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md b/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md deleted file mode 100644 index 365b5eb5f8..0000000000 --- a/sources/tech/20180913 Lab 1- PC Bootstrap and GCC Calling Conventions.md +++ /dev/null @@ -1,616 +0,0 @@ -Lab 1: PC Bootstrap and GCC Calling Conventions -====== -### Lab 1: Booting a PC - -#### Introduction - -This lab is split into three parts. The first part concentrates on getting familiarized with x86 assembly language, the QEMU x86 emulator, and the PC's power-on bootstrap procedure. The second part examines the boot loader for our 6.828 kernel, which resides in the `boot` directory of the `lab` tree. Finally, the third part delves into the initial template for our 6.828 kernel itself, named JOS, which resides in the `kernel` directory. - -##### Software Setup - -The files you will need for this and subsequent lab assignments in this course are distributed using the [Git][1] version control system. To learn more about Git, take a look at the [Git user's manual][2], or, if you are already familiar with other version control systems, you may find this [CS-oriented overview of Git][3] useful. - -The URL for the course Git repository is . To install the files in your Athena account, you need to _clone_ the course repository, by running the commands below. You must use an x86 Athena machine; that is, `uname -a` should mention `i386 GNU/Linux` or `i686 GNU/Linux` or `x86_64 GNU/Linux`. You can log into a public Athena host with `ssh -X athena.dialup.mit.edu`. - -``` -athena% mkdir ~/6.828 -athena% cd ~/6.828 -athena% add git -athena% git clone https://pdos.csail.mit.edu/6.828/2018/jos.git lab -Cloning into lab... -athena% cd lab -athena% - -``` - -Git allows you to keep track of the changes you make to the code. 
For example, if you are finished with one of the exercises, and want to checkpoint your progress, you can _commit_ your changes by running: - -``` -athena% git commit -am 'my solution for lab1 exercise 9' -Created commit 60d2135: my solution for lab1 exercise 9 - 1 files changed, 1 insertions(+), 0 deletions(-) -athena% - -``` - -You can keep track of your changes by using the git diff command. Running git diff will display the changes to your code since your last commit, and git diff origin/lab1 will display the changes relative to the initial code supplied for this lab. Here, `origin/lab1` is the name of the git branch with the initial code you downloaded from our server for this assignment. - -We have set up the appropriate compilers and simulators for you on Athena. To use them, run add -f 6.828. You must run this command every time you log in (or add it to your `~/.environment` file). If you get obscure errors while compiling or running `qemu`, double check that you added the course locker. - -If you are working on a non-Athena machine, you'll need to install `qemu` and possibly `gcc` following the directions on the [tools page][4]. We've made several useful debugging changes to `qemu` and some of the later labs depend on these patches, so you must build your own. If your machine uses a native ELF toolchain (such as Linux and most BSD's, but notably _not_ OS X), you can simply install `gcc` from your package manager. Otherwise, follow the directions on the tools page. - -##### Hand-In Procedure - -You will turn in your assignments using the [submission website][5]. You need to request an API key from the submission website before you can turn in any assignments or labs. - -The lab code comes with GNU Make rules to make submission easier. After committing your final changes to the lab, type make handin to submit your lab. - -``` -athena% git commit -am "ready to submit my lab" -[lab1 c2e3c8b] ready to submit my lab - 2 files changed, 18 insertions(+), 2 deletions(-) - -athena% make handin -git archive --prefix=lab1/ --format=tar HEAD | gzip > lab1-handin.tar.gz -Get an API key for yourself by visiting https://6828.scripts.mit.edu/2018/handin.py/ -Please enter your API key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX - % Total % Received % Xferd Average Speed Time Time Time Current - Dload Upload Total Spent Left Speed -100 50199 100 241 100 49958 414 85824 --:--:-- --:--:-- --:--:-- 85986 -athena% - -``` - -make handin will store your API key in _myapi.key_. If you need to change your API key, just remove this file and let make handin generate it again ( _myapi.key_ must not include newline characters). - -If use make handin and you have either uncomitted changes or untracked files, you will see output similar to the following: - -``` - M hello.c -?? bar.c -?? foo.pyc -Untracked files will not be handed in. Continue? [y/N] - -``` - -Inspect the above lines and make sure all files that your lab solution needs are tracked i.e. not listed in a line that begins with ??. - -In the case that make handin does not work properly, try fixing the problem with the curl or Git commands. Or you can run make tarball. This will make a tar file for you, which you can then upload via our [web interface][5]. - -You can run make grade to test your solutions with the grading program. The [web interface][5] uses the same grading program to assign your lab submission a grade. 
You should check the output of the grader (it may take a few minutes since the grader runs periodically) and ensure that you received the grade which you expected. If the grades don't match, your lab submission probably has a bug -- check the output of the grader (resp-lab*.txt) to see which particular test failed. - -For Lab 1, you do not need to turn in answers to any of the questions below. (Do answer them for yourself though! They will help with the rest of the lab.) - -#### Part 1: PC Bootstrap - -The purpose of the first exercise is to introduce you to x86 assembly language and the PC bootstrap process, and to get you started with QEMU and QEMU/GDB debugging. You will not have to write any code for this part of the lab, but you should go through it anyway for your own understanding and be prepared to answer the questions posed below. - -##### Getting Started with x86 assembly - -If you are not already familiar with x86 assembly language, you will quickly become familiar with it during this course! The [PC Assembly Language Book][6] is an excellent place to start. Hopefully, the book contains mixture of new and old material for you. - -_Warning:_ Unfortunately the examples in the book are written for the NASM assembler, whereas we will be using the GNU assembler. NASM uses the so-called _Intel_ syntax while GNU uses the _AT &T_ syntax. While semantically equivalent, an assembly file will differ quite a lot, at least superficially, depending on which syntax is used. Luckily the conversion between the two is pretty simple, and is covered in [Brennan's Guide to Inline Assembly][7]. - -Exercise 1. Familiarize yourself with the assembly language materials available on [the 6.828 reference page][8]. You don't have to read them now, but you'll almost certainly want to refer to some of this material when reading and writing x86 assembly. - -We do recommend reading the section "The Syntax" in [Brennan's Guide to Inline Assembly][7]. It gives a good (and quite brief) description of the AT&T assembly syntax we'll be using with the GNU assembler in JOS. - -Certainly the definitive reference for x86 assembly language programming is Intel's instruction set architecture reference, which you can find on [the 6.828 reference page][8] in two flavors: an HTML edition of the old [80386 Programmer's Reference Manual][9], which is much shorter and easier to navigate than more recent manuals but describes all of the x86 processor features that we will make use of in 6.828; and the full, latest and greatest [IA-32 Intel Architecture Software Developer's Manuals][10] from Intel, covering all the features of the most recent processors that we won't need in class but you may be interested in learning about. An equivalent (and often friendlier) set of manuals is [available from AMD][11]. Save the Intel/AMD architecture manuals for later or use them for reference when you want to look up the definitive explanation of a particular processor feature or instruction. - -##### Simulating the x86 - -Instead of developing the operating system on a real, physical personal computer (PC), we use a program that faithfully emulates a complete PC: the code you write for the emulator will boot on a real PC too. Using an emulator simplifies debugging; you can, for example, set break points inside of the emulated x86, which is difficult to do with the silicon version of an x86. - -In 6.828 we will use the [QEMU Emulator][12], a modern and relatively fast emulator. 
While QEMU's built-in monitor provides only limited debugging support, QEMU can act as a remote debugging target for the [GNU debugger][13] (GDB), which we'll use in this lab to step through the early boot process. - -To get started, extract the Lab 1 files into your own directory on Athena as described above in "Software Setup", then type make (or gmake on BSD systems) in the `lab` directory to build the minimal 6.828 boot loader and kernel you will start with. (It's a little generous to call the code we're running here a "kernel," but we'll flesh it out throughout the semester.) - -``` -athena% cd lab -athena% make -+ as kern/entry.S -+ cc kern/entrypgdir.c -+ cc kern/init.c -+ cc kern/console.c -+ cc kern/monitor.c -+ cc kern/printf.c -+ cc kern/kdebug.c -+ cc lib/printfmt.c -+ cc lib/readline.c -+ cc lib/string.c -+ ld obj/kern/kernel -+ as boot/boot.S -+ cc -Os boot/main.c -+ ld boot/boot -boot block is 380 bytes (max 510) -+ mk obj/kern/kernel.img - -``` - -(If you get errors like "undefined reference to `__udivdi3'", you probably don't have the 32-bit gcc multilib. If you're running Debian or Ubuntu, try installing the gcc-multilib package.) - -Now you're ready to run QEMU, supplying the file `obj/kern/kernel.img`, created above, as the contents of the emulated PC's "virtual hard disk." This hard disk image contains both our boot loader (`obj/boot/boot`) and our kernel (`obj/kernel`). - -``` -athena% make qemu - -``` - -or - -``` -athena% make qemu-nox - -``` - -This executes QEMU with the options required to set the hard disk and direct serial port output to the terminal. Some text should appear in the QEMU window: - -``` -Booting from Hard Disk... -6828 decimal is XXX octal! -entering test_backtrace 5 -entering test_backtrace 4 -entering test_backtrace 3 -entering test_backtrace 2 -entering test_backtrace 1 -entering test_backtrace 0 -leaving test_backtrace 0 -leaving test_backtrace 1 -leaving test_backtrace 2 -leaving test_backtrace 3 -leaving test_backtrace 4 -leaving test_backtrace 5 -Welcome to the JOS kernel monitor! -Type 'help' for a list of commands. -K> - -``` - -Everything after '`Booting from Hard Disk...`' was printed by our skeletal JOS kernel; the `K>` is the prompt printed by the small _monitor_ , or interactive control program, that we've included in the kernel. If you used make qemu, these lines printed by the kernel will appear in both the regular shell window from which you ran QEMU and the QEMU display window. This is because for testing and lab grading purposes we have set up the JOS kernel to write its console output not only to the virtual VGA display (as seen in the QEMU window), but also to the simulated PC's virtual serial port, which QEMU in turn outputs to its own standard output. Likewise, the JOS kernel will take input from both the keyboard and the serial port, so you can give it commands in either the VGA display window or the terminal running QEMU. Alternatively, you can use the serial console without the virtual VGA by running make qemu-nox. This may be convenient if you are SSH'd into an Athena dialup. To quit qemu, type Ctrl+a x. - -There are only two commands you can give to the kernel monitor, `help` and `kerninfo`. 
- -``` -K> help -help - display this list of commands -kerninfo - display information about the kernel -K> kerninfo -Special kernel symbols: - entry f010000c (virt) 0010000c (phys) - etext f0101a75 (virt) 00101a75 (phys) - edata f0112300 (virt) 00112300 (phys) - end f0112960 (virt) 00112960 (phys) -Kernel executable memory footprint: 75KB -K> - -``` - -The `help` command is obvious, and we will shortly discuss the meaning of what the `kerninfo` command prints. Although simple, it's important to note that this kernel monitor is running "directly" on the "raw (virtual) hardware" of the simulated PC. This means that you should be able to copy the contents of `obj/kern/kernel.img` onto the first few sectors of a _real_ hard disk, insert that hard disk into a real PC, turn it on, and see exactly the same thing on the PC's real screen as you did above in the QEMU window. (We don't recommend you do this on a real machine with useful information on its hard disk, though, because copying `kernel.img` onto the beginning of its hard disk will trash the master boot record and the beginning of the first partition, effectively causing everything previously on the hard disk to be lost!) - -##### The PC's Physical Address Space - -We will now dive into a bit more detail about how a PC starts up. A PC's physical address space is hard-wired to have the following general layout: - -``` -+------------------+ <- 0xFFFFFFFF (4GB) -| 32-bit | -| memory mapped | -| devices | -| | -/\/\/\/\/\/\/\/\/\/\ - -/\/\/\/\/\/\/\/\/\/\ -| | -| Unused | -| | -+------------------+ <- depends on amount of RAM -| | -| | -| Extended Memory | -| | -| | -+------------------+ <- 0x00100000 (1MB) -| BIOS ROM | -+------------------+ <- 0x000F0000 (960KB) -| 16-bit devices, | -| expansion ROMs | -+------------------+ <- 0x000C0000 (768KB) -| VGA Display | -+------------------+ <- 0x000A0000 (640KB) -| | -| Low Memory | -| | -+------------------+ <- 0x00000000 - -``` - -The first PCs, which were based on the 16-bit Intel 8088 processor, were only capable of addressing 1MB of physical memory. The physical address space of an early PC would therefore start at 0x00000000 but end at 0x000FFFFF instead of 0xFFFFFFFF. The 640KB area marked "Low Memory" was the _only_ random-access memory (RAM) that an early PC could use; in fact the very earliest PCs only could be configured with 16KB, 32KB, or 64KB of RAM! - -The 384KB area from 0x000A0000 through 0x000FFFFF was reserved by the hardware for special uses such as video display buffers and firmware held in non-volatile memory. The most important part of this reserved area is the Basic Input/Output System (BIOS), which occupies the 64KB region from 0x000F0000 through 0x000FFFFF. In early PCs the BIOS was held in true read-only memory (ROM), but current PCs store the BIOS in updateable flash memory. The BIOS is responsible for performing basic system initialization such as activating the video card and checking the amount of memory installed. After performing this initialization, the BIOS loads the operating system from some appropriate location such as floppy disk, hard disk, CD-ROM, or the network, and passes control of the machine to the operating system. - -When Intel finally "broke the one megabyte barrier" with the 80286 and 80386 processors, which supported 16MB and 4GB physical address spaces respectively, the PC architects nevertheless preserved the original layout for the low 1MB of physical address space in order to ensure backward compatibility with existing software. 
Modern PCs therefore have a "hole" in physical memory from 0x000A0000 to 0x00100000, dividing RAM into "low" or "conventional memory" (the first 640KB) and "extended memory" (everything else). In addition, some space at the very top of the PC's 32-bit physical address space, above all physical RAM, is now commonly reserved by the BIOS for use by 32-bit PCI devices. - -Recent x86 processors can support _more_ than 4GB of physical RAM, so RAM can extend further above 0xFFFFFFFF. In this case the BIOS must arrange to leave a _second_ hole in the system's RAM at the top of the 32-bit addressable region, to leave room for these 32-bit devices to be mapped. Because of design limitations JOS will use only the first 256MB of a PC's physical memory anyway, so for now we will pretend that all PCs have "only" a 32-bit physical address space. But dealing with complicated physical address spaces and other aspects of hardware organization that evolved over many years is one of the important practical challenges of OS development. - -##### The ROM BIOS - -In this portion of the lab, you'll use QEMU's debugging facilities to investigate how an IA-32 compatible computer boots. - -Open two terminal windows and cd both shells into your lab directory. In one, enter make qemu-gdb (or make qemu-nox-gdb). This starts up QEMU, but QEMU stops just before the processor executes the first instruction and waits for a debugging connection from GDB. In the second terminal, from the same directory you ran `make`, run make gdb. You should see something like this, - -``` -athena% make gdb -GNU gdb (GDB) 6.8-debian -Copyright (C) 2008 Free Software Foundation, Inc. -License GPLv3+: GNU GPL version 3 or later -This is free software: you are free to change and redistribute it. -There is NO WARRANTY, to the extent permitted by law. Type "show copying" -and "show warranty" for details. -This GDB was configured as "i486-linux-gnu". -+ target remote localhost:26000 -The target architecture is assumed to be i8086 -[f000:fff0] 0xffff0: ljmp $0xf000,$0xe05b -0x0000fff0 in ?? () -+ symbol-file obj/kern/kernel -(gdb) - -``` - -We provided a `.gdbinit` file that set up GDB to debug the 16-bit code used during early boot and directed it to attach to the listening QEMU. (If it doesn't work, you may have to add an `add-auto-load-safe-path` in your `.gdbinit` in your home directory to convince `gdb` to process the `.gdbinit` we provided. `gdb` will tell you if you have to do this.) - -The following line: - -``` -[f000:fff0] 0xffff0: ljmp $0xf000,$0xe05b - -``` - -is GDB's disassembly of the first instruction to be executed. From this output you can conclude a few things: - - * The IBM PC starts executing at physical address 0x000ffff0, which is at the very top of the 64KB area reserved for the ROM BIOS. - * The PC starts executing with `CS = 0xf000` and `IP = 0xfff0`. - * The first instruction to be executed is a `jmp` instruction, which jumps to the segmented address `CS = 0xf000` and `IP = 0xe05b`. - - - -Why does QEMU start like this? This is how Intel designed the 8088 processor, which IBM used in their original PC. Because the BIOS in a PC is "hard-wired" to the physical address range 0x000f0000-0x000fffff, this design ensures that the BIOS always gets control of the machine first after power-up or any system restart - which is crucial because on power-up there _is_ no other software anywhere in the machine's RAM that the processor could execute. 
The QEMU emulator comes with its own BIOS, which it places at this location in the processor's simulated physical address space. On processor reset, the (simulated) processor enters real mode and sets CS to 0xf000 and the IP to 0xfff0, so that execution begins at that (CS:IP) segment address. How does the segmented address 0xf000:fff0 turn into a physical address?

To answer that we need to know a bit about real mode addressing. In real mode (the mode that PC starts off in), address translation works according to the formula: _physical address_ = 16 * _segment_ + _offset_. So, when the PC sets CS to 0xf000 and IP to 0xfff0, the physical address referenced is:

```
   16 * 0xf000 + 0xfff0   # in hex multiplication by 16 is
   = 0xf0000 + 0xfff0     # easy--just append a 0.
   = 0xffff0
```

`0xffff0` is 16 bytes before the end of the BIOS (`0x100000`). Therefore we shouldn't be surprised that the first thing that the BIOS does is `jmp` backwards to an earlier location in the BIOS; after all how much could it accomplish in just 16 bytes?

Exercise 2. Use GDB's si (Step Instruction) command to trace into the ROM BIOS for a few more instructions, and try to guess what it might be doing. You might want to look at [Phil Storrs I/O Ports Description][14], as well as other materials on the [6.828 reference materials page][8]. No need to figure out all the details - just the general idea of what the BIOS is doing first.

When the BIOS runs, it sets up an interrupt descriptor table and initializes various devices such as the VGA display. This is where the "`Starting SeaBIOS`" message you see in the QEMU window comes from.

After initializing the PCI bus and all the important devices the BIOS knows about, it searches for a bootable device such as a floppy, hard drive, or CD-ROM. Eventually, when it finds a bootable disk, the BIOS reads the _boot loader_ from the disk and transfers control to it.

#### Part 2: The Boot Loader

Floppy and hard disks for PCs are divided into 512 byte regions called _sectors_. A sector is the disk's minimum transfer granularity: each read or write operation must be one or more sectors in size and aligned on a sector boundary. If the disk is bootable, the first sector is called the _boot sector_, since this is where the boot loader code resides. When the BIOS finds a bootable floppy or hard disk, it loads the 512-byte boot sector into memory at physical addresses 0x7c00 through 0x7dff, and then uses a `jmp` instruction to set the CS:IP to `0000:7c00`, passing control to the boot loader. Like the BIOS load address, these addresses are fairly arbitrary - but they are fixed and standardized for PCs.

The ability to boot from a CD-ROM came much later during the evolution of the PC, and as a result the PC architects took the opportunity to rethink the boot process slightly. As a result, the way a modern BIOS boots from a CD-ROM is a bit more complicated (and more powerful). CD-ROMs use a sector size of 2048 bytes instead of 512, and the BIOS can load a much larger boot image from the disk into memory (not just one sector) before transferring control to it. For more information, see the ["El Torito" Bootable CD-ROM Format Specification][15].

For 6.828, however, we will use the conventional hard drive boot mechanism, which means that our boot loader must fit into a measly 512 bytes. The boot loader consists of one assembly language source file, `boot/boot.S`, and one C source file, `boot/main.c`. Look through these source files carefully and make sure you understand what's going on. The boot loader must perform two main functions:

 1. First, the boot loader switches the processor from real mode to _32-bit protected mode_, because it is only in this mode that software can access all the memory above 1MB in the processor's physical address space. Protected mode is described briefly in sections 1.2.7 and 1.2.8 of [PC Assembly Language][6], and in great detail in the Intel architecture manuals. At this point you only have to understand that translation of segmented addresses (segment:offset pairs) into physical addresses happens differently in protected mode, and that after the transition offsets are 32 bits instead of 16.
 2. Second, the boot loader reads the kernel from the hard disk by directly accessing the IDE disk device registers via the x86's special I/O instructions. If you would like to understand better what the particular I/O instructions here mean, check out the "IDE hard drive controller" section on [the 6.828 reference page][8]. You will not need to learn much about programming specific devices in this class: writing device drivers is in practice a very important part of OS development, but from a conceptual or architectural viewpoint it is also one of the least interesting.

After you understand the boot loader source code, look at the file `obj/boot/boot.asm`. This file is a disassembly of the boot loader that our GNUmakefile creates _after_ compiling the boot loader. This disassembly file makes it easy to see exactly where in physical memory all of the boot loader's code resides, and makes it easier to track what's happening while stepping through the boot loader in GDB. Likewise, `obj/kern/kernel.asm` contains a disassembly of the JOS kernel, which can often be useful for debugging.

You can set address breakpoints in GDB with the `b` command. For example, b *0x7c00 sets a breakpoint at address 0x7C00. Once at a breakpoint, you can continue execution using the c and si commands: c causes QEMU to continue execution until the next breakpoint (or until you press Ctrl-C in GDB), and si _N_ steps through the instructions _`N`_ at a time.

To examine instructions in memory (besides the immediate next one to be executed, which GDB prints automatically), you use the x/i command. This command has the syntax x/_N_i _ADDR_, where _N_ is the number of consecutive instructions to disassemble and _ADDR_ is the memory address at which to start disassembling.

Exercise 3. Take a look at the [lab tools guide][16], especially the section on GDB commands. Even if you're familiar with GDB, this includes some esoteric GDB commands that are useful for OS work.

Set a breakpoint at address 0x7c00, which is where the boot sector will be loaded. Continue execution until that breakpoint. Trace through the code in `boot/boot.S`, using the source code and the disassembly file `obj/boot/boot.asm` to keep track of where you are. Also use the `x/i` command in GDB to disassemble sequences of instructions in the boot loader, and compare the original boot loader source code with both the disassembly in `obj/boot/boot.asm` and GDB.

Trace into `bootmain()` in `boot/main.c`, and then into `readsect()`. Identify the exact assembly instructions that correspond to each of the statements in `readsect()`.
Trace through the rest of `readsect()` and back out into `bootmain()`, and identify the begin and end of the `for` loop that reads the remaining sectors of the kernel from the disk. Find out what code will run when the loop is finished, set a breakpoint there, and continue to that breakpoint. Then step through the remainder of the boot loader. - -Be able to answer the following questions: - - * At what point does the processor start executing 32-bit code? What exactly causes the switch from 16- to 32-bit mode? - * What is the _last_ instruction of the boot loader executed, and what is the _first_ instruction of the kernel it just loaded? - * _Where_ is the first instruction of the kernel? - * How does the boot loader decide how many sectors it must read in order to fetch the entire kernel from disk? Where does it find this information? - - - -##### Loading the Kernel - -We will now look in further detail at the C language portion of the boot loader, in `boot/main.c`. But before doing so, this is a good time to stop and review some of the basics of C programming. - -Exercise 4. Read about programming with pointers in C. The best reference for the C language is _The C Programming Language_ by Brian Kernighan and Dennis Ritchie (known as 'K &R'). We recommend that students purchase this book (here is an [Amazon Link][17]) or find one of [MIT's 7 copies][18]. - -Read 5.1 (Pointers and Addresses) through 5.5 (Character Pointers and Functions) in K&R. Then download the code for [pointers.c][19], run it, and make sure you understand where all of the printed values come from. In particular, make sure you understand where the pointer addresses in printed lines 1 and 6 come from, how all the values in printed lines 2 through 4 get there, and why the values printed in line 5 are seemingly corrupted. - -There are other references on pointers in C (e.g., [A tutorial by Ted Jensen][20] that cites K&R heavily), though not as strongly recommended. - -_Warning:_ Unless you are already thoroughly versed in C, do not skip or even skim this reading exercise. If you do not really understand pointers in C, you will suffer untold pain and misery in subsequent labs, and then eventually come to understand them the hard way. Trust us; you don't want to find out what "the hard way" is. - -To make sense out of `boot/main.c` you'll need to know what an ELF binary is. When you compile and link a C program such as the JOS kernel, the compiler transforms each C source ('`.c`') file into an _object_ ('`.o`') file containing assembly language instructions encoded in the binary format expected by the hardware. The linker then combines all of the compiled object files into a single _binary image_ such as `obj/kern/kernel`, which in this case is a binary in the ELF format, which stands for "Executable and Linkable Format". - -Full information about this format is available in [the ELF specification][21] on [our reference page][8], but you will not need to delve very deeply into the details of this format in this class. Although as a whole the format is quite powerful and complex, most of the complex parts are for supporting dynamic loading of shared libraries, which we will not do in this class. The [Wikipedia page][22] has a short description. - -For purposes of 6.828, you can consider an ELF executable to be a header with loading information, followed by several _program sections_ , each of which is a contiguous chunk of code or data intended to be loaded into memory at a specified address. 
The boot loader does not modify the code or data; it loads it into memory and starts executing it. - -An ELF binary starts with a fixed-length _ELF header_ , followed by a variable-length _program header_ listing each of the program sections to be loaded. The C definitions for these ELF headers are in `inc/elf.h`. The program sections we're interested in are: - - * `.text`: The program's executable instructions. - * `.rodata`: Read-only data, such as ASCII string constants produced by the C compiler. (We will not bother setting up the hardware to prohibit writing, however.) - * `.data`: The data section holds the program's initialized data, such as global variables declared with initializers like `int x = 5;`. - - - -When the linker computes the memory layout of a program, it reserves space for _uninitialized_ global variables, such as `int x;`, in a section called `.bss` that immediately follows `.data` in memory. C requires that "uninitialized" global variables start with a value of zero. Thus there is no need to store contents for `.bss` in the ELF binary; instead, the linker records just the address and size of the `.bss` section. The loader or the program itself must arrange to zero the `.bss` section. - -Examine the full list of the names, sizes, and link addresses of all the sections in the kernel executable by typing: - -``` -athena% objdump -h obj/kern/kernel - -(If you compiled your own toolchain, you may need to use i386-jos-elf-objdump) - -``` - -You will see many more sections than the ones we listed above, but the others are not important for our purposes. Most of the others are to hold debugging information, which is typically included in the program's executable file but not loaded into memory by the program loader. - -Take particular note of the "VMA" (or _link address_ ) and the "LMA" (or _load address_ ) of the `.text` section. The load address of a section is the memory address at which that section should be loaded into memory. - -The link address of a section is the memory address from which the section expects to execute. The linker encodes the link address in the binary in various ways, such as when the code needs the address of a global variable, with the result that a binary usually won't work if it is executing from an address that it is not linked for. (It is possible to generate _position-independent_ code that does not contain any such absolute addresses. This is used extensively by modern shared libraries, but it has performance and complexity costs, so we won't be using it in 6.828.) - -Typically, the link and load addresses are the same. For example, look at the `.text` section of the boot loader: - -``` -athena% objdump -h obj/boot/boot.out - -``` - -The boot loader uses the ELF _program headers_ to decide how to load the sections. The program headers specify which parts of the ELF object to load into memory and the destination address each should occupy. You can inspect the program headers by typing: - -``` -athena% objdump -x obj/kern/kernel - -``` - -The program headers are then listed under "Program Headers" in the output of objdump. The areas of the ELF object that need to be loaded into memory are those that are marked as "LOAD". Other information for each program header is given, such as the virtual address ("vaddr"), the physical address ("paddr"), and the size of the loaded area ("memsz" and "filesz"). 
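Putting the header information to work, the load loop at the heart of `boot/main.c` is short enough to sketch in full. This is a condensed rendition for orientation, not a drop-in replacement: the `ELFHDR`, `SECTSIZE`, `struct Proghdr`, and `readseg()` names come from the JOS sources (`inc/elf.h` and `boot/main.c`), and the error path is abbreviated to a comment.

```c
#include <inc/elf.h>            // struct Elf, struct Proghdr, ELF_MAGIC

#define SECTSIZE 512
#define ELFHDR ((struct Elf *) 0x10000)  // scratch space for the ELF header

void readseg(uint32_t pa, uint32_t count, uint32_t offset);

void
bootmain(void)
{
	struct Proghdr *ph, *eph;

	// Read the first page off the disk: enough to cover the ELF
	// header and the program header table.
	readseg((uint32_t) ELFHDR, SECTSIZE * 8, 0);

	if (ELFHDR->e_magic != ELF_MAGIC)
		goto bad;               // not a valid ELF image

	// Copy each program segment from disk to its destination
	// physical address (ph->p_pa), as described above.
	ph = (struct Proghdr *) ((uint8_t *) ELFHDR + ELFHDR->e_phoff);
	eph = ph + ELFHDR->e_phnum;
	for (; ph < eph; ph++)
		readseg(ph->p_pa, ph->p_memsz, ph->p_offset);

	// Jump to the kernel's entry point; this call never returns.
	((void (*)(void)) (ELFHDR->e_entry))();

bad:
	/* the real boot loader signals QEMU's debug port, then spins */
	for (;;)
		;
}
```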
- -Back in boot/main.c, the `ph->p_pa` field of each program header contains the segment's destination physical address (in this case, it really is a physical address, though the ELF specification is vague on the actual meaning of this field). - -The BIOS loads the boot sector into memory starting at address 0x7c00, so this is the boot sector's load address. This is also where the boot sector executes from, so this is also its link address. We set the link address by passing `-Ttext 0x7C00` to the linker in `boot/Makefrag`, so the linker will produce the correct memory addresses in the generated code. - -Exercise 5. Trace through the first few instructions of the boot loader again and identify the first instruction that would "break" or otherwise do the wrong thing if you were to get the boot loader's link address wrong. Then change the link address in `boot/Makefrag` to something wrong, run make clean, recompile the lab with make, and trace into the boot loader again to see what happens. Don't forget to change the link address back and make clean again afterward! - -Look back at the load and link addresses for the kernel. Unlike the boot loader, these two addresses aren't the same: the kernel is telling the boot loader to load it into memory at a low address (1 megabyte), but it expects to execute from a high address. We'll dig in to how we make this work in the next section. - -Besides the section information, there is one more field in the ELF header that is important to us, named `e_entry`. This field holds the link address of the _entry point_ in the program: the memory address in the program's text section at which the program should begin executing. You can see the entry point: - -``` -athena% objdump -f obj/kern/kernel - -``` - -You should now be able to understand the minimal ELF loader in `boot/main.c`. It reads each section of the kernel from disk into memory at the section's load address and then jumps to the kernel's entry point. - -Exercise 6. We can examine memory using GDB's x command. The [GDB manual][23] has full details, but for now, it is enough to know that the command x/ _N_ x _ADDR_ prints _`N`_ words of memory at _`ADDR`_. (Note that both '`x`'s in the command are lowercase.) _Warning_ : The size of a word is not a universal standard. In GNU assembly, a word is two bytes (the 'w' in xorw, which stands for word, means 2 bytes). - -Reset the machine (exit QEMU/GDB and start them again). Examine the 8 words of memory at 0x00100000 at the point the BIOS enters the boot loader, and then again at the point the boot loader enters the kernel. Why are they different? What is there at the second breakpoint? (You do not really need to use QEMU to answer this question. Just think.) - -#### Part 3: The Kernel - -We will now start to examine the minimal JOS kernel in a bit more detail. (And you will finally get to write some code!). Like the boot loader, the kernel begins with some assembly language code that sets things up so that C language code can execute properly. - -##### Using virtual memory to work around position dependence - -When you inspected the boot loader's link and load addresses above, they matched perfectly, but there was a (rather large) disparity between the _kernel's_ link address (as printed by objdump) and its load address. Go back and check both and make sure you can see what we're talking about. (Linking the kernel is more complicated than the boot loader, so the link and load addresses are at the top of `kern/kernel.ld`.) 
- -Operating system kernels often like to be linked and run at very high _virtual address_ , such as 0xf0100000, in order to leave the lower part of the processor's virtual address space for user programs to use. The reason for this arrangement will become clearer in the next lab. - -Many machines don't have any physical memory at address 0xf0100000, so we can't count on being able to store the kernel there. Instead, we will use the processor's memory management hardware to map virtual address 0xf0100000 (the link address at which the kernel code _expects_ to run) to physical address 0x00100000 (where the boot loader loaded the kernel into physical memory). This way, although the kernel's virtual address is high enough to leave plenty of address space for user processes, it will be loaded in physical memory at the 1MB point in the PC's RAM, just above the BIOS ROM. This approach requires that the PC have at least a few megabytes of physical memory (so that physical address 0x00100000 works), but this is likely to be true of any PC built after about 1990. - -In fact, in the next lab, we will map the _entire_ bottom 256MB of the PC's physical address space, from physical addresses 0x00000000 through 0x0fffffff, to virtual addresses 0xf0000000 through 0xffffffff respectively. You should now see why JOS can only use the first 256MB of physical memory. - -For now, we'll just map the first 4MB of physical memory, which will be enough to get us up and running. We do this using the hand-written, statically-initialized page directory and page table in `kern/entrypgdir.c`. For now, you don't have to understand the details of how this works, just the effect that it accomplishes. Up until `kern/entry.S` sets the `CR0_PG` flag, memory references are treated as physical addresses (strictly speaking, they're linear addresses, but boot/boot.S set up an identity mapping from linear addresses to physical addresses and we're never going to change that). Once `CR0_PG` is set, memory references are virtual addresses that get translated by the virtual memory hardware to physical addresses. `entry_pgdir` translates virtual addresses in the range 0xf0000000 through 0xf0400000 to physical addresses 0x00000000 through 0x00400000, as well as virtual addresses 0x00000000 through 0x00400000 to physical addresses 0x00000000 through 0x00400000. Any virtual address that is not in one of these two ranges will cause a hardware exception which, since we haven't set up interrupt handling yet, will cause QEMU to dump the machine state and exit (or endlessly reboot if you aren't using the 6.828-patched version of QEMU). - -Exercise 7. Use QEMU and GDB to trace into the JOS kernel and stop at the `movl %eax, %cr0`. Examine memory at 0x00100000 and at 0xf0100000. Now, single step over that instruction using the stepi GDB command. Again, examine memory at 0x00100000 and at 0xf0100000. Make sure you understand what just happened. - -What is the first instruction _after_ the new mapping is established that would fail to work properly if the mapping weren't in place? Comment out the `movl %eax, %cr0` in `kern/entry.S`, trace into it, and see if you were right. - -##### Formatted Printing to the Console - -Most people take functions like `printf()` for granted, sometimes even thinking of them as "primitives" of the C language. But in an OS kernel, we have to implement all I/O ourselves. - -Read through `kern/printf.c`, `lib/printfmt.c`, and `kern/console.c`, and make sure you understand their relationship. 
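Before the exercise, it may help to see the shape of the plumbing between these three files. The sketch below is a lightly simplified rendition of the glue in `kern/printf.c`; the `cputchar()` and `vprintfmt()` names and signatures are the ones JOS declares in `inc/stdio.h`, but consult the real files for the exact details.

```c
#include <inc/types.h>
#include <inc/stdarg.h>

// kern/console.c exports cputchar(), which writes one character to
// the serial port, the parallel port, and the CGA display.
void cputchar(int c);

// lib/printfmt.c exports vprintfmt(), which parses the format string
// and calls back into putch() once per output character.
void vprintfmt(void (*putch)(int, void *), void *putdat,
               const char *fmt, va_list ap);

static void
putch(int ch, int *cnt)
{
	cputchar(ch);
	(*cnt)++;               // count characters printed so far
}

int
vcprintf(const char *fmt, va_list ap)
{
	int cnt = 0;

	vprintfmt((void *) putch, &cnt, fmt, ap);
	return cnt;
}

int
cprintf(const char *fmt, ...)
{
	va_list ap;
	int cnt;

	va_start(ap, fmt);      // ap points at the arguments after fmt
	cnt = vcprintf(fmt, ap);
	va_end(ap);
	return cnt;
}
```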
It will become clear in later labs why `printfmt.c` is located in the separate `lib` directory.

Exercise 8. We have omitted a small fragment of code - the code necessary to print octal numbers using patterns of the form "%o". Find and fill in this code fragment.

Be able to answer the following questions:

 1. Explain the interface between `printf.c` and `console.c`. Specifically, what function does `console.c` export? How is this function used by `printf.c`?

 2. Explain the following from `console.c`:
```
1 if (crt_pos >= CRT_SIZE) {
2         int i;
3         memmove(crt_buf, crt_buf + CRT_COLS, (CRT_SIZE - CRT_COLS) * sizeof(uint16_t));
4         for (i = CRT_SIZE - CRT_COLS; i < CRT_SIZE; i++)
5                 crt_buf[i] = 0x0700 | ' ';
6         crt_pos -= CRT_COLS;
7 }
```

 3. For the following questions you might wish to consult the notes for Lecture 2. These notes cover GCC's calling convention on the x86.

Trace the execution of the following code step-by-step:
```
int x = 1, y = 3, z = 4;
cprintf("x %d, y %x, z %d\n", x, y, z);
```

  * In the call to `cprintf()`, to what does `fmt` point? To what does `ap` point?
  * List (in order of execution) each call to `cons_putc`, `va_arg`, and `vcprintf`. For `cons_putc`, list its argument as well. For `va_arg`, list what `ap` points to before and after the call. For `vcprintf` list the values of its two arguments.

 4. Run the following code.
```
unsigned int i = 0x00646c72;
cprintf("H%x Wo%s", 57616, &i);
```

What is the output? Explain how this output is arrived at in the step-by-step manner of the previous exercise. [Here's an ASCII table][24] that maps bytes to characters.

The output depends on the fact that the x86 is little-endian. If the x86 were instead big-endian, what would you set `i` to in order to yield the same output? Would you need to change `57616` to a different value?

[Here's a description of little- and big-endian][25] and [a more whimsical description][26].

 5. In the following code, what is going to be printed after `'y='`? (note: the answer is not a specific value.) Why does this happen?
```
cprintf("x=%d y=%d", 3);
```

 6. Let's say that GCC changed its calling convention so that it pushed arguments on the stack in declaration order, so that the last argument is pushed last. How would you have to change `cprintf` or its interface so that it would still be possible to pass it a variable number of arguments?

Challenge: Enhance the console to allow text to be printed in different colors. The traditional way to do this is to make it interpret [ANSI escape sequences][27] embedded in the text strings printed to the console, but you may use any mechanism you like. There is plenty of information on [the 6.828 reference page][8] and elsewhere on the web on programming the VGA display hardware. If you're feeling really adventurous, you could try switching the VGA hardware into a graphics mode and making the console draw text onto the graphical frame buffer.

##### The Stack

In the final exercise of this lab, we will explore in more detail the way the C language uses the stack on the x86, and in the process write a useful new kernel monitor function that prints a _backtrace_ of the stack: a list of the saved Instruction Pointer (IP) values from the nested `call` instructions that led to the current point of execution.

Exercise 9.
Determine where the kernel initializes its stack, and exactly where in memory its stack is located. How does the kernel reserve space for its stack? And at which "end" of this reserved area is the stack pointer initialized to point to? - -The x86 stack pointer (`esp` register) points to the lowest location on the stack that is currently in use. Everything _below_ that location in the region reserved for the stack is free. Pushing a value onto the stack involves decreasing the stack pointer and then writing the value to the place the stack pointer points to. Popping a value from the stack involves reading the value the stack pointer points to and then increasing the stack pointer. In 32-bit mode, the stack can only hold 32-bit values, and esp is always divisible by four. Various x86 instructions, such as `call`, are "hard-wired" to use the stack pointer register. - -The `ebp` (base pointer) register, in contrast, is associated with the stack primarily by software convention. On entry to a C function, the function's _prologue_ code normally saves the previous function's base pointer by pushing it onto the stack, and then copies the current `esp` value into `ebp` for the duration of the function. If all the functions in a program obey this convention, then at any given point during the program's execution, it is possible to trace back through the stack by following the chain of saved `ebp` pointers and determining exactly what nested sequence of function calls caused this particular point in the program to be reached. This capability can be particularly useful, for example, when a particular function causes an `assert` failure or `panic` because bad arguments were passed to it, but you aren't sure _who_ passed the bad arguments. A stack backtrace lets you find the offending function. - -Exercise 10. To become familiar with the C calling conventions on the x86, find the address of the `test_backtrace` function in `obj/kern/kernel.asm`, set a breakpoint there, and examine what happens each time it gets called after the kernel starts. How many 32-bit words does each recursive nesting level of `test_backtrace` push on the stack, and what are those words? - -Note that, for this exercise to work properly, you should be using the patched version of QEMU available on the [tools][4] page or on Athena. Otherwise, you'll have to manually translate all breakpoint and memory addresses to linear addresses. - -The above exercise should give you the information you need to implement a stack backtrace function, which you should call `mon_backtrace()`. A prototype for this function is already waiting for you in `kern/monitor.c`. You can do it entirely in C, but you may find the `read_ebp()` function in `inc/x86.h` useful. You'll also have to hook this new function into the kernel monitor's command list so that it can be invoked interactively by the user. - -The backtrace function should display a listing of function call frames in the following format: - -``` -Stack backtrace: - ebp f0109e58 eip f0100a62 args 00000001 f0109e80 f0109e98 f0100ed2 00000031 - ebp f0109ed8 eip f01000d6 args 00000000 00000000 f0100058 f0109f28 00000061 - ... - -``` - -Each line contains an `ebp`, `eip`, and `args`. The `ebp` value indicates the base pointer into the stack used by that function: i.e., the position of the stack pointer just after the function was entered and the function prologue code set up the base pointer. 
The listed `eip` value is the function's _return instruction pointer_ : the instruction address to which control will return when the function returns. The return instruction pointer typically points to the instruction after the `call` instruction (why?). Finally, the five hex values listed after `args` are the first five arguments to the function in question, which would have been pushed on the stack just before the function was called. If the function was called with fewer than five arguments, of course, then not all five of these values will be useful. (Why can't the backtrace code detect how many arguments there actually are? How could this limitation be fixed?) - -The first line printed reflects the _currently executing_ function, namely `mon_backtrace` itself, the second line reflects the function that called `mon_backtrace`, the third line reflects the function that called that one, and so on. You should print _all_ the outstanding stack frames. By studying `kern/entry.S` you'll find that there is an easy way to tell when to stop. - -Here are a few specific points you read about in K&R Chapter 5 that are worth remembering for the following exercise and for future labs. - - * If `int *p = (int*)100`, then `(int)p + 1` and `(int)(p + 1)` are different numbers: the first is `101` but the second is `104`. When adding an integer to a pointer, as in the second case, the integer is implicitly multiplied by the size of the object the pointer points to. - * `p[i]` is defined to be the same as `*(p+i)`, referring to the i'th object in the memory pointed to by p. The above rule for addition helps this definition work when the objects are larger than one byte. - * `&p[i]` is the same as `(p+i)`, yielding the address of the i'th object in the memory pointed to by p. - - - -Although most C programs never need to cast between pointers and integers, operating systems frequently do. Whenever you see an addition involving a memory address, ask yourself whether it is an integer addition or pointer addition and make sure the value being added is appropriately multiplied or not. - -Exercise 11. Implement the backtrace function as specified above. Use the same format as in the example, since otherwise the grading script will be confused. When you think you have it working right, run make grade to see if its output conforms to what our grading script expects, and fix it if it doesn't. _After_ you have handed in your Lab 1 code, you are welcome to change the output format of the backtrace function any way you like. - -If you use `read_ebp()`, note that GCC may generate "optimized" code that calls `read_ebp()` _before_ `mon_backtrace()`'s function prologue, which results in an incomplete stack trace (the stack frame of the most recent function call is missing). While we have tried to disable optimizations that cause this reordering, you may want to examine the assembly of `mon_backtrace()` and make sure the call to `read_ebp()` is happening after the function prologue. - -At this point, your backtrace function should give you the addresses of the function callers on the stack that lead to `mon_backtrace()` being executed. However, in practice you often want to know the function names corresponding to those addresses. For instance, you may want to know which functions could contain a bug that's causing your kernel to crash. 
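Before turning to symbol lookup, here is one possible skeleton for the Exercise 11 loop. It is a sketch, not the only valid structure: the `read_ebp()` helper is the one in `inc/x86.h`, the terminating-`ebp`-of-zero convention comes from `kern/entry.S` as noted above, and you should match the exact output spacing shown earlier so make grade can parse it. It deliberately prints only raw addresses; mapping them to names is the subject of what follows.

```c
int
mon_backtrace(int argc, char **argv, struct Trapframe *tf)
{
	uint32_t *ebp;
	int i;

	ebp = (uint32_t *) read_ebp();
	cprintf("Stack backtrace:\n");
	// kern/entry.S sets %ebp to 0 before jumping into C, so a saved
	// ebp of 0 marks the outermost frame and ends the walk.
	while (ebp != 0) {
		uint32_t eip = ebp[1];          // return address saved by `call`
		cprintf("  ebp %08x  eip %08x  args", (uint32_t) ebp, eip);
		for (i = 0; i < 5; i++)
			cprintf(" %08x", ebp[2 + i]);   // caller-pushed arguments
		cprintf("\n");
		ebp = (uint32_t *) ebp[0];      // follow the saved-ebp chain
	}
	return 0;
}
```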
- -To help you implement this functionality, we have provided the function `debuginfo_eip()`, which looks up `eip` in the symbol table and returns the debugging information for that address. This function is defined in `kern/kdebug.c`. - -Exercise 12. Modify your stack backtrace function to display, for each `eip`, the function name, source file name, and line number corresponding to that `eip`. - -In `debuginfo_eip`, where do `__STAB_*` come from? This question has a long answer; to help you to discover the answer, here are some things you might want to do: - - * look in the file `kern/kernel.ld` for `__STAB_*` - * run objdump -h obj/kern/kernel - * run objdump -G obj/kern/kernel - * run gcc -pipe -nostdinc -O2 -fno-builtin -I. -MD -Wall -Wno-format -DJOS_KERNEL -gstabs -c -S kern/init.c, and look at init.s. - * see if the bootloader loads the symbol table in memory as part of loading the kernel binary - - - -Complete the implementation of `debuginfo_eip` by inserting the call to `stab_binsearch` to find the line number for an address. - -Add a `backtrace` command to the kernel monitor, and extend your implementation of `mon_backtrace` to call `debuginfo_eip` and print a line for each stack frame of the form: - -``` -K> backtrace -Stack backtrace: - ebp f010ff78 eip f01008ae args 00000001 f010ff8c 00000000 f0110580 00000000 - kern/monitor.c:143: monitor+106 - ebp f010ffd8 eip f0100193 args 00000000 00001aac 00000660 00000000 00000000 - kern/init.c:49: i386_init+59 - ebp f010fff8 eip f010003d args 00000000 00000000 0000ffff 10cf9a00 0000ffff - kern/entry.S:70: +0 -K> - -``` - -Each line gives the file name and line within that file of the stack frame's `eip`, followed by the name of the function and the offset of the `eip` from the first instruction of the function (e.g., `monitor+106` means the return `eip` is 106 bytes past the beginning of `monitor`). - -Be sure to print the file and function names on a separate line, to avoid confusing the grading script. - -Tip: printf format strings provide an easy, albeit obscure, way to print non-null-terminated strings like those in STABS tables. `printf("%.*s", length, string)` prints at most `length` characters of `string`. Take a look at the printf man page to find out why this works. - -You may find that some functions are missing from the backtrace. For example, you will probably see a call to `monitor()` but not to `runcmd()`. This is because the compiler in-lines some function calls. Other optimizations may cause you to see unexpected line numbers. If you get rid of the `-O2` from `GNUMakefile`, the backtraces may make more sense (but your kernel will run more slowly). - -**This completes the lab.** In the `lab` directory, commit your changes with git commit and type make handin to submit your code. 
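One parting illustration before you run make handin: inside the backtrace loop, the per-frame lookup of Exercise 12 typically looks something like the fragment below. The `struct Eipdebuginfo` field names are taken from `kern/kdebug.h`; double-check them there, since this is a sketch of the expected shape rather than a guaranteed solution.

```c
#include <kern/kdebug.h>        // struct Eipdebuginfo, debuginfo_eip()

// Inside the frame-walking loop, after computing `eip` for this frame:
struct Eipdebuginfo info;
if (debuginfo_eip(eip, &info) == 0) {
	// info.eip_fn_name is not null-terminated, hence the %.*s
	// trick mentioned in the tip above.
	cprintf("         %s:%d: %.*s+%d\n",
	        info.eip_file, info.eip_line,
	        info.eip_fn_namelen, info.eip_fn_name,
	        eip - info.eip_fn_addr);
}
```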
- --------------------------------------------------------------------------------- - -via: https://pdos.csail.mit.edu/6.828/2018/labs/lab1/ - -作者:[csail.mit][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: -[b]: https://github.com/lujun9972 -[1]: http://www.git-scm.com/ -[2]: http://www.kernel.org/pub/software/scm/git/docs/user-manual.html -[3]: http://eagain.net/articles/git-for-computer-scientists/ -[4]: https://pdos.csail.mit.edu/6.828/2018/tools.html -[5]: https://6828.scripts.mit.edu/2018/handin.py/ -[6]: https://pdos.csail.mit.edu/6.828/2018/readings/pcasm-book.pdf -[7]: http://www.delorie.com/djgpp/doc/brennan/brennan_att_inline_djgpp.html -[8]: https://pdos.csail.mit.edu/6.828/2018/reference.html -[9]: https://pdos.csail.mit.edu/6.828/2018/readings/i386/toc.htm -[10]: http://www.intel.com/content/www/us/en/processors/architectures-software-developer-manuals.html -[11]: http://developer.amd.com/resources/developer-guides-manuals/ -[12]: http://www.qemu.org/ -[13]: http://www.gnu.org/software/gdb/ -[14]: http://web.archive.org/web/20040404164813/members.iweb.net.au/~pstorr/pcbook/book2/book2.htm -[15]: https://pdos.csail.mit.edu/6.828/2018/readings/boot-cdrom.pdf -[16]: https://pdos.csail.mit.edu/6.828/2018/labguide.html -[17]: http://www.amazon.com/C-Programming-Language-2nd/dp/0131103628/sr=8-1/qid=1157812738/ref=pd_bbs_1/104-1502762-1803102?ie=UTF8&s=books -[18]: http://library.mit.edu/F/AI9Y4SJ2L5ELEE2TAQUAAR44XV5RTTQHE47P9MKP5GQDLR9A8X-10422?func=item-global&doc_library=MIT01&doc_number=000355242&year=&volume=&sub_library= -[19]: https://pdos.csail.mit.edu/6.828/2018/labs/lab1/pointers.c -[20]: https://pdos.csail.mit.edu/6.828/2018/readings/pointers.pdf -[21]: https://pdos.csail.mit.edu/6.828/2018/readings/elf.pdf -[22]: http://en.wikipedia.org/wiki/Executable_and_Linkable_Format -[23]: https://sourceware.org/gdb/current/onlinedocs/gdb/Memory.html -[24]: http://web.cs.mun.ca/~michael/c/ascii-table.html -[25]: http://www.webopedia.com/TERM/b/big_endian.html -[26]: http://www.networksorcery.com/enp/ien/ien137.txt -[27]: http://rrbrandt.dee.ufcg.edu.br/en/docs/ansi/ diff --git a/sources/tech/20180927 Lab 2- Memory Management.md b/sources/tech/20180927 Lab 2- Memory Management.md deleted file mode 100644 index 386bf6ceaf..0000000000 --- a/sources/tech/20180927 Lab 2- Memory Management.md +++ /dev/null @@ -1,272 +0,0 @@ -Lab 2: Memory Management -====== -### Lab 2: Memory Management - -#### Introduction - -In this lab, you will write the memory management code for your operating system. Memory management has two components. - -The first component is a physical memory allocator for the kernel, so that the kernel can allocate memory and later free it. Your allocator will operate in units of 4096 bytes, called _pages_. Your task will be to maintain data structures that record which physical pages are free and which are allocated, and how many processes are sharing each allocated page. You will also write the routines to allocate and free pages of memory. - -The second component of memory management is _virtual memory_ , which maps the virtual addresses used by kernel and user software to addresses in physical memory. The x86 hardware's memory management unit (MMU) performs the mapping when instructions use memory, consulting a set of page tables. You will modify JOS to set up the MMU's page tables according to a specification we provide. 
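To make the first component concrete before we start: the allocator you will build tracks every physical page with a small metadata record and chains the free ones into a list. The sketch below uses the names JOS declares in `inc/memlayout.h` and `kern/pmap.h`; treat it as an orientation aid and check the real headers for the authoritative definitions.

```c
// One struct PageInfo per physical page of RAM.  The PageInfo for
// physical page N is pages[N]; it is metadata *about* the page, not
// the page itself.
struct PageInfo {
	struct PageInfo *pp_link;   // next page on the free list, if free
	uint16_t pp_ref;            // count of pointers (usually page
	                            // table entries) to this page
};

extern struct PageInfo *pages;      // array of PageInfo, one per page
extern size_t npages;               // number of physical pages

// Converting from a PageInfo back to the physical page it describes
// is simple pointer arithmetic (cf. kern/pmap.h):
static inline physaddr_t
page2pa(struct PageInfo *pp)
{
	return (pp - pages) << PGSHIFT;  // PGSHIFT is 12 for 4096-byte pages
}
```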
- -##### Getting started - -In this and future labs you will progressively build up your kernel. We will also provide you with some additional source. To fetch that source, use Git to commit changes you've made since handing in lab 1 (if any), fetch the latest version of the course repository, and then create a local branch called `lab2` based on our lab2 branch, `origin/lab2`: - -``` - athena% cd ~/6.828/lab - athena% add git - athena% git pull - Already up-to-date. - athena% git checkout -b lab2 origin/lab2 - Branch lab2 set up to track remote branch refs/remotes/origin/lab2. - Switched to a new branch "lab2" - athena% -``` - -The git checkout -b command shown above actually does two things: it first creates a local branch `lab2` that is based on the `origin/lab2` branch provided by the course staff, and second, it changes the contents of your `lab` directory to reflect the files stored on the `lab2` branch. Git allows switching between existing branches using git checkout _branch-name_ , though you should commit any outstanding changes on one branch before switching to a different one. - -You will now need to merge the changes you made in your `lab1` branch into the `lab2` branch, as follows: - -``` - athena% git merge lab1 - Merge made by recursive. - kern/kdebug.c | 11 +++++++++-- - kern/monitor.c | 19 +++++++++++++++++++ - lib/printfmt.c | 7 +++---- - 3 files changed, 31 insertions(+), 6 deletions(-) - athena% -``` - -In some cases, Git may not be able to figure out how to merge your changes with the new lab assignment (e.g. if you modified some of the code that is changed in the second lab assignment). In that case, the git merge command will tell you which files are _conflicted_ , and you should first resolve the conflict (by editing the relevant files) and then commit the resulting files with git commit -a. - -Lab 2 contains the following new source files, which you should browse through: - - * `inc/memlayout.h` - * `kern/pmap.c` - * `kern/pmap.h` - * `kern/kclock.h` - * `kern/kclock.c` - - - -`memlayout.h` describes the layout of the virtual address space that you must implement by modifying `pmap.c`. `memlayout.h` and `pmap.h` define the `PageInfo` structure that you'll use to keep track of which pages of physical memory are free. `kclock.c` and `kclock.h` manipulate the PC's battery-backed clock and CMOS RAM hardware, in which the BIOS records the amount of physical memory the PC contains, among other things. The code in `pmap.c` needs to read this device hardware in order to figure out how much physical memory there is, but that part of the code is done for you: you do not need to know the details of how the CMOS hardware works. - -Pay particular attention to `memlayout.h` and `pmap.h`, since this lab requires you to use and understand many of the definitions they contain. You may want to review `inc/mmu.h`, too, as it also contains a number of definitions that will be useful for this lab. - -Before beginning the lab, don't forget to add -f 6.828 to get the 6.828 version of QEMU. - -##### Lab Requirements - -In this lab and subsequent labs, do all of the regular exercises described in the lab and _at least one_ challenge problem. (Some challenge problems are more challenging than others, of course!) Additionally, write up brief answers to the questions posed in the lab and a short (e.g., one or two paragraph) description of what you did to solve your chosen challenge problem. 
If you implement more than one challenge problem, you only need to describe one of them in the write-up, though of course you are welcome to do more. Place the write-up in a file called `answers-lab2.txt` in the top level of your `lab` directory before handing in your work. - -##### Hand-In Procedure - -When you are ready to hand in your lab code and write-up, add your `answers-lab2.txt` to the Git repository, commit your changes, and then run make handin. - -``` - athena% git add answers-lab2.txt - athena% git commit -am "my answer to lab2" - [lab2 a823de9] my answer to lab2 - 4 files changed, 87 insertions(+), 10 deletions(-) - athena% make handin -``` - -As before, we will be grading your solutions with a grading program. You can run make grade in the `lab` directory to test your kernel with the grading program. You may change any of the kernel source and header files you need to in order to complete the lab, but needless to say you must not change or otherwise subvert the grading code. - -#### Part 1: Physical Page Management - -The operating system must keep track of which parts of physical RAM are free and which are currently in use. JOS manages the PC's physical memory with _page granularity_ so that it can use the MMU to map and protect each piece of allocated memory. - -You'll now write the physical page allocator. It keeps track of which pages are free with a linked list of `struct PageInfo` objects (which, unlike xv6, are not embedded in the free pages themselves), each corresponding to a physical page. You need to write the physical page allocator before you can write the rest of the virtual memory implementation, because your page table management code will need to allocate physical memory in which to store page tables. - -Exercise 1. In the file `kern/pmap.c`, you must implement code for the following functions (probably in the order given). - -`boot_alloc()` -`mem_init()` (only up to the call to `check_page_free_list(1)`) -`page_init()` -`page_alloc()` -`page_free()` - -`check_page_free_list()` and `check_page_alloc()` test your physical page allocator. You should boot JOS and see whether `check_page_alloc()` reports success. Fix your code so that it passes. You may find it helpful to add your own `assert()`s to verify that your assumptions are correct. - -This lab, and all the 6.828 labs, will require you to do a bit of detective work to figure out exactly what you need to do. This assignment does not describe all the details of the code you'll have to add to JOS. Look for comments in the parts of the JOS source that you have to modify; those comments often contain specifications and hints. You will also need to look at related parts of JOS, at the Intel manuals, and perhaps at your 6.004 or 6.033 notes. - -#### Part 2: Virtual Memory - -Before doing anything else, familiarize yourself with the x86's protected-mode memory management architecture: namely _segmentation_ and _page translation_. - -Exercise 2. Look at chapters 5 and 6 of the [Intel 80386 Reference Manual][1], if you haven't done so already. Read the sections about page translation and page-based protection closely (5.2 and 6.4). We recommend that you also skim the sections about segmentation; while JOS uses the paging hardware for virtual memory and protection, segment translation and segment-based protection cannot be disabled on the x86, so you will need a basic understanding of it. 
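Before turning to address translation, it is worth making the free-list bookkeeping of Exercise 1 concrete. The sketch below shows one possible shape for the two core routines, assuming the `page_free_list` head pointer and `ALLOC_ZERO` flag from `kern/pmap.c` and `kern/pmap.h`; use it to check your understanding, not as a guaranteed solution.

```c
// Allocate a physical page by popping the head of the free list.
struct PageInfo *
page_alloc(int alloc_flags)
{
	struct PageInfo *pp = page_free_list;

	if (pp == NULL)
		return NULL;                    // out of free memory
	page_free_list = pp->pp_link;
	pp->pp_link = NULL;                 // helps catch double-free bugs
	if (alloc_flags & ALLOC_ZERO)
		memset(page2kva(pp), 0, PGSIZE);
	return pp;                          // note: pp_ref is left at 0
}

// Return a page to the free list.  It must no longer be referenced.
void
page_free(struct PageInfo *pp)
{
	assert(pp->pp_ref == 0);
	assert(pp->pp_link == NULL);
	pp->pp_link = page_free_list;
	page_free_list = pp;
}
```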
- -##### Virtual, Linear, and Physical Addresses - -In x86 terminology, a _virtual address_ consists of a segment selector and an offset within the segment. A _linear address_ is what you get after segment translation but before page translation. A _physical address_ is what you finally get after both segment and page translation and what ultimately goes out on the hardware bus to your RAM. - -``` - Selector +--------------+ +-----------+ - ---------->| | | | - | Segmentation | | Paging | -Software | |-------->| |----------> RAM - Offset | Mechanism | | Mechanism | - ---------->| | | | - +--------------+ +-----------+ - Virtual Linear Physical - -``` - -A C pointer is the "offset" component of the virtual address. In `boot/boot.S`, we installed a Global Descriptor Table (GDT) that effectively disabled segment translation by setting all segment base addresses to 0 and limits to `0xffffffff`. Hence the "selector" has no effect and the linear address always equals the offset of the virtual address. In lab 3, we'll have to interact a little more with segmentation to set up privilege levels, but as for memory translation, we can ignore segmentation throughout the JOS labs and focus solely on page translation. - -Recall that in part 3 of lab 1, we installed a simple page table so that the kernel could run at its link address of 0xf0100000, even though it is actually loaded in physical memory just above the ROM BIOS at 0x00100000. This page table mapped only 4MB of memory. In the virtual address space layout you are going to set up for JOS in this lab, we'll expand this to map the first 256MB of physical memory starting at virtual address 0xf0000000 and to map a number of other regions of the virtual address space. - -Exercise 3. While GDB can only access QEMU's memory by virtual address, it's often useful to be able to inspect physical memory while setting up virtual memory. Review the QEMU [monitor commands][2] from the lab tools guide, especially the `xp` command, which lets you inspect physical memory. To access the QEMU monitor, press Ctrl-a c in the terminal (the same binding returns to the serial console). - -Use the xp command in the QEMU monitor and the x command in GDB to inspect memory at corresponding physical and virtual addresses and make sure you see the same data. - -Our patched version of QEMU provides an info pg command that may also prove useful: it shows a compact but detailed representation of the current page tables, including all mapped memory ranges, permissions, and flags. Stock QEMU also provides an info mem command that shows an overview of which ranges of virtual addresses are mapped and with what permissions. - -From code executing on the CPU, once we're in protected mode (which we entered first thing in `boot/boot.S`), there's no way to directly use a linear or physical address. _All_ memory references are interpreted as virtual addresses and translated by the MMU, which means all pointers in C are virtual addresses. - -The JOS kernel often needs to manipulate addresses as opaque values or as integers, without dereferencing them, for example in the physical memory allocator. Sometimes these are virtual addresses, and sometimes they are physical addresses. To help document the code, the JOS source distinguishes the two cases: the type `uintptr_t` represents opaque virtual addresses, and `physaddr_t` represents physical addresses. 
Both these types are really just synonyms for 32-bit integers (`uint32_t`), so the compiler won't stop you from assigning one type to another! Since they are integer types (not pointers), the compiler _will_ complain if you try to dereference them.
-
-The JOS kernel can dereference a `uintptr_t` by first casting it to a pointer type. In contrast, the kernel can't sensibly dereference a physical address, since the MMU translates all memory references. If you cast a `physaddr_t` to a pointer and dereference it, you may be able to load and store to the resulting address (the hardware will interpret it as a virtual address), but you probably won't get the memory location you intended.
-
-To summarize:
-
-| C type | Address type |
-|---------------|----------|
-| `T*` | Virtual |
-| `uintptr_t` | Virtual |
-| `physaddr_t` | Physical |
-
-Question
-
- 1. Assuming that the following JOS kernel code is correct, what type should variable `x` have, `uintptr_t` or `physaddr_t`?
-
-```
-    mystery_t x;
-    char* value = return_a_pointer();
-    *value = 10;
-    x = (mystery_t) value;
-
-```
-
-The JOS kernel sometimes needs to read or modify memory for which it knows only the physical address. For example, adding a mapping to a page table may require allocating physical memory to store a page directory and then initializing that memory. However, the kernel cannot bypass virtual address translation and thus cannot directly load and store to physical addresses. One reason JOS remaps all of physical memory starting from physical address 0 at virtual address 0xf0000000 is to help the kernel read and write memory for which it knows just the physical address. In order to translate a physical address into a virtual address that the kernel can actually read and write, the kernel must add 0xf0000000 to the physical address to find its corresponding virtual address in the remapped region. You should use `KADDR(pa)` to do that addition.
-
-The JOS kernel also sometimes needs to be able to find a physical address given the virtual address of the memory in which a kernel data structure is stored. Kernel global variables and memory allocated by `boot_alloc()` are in the region where the kernel was loaded, starting at 0xf0000000, the very region where we mapped all of physical memory. Thus, to turn a virtual address in this region into a physical address, the kernel can simply subtract 0xf0000000. You should use `PADDR(va)` to do that subtraction.
-
-##### Reference counting
-
-In future labs you will often have the same physical page mapped at multiple virtual addresses simultaneously (or in the address spaces of multiple environments). You will keep a count of the number of references to each physical page in the `pp_ref` field of the `struct PageInfo` corresponding to the physical page. When this count goes to zero for a physical page, that page can be freed because it is no longer used. In general, this count should be equal to the number of times the physical page appears below `UTOP` in all page tables (the mappings above `UTOP` are mostly set up at boot time by the kernel and should never be freed, so there's no need to reference count them). We'll also use it to keep track of the number of pointers we keep to the page directory pages and, in turn, of the number of references the page directories have to page table pages.
-
-Be careful when using `page_alloc`. The page it returns will always have a reference count of 0, so `pp_ref` should be incremented as soon as you've done something with the returned page (like inserting it into a page table).
Sometimes this is handled by other functions (for example, `page_insert`) and sometimes the function calling `page_alloc` must do it directly. - -##### Page Table Management - -Now you'll write a set of routines to manage page tables: to insert and remove linear-to-physical mappings, and to create page table pages when needed. - -Exercise 4. In the file `kern/pmap.c`, you must implement code for the following functions. - -``` - - pgdir_walk() - boot_map_region() - page_lookup() - page_remove() - page_insert() - - -``` - -`check_page()`, called from `mem_init()`, tests your page table management routines. You should make sure it reports success before proceeding. - -#### Part 3: Kernel Address Space - -JOS divides the processor's 32-bit linear address space into two parts. User environments (processes), which we will begin loading and running in lab 3, will have control over the layout and contents of the lower part, while the kernel always maintains complete control over the upper part. The dividing line is defined somewhat arbitrarily by the symbol `ULIM` in `inc/memlayout.h`, reserving approximately 256MB of virtual address space for the kernel. This explains why we needed to give the kernel such a high link address in lab 1: otherwise there would not be enough room in the kernel's virtual address space to map in a user environment below it at the same time. - -You'll find it helpful to refer to the JOS memory layout diagram in `inc/memlayout.h` both for this part and for later labs. - -##### Permissions and Fault Isolation - -Since kernel and user memory are both present in each environment's address space, we will have to use permission bits in our x86 page tables to allow user code access only to the user part of the address space. Otherwise bugs in user code might overwrite kernel data, causing a crash or more subtle malfunction; user code might also be able to steal other environments' private data. Note that the writable permission bit (`PTE_W`) affects both user and kernel code! - -The user environment will have no permission to any of the memory above `ULIM`, while the kernel will be able to read and write this memory. For the address range `[UTOP,ULIM)`, both the kernel and the user environment have the same permission: they can read but not write this address range. This range of address is used to expose certain kernel data structures read-only to the user environment. Lastly, the address space below `UTOP` is for the user environment to use; the user environment will set permissions for accessing this memory. - -##### Initializing the Kernel Address Space - -Now you'll set up the address space above `UTOP`: the kernel part of the address space. `inc/memlayout.h` shows the layout you should use. You'll use the functions you just wrote to set up the appropriate linear to physical mappings. - -Exercise 5. Fill in the missing code in `mem_init()` after the call to `check_page()`. - -Your code should now pass the `check_kern_pgdir()` and `check_page_installed_pgdir()` checks. - -Question - - 2. What entries (rows) in the page directory have been filled in at this point? What addresses do they map and where do they point? In other words, fill out this table as much as possible: - | Entry | Base Virtual Address | Points to (logically): | - |-------|----------------------|---------------------------------------| - | 1023 | ? | Page table for top 4MB of phys memory | - | 1022 | ? | ? | - | . | ? | ? | - | . | ? | ? | - | . | ? | ? | - | 2 | 0x00800000 | ? | - | 1 | 0x00400000 | ? 
| - | 0 | 0x00000000 | [see next question] | - 3. We have placed the kernel and user environment in the same address space. Why will user programs not be able to read or write the kernel's memory? What specific mechanisms protect the kernel memory? - 4. What is the maximum amount of physical memory that this operating system can support? Why? - 5. How much space overhead is there for managing memory, if we actually had the maximum amount of physical memory? How is this overhead broken down? - 6. Revisit the page table setup in `kern/entry.S` and `kern/entrypgdir.c`. Immediately after we turn on paging, EIP is still a low number (a little over 1MB). At what point do we transition to running at an EIP above KERNBASE? What makes it possible for us to continue executing at a low EIP between when we enable paging and when we begin running at an EIP above KERNBASE? Why is this transition necessary? - - -``` -Challenge! We consumed many physical pages to hold the page tables for the KERNBASE mapping. Do a more space-efficient job using the PTE_PS ("Page Size") bit in the page directory entries. This bit was _not_ supported in the original 80386, but is supported on more recent x86 processors. You will therefore have to refer to [Volume 3 of the current Intel manuals][3]. Make sure you design the kernel to use this optimization only on processors that support it! -``` - -``` -Challenge! Extend the JOS kernel monitor with commands to: - - * Display in a useful and easy-to-read format all of the physical page mappings (or lack thereof) that apply to a particular range of virtual/linear addresses in the currently active address space. For example, you might enter `'showmappings 0x3000 0x5000'` to display the physical page mappings and corresponding permission bits that apply to the pages at virtual addresses 0x3000, 0x4000, and 0x5000. - * Explicitly set, clear, or change the permissions of any mapping in the current address space. - * Dump the contents of a range of memory given either a virtual or physical address range. Be sure the dump code behaves correctly when the range extends across page boundaries! - * Do anything else that you think might be useful later for debugging the kernel. (There's a good chance it will be!) -``` - - -##### Address Space Layout Alternatives - -The address space layout we use in JOS is not the only one possible. An operating system might map the kernel at low linear addresses while leaving the _upper_ part of the linear address space for user processes. x86 kernels generally do not take this approach, however, because one of the x86's backward-compatibility modes, known as _virtual 8086 mode_ , is "hard-wired" in the processor to use the bottom part of the linear address space, and thus cannot be used at all if the kernel is mapped there. - -It is even possible, though much more difficult, to design the kernel so as not to have to reserve _any_ fixed portion of the processor's linear or virtual address space for itself, but instead effectively to allow user-level processes unrestricted use of the _entire_ 4GB of virtual address space - while still fully protecting the kernel from these processes and protecting different processes from each other! - -``` -Challenge! Each user-level environment maps the kernel. Change JOS so that the kernel has its own page table and so that a user-level environment runs with a minimal number of kernel pages mapped. 
That is, each user-level environment maps just enough pages so that it can enter and leave the kernel correctly. You also have to come up with a plan for the kernel to read/write arguments to system calls.
-```
-
-```
-Challenge! Write up an outline of how a kernel could be designed to allow user environments unrestricted use of the full 4GB virtual and linear address space. Hint: do the previous challenge exercise first, which reduces the kernel to a few mappings in a user environment. Hint: the technique is sometimes known as "_follow the bouncing kernel_." In your design, be sure to address exactly what has to happen when the processor transitions between kernel and user modes, and how the kernel would accomplish such transitions. Also describe how the kernel would access physical memory and I/O devices in this scheme, and how the kernel would access a user environment's virtual address space during system calls and the like. Finally, think about and describe the advantages and disadvantages of such a scheme in terms of flexibility, performance, kernel complexity, and other factors you can think of.
-```
-
-```
-Challenge! Since our JOS kernel's memory management system only allocates and frees memory on page granularity, we do not have anything comparable to a general-purpose `malloc`/`free` facility that we can use within the kernel. This could be a problem if we want to support certain types of I/O devices that require _physically contiguous_ buffers larger than 4KB in size, or if we want user-level environments, and not just the kernel, to be able to allocate and map 4MB _superpages_ for maximum processor efficiency. (See the earlier challenge problem about PTE_PS.)
-
-Generalize the kernel's memory allocation system to support pages of a variety of power-of-two allocation unit sizes from 4KB up to some reasonable maximum of your choice. Be sure you have some way to divide larger allocation units into smaller ones on demand, and to coalesce multiple small allocation units back into larger units when possible. Think about the issues that might arise in such a system.
-```
-
-**This completes the lab.** Make sure you pass all of the make grade tests and don't forget to write up your answers to the questions and a description of your challenge exercise solution in `answers-lab2.txt`. Commit your changes (including adding `answers-lab2.txt`) and type make handin in the `lab` directory to hand in your lab.
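-
-For readers revisiting this lab, here is one possible shape of the `pgdir_walk()` routine from Exercise 4, assuming the JOS helpers (`page_alloc()`, `page2pa()`, and the `PDX`/`PTX`/`PTE_ADDR`/`KADDR` macros) behave as described above. Treat it as a study sketch rather than a drop-in solution; the lab expects you to derive your own version.
-
-```
-pte_t *
-pgdir_walk(pde_t *pgdir, const void *va, int create)
-{
-        pde_t *pde = &pgdir[PDX(va)];        // directory entry covering va
-        if (!(*pde & PTE_P)) {               // no page table page here yet
-                struct PageInfo *pp;
-                if (!create || (pp = page_alloc(ALLOC_ZERO)) == NULL)
-                        return NULL;
-                pp->pp_ref++;                // the directory now references it
-                *pde = page2pa(pp) | PTE_P | PTE_W | PTE_U;
-        }
-        // PTE_ADDR strips the permission bits from the entry; KADDR turns
-        // the physical address into a kernel virtual address we can use.
-        pte_t *pgtab = (pte_t *) KADDR(PTE_ADDR(*pde));
-        return &pgtab[PTX(va)];
-}
-```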
-
---------------------------------------------------------------------------------
-
-via: https://pdos.csail.mit.edu/6.828/2018/labs/lab2/
-
-作者:[csail.mit][a]
-选题:[lujun9972][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]: https://pdos.csail.mit.edu
-[b]: https://github.com/lujun9972
-[1]: https://pdos.csail.mit.edu/6.828/2018/readings/i386/toc.htm
-[2]: https://pdos.csail.mit.edu/6.828/2018/labguide.html#qemu
-[3]: https://pdos.csail.mit.edu/6.828/2018/readings/ia32/IA32-3A.pdf
diff --git a/sources/tech/20180928 Quiet log noise with Python and machine learning.md b/sources/tech/20180928 Quiet log noise with Python and machine learning.md
index 79894775ed..f1fe2f1b7f 100644
--- a/sources/tech/20180928 Quiet log noise with Python and machine learning.md
+++ b/sources/tech/20180928 Quiet log noise with Python and machine learning.md
@@ -1,5 +1,3 @@
-translating by Flowsnow
-
 Quiet log noise with Python and machine learning
 ======
 
diff --git a/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md b/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md
deleted file mode 100644
index efd2448e4d..0000000000
--- a/sources/tech/20181001 Turn your book into a website and an ePub using Pandoc.md
+++ /dev/null
@@ -1,265 +0,0 @@
-Translating by jlztan
-
-Turn your book into a website and an ePub using Pandoc
-======
-Write once, publish twice using Markdown and Pandoc.
-
-![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ)
-
-Pandoc is a command-line tool for converting files from one markup language to another. In my [introduction to Pandoc][1], I explained how to convert text written in Markdown into a website, a slideshow, and a PDF.
-
-In this follow-up article, I'll dive deeper into [Pandoc][2], showing how to produce a website and an ePub book from the same Markdown source file. I'll use my upcoming e-book, [GRASP Principles for the Object-Oriented Mind][3], which I created using this process, as an example.
-
-First I will explain the file structure used for the book, then how to use Pandoc to generate a website and deploy it on GitHub. Finally, I demonstrate how to generate its companion ePub book.
-
-You can find the code in my [Programming Fight Club][4] GitHub repository.
-
-### Setting up the writing structure
-
-I do all of my writing in Markdown syntax. You can also use HTML, but the more HTML you introduce, the higher the risk that problems will arise when Pandoc converts Markdown to an ePub document. My books follow the one-chapter-per-file pattern. Declare chapters using the Markdown heading H1 (**#**). You can put more than one chapter in each file, but putting them in separate files makes it easier to find content and do updates later.
-
-The meta-information follows a similar pattern: each output format has its own meta-information file. Meta-information files define information about your documents, such as text to add to your HTML or the license of your ePub. I store all of my Markdown documents in a folder named parts (this is important for the Makefile that generates the website and ePub). As an example, let's take the table of contents, the preface, and the about chapters (divided into the files toc.md, preface.md, and about.md) and, for clarity, we will leave out the remaining chapters.
-
-My about file might begin like:
-
-```
-# About this book {-}
-
-## Who should read this book {-}
-
-Before creating a complex software system one needs to create a solid foundation.
-General Responsibility Assignment Software Principles (GRASP) are guidelines to assign
-responsibilities to software classes in object-oriented programming.
-```
-
-Once the chapters are finished, the next step is to add meta-information to set up the format for the website and the ePub.
-
-### Generating the website
-
-#### Create the HTML meta-information file
-
-The meta-information file (web-metadata.yaml) for my website is a simple YAML file that contains information about the author, title, and rights, plus content for the **<head>** tag and content for the beginning and end of the HTML file.
-
-I recommend (at minimum) including the following fields in the web-metadata.yaml file:
-
-```
----
-title: GRASP principles for the Object-oriented mind
-author: Kiko Fernandez-Reyes
-rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
-header-includes:
-- |
-  \```{=html}
-  <!-- scripts and styles for the <head> tag (markup lost in extraction) -->
-  \```
-include-before:
-- |
-  \```{=html}
-  <p>
-    If you like this book, please consider
-    spreading the word or
-    <a href="#">buying me a coffee</a>
-    <!-- original link target lost in extraction -->
-  </p>
-  \```
-include-after:
-- |
-  \```{=html}
-  <!-- the book's license badge and links (markup lost in extraction) -->
-  \```
----
-```
-
-Some variables to note:
-
- * The **header-includes** variable contains HTML that will be embedded inside the **<head>** tag.
- * The line after calling a variable must be **\- |**. The next line must begin with triple backquotes that are aligned with the **|** or Pandoc will reject it. **{=html}** tells Pandoc that this is raw text and should not be processed as Markdown. (For this to work, you need to check that the **raw_attribute** extension in Pandoc is enabled. To check, type **pandoc --list-extensions | grep raw** and make sure the returned list contains an item named **+raw_html**; the plus sign indicates it is enabled.)
- * The variable **include-before** adds some HTML at the beginning of your website, and I ask readers to consider spreading the word or buying me a coffee.
- * The **include-after** variable appends raw HTML at the end of the website and shows my book's license.
-
-These are only some of the fields available; take a look at the template variables in HTML (my article [introduction to Pandoc][1] covered this for LaTeX but the process is the same for HTML) to learn about others.
-
-#### Split the website into chapters
-
-The website can be generated as a whole, resulting in a long page with all the content, or split into chapters, which I think is easier to read. I'll explain how to divide the website into chapters so the reader doesn't get intimidated by a long website.
-
-To make the website easy to deploy on GitHub Pages, we need to create a root folder called docs (which is the root folder that GitHub Pages uses by default to render a website). Then we need to create folders for each chapter under docs, place the HTML chapters in their own folders, and the file content in a file named index.html.
-
-For example, the about.md file is converted to a file named index.html that is placed in a folder named about (about/index.html). This way, when users type **http://<your-domain>/about/**, the index.html file from the folder about will be displayed in their browser.
-
-The following Makefile does all of this:
-
-```
-# Your book files
-DEPENDENCIES= toc preface about
-
-# Placement of your HTML files
-DOCS=docs
-
-all: web
-
-web: setup $(DEPENDENCIES)
-        @cp $(DOCS)/toc/index.html $(DOCS)
-
-
-# Creation and copy of stylesheet and images into
-# the assets folder. This is important to deploy the
-# website to Github Pages.
-setup:
-        @mkdir -p $(DOCS)
-        @cp -r assets $(DOCS)
-
-
-# Creation of folder and index.html file on a
-# per-chapter basis
-
-$(DEPENDENCIES):
-        @mkdir -p $(DOCS)/$@
-        @pandoc -s --toc web-metadata.yaml parts/$@.md \
-        -c /assets/pandoc.css -o $(DOCS)/$@/index.html
-
-clean:
-        @rm -rf $(DOCS)
-
-.PHONY: all clean web setup
-```
-
-The option **-c /assets/pandoc.css** declares which CSS stylesheet to use; it will be fetched from **/assets/pandoc.css**. In other words, inside the **<head>** HTML tag, Pandoc adds the following line:
-
-```
-<link rel="stylesheet" href="/assets/pandoc.css" />
-```
-
-To generate the website, type:
-
-```
-make
-```
-
-The root folder should now contain the following structure and files:
-
-```
-.---parts
-|    |--- toc.md
-|    |--- preface.md
-|    |--- about.md
-|
-|---docs
-    |--- assets/
-    |--- index.html
-    |--- toc
-    |     |--- index.html
-    |
-    |--- preface
-    |     |--- index.html
-    |
-    |--- about
-          |--- index.html
-
-```
-
-#### Deploy the website
-
-To deploy the website on GitHub, follow these steps:
-
- 1. Create a new repository
- 2.
Push your content to the repository
- 3. Go to the GitHub Pages section in the repository's Settings and select the option for GitHub to use the content from the Master branch
-
-
-You can get more details on the [GitHub Pages][5] site.
-
-Check out [my book's website][6], generated using this process, to see the result.
-
-### Generating the ePub book
-
-#### Create the ePub meta-information file
-
-The ePub meta-information file, epub-meta.yaml, is similar to the HTML meta-information file. The main difference is that ePub offers other template variables, such as **publisher** and **cover-image**. Your ePub book's stylesheet will probably differ from your website's; mine uses one named epub.css.
-
-```
----
-title: 'GRASP principles for the Object-oriented Mind'
-publisher: 'Programming Language Fight Club'
-author: Kiko Fernandez-Reyes
-rights: 2017 Kiko Fernandez-Reyes, CC-BY-NC-SA 4.0 International
-cover-image: assets/cover.png
-stylesheet: assets/epub.css
-...
-```
-
-Add the following content to the previous Makefile:
-
-```
-epub:
-        @pandoc -s --toc epub-meta.yaml \
-        $(addprefix parts/, $(DEPENDENCIES:=.md)) -o $(DOCS)/assets/book.epub
-```
-
-The command for the ePub target takes all the dependencies from the HTML version (your chapter names), appends the Markdown extension to them, and prepends the path to the parts folder so Pandoc knows how to process them. For example, if **$(DEPENDENCIES)** were only **preface about**, then the Makefile would call:
-
-```
-@pandoc -s --toc epub-meta.yaml \
-parts/preface.md parts/about.md -o $(DOCS)/assets/book.epub
-```
-
-Pandoc would take these two chapters, combine them, generate an ePub, and place the book under the assets folder.
-
-Here's an [example][7] of an ePub created using this process.
-
-### Summarizing the process
-
-The process to create a website and an ePub from a Markdown file isn't difficult, but there are a lot of details. The following outline may make it easier for you to follow.
- - * HTML book: - * Write chapters in Markdown - * Add metadata - * Create a Makefile to glue pieces together - * Set up GitHub Pages - * Deploy - * ePub book: - * Reuse chapters from previous work - * Add new metadata file - * Create a Makefile to glue pieces together - * Set up GitHub Pages - * Deploy - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/book-to-website-epub-using-pandoc - -作者:[Kiko Fernandez-Reyes][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/kikofernandez -[1]: https://opensource.com/article/18/9/intro-pandoc -[2]: https://pandoc.org/ -[3]: https://www.programmingfightclub.com/ -[4]: https://github.com/kikofernandez/programmingfightclub -[5]: https://pages.github.com/ -[6]: https://www.programmingfightclub.com/grasp-principles/ -[7]: https://github.com/kikofernandez/programmingfightclub/raw/master/docs/web_assets/demo.epub diff --git a/sources/tech/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md b/sources/tech/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md deleted file mode 100644 index 768d124aa9..0000000000 --- a/sources/tech/20181002 Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux.md +++ /dev/null @@ -1,75 +0,0 @@ -Translating by qhwdw - - -Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux -============================================================ - - -![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/kernel-security_0.jpg?itok=hOaTQwWV) -Kernel maintainer Greg Kroah-Hartman talks about how the kernel community is hardening Linux against vulnerabilities.[Creative Commons Zero][2] - -As Linux adoption expands, it’s increasingly important for the kernel community to improve the security of the world’s most widely used technology. Security is vital not only for enterprise customers, it’s also important for consumers, as 80 percent of mobile devices are powered by Linux. In this article, Linux kernel maintainer Greg Kroah-Hartman provides a glimpse into how the kernel community deals with vulnerabilities. - -### There will be bugs - - -![Greg Kroah-Hartman](https://www.linux.com/sites/lcom/files/styles/floated_images/public/greg-k-h.png?itok=p4fREYuj "Greg Kroah-Hartman") - -Greg Kroah-Hartman[The Linux Foundation][1] - -As Linus Torvalds once said, most security holes are bugs, and bugs are part of the software development process. As long as the software is being written, there will be bugs. - -“A bug is a bug. We don’t know if a bug is a security bug or not. There is a famous bug that I fixed and then three years later Red Hat realized it was a security hole,” said Kroah-Hartman. - -There is not much the kernel community can do to eliminate bugs, but it can do more testing to find them. The kernel community now has its own security team that’s made up of kernel developers who know the core of the kernel. - -“When we get a report, we involve the domain owner to fix the issue. In some cases it’s the same people, so we made them part of the security team to speed things up,” Kroah Hartman said. 
But he also stressed that all parts of the kernel have to be aware of these security issues because the kernel is a trusted environment and they have to protect it.
-
-“Once we fix things, we can put them in our stack analysis rules so that they are never reintroduced,” he said.
-
-Besides fixing bugs, the community also continues to add hardening to the kernel. “We have realized that we need to have mitigations. We need hardening,” said Kroah-Hartman.
-
-Huge efforts have been made by Kees Cook and others to take the hardening features that have traditionally been outside of the kernel and merge or adapt them for the kernel. With every kernel released, Cook provides a summary of all the new hardening features. But hardening the kernel is not enough; vendors have to enable the new features and take advantage of them. That's not happening.
-
-Kroah-Hartman [releases a stable kernel every week][5], and companies pick one to support for a longer period so that device manufacturers can take advantage of it. However, Kroah-Hartman has observed that, aside from the Google Pixel, most Android phones don't include the additional hardening features, meaning all those phones are vulnerable. “People need to enable this stuff,” he said.
-
-“I went out and bought all the top of the line phones based on kernel 4.4 to see which one actually updated. I found only one company that updated their kernel,” he said. “I'm working through the whole supply chain trying to solve that problem because it's a tough problem. There are many different groups involved -- the SoC manufacturers, the carriers, and so on. The point is that they have to push the kernel that we create out to people.”
-
-The good news is that unlike with consumer electronics, the big vendors like Red Hat and SUSE keep the kernel updated even in the enterprise environment. Modern systems with containers, pods, and virtualization make this even easier. It's effortless to update and reboot with no downtime. It is, in fact, easier to keep things secure than it used to be.
-
-### Meltdown and Spectre
-
-No security discussion is complete without a mention of Meltdown and Spectre. The kernel community is still working on fixes as new flaws are discovered. However, Intel has changed its approach in light of these events.
-
-“They are reworking on how they approach security bugs and how they work with the community because they know they did it wrong,” Kroah-Hartman said. “The kernel has fixes for almost all of the big Spectre issues, but there is going to be a long tail of minor things.”
-
-The good news is that these Intel vulnerabilities proved that things are getting better for the kernel community. “We are doing more testing. With the latest round of security patches, we worked on our own for four months before releasing them to the world because we were embargoed. But once they hit the real world, it made us realize how much we rely on the infrastructure we have built over the years to do this kind of testing, which ensures that we don't have bugs before they hit other people,” he said. “So things are certainly getting better.”
-
-The increasing focus on security is also creating more job opportunities for talented people. Since security is an area that gets eyeballs, it is a good place to get started for those who want to build a career in kernel space.
-
-“If there are people who want a job to do this type of work, we have plenty of companies who would love to hire them.
I know some people who have started off fixing bugs and then got hired,” Kroah-Hartman said.
-
-You can hear more in the video below:
-
-[Video](https://youtu.be/jkGVabyMh1I)
-
- _Check out the schedule of talks for Open Source Summit Europe and sign up to receive updates:_
-
---------------------------------------------------------------------------------
-
-via: https://www.linux.com/blog/2018/10/greg-kroah-hartman-explains-how-kernel-community-securing-linux-0
-
-作者:[SWAPNIL BHARTIYA][a]
-选题:[oska874][b]
-译者:[译者ID](https://github.com/译者ID)
-校对:[校对者ID](https://github.com/校对者ID)
-
-本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
-
-[a]:https://www.linux.com/users/arnieswap
-[b]:https://github.com/oska874
-[1]:https://www.linux.com/licenses/category/linux-foundation
-[2]:https://www.linux.com/licenses/category/creative-commons-zero
-[3]:https://www.linux.com/files/images/greg-k-hpng
-[4]:https://www.linux.com/files/images/kernel-securityjpg-0
-[5]:https://www.kernel.org/category/releases.html
diff --git a/sources/tech/20181003 Manage NTP with Chrony.md b/sources/tech/20181003 Manage NTP with Chrony.md
new file mode 100644
index 0000000000..aaec88da26
--- /dev/null
+++ b/sources/tech/20181003 Manage NTP with Chrony.md
@@ -0,0 +1,291 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Manage NTP with Chrony)
+[#]: via: (https://opensource.com/article/18/12/manage-ntp-chrony)
+[#]: author: (David Both https://opensource.com/users/dboth)
+
+Manage NTP with Chrony
+======
+Chronyd is a better choice for most networks than ntpd for keeping computers synchronized with the Network Time Protocol.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/clocks_time.png?itok=_ID09GDk)
+
+> "Does anybody really know what time it is? Does anybody really care?"
+> – [Chicago][1], 1969
+
+Perhaps that rock group didn't care what time it was, but our computers do need to know the exact time. Timekeeping is very important to computer networks. In banking, stock markets, and other financial businesses, transactions must be maintained in the proper order, and exact time sequences are critical for that. For sysadmins and DevOps professionals, it's easier to follow the trail of email through a series of servers or to determine the exact sequence of events using log files on geographically dispersed hosts when exact times are kept on the computers in question.
+
+I used to work at an organization that received over 20 million emails per day and had four servers just to accept and do a basic filter on the incoming flood of email. From there, emails were sent to one of four other servers to perform more complex anti-spam assessments, then they were delivered to one of several additional servers where the emails were placed in the correct inboxes. At each layer, the emails would be sent to one of the next-level servers, selected only by the randomness of round-robin DNS. Sometimes we had to trace a new message through the system until we could determine where it "got lost," according to the pointy-haired bosses. We had to do this with frightening regularity.
+
+Most of that email turned out to be spam. Some people actually complained that their [joke, cat pic, recipe, inspirational saying, or other-strange-email]-of-the-day was missing and asked us to find it. We did reject those opportunities.
+ +Our email and other transactional searches were aided by log entries with timestamps that—today—can resolve down to the nanosecond in even the slowest of modern Linux computers. In very high-volume transaction environments, even a few microseconds of difference in the system clocks can mean sorting thousands of transactions to find the correct one(s). + +### The NTP server hierarchy + +Computers worldwide use the [Network Time Protocol][2] (NTP) to synchronize their times with internet standard reference clocks via a hierarchy of NTP servers. The primary servers are at stratum 1, and they are connected directly to various national time services at stratum 0 via satellite, radio, or even modems over phone lines. The time service at stratum 0 may be an atomic clock, a radio receiver tuned to the signals broadcast by an atomic clock, or a GPS receiver using the highly accurate clock signals broadcast by GPS satellites. + +To prevent time requests from time servers lower in the hierarchy (i.e., with a higher stratum number) from overwhelming the primary reference servers, there are several thousand public NTP stratum 2 servers that are open and available for anyone to use. Many organizations with large numbers of hosts that need an NTP server will set up their own time servers so that only one local host accesses the stratum 2 time servers, then they configure the remaining network hosts to use the local time server which, in my case, is a stratum 3 server. + +### NTP choices + +The original NTP daemon, **ntpd** , has been joined by a newer one, **chronyd**. Both keep the local host's time synchronized with the time server. Both services are available, and I have seen nothing to indicate that this will change anytime soon. + +Chrony has features that make it the better choice for most environments for the following reasons: + + * Chrony can synchronize to the time server much faster than NTP. This is good for laptops or desktops that don't run constantly. + + * It can compensate for fluctuating clock frequencies, such as when a host hibernates or enters sleep mode, or when the clock speed varies due to frequency stepping that slows clock speeds when loads are low. + + * It handles intermittent network connections and bandwidth saturation. + + * It adjusts for network delays and latency. + + * After the initial time sync, Chrony never steps the clock. This ensures stable and consistent time intervals for system services and applications. + + * Chrony can work even without a network connection. In this case, the local host or server can be updated manually. + + + + +The NTP and Chrony RPM packages are available from standard Fedora repositories. You can install both and switch between them, but modern Fedora, CentOS, and RHEL releases have moved from NTP to Chrony as their default time-keeping implementation. I have found that Chrony works well, provides a better interface for the sysadmin, presents much more information, and increases control. + +Just to make it clear, NTP is a protocol that is implemented with either NTP or Chrony. If you'd like to know more, read this [comparison between NTP and Chrony][3] as implementations of the NTP protocol. + +This article explains how to configure Chrony clients and servers on a Fedora host, but the configuration for CentOS and RHEL current releases works the same. + +### Chrony structure + +The Chrony daemon, **chronyd** , runs in the background and monitors the time and status of the time server specified in the **chrony.conf** file. 
If the local time needs to be adjusted, **chronyd** does it smoothly without the programmatic trauma that would occur if the clock were instantly reset to a new time. + +Chrony's **chronyc** tool allows someone to monitor the current status of Chrony and make changes if necessary. The **chronyc** utility can be used as a command that accepts subcommands, or it can be used as an interactive text-mode program. This article will explain both uses. + +### Client configuration + +The NTP client configuration is simple and requires little or no intervention. The NTP server can be defined during the Linux installation or provided by the DHCP server at boot time. The default **/etc/chrony.conf** file (shown below in its entirety) requires no intervention to work properly as a client. For Fedora, Chrony uses the Fedora NTP pool, and CentOS and RHEL have their own NTP server pools. Like many Red Hat-based distributions, the configuration file is well commented. + +``` +# Use public servers from the pool.ntp.org project. +# Please consider joining the pool (http://www.pool.ntp.org/join.html). +pool 2.fedora.pool.ntp.org iburst + +# Record the rate at which the system clock gains/losses time. +driftfile /var/lib/chrony/drift + +# Allow the system clock to be stepped in the first three updates +# if its offset is larger than 1 second. +makestep 1.0 3 + +# Enable kernel synchronization of the real-time clock (RTC). + + +# Enable hardware timestamping on all interfaces that support it. +#hwtimestamp * + +# Increase the minimum number of selectable sources required to adjust +# the system clock. +#minsources 2 + +# Allow NTP client access from local network. +#allow 192.168.0.0/16 + +# Serve time even if not synchronized to a time source. +#local stratum 10 + +# Specify file containing keys for NTP authentication. +keyfile /etc/chrony.keys + +# Get TAI-UTC offset and leap seconds from the system tz database. +leapsectz right/UTC + +# Specify directory for log files. +logdir /var/log/chrony + +# Select which information is logged. +#log measurements statistics tracking +``` + +Let's look at the current status of NTP on a virtual machine I use for testing. The **chronyc** command, when used with the **tracking** subcommand, provides statistics that report how far off the local system is from the reference server. + +``` +[root@studentvm1 ~]# chronyc tracking +Reference ID    : 23ABED4D (ec2-35-171-237-77.compute-1.amazonaws.com) +Stratum         : 3 +Ref time (UTC)  : Fri Nov 16 16:21:30 2018 +System time     : 0.000645622 seconds slow of NTP time +Last offset     : -0.000308577 seconds +RMS offset      : 0.000786140 seconds +Frequency       : 0.147 ppm slow +Residual freq   : -0.073 ppm +Skew            : 0.062 ppm +Root delay      : 0.041452706 seconds +Root dispersion : 0.022665167 seconds +Update interval : 1044.2 seconds +Leap status     : Normal +[root@studentvm1 ~]# +``` + +The Reference ID in the first line of the result is the server the host is synchronized to—in this case, a stratum 3 reference server that was last contacted by the host at 16:21:30 2018. The other lines are described in the [chronyc(1) man page][4]. + +The **sources** subcommand is also useful because it provides information about the time source configured in **chrony.conf**. 
+ +``` +[root@studentvm1 ~]# chronyc sources +210 Number of sources = 5 +MS Name/IP address         Stratum Poll Reach LastRx Last sample               +=============================================================================== +^+ 192.168.0.51                  3   6   377     0  -2613us[-2613us] +/-   63ms +^+ dev.smatwebdesign.com         3  10   377   28m  -2961us[-3534us] +/-  113ms +^+ propjet.latt.net              2  10   377   465  -1097us[-1085us] +/-   77ms +^* ec2-35-171-237-77.comput>     2  10   377    83  +2388us[+2395us] +/-   95ms +^+ PBX.cytranet.net              3  10   377   507  -1602us[-1589us] +/-   96ms +[root@studentvm1 ~]# +``` + +The first source in the list is the time server I set up for my personal network. The others were provided by the pool. Even though my NTP server doesn't appear in the Chrony configuration file above, my DHCP server provides its IP address for the NTP server. The "S" column—Source State—indicates with an asterisk ( ***** ) the server our host is synced to. This is consistent with the data from the **tracking** subcommand. + +The **-v** option provides a nice description of the fields in this output. + +``` +[root@studentvm1 ~]# chronyc sources -v +210 Number of sources = 5 + +  .-- Source mode  '^' = server, '=' = peer, '#' = local clock. + / .- Source state '*' = current synced, '+' = combined , '-' = not combined, +| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable. +||                                                 .- xxxx [ yyyy ] +/- zzzz +||      Reachability register (octal) -.           |  xxxx = adjusted offset, +||      Log2(Polling interval) --.      |          |  yyyy = measured offset, +||                                \     |          |  zzzz = estimated error. +||                                 |    |           \ +MS Name/IP address         Stratum Poll Reach LastRx Last sample               +=============================================================================== +^+ 192.168.0.51                  3   7   377    28  -2156us[-2156us] +/-   63ms +^+ triton.ellipse.net            2  10   377    24  +5716us[+5716us] +/-   62ms +^+ lithium.constant.com          2  10   377   351   -820us[ -820us] +/-   64ms +^* t2.time.bf1.yahoo.com         2  10   377   453   -992us[ -965us] +/-   46ms +^- ntp.idealab.com               2  10   377   799  +3653us[+3674us] +/-   87ms +[root@studentvm1 ~]# +``` + +If I wanted my server to be the preferred reference time source for this host, I would add the line below to the **/etc/chrony.conf** file. + +``` +server 192.168.0.51 iburst prefer +``` + +I usually place this line just above the first pool server statement near the top of the file. There is no special reason for this, except I like to keep the server statements together. It would work just as well at the bottom of the file, and I have done that on several hosts. This configuration file is not sequence-sensitive. + +The **prefer** option marks this as the preferred reference source. As such, this host will always be synchronized with this reference source (as long as it is available). We can also use the fully qualified hostname for a remote reference server or the hostname only (without the domain name) for a local reference time source as long as the search statement is set in the **/etc/resolv.conf** file. I prefer the IP address to ensure that the time source is accessible even if DNS is not working. 
In most environments, the server name is probably the better option, because NTP will continue to work even if the server's IP address changes.
+
+If you don't have a specific reference source you want to synchronize to, it is fine to use the defaults.
+
+### Configuring an NTP server with Chrony
+
+The nice thing about the Chrony configuration file is that this single file configures the host as both a client and a server. To add a server function to our host—it will always be a client, obtaining its time from a reference server—we just need to make a couple of changes to the Chrony configuration, then configure the host's firewall to accept NTP requests.
+
+Open the **/etc/chrony.conf** file in your favorite text editor and uncomment the **local stratum 10** line. This enables the Chrony NTP server to continue to act as if it were connected to a remote reference server if the internet connection fails; this enables the host to continue to be an NTP server to other hosts on the local network.
+
+Let's restart **chronyd** and track how the service is working for a few minutes. Before we enable our host as an NTP server, we want to test a bit.
+
+```
+[root@studentvm1 ~]# systemctl restart chronyd ; watch chronyc tracking
+```
+
+The results should look like this. The **watch** command runs the **chronyc tracking** command every two seconds so we can watch changes occur over time.
+
+```
+Every 2.0s: chronyc tracking                                           studentvm1: Fri Nov 16 20:59:31 2018
+
+Reference ID    : C0A80033 (192.168.0.51)
+Stratum         : 4
+Ref time (UTC)  : Sat Nov 17 01:58:51 2018
+System time     : 0.001598277 seconds fast of NTP time
+Last offset     : +0.001791533 seconds
+RMS offset      : 0.001791533 seconds
+Frequency       : 0.546 ppm slow
+Residual freq   : -0.175 ppm
+Skew            : 0.168 ppm
+Root delay      : 0.094823152 seconds
+Root dispersion : 0.021242738 seconds
+Update interval : 65.0 seconds
+Leap status     : Normal
+```
+
+Notice that my NTP server, the **studentvm1** host, synchronizes to the host at 192.168.0.51, which is my internal network NTP server, at stratum 4. Synchronizing directly to the Fedora pool machines would result in synchronization at stratum 3. Notice also that the amount of error decreases over time. Eventually, it should stabilize with a tiny variation around a fairly small range of error. The size of the error depends upon the stratum and other network factors. After a few minutes, use Ctrl+C to break out of the watch loop.
+
+To turn our host into an NTP server, we need to allow it to listen on the local network. Uncomment the following line to allow hosts on the local network to access our NTP server.
+
+```
+# Allow NTP client access from local network.
+allow 192.168.0.0/16
+```
+
+Note that the server can listen for requests on any local network it's attached to. The IP address in the "allow" line is just intended for illustrative purposes. Be sure to change the IP network and subnet mask in that line to match your local network's.
+
+Restart **chronyd**.
+
+```
+[root@studentvm1 ~]# systemctl restart chronyd
+```
+
+To allow other hosts on your network to access this server, configure the firewall to allow inbound UDP packets on port 123. Check your firewall's documentation to find out how to do that.
+
+### Testing
+
+Your host is now an NTP server. You can test it with another host or a VM that has access to the network on which the NTP server is listening.
Configure the client to use the new NTP server as the preferred server in the **/etc/chrony.conf** file, then monitor that client using the **chronyc** tools we used above. + +### Chronyc as an interactive tool + +As I mentioned earlier, **chronyc** can be used as an interactive command tool. Simply run the command without a subcommand and you get a **chronyc** command prompt. + +``` +[root@studentvm1 ~]# chronyc +chrony version 3.4 +Copyright (C) 1997-2003, 2007, 2009-2018 Richard P. Curnow and others +chrony comes with ABSOLUTELY NO WARRANTY.  This is free software, and +you are welcome to redistribute it under certain conditions.  See the +GNU General Public License version 2 for details. + +chronyc> +``` + +You can enter just the subcommands at this prompt. Try using the **tracking** , **ntpdata** , and **sources** commands. The **chronyc** command line allows command recall and editing for **chronyc** subcommands. You can use the **help** subcommand to get a list of possible commands and their syntax. + +### Conclusion + +Chrony is a powerful tool for synchronizing the times of client hosts, whether they are all on the local network or scattered around the globe. It's easy to configure because, despite the large number of options available, only a few configurations are required for most circumstances. + +After my client computers have synchronized with the NTP server, I like to set the system hardware clock from the system (OS) time by using the following command: + +``` +/sbin/hwclock --systohc +``` + +This command can be added as a cron job or a script in **cron.daily** to keep the hardware clock synced with the system time. + +Chrony and NTP (the service) both use the same configuration, and the files' contents are interchangeable. The man pages for [chronyd][5], [chronyc][4], and [chrony.conf][6] contain an amazing amount of information that can help you get started or learn about esoteric configuration options. + +Do you run your own NTP server? Let us know in the comments and be sure to tell us which implementation you are using, NTP or Chrony. 
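+
+As a concrete form of the **cron.daily** suggestion above, a one-line script is enough; the file name here is illustrative:
+
+```
+#!/bin/bash
+# /etc/cron.daily/hwclock-sync: once a day, copy the NTP-disciplined
+# system time to the hardware clock.
+/sbin/hwclock --systohc
+```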
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/manage-ntp-chrony + +作者:[David Both][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/dboth +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Does_Anybody_Really_Know_What_Time_It_Is%3F +[2]: https://en.wikipedia.org/wiki/Network_Time_Protocol +[3]: https://chrony.tuxfamily.org/comparison.html +[4]: https://linux.die.net/man/1/chronyc +[5]: https://linux.die.net/man/8/chronyd +[6]: https://linux.die.net/man/5/chrony.conf diff --git a/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md b/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md index 987809aa0d..beb3bab797 100644 --- a/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md +++ b/sources/tech/20181004 4 Must-Have Tools for Monitoring Linux.md @@ -1,5 +1,3 @@ -### translating by way-ww - 4 Must-Have Tools for Monitoring Linux ====== diff --git a/sources/tech/20181004 Archiving web sites.md b/sources/tech/20181004 Archiving web sites.md deleted file mode 100644 index 5b7f41b689..0000000000 --- a/sources/tech/20181004 Archiving web sites.md +++ /dev/null @@ -1,121 +0,0 @@ -fuowang 翻译中 - -Archiving web sites -====== - -I recently took a deep dive into web site archival for friends who were worried about losing control over the hosting of their work online in the face of poor system administration or hostile removal. This makes web site archival an essential instrument in the toolbox of any system administrator. As it turns out, some sites are much harder to archive than others. This article goes through the process of archiving traditional web sites and shows how it falls short when confronted with the latest fashions in the single-page applications that are bloating the modern web. - -### Converting simple sites - -The days of handcrafted HTML web sites are long gone. Now web sites are dynamic and built on the fly using the latest JavaScript, PHP, or Python framework. As a result, the sites are more fragile: a database crash, spurious upgrade, or unpatched vulnerability might lose data. In my previous life as web developer, I had to come to terms with the idea that customers expect web sites to basically work forever. This expectation matches poorly with "move fast and break things" attitude of web development. Working with the [Drupal][2] content-management system (CMS) was particularly challenging in that regard as major upgrades deliberately break compatibility with third-party modules, which implies a costly upgrade process that clients could seldom afford. The solution was to archive those sites: take a living, dynamic web site and turn it into plain HTML files that any web server can serve forever. This process is useful for your own dynamic sites but also for third-party sites that are outside of your control and you might want to safeguard. - -For simple or static sites, the venerable [Wget][3] program works well. 
The incantation to mirror a full web site, however, is byzantine:
-
-```
-    $ nice wget --mirror --execute robots=off --no-verbose --convert-links \
-        --backup-converted --page-requisites --adjust-extension \
-        --base=./ --directory-prefix=./ --span-hosts \
-        --domains=www.example.com,example.com http://www.example.com/
-
-```
-
-The above downloads the content of the web page, but also crawls everything within the specified domains. Before you run this against your favorite site, consider the impact such a crawl might have on the site. The above command line deliberately ignores `robots.txt` rules, as is now [common practice for archivists][4], and hammers the website as fast as it can. Most crawlers have options to pause between hits and limit bandwidth usage to avoid overwhelming the target site.
-
-The above command will also fetch "page requisites" like style sheets (CSS), images, and scripts. The downloaded page contents are modified so that links point to the local copy as well. Any web server can host the resulting file set, which results in a static copy of the original web site.
-
-That is, when things go well. Anyone who has ever worked with a computer knows that things seldom go according to plan; all sorts of things can make the procedure derail in interesting ways. For example, it was trendy for a while to have calendar blocks in web sites. A CMS would generate those on the fly and make crawlers go into an infinite loop trying to retrieve all of the pages. Crafty archivers can resort to regular expressions (e.g. Wget has a `--reject-regex` option) to ignore problematic resources. Another option, if the administration interface for the web site is accessible, is to disable calendars, login forms, comment forms, and other dynamic areas. Once the site becomes static, those will stop working anyway, so it makes sense to remove such clutter from the original site as well.
-
-### JavaScript doom
-
-Unfortunately, some web sites are built with much more than pure HTML. In single-page sites, for example, the web browser builds the content itself by executing a small JavaScript program. A simple user agent like Wget will struggle to reconstruct a meaningful static copy of those sites as it does not support JavaScript at all. In theory, web sites should be using [progressive enhancement][5] to have content and functionality available without JavaScript, but those directives are rarely followed, as anyone using plugins like [NoScript][6] or [uMatrix][7] will confirm.
-
-Traditional archival methods sometimes fail in the dumbest way. When trying to build an offsite backup of a local newspaper ([pamplemousse.ca][8]), I found that WordPress adds query strings (e.g. `?ver=1.12.4`) at the end of JavaScript includes. This confuses content-type detection in the web servers that serve the archive, which rely on the file extension to send the right `Content-Type` header. When such an archive is loaded in a web browser, it fails to load scripts, which breaks dynamic websites.
-
-As the web moves toward using the browser as a virtual machine to run arbitrary code, archival methods relying on pure HTML parsing need to adapt. The solution for such problems is to record (and replay) the HTTP headers delivered by the server during the crawl, and indeed professional archivists use just such an approach.
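-
-This is exactly what the WARC format introduced in the next section does: it stores the raw HTTP exchange next to the payload. A simplified record might look like this (the URI, date, lengths, and header values here are made up for illustration):
-
-```
-WARC/1.0
-WARC-Type: response
-WARC-Target-URI: http://www.example.com/
-WARC-Date: 2018-10-04T12:00:00Z
-Content-Type: application/http; msgtype=response
-Content-Length: 312
-
-HTTP/1.1 200 OK
-Content-Type: text/html; charset=UTF-8
-Last-Modified: Thu, 04 Oct 2018 11:59:00 GMT
-
-<!DOCTYPE html>
-...
-```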
- -### Creating and displaying WARC files - -At the [Internet Archive][9], Brewster Kahle and Mike Burner designed the [ARC][10] (for "ARChive") file format in 1996 to provide a way to aggregate the millions of small files produced by their archival efforts. The format was eventually standardized as the WARC ("Web ARChive") [specification][11] that was released as an ISO standard in 2009 and revised in 2017. The standardization effort was led by the [International Internet Preservation Consortium][12] (IIPC), which is an "international organization of libraries and other organizations established to coordinate efforts to preserve internet content for the future", according to Wikipedia; it includes members such as the US Library of Congress and the Internet Archive. The latter uses the WARC format internally in its Java-based [Heritrix crawler][13]. - -A WARC file aggregates multiple resources like HTTP headers, file contents, and other metadata in a single compressed archive. Conveniently, Wget actually supports the file format with the `--warc` parameter. Unfortunately, web browsers cannot render WARC files directly, so a viewer or some conversion is necessary to access the archive. The simplest such viewer I have found is [pywb][14], a Python package that runs a simple webserver to offer a Wayback-Machine-like interface to browse the contents of WARC files. The following set of commands will render a WARC file on `http://localhost:8080/`: - -``` - $ pip install pywb - $ wb-manager init example - $ wb-manager add example crawl.warc.gz - $ wayback - -``` - -This tool was, incidentally, built by the folks behind the [Webrecorder][15] service, which can use a web browser to save dynamic page contents. - -Unfortunately, pywb has trouble loading WARC files generated by Wget because it [followed][16] an [inconsistency in the 1.0 specification][17], which was [fixed in the 1.1 specification][18]. Until Wget or pywb fix those problems, WARC files produced by Wget are not reliable enough for my uses, so I have looked at other alternatives. A crawler that got my attention is simply called [crawl][19]. Here is how it is invoked: - -``` - $ crawl https://example.com/ - -``` - -(It does say "very simple" in the README.) The program does support some command-line options, but most of its defaults are sane: it will fetch page requirements from other domains (unless the `-exclude-related` flag is used), but does not recurse out of the domain. By default, it fires up ten parallel connections to the remote site, a setting that can be changed with the `-c` flag. But, best of all, the resulting WARC files load perfectly in pywb. - -### Future work and alternatives - -There are plenty more [resources][20] for using WARC files. In particular, there's a Wget drop-in replacement called [Wpull][21] that is specifically designed for archiving web sites. It has experimental support for [PhantomJS][22] and [youtube-dl][23] integration that should allow downloading more complex JavaScript sites and streaming multimedia, respectively. The software is the basis for an elaborate archival tool called [ArchiveBot][24], which is used by the "loose collective of rogue archivists, programmers, writers and loudmouths" at [ArchiveTeam][25] in its struggle to "save the history before it's lost forever". It seems that PhantomJS integration does not work as well as the team wants, so ArchiveTeam also uses a rag-tag bunch of other tools to mirror more complex sites. 
For example, [snscrape][26] will crawl a social media profile to generate a list of pages to send into ArchiveBot. Another tool the team employs is [crocoite][27], which uses the Chrome browser in headless mode to archive JavaScript-heavy sites. - -This article would also not be complete without a nod to the [HTTrack][28] project, the "website copier". Working similarly to Wget, HTTrack creates local copies of remote web sites but unfortunately does not support WARC output. Its interactive aspects might be of more interest to novice users unfamiliar with the command line. - -In the same vein, during my research I found a full rewrite of Wget called [Wget2][29] that has support for multi-threaded operation, which might make it faster than its predecessor. It is [missing some features][30] from Wget, however, most notably reject patterns, WARC output, and FTP support but adds RSS, DNS caching, and improved TLS support. - -Finally, my personal dream for these kinds of tools would be to have them integrated with my existing bookmark system. I currently keep interesting links in [Wallabag][31], a self-hosted "read it later" service designed as a free-software alternative to [Pocket][32] (now owned by Mozilla). But Wallabag, by design, creates only a "readable" version of the article instead of a full copy. In some cases, the "readable version" is actually [unreadable][33] and Wallabag sometimes [fails to parse the article][34]. Instead, other tools like [bookmark-archiver][35] or [reminiscence][36] save a screenshot of the page along with full HTML but, unfortunately, no WARC file that would allow an even more faithful replay. - -The sad truth of my experiences with mirrors and archival is that data dies. Fortunately, amateur archivists have tools at their disposal to keep interesting content alive online. For those who do not want to go through that trouble, the Internet Archive seems to be here to stay and Archive Team is obviously [working on a backup of the Internet Archive itself][37]. 
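As a closing recap, and assuming the crawl tool's default output name of crawl.warc.gz (the same file name used in the pywb example above), a minimal end-to-end pipeline built from the tools discussed in this article might look like this:

```
$ crawl -c 4 https://example.com/
$ pip install pywb
$ wb-manager init example
$ wb-manager add example crawl.warc.gz
$ wayback
```

The resulting archive is then browsable at `http://localhost:8080/`, as described earlier.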
- --------------------------------------------------------------------------------- - -via: https://anarc.at/blog/2018-10-04-archiving-web-sites/ - -作者:[Anarcat][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://anarc.at -[1]: https://anarc.at/blog -[2]: https://drupal.org -[3]: https://www.gnu.org/software/wget/ -[4]: https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/ -[5]: https://en.wikipedia.org/wiki/Progressive_enhancement -[6]: https://noscript.net/ -[7]: https://github.com/gorhill/uMatrix -[8]: https://pamplemousse.ca/ -[9]: https://archive.org -[10]: http://www.archive.org/web/researcher/ArcFileFormat.php -[11]: https://iipc.github.io/warc-specifications/ -[12]: https://en.wikipedia.org/wiki/International_Internet_Preservation_Consortium -[13]: https://github.com/internetarchive/heritrix3/wiki -[14]: https://github.com/webrecorder/pywb -[15]: https://webrecorder.io/ -[16]: https://github.com/webrecorder/pywb/issues/294 -[17]: https://github.com/iipc/warc-specifications/issues/23 -[18]: https://github.com/iipc/warc-specifications/pull/24 -[19]: https://git.autistici.org/ale/crawl/ -[20]: https://archiveteam.org/index.php?title=The_WARC_Ecosystem -[21]: https://github.com/chfoo/wpull -[22]: http://phantomjs.org/ -[23]: http://rg3.github.io/youtube-dl/ -[24]: https://www.archiveteam.org/index.php?title=ArchiveBot -[25]: https://archiveteam.org/ -[26]: https://github.com/JustAnotherArchivist/snscrape -[27]: https://github.com/PromyLOPh/crocoite -[28]: http://www.httrack.com/ -[29]: https://gitlab.com/gnuwget/wget2 -[30]: https://gitlab.com/gnuwget/wget2/wikis/home -[31]: https://wallabag.org/ -[32]: https://getpocket.com/ -[33]: https://github.com/wallabag/wallabag/issues/2825 -[34]: https://github.com/wallabag/wallabag/issues/2914 -[35]: https://pirate.github.io/bookmark-archiver/ -[36]: https://github.com/kanishka-linux/reminiscence -[37]: http://iabak.archiveteam.org diff --git a/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md b/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md deleted file mode 100644 index 691600a4cc..0000000000 --- a/sources/tech/20181005 Dbxfs - Mount Dropbox Folder Locally As Virtual File System In Linux.md +++ /dev/null @@ -1,133 +0,0 @@ -Dbxfs – Mount Dropbox Folder Locally As Virtual File System In Linux -====== - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/dbxfs-720x340.png) - -A while ago, we summarized all the possible ways to **[mount Google drive locally][1]** as a virtual file system and access the files stored in the google drive from your Linux operating system. Today, we are going to learn to mount Dropbox folder in your local file system using **dbxfs** utility. The dbxfs is used to mount your Dropbox folder locally as a virtual filesystem in Unix-like operating systems. While it is easy to [**install Dropbox client**][2] in Linux, this approach slightly differs from the official method. It is a command line dropbox client and requires no disk space for access. The dbxfs application is free, open source and written for Python 3.5+. - -### Installing dbxfs - -The dbxfs officially supports Linux and Mac OS. 
However, it should work on any POSIX system that provides a **FUSE-compatible library** or has the ability to mount **SMB** shares. Since it is written for Python 3.5, it can be installed using the **pip3** package manager. Refer to the following guide if you haven’t installed PIP yet. - -And, install the FUSE library as well. - -On Debian-based systems, run the following command to install FUSE: - -``` -$ sudo apt install libfuse2 - -``` - -On Fedora: - -``` -$ sudo dnf install fuse - -``` - -Once you have installed all required dependencies, run the following command to install the dbxfs utility: - -``` -$ pip3 install dbxfs - -``` - -### Mount Dropbox folder locally - -Create a mount point to mount your Dropbox folder in your local file system. - -``` -$ mkdir ~/mydropbox - -``` - -Then, mount the Dropbox folder locally using the dbxfs utility as shown below: - -``` -$ dbxfs ~/mydropbox - -``` - -You will be asked to generate an access token: - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-1.png) - -To generate an access token, just navigate to the URL given in the above output from your web browser and click **Allow** to authenticate Dropbox access. You need to log in to your Dropbox account to complete the authorization process. - -A new authorization code will be generated in the next screen. Copy the code, head back to your terminal, and paste it at the dbxfs prompt to finish the process. - -You will then be asked to save the credentials for future access. Type **Y** to save or **N** to decline. Then, you need to enter a passphrase twice for the new access token. - -Finally, click **Y** to accept **“/home/username/mydropbox”** as the default mount point. If you want to set a different path, type **N** and enter the location of your choice. - -[![Generate access token 2][3]][4] - -All done! From now on, you can see your Dropbox folder is locally mounted in your filesystem. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Dropbox-in-file-manager.png) - -### Change Access Token Storage Path - -By default, the dbxfs application will store your Dropbox access token in the system keyring or an encrypted file. However, you might want to store it in a **gpg** encrypted file or something else. If so, get an access token by creating a personal app on the [Dropbox developers app console][5]. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/access-token.png) - -Once the app is created, click the **Generate** button in the next screen. This access token can be used to access your Dropbox account via the API. Don’t share your access token with anyone. - -![](https://www.ostechnix.com/wp-content/uploads/2018/10/Create-a-new-app.png) - -Once you have created an access token, encrypt it using any encryption tool of your choice, such as [**Cryptomator**][6], [**Cryptkeeper**][7], [**CryptGo**][8], [**Cryptr**][9], [**Tomb**][10], [**Toplip**][11] or [**GnuPG**][12], and store it in your preferred location. - -Next, edit the dbxfs configuration file and add the following line in it: - -``` -"access_token_command": ["gpg", "--decrypt", "/path/to/access/token/file.gpg"] - -``` - -You can find the dbxfs configuration file by running the following command: - -``` -$ dbxfs --print-default-config-file - -``` - -For more details, refer to the dbxfs help section: - -``` -$ dbxfs -h - -``` - -As you can see, mounting your Dropbox folder locally in your file system using the dbxfs utility is no big deal. As far as I tested, dbxfs just works fine as expected.
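One step the guide above does not cover is unmounting. Since dbxfs exposes Dropbox as a FUSE filesystem, it can be detached like any other FUSE mount; the following is a sketch assuming the same mount point used above:

```
$ fusermount -u ~/mydropbox
```

On systems without the fusermount helper, a plain `sudo umount ~/mydropbox` should work as well.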
Give it a try if you’re interested in seeing how it works, and let us know about your experience in the comment section below. - -And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned! - -Cheers! - - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/dbxfs-mount-dropbox-folder-locally-as-virtual-file-system-in-linux/ - -作者:[SK][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[1]: https://www.ostechnix.com/how-to-mount-google-drive-locally-as-virtual-file-system-in-linux/ -[2]: https://www.ostechnix.com/install-dropbox-in-ubuntu-18-04-lts-desktop/ -[3]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[4]: http://www.ostechnix.com/wp-content/uploads/2018/10/Generate-access-token-2.png -[5]: https://dropbox.com/developers/apps -[6]: https://www.ostechnix.com/cryptomator-open-source-client-side-encryption-tool-cloud/ -[7]: https://www.ostechnix.com/how-to-encrypt-your-personal-foldersdirectories-in-linux-mint-ubuntu-distros/ -[8]: https://www.ostechnix.com/cryptogo-easy-way-encrypt-password-protect-files/ -[9]: https://www.ostechnix.com/cryptr-simple-cli-utility-encrypt-decrypt-files/ -[10]: https://www.ostechnix.com/tomb-file-encryption-tool-protect-secret-files-linux/ -[11]: https://www.ostechnix.com/toplip-strong-file-encryption-decryption-cli-utility/ -[12]: https://www.ostechnix.com/an-easy-way-to-encrypt-and-decrypt-files-from-commandline-in-linux/ diff --git a/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md b/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md deleted file mode 100644 index 7b77a9cf73..0000000000 --- a/sources/tech/20181005 Terminalizer - A Tool To Record Your Terminal And Generate Animated Gif Images.md +++ /dev/null @@ -1,173 +0,0 @@ -thecyanbird translating - -Terminalizer – A Tool To Record Your Terminal And Generate Animated Gif Images -====== -This is a known topic for most of us, so I don’t want to go into too much detail about it. Also, we have written many articles on this topic. - -The script command is one of the standard commands to record Linux terminal sessions. Today we are going to discuss a similar kind of tool called Terminalizer. - -This tool will help us record users’ terminal activity and identify other useful information from the output. - -### What Is Terminalizer - -Terminalizer allows users to record their terminal activity and generate animated GIF images from it. It’s a highly customizable CLI tool; users can share a recording file through a link to an online web player.
- -**Suggested Read :** -**(#)** [Script – A Simple Command To Record Your Terminal Session Activity][1] -**(#)** [Automatically Record/Capture All Users Terminal Sessions Activity In Linux][2] -**(#)** [Teleconsole – A Tool To Share Your Terminal Session Instantly To Anyone In Seconds][3] -**(#)** [tmate – Instantly Share Your Terminal Session To Anyone In Seconds][4] -**(#)** [Peek – Create a Animated GIF Recorder in Linux][5] -**(#)** [Kgif – A Simple Shell Script to Create a Gif File from Active Window][6] -**(#)** [Gifine – Quickly Create An Animated GIF Video In Ubuntu/Debian][7] - -There is no official distribution package for this utility, but we can easily install it using Node.js. - -### How To Install Node.js in Linux - -Node.js can be installed in multiple ways. Here, we are going to teach you the standard method. - -For Ubuntu/LinuxMint, use [APT-GET Command][8] or [APT Command][9] to install Node.js: - -``` -$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash - -$ sudo apt-get install -y nodejs - -``` - -For Debian, use [APT-GET Command][8] or [APT Command][9] to install Node.js: - -``` -# curl -sL https://deb.nodesource.com/setup_8.x | bash - -# apt-get install -y nodejs - -``` - -For **`RHEL/CentOS`**, use [YUM Command][10] to install Node.js: - -``` -$ sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash - -$ sudo yum install epel-release -$ sudo yum -y install nodejs - -``` - -For **`Fedora`**, use [DNF Command][11] to install Node.js: - -``` -$ sudo dnf install nodejs - -``` - -For **`Arch Linux`**, use [Pacman Command][12] to install Node.js: - -``` -$ sudo pacman -S nodejs npm - -``` - -For **`openSUSE`**, use [Zypper Command][13] to install Node.js: - -``` -$ sudo zypper in nodejs6 - -``` - -### How to Install Terminalizer - -As you have already installed the prerequisite package, Node.js, it’s now time to install Terminalizer on your system. Simply run the below npm command to install Terminalizer. - -``` -$ sudo npm install -g terminalizer - -``` - -### How to Use Terminalizer - -To record your session activity using Terminalizer, just run the following Terminalizer command. Once you have started the recording, play around in the terminal and finally hit `CTRL+D` to exit and save the recording. - -``` -# terminalizer record 2g-session - -defaultConfigPath -The recording session is started -Press CTRL+D to exit and save the recording - -``` - -This will save your recording session as a YAML file; in this case, my filename would be 2g-session.yml. -![][15] - -Just type a few commands to verify this and finally hit `CTRL+D` to exit the current capture. When you hit `CTRL+D` in the terminal, you will get the below output. - -``` -# logout -Successfully Recorded -The recording data is saved into the file: -/home/daygeek/2g-session.yml -You can edit the file and even change the configurations. - -``` - -![][16] - -### How to Play the Recorded File - -Use the below command format to play your recorded YAML file. Make sure to use your own recording file name instead of ours. - -``` -# terminalizer play 2g-session - -``` - -Render a recording file as an animated GIF image. - -``` -# terminalizer render 2g-session - -``` - -`Note:` The below two commands are not implemented yet in the current version and will be available in the next version. - -If you would like to share your recording with others, then upload the recording file and get a link for an online player and share it.
- -``` -terminalizer share 2g-session - -``` - -Generate a web player for a recording file - -``` -# terminalizer generate 2g-session - -``` - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/terminalizer-a-tool-to-record-your-terminal-and-generate-animated-gif-images/ - -作者:[Prakash Subramanian][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.2daygeek.com/author/prakash/ -[1]: https://www.2daygeek.com/script-command-record-save-your-terminal-session-activity-linux/ -[2]: https://www.2daygeek.com/automatically-record-all-users-terminal-sessions-activity-linux-script-command/ -[3]: https://www.2daygeek.com/teleconsole-share-terminal-session-instantly-to-anyone-in-seconds/ -[4]: https://www.2daygeek.com/tmate-instantly-share-your-terminal-session-to-anyone-in-seconds/ -[5]: https://www.2daygeek.com/peek-create-animated-gif-screen-recorder-capture-arch-linux-mint-fedora-ubuntu/ -[6]: https://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/ -[7]: https://www.2daygeek.com/gifine-create-animated-gif-vedio-recorder-linux-mint-debian-ubuntu/ -[8]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ -[9]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ -[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ -[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ -[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ -[13]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ -[14]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[15]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-record-2g-session-1.gif -[16]: https://www.2daygeek.com/wp-content/uploads/2018/10/terminalizer-play-2g-session.gif diff --git a/sources/tech/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md b/sources/tech/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md deleted file mode 100644 index 29d8a9c895..0000000000 --- a/sources/tech/20181006 LinuxBoot for Servers - Enter Open Source, Goodbye Proprietary UEFI.md +++ /dev/null @@ -1,125 +0,0 @@ -Translating by qhwdw -LinuxBoot for Servers: Enter Open Source, Goodbye Proprietary UEFI -============================================================ - -[LinuxBoot][13] is an Open Source [alternative][14] to Proprietary [UEFI][15] firmware. It was released last year and is now being increasingly preferred by leading hardware manufacturers as default firmware. Last year, LinuxBoot was warmly [welcomed][16]into the Open Source family by The Linux Foundation. - -This project was an initiative by Ron Minnich, author of LinuxBIOS and lead of [coreboot][17] at Google, in January 2017. - -Google, Facebook, [Horizon Computing Solutions][18], and [Two Sigma][19] collaborated together to develop the [LinuxBoot project][20] (formerly called [NERF][21]) for server machines based on Linux. 
- -Its openness allows server users to easily customize their own boot scripts, fix issues, build their own [runtimes][22] and [reflash their firmware][23] with their own keys. They do not need to wait for vendor updates. - -Following is a video of [Ubuntu Xenial][24] booting for the first time with NERF BIOS: - -[视频](https://youtu.be/HBkZAN3xkJg) - -Let’s talk about some other advantages by comparing it to UEFI in terms of server hardware. - -### Advantages of LinuxBoot over UEFI - -![LinuxBoot vs UEFI](https://4bds6hergc-flywheel.netdna-ssl.com/wp-content/uploads/2018/10/linuxboot-uefi.png) - -Here are some of the major advantages of LinuxBoot over UEFI: - -### Significantly faster startup - -It can boot up server boards in less than twenty seconds, versus multiple minutes on UEFI. - -### Significantly more flexible - -LinuxBoot can make use of any devices, filesystems and protocols that Linux supports. - -### Potentially more secure - -Linux device drivers and filesystems receive significantly more scrutiny than those that ship with UEFI. - -We can argue that UEFI is partly open with [EDK II][25] and LinuxBoot is partly closed. But it has been [addressed][26] that even such EDK II code does not receive the level of inspection and correctness that the [Linux Kernel][27] goes through, while there is a huge amount of other closed source components within UEFI development. - -On the other hand, LinuxBoot has a significantly smaller binary footprint, at only a few hundred KB, compared to the 32 MB of UEFI binaries. - -To be precise, LinuxBoot fits a whole lot better into the [Trusted Computing Base][28], unlike UEFI. - -LinuxBoot has a [kexec][30] based bootloader which does not support booting Windows/non-Linux kernels, but that is insignificant since most cloud servers are Linux-based. - -### LinuxBoot adoption - -In 2011, the [Open Compute Project][31] was started by [Facebook][32], who [open-sourced][33] designs of some of their servers, built to make their data centers more efficient. LinuxBoot has been tested on a few pieces of Open Compute hardware, listed below: - -* Winterfell - -* Leopard - -* Tioga Pass - -More [OCP][34] hardware is described [here][35] in brief. The OCP Foundation runs a dedicated project on firmware through [Open System Firmware][36]. - -Some other devices that support LinuxBoot are: - -* [QEMU][9] emulated [Q35][10] systems - -* [Intel S2600wf][11] - -* [Dell R630][12] - -At the end of last month, [Equus Compute Solutions][37] [announced][38] the release of its [WHITEBOX OPEN™][39] M2660 and M2760 servers, as a part of their custom, cost-optimized open-hardware server and storage platforms. Both of them support LinuxBoot to customize the server BIOS for flexibility and improved security, and to create a blazingly fast booting experience. - -### What do you think of LinuxBoot? - -LinuxBoot is quite well documented [on GitHub][40]. Do you like the features that set it apart from UEFI? Would you prefer using LinuxBoot rather than UEFI for starting up servers, owing to the former’s open-ended development and future? Let us know in the comments below.
- --------------------------------------------------------------------------------- - -via: https://itsfoss.com/linuxboot-uefi/ - -作者:[ Avimanyu Bandyopadhyay][a] -选题:[oska874][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://itsfoss.com/author/avimanyu/ -[b]:https://github.com/oska874 -[1]:https://itsfoss.com/linuxboot-uefi/# -[2]:https://itsfoss.com/linuxboot-uefi/# -[3]:https://itsfoss.com/linuxboot-uefi/# -[4]:https://itsfoss.com/linuxboot-uefi/# -[5]:https://itsfoss.com/linuxboot-uefi/# -[6]:https://itsfoss.com/linuxboot-uefi/# -[7]:https://itsfoss.com/author/avimanyu/ -[8]:https://itsfoss.com/linuxboot-uefi/#comments -[9]:https://en.wikipedia.org/wiki/QEMU -[10]:https://wiki.qemu.org/Features/Q35 -[11]:https://trmm.net/S2600 -[12]:https://trmm.net/NERF#Installing_on_a_Dell_R630 -[13]:https://www.linuxboot.org/ -[14]:https://www.phoronix.com/scan.php?page=news_item&px=LinuxBoot-OSFC-2018-State -[15]:https://itsfoss.com/check-uefi-or-bios/ -[16]:https://www.linuxfoundation.org/blog/2018/01/system-startup-gets-a-boost-with-new-linuxboot-project/ -[17]:https://en.wikipedia.org/wiki/Coreboot -[18]:http://www.horizon-computing.com/ -[19]:https://www.twosigma.com/ -[20]:https://trmm.net/LinuxBoot_34c3 -[21]:https://trmm.net/NERF -[22]:https://trmm.net/LinuxBoot_34c3#Runtimes -[23]:http://www.tech-faq.com/flashing-firmware.html -[24]:https://itsfoss.com/features-ubuntu-1604/ -[25]:https://www.tianocore.org/ -[26]:https://media.ccc.de/v/34c3-9056-bringing_linux_back_to_server_boot_roms_with_nerf_and_heads -[27]:https://medium.com/@bhumikagoyal/linux-kernel-development-cycle-52b4c55be06e -[28]:https://en.wikipedia.org/wiki/Trusted_computing_base -[29]:https://itsfoss.com/adobe-alternatives-linux/ -[30]:https://en.wikipedia.org/wiki/Kexec -[31]:https://en.wikipedia.org/wiki/Open_Compute_Project -[32]:https://github.com/facebook -[33]:https://github.com/opencomputeproject -[34]:https://www.networkworld.com/article/3266293/lan-wan/what-is-the-open-compute-project.html -[35]:http://hyperscaleit.com/ocp-server-hardware/ -[36]:https://www.opencompute.org/projects/open-system-firmware -[37]:https://www.equuscs.com/ -[38]:http://www.dcvelocity.com/products/Software_-_Systems/20180924-equus-compute-solutions-introduces-whitebox-open-m2660-and-m2760-servers/ -[39]:https://www.equuscs.com/servers/whitebox-open/ -[40]:https://github.com/linuxboot/linuxboot diff --git a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md index 71adf0112b..27616a9f6e 100644 --- a/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md +++ b/sources/tech/20181008 Taking notes with Laverna, a web-based information organizer.md @@ -1,5 +1,3 @@ -translating by cyleft - Taking notes with Laverna, a web-based information organizer ====== diff --git a/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md b/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md index f2885b177c..2fd085eda0 100644 --- a/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md +++ b/sources/tech/20181011 Exploring the Linux kernel- The secrets of Kconfig-kbuild.md @@ -1,4 +1,5 @@ -translating by leemeans +Translating by Jamskr + Exploring the Linux kernel: The secrets of 
Kconfig/kbuild ====== Dive into understanding how the Linux config/build system works. diff --git a/sources/tech/20181022 Improve login security with challenge-response authentication.md b/sources/tech/20181022 Improve login security with challenge-response authentication.md deleted file mode 100644 index 66ed2534b0..0000000000 --- a/sources/tech/20181022 Improve login security with challenge-response authentication.md +++ /dev/null @@ -1,183 +0,0 @@ -Improve login security with challenge-response authentication -====== - -![](https://fedoramagazine.org/wp-content/uploads/2018/10/challenge-response-816x345.png) - -### Introduction - -Today, Fedora offers multiple ways to improve the secure authentication of our user accounts. Of course it has the familiar user name and password to login. It also offers additional authentication options such as biometric, fingerprint, smart card, one-time password, and even challenge-response authentication. - -Each authentication method has clear pros and cons. That, in itself, could be a topic for a rather lengthy article. Fedora Magazine has covered a few of these options previously: - - -+ [Using the YubiKey4 with Fedora][1] -+ [Fedora 28: Better smart card support in OpenSSH][2] - - -One of the most secure methods in modern Fedora releases is offline hardware challenge-response. It’s also one of the easiest to deploy. Here’s how. - -### Challenge-response authentication - -Technically, when you provide a password, you’re responding to a user name challenge. The offline challenge response covered here requires your user name first. Next, Fedora challenges you to provide an encrypted physical hardware token. The token responds to the challenge with another encrypted key it stores via the Pluggable Authentication Modules (PAM) framework. Finally, Fedora prompts you for the password. This prevents someone from just using a found hardware token, or just using a user name and password without the correct encrypted key. - -This means that in addition to your user name and password, you must have previously registered one or more encrypted hardware tokens with the OS. And you have to provide that physical hardware token to be able to authenticate with your user name. - -Some challenge-response methods, like one time passwords (OTP), take an encrypted code key on the hardware token, and pass that key across the network to a remote authentication server. The server then tells Fedora’s PAM framework if it’s is a valid token for that user name. This is great if the authentication server(s) are on the local network. The downside is if the network connection is down or you’re working remote without a network connection, you can’t use this remote authentication method. You could be locked out of the system until you can connect through the network to the server. - -Sometimes a workplace requires use of Yubikey One Time Passwords (OTP) configuration. However, on home or personal systems you may prefer a local challenge-response configuration. Everything is local, and the method requires no remote network calls. The following process works on Fedora 27, 28, and 29. - -### Preparation - -#### Hardware token keys - -First you need a secure hardware token key. Specifically, this process requires a Yubikey 4, Yubikey NEO, or a recently released Yubikey 5 series device which also supports FIDO2. You should purchase two of them to provide a backup in case one becomes lost or damaged. You can use these keys on numerous workstations. 
The simpler FIDO or FIDO U2F only versions don’t work for this process, but are great for online services that use FIDO. - -#### Backup, backup, and backup - -Next, make a backup of all your important data. You may want to test the configuration in a Fedora 27/28/29 cloned VM to make sure you understand the process before setting up your personal workstation. - -#### Updating and installing - -Now make sure Fedora is up to date. Then install the required Fedora Yubikey packages via these dnf commands: - -``` -$ sudo dnf upgrade -$ sudo dnf install ykclient* ykpers* pam_yubico* -$ cd -``` - -If you’re in a VM environment, such as Virtual Box, make sure the Yubikey device is inserted in a USB port, and enable USB access to the Yubikey in the VM control. - -### Configuring Yubikey - -Verify that your user account has access to the USB Yubikey: - -``` -$ ykinfo -v -version: 3.5.0 -``` - -If the YubiKey is not detected, the following error message appears: - -``` -Yubikey core error: no yubikey present -``` - -Next, initialize each of your new Yubikeys with the following ykpersonalize command. This sets up the Yubikey configuration slot 2 with a Challenge Response using the HMAC-SHA1 algorithm, even with less than 64 characters. If you have already setup your Yubikeys for challenge-response, you don’t need to run ykpersonalize again. - -``` -ykpersonalize -2 -ochal-resp -ochal-hmac -ohmac-lt64 -oserial-api-visible -``` - -Some users leave the YubiKey in their workstation while using it, and even use challenge-response for virtual machines. However, for more security you may prefer to manually trigger the Yubikey to respond to challenge. - -To add that manual challenge button trigger, add the -ochal-btn-trig flag. This flag causes the Yubikey to flash the yubikey LED on a request. It waits for you to press the button on the hardware key area within 15 seconds to produce the response key. - -``` -$ ykpersonalize -2 -ochal-resp -ochal-hmac -ohmac-lt64 -ochal-btn-trig -oserial-api-visible -``` - -Do this for each of your new hardware keys, only once per key. Once you have programmed your keys, store the Yubikey configuration to ~/.yubico with the following command: - -``` -$ ykpamcfg -2 -v -debug: util.c:222 (check_firmware_version): YubiKey Firmware version: 4.3.4 - -Sending 63 bytes HMAC challenge to slot 2 -Sending 63 bytes HMAC challenge to slot 2 -Stored initial challenge and expected response in '/home/chuckfinley/.yubico/challenge-9992567'. -``` - -If you are setting up multiple keys for backup purposes, configure all the keys the same, and store each key’s challenge-response using the ykpamcfg utility. If you run the command ykpersonalize on an existing registered key, you must store the configuration again. - -### Configuring /etc/pam.d/sudo - -Now to verify this configuration worked, **in the same terminal window** you’ll setup sudo to require the use of the Yubikey challenge-response. Insert the following line into the /etc/pam.d/sudo file: - -``` -auth required pam_yubico.so mode=challenge-response -``` - -Insert the above auth line into the file above the auth include system-auth line. Then save the file and exit the editor. 
In a default Fedora 29 setup, /etc/pam.d/sudo should now look like this: - -``` -#%PAM-1.0 -auth required pam_yubico.so mode=challenge-response -auth include system-auth -account include system-auth -password include system-auth -session optional pam_keyinit.so revoke -session required pam_limits.so -session include system-auth -``` - -**Keep this original terminal window open** , and test by opening another new terminal window. In the new terminal window type: - -``` -$ sudo echo testing -``` - -You should notice the LED blinking on the key. Tap the Yubikey button and you should see a prompt for your sudo password. After you enter your password, you should see “testing” echoed in the terminal screen. - -Now test to ensure a correct failure. Start another terminal window and remove the Yubikey from the USB port. Verify that sudo no longer works without the Yubikey with this command: - -``` -$ sudo echo testing fail -``` - -You should immediately be prompted for the sudo password. Even if you enter the password, it should fail. - -### Configuring Gnome Desktop Manager - -Once your testing is complete, now you can add challenge-response support for the graphical login. Re-insert your Yubikey into the USB port. Next you’ll add the following line to the /etc/pam.d/gdm-password file: - -``` -auth required pam_yubico.so mode=challenge-response -``` - -Open a terminal window, and issue the following command. You can use another editor if desired: - -``` -$ sudo vi /etc/pam.d/gdm-password -``` - -You should see the yubikey LED blinking. Press the yubikey button, then enter the password at the prompt. - -Modify the /etc/pam.d/gdm-password file to add the new auth line above the existing line auth substack password-auth. The top of the file should now look like this: - -``` -auth [success=done ignore=ignore default=bad] pam_selinux_permit.so -auth required pam_yubico.so mode=challenge-response -auth substack password-auth -auth optional pam_gnome_keyring.so -auth include postlogin - -account required pam_nologin.so -``` - -Save the changes and exit the editor. If you use vi, the key sequence is to hit the **Esc** key, then type wq! at the prompt to save and exit. - -### Conclusion - -Now log out of GNOME. With the Yubikey inserted into the USB port, click on your user name in the graphical login. The Yubikey LED begins to flash. Touch the button, and you will be prompted for your password. - -If you lose the Yubikey, you can still use the secondary backup Yubikey in addition to your set password. You can also add additional Yubikey configurations to your user account. - -If someone gains access to your password, they still can’t login without your physical hardware Yubikey. Congratulations! You’ve now dramatically increased the security of your workstation login. 
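If you bought a second Yubikey as suggested earlier, this is a good time to register it as a backup. The sketch below simply repeats the earlier steps with only the backup key inserted; pam_yubico stores a separate challenge file per key serial number under ~/.yubico, so both keys can then unlock the same account:

```
$ ykinfo -v
$ ykpersonalize -2 -ochal-resp -ochal-hmac -ohmac-lt64 -oserial-api-visible
$ ykpamcfg -2 -v
```

Remember that ykpersonalize only needs to run once per key, and that if you ever rerun it on an already registered key, you must store the configuration again with ykpamcfg, as noted above.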
- --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/login-challenge-response-authentication/ - -作者:[nabooengineer][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/nabooengineer/ -[b]: https://github.com/lujun9972 -[1]: https://fedoramagazine.org/using-the-yubikey4-with-fedora/ -[2]: https://fedoramagazine.org/fedora-28-better-smart-card-support-openssh/ - diff --git a/sources/tech/20181025 How to write your favorite R functions in Python.md b/sources/tech/20181025 How to write your favorite R functions in Python.md deleted file mode 100644 index a06d3557b9..0000000000 --- a/sources/tech/20181025 How to write your favorite R functions in Python.md +++ /dev/null @@ -1,153 +0,0 @@ -How to write your favorite R functions in Python -====== -R or Python? This Python script mimics convenient R-style functions for doing statistics nice and easy. - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/search_find_code_issue_bug_programming.png?itok=XPrh7fa0) - -One of the great modern battles of data science and machine learning is "Python vs. R." There is no doubt that both have gained enormous ground in recent years to become top programming languages for data science, predictive analytics, and machine learning. In fact, according to a recent IEEE article, Python overtook C++ as the [top programming language][1] and R firmly secured its spot in the top 10. - -However, there are some fundamental differences between these two. [R was developed primarily][2] as a tool for statistical analysis and quick prototyping of a data analysis problem. Python, on the other hand, was developed as a general purpose, modern object-oriented language in the same vein as C++ or Java but with a simpler learning curve and more flexible demeanor. Consequently, R continues to be extremely popular among statisticians, quantitative biologists, physicists, and economists, whereas Python has slowly emerged as the top language for day-to-day scripting, automation, backend web development, analytics, and general machine learning frameworks and has an extensive support base and open source development community work. - -### Mimicking functional programming in a Python environment - -[R's nature as a functional programming language][3] provides users with an extremely simple and compact interface for quick calculations of probabilities and essential descriptive/inferential statistics for a data analysis problem. For example, wouldn't it be great to be able to solve the following problems with just a single, compact function call? - - * How to calculate the mean/median/mode of a data vector. - * How to calculate the cumulative probability of some event following a normal distribution. What if the distribution is Poisson? - * How to calculate the inter-quartile range of a series of data points. - * How to generate a few random numbers following a Student's t-distribution. - - - -The R programming environment can do all of these. - -On the other hand, Python's scripting ability allows analysts to use those statistics in a wide variety of analytics pipelines with limitless sophistication and creativity. 
- -To combine the advantages of both worlds, you just need a simple Python-based wrapper library that contains the most commonly used functions pertaining to probability distributions and descriptive statistics defined in R-style. This enables you to call those functions really fast without having to go to the proper Python statistical libraries and figure out the whole list of methods and arguments. - -### Python wrapper script for most convenient R-functions - -[I wrote a Python script][4] to define the most convenient and widely used R-functions in simple, statistical analysis—in Python. After importing this script, you will be able to use those R-functions naturally, just like in an R programming environment. - -The goal of this script is to provide simple Python subroutines mimicking R-style statistical functions for quickly calculating density/point estimates, cumulative distributions, and quantiles and generating random variates for important probability distributions. - -To maintain the spirit of R styling, the script uses no class hierarchy and only raw functions are defined in the file. Therefore, a user can import this one Python script and use all the functions whenever they're needed with a single name call. - -Note that I use the word mimic. Under no circumstance am I claiming to emulate R's true functional programming paradigm, which consists of a deep environmental setup and complex relationships between those environments and objects. This script allows me (and I hope countless other Python users) to quickly fire up a Python program or Jupyter notebook, import the script, and start doing simple descriptive statistics in no time. That's the goal, nothing more, nothing less. - -If you've coded in R (maybe in grad school) and are just starting to learn and use Python for data analysis, you will be happy to see and use some of the same well-known functions in your Jupyter notebook in a manner similar to how you use them in your R environment. - -Whatever your reason, using this script is fun. - -### Simple examples - -To start, just import the script and start working with lists of numbers as if they were data vectors in R. - -``` -from R_functions import * -lst=[20,12,16,32,27,65,44,45,22,18] - -``` - -Say you want to calculate the [Tukey five-number][5] summary from a vector of data points. You just call one simple function, **fivenum**, and pass it the vector. It will return the five-number summary in a NumPy array. - -``` -lst=[20,12,16,32,27,65,44,45,22,18] -fivenum(lst) -> array([12. , 18.5, 24.5, 41. , 65. ]) -``` - -Maybe you want to know the answer to the following question: - -Suppose a machine outputs 10 finished goods per hour on average with a standard deviation of 2. The output pattern follows a near normal distribution. What is the probability that the machine will output at least 7 but no more than 12 units in the next hour? - -The answer is essentially this: - -![](https://opensource.com/sites/default/files/uploads/r-functions-in-python_1.png) - -You can obtain the answer with just one line of code using **pnorm**: - -``` -pnorm(12,10,2)-pnorm(7,10,2) -> 0.7745375447996848 -``` - -Or maybe you need to answer the following: - -Suppose you have a loaded coin with a 60% probability of turning up heads every time you toss it. You are playing a game of 10 tosses. How do you plot and map out the chances of every possible number of wins (from 0 to 10) with this coin?
- -You can obtain a nice bar chart with just a few lines of code using just one function, **dbinom**: - -``` -import matplotlib.pyplot as plt -probs=[] -for i in range(11): -    probs.append(dbinom(i,10,0.6)) -plt.bar(range(11),height=probs) -plt.grid(True) -plt.show() -``` - -![](https://opensource.com/sites/default/files/uploads/r-functions-in-python_2.png) - -### Simple interface for probability calculations - -R offers an extremely simple and intuitive interface for quick calculations from essential probability distributions. The interface goes like this: - - * **d** {distribution} gives the density function value at a point **x** - * **p** {distribution} gives the cumulative value at a point **x** - * **q** {distribution} gives the quantile function value at a probability **p** - * **r** {distribution} generates one or multiple random variates - - - -In our implementation, we stick to this interface and its associated argument list so you can execute these functions exactly like you would in an R environment. - -### Currently implemented functions - -The following R-style functions are implemented in the script for fast calling. - - * Mean, median, variance, standard deviation - * Tukey five-number summary, IQR - * Covariance of a matrix or between two vectors - * Density, cumulative probability, quantile function, and random variate generation for the following distributions: normal, uniform, binomial, Poisson, F, Student's t, Chi-square, beta, and gamma. - - - -### Work in progress - -Obviously, this is a work in progress, and I plan to add some other convenient R-functions to this script. For example, in R, a single command, **lm**, can get you an ordinary least-squares fitted model to a numerical dataset with all the necessary inferential statistics (P-values, standard error, etc.). This is powerfully brief and compact! On the other hand, standard linear regression problems in Python are often tackled using [Scikit-learn][6], which needs a bit more scripting for this use, so I plan to incorporate this single-function linear model fitting feature using Python's [statsmodels][7] backend. - -If you like and use this script in your work, please help others find it by starring or forking its [GitHub repository][8]. Also, you can check my other [GitHub repos][9] for fun code snippets in Python, R, or MATLAB and some machine learning resources. - -If you have any questions or ideas to share, please contact me at [tirthajyoti[AT]gmail.com][10]. If you are, like me, passionate about machine learning and data science, please [add me on LinkedIn][11] or [follow me on Twitter][12]. - -Originally published on [Towards Data Science][13]. Reposted under [CC BY-SA 4.0][14].
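As a closing illustration, and not necessarily how the linked script implements them, here is a minimal sketch of what three such R-style wrappers can look like when built on SciPy. The names and argument order mirror R's conventions:

```
from scipy.stats import norm, binom

def pnorm(q, mean=0, sd=1):
    # Cumulative probability of a normal distribution, R-style
    return norm.cdf(q, loc=mean, scale=sd)

def dbinom(x, size, prob):
    # Probability mass of exactly x successes in size trials
    return binom.pmf(x, n=size, p=prob)

def rnorm(n, mean=0, sd=1):
    # Generate n random variates from a normal distribution
    return norm.rvs(loc=mean, scale=sd, size=n)

# Reproduces the machine-output example from earlier in the article
print(pnorm(12, 10, 2) - pnorm(7, 10, 2))  # ~0.7745
```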
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/write-favorite-r-functions-python - -作者:[Tirthajyoti Sarkar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/tirthajyoti -[b]: https://github.com/lujun9972 -[1]: https://spectrum.ieee.org/at-work/innovation/the-2018-top-programming-languages -[2]: https://www.coursera.org/lecture/r-programming/overview-and-history-of-r-pAbaE -[3]: http://adv-r.had.co.nz/Functional-programming.html -[4]: https://github.com/tirthajyoti/StatsUsingPython/blob/master/R_Functions.py -[5]: https://en.wikipedia.org/wiki/Five-number_summary -[6]: http://scikit-learn.org/stable/ -[7]: https://www.statsmodels.org/stable/index.html -[8]: https://github.com/tirthajyoti/StatsUsingPython -[9]: https://github.com/tirthajyoti?tab=repositories -[10]: mailto:tirthajyoti@gmail.com -[11]: https://www.linkedin.com/in/tirthajyoti-sarkar-2127aa7/ -[12]: https://twitter.com/tirthajyotiS -[13]: https://towardsdatascience.com/how-to-write-your-favorite-r-functions-in-python-11e1e9c29089 -[14]: https://creativecommons.org/licenses/by-sa/4.0/ diff --git a/sources/tech/20181025 Monitoring database health and behavior- Which metrics matter.md b/sources/tech/20181025 Monitoring database health and behavior- Which metrics matter.md deleted file mode 100644 index 520f08342b..0000000000 --- a/sources/tech/20181025 Monitoring database health and behavior- Which metrics matter.md +++ /dev/null @@ -1,82 +0,0 @@ -Monitoring database health and behavior: Which metrics matter? -====== -Monitoring your database can be overwhelming or seem not important. Here's how to do it right. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/metrics_graph_stats_blue.png?itok=OKCc_60D) - -We don’t talk about our databases enough. In this age of instrumentation, we monitor our applications, our infrastructure, and even our users, but we sometimes forget that our database deserves monitoring, too. That’s largely because most databases do their job so well that we simply trust them to do it. Trust is great, but confirmation of our assumptions is even better. - -![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image1_-_bffs.png?itok=BZQM_Fos) - -### Why monitor your databases? - -There are plenty of reasons to monitor your databases, most of which are the same reasons you'd monitor any other part of your systems: Knowing what’s going on in the various components of your applications makes you a better-informed developer who makes smarter decisions. - -![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image5_fire.png?itok=wsip2Fa4) - -More specifically, databases are great indicators of system health and behavior. Odd behavior in the database can point to problem areas in your applications. Alternately, when there’s odd behavior in your application, you can use database metrics to help expedite the debugging process. - -### The problem - -The slightest investigation reveals one problem with monitoring databases: Databases have a lot of metrics. "A lot" is an understatement—if you were Scrooge McDuck, you could swim through all of the metrics available. If this were Wrestlemania, the metrics would be folding chairs. 
Monitoring them all doesn’t seem practical, so how do you decide which metrics to monitor? - -![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image2_db_metrics.png?itok=Jd9NY1bt) - -### The solution - -The best way to start monitoring databases is to identify some foundational, database-agnostic metrics. These metrics create a great start to understanding the lives of your databases. - -### Throughput: How much is the database doing? - -The easiest way to start monitoring a database is to track the number of requests the database receives. We have high expectations for our databases; we expect them to store data reliably and handle all of the queries we throw at them, which could be one massive query a day or millions of queries from users all day long. Throughput can tell you which of those is true. - -You can also group requests by type (reads, writes, server-side, client-side, etc.) to begin analyzing the traffic. - -### Execution time: How long does it take the database to do its job? - -This metric seems obvious, but it often gets overlooked. You don’t just want to know how many requests the database received, but also how long the database spent on each request. It’s important to approach execution time with context, though: What's slow for a time-series database like InfluxDB isn’t the same as what's slow for a relational database like MySQL. Slow in InfluxDB might mean milliseconds, whereas MySQL’s default value for its `long_query_time` variable (the threshold for logging a slow query) is ten seconds. - -![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image4_slow_is_relative.png?itok=9RkuzUi8) - -Monitoring execution time is not the same thing as improving execution time, so beware of the temptation to spend time on optimizations if you have other problems in your app to fix. - -### Concurrency: How many jobs is the database doing at the same time? - -Once you know how many requests the database is handling and how long each one takes, you need to add a layer of complexity to start getting real value from these metrics. - -If the database receives ten requests and each one takes ten seconds to complete, is the database busy for 100 seconds, ten seconds—or somewhere in between? The number of concurrent tasks changes the way the database’s resources are used. When you consider things like the number of connections and threads, you’ll start to get a fuller picture of your database metrics. - -Concurrency can also affect latency, which includes not only the time it takes for the task to be completed (execution time) but also the time the task needs to wait before it’s handled. - -### Utilization: What percentage of the time was the database busy? - -Utilization is a culmination of throughput, execution time, and concurrency to determine how often the database was available—or alternatively, how often the database was too busy to respond to a request. - -![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image6_telephone.png?itok=YzdpwUQP) - -This metric is particularly useful for determining the overall health and performance of your database. If it’s available to respond to requests only 80% of the time, you can reallocate resources, work on optimization, or otherwise make changes to get closer to high availability. - -### The good news - -It can seem overwhelming to monitor and analyze, especially because most of us aren’t database experts and we may not have time to devote to understanding these metrics.
But the good news is that most of this work is already done for us. Many databases have an internal performance database (Postgres: pg_stats, CouchDB: Runtime_Statistics, InfluxDB: _internal, etc.), which is designed by database engineers to monitor the metrics that matter for that particular database. You can see things as broad as the number of slow queries or as detailed as the average microseconds each event in the database takes. - -### Conclusion - -Databases create enough metrics to keep us all busy for a long time, and while the internal performance databases are full of useful information, it’s not always clear which metrics you should care about. Start with throughput, execution time, concurrency, and utilization, which provide enough information for you to start understanding the patterns in your database. - -![](https://opensource.com/sites/default/files/styles/medium/public/uploads/image3_3_hearts.png?itok=iHF-OSwx) - -Are you monitoring your databases? Which metrics have you found to be useful? Tell me about it! - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/10/database-metrics-matter - -作者:[Katy Farmer][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/thekatertot -[b]: https://github.com/lujun9972 diff --git a/sources/tech/20181029 How I organize my knowledge as a Software Engineer.md b/sources/tech/20181029 How I organize my knowledge as a Software Engineer.md deleted file mode 100644 index c11e1c9c38..0000000000 --- a/sources/tech/20181029 How I organize my knowledge as a Software Engineer.md +++ /dev/null @@ -1,119 +0,0 @@ -@flowsnow is translating - - -How I organize my knowledge as a Software Engineer -============================================================ - - -Software Development and Technology in general are areas that evolve at a very fast pace and continuous learning is essential. -A few minutes navigating the internet, in places like Twitter, Medium, RSS feeds, Hacker News and other specialized sites and communities, are enough to find lots of great pieces of information from articles, case studies, tutorials, code snippets, new applications and much more. - -Saving and organizing all that information can be a daunting task. In this post I will present some tools that I use to do it. - -One of the points I consider very important regarding knowledge management is to avoid lock-in to a particular platform. All the tools I use allow you to export your data in standard formats like Markdown and HTML. - -Note that my workflow is not perfect and I am constantly searching for new tools and ways to optimize it. Also, everyone is different, so what works for me might not work well for you. - -### Knowledge base with NotionHQ - -For me, the fundamental piece of knowledge management is to have some kind of personal knowledge base / wiki. A place where you can save links, bookmarks, notes, etc. in an organized manner. - -I use [NotionHQ][7] for that purpose. I use it to keep notes on various topics, having lists of resources like great libraries or tutorials grouped by programming language, bookmarking interesting blog posts and tutorials, and much more, not only related to software development but also to my personal life. - -What I really like about Notion is how simple it is to create new content.
You write it using Markdown and it is organized as a tree. - -Here are the top-level pages of my "Development" workspace: - - [![Image](https://res.cloudinary.com/practicaldev/image/fetch/s--uMbaRUtu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://i.imgur.com/kRnuvMV.png)][8] - -Notion has some other nice features like integrated spreadsheets / databases and task boards. - -You will need to subscribe to the paid Personal Plan if you want to use Notion seriously, as the free plan is somewhat limited. I think it’s worth the price. Notion allows you to export your entire workspace to Markdown files. The export has some important problems, like losing the page hierarchy, but I hope the Notion team can improve that. - -As a free alternative, I would probably use [VuePress][9] or [GitBook][10] to host my own. - -### Save interesting articles with Pocket - -[Pocket][11] is one of my favorite applications ever! With Pocket you can create a reading list of articles from the Internet.  -Every time I see an article that looks interesting, I save it to Pocket using its Chrome extension. Later on, I will read it and, if I find it useful enough, I will use the "Archive" function of Pocket to permanently save that article and clean up my Pocket inbox. - -I try to keep the reading list small enough and keep archiving information that I have dealt with. Pocket allows you to tag articles, which will make it simpler to search for articles on a particular topic later in time. - -You can also save a copy of the article on Pocket servers in case the original site disappears, but you will need Pocket Premium for that. - -Pocket also has a "Discover" feature which suggests similar articles based on the articles you have saved. This is a great way to find new content to read. - -### Snippet Management with SnippetStore - -From GitHub, to Stack Overflow answers, to blog posts, it’s common to find some nice code snippets that you want to save for later. It could be some nice algorithm implementation, a useful script or an example of how to do X in Y language. - -I tried many apps, from simple GitHub Gists to [Boostnote][12], until I discovered [SnippetStore][13]. - -SnippetStore is an open source snippet management app. What distinguishes SnippetStore from others is its simplicity. You can organize snippets by language or tags and you can have multi-file snippets. It’s not perfect but it gets the job done. Boostnote, for example, has more features, but I prefer the simpler way of organizing content in SnippetStore. - -For abbreviations and snippets that I use on a daily basis, I prefer to use my editor / IDE snippets feature, as it is more convenient to use. I use SnippetStore more like a reference of coding examples. - -[Cacher][14] is also an interesting alternative, since it has integrations with many editors, has a CLI tool and uses GitHub Gists as the backend, but $6/month for its pro plan is too much IMO. - -### Managing cheat sheets with DevHints - -[Devhints][15] is a collection of cheat sheets created by Rico Sta. Cruz. It’s open source and powered by Jekyll, one of the most popular static site generators. - -The cheat sheets are written in Markdown with some extra formatting goodies like support for columns. - -I really like the looks of the interface, and being Markdown makes it incredibly easy to add new content and keep it updated and in version control, unlike cheat sheets in PDF or image format that you can find on sites like [Cheatography][16].
As it is open source, I have created my own fork, removed some cheat sheets that I don't need, and added some more.

I use cheat sheets as a reference for how to use a library or programming language, or to remember some commands. It's very handy to have a single page with, for example, all the basic syntax of a specific programming language.

I am still experimenting with this, but it's working great so far.

### Diigo

[Diigo][17] allows you to annotate and highlight parts of websites. I use it to annotate important information when studying new topics, or to save particular paragraphs from articles, Stack Overflow answers, or inspirational quotes from Twitter! ;)

* * *

And that's it. There might be some overlap in functionality among some of the tools, but like I said in the beginning, this is an always-evolving workflow, as I am constantly experimenting and searching for ways to improve and be more productive.

What about you? How do you organize your knowledge? Please feel free to comment below.

Thank you for reading.

------------------------------------------------------------------------

作者简介:

Bruno Paz
Web Engineer. Expert in #PHP and @Symfony Framework. Enthusiast about new technologies. Sports and @FCPorto fan!

--------------------------------------------------------------------------------

via: https://dev.to/brpaz/how-do-i-organize-my-knowledge-as-a-software-engineer-4387

作者:[Bruno Paz][a]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)
选题:[oska874](https://github.com/oska874)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:http://brunopaz.net/
[1]:https://dev.to/brpaz
[2]:http://twitter.com/brunopaz88
[3]:http://github.com/brpaz
[4]:https://dev.to/t/knowledge
[5]:https://dev.to/t/learning
[6]:https://dev.to/t/development
[7]:https://www.notion.so/
[8]:https://res.cloudinary.com/practicaldev/image/fetch/s--uMbaRUtu--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/http://i.imgur.com/kRnuvMV.png
[9]:https://vuepress.vuejs.org/
[10]:https://www.gitbook.com/?t=1
[11]:https://getpocket.com/
[12]:https://boostnote.io/
[13]:https://github.com/ZeroX-DG/SnippetStore
[14]:https://www.cacher.io/
[15]:https://devhints.io/
[16]:https://cheatography.com/
[17]:https://www.diigo.com/index
diff --git a/sources/tech/20181105 5 Minimal Web Browsers for Linux.md b/sources/tech/20181105 5 Minimal Web Browsers for Linux.md
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: subject: (5 Minimal Web Browsers for Linux)
[#]: via: (https://www.linux.com/blog/intro-to-linux/2018/11/5-minimal-web-browsers-linux)
[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
[#]: url: ( )

5 Minimal Web Browsers for Linux
======
![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimal.jpg?itok=ifA0Y3pV)

There are so many reasons to enjoy the Linux desktop. One reason I often state up front is the almost unlimited number of choices to be found at almost every conceivable level. From how you interact with the operating system (via a desktop interface), to how daemons run, to what tools you use, you have a multitude of options.

The same thing goes for web browsers.
You can use anything from open source favorites, such as [Firefox][1] and [Chromium][2], to closed source industry darlings like [Vivaldi][3] and [Chrome][4]. Those options are full-fledged browsers with every possible bell and whistle you'll ever need. For some, these feature-rich browsers are perfect for everyday needs.

There are those, however, who prefer using a web browser without all the frills. In fact, there are many reasons why you might prefer a minimal browser over a standard browser. For some, it's about browser security, while others look at a web browser as a single-function tool (as opposed to a one-stop-shop application). Still others might be running low-powered machines that cannot handle the requirements of, say, Firefox or Chrome. Regardless of the reason, Linux has you covered.

Let's take a look at five of the minimal browsers that can be installed on Linux. I'll be demonstrating these browsers on the Elementary OS platform, but each of these browsers is available to nearly every distribution in the known Linuxverse. Let's dive in.

### GNOME Web

GNOME Web (codename Epiphany, which means ["a usually sudden manifestation or perception of the essential nature or meaning of something"][5]) is the default web browser for Elementary OS, but it can be installed from the standard repositories. (Note, however, that the recommended installation of Epiphany is via Flatpak or Snap.) If you choose to install via the standard package manager, issue a command such as sudo apt-get install epiphany-browser -y for a successful installation.

Epiphany uses the WebKit rendering engine, which is the same engine used in Apple's Safari browser. Couple that rendering engine with the fact that Epiphany has very little bloat to get in the way, and you will enjoy very fast page-rendering speeds. Epiphany development follows strict adherence to the following guidelines:

  * Simplicity - Feature bloat and user interface clutter are considered evil.

  * Standards compliance - No non-standard features will ever be introduced to the codebase.

  * Software freedom - Epiphany will always be released under a license that respects freedom.

  * Human interface - Epiphany follows the [GNOME Human Interface Guidelines][6].

  * Minimal preferences - Preferences are only added when they make sense and after careful consideration.

  * Target audience - Non-technical users are the primary target audience (which helps to define the types of features that are included).

GNOME Web is as clean and simple a web browser as you'll find (Figure 1).

![GNOME Web][8]

Figure 1: The GNOME Web browser displaying a minimal amount of preferences for the user.

[Used with permission][9]

The GNOME Web manifesto reads:

A web browser is more than an application: it is a way of thinking, a way of seeing the world. Epiphany's principles are simplicity, standards compliance, and software freedom.

### Netsurf

The [Netsurf][10] minimal web browser opens almost faster than you can release the mouse button. Netsurf uses its own layout and rendering engine (designed completely from scratch), which is rather hit and miss in its rendering (Figure 2).

![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimalbrowsers_2.jpg?itok=KhGhIKlj)

Although you might find Netsurf suffering from rendering issues on certain sites, understand that the Hubbub HTML parser follows the work-in-progress HTML5 specification, so issues will pop up now and then.
To ease those rendering headaches, Netsurf does include HTTPS support, web page thumbnailing, URL completion, scale view, bookmarks, full-screen mode, keyboard shorts, and no particular GUI toolkit requirements. That last bit is important, especially when you switch from one desktop to another. + +For those curious as to the requirements for Netsurf, the browser can run on a machine as slow as a 30Mhz ARM 6 computer with 16MB of RAM. That’s impressive, by today’s standard. + +### QupZilla + +If you’re looking for a minimal browser that uses the Qt Framework and the QtWebKit rendering engine, [QupZilla][11] might be exactly what you’re looking for. QupZilla does include all the standard features and functions you’d expect from a web browser, such as bookmarks, history, sidebar, tabs, RSS feeds, ad blocking, flash blocking, and CA Certificates management. Even with those features, QupZilla still manages to remain a very fast lightweight web browser. Other features include: Fast startup, speed dial homepage, built-in screenshot tool, browser themes, and more. +One feature that should appeal to average users is that QupZilla has a more standard preferences tools than found in many lightweight browsers (Figure 3). So, if going too far outside the lines isn’t your style, but you still want something lighter weight, QupZilla is the browser for you. + +![QupZilla][13] + +Figure 3: The QupZilla preferences tool. + +[Used with permission][9] + +### Otter Browser + +Otter Browser is a free, open source attempt to recreate the closed-source offerings found in the Opera Browser. Otter Browser uses the WebKit rendering engine and has an interface that should be immediately familiar with any user. Although lightweight, Otter Browser does include full-blown features such as: + + * Passwords manager + + * Add-on manager + + * Content blocking + + * Spell checking + + * Customizable GUI + + * URL completion + + * Speed dial (Figure 4) + + * Bookmarks and various related features + + * Mouse gestures + + * User style sheets + + * Built-in Note tool + + +![Otter][15] + +Figure 4: The Otter Browser Speed Dial tab. + +[Used with permission][9] + +Otter Browser can be run on nearly any Linux distribution from an [AppImage][16], so there’s no installation required. Just download the AppImage file, give the file executable permissions (with the command chmod u+x otter-browser-*.AppImage), and then launch the app with the command ./otter-browser*.AppImage. + +Otter Browser does an outstanding job of rendering websites and could function as your go-to minimal browser with ease. + +### Lynx + +Let’s get really minimal. When I first started using Linux, back in ‘97, one of the web browsers I often turned to was a text-only take on the app called [Lynx][17]. It should come as no surprise that Lynx is still around and available for installation from the standard repositories. As you might expect, Lynx works from the terminal window and doesn’t display pretty pictures or render much in the way of advanced features (Figure 5). In fact, Lynx is as bare-bones a browser as you will find available. Because of how bare-bones this web browser is, it’s not recommended for everyone. But if you happen to have a gui-less web server and you have a need to be able to read the occasional website, Lynx can be a real lifesaver. + +![Lynx][19] + +Figure 5: The Lynx browser rendering the Linux.com page. 
+ +[Used with permission][9] + +I have also found Lynx an invaluable tool when troubleshooting certain aspects of a website (or if some feature on a website is preventing me from viewing the content in a regular browser). Another good reason to use Lynx is when you only want to view the content (and not the extraneous elements). + +### Plenty More Where This Came From + +There are plenty more minimal browsers than this. But the list presented here should get you started down the path of minimalism. One (or more) of these browsers are sure to fill that need, whether you’re running it on a low-powered machine or not. + +Learn more about Linux through the free ["Introduction to Linux" ][20]course from The Linux Foundation and edX. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/intro-to-linux/2018/11/5-minimal-web-browsers-linux + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: https://www.mozilla.org/en-US/firefox/new/ +[2]: https://www.chromium.org/ +[3]: https://vivaldi.com/ +[4]: https://www.google.com/chrome/ +[5]: https://www.merriam-webster.com/dictionary/epiphany +[6]: https://developer.gnome.org/hig/stable/ +[7]: /files/images/minimalbrowsers1jpg +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimalbrowsers_1.jpg?itok=Q7wZLF8B (GNOME Web) +[9]: /licenses/category/used-permission +[10]: https://www.netsurf-browser.org/ +[11]: https://qupzilla.com/ +[12]: /files/images/minimalbrowsers3jpg +[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimalbrowsers_3.jpg?itok=O8iMALWO (QupZilla) +[14]: /files/images/minimalbrowsers4jpg +[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimalbrowsers_4.jpg?itok=5bCa0z-e (Otter) +[16]: https://sourceforge.net/projects/otter-browser/files/ +[17]: https://lynx.browser.org/ +[18]: /files/images/minimalbrowsers5jpg +[19]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/minimalbrowsers_5.jpg?itok=p_Lmiuxh (Lynx) +[20]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20181105 How to manage storage on Linux with LVM.md b/sources/tech/20181105 How to manage storage on Linux with LVM.md index 36cc8d47a0..9c0ee685d6 100644 --- a/sources/tech/20181105 How to manage storage on Linux with LVM.md +++ b/sources/tech/20181105 How to manage storage on Linux with LVM.md @@ -1,3 +1,4 @@ +[zianglei translating] How to manage storage on Linux with LVM ====== Create, expand, and encrypt storage pools as needed with the Linux LVM utilities. diff --git a/sources/tech/20181105 Revisiting the Unix philosophy in 2018.md b/sources/tech/20181105 Revisiting the Unix philosophy in 2018.md deleted file mode 100644 index 1088f0de5b..0000000000 --- a/sources/tech/20181105 Revisiting the Unix philosophy in 2018.md +++ /dev/null @@ -1,106 +0,0 @@ -Translating by Jamkr - -Revisiting the Unix philosophy in 2018 -====== -The old strategy of building small, focused applications is new again in the modern microservices environment. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/brain_data.png?itok=RH6NA32X) - -In 1984, Rob Pike and Brian W. 
Kernighan published an article called "[Program Design in the Unix Environment][1]" in the AT&T Bell Laboratories Technical Journal, in which they argued the case for the Unix philosophy, using the example of BSD's **cat -v** implementation. In a nutshell, that philosophy is: Build small, focused programs—in whatever language—that do only one thing but do this thing well, communicate via **stdin**/**stdout**, and are connected through pipes.

Sound familiar?

Yeah, I thought so. That's pretty much the [definition of microservices][2] offered by James Lewis and Martin Fowler:

> In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.

While one *nix program or one microservice may be very limited or not even very interesting on its own, it's the combination of such independently working units that reveals their true benefit and, therefore, their power.

### *nix vs. microservices

The following table compares programs (such as **cat** or **lsof**) in a *nix environment against programs in a microservices environment.

| | *nix | Microservices |
| -------------------------------- | ------------------------------------------------------------ | ------------------------------------ |
| Unit of execution | program using stdin/stdout | service with HTTP or gRPC API |
| Data flow | Pipes | ? |
| Configuration & parameterization | Command-line arguments, environment variables, config files | JSON/YAML docs |
| Discovery | Package manager, man, make | DNS, environment variables, OpenAPI |

Let's explore each line in slightly greater detail.

#### Unit of execution

The unit of execution in *nix (such as Linux) is an executable file (binary or interpreted script) that, ideally, reads input from **stdin** and writes output to **stdout**. A microservices setup deals with a service that exposes one or more communication interfaces, such as HTTP or gRPC APIs. In both cases, you'll find stateless examples (essentially a purely functional behavior) and stateful examples, where, in addition to the input, some internal (persisted) state decides what happens.

#### Data flow

Traditionally, *nix programs could communicate via pipes. In other words, thanks to [Doug McIlroy][3], you don't need to create temporary files to pass around, and each program can process virtually endless streams of data. To my knowledge, there is nothing comparable to a pipe standardized in microservices, besides my little [Apache Kafka-based experiment from 2017][4].

#### Configuration and parameterization

How do you configure a program or service—either on a permanent or a by-call basis? Well, with *nix programs you essentially have three options: command-line arguments, environment variables, or full-blown config files. In microservices, you typically deal with YAML (or even worse, JSON) documents, defining the layout and configuration of a single microservice as well as dependencies and communication, storage, and runtime settings.
Examples include [Kubernetes resource definitions][5], [Nomad job specifications][6], or [Docker Compose][7] files. These may or may not be parameterized; that is, either you have some templating language, such as [Helm][8] in Kubernetes, or you find yourself doing an awful lot of **sed -i** commands. - -#### Discovery - -How do you know what programs or services are available and how they are supposed to be used? Well, in *nix, you typically have a package manager as well as good old man; between them, they should be able to answer all the questions you might have. In a microservices setup, there's a bit more automation in finding a service. In addition to bespoke approaches like [Airbnb's SmartStack][9] or [Netflix's Eureka][10], there usually are environment variable-based or DNS-based [approaches][11] that allow you to discover services dynamically. Equally important, [OpenAPI][12] provides a de-facto standard for HTTP API documentation and design, and [gRPC][13] does the same for more tightly coupled high-performance cases. Last but not least, take developer experience (DX) into account, starting with writing good [Makefiles][14] and ending with writing your docs with (or in?) [**style**][15]. - -### Pros and cons - -Both *nix and microservices offer a number of challenges and opportunities - -#### Composability - -It's hard to design something that has a clear, sharp focus and can also play well with others. It's even harder to get it right across different versions and to introduce respective error case handling capabilities. In microservices, this could mean retry logic and timeouts—maybe it's a better option to outsource these features into a service mesh? It's hard, but if you get it right, its reusability can be enormous. - -#### Observability - -In a monolith (in 2018) or a big program that tries to do it all (in 1984), it's rather straightforward to find the culprit when things go south. But, in a - -``` -yes | tr \\n x | head -c 450m | grep n -``` - -or a request path in a microservices setup that involves, say, 20 services, how do you even start to figure out which one is behaving badly? Luckily we have standards, notably [OpenCensus][16] and [OpenTracing][17]. Observability still might be the biggest single blocker if you are looking to move to microservices. - -#### Global state - -While it may not be such a big issue for *nix programs, in microservices, global state remains something of a discussion. Namely, how to make sure the local (persistent) state is managed effectively and how to make the global state consistent with as little effort as possible. - -### Wrapping up - -In the end, the question remains: Are you using the right tool for a given task? That is, in the same way a specialized *nix program implementing a range of functions might be the better choice for certain use cases or phases, it might be that a monolith [is the best option][18] for your organization or workload. Regardless, I hope this article helps you see the many, strong parallels between the Unix philosophy and microservices—maybe we can learn something from the former to benefit the latter. 
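To make that parallel tangible one last time, here is a minimal sketch that expresses the same word-count task first as a classic pipe and then as a chain of calls against HTTP services. The service endpoints are invented purely for illustration; only the shape of the composition matters:

```
# One thought, expressed as a classic *nix pipe
tr -s ' ' '\n' < corpus.txt | sort | uniq -c | sort -rn | head

# ...and the same composition in microservices style, over HTTP.
# The tokenizer/counter endpoints below are hypothetical.
curl -s -X POST --data-binary @corpus.txt https://tokenizer.example.internal/tokens |
  curl -s -X POST --data-binary @- https://counter.example.internal/counts |
  head
```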
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/11/revisiting-unix-philosophy-2018 - -作者:[Michael Hausenblas][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/mhausenblas -[b]: https://github.com/lujun9972 -[1]: http://harmful.cat-v.org/cat-v/ -[2]: https://martinfowler.com/articles/microservices.html -[3]: https://en.wikipedia.org/wiki/Douglas_McIlroy -[4]: https://speakerdeck.com/mhausenblas/distributed-named-pipes-and-other-inter-services-communication -[5]: http://kubernetesbyexample.com/ -[6]: https://www.nomadproject.io/docs/job-specification/index.html -[7]: https://docs.docker.com/compose/overview/ -[8]: https://helm.sh/ -[9]: https://github.com/airbnb/smartstack-cookbook -[10]: https://github.com/Netflix/eureka -[11]: https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services -[12]: https://www.openapis.org/ -[13]: https://grpc.io/ -[14]: https://suva.sh/posts/well-documented-makefiles/ -[15]: https://www.linux.com/news/improve-your-writing-gnu-style-checkers -[16]: https://opencensus.io/ -[17]: https://opentracing.io/ -[18]: https://robertnorthard.com/devops-days-well-architected-monoliths-are-okay/ diff --git a/sources/tech/20181106 How To Check The List Of Packages Installed From Particular Repository.md b/sources/tech/20181106 How To Check The List Of Packages Installed From Particular Repository.md new file mode 100644 index 0000000000..81111b465c --- /dev/null +++ b/sources/tech/20181106 How To Check The List Of Packages Installed From Particular Repository.md @@ -0,0 +1,342 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (How To Check The List Of Packages Installed From Particular Repository?) +[#]: via: (https://www.2daygeek.com/how-to-check-the-list-of-packages-installed-from-particular-repository/) +[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/) +[#]: url: ( ) + +How To Check The List Of Packages Installed From Particular Repository? +====== + +If you would like to check the list of package installed from particular repository then you are in the right place to get it done. + +Why we need this detail? It may helps you to isolate the installed packages list based on the repository. + +Like, it’s coming from distribution official repository or these are coming from PPA or these are coming from other resources, etc., + +You may want to know what are the packages came from third party repositories to keep eye on those to avoid any damages on your system. + +So many third party repositories and PPAs are available for Linux. These repositories are included set of packages which is not available in distribution repository due to some limitation. + +It helps administrator to easily install some of the important packages which is not available in the distribution official repository. Installing third party repository on production system is not advisable as this may not properly maintained by the repository maintainer due to many reasons. + +So, you have to decide whether you want to install or not. I can say, we can believe some of the third party repositories which is well maintained and suggested by Linux distributions like [EPEL repository][1], Copr (Cool Other Package Repo), etc,. 
If you would like to see the list of packages installed from the corresponding repo, use the following commands based on your distribution.

A [list of major repositories][2] and their details is below.

  * **`CentOS:`** [EPEL][1], [ELRepo][3], etc., are [CentOS Community Approved Repositories][4].
  * **`Fedora:`** The [RPMfusion repo][5] is commonly used by most [Fedora][6] users.
  * **`ArchLinux:`** The ArchLinux community repository contains packages that have been adopted by Trusted Users from the Arch User Repository.
  * **`openSUSE:`** The [Packman repo][7] offers various additional packages for openSUSE, especially but not limited to multimedia-related applications and libraries that are on the openSUSE Build Service application blacklist. It's the largest external repository of openSUSE packages.
  * **`Ubuntu:`** Personal Package Archives (PPAs) are a kind of repository. Developers create them in order to distribute their software. You can find this information on the PPA's Launchpad page. Also, you can enable Canonical partner repositories.

### What Is A Repository?

A software repository is a central place that stores the software packages for a particular application.

All Linux distributions maintain their own repositories, and they allow users to retrieve and install packages on their machines.

Each vendor offers a unique package management tool to manage its repositories, with functions such as search, install, update, upgrade, remove, etc.

Most Linux distributions come free of charge, except RHEL and SUSE; to access their repositories you need to buy a subscription.

### How To Check The List Of Packages Installed From A Particular Repository on RHEL/CentOS Systems?

This can be done in multiple ways. Here we will give you all the possible options, and you can choose the one that works best for you.

### Method-1: Using Yum Command

RHEL & CentOS systems use RPM packages, hence we can use the [Yum Package Manager][8] to get this information.

YUM stands for Yellowdog Updater, Modified; it is an open-source command-line front-end package-management utility for RPM-based systems such as Red Hat Enterprise Linux (RHEL) and CentOS.

Yum is the primary tool for getting, installing, deleting, querying, and managing RPM packages from distribution repositories, as well as other third-party repositories.

```
# yum list installed | grep @epel
apachetop.x86_64 0.15.6-1.el7 @epel
aria2.x86_64 1.18.10-2.el7.1 @epel
atop.x86_64 2.3.0-8.el7 @epel
axel.x86_64 2.4-9.el7 @epel
epel-release.noarch 7-11 @epel
lighttpd.x86_64 1.4.50-1.el7 @epel
```

Alternatively, you can use the yum command with another option to get the same details as above.

```
# yum repo-pkgs epel list installed
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * epel: epel.mirror.constant.com
Installed Packages
apachetop.x86_64 0.15.6-1.el7 @epel
aria2.x86_64 1.18.10-2.el7.1 @epel
atop.x86_64 2.3.0-8.el7 @epel
axel.x86_64 2.4-9.el7 @epel
epel-release.noarch 7-11 @epel
lighttpd.x86_64 1.4.50-1.el7 @epel
```

### Method-2: Using Yumdb Command

yumdb info provides information similar to yum info, but additionally it provides package checksum data, type, and user info (who installed the package). Since yum 3.2.26, yum has been storing additional information outside of the rpm database (where user indicates the package was installed by the user, and dep means it was brought in as a dependency).
+ +``` +# yumdb search from_repo epel* |egrep -v '(from_repo|^$)' +Loaded plugins: fastestmirror +apachetop-0.15.6-1.el7.x86_64 +aria2-1.18.10-2.el7.1.x86_64 +atop-2.3.0-8.el7.x86_64 +axel-2.4-9.el7.x86_64 +epel-release-7-11.noarch +lighttpd-1.4.50-1.el7.x86_64 +``` + +### Method-3: Using Repoquery Command + +repoquery is a program for querying information from YUM repositories similarly to rpm queries. + +``` +# repoquery -a --installed --qf "%{ui_from_repo} %{name}" | grep '^@epel' +@epel apachetop +@epel aria2 +@epel atop +@epel axel +@epel epel-release +@epel lighttpd +``` + +### How To Check The List Of Packages Installed From Particular Repository on Fedora System? + +DNF stands for Dandified yum. We can tell DNF, the next generation of yum package manager (Fork of Yum) using hawkey/libsolv library for back-end. Aleš Kozumplík started working on DNF since Fedora 18 and its implemented/launched in Fedora 22 finally. + +[Dnf command][9] is used to install, update, search & remove packages on Fedora 22 and later system. It automatically resolve dependencies and make it smooth package installation without any trouble. + +``` +# dnf list installed | grep @updates +NetworkManager.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-adsl.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-bluetooth.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-libnm.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-libreswan.x86_64 1.2.10-1.fc29 @updates +NetworkManager-libreswan-gnome.x86_64 1.2.10-1.fc29 @updates +NetworkManager-openvpn.x86_64 1:1.8.8-1.fc29 @updates +NetworkManager-openvpn-gnome.x86_64 1:1.8.8-1.fc29 @updates +NetworkManager-ovs.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-ppp.x86_64 1:1.12.4-2.fc29 @updates +. +. +``` + +Alternatively, you can use the dnf command with other option to get the same details like above. + +``` +# dnf repo-pkgs updates list installed +Installed Packages +NetworkManager.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-adsl.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-bluetooth.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-libnm.x86_64 1:1.12.4-2.fc29 @updates +NetworkManager-libreswan.x86_64 1.2.10-1.fc29 @updates +NetworkManager-libreswan-gnome.x86_64 1.2.10-1.fc29 @updates +NetworkManager-openvpn.x86_64 1:1.8.8-1.fc29 @updates +NetworkManager-openvpn-gnome.x86_64 1:1.8.8-1.fc29 @updates +NetworkManager-ovs.x86_64 1:1.12.4-2.fc29 @updates +. +. +``` + +### How To Check The List Of Packages Installed From Particular Repository on openSUSE System? + +Zypper is a command line package manager which makes use of libzypp. [Zypper command][10] provides functions like repository access, dependency solving, package installation, etc. + +``` +zypper search -ir "Update Repository (Non-Oss)" +Loading repository data... +Reading installed packages... + +S | Name | Summary | Type +---+----------------------------+---------------------------------------------------+-------- +i | gstreamer-0_10-fluendo-mp3 | GStreamer plug-in from Fluendo for MP3 support | package +i+ | openSUSE-2016-615 | Test-update for openSUSE Leap 42.2 Non Free | patch +i+ | openSUSE-2017-724 | Security update for unrar | patch +i | unrar | A program to extract, test, and view RAR archives | package +``` + +Alternatively, we can use repo id instead of repo name. + +``` +zypper search -ir 2 +Loading repository data... +Reading installed packages... 
+ +S | Name | Summary | Type +---+----------------------------+---------------------------------------------------+-------- +i | gstreamer-0_10-fluendo-mp3 | GStreamer plug-in from Fluendo for MP3 support | package +i+ | openSUSE-2016-615 | Test-update for openSUSE Leap 42.2 Non Free | patch +i+ | openSUSE-2017-724 | Security update for unrar | patch +i | unrar | A program to extract, test, and view RAR archives | package +``` + +### How To Check The List Of Packages Installed From Particular Repository on ArchLinux System? + +[Pacman command][11] stands for package manager utility. pacman is a simple command-line utility to install, build, remove and manage Arch Linux packages. Pacman uses libalpm (Arch Linux Package Management (ALPM) library) as a back-end to perform all the actions. + +``` +$ paclist community +acpi 1.7-2 +acpid 2.0.30-1 +adapta-maia-theme 3.94.0.149-1 +android-tools 9.0.0_r3-1 +blueman 2.0.6-1 +brotli 1.0.7-1 +. +. +ufw 0.35-5 +unace 2.5-10 +usb_modeswitch 2.5.2-1 +viewnior 1.7-1 +wallpapers-2018 1.0-1 +xcursor-breeze 5.11.5-1 +xcursor-simpleandsoft 0.2-8 +xcursor-vanilla-dmz-aa 0.4.5-1 +xfce4-whiskermenu-plugin-gtk3 2.3.0-1 +zeromq 4.2.5-1 +``` + +### How To Check The List Of Packages Installed From Particular Repository on Debian Based Systems? + +For Debian based systems, it can be done using grep command. + +If you want to know the list of installed repositories on your system, use the following command. + +``` +$ ls -lh /var/lib/apt/lists/ | uniq +total 370M +-rw-r--r-- 1 root root 10K Oct 26 10:53 archive.canonical.com_ubuntu_dists_bionic_InRelease +-rw-r--r-- 1 root root 6.4K Oct 26 10:53 archive.canonical.com_ubuntu_dists_bionic_partner_binary-amd64_Packages +-rw-r--r-- 1 root root 6.4K Oct 26 10:53 archive.canonical.com_ubuntu_dists_bionic_partner_binary-i386_Packages +-rw-r--r-- 1 root root 3.2K Jun 12 21:19 archive.canonical.com_ubuntu_dists_bionic_partner_i18n_Translation-en +drwxr-xr-x 2 _apt root 4.0K Jul 25 08:44 auxfiles +-rw-r--r-- 1 root root 3.7K Oct 16 15:13 download.virtualbox.org_virtualbox_debian_dists_bionic_contrib_binary-amd64_Packages +-rw-r--r-- 1 root root 7.2K Oct 16 15:13 download.virtualbox.org_virtualbox_debian_dists_bionic_contrib_Contents-amd64.lz4 +-rw-r--r-- 1 root root 4.4K Oct 16 15:13 download.virtualbox.org_virtualbox_debian_dists_bionic_InRelease +-rw-r--r-- 1 root root 34 Mar 19 2018 download.virtualbox.org_virtualbox_debian_dists_bionic_non-free_Contents-amd64.lz4 +-rw-r--r-- 1 root root 6.4K Sep 21 09:42 in.archive.ubuntu.com_ubuntu_dists_bionic-backports_Contents-amd64.lz4 +-rw-r--r-- 1 root root 6.4K Sep 21 09:42 in.archive.ubuntu.com_ubuntu_dists_bionic-backports_Contents-i386.lz4 +-rw-r--r-- 1 root root 73K Nov 6 11:16 in.archive.ubuntu.com_ubuntu_dists_bionic-backports_InRelease +. +. 
+-rw-r--r-- 1 root root 29 May 11 06:39 security.ubuntu.com_ubuntu_dists_bionic-security_main_dep11_icons-64x64.tar.gz +-rw-r--r-- 1 root root 747K Nov 5 23:57 security.ubuntu.com_ubuntu_dists_bionic-security_main_i18n_Translation-en +-rw-r--r-- 1 root root 2.8K Oct 9 22:37 security.ubuntu.com_ubuntu_dists_bionic-security_multiverse_binary-amd64_Packages +-rw-r--r-- 1 root root 3.7K Oct 9 22:37 security.ubuntu.com_ubuntu_dists_bionic-security_multiverse_binary-i386_Packages +-rw-r--r-- 1 root root 1.8K Jul 24 23:06 security.ubuntu.com_ubuntu_dists_bionic-security_multiverse_i18n_Translation-en +-rw-r--r-- 1 root root 519K Nov 5 20:12 security.ubuntu.com_ubuntu_dists_bionic-security_universe_binary-amd64_Packages +-rw-r--r-- 1 root root 517K Nov 5 20:12 security.ubuntu.com_ubuntu_dists_bionic-security_universe_binary-i386_Packages +-rw-r--r-- 1 root root 11K Nov 6 05:36 security.ubuntu.com_ubuntu_dists_bionic-security_universe_dep11_Components-amd64.yml.gz +-rw-r--r-- 1 root root 8.9K Nov 6 05:36 security.ubuntu.com_ubuntu_dists_bionic-security_universe_dep11_icons-48x48.tar.gz +-rw-r--r-- 1 root root 16K Nov 6 05:36 security.ubuntu.com_ubuntu_dists_bionic-security_universe_dep11_icons-64x64.tar.gz +-rw-r--r-- 1 root root 315K Nov 5 20:12 security.ubuntu.com_ubuntu_dists_bionic-security_universe_i18n_Translation-en +``` + +To get the list of installed packages from the `security.ubuntu.com` repository. + +``` +$ grep Package /var/lib/apt/lists/security.ubuntu.com_*_Packages | awk '{print $2;}' +amd64-microcode +apache2 +apache2-bin +apache2-data +apache2-dbg +apache2-dev +. +. +znc +znc-dev +znc-perl +znc-python +znc-tcl +zsh-static +zziplib-bin +``` + +The security repository containing multiple branches (main, multiverse and universe) and if you would like to list out the installed packages from the particular repository `universe` then use the following format. + +``` +$ grep Package /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_bionic-security_universe*_Packages | awk '{print $2;}' +ant +ant-doc +ant-optional +apache2-suexec-custom +apache2-suexec-pristine +apparmor-easyprof +apport-kde +apport-noui +apport-valgrind +apt-transport-https +. +. +xul-ext-gdata-provider +xul-ext-lightning +xvfb +znc +znc-dev +znc-perl +znc-python +znc-tcl +zsh-static +zziplib-bin +``` + +one more example for `ppa.launchpad.net` repository. 
+ +``` +$ grep Package /var/lib/apt/lists/ppa.launchpad.net_*_Packages | awk '{print $2;}' +notepadqq +notepadqq-gtk +notepadqq-common +notepadqq +notepadqq-gtk +notepadqq-common +numix-gtk-theme +numix-icon-theme +numix-icon-theme-circle +numix-icon-theme-square +numix-gtk-theme +numix-icon-theme +numix-icon-theme-circle +numix-icon-theme-square +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-check-the-list-of-packages-installed-from-particular-repository/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/ +[2]: https://www.2daygeek.com/category/repository/ +[3]: https://www.2daygeek.com/install-enable-elrepo-on-rhel-centos-scientific-linux/ +[4]: https://www.2daygeek.com/additional-yum-repositories-for-centos-rhel-fedora-systems/ +[5]: https://www.2daygeek.com/install-enable-rpm-fusion-repository-on-centos-fedora-rhel/ +[6]: https://fedoraproject.org/wiki/Third_party_repositories +[7]: https://www.2daygeek.com/install-enable-packman-repository-on-opensuse-leap/ +[8]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[9]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[11]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ diff --git a/sources/tech/20181106 How to partition and format a drive on Linux.md b/sources/tech/20181106 How to partition and format a drive on Linux.md deleted file mode 100644 index 7b5d7980b0..0000000000 --- a/sources/tech/20181106 How to partition and format a drive on Linux.md +++ /dev/null @@ -1,216 +0,0 @@ -How to partition and format a drive on Linux -====== -Everything you wanted to know about setting up storage but were afraid to ask. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/hard_drives.png?itok=gOJt8RV3) - -On most computer systems, Linux or otherwise, when you plug a USB thumb drive in, you're alerted that the drive exists. If the drive is already partitioned and formatted to your liking, you just need your computer to list the drive somewhere in your file manager window or on your desktop. It's a simple requirement and one that the computer generally fulfills. - -Sometimes, however, a drive isn't set up the way you want. For those times, you need to know how to find and prepare a storage device connected to your machine. - -### What are block devices? - -A hard drive is generically referred to as a "block device" because hard drives read and write data in fixed-size blocks. This differentiates a hard drive from anything else you might plug into your computer, like a printer, gamepad, microphone, or camera. 
The easy way to list the block devices attached to your Linux system is to use the **lsblk** (list block devices) command: - -``` -$ lsblk -NAME                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT -sda                    8:0    0 238.5G  0 disk   -├─sda1                 8:1    0     1G  0 part  /boot -└─sda2                 8:2    0 237.5G  0 part   -  └─luks-e2bb...e9f8 253:0    0 237.5G  0 crypt -        ├─fedora-root    253:1    0    50G  0 lvm   / -        ├─fedora-swap    253:2    0   5.8G  0 lvm   [SWAP] -        └─fedora-home    253:3    0 181.7G  0 lvm   /home -sdb                   8:16    1  14.6G  0 disk   -└─sdb1                8:17    1  14.6G  0 part -``` - -The device identifiers are listed in the left column, each beginning with **sd** , and ending with a letter, starting with **a**. Each partition of each drive is assigned a number, starting with **1**. For example, the second partition of the first drive is **sda2**. If you're not sure what a partition is, that's OK—just keep reading. - -The **lsblk** command is nondestructive and used only for probing, so you can run it without any fear of ruining data on a drive. - -### Testing with dmesg - -If in doubt, you can test device label assignments by looking at the tail end of the **dmesg** command, which displays recent system log entries including kernel events (such as attaching and detaching a drive). For instance, if you want to make sure a thumb drive is really **/dev/sdc** , plug the drive into your computer and run this **dmesg** command: - -``` -$ sudo dmesg | tail -``` - -The most recent drive listed is the one you just plugged in. If you unplug it and run that command again, you'll see the device has been removed. If you plug it in again and run the command, the device will be there. In other words, you can monitor the kernel's awareness of your drive. - -### Understanding filesystems - -If all you need is the device label, your work is done. But if your goal is to create a usable drive, you must give the drive a filesystem. - -If you're not sure what a filesystem is, it's probably easier to understand the concept by learning what happens when you have no filesystem at all. If you have a spare drive that has no important data on it whatsoever, you can follow along with this example. Otherwise, do not attempt this exercise, because it will DEFINITELY ERASE DATA, by design. - -It is possible to utilize a drive without a filesystem. Once you have definitely, correctly identified a drive, and you have absolutely verified there is nothing important on it, plug it into your computer—but do not mount it. If it auto-mounts, then unmount it manually. - -``` -$ su - -# umount /dev/sdx{,1} -``` - -To safeguard against disastrous copy-paste errors, these examples use the unlikely **sdx** label for the drive. - -Now that the drive is unmounted, try this: - -``` -# echo 'hello world' > /dev/sdx -``` - -You have just written data to the block device without it being mounted on your system or having a filesystem. - -To retrieve the data you just wrote, you can view the raw data on the drive: - -``` -# head -n 1 /dev/sdx -hello world -``` - -That seemed to work pretty well, but imagine that the phrase "hello world" is one file. If you want to write a new "file" using this method, you must: - - 1. Know there's already an existing "file" on line 1 - 2. Know that the existing "file" takes up only 1 line - 3. 
Derive a way to append new data, or else rewrite line 1 while writing line 2 - - - -For example: - -``` -# echo 'hello world -> this is a second file' >> /dev/sdx -``` - -To get the first file, nothing changes. - -``` -# head -n 1 /dev/sdx -hello world -``` - -But it's more complex to get the second file. - -``` -# head -n 2 /dev/sdx | tail -n 1 -this is a second file -``` - -Obviously, this method of writing and reading data is not practical, so developers have created systems to keep track of what constitutes a file, where one file begins and ends, and so on. - -Most filesystems require a partition. - -### Creating partitions - -A partition on a hard drive is a sort of boundary on the device telling each filesystem what space it can occupy. For instance, if you have a 4GB thumb drive, you can have a partition on that device taking up the entire drive (4GB), two partitions that each take 2GB (or 1 and 3, if you prefer), three of some variation of sizes, and so on. The combinations are nearly endless. - -Assuming your drive is 4GB, you can create one big partition from a terminal with the GNU **parted** command: - -``` -# parted /dev/sdx --align opt mklabel msdos 0 4G -``` - -This command specifies the device path first, as required by **parted**. - -The **\--align** option lets **parted** find the partition's optimal starting and stopping point. - -The **mklabel** command creates a partition table (called a disk label) on the device. This example uses the **msdos** label because it's a very compatible and popular label, although **gpt** is becoming more common. - -The desired start and end points of the partition are defined last. Since the **\--align opt** flag is used, **parted** will adjust the size as needed to optimize drive performance, but these numbers serve as a guideline. - -Next, create the actual partition. If your start and end choices are not optimal, **parted** warns you and asks if you want to make adjustments. - -``` -# parted /dev/sdx -a opt mkpart primary 0 4G - -Warning: The resulting partition is not properly aligned for best performance: 1s % 2048s != 0s -Ignore/Cancel? C                                                           -# parted /dev/sdx -a opt mkpart primary 2048s 4G -``` - -If you run **lsblk** again (you may have to unplug the drive and plug it back in), you'll see that your drive now has one partition on it. - -### Manually creating a filesystem - -There are many filesystems available. Some are free and open source, while others are not. Some companies decline to support open source filesystems, so their users can't read from open filesystems, while open source users can't read from closed ones without reverse-engineering them. - -This disconnect notwithstanding, there are lots of filesystems you can use, and the one you choose depends on the drive's purpose. If you want a drive to be compatible across many systems, then your only choice right now is the exFAT filesystem. Microsoft has not submitted exFAT code to any open source kernel, so you may have to install exFAT support with your package manager, but support for exFAT is included in both Windows and MacOS. - -Once you have exFAT support installed, you can create an exFAT filesystem on your drive in the partition you created. - -``` -# mkfs.exfat -n myExFatDrive /dev/sdx1 -``` - -Now your drive is readable and writable by closed systems and by open source systems utilizing additional (and as-yet unsanctioned by Microsoft) kernel modules. - -A common filesystem native to Linux is [ext4][1]. 
It's arguably a troublesome filesystem for portable drives since it retains user permissions, which are often different from one computer to another, but it's generally a reliable and flexible filesystem. As long as you're comfortable managing permissions, ext4 is a great, journaled filesystem for portable drives. - -``` -# mkfs.ext4 -L myExt4Drive /dev/sdx1 -``` - -Unplug your drive and plug it back in. For ext4 portable drives, use **sudo** to create a directory and grant permission to that directory to a user and a group common across your systems. If you're not sure what user and group to use, you can either modify read/write permissions with **sudo** or root on the system that's having trouble with the drive. - -### Using desktop tools - -It's great to know how to deal with drives with nothing but a Linux shell standing between you and the block device, but sometimes you just want to get a drive ready to use without so much insightful probing. Excellent tools from both the GNOME and KDE developers can make your drive prep easy. - -[GNOME Disks][2] and [KDE Partition Manager][3] are graphical interfaces providing an all-in-one solution for everything this article has explained so far. Launch either of these applications to see a list of attached devices (in the left column), create or resize partitions, and create a filesystem. - -![KDE Partition Manager][5] - -KDE Partition Manager - -The GNOME version is, predictably, simpler than the KDE version, so I'll demo the more complex one—it's easy to figure out GNOME Disks if that's what you have handy. - -Launch KDE Partition Manager and enter your root password. - -From the left column, select the disk you want to format. If your drive isn't listed, make sure it's plugged in, then select **Tools** > **Refresh devices** (or **F5** on your keyboard). - -Don't continue unless you're ready to destroy the drive's existing partition table. With the drive selected, click **New Partition Table** in the top toolbar. You'll be prompted to select the label you want to give the partition table: either **gpt** or **msdos**. The former is more flexible and can handle larger drives, while the latter is, like many Microsoft technologies, the de-facto standard by force of market share. - -Now that you have a fresh partition table, right-click on your device in the right panel and select **New** to create a new partition. Follow the prompts to set the type and size of your partition. This action combines the partitioning step with creating a filesystem. - -![Create a new partition][7] - -Creating a new partition - -To apply your changes to the drive, click the **Apply** button in the top-left corner of the window. - -### Hard drives, easy drives - -Dealing with hard drives is easy on Linux, and it's even easier if you understand the language of hard drives. Since switching to Linux, I've been better equipped to prepare drives in whatever way I want them to work for me. It's also been easier for me to recover lost data because of the transparency Linux provides when dealing with storage. - -Here are a final few tips, if you want to experiment and learn more about hard drives: - - 1. Back up your data, and not just the data on the drive you're experimenting with. All it takes is one wrong move to destroy the partition of an important drive (which is a great way to learn about recreating lost partitions, but not much fun). - 2. Verify and then re-verify that the drive you are targeting is the correct drive. 
I frequently use **lsblk** to make sure I haven't moved drives around on myself. (It's easy to remove two drives from two separate USB ports, then mindlessly reattach them in a different order, causing them to get new drive labels.) - 3. Take the time to "destroy" a test drive and see if you can recover the data. It's a good learning experience to recreate a partition table or try to get data back after a filesystem has been removed. - - - -For extra fun, if you have a closed operating system lying around, try getting an open source filesystem working on it. There are a few projects working toward this kind of compatibility, and trying to get them working in a stable and reliable way is a good weekend project. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/11/partition-format-drive-linux - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://opensource.com/article/17/5/introduction-ext4-filesystem -[2]: https://wiki.gnome.org/Apps/Disks -[3]: https://www.kde.org/applications/system/kdepartitionmanager/ -[4]: /file/413586 -[5]: https://opensource.com/sites/default/files/uploads/blockdevices_kdepartition.jpeg (KDE Partition Manager) -[6]: /file/413591 -[7]: https://opensource.com/sites/default/files/uploads/blockdevices_newpartition.jpeg (Create a new partition) diff --git a/sources/tech/20181107 Automate a web browser with Selenium.md b/sources/tech/20181107 Automate a web browser with Selenium.md deleted file mode 100644 index 6f2f7bc155..0000000000 --- a/sources/tech/20181107 Automate a web browser with Selenium.md +++ /dev/null @@ -1,124 +0,0 @@ -translating---geekpi - -Automate a web browser with Selenium -====== -![](https://fedoramagazine.org/wp-content/uploads/2018/10/selenium-816x345.jpg) - -[Selenium][1] is a great tool for browser automation. With Selenium IDE you can record sequences of commands (like click, drag and type), validate the result and finally store this automated test for later. This is great for active development in the browser. But when you want to integrate these tests with your CI/CD flow it’s time to move on to Selenium WebDriver. - -WebDriver exposes an API with bindings for many programming languages, which lets you integrate browser tests with your other tests. This post shows you how to run WebDriver in a container and use it together with a Python program. - -### Running Selenium with Podman - -Podman is the container runtime in the following examples. See [this previous post][2] for how to get started with Podman. - -This example uses a standalone container for Selenium that contains both the WebDriver server and the browser itself. To launch the server container in the background run the following comand: - -``` -$ podman run -d --network host --privileged --name server \ - docker.io/selenium/standalone-firefox -``` - -When you run the container with the privileged flag and host networking, you can connect to this container later from a Python program. You do not need to use sudo. - -### Using Selenium from Python - -Now you can provide a simple program that uses this server. 
This program is minimal, but it should give you an idea about what you can do:
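If the grid mentioned under "What to do next" sounds interesting, the containerized setup follows the same pattern as the standalone server. Here is a rough sketch: the image names come from the official Selenium repositories on Docker Hub, and the HUB_HOST/HUB_PORT environment variables are the ones those images documented at the time of writing, so verify them against the image version you pull:

```
# Start the hub, then attach one or more browser nodes to it
$ podman run -d --network host --name hub docker.io/selenium/hub
$ podman run -d --network host -e HUB_HOST=127.0.0.1 -e HUB_PORT=4444 \
    docker.io/selenium/node-firefox

# The Python code stays the same: keep pointing the Remote driver
# at http://127.0.0.1:4444/wd/hub
```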
- - --------------------------------------------------------------------------------- - -via: https://fedoramagazine.org/automate-web-browser-selenium/ - -作者:[Lennart Jern][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://fedoramagazine.org/author/lennartj/ -[b]: https://github.com/lujun9972 -[1]: https://www.seleniumhq.org/ -[2]: https://fedoramagazine.org/running-containers-with-podman/ -[3]: https://www.seleniumhq.org/docs/ -[4]: https://selenium-python.readthedocs.io -[5]: https://www.seleniumhq.org/docs/07_selenium_grid.jsp diff --git a/sources/tech/20181107 Gitbase- Exploring git repos with SQL.md b/sources/tech/20181107 Gitbase- Exploring git repos with SQL.md deleted file mode 100644 index 81abb7a4d8..0000000000 --- a/sources/tech/20181107 Gitbase- Exploring git repos with SQL.md +++ /dev/null @@ -1,93 +0,0 @@ -Gitbase: Exploring git repos with SQL -====== -Gitbase is a Go-powered open source project that allows SQL queries to be run on Git repositories. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/bus_cloud_database.png?itok=lhhU42fg) - -Git has become the de-facto standard for code versioning, but its popularity didn't remove the complexity of performing deep analyses of the history and contents of source code repositories. - -SQL, on the other hand, is a battle-tested language to query large codebases as its adoption by projects like Spark and BigQuery shows. - -So it is just logical that at source{d} we chose these two technologies to create gitbase: the code-as-data solution for large-scale analysis of git repositories with SQL. - -[Gitbase][1] is a fully open source project that stands on the shoulders of a series of giants which made its development possible, this article aims to point out the main ones. - -![](https://opensource.com/sites/default/files/uploads/gitbase.png) - -The [gitbase][2] [playground][2] provides a visual way to use gitbase. - -### Parsing SQL with Vitess - -Gitbase's user interface is SQL. This means we need to be able to parse and understand the SQL requests that arrive through the network following the MySQL protocol. Fortunately for us, this was already implemented by our friends at YouTube and their [Vitess][3] project. Vitess is a database clustering system for horizontal scaling of MySQL. - -We simply grabbed the pieces of code that mattered to us and made it into an [open source project][4] that allows anyone to write a MySQL server in minutes (as I showed in my [justforfunc][5] episode [CSVQL—serving CSV with SQL][6]). - -### Reading git repositories with go-git - -Once we've parsed a request we still need to find how to answer it by reading the git repositories in our dataset. For this, we integrated source{d}'s most successful repository [go-git][7]. Go-git is a* *highly extensible Git implementation in pure Go. - -This allowed us to easily analyze repositories stored on disk as [siva][8] files (again an open source project by source{d}) or simply cloned with git clone. - -### Detecting languages with enry and parsing files with babelfish - -Gitbase does not stop its analytic power at the git history. By integrating language detection with our (obviously) open source project [enry][9] and program parsing with [babelfish][10]. 
Babelfish is a self-hosted server for universal source code parsing, turning code files into Universal Abstract Syntax Trees (UASTs).

These two features are exposed in gitbase as the user functions LANGUAGE and UAST. Together they make requests like "find the name of the function that was most often modified during the last month" possible.

### Making it go fast

Gitbase analyzes really large datasets (e.g. the Public Git Archive, with 3TB of source code from GitHub; see the [announcement][11]), and in order to do so every CPU cycle counts.

This is why we integrated two more projects into the mix: Rubex and Pilosa.

#### Speeding up regular expressions with Rubex and Oniguruma

[Rubex][12] is a quasi-drop-in replacement for Go's regexp standard library package. I say quasi because it does not implement the LiteralPrefix method on the regexp.Regexp type, but I also had never heard about that method until right now. Rubex gets its performance from the highly optimized C library [Oniguruma][13], which it calls using [cgo][14].

#### Speeding up queries with Pilosa indexes

Indexes are a well-known feature of basically every relational database, but Vitess does not implement them since it doesn't really need to.

But again open source came to the rescue with [Pilosa][15], an open source, distributed bitmap index implemented in Go that dramatically accelerates queries across multiple massive datasets, and that made gitbase usable on them.
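Before wrapping up, it may help to see what talking to gitbase actually looks like. Because it speaks the MySQL wire protocol (courtesy of Vitess, as described above), any stock MySQL client can connect to it. The session below is a sketch rather than captured output: the connection flags follow the project's README, while the table and column names (`commits`, `commit_hash`, `commit_author_when`) are my reading of the gitbase schema at the time of writing and should be verified against the current documentation:

```
$ mysql -q -u root -h 127.0.0.1
mysql> -- five most recent commits across the indexed repositories
mysql> SELECT commit_hash, commit_author_when
    -> FROM commits
    -> ORDER BY commit_author_when DESC
    -> LIMIT 5;
```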
### Conclusion

I'd like to use this blog post to personally thank the open source community that made it possible for us to create gitbase in a shorter period than anyone would have expected. At source{d} we are firm believers in open source, and every single line of code under github.com/src-d (including our OKRs and investor board) is a testament to that.

Would you like to give gitbase a try? The fastest and easiest way is with source{d} Engine. Download it from sourced.tech/engine and get gitbase running with a single command!

Want to know more? Check out the recording of my talk at the [Go SF meetup][16].

The article was [originally published][17] on Medium and is republished here with permission.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/11/gitbase

作者:[Francesc Campoy][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/francesc
[b]: https://github.com/lujun9972
[1]: https://github.com/src-d/gitbase
[2]: https://github.com/src-d/gitbase-web
[3]: https://github.com/vitessio/vitess
[4]: https://github.com/src-d/go-mysql-server
[5]: http://justforfunc.com/
[6]: https://youtu.be/bcRDXAraprk
[7]: https://github.com/src-d/go-git
[8]: https://github.com/src-d/siva
[9]: https://github.com/src-d/enry
[10]: https://github.com/bblfsh/bblfshd
[11]: https://blog.sourced.tech/post/announcing-pga/
[12]: https://github.com/moovweb/rubex
[13]: https://github.com/kkos/oniguruma
[14]: https://golang.org/cmd/cgo/
[15]: https://github.com/pilosa/pilosa
[16]: https://www.meetup.com/golangsf/events/251690574/
[17]: https://medium.com/sourcedtech/gitbase-exploring-git-repos-with-sql-95ec0986386c
diff --git a/sources/tech/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md b/sources/tech/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md
deleted file mode 100644
index b24c5898a8..0000000000
--- a/sources/tech/20181107 How To Find The Execution Time Of A Command Or Process In Linux.md
+++ /dev/null
@@ -1,185 +0,0 @@
How To Find The Execution Time Of A Command Or Process In Linux
======
![](https://www.ostechnix.com/wp-content/uploads/2018/11/time-command-720x340.png)

You probably know the start time of a command/process and [**how long a process is running**][1] in Unix-like systems. But how do you know when it ended, and what was the total time taken by the command/process to complete? Well, it's easy! On Unix-like systems, there is a utility named **'GNU time'** that is specifically designed for this purpose. Using the time utility, we can easily measure the total execution time of a command or program in Linux operating systems. The good thing is that the 'time' command comes preinstalled in most Linux distributions, so you don't have to bother with installation.

### Find The Execution Time Of A Command Or Process In Linux

To measure the execution time of a command/program, just run:

```
$ /usr/bin/time -p ls
```

Or,

```
$ time ls
```

Sample output:

```
dir1 dir2 file1 file2 mcelog

real 0m0.007s
user 0m0.001s
sys 0m0.004s

$ time ls -a
. .bash_logout dir1 file2 mcelog .sudo_as_admin_successful
.. .bashrc dir2 .gnupg .profile .wget-hsts
.bash_history .cache file1 .local .stack

real 0m0.008s
user 0m0.001s
sys 0m0.005s
```

The above commands display the total execution time of the **'ls'** command. Replace "ls" with any command/process of your choice to find its total execution time.

Here,

  1. **real** – refers to the total time taken by the command/program,
  2. **user** – refers to the time taken by the program in user mode,
  3. **sys** – refers to the time taken by the program in kernel mode.
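To see how these three figures relate, time a command that mostly waits rather than computes. The exact milliseconds below are illustrative and will vary per run, but the pattern is what `sleep` should produce: 'real' close to two seconds while 'user' and 'sys' stay near zero, because the process spends its time blocked instead of on the CPU:

```
$ time sleep 2

real 0m2.003s
user 0m0.001s
sys 0m0.002s
```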
We can also limit a command to run only for a certain period of time; that is a topic for a separate guide.

### time vs /usr/bin/time

As you may have noticed, we used two commands, **'time'** and **'/usr/bin/time'**, in the above examples. So, you might wonder what the difference between them is.

First, let us see what 'time' actually is, using the 'type' command. For those who don't know, the **type** command is used to find out information about a Linux command. For more details, refer to [**this guide**][2].

```
$ type -a time
time is a shell keyword
time is /usr/bin/time
```

As you see in the above output, time is both:

  * A keyword built into the BASH shell
  * An executable file, i.e. **/usr/bin/time**

Since shell keywords take precedence over executable files, when you just run the `time` command without its full path, you run the built-in shell keyword. But when you run `/usr/bin/time`, you run the real **GNU time** program. So, in order to access the real command, you may need to specify its explicit path. Clear, good?

The built-in 'time' shell keyword is available in most shells like BASH, ZSH, CSH, KSH, TCSH etc. The 'time' shell keyword has fewer options than the executable. The only option you can use with the 'time' keyword is **-p**.

You now know how to find the total execution time of a given command/process using the 'time' command. Want to know a little bit more about the 'GNU time' utility? Read on!

### A brief introduction to the 'GNU time' program

The GNU time program runs a command/program with the given arguments and summarizes the system resource usage to standard output after the command has completed. Unlike the 'time' keyword, the GNU time program not only displays the time used by the command/process, but also other resources like memory, I/O and IPC calls.

The typical syntax of the time command is:

```
/usr/bin/time [options] command [arguments...]
```

The 'options' in the above syntax refer to a set of flags that can be used with the time command to perform a particular function. The list of available options is given below.

  * **-f, --format** – Use this option to specify the format of the output as you wish.
  * **-p, --portability** – Use the portable output format.
  * **-o file, --output=FILE** – Write the output to **FILE** instead of displaying it on standard output.
  * **-a, --append** – Append the output to FILE instead of overwriting it.
  * **-v, --verbose** – This option displays a detailed description of the output of the 'time' utility.
  * **--quiet** – This option prevents the 'time' utility from reporting the status of the program.

When using the 'GNU time' program without any options, you will see output something like below.

```
$ /usr/bin/time wc /etc/hosts
9 28 273 /etc/hosts
0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2024maxresident)k
0inputs+0outputs (0major+73minor)pagefaults 0swaps
```

If you run the same command with the shell built-in keyword 'time', the output would be a bit different:

```
$ time wc /etc/hosts
9 28 273 /etc/hosts

real 0m0.006s
user 0m0.001s
sys 0m0.004s
```

Sometimes, you might want to write the system resource usage output to a file rather than displaying it in the Terminal. To do so, use the **-o** flag like below.

```
$ /usr/bin/time -o file.txt ls
dir1 dir2 file1 file2 file.txt mcelog
```

As you can see in the output, the time utility doesn't display its report, because we wrote it to a file named file.txt. Let us have a look at this file:

```
$ cat file.txt
0.00user 0.00system 0:00.00elapsed 66%CPU (0avgtext+0avgdata 2512maxresident)k
0inputs+0outputs (0major+106minor)pagefaults 0swaps
```

When you use the **-o** flag, if there is no file named 'file.txt', it will be created and the output written to it. If file.txt is already present, its content will be overwritten.

You can also append the output to the file instead of overwriting it using the **-a** flag.

```
$ /usr/bin/time -a file.txt ls
```

The **-f** flag allows users to control the format of the output to their liking. Say for example, the following command displays the output of the 'ls' command and shows just the user, system, and total time.

```
$ /usr/bin/time -f "\t%E real,\t%U user,\t%S sys" ls
dir1 dir2 file1 file2 mcelog
0:00.00 real, 0.00 user, 0.00 sys
```
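The **-v** flag from the options list deserves a quick demonstration as well. The counter values below are illustrative and will differ on your machine, but the shape of the report is what GNU time's verbose mode produces (the full report contains more fields than shown here):

```
$ /usr/bin/time -v ls
dir1 dir2 file1 file2 mcelog
	Command being timed: "ls"
	User time (seconds): 0.00
	System time (seconds): 0.00
	Percent of CPU this job got: 66%
	Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.00
	Maximum resident set size (kbytes): 2136
	...
```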
Please be mindful that the built-in shell keyword 'time' doesn't support all features of the GNU time program.

For more details about the GNU time utility, refer to the man pages.

```
$ man time
```

To know more about the Bash built-in 'time' keyword, run:

```
$ help time
```

And, that's all for now. Hope this was useful.

More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-find-the-execution-time-of-a-command-or-process-in-linux/

作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/find-long-process-running-linux/
[2]: https://www.ostechnix.com/the-type-command-tutorial-with-examples-for-beginners/
diff --git a/sources/tech/20181107 Top 30 OpenStack Interview Questions and Answers.md b/sources/tech/20181107 Top 30 OpenStack Interview Questions and Answers.md
deleted file mode 100644
index e00fc5452b..0000000000
--- a/sources/tech/20181107 Top 30 OpenStack Interview Questions and Answers.md
+++ /dev/null
@@ -1,324 +0,0 @@
Top 30 OpenStack Interview Questions and Answers
======
Nowadays most firms are trying to migrate their IT infrastructure and telco infrastructure to a private cloud, i.e. OpenStack. If you are planning to interview for an OpenStack admin role, then the list of interview questions below might help you crack the interview.

![](https://www.linuxtechi.com/wp-content/uploads/2018/11/OpenStack-Interview-Questions.jpg)

### Q:1 Define OpenStack and its key components?

Ans: OpenStack is a bundle of open source software which, in combination, forms a private cloud platform. OpenStack is also known as a stack of open source software or projects.

Following are the key components of OpenStack:

  * **Nova** – It handles the virtual machines at the compute level and performs other computing tasks at the compute or hypervisor level.
  * **Neutron** – It provides the networking functionality to VMs, compute and controller nodes.
  * **Keystone** – It provides the identity service for all cloud users and OpenStack services. In other words, Keystone is the method to provide access to cloud users and services.
  * **Horizon** – It provides a GUI (Graphical User Interface); using the GUI, admins can perform day-to-day operational tasks with ease.
  * **Cinder** – It provides the block storage functionality. Generally, in OpenStack, Cinder is integrated with Ceph or ScaleIO to serve block storage to compute and controller nodes.
  * **Swift** – It provides the object storage functionality. Generally, Glance images are kept on object storage. External storage like ScaleIO can work as object storage too and can easily be integrated with the Glance service.
  * **Glance** – It provides the cloud image service; using Glance, admins can upload and download cloud images.
  * **Heat** – It provides the orchestration service. Using Heat, admins can easily launch VMs as a stack, and based on requirements the VMs in the stack can be scaled in and scaled out.
  * **Ceilometer** – It provides the telemetry and billing services.
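A quick way to see which of these services a particular cloud actually deploys is to query Keystone's service catalog with the standard OpenStack client. The command itself is standard; the rows returned (and the ID values, abbreviated here) depend entirely on the deployment:

```
$ openstack service list
+----------------------------------+----------+----------+
| ID                               | Name     | Type     |
+----------------------------------+----------+----------+
| ...                              | keystone | identity |
| ...                              | nova     | compute  |
| ...                              | neutron  | network  |
| ...                              | glance   | image    |
+----------------------------------+----------+----------+
```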
### Q:2 What services generally run on a controller node?

Ans: The following services run on a controller node:

  * Identity Service (Keystone)
  * Image Service (Glance)
  * Nova services like Nova API, Nova Scheduler & Nova DB
  * Block & Object Service
  * Ceilometer Service
  * MariaDB / MySQL and RabbitMQ Service
  * Management services of Networking (Neutron) and networking agents
  * Orchestration Service (Heat)

### Q:3 What services generally run on a compute node?

Ans: The following services run on a compute node:

  * Nova-Compute
  * Networking services like OVS

### Q:4 What is the default location of VMs on the compute nodes?

Ans: VMs on a compute node are stored at "**/var/lib/nova/instances**".

### Q:5 What is the default location of Glance images?

Ans: As the Glance service runs on a controller node, all Glance images are stored under the folder "**/var/lib/glance/images**" on the controller node.

Read More : [**How to Create and Delete Virtual Machine(VM) from Command line in OpenStack**][1]

### Q:6 Tell me the command to spin up a VM from the command line?

Ans: We can easily spin up a new VM using the following openstack command:

```
# openstack server create --flavor {flavor-name} --image {Image-Name-Or-Image-ID} --nic net-id={Network-ID} --security-group {Security_Group_ID} --key-name {Keypair-Name} {VM-Name}
```

### Q:7 How to list the network namespaces of a tenant in OpenStack?

Ans: The network namespaces of a tenant can be listed using the `ip netns` command:

```
~# ip netns list
qdhcp-a51635b1-d023-419a-93b5-39de47755d2d
haproxy
vrouter
```

### Q:8 How to execute a command inside a network namespace in OpenStack?

Ans: Let's assume we want to execute the "ifconfig" command inside the network namespace "qdhcp-a51635b1-d023-419a-93b5-39de47755d2d"; then run the beneath command.

Syntax: ip netns exec {network-namespace} {command}

```
~# ip netns exec qdhcp-a51635b1-d023-419a-93b5-39de47755d2d "ifconfig"
```

### Q:9 How to upload and download a cloud image in Glance from the command line?

Ans: A cloud image can be uploaded to Glance from the command line using the beneath openstack command:

```
~# openstack image create --disk-format qcow2 --container-format bare --public --file {Name-Cloud-Image}.qcow2 {Cloud-Image-Name}
```

Use the below openstack command to download a cloud image from the command line:

```
~# glance image-download --file {Local-File-Name} --progress {Image-ID}
```

### Q:10 How to reset the error state of a VM to active in an OpenStack environment?

Ans: There are scenarios where a VM goes into an error state; this error state can be changed back to active using the below command:

```
~# nova reset-state --active {Instance_id}
```

### Q:11 How to get the list of available floating IPs from the command line?

Ans: Available floating IPs can be listed using the below command:

```
~]# openstack ip floating list | grep None | head -10
```
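If that list comes back empty, a new floating IP can usually be allocated from the external network with the newer client syntax. The command itself is standard; the network name `public` is an assumption, so substitute your deployment's external network:

```
~# openstack floating ip create public
```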
### Q:12 How to provision a virtual machine in a specific availability zone and on a specific compute host?

Ans: Let's assume we want to provision a VM in the availability zone NonProduction on compute-02; use the beneath command to accomplish this:

```
~]# openstack server create --flavor m1.tiny --image cirros --nic net-id=e0be93b8-728b-4d4d-a272-7d672b2560a6 --security-group NonProd_SG  --key-name linuxtec --availability-zone NonProduction:compute-02  nonprod_testvm
```

### Q:13 How to get the list of VMs which are provisioned on a specific compute node?

Ans: Let's assume we want to list the VMs which are provisioned on compute-0-19; use the below command.

Syntax: openstack server list --all-projects --long -c Name -c Host | grep -i {Compute-Node-Name}

```
~# openstack server list --all-projects --long -c Name -c Host | grep -i  compute-0-19
```

### Q:14 How to view the console log of an OpenStack instance from the command line?

Ans: Console logs of an instance can be viewed from the command line using the following commands.

First get the ID of the instance, then use the below command:

```
~# openstack console log show {Instance-id}
```

### Q:15 How to get the console URL of an OpenStack instance?

Ans: The console URL of an instance can be retrieved from the command line using the below openstack command:

```
~# openstack console url show {Instance-id}
```

### Q:16 How to create a bootable cinder / block storage volume from the command line?

Ans: To create a bootable cinder or block storage volume (assume 8 GB), refer to the below steps:

  * Get the image list using the below command:

```
~# openstack image list | grep -i cirros
| 89254d46-a54b-4bc8-8e4d-658287c7ee92 | cirros  | active |
```

  * Create a bootable volume of size 8 GB using the cirros image:

```
~# cinder create --image-id 89254d46-a54b-4bc8-8e4d-658287c7ee92 --display-name cirros-bootable-vol  8
```

### Q:17 How to list all projects or tenants that have been created in your OpenStack?

Ans: The list of projects or tenants can be retrieved from the command line using the below openstack command:

```
~# openstack project list --long
```

### Q:18 How to list the endpoints of OpenStack services?

Ans: OpenStack service endpoints are classified into three categories:

  * Public Endpoint
  * Internal Endpoint
  * Admin Endpoint

Use the below openstack command to view the endpoints of each OpenStack service:

```
~# openstack catalog list
```

To list the endpoints of a specific service like keystone, use below:

```
~# openstack catalog show keystone
```

Read More : [**Step by Step Instance Creation Flow in OpenStack**][2]

### Q:19 In which order should we restart nova services on a controller node?

Ans: The following order should be followed to restart the nova services on an OpenStack controller node:

  * service nova-api restart
  * service nova-cert restart
  * service nova-conductor restart
  * service nova-consoleauth restart
  * service nova-scheduler restart

### Q:20 Let's assume DPDK ports are configured on a compute node for data traffic; how will you check the status of the DPDK ports?

Ans: As DPDK ports are configured via Open vSwitch (OVS), their status can be checked with the standard OVS commands, such as `ovs-vsctl show` (see also Q:22 below).

### Q:21 How to add new rules to an existing SG (Security Group) from the command line in OpenStack?
- -Ans: New rules to the existing SG in openstack can be added using the neutron command, - -``` -~# neutron security-group-rule-create --protocol   --port-range-min --port-range-max --direction  --remote-ip-prefix Security-Group-Name -``` - -### Q:22 How to view the OVS bridges configured on Controller and Compute Nodes? - -Ans: OVS bridges on Controller and Compute nodes can be viewed using below command, - -``` -~]# ovs-vsctl show -``` - -### Q:23 What is the role of Integration Bridge(br-int) on the Compute Node ? - -Ans: The integration bridge (br-int) performs VLAN tagging and untagging for the traffic coming from and to the instance running on the compute node. - -Packets leaving the n/w interface of an instance goes through the linux bridge (qbr) using the virtual interface qvo. The interface qvb is connected to the Linux Bridge & interface qvo is connected to integration bridge (br-int). The qvo port on integration bridge has an internal VLAN tag that gets appended to packet header when a packet reaches to the integration bridge. - -### Q:24 What is the role of Tunnel Bridge (br-tun) on the compute node? - -Ans: The tunnel bridge (br-tun) translates the VLAN tagged traffic from integration bridge to the tunnel ids using OpenFlow rules. - -br-tun (tunnel bridge) allows the communication between the instances on different networks. Tunneling helps to encapsulate the traffic travelling over insecure networks, br-tun supports two overlay networks i.e GRE and VXLAN - -### Q:25 What is the role of external OVS bridge (br-ex)? - -Ans: As the name suggests, this bridge forwards the traffic coming to and from the network to allow external access to instances. br-ex connects to the physical interface like eth2, so that floating IP traffic for tenants networks is received from the physical network and routed to the tenant network ports. - -### Q:26 What is function of OpenFlow rules in OpenStack Networking? - -Ans: OpenFlow rules is a mechanism that define how a packet will reach to destination starting from its source. OpenFlow rules resides in flow tables. The flow tables are part of OpenFlow switch. - -When a packet arrives to a switch, it is processed by the first flow table, if it doesn’t match any flow entries in the table then packet is dropped or forwarded to another table. - -### Q:27 How to display the information about a OpenFlow switch (like ports, no. of tables, no of buffer)? - -Ans: Let’s assume we want to display the information about OpenFlow switch (br-int), run the following command, - -``` -root@compute-0-15# ovs-ofctl show br-int -OFPT_FEATURES_REPLY (xid=0x2): dpid:0000fe981785c443 -n_tables:254, n_buffers:256 -capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP -actions: output enqueue set_vlan_vid set_vlan_pcp strip_vlan mod_dl_src mod_dl_dst mod_nw_src mod_nw_dst mod_nw_tos mod_tp_src mod_tp_dst - 1(patch-tun): addr:3a:c6:4f:bd:3e:3b -     config:     0 -     state:      0 -     speed: 0 Mbps now, 0 Mbps max - 2(qvob35d2d65-f3): addr:b2:83:c4:0b:42:3a -     config:     0 -     state:      0 -     current:    10GB-FD COPPER -     speed: 10000 Mbps now, 0 Mbps max - ……………………………………… -``` - -### Q:28 How to display the entries for all the flows in a switch? - -Ans: Flows entries of a switch can be displayed using the command ‘ **ovs-ofctl dump-flows** ‘ - -Let’s assume we want to display flow entries of OVS integration bridge (br-int), - -### Q:29 What are Neutron Agents and how to list all neutron agents? 
- -Ans: OpenStack neutron server acts as the centralized controller, the actual network configurations are executed either on compute and network nodes. Neutron agents are software entities that carry out configuration changes on compute or network nodes. Neutron agents communicate with the main neutron service via Neuron API and message queue. - -Neutron agents can be listed using the following command, - -``` -~# openstack network agent list -c ‘Agent type’ -c Host -c Alive -c State -``` - -### Q:30 What is CPU pinning? - -Ans: CPU pinning refers to reserving the physical cores for specific virtual machine. It is also known as CPU isolation or processor affinity. The configuration is in two parts: - - * it ensures that virtual machine can only run on dedicated cores - * it also ensures that common host processes don’t run on those cores - - - -In other words we can say pinning is one to one mapping of a physical core to a guest vCPU. - --------------------------------------------------------------------------------- - -via: https://www.linuxtechi.com/openstack-interview-questions-answers/ - -作者:[Pradeep Kumar][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: http://www.linuxtechi.com/author/pradeep/ -[b]: https://github.com/lujun9972 -[1]: https://www.linuxtechi.com/create-delete-virtual-machine-command-line-openstack/ -[2]: https://www.linuxtechi.com/step-by-step-instance-creation-flow-in-openstack/ diff --git a/sources/tech/20181108 Choosing a printer for Linux.md b/sources/tech/20181108 Choosing a printer for Linux.md deleted file mode 100644 index a3b87329db..0000000000 --- a/sources/tech/20181108 Choosing a printer for Linux.md +++ /dev/null @@ -1,74 +0,0 @@ -Choosing a printer for Linux -====== -Linux offers widespread support for printers. Learn how to take advantage of it. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/email_paper_envelope_document.png?itok=uPj_kouJ) - -We've made significant strides toward the long-rumored paperless society, but we still need to print hard copies of documents from time to time. If you're a Linux user and have a printer without a Linux installation disk or you're in the market for a new device, you're in luck. That's because most Linux distributions (as well as MacOS) use the Common Unix Printing System ([CUPS][1]), which contains drivers for most printers available today. This means Linux offers much wider support than Windows for printers. - -### Selecting a printer - -If you're buying a new printer, the best way to find out if it supports Linux is to check the documentation on the box or the manufacturer's website. You can also search the [Open Printing][2] database. It's a great resource for checking various printers' compatibility with Linux. - -Here are some Open Printing results for Linux-compatible Canon printers. -![](https://opensource.com/sites/default/files/uploads/linux-printer_2-openprinting.png) - -The screenshot below is Open Printing's results for a Hewlett-Packard LaserJet 4050—according to the database, it should work "perfectly." The recommended driver is listed along with generic instructions letting me know it works with CUPS, Line Printing Daemon (LPD), LPRng, and more. 
-![](https://opensource.com/sites/default/files/uploads/linux-printer_3-hplaserjet.png) - -In all cases, it's best to check the manufacturer's website and ask other Linux users before buying a printer. - -### Checking your connection - -There are several ways to connect a printer to a computer. If your printer is connected through USB, it's easy to check the connection by issuing **lsusb** at the Bash prompt. - -``` -$ lsusb -``` - -The command returns **Bus 002 Device 004: ID 03f0:ad2a Hewlett-Packard** —it's not much information, but I can tell the printer is connected. I can get more information about the printer by entering the following command: - -``` -$ dmesg | grep -i usb -``` - -The results are much more verbose. -![](https://opensource.com/sites/default/files/uploads/linux-printer_1-dmesg.png) - -If you're trying to connect your printer to a parallel port (assuming your computer has a parallel port—they're rare these days), you can check the connection with this command: - -``` -$ dmesg | grep -i parport -``` - -The information returned can help me select the right driver for my printer. I have found that if I stick to popular, name-brand printers, most of the time I get good results. - -### Setting up your printer software - -Both Fedora Linux and Ubuntu Linux contain easy printer setup tools. [Fedora][3] maintains an excellent wiki for answers to printing issues. The tools are easily launched from Settings in the GUI or by invoking **system-config-printer** on the command line. - -![](https://opensource.com/sites/default/files/uploads/linux-printer_4-printersetup.png) - -Hewlett-Packard's [HP Linux Imaging and Printing][4] (HPLIP) software, which supports Linux printing, is probably already installed on your Linux system; if not, you can [download][5] the latest version for your distribution. Printer manufacturers [Epson][6] and [Brother][7] also have web pages with Linux printer drivers and information. - -What's your favorite Linux printer? Please share your opinion in the comments. 
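One last command-line tip. Independent of the GUI setup tools above, CUPS itself can report what it already knows. These are standard CUPS utilities, though listing the driver database with lpinfo may require root privileges on some distributions:

```
$ lpstat -p -d                    # configured printers and the default destination
$ lpinfo -m | grep -i laserjet    # installed drivers matching a given model
```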
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/11/choosing-printer-linux - -作者:[Don Watkins][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/don-watkins -[b]: https://github.com/lujun9972 -[1]: https://www.cups.org/ -[2]: http://www.openprinting.org/printers -[3]: https://fedoraproject.org/wiki/Printing -[4]: https://developers.hp.com/hp-linux-imaging-and-printing -[5]: https://developers.hp.com/hp-linux-imaging-and-printing/gethplip -[6]: https://epson.com/Support/wa00821 -[7]: https://support.brother.com/g/s/id/linux/en/index.html?c=us_ot&lang=en&comple=on&redirect=on diff --git a/sources/tech/20181108 My Google-free Android life.md b/sources/tech/20181108 My Google-free Android life.md new file mode 100644 index 0000000000..4e94af0de8 --- /dev/null +++ b/sources/tech/20181108 My Google-free Android life.md @@ -0,0 +1,191 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (My Google-free Android life) +[#]: via: (https://lushka.al/my-android-setup/) +[#]: author: (Anxhelo Lushka https://lushka.al/) + +My Google-free Android life +====== + +People have been asking me a lot lately about my phone, my Android setup and how I manage to use my smartphone without Google Services. Well, this is a post that aims to address precisely that. I would like to make this article really beginner-friendly so I’ll try to go slow, going through things one by one and including screenshots so you can have a better view on how things happen and work like. + +At first I’ll start with why Google Services are (imo) bad for your device. I could cut it short and guide you to this [post][1] by [Richard Stallman][2], but I’m grabbing a few main points from it and adding them here. + + * Nonfree software required + * In general, most Google services require running nonfree Javascript code. Nowadays, nothing whatsoever appears if Javascript is disabled, even making a Google account requires running nonfree software (Javascript sent by the site), same thing for logging in. + * Surveillance + * Google quietly combines its ad-tracking profiles with its browsing profiles and stores a huge amount of data on each user. + * Terms of Service + * Google cuts off accounts for users that resell Pixel phones. They lose access to all of their mail and documents stored in Google servers under that account. + * Censorship + * Amazon and Google have cut off domain-fronting, a feature used to enable people in tyrannical countries to reach communication systems that are banned there. + * Google has agreed to perform special censorship of Youtube for the government of Pakistan, deleting views that the state opposes. This will help the illiberal Pakistani state suppress dissent. + * Youtube’s “content ID” automatically deletes posted videos in a way copyright law does not require. + + + +These are just a few reasons, but you can read the post by RMS I linked above in which he tries to explain these points in detail. Although it may look like a tinfoil hat reaction to you, all these actions already happen everyday in real life. 
+ +### Next on the list, my setup and a tutorial on how I achieved it + +I own a **[Xiaomi Redmi Note 5 Pro][3]** smartphone (codename **whyred** ), produced in China by [Xiaomi][4], which I bought for around 185 EUR 4 months ago (from the time of writing this post). + +Now you might be thinking, ‘but why did you buy a Chinese brand, they are not reliable’. Yes, it is not made from the usuals as you would expect, such as Samsung (which people often associate with Android, which is plain wrong), OnePlus, Nokia etc, but you should know almost every phone is produced in China. + +There were a few reasons I chose this phone, first one of course being the price. It is a quite **budget-friendly** device, so most people are able to afford it. Next one would be the specs, which on paper (not only) are pretty decents for the price tag. With a 6 inch screen (Full HD resolution), a **4000 mAh battery** (superb battery life), 4GB of RAM, 64GB of storage, dual back cameras (12MP + 5MP), a front camera with flash (13MP) and a decent efficient Snapdragon 636, it was probably the best choice at that moment. + +The issue with it was that it came with [MIUI][5], the Android skin that Xiaomi ships with most of its devices (except the Android One project devices). Yes, it is not that horrible, it has some extra features, but the problems lie deeper within. One of the reasons these devices from Xiaomi are so cheap (afaik they only have 5-10% win margin from sales) is that **they include data mining and ads in the system altogether with MIUI**. In this way, the system apps requires extra unnecessary permissions that mine your data and bombard you with ads, from which Xiaomi earns money. + +Funnily enough, the Weather app included wanted access to my contacts and to make calls, why would it need that if it would just show the weather? Another case was with the Recorder app, it also required contacts and internet permissions, probably to send those recordings back to Xiaomi. + +To fix this, I’d have to format the phone and get rid of MIUI. This has become increasingly difficult with the latest phones in the market. + +The concept of formatting a phone is simple, you remove the existing system and install a new one of your preference (Android-only in this case). To do that, you have to have your [bootloader][6] unlocked. + +> A bootloader is a computer program that loads an operating system (OS) or runtime environment for the computer after completion of the self-tests. — [Wikipedia][7] + +The problem here is that Xiaomi has a specific policy about the bootloader unlocking. A few months ago, the process was like this. You would have to [make a request][8] to Xiaomi to obtain an unlock code for your phone, by giving a valid reason, but this would not always work, as they could just refuse your request without reason and explanation. + +Now, that process has changed. You’ll have to download a specific software from Xiaomi, called [Mi Unlock][9], install it in your Windows PC, [activate Debugging Settings in Developer Options][10] on your phone, reboot to the bootloader mode (by holding the Volume Down + Power button while the phone is off) and connect the phone to your computer to start a process called “Approval”. This process starts a timer on the Xiaomi servers that will allow you to **unlock the phone only after a period of 15 days** (or a month in some rare cases, totally random) goes by. 
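While waiting out that timer, it is worth confirming that your computer can actually see the phone over USB. These are the stock Android platform tools rather than anything Xiaomi-specific, and the serial numbers below are placeholders:

```
$ adb devices        # with the phone booted normally and USB debugging on
List of devices attached
a1b2c3d4	device

$ fastboot devices   # with the phone rebooted into the bootloader
a1b2c3d4	fastboot
```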
+ +![Mi Unlock app][11] + +After this period of 15 days has passed, you have to re-connect your phone and do the same procedure as above, then by pressing the Unlock button your bootloader will be unlocked and this will allow you to install other ROM-s (systems). **Careful, make sure to backup your data because unlocking the bootloader deletes everything in the phone**. + +The next step would be finding a system ([ROM][12]) that works for your device. I searched through the [XDA Developers Forum][13], which is a place where Android developers and users exchange ideas, apps etc. Fortunately, my phone is quite popular so it had [its own forum category][14]. There, I skimmed through some popular ROM-s for my device and decided to use the [AOSiP ROM][15] (AOSiP standing for Android Open Source illusion Project). + +**EDIT** : Someone emailed me to say that my article is exactly what [/e/][16] does and is targeted to. I wanted to say thank you for reaching out but that is not true at all. The reasoning behind my opinion about /e/ can also be found in this [website][17], but I’ll list a few of the reasons here. + +eelo is a “foundation” that got over 200K € in funding from Kickstarter and IndieGoGo, promising to create a mobile OS and web services that are open and secure and protect your privacy. + + 1. Their OS is based on LineageOS 14.1 (Android 7.1) with microG and other open source apps with it, which already exists for a long time now and it’s called [Lineage for microG][18]. + 2. Instead of building all apps from the source code, they download the APKs from [APKPure][19] and put them in the ROM, without knowing if those APKs contain proprietary code/malware in them. + 3. At one point, they were literally just removing the Lineage copyright header from their code and adding theirs. + 4. They love to delete negative feedback and censor their users’ opinions in their Telegram group chat. + + + +In conclusion, I **don’t recommend using /e/** ROM-s (at least until now). + +Another thing you would likely want to do is have [root access][20] to your phone, to make it truly yours and modify files in the system, such as use a system-wide adblocker etc. To do this, I decided to use [Magisk][21], a godsend app developed by a student to help you gain root access on your device and install what are called [modules][22], basically software. + +After downloading the ROM and Magisk, I had to install them on my phone. To do that, I moved the files to my SD card on the phone. Now, to install the system, I had to use something called a [recovery system][23]. The one I use is called [TWRP][24] (standing for TeamWin Recovery Project), a popular solution. + +To install the recovery system (sounds hard, I know), I had to [flash][20] the file on the phone. To do that, I connected my phone with the computer (Fedora Linux system) and with something called [ADB Tools][25] I issued a command that overwrites the system recovery with the custom one I had. + +> fastboot flash recovery twrp.img + +After this was done, I turned off the phone and kept Volume Up + Power button pressed until I saw the TWRP screen show up. That meant I was good to go and it was ready to receive my commands. + +![TWRP screen][26] + +Next step was to **issue a Wipe command** , necessary when you first install a custom ROM on your phone. 
As you can see from the image above, the Wipe command clears the Data, Cache and Dalvik (there is also an advanced option that allows us to tick a box to delete the System one too, as we don’t need the old one anymore). + +This takes a few moments and after that, your phone is basically clean. Now it’s time to **install the system**. By pressing the Install button on the main screen, we select the zip file we added there before (the ROM file) and swipe the screen to install it. Next, we have to install Magisk, which gives us root access to the device. + +**EDIT** : As some more experienced/power Android users might have noticed until now, there is no [GApps][27] (Google Apps) included. This is what we call GApps-less in the Android world, not having those packages installed at all. + +Note that one of the downsides of not having Google Services installed is that some of your apps might not work, for example their notifications might take longer to arrive or might not even work at all (this is what happens with Mattermost app for me). This happens because these apps use [Google Cloud Messaging][28] (now called [Firebase][29]) to wake the phone and push notifications to your phone. + +You can solve this (partially) by installing and using [microG][30] which provides some features of Google Services but allows for more control on your side. I don’t recommend using this because it still helps Google Services and you don’t really give up on them, but it’s a good start if you want to quit Google slowly and not go cold turkey on it. + +After successfully installing both, now we reboot the phone and **tada** 🎉, we are in the main screen. + +### Next part, installing the apps and configuring everything + +This is where things start to get easier. To install the apps, I use [F-Droid][31], an alternative app store that includes **only free and open source apps**. If you need apps that are not available there, you can use [Aurora Store][32], a client to download apps from the Play Store without using your Google account or getting tracked. + +F-Droid has what are called repos, a “storehouse” that contains apps you can install. I use the default ones and have added another one from [IzzyOnDroid][33], that contains some more apps not available from the default F-Droid repo and is updated more often. + +![My repos][34] + +Below you will find a list of the apps I have installed, what they replace and their use. + +This is pretty much **my list of the most useful F-Droid apps** I use, but unfortunately these are NOT the only apps I use. The proprietary apps I use (I know, I might sound a hypocrite, but not everything is replaceable, not yet at least) are as below: + + * AliExpress + * Boost for Reddit + * Google Camera (coupled with Camera API 2, this app allows me to take wonderful pictures with a 185 EUR phone, it’s just too impressive) + * Instagram + * MediaBox HD (allows me to stream movies) + * Mi Fit (an app that pairs with my Mi Band 2) + * MyVodafoneAL (the carrier app) + * ProtonMail (email app) + * Shazam Encore (to find those songs you usually listen in coffee shops) + * Snapseed (photo editing app, really simple, powerful and quite good) + * Spotify (music streaming) + * Titanium Backup (to backup my app data, wifi passwords, calls log etc.) 
+ * ViPER4Android FX (music equalizer) + * VSCO (photo editing, never use it really) + * WhatsApp (E2E proprietary messaging app, almost everyone I know has it) + * WiFi Map (mapped hotspots that are available, handy when abroad) + + + +This is pretty much it, all the apps I use on my phone. **The configs are then pretty simple and straightforward and I can give a few tips**. + + 1. Read and check the permissions of apps carefully, don’t click ‘Install’ mindlessly. + 2. Try to use as many open source apps as possible, they both respect your privacy and are free (as in both free beer and freedom). + 3. Use a VPN as much as you can, find a reputable one and don’t use free ones, otherwise you get to be the product and you’ll get your data harvested. + 4. Don’t keep your WiFi/mobile data/location on all the time, it might be a security risk. + 5. Try not to rely on fingerprint unlock only, or better yet use only PIN/password/pattern unlock, as biometric data can be cloned and used against you, for example to unlock your phone and steal your data. + + + +And as a bonus for reading far down here, **a screenshot of my home screen** right now. + +![Screenshot][35] + + +-------------------------------------------------------------------------------- + +via: https://lushka.al/my-android-setup/ + +作者:[Anxhelo Lushka][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://lushka.al/ +[b]: https://github.com/lujun9972 +[1]: https://stallman.org/google.html +[2]: https://en.wikipedia.org/wiki/Richard_Stallman +[3]: https://www.gsmarena.com/xiaomi_redmi_note_5_pro-8893.php +[4]: https://en.wikipedia.org/wiki/Xiaomi +[5]: https://en.wikipedia.org/wiki/MIUI +[6]: https://forum.xda-developers.com/wiki/Bootloader +[7]: https://en.wikipedia.org/wiki/Booting +[8]: https://en.miui.com/unlock/ +[9]: http://www.miui.com/unlock/apply.php +[10]: https://www.youtube.com/watch?v=7zhEsJlivFA +[11]: https://lushka.al//assets/img/posts/mi-unlock.png +[12]: https://www.xda-developers.com/what-is-custom-rom-android/ +[13]: https://forum.xda-developers.com/ +[14]: https://forum.xda-developers.com/redmi-note-5-pro +[15]: https://forum.xda-developers.com/redmi-note-5-pro/development/rom-aosip-8-1-t3804473 +[16]: https://e.foundation +[17]: https://ewwlo.xyz/evil +[18]: https://lineage.microg.org/ +[19]: https://apkpure.com/ +[20]: https://lifehacker.com/5789397/the-always-up-to-date-guide-to-rooting-any-android-phone +[21]: https://forum.xda-developers.com/apps/magisk/official-magisk-v7-universal-systemless-t3473445 +[22]: https://forum.xda-developers.com/apps/magisk +[23]: http://www.smartmobilephonesolutions.com/content/android-system-recovery +[24]: https://dl.twrp.me/whyred/ +[25]: https://developer.android.com/studio/command-line/adb +[26]: https://lushka.al//assets/img/posts/android-twrp.png +[27]: https://opengapps.org/ +[28]: https://developers.google.com/cloud-messaging/ +[29]: https://firebase.google.com/docs/cloud-messaging/ +[30]: https://microg.org/ +[31]: https://f-droid.org/ +[32]: https://f-droid.org/en/packages/com.dragons.aurora/ +[33]: https://android.izzysoft.de/repo +[34]: https://lushka.al//assets/img/posts/android-fdroid-repos.jpg +[35]: https://lushka.al//assets/img/posts/android-screenshot.jpg +[36]: https://creativecommons.org/licenses/by-nc-sa/4.0/ diff --git a/sources/tech/20181108 The Difference Between more, less And most Commands.md 
b/sources/tech/20181108 The Difference Between more, less And most Commands.md
deleted file mode 100644
index e2c2c5fcf6..0000000000
--- a/sources/tech/20181108 The Difference Between more, less And most Commands.md
+++ /dev/null
@@ -1,234 +0,0 @@
HankChow translating

The Difference Between more, less And most Commands
======
![](https://www.ostechnix.com/wp-content/uploads/2018/11/more-less-and-most-commands-720x340.png)
If you're a newbie Linux user, you might be confused by these three command-line utilities, namely **more**, **less** and **most**. No problem! In this brief guide, I will explain the differences between these three commands, with some examples in Linux. To be precise, they are more or less the same, with slight differences. All these commands come preinstalled in most Linux distributions.

First, we will discuss the 'more' command.

### The 'more' program

**'more'** is an old, basic terminal pager or paging program that is used to open a given file for interactive reading. If the content of the file is too large to fit in one screen, it displays the content page by page. You can scroll through the contents of the file by pressing the **ENTER** or **SPACE BAR** keys. But one limitation is that you can scroll in the **forward direction only**, not backwards. That means you can scroll down, but can't go up.

![](https://www.ostechnix.com/wp-content/uploads/2018/11/more-command-demo.gif)

**Update:**

A fellow Linux user has pointed out that the more command does allow backward scrolling. The original version allowed only forward scrolling, but newer implementations allow limited backward movement. To scroll backwards, just press **b**. The only limitation is that it doesn't work for pipes (ls|more, for example).

To quit, press **q**.

**more command examples:**

Open a file, for example ostechnix.txt, for interactive reading:

```
$ more ostechnix.txt
```

To search for a string, type the search query after the forward slash (/) like below:

```
/linux
```

To go to the next matching string, press **'n'**.

To open the file starting at line number 10, simply type:

```
$ more +10 file
```

The above command shows the contents of the given file starting from the 10th line.

If you want the 'more' utility to prompt you to continue reading the file by pressing the space bar key, just use the **-d** flag:

```
$ more -d ostechnix.txt
```

![][2]

As you see in the above screenshot, the more command prompts you to press SPACE to continue.

To view a summary of all options and keybindings in the help section, press **h**.

For more details about the **'more'** command, refer to the man pages.

```
$ man more
```

### The 'less' program

The **'less'** command is also used to open a given file for interactive reading, allowing scrolling and search. If the content of the file is too large, it pages the output so you can scroll page by page. Unlike the 'more' command, it allows scrolling in both directions. That means you can scroll up and down through a file.

![](https://www.ostechnix.com/wp-content/uploads/2018/11/less-command-demo.gif)

So, feature-wise, 'less' has more advantages than the 'more' command. Here are some notable advantages of the 'less' command:

  * Allows forward and backward scrolling,
  * Search in forward and backward directions,
  * Go to the end and the start of the file immediately,
  * Open the given file in an editor.
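Beyond that list, less can also follow a file as it grows, much like `tail -f`, when started with the **+F** option. The log path below is just a stand-in; use any file that is being appended to. Press **Ctrl+C** to stop following and browse normally, then **q** to quit:

```
$ less +F /var/log/syslog
```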
**less command examples:**

Open a file:

```
$ less ostechnix.txt
```

Press the **SPACE BAR** or **ENTER** key to go down and press **'b'** to go up.

To perform a forward search, type the search query after the forward slash ( **/** ) like below:

```
/linux
```

To go to the next matching string, press **'n'**. To go back to the previous matching string, press **N** (shift+n).

To perform a backward search, type the search query after the question mark ( **?** ) like below:

```
?linux
```

Press **n/N** to go to the **next/previous** match.

To open the currently opened file in an editor, press **v**. It will open your file in your default text editor. You can then edit, remove, or rename the text in the file.

To view a summary of less commands, options and keybindings, press **h**.

To quit, press **q**.

For more details about the 'less' command, refer to the man pages.

```
$ man less
```

### The 'most' program

The 'most' terminal pager has more features than the 'more' and 'less' programs. Unlike the previous utilities, the 'most' command can open more than one file at a time. You can easily switch between the opened files, edit the current file, jump to the **N**th line in the opened file, split the current window in half, lock and scroll windows together and so on. By default, it won't wrap long lines, but truncates them and provides a left/right scrolling option.

**most command examples:**

Open a single file:

```
$ most ostechnix1.txt
```
![](https://www.ostechnix.com/wp-content/uploads/2018/11/most-command.png)
To edit the current file, press **e**.

To perform a forward search, press **/** or **S** or **f** and type the search query. Press **n** to find the next matching string in the current direction.

![][3]

To perform a backward search, press **?** and type the search query. Similarly, press **n** to find the next matching string in the current direction.

Open multiple files at once:

```
$ most ostechnix1.txt ostechnix2.txt ostechnix3.txt
```

If you have opened multiple files, you can switch to the next file by typing **:n**. Use the **UP/DOWN** arrow keys to select the next file and hit the **ENTER** key to view the chosen file.
![](https://www.ostechnix.com/wp-content/uploads/2018/11/most-2.gif)

To open a file at the first occurrence of a given string, for example **linux**:

```
$ most file +/linux
```

To view the help section, press **h** at any time.

**List of all keybindings:**

Navigation:

  * **SPACE, D** – Scroll down one screen.
  * **DELETE, U** – Scroll up one screen.
  * **DOWN arrow** – Move down one line.
  * **UP arrow** – Move up one line.
  * **T** – Go to the top of the file.
  * **B** – Go to the bottom of the file.
  * **> , TAB** – Scroll the window right.
  * **<** – Scroll the window left.
  * **RIGHT arrow** – Scroll the window left by 1 column.
  * **LEFT arrow** – Scroll the window right by 1 column.
  * **J, G** – Go to the nth line. For example, to jump to the 10th line, simply type **"10j"** (without quotes).
  * **%** – Go to a percentage of the file.

Window Commands:

  * **Ctrl-X 2, Ctrl-W 2** – Split the window.
  * **Ctrl-X 1, Ctrl-W 1** – Make only one window.
  * **O, Ctrl-X O** – Move to the other window.
  * **Ctrl-X 0 (zero)** – Delete the window.

Search through files:

  * **S, f, /** – Search forward.
  * **?** – Search backward.
  * **N** – Find the next match in the current search direction.

Exit:

  * **q** – Quit MOST. All opened files will be closed.
  * **:N, :n** – Quit this file and view the next (use the UP/DOWN arrow keys to select the next file).

For more details about the 'most' command, refer to the man pages.

```
$ man most
```

### TL;DR

**more** – An old, very basic paging program. Allows only forward navigation and limited backward navigation.

**less** – It has more features than the 'more' utility. It allows both forward and backward navigation and search. It also starts faster than text editors like **vi** when you open large text files.

**most** – It has all the features of the above programs, plus additional ones, like opening multiple files at a time, locking and scrolling all windows together, splitting the windows and more.

And, that's all for now. Hope you got the basic idea about these three paging programs. I've covered only the basics; you can learn more advanced options and functionality by looking into each program's man pages.

More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/the-difference-between-more-less-and-most-commands/

作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[2]: http://www.ostechnix.com/wp-content/uploads/2018/11/more-1.png
[3]: http://www.ostechnix.com/wp-content/uploads/2018/11/most-1-1.gif
diff --git a/sources/tech/20181112 A Free, Secure And Cross-platform Password Manager.md b/sources/tech/20181112 A Free, Secure And Cross-platform Password Manager.md
deleted file mode 100644
index 66d34769c4..0000000000
--- a/sources/tech/20181112 A Free, Secure And Cross-platform Password Manager.md
+++ /dev/null
@@ -1,137 +0,0 @@
A Free, Secure And Cross-platform Password Manager
======

![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-Password-Manager-720x340.png)

In this modern Internet era, you will surely have accounts on a lot of websites: a personal or official mail account, social or professional network accounts, a GitHub account, ecommerce accounts and so on. So you should have several different passwords for those different accounts. I am sure you are already aware that setting the same password for multiple accounts is a crazy and dangerous practice. If an attacker manages to breach one of your accounts, it's highly likely he/she will try to access your other accounts with the same password. So, it is **highly recommended to set different passwords** for different accounts.

However, remembering several passwords can be difficult. You could write them down on paper, but that is not an efficient method either, and you might lose the notes over time. This is where password managers come in handy. Password managers are like a repository where you can store all your passwords for different accounts and lock them down with a master password. That way, all you need to remember is just the master password. We have already reviewed an open source password manager named [**KeeWeb**][1]. Today, we are going to see yet another password manager called **Buttercup**.
- -### About Buttercup - -Buttercup is a free, open source, secure and cross-platform password manager written using **NodeJS**. It helps you to store all your login credentials of different accounts in an encrypted archive, which can be stored in your local system or any remote services like DropBox, ownCloud, NextCloud and WebDAV-based services. It uses strong **256bit AES encryption** method to save your sensitive data with a master password. So, no one can access your login details except those who have the master password. Buttercup currently supports Linux, Mac OS and Windows. It is also available a browser extension and mobile app. so, you can access the same archive you use on the desktop application and browser extension in your Android or iOS devices as well. - -### Installing Buttercup Password Manager - -Buttercup is currently available as **.deb** , **.rpm** packages, portable AppImage and tar archives for Linux platform. Head over to the [**releases pages**][2] and download and install the version you want to use. - -Buttercup desktop application is also available in [**AUR**][3], so you can install on Arch-based systems using AUR helper programs, such as [**Yay**][4], as shown below: - -``` -$ yay -S buttercup-desktop -``` - -If you have downloaded the portable AppImage file, make it executable using command: - -``` -$ chmod +x buttercup-desktop-1.11.0-x86_64.AppImage -``` - -Then, launch it using command: - -``` -$ ./buttercup-desktop-1.11.0-x86_64.AppImage -``` - -Once you run this command, it will prompt whether you like to integrate Buttercup AppImage with your system. If you choose ‘Yes’, this will add it to your applications menu and install icons. If you don’t do this, you can still launch the application by double-clicking on the AppImage or using the above command from the Terminal. - -### Add archives - -When you launch it for the first time, you will see the following welcome screen: -![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-1.png) - -We haven’t added any archives yet, so let us add one. To do so, click on the “New Archive File” button and type the name of the archive file and choose the location to save it. -![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-2.png) - -You can name it as you wish. I named it mine as “mypass”. The archives will have extension **.bcup** at the end and saved in the location of your choice. - -If you already have created one, simply choose it by clicking on “Open Archive File”. - -Next, buttercup will prompt you to enter a master password to the newly created archive. It is recommended to provide a strong password to protect the archives from the unauthorized access. - -![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-3.png) - -We have now created an archive and secured it with a master password. Similarly, you can create any number of archives and protect them with a password. - -Let us go ahead and add the account details in the archives. - -### Adding entries (login credentials) in the archives - -Once you created or opened the archive, you will see the following screen. - -![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-4.png) - -It is like a vault where we are going to save our login credentials of different online accounts. As you can see, we haven’t added any entries yet. Let us add some. - -To add a new entry, click “ADD ENTRY” button on the lower right corner and enter your account information you want to save. 
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-5-1.png)
-
-If you want to add any extra detail, there is an “ADD NEW FIELD” option right under each entry. Just click on it and add as many fields as you want to include in the entries.
-
-Once you have added all the entries, you will see them on the right pane of the Buttercup interface.
-
-![][6]
-
-### Creating new groups
-
-You can also group login details under different names for easy recognition. For example, you can group all your mail accounts under a distinct name, such as “my_mails”. By default, your login details will be saved under the “General” group. To create a new group, click the “NEW GROUP” button and provide a name for the group. When creating new entries inside a new group, just click on the group name and start adding the entries as shown above.
-
-### Manage and access login details
-
-The data stored in the archives can be edited, moved to different groups, or entirely deleted at any time. For instance, if you want to copy the username or password to the clipboard, right-click on the entry and choose the “Copy to Clipboard” option.
-
-![][7]
-
-To edit/modify the data in the future, just click the “Edit” button under the selected entry.
-
-### Save archives on a remote location
-
-By default, Buttercup will save your data on the local system. However, you can save it on different remote services, such as Dropbox, ownCloud/NextCloud, and WebDAV-based services.
-
-To connect to these services, go to **File -> Connect Cloud Sources**.
-
-![](https://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-8.png)
-
-And, choose the service you want to connect to and authorize it to save your data.
-
-![][8]
-
-You can also connect those services from the Buttercup welcome screen while adding the archives.
-
-### Import/Export
-
-Buttercup allows you to import or export data to or from other password managers, such as 1Password, LastPass and KeePass. You can also export your data and access it from another system or device, for example on your Android phone. You can export Buttercup vaults to CSV format as well.
-
-![][9]
-
-Buttercup is a simple, yet mature and fully functional password manager. It has been actively developed for years. If you are ever in need of a password manager, Buttercup might be a good choice. For more details, refer to the project website and GitHub page.
-
-And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!
-
-Cheers!
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/buttercup-a-free-secure-and-cross-platform-password-manager/ - -作者:[SK][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/keeweb-an-open-source-cross-platform-password-manager/ -[2]: https://github.com/buttercup/buttercup-desktop/releases/latest -[3]: https://aur.archlinux.org/packages/buttercup-desktop/ -[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ -[5]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[6]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-6.png -[7]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-7.png -[8]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-9.png -[9]: http://www.ostechnix.com/wp-content/uploads/2018/11/buttercup-10.png diff --git a/sources/tech/20181112 The Source History of Cat.md b/sources/tech/20181112 The Source History of Cat.md deleted file mode 100644 index 1cb1139033..0000000000 --- a/sources/tech/20181112 The Source History of Cat.md +++ /dev/null @@ -1,94 +0,0 @@ -The Source History of Cat -====== -I once had a debate with members of my extended family about whether a computer science degree is a degree worth pursuing. I was in college at the time and trying to decide whether I should major in computer science. My aunt and a cousin of mine believed that I shouldn’t. They conceded that knowing how to program is of course a useful and lucrative thing, but they argued that the field of computer science advances so quickly that everything I learned would almost immediately be outdated. Better to pick up programming on the side and instead major in a field like economics or physics where the basic principles would be applicable throughout my lifetime. - -I knew that my aunt and cousin were wrong and decided to major in computer science. (Sorry, aunt and cousin!) It is easy to see why the average person might believe that a field like computer science, or a profession like software engineering, completely reinvents itself every few years. We had personal computers, then the web, then phones, then machine learning… technology is always changing, so surely all the underlying principles and techniques change too. Of course, the amazing thing is how little actually changes. Most people, I’m sure, would be stunned to know just how old some of the important software on their computer really is. I’m not talking about flashy application software, admittedly—my copy of Firefox, the program I probably use the most on my computer, is not even two weeks old. But, if you pull up the manual page for something like `grep`, you will see that it has not been updated since 2010 (at least on MacOS). And the original version of `grep` was written in 1974, which in the computing world was back when dinosaurs roamed Silicon Valley. People (and programs) still depend on `grep` every day. - -My aunt and cousin thought of computer technology as a series of increasingly elaborate sand castles supplanting one another after each high tide clears the beach. The reality, at least in many areas, is that we steadily accumulate programs that have solved problems. 
We might have to occasionally modify these programs to avoid software rot, but otherwise they can be left alone. `grep` is a simple program that solves a still-relevant problem, so it survives. Most application programming is done at a very high level, atop a pyramid of much older code solving much older problems. The ideas and concepts of 30 or 40 years ago, far from being obsolete today, have in many cases been embodied in software that you can still find installed on your laptop. - -I thought it would be interesting to take a look at one such old program and see how much it had changed since it was first written. `cat` is maybe the simplest of all the Unix utilities, so I’m going to use it as my example. Ken Thompson wrote the original implementation of `cat` in 1969. If I were to tell somebody that I have a program on my computer from 1969, would that be accurate? How much has `cat` really evolved over the decades? How old is the software on our computers? - -Thanks to repositories like [this one][1], we can see exactly how `cat` has evolved since 1969. I’m going to focus on implementations of `cat` that are ancestors of the implementation I have on my Macbook. You will see, as we trace `cat` from the first versions of Unix down to the `cat` in MacOS today, that the program has been rewritten more times than you might expect—but it ultimately works more or less the same way it did fifty years ago. - -### Research Unix - -Ken Thompson and Dennis Ritchie began writing Unix on a PDP 7. This was in 1969, before C, so all of the early Unix software was written in PDP 7 assembly. The exact flavor of assembly they used was unique to Unix, since Ken Thompson wrote his own assembler that added some features on top of the assembler provided by DEC, the PDP 7’s manufacturer. Thompson’s changes are all documented in [the original Unix Programmer’s Manual][2] under the entry for `as`, the assembler. - -[The first implementation][3] of `cat` is thus in PDP 7 assembly. I’ve added comments that try to explain what each instruction is doing, but the program is still difficult to follow unless you understand some of the extensions Thompson made while writing his assembler. There are two important ones. First, the `;` character can be used to separate multiple statements on the same line. It appears that this was used most often to put system call arguments on the same line as the `sys` instruction. Second, Thompson added support for “temporary labels” using the digits 0 through 9. These are labels that can be reused throughout a program, thus being, according to the Unix Programmer’s Manual, “less taxing both on the imagination of the programmer and on the symbol space of the assembler.” From any given instruction, you can refer to the next or most recent temporary label `n` using `nf` and `nb` respectively. For example, if you have some code in a block labeled `1:`, you can jump back to that block from further down by using the instruction `jmp 1b`. (But you cannot jump forward to that block from above without using `jmp 1f` instead.) - -The most interesting thing about this first version of `cat` is that it contains two names we should recognize. There is a block of instructions labeled `getc` and a block of instructions labeled `putc`, demonstrating that these names are older than the C standard library. The first version of `cat` actually contained implementations of both functions. The implementations buffered input so that reads and writes were not done a character at a time. 
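-
-If you want to browse these early implementations yourself, the history repository linked above makes this easy. Here is a quick sketch, assuming you have Git installed (the tag names are assumptions; list the real ones with `git tag`):
-
-```
-$ git clone https://github.com/dspinellis/unix-history-repo
-$ cd unix-history-repo
-$ git tag | less          # pick a snapshot tag, for example one of the Research editions
-$ git checkout <tag-name> # then search the tree for the cat sources
-```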
- -The first version of `cat` did not last long. Ken Thompson and Dennis Ritchie were able to persuade Bell Labs to buy them a PDP 11 so that they could continue to expand and improve Unix. The PDP 11 had a different instruction set, so `cat` had to be rewritten. I’ve marked up [this second version][4] of `cat` with comments as well. It uses new assembler mnemonics for the new instruction set and takes advantage of the PDP 11’s various [addressing modes][5]. (If you are confused by the parentheses and dollar signs in the source code, those are used to indicate different addressing modes.) But it also leverages the `;` character and temporary labels just like the first version of `cat`, meaning that these features must have been retained when `as` was adapted for the PDP 11. - -The second version of `cat` is significantly simpler than the first. It is also more “Unix-y” in that it doesn’t just expect a list of filename arguments—it will, when given no arguments, read from `stdin`, which is what `cat` still does today. You can also give this version of `cat` an argument of `-` to indicate that it should read from `stdin`. - -In 1973, in preparation for the release of the Fourth Edition of Unix, much of Unix was rewritten in C. But `cat` does not seem to have been rewritten in C until a while after that. [The first C implementation][6] of `cat` only shows up in the Seventh Edition of Unix. This implementation is really fun to look through because it is so simple. Of all the implementations to follow, this one most resembles the idealized `cat` used as a pedagogic demonstration in K&R C. The heart of the program is the classic two-liner: - -``` -while ((c = getc(fi)) != EOF) - putchar(c); -``` - -There is of course quite a bit more code than that, but the extra code is mostly there to ensure that you aren’t reading and writing to the same file. The other interesting thing to note is that this implementation of `cat` only recognized one flag, `-u`. The `-u` flag could be used to avoid buffering input and output, which `cat` would otherwise do in blocks of 512 bytes. - -### BSD - -After the Seventh Edition, Unix spawned all sorts of derivatives and offshoots. MacOS is built on top of Darwin, which in turn is derived from the Berkeley Software Distribution (BSD), so BSD is the Unix offshoot we are most interested in. BSD was originally just a collection of useful programs and add-ons for Unix, but it eventually became a complete operating system. BSD seems to have relied on the original `cat` implementation up until the fourth BSD release, known as 4BSD, when support was added for a whole slew of new flags. [The 4BSD implementation][7] of `cat` is clearly derived from the original implementation, though it adds a new function to implement the behavior triggered by the new flags. The naming conventions already used in the file were adhered to—the `fflg` variable, used to mark whether input was being read from `stdin` or a file, was joined by `nflg`, `bflg`, `vflg`, `sflg`, `eflg`, and `tflg`, all there to record whether or not each new flag was supplied in the invocation of the program. These were the last command-line flags added to `cat`; the man page for `cat` today lists these flags and no others, at least on Mac OS. 4BSD was released in 1980, so this set of flags is 38 years old. - -`cat` would be entirely rewritten a final time for BSD Net/2, which was, among other things, an attempt to avoid licensing issues by replacing all AT&T Unix-derived code with new code. 
BSD Net/2 was released in 1991. This final rewrite of `cat` was done by Kevin Fall, who graduated from Berkeley in 1988 and spent the next year working as a staff member at the Computer Systems Research Group (CSRG). Fall told me that a list of Unix utilities still implemented using AT&T code was put up on a wall at CSRG and staff were told to pick the utilities they wanted to reimplement. Fall picked `cat` and `mknod`. The `cat` implementation bundled with MacOS today is built from a source file that still bears his name at the very top. His version of `cat`, even though it is a relatively trivial program, is today used by millions. - -[Fall’s original implementation][8] of `cat` is much longer than anything we have seen so far. Other than support for a `-?` help flag, it adds nothing in the way of new functionality. Conceptually, it is very similar to the 4BSD implementation. It is only longer because Fall separates the implementation into a “raw” mode and a “cooked” mode. The “raw” mode is `cat` classic; it prints a file character for character. The “cooked” mode is `cat` with all the 4BSD command-line options. The distinction makes sense but it also pads out the implementation so that it seems more complex at first glance than it actually is. There is also a fancy error handling function at the end of the file that further adds to its length. - -### MacOS - -In 2001, Apple launched Mac OS X. The launch was an important one for Apple, because Apple had spent many years trying and failing to replace its existing operating system (classic Mac OS), which had long been showing its age. There were two previous attempts to create a new operating system internally, but both went nowhere; in the end, Apple bought NeXT, Steve Jobs’ company, which had developed an operating system and object-oriented programming framework called NeXTSTEP. Apple took NeXTSTEP and used it as a basis for Mac OS X. NeXTSTEP was in part built on BSD, so using NeXTSTEP as a starting point for Mac OS X brought BSD-derived code right into the center of the Apple universe. - -The very first release of Mac OS X thus includes [an implementation][9] of `cat` pulled from the NetBSD project. NetBSD, which remains in development today, began as a fork of 386BSD, which in turn was based directly on BSD Net/2. So the first Mac OS X implementation of `cat` is Kevin Fall’s `cat`. The only thing that had changed over the intervening decade was that Fall’s error-handling function `err()` was removed and the `err()` function made available by `err.h` was used in its place. `err.h` is a BSD extension to the C standard library. - -The NetBSD implementation of `cat` was later swapped out for FreeBSD’s implementation of `cat`. [According to Wikipedia][10], Apple began using FreeBSD instead of NetBSD in Mac OS X 10.3 (Panther). But the Mac OS X implementation of `cat`, according to Apple’s own open source releases, was not replaced until Mac OS X 10.5 (Leopard) was released in 2007. The [FreeBSD implementation][11] that Apple swapped in for the Leopard release is the same implementation on Apple computers today. As of 2018, the implementation has not been updated or changed at all since 2007. - -So the Mac OS `cat` is old. As it happens, it is actually two years older than its 2007 appearance in MacOS X would suggest. [This 2005 change][12], which is visible in FreeBSD’s Github mirror, was the last change made to FreeBSD’s `cat` before Apple pulled it into Mac OS X. 
So the Mac OS X `cat` implementation, which has not been kept in sync with FreeBSD’s `cat` implementation, is officially 13 years old. There’s a larger debate to be had about how much software can change before it really counts as the same software; in this case, the source file has not changed at all since 2005. - -The `cat` implementation used by Mac OS today is not that different from the implementation that Fall wrote for the 1991 BSD Net/2 release. The biggest difference is that a whole new function was added to provide Unix domain socket support. At some point, a FreeBSD developer also seems to have decided that Fall’s `raw_args()` function and `cook_args()` should be combined into a single function called `scanfiles()`. Otherwise, the heart of the program is still Fall’s code. - -I asked Fall how he felt about having written the `cat` implementation now used by millions of Apple users, either directly or indirectly through some program that relies on `cat` being present. Fall, who is now a consultant and a co-author of the most recent editions of TCP/IP Illustrated, says that he is surprised when people get such a thrill out of learning about his work on `cat`. Fall has had a long career in computing and has worked on many high-profile projects, but it seems that many people still get most excited about the six months of work he put into rewriting `cat` in 1989. - -### The Hundred-Year-Old Program - -In the grand scheme of things, computers are not an old invention. We’re used to hundred-year-old photographs or even hundred-year-old camera footage. But computer programs are in a different category—they’re high-tech and new. At least, they are now. As the computing industry matures, will we someday find ourselves using programs that approach the hundred-year-old mark? - -Computer hardware will presumably change enough that we won’t be able to take an executable compiled today and run it on hardware a century from now. Perhaps advances in programming language design will also mean that nobody will understand C in the future and `cat` will have long since been rewritten in another language. (Though C has already been around for fifty years, and it doesn’t look like it is about to be replaced any time soon.) But barring all that, why not just keep using the `cat` we have forever? - -I think the history of `cat` shows that some ideas in computer science are very durable indeed. Indeed, with `cat`, both the idea and the program itself are old. It may not be accurate to say that the `cat` on my computer is from 1969. But I could make a case for saying that the `cat` on my computer is from 1989, when Fall wrote his implementation of `cat`. Lots of other software is just as ancient. So maybe we shouldn’t think of computer science and software development primarily as fields that disrupt the status quo and invent new things. Our computer systems are built out of historical artifacts. At some point, we may all spend more time trying to understand and maintain those historical artifacts than we spend writing new code. - -If you enjoyed this post, more like it come out every two weeks! Follow [@TwoBitHistory][13] on Twitter or subscribe to the [RSS feed][14] to make sure you know when a new post is out. 
- - --------------------------------------------------------------------------------- - -via: https://twobithistory.org/2018/11/12/cat.html - -作者:[Two-Bit History][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twobithistory.org -[b]: https://github.com/lujun9972 -[1]: https://github.com/dspinellis/unix-history-repo -[2]: https://www.bell-labs.com/usr/dmr/www/man11.pdf -[3]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-1-cat-pdp7-s -[4]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-2-cat-pdp11-s -[5]: https://en.wikipedia.org/wiki/PDP-11_architecture#Addressing_modes -[6]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-3-cat-v7-c -[7]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-4-cat-bsd4-c -[8]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-5-cat-net2-c -[9]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-6-cat-macosx-c -[10]: https://en.wikipedia.org/wiki/Darwin_(operating_system) -[11]: https://gist.github.com/sinclairtarget/47143ba52b9d9e360d8db3762ee0cbf5#file-7-cat-macos-10-13-c -[12]: https://github.com/freebsd/freebsd/commit/a76898b84970888a6fd015e15721f65815ea119a#diff-6e405d5ab5b47ca2a131ac7955e5a16b -[13]: https://twitter.com/TwoBitHistory -[14]: https://twobithistory.org/feed.xml -[15]: https://twitter.com/TwoBitHistory/status/1051826516844322821?ref_src=twsrc%5Etfw diff --git a/sources/tech/20181113 4 tips for learning Golang.md b/sources/tech/20181113 4 tips for learning Golang.md deleted file mode 100644 index 50921b57a3..0000000000 --- a/sources/tech/20181113 4 tips for learning Golang.md +++ /dev/null @@ -1,75 +0,0 @@ -4 tips for learning Golang -====== -Arriving in Golang land: A senior developer's journey. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_laptop_code_programming_mountain_view.jpg?itok=yx5buqkr) - -In the summer of 2014... - -> IBM: "We need you to go figure out this Docker thing." -> Me: "OK." -> IBM: "Start contributing and just get involved." -> Me: "OK." (internal voice): "This is written in Go. What's that?" (Googles) "Oh, a programming language. I've learned a few of those in my career. Can't be that hard." - -My university's freshman programming class was taught using VAX assembler. In data structures class, we used Pascal—loaded via diskette on tired, old PCs in the library's computer center. In one upper-level course, I had a professor that loved to show all examples in ADA. I learned a bit of C via playing with various Unix utilities' source code on our Sun workstations. At IBM we used C—and some x86 assembler—for the OS/2 source code, and we heavily used C++'s object-oriented features for a joint project with Apple. I learned shell scripting soon after, starting with csh, but moving to Bash after finding Linux in the mid-'90s. I was thrust into learning m4 (arguably more of a macro-processor than a programming language) while working on the just-in-time (JIT) compiler in IBM's custom JVM code when porting it to Linux in the late '90s. - -Fast-forward 20 years... I'd never been nervous about learning a new programming language. But [Go][1] felt different. I was going to contribute publicly, upstream on GitHub, visible to anyone interested enough to look! 
I didn't want to be the laughingstock, the Go newbie as a 40-something-year-old senior developer! We all know that programmer pride that doesn't like to get bruised, no matter your experience level. - -My early investigations revealed that Go seemed more committed to its "idiomatic-ness" than some languages. It wasn't just about getting the code to compile; I needed to be able to write code "the Go way." - -Now that I'm four years and several hundred pull requests into my personal Go journey, I don't claim to be an expert, but I do feel a lot more comfortable contributing and writing Go code than I did in 2014. So, how do you teach an old guy new tricks—or at least a new programming language? Here are four steps that were valuable in my own journey to Golang land. - -### 1. Don't skip the fundamentals - -While you might be able to get by with copying code and hunting and pecking your way through early learnings (who has time to read the manual?!?), Go has a very readable [language spec][2] that was clearly written to be read and understood, even if you don't have a master's in language or compiler theory. Given that Go made some unique decisions about the order of the **parameter:type** constructs and has interesting language features like channels and goroutines, it is important to get grounded in these new concepts. Reading this document alongside [Effective Go][3], another great resource from the Golang creators, will give you a huge boost in readiness to use the language effectively and properly. - -### 2. Learn from the best - -There are many valuable resources for digging in and taking your Go knowledge to the next level. All the talks from any recent [GopherCon][4] can be found online, like this exhaustive list from [GopherCon US in 2018][5]. Talks range in expertise and skill level, but you can easily find something you didn't know about Go by watching the talks. [Francesc Campoy][6] created a Go programming video series called [JustForFunc][7] that has an ever-increasing number of episodes to expand your Go knowledge and understanding. A quick search on "Golang" reveals many other video and online resources for those who want to learn more. - -Want to look at code? Many of the most popular cloud-native projects on GitHub are written in Go: [Docker/Moby][8], [Kubernetes][9], [Istio][10], [containerd][11], [CoreDNS][12], and many others. Language purists might rate some projects better than others regarding idiomatic-ness, but these are all good starting points to see how large codebases are using Go in highly active projects. - -### 3. Use good language tools - -You will learn quickly about the value of [gofmt][13]. One of the beautiful aspects of Go is that there is no arguing about code formatting guidelines per project— **gofmt** is built into the language runtime, and it formats Go code according to a set of stable, well-understood language rules. I don't know of any Golang-based project that doesn't insist on checking with **gofmt** for pull requests as part of continuous integration. - -Beyond the wide, valuable array of useful tools built directly into the runtime/SDK, I strongly recommend using an editor or IDE with good Golang support features. Since I find myself much more often at a command line, I rely on Vim plus the great [vim-go][14] plugin. I also like what Microsoft has offered with [VS Code][15], especially with its [Go language][16] plugins. - -Looking for a debugger? 
The [Delve][17] project has been improving and maturing and is a strong contender for doing [gdb][18]-like debugging on Go binaries. - -### 4. Jump in and write some Go! - -You'll never get better at writing Go unless you start trying. Find a project that has some "help needed" issues flagged and make a contribution. If you are already using an open source project written in Go, find out if there are some bugs that have beginner-level solutions and make your first pull request. As with most things in life, the only real way to improve is through practice, so get going. - -And, as it turns out, apparently you can teach an old senior developer new tricks—or languages at least. - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/11/learning-golang - -作者:[Phill Estes][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/estesp -[b]: https://github.com/lujun9972 -[1]: https://golang.org/ -[2]: https://golang.org/ref/spec -[3]: https://golang.org/doc/effective_go.html -[4]: https://www.gophercon.com/ -[5]: https://tqdev.com/2018-gophercon-2018-videos-online -[6]: https://twitter.com/francesc -[7]: https://www.youtube.com/channel/UC_BzFbxG2za3bp5NRRRXJSw -[8]: https://github.com/moby/moby -[9]: https://github.com/kubernetes/kubernetes -[10]: https://github.com/istio/istio -[11]: https://github.com/containerd/containerd -[12]: https://github.com/coredns/coredns -[13]: https://blog.golang.org/go-fmt-your-code -[14]: https://github.com/fatih/vim-go -[15]: https://code.visualstudio.com/ -[16]: https://code.visualstudio.com/docs/languages/go -[17]: https://github.com/derekparker/delve -[18]: https://www.gnu.org/software/gdb/ diff --git a/sources/tech/20181113 An introduction to Udev- The Linux subsystem for managing device events.md b/sources/tech/20181113 An introduction to Udev- The Linux subsystem for managing device events.md deleted file mode 100644 index c406c491b0..0000000000 --- a/sources/tech/20181113 An introduction to Udev- The Linux subsystem for managing device events.md +++ /dev/null @@ -1,228 +0,0 @@ -An introduction to Udev: The Linux subsystem for managing device events -====== -Create a script that triggers your computer to do a specific action when a specific device is plugged in. -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/osdc_520x292_opensourceprescription.png?itok=gFrc_GTH) - -Udev is the Linux subsystem that supplies your computer with device events. In plain English, that means it's the code that detects when you have things plugged into your computer, like a network card, external hard drives (including USB thumb drives), mouses, keyboards, joysticks and gamepads, DVD-ROM drives, and so on. That makes it a potentially useful utility, and it's well-enough exposed that a standard user can manually script it to do things like performing certain tasks when a certain hard drive is plugged in. - -This article teaches you how to create a [udev][1] script triggered by some udev event, such as plugging in a specific thumb drive. Once you understand the process for working with udev, you can use it to do all manner of things, like loading a specific driver when a gamepad is attached, or performing an automatic backup when you attach your backup drive. 
-
-### A basic script
-
-The best way to work with udev is in small chunks. Don't write the entire script upfront, but instead start with something that simply confirms that udev triggers some custom event.
-
-Depending on your script's goal, you can't guarantee you will ever see its results with your own eyes, so make sure your script logs that it was successfully triggered. The usual place for log files is in the **/var** directory, but that's mostly the root user's domain. For testing, use **/tmp**, which is accessible by normal users and usually gets cleaned out with a reboot.
-
-Open your favorite text editor and enter this simple script:
-
-```
-#!/usr/bin/bash
-
-# Write the current timestamp to the log file ($date would be an empty,
-# unset variable, so call the date command itself)
-date > /tmp/udev.log
-```
-
-Place this in **/usr/local/bin** or some such place in the default executable path. Call it **trigger.sh** and, of course, make it executable with **chmod +x**.
-
-```
-$ sudo mv trigger.sh /usr/local/bin
-$ sudo chmod +x /usr/local/bin/trigger.sh
-```
-
-This script has nothing to do with udev. When it executes, the script places a timestamp in the file **/tmp/udev.log**. Test the script yourself:
-
-```
-$ /usr/local/bin/trigger.sh
-$ cat /tmp/udev.log
-Tue Oct 31 01:05:28 NZDT 2035
-```
-
-The next step is to make udev trigger the script.
-
-### Unique device identification
-
-In order for your script to be triggered by a device event, udev must know under what conditions it should call the script. In real life, you can identify a thumb drive by its color, the manufacturer, and the fact that you just plugged it into your computer. Your computer, however, needs a different set of criteria.
-
-Udev identifies devices by serial numbers, manufacturers, and even vendor ID and product ID numbers. Since this is early in your udev script's lifespan, be as broad, non-specific, and all-inclusive as possible. In other words, you first want to catch nearly any valid udev event to trigger your script.
-
-With the **udevadm monitor** command, you can tap into udev in real time and see what it sees when you plug in different devices. Become root and try it.
-
-```
-$ su
-# udevadm monitor
-```
-
-The monitor function prints received events for:
-
-  * UDEV: the event udev sends out after rule processing
-  * KERNEL: the kernel uevent
-
-
-
-With **udevadm monitor** running, plug in a thumb drive and watch as all kinds of information is spewed out onto your screen. Notice that the type of event is an **ADD** event. That's a good way to identify what type of event you want.
-
-The **udevadm monitor** command provides a lot of good info, but you can see it with prettier formatting with the command **udevadm info**, assuming you know where your thumb drive is currently located in your **/dev** tree. If not, unplug and plug your thumb drive back in, then immediately issue this command:
-
-```
-$ su -c 'dmesg | tail | fgrep -i sd*'
-```
-
-If that command returned **sdb: sdb1**, for instance, you know the kernel has assigned your thumb drive the **sdb** label.
-
-Alternatively, you can use the **lsblk** command to see all drives attached to your system, including their sizes and partitions.
-
-Now that you have established where your drive is located in your filesystem, you can view udev information about that device with this command:
-
-```
-# udevadm info -a -n /dev/sdb | less
-```
-
-This returns a lot of information. Focus on the first block of info for now.
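-
-If the pager output feels overwhelming, you can pre-filter it for the attributes that usually make good rule candidates. Here is a quick sketch, assuming your drive is still at **/dev/sdb**:
-
-```
-# udevadm info -a -n /dev/sdb | grep -E 'SUBSYSTEM|idVendor|idProduct|serial'
-```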
- -Your job is to pick out parts of udev's report about a device that are most unique to that device, then tell udev to trigger your script when those unique attributes are detected. - -The **udevadm info** process reports on a device (specified by the device path), then "walks" up the chain of parent devices. For every device found, it prints all possible attributes using a key-value format. You can compose a rule to match according to the attributes of a device plus attributes from one single parent device. - -``` -looking at device '/devices/000:000/blah/blah//block/sdb': -  KERNEL=="sdb" -  SUBSYSTEM=="block" -  DRIVER=="" -  ATTR{ro}=="0" -  ATTR{size}=="125722368" -  ATTR{stat}==" 2765 1537 5393" -  ATTR{range}=="16" -  ATTR{discard\_alignment}=="0" -  ATTR{removable}=="1" -  ATTR{blah}=="blah" -``` - -A udev rule must contain one attribute from one single parent device. - -Parent attributes are things that describe a device from the most basic level, such as it's something that has been plugged into a physical port or it is something with a size or this is a removable device. - -Since the KERNEL label of **sdb** can change depending upon how many other drives were plugged in before you plugged that thumb drive in, that's not the optimal parent attribute for a udev rule. However, it works for a proof of concept, so you could use it. An even better candidate is the SUBSYSTEM attribute, which identifies that this is a "block" system device (which is why the **lsblk** command lists the device). - -Open a file called **80-local.rules** in **/etc/udev/rules.d** and enter this code: - -``` -SUBSYSTEM=="block", ACTION=="add", RUN+="/usr/local/bin/trigger.sh" -``` - -Save the file, unplug your test thumb drive, and reboot. - -Wait, reboot on a Linux machine? - -Theoretically, you can just issue **udevadm control --reload** , which should load all rules, but at this stage in the game, it's best to eliminate all variables. Udev is complex enough, and you don't want to be lying in bed all night wondering if that rule didn't work because of a syntax error or if you just should have rebooted. So reboot regardless of what your POSIX pride tells you. - -When your system is back online, switch to a text console (with Ctl+Alt+F3 or similar) and plug in your thumb drive. If you are running a recent kernel, you will probably see a bunch of output in your console when you plug in the drive. If you see an error message such as Could not execute /usr/local/bin/trigger.sh, you probably forgot to make the script executable. Otherwise, hopefully all you see is a device was plugged in, it got some kind of kernel device assignment, and so on. - -Now, the moment of truth: - -``` -$ cat /tmp/udev.log -Tue Oct 31 01:35:28 NZDT 2035 -``` - -If you see a very recent date and time returned from **/tmp/udev.log** , udev has successfully triggered your script. - -### Refining the rule into something useful - -The problem with this rule is that it's very generic. Plugging in a mouse, a thumb drive, or someone else's thumb drive will indiscriminately trigger your script. Now is the time to start focusing on the exact thumb drive you want to trigger your script. - -One way to do this is with the vendor ID and product ID. To get these numbers, you can use the **lsusb** command. - -``` -$ lsusb -Bus 001 Device 002: ID 8087:0024 Slacker Corp. Hub -Bus 002 Device 002: ID 8087:0024 Slacker Corp. Hub -Bus 003 Device 005: ID 03f0:3307 TyCoon Corp. 
-Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 hub
-Bus 001 Device 003: ID 13d3:5165 SBo Networks
-```
-
-In this example, the **03f0:3307** before **TyCoon Corp.** denotes the idVendor and idProduct attributes. You can also see these numbers in the output of **udevadm info -a -n /dev/sdb | grep vendor**, but I find the output of **lsusb** a little easier on the eyes.
-
-You can now include these attributes in your rule.
-
-```
-SUBSYSTEM=="block", ATTRS{idVendor}=="03f0", ACTION=="add", RUN+="/usr/local/bin/trigger.sh"
-```
-
-Test this (yes, you should still reboot, just to make sure you're getting fresh reactions from udev), and it should work the same as before, only now if you plug in, say, a thumb drive manufactured by a different company (therefore with a different idVendor) or a mouse or a printer, the script won't be triggered.
-
-Keep adding new attributes to further focus in on that one unique thumb drive you want to trigger your script. Using **udevadm info -a -n /dev/sdb**, you can find out things like the vendor name, sometimes a serial number, or the product name, and so on.
-
-For your own sanity, be sure to add only one new attribute at a time. The most common mistake I have made (and have seen other people online make) is to throw a bunch of attributes into a udev rule at once and then wonder why the thing no longer works. Testing attributes one by one is the safest way to ensure udev can identify your device successfully.
-
-### Security
-
-This brings up the security concerns of writing udev rules to automatically do something when a drive is plugged in. On my machines, I don't even have auto-mount turned on, and yet this article proposes scripts and rules that execute commands just by having something plugged in.
-
-Two things to bear in mind here.
-
-  1. Focus your udev rules once you have them working so they trigger scripts only when you really want them to. Executing a script that blindly copies data to or from your computer is a bad idea in case anyone who happens to be carrying the same brand of thumb drive plugs it into your box.
-  2. Do not write your udev rule and scripts and then forget about them. I know which computers have my udev rules on them, and those boxes are most often my personal computers, not the ones I take around to conferences or have in my office at work. The more "social" a computer is, the less likely it is to get a udev rule on it that could potentially result in my data ending up on someone else's device, or someone else's data or malware on my device.
-
-
-
-In other words, as with so much of the power provided by a GNU system, it is your job to be mindful of how you are wielding that power. If you abuse it or fail to treat it with respect, it very well could go horribly wrong.
-
-### Udev in the real world
-
-Now that you can confirm that your script is triggered by udev, you can turn your attention to the function of the script. Right now, it is useless, doing nothing more than logging the fact that it has been executed.
-
-I use udev to trigger [automated backups][2] of my thumb drives. The idea is that the master copies of my active documents are on my thumb drive (since it goes everywhere I go and could be worked on at any moment), and those master documents get backed up to my computer each time I plug the drive into that machine. In other words, my computer is the backup drive and my production data is mobile. The source code is available, so feel free to look at the code of attachup for further examples of constraining your udev tests. Since that's what I use udev for the most, it's the example I'll use here, but udev can grab lots of other things, like gamepads (this is useful on systems that aren't set to load the xboxdrv module when a gamepad is attached) and cameras and microphones (useful to set inputs when a specific mic is attached), so realize that it's good for a lot more than this one example.
-
-A simple version of my backup system is a two-command process:
-
-```
-SUBSYSTEM=="block", ATTRS{idVendor}=="03f0", ACTION=="add", SYMLINK+="safety%n"
-SUBSYSTEM=="block", ATTRS{idVendor}=="03f0", ACTION=="add", RUN+="/usr/local/bin/trigger.sh"
-```
-
-The first line detects my thumb drive with the attributes already discussed, then assigns the thumb drive a symlink within the device tree. The symlink it assigns is **safety%n**. The **%n** is a udev macro that resolves to whatever number the kernel gives to the device, such as sdb1, sdb2, sdb3, and so on. So **%n** would be the 1 or the 2 or the 3.
-
-This creates a symlink in the dev tree, so it does not interfere with the normal process of plugging in a device. This means that if you use a desktop environment that likes to auto-mount devices, you won't be causing problems for it.
-
-The second line runs the script.
-
-My backup script looks like this:
-
-```
-#!/usr/bin/bash
-
-# Mount the first partition of the drive using the udev-assigned symlink
-mount /dev/safety1 /mnt/hd
-# Give the mount a moment to settle before reading from it
-sleep 2
-# Back up the drive, then unmount it only if the copy succeeded
-rsync -az /mnt/hd/ /home/seth/backups/ && umount /dev/safety1
-```
-
-The script uses the symlink, which avoids the possibility of udev naming the drive something unexpected (for instance, if I have a thumb drive called DISK plugged into my computer already, and I plug in my other thumb drive also called DISK, the second one will be labeled DISK_, which would foil my script). It mounts **safety1** (the first partition of the drive) at my preferred mount point of **/mnt/hd**.
-
-Once safely mounted, it uses [rsync][3] to back up the drive to my backup folder (my actual script uses rdiff-backup, and yours can use whatever automated backup solution you prefer).
-
-### Udev is your dev
-
-Udev is a very flexible system and enables you to define rules and functions in ways that few other systems dare provide users. Learn it and use it, and enjoy the power of POSIX.
-
-This article builds on content from the [Slackermedia Handbook][4], which is licensed under the [GNU Free Documentation License 1.3][5].
- --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/11/udev - -作者:[Seth Kenlon][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/seth -[b]: https://github.com/lujun9972 -[1]: https://linux.die.net/man/8/udev -[2]: https://gitlab.com/slackermedia/attachup -[3]: https://opensource.com/article/17/1/rsync-backup-linux -[4]: http://slackermedia.info/handbook/doku.php?id=backup -[5]: http://www.gnu.org/licenses/fdl-1.3.html diff --git a/sources/tech/20181113 The alias And unalias Commands Explained With Examples.md b/sources/tech/20181113 The alias And unalias Commands Explained With Examples.md deleted file mode 100644 index 14003432b3..0000000000 --- a/sources/tech/20181113 The alias And unalias Commands Explained With Examples.md +++ /dev/null @@ -1,154 +0,0 @@ -The alias And unalias Commands Explained With Examples -====== -![](https://www.ostechnix.com/wp-content/uploads/2018/11/alias-command-720x340.png) - -You may forget the complex and lengthy Linux commands after certain period of time unless you’re a heavy command line user. Sure, there are a few ways to [**recall the forgotten commands**][1]. You could simply [**save the frequently used commands**][2] and use them on demand. Also, you can [**bookmark the important commands**][3] in your Terminal and use whenever you want. And, of course there is already a built-in **“history”** command available to help you to remember the commands. Another easiest way to remember such long commands is to simply create an alias (shortcut) to them. Not just long commands, you can create alias to any frequently used Linux commands for easier repeated invocation. By this approach, you don’t need to memorize those commands anymore. In this guide, we are going to learn about **alias** and **unalias** commands with examples in Linux. - -### The alias command - -The **alias** command is used to run any command or set of commands (inclusive of many options, arguments) with a user-defined string. The string could be a simple name or abbreviations for the commands regardless of how complex the original commands are. You can use the aliases as the way you use the normal Linux commands. The alias command comes preinstalled in shells, including BASH, Csh, Ksh and Zsh etc. - -The general syntax of alias command is: - -``` -alias [alias-name[=string]...] -``` - -Let us go ahead and see some examples. - -**List aliases** - -You might already have aliases in your system. Some applications may create the aliases automatically when you install them. To view the list of existing aliases, run: - -``` -$ alias -``` - -or, - -``` -$ alias -p -``` - -I have the following aliases in my Arch Linux system. - -``` -alias betty='/home/sk/betty/main.rb' -alias ls='ls --color=auto' -alias pbcopy='xclip -selection clipboard' -alias pbpaste='xclip -selection clipboard -o' -alias update='newsbeuter -r && sudo pacman -Syu' -``` - -**Create a new alias** - -Like I already said, you don’t need to memorize the lengthy and complex commands. You don’t even need to run long commands over and over. Just create an alias to the command with easily recognizable name and run it whenever you want. Let us say, you want to use this command often. 
-
-```
-$ du -h --max-depth=1 | sort -hr
-```
-
-This command shows how much disk space each sub-directory consumes in the current working directory. This command is a bit long. Instead of remembering the whole command, we can easily create an alias like below:
-
-```
-$ alias du='du -h --max-depth=1 | sort -hr'
-```
-
-Here, **du** is the alias name. You can use any name for the alias to easily remember it later.
-
-You can use either single or double quotes when creating an alias. It makes no difference.
-
-Now you can just run the alias (i.e., **du** in our case) instead of the full command. Both will produce the same result.
-
-The aliases will expire with the current shell session. They will be gone once you log out of the current session. In order to make the aliases permanent, you need to add them to your shell’s configuration file.
-
-On the Bash shell, edit the **~/.bashrc** file:
-
-```
-$ nano ~/.bashrc
-```
-
-Add the aliases one by one:
-![](https://www.ostechnix.com/wp-content/uploads/2018/11/alias.png)
-
-Save and quit the file. Then, apply the changes by running the following command:
-
-```
-$ source ~/.bashrc
-```
-
-Now, the aliases are persistent across sessions.
-
-On Zsh, you need to add the aliases to the **~/.zshrc** file. Similarly, add your aliases to the **~/.config/fish/config.fish** file if you use the Fish shell.
-
-**Viewing a specific aliased command**
-
-As I mentioned earlier, you can view the list of all aliases in your system using the ‘alias’ command. If you want to view the command associated with a given alias, for example ‘du’, just run:
-
-```
-$ alias du
-alias du='du -h --max-depth=1 | sort -hr'
-```
-
-As you can see, the above command displays the command associated with the word ‘du’.
-
-For more details about the alias command, refer to the man pages:
-
-```
-$ man alias
-```
-
-### The unalias command
-
-As the name says, the **unalias** command simply removes aliases from your system. The typical syntax of the unalias command is:
-
-```
-unalias <alias-name>
-```
-
-To remove an aliased command, for example the ‘du’ alias which we created earlier, simply run:
-
-```
-$ unalias du
-```
-
-The unalias command only removes the alias from the current session. To remove an alias permanently, you also need to delete its definition from your shell’s configuration file.
-
-Another way to override an alias is to create a new alias with the same name.
-
-To remove all aliases from the current session, use the **-a** flag:
-
-```
-$ unalias -a
-```
-
-For more details, refer to the man pages.
-
-```
-$ man unalias
-```
-
-Creating aliases for complex and lengthy commands will save you some time if you run those commands over and over. Now it is time to create aliases for your frequently used commands.
-
-And, that’s all for now. Hope this helps. More good stuff to come. Stay tuned!
-
-Cheers!
- - - --------------------------------------------------------------------------------- - -via: https://www.ostechnix.com/the-alias-and-unalias-commands-explained-with-examples/ - -作者:[SK][a] -选题:[lujun9972][b] -译者:[译者ID](https://github.com/译者ID) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.ostechnix.com/author/sk/ -[b]: https://github.com/lujun9972 -[1]: https://www.ostechnix.com/easily-recall-forgotten-linux-commands/ -[2]: https://www.ostechnix.com/save-commands-terminal-use-demand/ -[3]: https://www.ostechnix.com/bookmark-linux-commands-easier-repeated-invocation/ diff --git a/sources/tech/20181114 How to use systemd-nspawn for Linux system recovery.md b/sources/tech/20181114 How to use systemd-nspawn for Linux system recovery.md new file mode 100644 index 0000000000..3355436cc3 --- /dev/null +++ b/sources/tech/20181114 How to use systemd-nspawn for Linux system recovery.md @@ -0,0 +1,148 @@ +How to use systemd-nspawn for Linux system recovery +====== +Tap into systemd's ability to launch containers to repair a damaged system's root filesystem. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/mistake_bug_fix_find_error.png?itok=PZaz3dga) + +For as long as GNU/Linux systems have existed, system administrators have needed to recover from root filesystem corruption, accidental configuration changes, or other situations that kept the system from booting into a "normal" state. + +Linux distributions typically offer one or more menu options at boot time (for example, in the GRUB menu) that can be used for rescuing a broken system; typically they boot the system into a single-user mode with most system services disabled. In the worst case, the user could modify the kernel command line in the bootloader to use the standard shell as the init (PID 1) process. This method is the most complex and fraught with complications, which can lead to frustration and lost time when a system needs rescuing. + +Most importantly, these methods all assume that the damaged system has a physical console of some sort, but this is no longer a given in the age of cloud computing. Without a physical console, there are few (if any) options to influence the boot process this way. Even physical machines may be small, embedded devices that don't offer an easy-to-use console, and finding the proper serial port cables and adapters and setting up a serial terminal emulator, all to use a serial console port while dealing with an emergency, is often complicated. + +When another system (of the same architecture and generally similar configuration) is available, a common technique to simplify the repair process is to extract the storage device(s) from the damaged system and connect them to the working system as secondary devices. With physical systems, this is usually straightforward, but most cloud computing platforms can also support this since they allow the root storage volume of the damaged instance to be mounted on another instance. + +Once the root filesystem is attached to another system, addressing filesystem corruption is straightforward using **fsck** and other tools. Addressing configuration mistakes, broken packages, or other issues can be more complex since they require mounting the filesystem and locating and changing the correct configuration files or databases. 
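+
+Before making any changes, it helps to confirm how the rescue system sees the attached volume. Here is a minimal sketch (the device names it prints will vary from system to system):
+
+```
+$ lsblk -f   # list attached block devices, their partitions, and filesystem types
+```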
+
+### Using systemd
+
+Before [**systemd**][1], editing configuration files with a text editor was a practical way to correct a configuration. Locating the necessary files and understanding their contents may be a separate challenge, which is beyond the scope of this article.
+
+When the GNU/Linux system uses **systemd** though, many configuration changes are best made using the tools it provides—enabling and disabling services, for example, requires the creation or removal of symbolic links in various locations. The **systemctl** tool is used to make these changes, but using it requires a **systemd** instance to be running and listening (on D-Bus) for requests. When the root filesystem is mounted as an additional filesystem on another machine, the running **systemd** instance can't be used to make these changes.
+
+Manually launching the target system's **systemd** is not practical either, since it is designed to be the PID 1 process on a system and manage all other processes, which would conflict with the already-running instance on the system used for the repairs.
+
+Thankfully, **systemd** has the ability to launch containers, fully encapsulated GNU/Linux systems with their own PID 1 and environment that utilize various namespace features offered by the Linux kernel. Unlike tools like Docker and Rocket, **systemd** doesn't require a container image to launch a container; it can launch one rooted at any point in the existing filesystem. This is done using the **systemd-nspawn** tool, which will create the necessary system namespaces and launch the initial process in the container, then provide a console in the container. In contrast to **chroot**, which only changes the apparent root of the filesystem, this type of container will have a separate filesystem namespace, suitable filesystems mounted on **/dev**, **/run**, and **/proc**, and a separate process namespace and IPC namespaces. Consult the **systemd-nspawn** [man page][2] to learn more about its capabilities.
+
+### An example to show how it works
+
+In this example, the storage device containing the damaged system's root filesystem has been attached to a running system, where it appears as **/dev/vdc**. The device name will vary based on the number of existing storage devices, the type of device, and the method used to connect it to the system. The root filesystem could use the entire storage device or be in a partition within the device; since the most common (simple) configuration places the root filesystem in the device's first partition, this example will use **/dev/vdc1**. Make sure to replace the device name in the commands below with your system's correct device name.
+
+The damaged root filesystem may also be more complex than a single filesystem on a device; it may be a volume in an LVM volume set or on a set of devices combined into a software RAID device. In these cases, the necessary steps to compose and activate the logical device holding the filesystem must be performed before it will be available for mounting. Again, those steps are beyond the scope of this article.
+
+#### Prerequisites
+
+First, ensure the **systemd-nspawn** tool is installed—most GNU/Linux distributions don't install it by default. It's provided by the **systemd-container** package on most distributions, so use your distribution's package manager to install that package. The instructions in this example were tested using Debian 9 but should work similarly on any modern GNU/Linux distribution.
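+
+For example, on Debian and its derivatives, that single installation step looks like this (shown as an illustration; substitute your distribution's package manager as needed):
+
+```
+$ sudo apt install systemd-container
+```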
+ +Using the commands below will almost certainly require root permissions, so you'll either need to log in as root, use **sudo** to obtain a shell with root permissions, or prefix each of the commands with **sudo**. + +#### Verify and mount the fileystem + +First, use **fsck** to verify the target filesystem's structures and content: + +``` +$ fsck /dev/vdc1 +``` + +If it finds any problems with the filesystem, answer the questions appropriately to correct them. If the filesystem is sufficiently damaged, it may not be repairable, in which case you'll have to find other ways to extract its contents. + +Now, create a temporary directory and mount the target filesystem onto that directory: + +``` +$ mkdir /tmp/target-rescue +$ mount /dev/vdc1 /tmp/target-rescue +``` + +With the filesystem mounted, launch a container with that filesystem as its root filesystem: + +``` +$ systemd-nspawn --directory /tmp/target-rescue --boot -- --unit rescue.target +``` + +The command-line arguments for launching the container are: + + * **\--directory /tmp/target-rescue** provides the path of the container's root filesystem. + * **\--boot** searches for a suitable init program in the container's root filesystem and launches it, passing parameters from the command line to it. In this example, the target system also uses **systemd** as its PID 1 process, so the remaining parameters are intended for it. If the target system you are repairing uses any other tool as its PID 1 process, you'll need to adjust the parameters accordingly. + * **\--** separates parameters for **systemd-nspawn** from those intended for the container's PID 1 process. + * **\--unit rescue.target** tells **systemd** in the container the name of the target it should try to reach during the boot process. In order to simplify the rescue operations in the target system, boot it into "rescue" mode rather than into its normal multi-user mode. + + + +If all goes well, you should see output that looks similar to this: + +``` +Spawning container target-rescue on /tmp/target-rescue. +Press ^] three times within 1s to kill container. +systemd 232 running in system mode. (+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN) +Detected virtualization systemd-nspawn. +Detected architecture arm. + +Welcome to Debian GNU/Linux 9 (Stretch)! + +Set hostname to . +Failed to install release agent, ignoring: No such file or directory +[  OK  ] Reached target Swap. +[  OK  ] Listening on Journal Socket (/dev/log). +[  OK  ] Started Dispatch Password Requests to Console Directory Watch. +[  OK  ] Reached target Encrypted Volumes. +[  OK  ] Created slice System Slice. +         Mounting POSIX Message Queue File System... +[  OK  ] Listening on Journal Socket. +         Starting Set the console keyboard layout... +         Starting Restore / save the current clock... +         Starting Journal Service... +         Starting Remount Root and Kernel File Systems... +[  OK  ] Mounted POSIX Message Queue File System. +[  OK  ] Started Journal Service. +[  OK  ] Started Remount Root and Kernel File Systems. +         Starting Flush Journal to Persistent Storage... +[  OK  ] Started Restore / save the current clock. +[  OK  ] Started Flush Journal to Persistent Storage. +[  OK  ] Started Set the console keyboard layout. +[  OK  ] Reached target Local File Systems (Pre). +[  OK  ] Reached target Local File Systems. +         Starting Create Volatile Files and Directories... 
+[  OK  ] Started Create Volatile Files and Directories. +[  OK  ] Reached target System Time Synchronized. +         Starting Update UTMP about System Boot/Shutdown... +[  OK  ] Started Update UTMP about System Boot/Shutdown. +[  OK  ] Reached target System Initialization. +[  OK  ] Started Rescue Shell. +[  OK  ] Reached target Rescue Mode. +         Starting Update UTMP about System Runlevel Changes... +[  OK  ] Started Update UTMP about System Runlevel Changes. +You are in rescue mode. After logging in, type "journalctl -xb" to view +system logs, "systemctl reboot" to reboot, "systemctl default" or ^D to +boot into default mode. +Give root password for maintenance +(or press Control-D to continue): +``` + +In this output, you can see **systemd** launching as the init process in the container and detecting that it is being run inside a container so it can adjust its behavior appropriately. Various unit files are started to bring the container to a usable state, then the target system's root password is requested. You can enter the root password here if you want a shell prompt with root permissions, or you can press **Ctrl+D** to allow the startup process to continue, which will display a normal console login prompt. + +When you have completed the necessary changes to the target system, press **Ctrl+]** three times in rapid succession; this will terminate the container and return you to your original shell. From there, you can clean up by unmounting the target system's filesystem and removing the temporary directory: + +``` +$ umount /tmp/target-rescue +$ rmdir /tmp/target-rescue +``` + +That's it! You can now remove the target system's storage device(s) and return them to the target system. + +The idea to use **systemd-nspawn** this way, especially the **\--boot parameter** , came from [a question][3] posted on StackExchange. Thanks to Shibumi and kirbyfan64sos for providing useful answers to this question! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/systemd-nspawn-system-recovery + +作者:[Kevin P.Fleming][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/kpfleming +[b]: https://github.com/lujun9972 +[1]: https://www.freedesktop.org/wiki/Software/systemd/ +[2]: https://www.freedesktop.org/software/systemd/man/systemd-nspawn.html +[3]: https://unix.stackexchange.com/questions/457819/running-systemd-utilities-like-systemctl-under-an-nspawn diff --git a/sources/tech/20181115 11 Things To Do After Installing elementary OS 5 Juno.md b/sources/tech/20181115 11 Things To Do After Installing elementary OS 5 Juno.md new file mode 100644 index 0000000000..5e23c6e5c4 --- /dev/null +++ b/sources/tech/20181115 11 Things To Do After Installing elementary OS 5 Juno.md @@ -0,0 +1,260 @@ +11 Things To Do After Installing elementary OS 5 Juno +====== +I’ve been using [elementary OS 5 Juno][1] for over a month and it has been an amazing experience. It is easily the [best Mac OS inspired Linux distribution][2] and one of the [best Linux distribution for beginners][3]. + +However, you will need to take care of a couple of things after installing it. +In this article, we will discuss the most important things that you need to do after installing [elementary OS][4] 5 Juno. 
+

### Things to do after installing elementary OS 5 Juno

![Things to do after installing elementary OS Juno][5]

Things I mention in this list are from my personal experience and preference. Of course, you are not restricted to these few things. You can explore and tweak the system as much as you like. However, if you follow (some of) these recommendations, things might be smoother for you.

#### 1\. Run a System Update

![terminal showing system updates in elementary os 5 Juno][6]

Even when you download the latest version of a distribution, it is always recommended to check for the latest system updates. You might get a quick fix for an annoying bug, or maybe there's an important security patch that you shouldn't ignore. So, no matter what, you should always ensure that you have everything up to date.

To do that, you need to type in the following command in the terminal:

```
sudo apt-get update
```

#### 2\. Set Window Hotcorner

![][7]

You won't find a minimize button on windows here. So, how do you minimize one?

Well, you can just bring up the dock and click the app icon again to minimize it, or press **Windows key + H** as a shortcut to minimize the active window.

But I'll recommend something way easier and more intuitive. Maybe you already knew it, but for the users who were unaware of the “**hotcorners**” feature, here's what it does:

You can set a preset action to trigger whenever you hover the cursor over any of the 4 corners of the screen. For example, when you move your cursor to the **left corner** of the screen you get the **multi-tasking view** to switch between apps – which acts like a “gesture“.

In order to utilize the functionality, you can follow the steps below:

  1. Head to the System Settings.
  2. Click on the “**Desktop**” option (as shown in the image above).
  3. Next, select the “**Hot Corner**” section (as shown in the image below).
  4. Depending on what corner you prefer, choose an appropriate action (refer to the image below – that's what I personally prefer as my settings).

#### 3\. Install Multimedia codecs

I've tried playing MP3/MP4 files – they just play fine. However, there are a lot of other multimedia file formats out there.

So, to be able to play almost every multimedia format, you should install the codecs. Here's what you need to enter in the terminal:

To get certain proprietary codecs:

```
sudo apt install ubuntu-restricted-extras
```

To specifically install [Libav][8]:

```
sudo apt install libavcodec-extra
```

To install a codec in order to facilitate playing video DVDs:

```
sudo apt install libdvd-pkg
```

#### 4\. Install GDebi

You can't install .deb files by just double-clicking them on elementary OS 5 Juno. It simply does not let you do that.

So, you need an additional tool to help you install .deb files.

We recommend using **GDebi**. I prefer it because it lets you know about the dependencies even before you try to install a package – that way, you can be sure about what you need in order to correctly install an application.

Simply install GDebi and open any .deb file by right-clicking on it and selecting **Open in GDebi Package Installer**.

To install it, type in the following command:

```
sudo apt install gdebi
```

#### 5\. Add a PPA for your Favorite App

Yes, elementary OS 5 Juno now supports PPAs (unlike its previous version). So, you no longer need to enable support for PPAs explicitly.
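For instance, the usual three-step PPA routine now works as expected. A quick sketch (`ppa:some-dev/some-app` and `some-app` below are placeholders, not a real archive or package):

```
sudo add-apt-repository ppa:some-dev/some-app
sudo apt update
sudo apt install some-app
```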
+

That's it – just grab the PPA for whatever app you like and add it via the terminal.

#### 6\. Install Essential Applications

If you're a Linux power user, you already know what you want and where to get it, but if you're new to this Linux distro and looking out for some applications to have installed, I have a few recommendations:

**Steam app**: If you're a gamer, this is a must-have app. You just need to type in a single command to install it:

```
sudo apt install steam
```

**GIMP**: It is the best Photoshop alternative across every platform. Get it installed for every type of image manipulation:

```
sudo apt install gimp
```

**Wine**: If you want to install an application that only runs on Windows, you can try using Wine to run such Windows apps here on Linux. To install, follow the command:

```
sudo apt install wine-stable
```

**qBittorrent**: If you prefer downloading Torrent files, you should have this installed as your Torrent client. To install it, enter the following command:

```
sudo apt install qbittorrent
```

**Flameshot**: You can obviously utilize the default screenshot tool to take screenshots. But if you want to instantly share your screenshots and have the ability to annotate them – install Flameshot. Here's how you can do that:

```
sudo apt install flameshot
```

**Chrome/Firefox**: The default browser isn't very useful. So, you should install Chrome/Firefox – as per your choice.

To install Chromium (the open source project Chrome is based on), enter the command:

```
sudo apt install chromium-browser
```

To install Firefox, enter:

```
sudo apt install firefox
```

These are some of the most common applications you should definitely have installed. For the rest, you should browse through the App Center or Flathub to install your favorite applications.

#### 7\. Install Flatpak (Optional)

It's just my personal recommendation – I find Flatpak to be the preferred way to install apps on any Linux distro I use.

You can try it and learn more about it at its [official website][9].

To install Flatpak, type in:

```
sudo apt install flatpak
```

After you are done installing Flatpak, you can directly head to [Flathub][10] to install some of your favorite apps, where you will also find the command/instruction to install each of them via the terminal.

In case you do not want to launch the browser, you can search for your app by typing in (example – finding Discord and installing it):

```
flatpak search discord flathub
```

After getting the application ID, you can proceed with installing it by typing in:

```
flatpak install flathub com.discordapp.Discord
```

#### 8\. Enable the Night Light

![Night Light in elementary OS Juno][11]

You might have installed Redshift as per our recommendation for [elementary OS 0.4 Loki][12] to filter the blue light and avoid straining your eyes – but you do not need any third-party tool anymore.

It comes baked in as the “**Night Light**” feature.

You just head to System Settings and click on “**Displays**” (as shown in the image above).

Select the **Night Light** section and activate it with your preferred settings.

#### 9\. Install NVIDIA driver metapackage (for NVIDIA GPUs)

![Nvidia drivers in elementary OS juno][13]

The NVIDIA driver metapackage should be listed right in the App Center – so you can easily install the NVIDIA driver.

However, it's not the latest driver version – I have version **390.77** installed and it's performing just fine.
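If you prefer the terminal over the App Center, elementary OS 5's Ubuntu 18.04 base also ships the `ubuntu-drivers` utility, which can detect your GPU and install the recommended proprietary driver (a hedged alternative to the App Center route described above):

```
ubuntu-drivers devices
sudo ubuntu-drivers autoinstall
```

The first command lists the detected hardware along with the recommended driver package; the second installs it.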
+

If you want the latest version for Linux, you should check out NVIDIA's [official download page][14].

Also, if you're curious about the version installed, just type in the following command:

```
nvidia-smi
```

#### 10\. Install TLP for Advanced Power Management

We've said it before. And we'll still recommend it.

If you want to manage your background tasks/activity and prevent overheating of your system – you should install TLP.

It does not offer a GUI, but you don't have to bother. You just install it and let it manage whatever it takes to prevent overheating.

It's very helpful for laptop users.

To install it, type in:

```
sudo apt install tlp tlp-rdw
```

#### 11\. Perform visual customizations

![][15]

If you need to change the look of your Linux distro, you can install the GNOME Tweaks tool to get the options. In order to install the tweak tool, type in:

```
sudo apt install gnome-tweaks
```

Once you install it, head to the application launcher and search for “Tweaks”; you'll find something like this:

Here, you can select the icon, theme, and wallpaper, and you'll also be able to tweak a couple more options that are not limited to the visual elements.

### Wrapping Up

It's the least you should do after installing elementary OS 5 Juno. However, considering that elementary OS 5 Juno comes with numerous new features – you can explore a lot more new things as well.

Let us know what you did first after installing elementary OS 5 Juno, and how's your experience with it so far?

--------------------------------------------------------------------------------

via: https://itsfoss.com/things-to-do-after-installing-elementary-os-5-juno/

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://itsfoss.com/elementary-os-juno-features/
[2]: https://itsfoss.com/macos-like-linux-distros/
[3]: https://itsfoss.com/best-linux-beginners/
[4]: https://elementary.io/
[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/things-to-do-after-installing-elementary-os-juno.jpeg?ssl=1
[6]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/elementary-os-system-update.jpg?ssl=1
[7]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/elementary-os-hotcorners.jpg?ssl=1
[8]: https://libav.org/
[9]: https://flatpak.org/
[10]: https://flathub.org/home
[11]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/elementary-os-night-light.jpg?ssl=1
[12]: https://itsfoss.com/things-to-do-after-installing-elementary-os-loki/
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/elementary-os-nvidia-metapackage.jpg?ssl=1
[14]: https://www.nvidia.com/Download/index.aspx
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/elementary-os-gnome-tweaks.jpg?ssl=1
diff --git a/sources/tech/20181119 Arch-Wiki-Man - A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline.md b/sources/tech/20181119 Arch-Wiki-Man - A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline.md
new file mode 100644
index 0000000000..9e1ee18be7
--- /dev/null
+++ b/sources/tech/20181119 Arch-Wiki-Man - A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline.md
@@ -0,0 +1,214 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: subject: (Arch-Wiki-Man – A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline)
[#]: via: (https://www.2daygeek.com/arch-wiki-man-a-tool-to-browse-the-arch-wiki-pages-as-linux-man-page-from-offline/)
[#]: author: ([Prakash Subramanian](https://www.2daygeek.com/author/prakash/))
[#]: url: ( )

Arch-Wiki-Man – A Tool to Browse The Arch Wiki Pages As Linux Man Page from Offline
====== 

Getting internet access is not a big deal these days, but technology still has its limits.

I am always surprised by how fast technology grows, yet at the same time there are situations where it falls short.

Whenever you search for anything related to other Linux distributions, most of the time you will get third-party links first; for Arch Linux, though, you will always get an Arch Wiki page in your results.

That is because the Arch Wiki has most of the solutions, more so than third-party websites.

Until now, you might have used a web browser to find solutions for your Arch Linux system, but you no longer need to.

There is a much faster command-line way to do this, and the utility is called arch-wiki-man. If you are an Arch Linux lover, I would suggest you read the **[Arch Linux Post Installation guide][1]**, which helps you tweak your system for day-to-day use.

### What is arch-wiki-man?

The [arch-wiki-man][2] tool allows users to search the Arch Wiki pages right from the command line (CLI), instantly and without an internet connection. It lets you access and search the entire wiki as Linux man pages.

Also, you don't need to switch to a GUI. Updates are pushed automatically every two days, so your local copy of the Arch Wiki pages will stay up to date. The tool's name is `awman`; awman stands for Arch Wiki Man.

We have already written about a similar tool called **[Arch Wiki Command Line Utility][3]** (arch-wiki-cli), which also lets users search the Arch Wiki from the command line, but make sure you have internet access to use that utility.

### How to install the arch-wiki-man tool?

The arch-wiki-man utility is available in the AUR repository, so we need to use an AUR helper to install it. There are many AUR helpers available, and we have written articles about the **[Yaourt AUR helper][4]** and the **[Packer AUR helper][5]**, which are both very famous AUR helpers.

```
$ yaourt -S arch-wiki-man

or

$ packer -S arch-wiki-man
```

Alternatively, we can install it using the npm package manager. Make sure you have installed **[NodeJS][6]** on your system. If so, run the following command to install it:

```
$ npm install -g arch-wiki-man
```

### How to update the local Arch Wiki copy?

As mentioned earlier, updates are pushed automatically every two days; you can also trigger an update of your local copy by running the following command.

```
$ sudo awman-update
[sudo] password for daygeek:
arch-wiki-man@<version> /usr/lib/node_modules/arch-wiki-man
└── arch-wiki-md-repo@<version>

arch-wiki-md-repo has been successfully updated or reinstalled.
```

awman-update is the faster and more convenient method to get updates. However, you can also get the updates by reinstalling the package with the following command.

```
$ yaourt -S arch-wiki-man

or

$ packer -S arch-wiki-man
```

### How to use the Arch Wiki from the command line?

It has a very simple interface and is easy to use. To search for anything, just run `awman` followed by the search term. The general syntax is as follows.

```
$ awman Search-Term
```

### How to Search Multiple Matches?
+

If you would like to list all the result titles that contain the string `installation`, use the following command format. If the output contains multiple results, you will get a selection menu to navigate through each item.

```
$ awman installation
```

![][8]

Detailed page screenshot:

![][9]

### Search a given string in Titles & Descriptions

The `-d` or `--desc-search` option allows users to search for a given string in titles and descriptions.

```
$ awman -d mirrors

or

$ awman --desc-search mirrors
? Select an article: (Use arrow keys)
❯ [1/3] Mirrors: Related articles
  [2/3] DeveloperWiki-NewMirrors: Contents
  [3/3] Powerpill: Powerpill is a pac
```

### Search a given string in Contents

The `-k` or `--apropos` option allows users to search for a given string in page contents as well. Note that this option significantly slows down your search, as it scans the entire wiki page content.

```
$ awman -k openjdk

or

$ awman --apropos openjdk
? Select an article: (Use arrow keys)
❯ [1/26] Hadoop: Related articles
  [2/26] XDG Base Directory support: Related articles
  [3/26] Steam-Game-specific troubleshooting: See Steam/Troubleshooting first.
  [4/26] Android: Related articles
  [5/26] Elasticsearch: Elasticsearch is a search engine based on Lucene. It provides a distributed, mul..
  [6/26] LibreOffice: Related articles
  [7/26] Browser plugins: Related articles
(Move up and down to reveal more choices)
```

### Open the search results in a web browser

The `-w` or `--web` option allows users to open the search results in a web browser.

```
$ awman -w AUR helper

or

$ awman --web AUR helper
```

![][10]

### Search in other languages

The `-l` option lets users search the wiki in a language other than English. To see the list of supported languages, run the following command.

```
$ awman --list-languages
arabic
bulgarian
catalan
chinesesim
chinesetrad
croatian
czech
danish
dutch
english
esperanto
finnish
greek
hebrew
hungarian
indonesian
italian
korean
lithuanian
norwegian
polish
portuguese
russian
serbian
slovak
spanish
swedish
thai
ukrainian
```

Run the awman command with your preferred language to see results in a language other than English.
+

```
$ awman -l chinesesim deepin
```

![][11]

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/arch-wiki-man-a-tool-to-browse-the-arch-wiki-pages-as-linux-man-page-from-offline/

作者:[Prakash Subramanian][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/prakash/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/arch-linux-post-installation-30-things-to-do-after-installing-arch-linux/
[2]: https://github.com/greg-js/arch-wiki-man
[3]: https://www.2daygeek.com/search-arch-wiki-website-command-line-terminal/
[4]: https://www.2daygeek.com/install-yaourt-aur-helper-on-arch-linux/
[5]: https://www.2daygeek.com/install-packer-aur-helper-on-arch-linux/
[6]: https://www.2daygeek.com/install-nodejs-on-ubuntu-centos-debian-fedora-mint-rhel-opensuse/
[8]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-1.png
[9]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-2.png
[10]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-3.png
[11]: https://www.2daygeek.com/wp-content/uploads/2018/11/arch-wiki-man-%E2%80%93-A-Tool-to-Browse-The-Arch-Wiki-Pages-As-Linux-Man-page-from-Offline-4.png
diff --git a/sources/tech/20181122 Getting started with Jenkins X.md b/sources/tech/20181122 Getting started with Jenkins X.md
new file mode 100644
index 0000000000..1c2aab6903
--- /dev/null
+++ b/sources/tech/20181122 Getting started with Jenkins X.md
@@ -0,0 +1,148 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: subject: (Getting started with Jenkins X)
+[#]: via: (https://opensource.com/article/18/11/getting-started-jenkins-x)
+[#]: author: (Dave Johnson https://opensource.com/users/snoopdave)
+[#]: url: ( )
+
+Getting started with Jenkins X
+======
+Jenkins X provides continuous integration, automated testing, and continuous delivery to Kubernetes.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/ship_wheel_gear_devops_kubernetes.png?itok=xm4a74Kv)
+
+[Jenkins X][1] is an open source system that offers software developers continuous integration, automated testing, and continuous delivery, known as CI/CD, in Kubernetes. Jenkins X-managed projects get a complete CI/CD process with a Jenkins pipeline that builds and packages project code for deployment to Kubernetes and access to pipelines for promoting projects to staging and production environments.
+
+Developers are already benefiting from running "classic" open source Jenkins and CloudBees Jenkins on Kubernetes, thanks in part to the Jenkins Kubernetes plugin, which allows you to dynamically spin up Kubernetes pods to run Jenkins build agents. Jenkins X adds what's missing from Jenkins: comprehensive support for continuous delivery and managing the promotion of projects to preview, staging, and production environments running in Kubernetes.
+ +This article is a high-level explanation of how Jenkins X works; it assumes you have some knowledge of Kubernetes and classic Jenkins. + +### What you get with Jenkins X + +If you're running on one of the major cloud providers (Amazon Elastic Container Service for Kubernetes, Google Kubernetes Engine, or Microsoft Azure Kubernetes Service), installing and deploying Jenkins X is easy. Download the Jenkins X command-line interface and run the **jx create cluster** command. You'll be prompted for the necessary information and, if you take the defaults, Jenkins X will create a starter-size Kubernetes cluster and install Jenkins X. + +When you deploy Jenkins X, a number of services are put in motion to watch your Git repositories and respond by building, testing, and promoting your applications to staging, production, and other environments you define. Jenkins X also deploys a set of supporting services, including [Jenkins][2], [Docker Registry][3], [Chart Museum][4], and [Monocular][5] to manage [Helm][6] charts, and [Nexus][7], which serves as a Maven and npm repository. + +The Jenkins X deployment also creates two Git repositories, one for your staging environment and one for production. These are in addition to the Git repositories you use to manage your project source code. Jenkins X uses these repositories to manage what is deployed to each environment, and promotions are done via Git pull requests (PRs)—this approach is known as [GitOps][8]. Each repository contains a Helm chart that specifies the applications to be deployed to the corresponding environment. Each repository also has a Jenkins pipeline to handle promotions. + +### Creating a new project with Jenkins X + +To create a new project with Jenkins X, use the **jx create quickstart** command. If you don't specify any options, jx will prompt you to select a project name and a platform—which can be just about anything. SpringBoot, Go, Python, Node, ASP.NET, Rust, Angular, and React are all supported, and the list keeps growing. Once you have chosen your project name and platform, Jenkins X will: + + * Create a new project that includes a "hello-world"-style web project + * Add the appropriate type of makefile or build script for the chosen platform + * Add a Jenkinsfile to manage promotions to staging and production environments + * Add a Dockerfile and Helm charts, created via [Draft][9] + * Add a [Skaffold][10] configuration for deploying the application to Kubernetes + * Create a Git repository and push the new project code there + + + +Next, a webhook from Git will notify Jenkins X that a project changed, and it will run your project's Jenkins pipeline to build and push your Docker image and Helm charts. + +Finally, the pipeline will submit a PR to the staging environment's Git repository with the changes needed to promote the application. + +Once the PR is merged, the staging pipeline will run to apply those changes and do the promotion. A couple of minutes after creating your project, you'll have end-to-end CI/CD, and your project will be running in staging and available for use. + +![Developer commits changes, project deployed to staging][12] + +Developer commits changes, project deployed to the staging environment. + +The figure above illustrates the repositories, registries, and pipelines and how they interact in a Jenkins X promotion to staging. Here are the steps: + + 1. The developer commits and pushes the change to the project's Git repository + 2. 
Jenkins X is notified and runs the project's Jenkins pipeline in a Docker image that includes the project's language and supporting frameworks
  3. The project pipeline builds, tests, and pushes the project's Helm chart to Chart Museum and its Docker image to the registry
  4. The project pipeline creates a PR with changes needed to add the project to the staging environment
  5. Jenkins X automatically merges the PR to Master
  6. Jenkins X is notified and runs the staging pipeline
  7. The staging pipeline runs Helm, which deploys the environment, pulling Helm charts from Chart Museum and Docker images from the Docker registry. Kubernetes creates the project's resources, typically a pod, service, and ingress.

### Importing your existing projects into Jenkins X

When you import a project via **jx import**, Jenkins X adds the things needed for your project to be deployed to Kubernetes and participate in CI/CD. It will add a Jenkins pipeline, Helm charts, and a Skaffold configuration for deploying the application to Kubernetes. Jenkins X will create a Git repository and push the changes there. Next, a webhook from Git will notify Jenkins X that a project changed, and promotion to staging will happen as described above for new projects.

### Promoting your project to production

To promote a version of your project to the production environment, use the **jx promote** command. This command will prepare a Git PR that contains the Helm chart changes needed to deploy into the production environment and submit this request to the production environment's Git repository. Once the request is manually approved, Jenkins X will run the production pipeline to deploy your project via Helm.

![Promoting project to production][14]

Developer promotes the project to production.

This figure illustrates the repositories, registries, and pipelines and how they interact in a Jenkins X promotion to production. Here are the steps:

 1. The developer runs the **jx promote** command to promote a project to production
 2. Jenkins X creates a PR with changes needed to add the project to the production environment
 3. The developer manually approves the PR, and it is merged to Master
 4. Jenkins X is notified and runs the production pipeline
 5. The production pipeline runs Helm, which deploys the environment, pulling Helm charts from Chart Museum and Docker images from the Docker registry. Kubernetes creates the project's resources, typically a pod, service, and ingress.

### Other features of Jenkins X

Other interesting and appealing features of Jenkins X include:

#### Preview environments

When you create a PR to add a new feature to your project, you can ask Jenkins X to create a preview environment so you can make your new feature available for preview and testing before the PR is merged.

#### Extensions

It is possible to create extensions to Jenkins X. An extension is code that runs at specific times in the CI/CD process.
An extension can provide code that runs when the extension is installed, uninstalled, as well as before and after each pipeline. + +#### Serverless Jenkins + +Instead of running the Jenkins web application, which continually consumes CPU and memory resources, you can run Jenkins only when you need it. During the past year, the Jenkins community created a version of Jenkins that can run classic Jenkins pipelines via the command line with the configuration defined by code instead of HTML forms. + +This capability is now available in Jenkins X. When you create a Jenkins X cluster, you can choose to use Serverless Jenkins. If you do, Jenkins X will deploy [Prow][15] to handle webhooks from GitHub and [Knative][16] to run Jenkins pipelines. + +### Jenkins X limitations + +Jenkins X also has some limitations that should be considered: + + * **Jenkins X is currently limited to projects that use Git:** Jenkins X is opinionated about CI/CD and assumes everybody wants to run and deploy software to Kubernetes and everybody is happy to use Git for source code and defining environments. Also, the Serverless Jenkins feature currently works only with GitHub. + * **Jenkins X is limited to Kubernetes:** It is true that Jenkins X can run automated builds, testing, and continuous integration for any type of software, but the continuous delivery part targets a Kubernetes namespace managed by Jenkins X. + * **Jenkins X requires cluster-admin level Kubernetes access:** Jenkins X needs cluster-admin access so it can define and manage a Kubernetes custom resource definition. Hopefully, this is a temporary limitation, because it could be a show-stopper for some. + + + +### Conclusions + +Jenkins X looks to be a good way to implement CI/CD for Kubernetes, and I'm looking forward to putting it to the test in production. Using Jenkins X is also a good way to learn about some useful open source tools for deploying to Kubernetes, including Helm, Draft, Skaffold, Prow, and more. These are things you might want to use even if you decide Jenkins X is not for you. If you're deploying to Kubernetes, take Jenkins X for a spin. 
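If you want a quick feel for the flow described in this article, the handful of `jx` commands covered here map onto it directly. A condensed sketch (the cloud provider, app name, version, and flag values are illustrative; check `jx help` for the exact options in your version):

```
jx create cluster gke      # create a starter cluster and install Jenkins X
jx create quickstart       # scaffold a new project with CI/CD wired up
jx import                  # or bring an existing project under Jenkins X
jx promote myapp --version 1.0.1 --env production   # promote a build to production
```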
+
--------------------------------------------------------------------------------

via: https://opensource.com/article/18/11/getting-started-jenkins-x

作者:[Dave Johnson][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/snoopdave
[b]: https://github.com/lujun9972
[1]: https://jenkins-x.io/
[2]: https://jenkins.io/
[3]: https://docs.docker.com/registry/
[4]: https://github.com/helm/chartmuseum
[5]: https://github.com/helm/monocular
[6]: https://helm.sh
[7]: https://www.sonatype.com/nexus-repository-oss
[8]: https://www.weave.works/blog/gitops-operations-by-pull-request
[9]: https://draft.sh/
[10]: https://github.com/GoogleContainerTools/skaffold
[11]: /file/414941
[12]: https://opensource.com/sites/default/files/uploads/jenkinsx_fig1.png (Developer commits changes, project deployed to staging)
[13]: /file/414946
[14]: https://opensource.com/sites/default/files/uploads/jenkinsx_fig2.png (Promoting project to production)
[15]: https://github.com/kubernetes/test-infra/tree/master/prow
[16]: https://cloud.google.com/knative/
diff --git a/sources/tech/20181123 Three SSH GUI Tools for Linux.md b/sources/tech/20181123 Three SSH GUI Tools for Linux.md
new file mode 100644
index 0000000000..9691a737ca
--- /dev/null
+++ b/sources/tech/20181123 Three SSH GUI Tools for Linux.md
@@ -0,0 +1,176 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: subject: (Three SSH GUI Tools for Linux)
+[#]: via: (https://www.linux.com/blog/learn/intro-to-linux/2018/11/three-ssh-guis-linux)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+[#]: url: ( )
+
+Three SSH GUI Tools for Linux
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh.jpg?itok=3UcXhJt7)
+
+At some point in your career as a Linux administrator, you're going to use Secure Shell (SSH) to remote into a Linux server or desktop. Chances are, you already have. In some instances, you'll be SSH'ing into multiple Linux servers at once. In fact, Secure Shell might well be one of the most-used tools in your Linux toolbox. Because of this, you'll want to make the experience as efficient as possible. For many admins, nothing is as efficient as the command line. However, there are users out there who do prefer a GUI tool, especially when working from a desktop machine to remote into and work on a server.
+
+If you happen to prefer a good GUI tool, you'll be happy to know there are a couple of outstanding graphical tools for SSH on Linux. Couple that with a unique terminal window that allows you to remote into multiple machines from the same window, and you have everything you need to work efficiently. Let's take a look at these three tools and find out if one (or more) of them is perfectly apt to meet your needs.
+
+I'll be demonstrating these tools on [Elementary OS][1], but they are all available for most major distributions.
+
+### PuTTY
+
+Anyone who's been around long enough knows about [PuTTY][2]. In fact, PuTTY is the de facto standard tool for connecting, via SSH, to Linux servers from the Windows environment. But PuTTY isn't just for Windows. In fact, from within the standard repositories, PuTTY can also be installed on Linux. PuTTY's feature list includes:

+  * Saved sessions.

+  * Connect via IP address or hostname.

+  * Define alternative SSH port.
+

  * Connection type definition.

  * Logging.

  * Options for keyboard, bell, appearance, connection, and more.

  * Local and remote tunnel configuration.

  * Proxy support.

  * X11 tunneling support.

The PuTTY GUI is mostly a way to save SSH sessions, so it's easier to manage all of those various Linux servers and desktops you need to constantly remote into and out of. Once you've connected from PuTTY to the Linux server, you will have a terminal window in which to work. At this point, you may be asking yourself, why not just work from the terminal window? For some, the convenience of saving sessions does make PuTTY worth using.

Installing PuTTY on Linux is simple. For example, on a Debian-based distribution you could issue the command:

```
sudo apt-get install -y putty
```

Once installed, you can either run the PuTTY GUI from your desktop menu or issue the command putty. In the PuTTY Configuration window (Figure 1), type the hostname or IP address in the HostName (or IP address) section, configure the port (if not the default 22), select SSH from the connection type, and click Open.

![PuTTY Connection][4]

Figure 1: The PuTTY Connection Configuration Window.

[Used with permission][5]

Once the connection is made, you'll then be prompted for the user credentials on the remote server (Figure 2).

![log in][7]

Figure 2: Logging into a remote server with PuTTY.

[Used with permission][5]

To save a session (so you don't have to always type the remote server information), fill out the IP address (or hostname), configure the port and connection type, and then (before you click Open), type a name for the connection in the top text area of the Saved Sessions section, and click Save. This will then save the configuration for the session. To then connect to a saved session, select it from the saved sessions window, click Load, and then click Open. You should then be prompted for the remote credentials on the remote server.

### EasySSH

Although [EasySSH][8] doesn't offer the number of configuration options found in PuTTY, it's (as the name implies) incredibly easy to use. One of the best features of EasySSH is that it offers a tabbed interface, so you can have multiple SSH connections open and quickly switch between them. Other EasySSH features include:

  * Groups (so you can group tabs for an even more efficient experience).

  * Username/password save.

  * Appearance options.

  * Local and remote tunnel support.

Installing EasySSH on a Linux desktop is simple, as the app can be installed via flatpak (which does mean you must have Flatpak installed on your system). Once flatpak is installed, add EasySSH with the commands:

```
sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

sudo flatpak install flathub com.github.muriloventuroso.easyssh
```

Run EasySSH with the command:

```
flatpak run com.github.muriloventuroso.easyssh
```

The EasySSH app will open, where you can click the + button in the upper left corner. In the resulting window (Figure 3), configure your SSH connection as required.

![Adding a connection][10]

Figure 3: Adding a connection in EasySSH is simple.

[Used with permission][5]

Once you've added the connection, it will appear in the left navigation of the main window (Figure 4).

![EasySSH][12]

Figure 4: The EasySSH main window.
+

[Used with permission][5]

To connect to a remote server in EasySSH, select it from the left navigation and then click the Connect button (Figure 5).

![Connecting][14]

Figure 5: Connecting to a remote server with EasySSH.

[Used with permission][5]

The one caveat with EasySSH is that you must save the username and password in the connection configuration (otherwise the connection will fail). This means anyone with access to the desktop running EasySSH can remote into your servers without knowing the passwords. Because of this, you must always remember to lock your desktop screen any time you are away (and make sure to use a strong password). The last thing you want is to have a server vulnerable to unwanted logins.

### Terminator

Terminator is not actually an SSH GUI. Instead, Terminator functions as a single window that allows you to run multiple terminals (and even groups of terminals) at once. Effectively you can open Terminator, split the window vertically and horizontally (until you have all the terminals you want), and then connect to all of your remote Linux servers by way of the standard SSH command (Figure 6).

![Terminator][16]

Figure 6: Terminator split into three different windows, each connecting to a different Linux server.

[Used with permission][5]

To install Terminator, issue a command like:

```
sudo apt-get install -y terminator
```

Once installed, open the tool either from your desktop menu or from the command terminator. With the window opened, you can right-click inside Terminator and select either Split Horizontally or Split Vertically. Continue splitting the terminal until you have exactly the number of terminals you need, and then start remoting into those servers.

The caveat to using Terminator is that it is not a standard SSH GUI tool, in that it won't save your sessions or give you quick access to those servers. In other words, you will always have to manually log into your remote Linux servers. However, being able to see your remote Secure Shell sessions side by side does make administering multiple remote machines quite a bit easier.

### Few (But Worthwhile) Options

There aren't a lot of SSH GUI tools available for Linux. Why? Because most administrators prefer to simply open a terminal window and use the standard command-line tools to remotely access their servers. However, if you have a need for a GUI tool, you have two solid options and one terminal that makes logging into multiple machines slightly easier. Although there are only a few options for those looking for an SSH GUI tool, those that are available are certainly worth your time. Give one of these a try and see for yourself.
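One last tip: even if you end up sticking with plain `ssh`, you can get PuTTY-style saved sessions for free with an OpenSSH client configuration file. A minimal `~/.ssh/config` sketch (the host alias, address, and username are placeholders):

```
Host webserver
    HostName 192.168.1.100
    User jack
    Port 22
```

With that in place, typing `ssh webserver` is all it takes to connect.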
+

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/intro-to-linux/2018/11/three-ssh-guis-linux

作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: https://elementary.io/
[2]: https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
[3]: https://www.linux.com/files/images/sshguis1jpg
[4]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_1.jpg?itok=DiNTz_wO (PuTTY Connection)
[5]: https://www.linux.com/licenses/category/used-permission
[6]: https://www.linux.com/files/images/sshguis2jpg
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_2.jpg?itok=4ORsJlz3 (log in)
[8]: https://github.com/muriloventuroso/easyssh
[9]: https://www.linux.com/files/images/sshguis3jpg
[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_3.jpg?itok=bHC2zlda (Adding a connection)
[11]: https://www.linux.com/files/images/sshguis4jpg
[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_4.jpg?itok=hhJzhRIg (EasySSH)
[13]: https://www.linux.com/files/images/sshguis5jpg
[14]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_5.jpg?itok=piFEFYTQ (Connecting)
[15]: https://www.linux.com/files/images/sshguis6jpg
[16]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/ssh_guis_6.jpg?itok=-kYl6iSE (Terminator)
diff --git a/sources/tech/20181124 14 Best ASCII Games for Linux That are Insanely Good.md b/sources/tech/20181124 14 Best ASCII Games for Linux That are Insanely Good.md
new file mode 100644
index 0000000000..094467698b
--- /dev/null
+++ b/sources/tech/20181124 14 Best ASCII Games for Linux That are Insanely Good.md
@@ -0,0 +1,335 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: subject: (14 Best ASCII Games for Linux That are Insanely Good)
+[#]: via: (https://itsfoss.com/best-ascii-games/)
+[#]: author: (Ankush Das https://itsfoss.com/author/ankush/)
+[#]: url: ( )
+
+14 Best ASCII Games for Linux That are Insanely Good
+======
+
+Text-based, or should I say [terminal-based, games][1] were very popular a decade back – when you didn't have visual masterpieces like God Of War, Red Dead Redemption 2 or Spiderman.
+
+Of course, the Linux platform has its share of good games – but not always the "latest and greatest". But there are some ASCII games out there that you can never turn your back on.
+
+I'm not sure if you'd believe me, but some of these ASCII games proved to be very addictive (So, it might take a while for me to resume work on the next article, or I might just get fired? – Help me!)
+
+Jokes apart, let us take a look at the best ASCII games.
+
+**Note:** Installing ASCII games could be time-consuming (some might ask you to install additional dependencies or simply won't work). You might even encounter some ASCII games that require you to build from source. So, we've filtered out only the ones that are easy to install/run – without breaking a sweat.
+
+### Things to do before Running or Installing an ASCII Game
+
+Some of the ASCII games might require you to install [Simple DirectMedia Layer][2] unless you already have it installed.
So, just in case, you should install it first before trying to run any of the games mentioned in this article.

For that, you just need to type in these commands:

```
sudo apt install libsdl2-2.0-0
```

```
sudo apt install libsdl2-mixer-2.0-0
```

### Best ASCII Games for Linux

![Best Ascii games for Linux][3]

The games listed are in no particular order of ranking.

#### 1. [Curse of War][4]

![Curse of War ascii games][5]

Curse of War is an interesting strategy game. You might find it a bit confusing at first, but once you get to know it, you'll love it. I recommend taking a look at the rules of the game on its [homepage][4] before launching the game.

You will be building infrastructure, securing resources, and directing your army to fight. All you have to do is place your flag in a good position to let your army take care of the rest. It's not just about attacking – you need to manage and secure the resources to help win the fight.

If you've never played any ASCII game before, be patient and spend some time learning it – to experience it to its fullest potential.

##### How to install Curse of War?

You will find it in the official repository. So, type in the following command to install it:

```
sudo apt install curseofwar
```

#### 2. ASCII Sector

![ascii sector][6]

Hate strategy games? Fret not; ASCII Sector is a game with a space setting that lets you explore a lot.

Also, the game isn't just limited to exploration. Need some action? You've got that here as well. Of course, it's not the best combat experience – but it is fun. It gets even more exciting when you see a variety of bases, missions, and quests. You'll encounter a leveling system in this tiny game, where you have to earn enough money or trade in order to upgrade your spaceship.

The best part about this game is – you can create your own quests or play others'.

##### How to install ASCII Sector?

You need to first download and unpack the archived package from the [official site][7]. After it's done, open up your terminal and type these commands (replace the **Downloads** folder with the location where the unpacked folder exists; ignore it if the unpacked folder resides inside your home directory):

```
cd Downloads
cd asciisec
chmod +x asciisec
./asciisec
```

#### 3. DoomRL

![doom ascii game][8]

You must know the classic game “Doom”. So, if you want a scaled-down experience of it as a rogue-like, DoomRL is for you. It is an ASCII-based game, in case that wasn't obvious.

It's a very tiny game with a lot of gameplay hours to have fun with.

##### How to install DoomRL?

Similar to what you did for ASCII Sector, you need to download the official archive from the [download page][9] and then extract it to a folder.

After extracting it, type in these commands:

```
cd Downloads  # navigate to the location where the unpacked folder exists
```

```
cd doomrl-linux-x64-0997
chmod +x doomrl
./doomrl
```

#### 4. Pyramid Builder

![Pyramid Builder ascii game for Linux][10]

Pyramid Builder is an innovative take on ASCII games, where you get to improve your civilization by helping build pyramids.

You need to direct the workers to farm, unload the cargo, and move the gigantic stones to successfully build the pyramid.

It is indeed a beautiful ASCII game to download.

##### How to install Pyramid Builder?

Simply head to its official site and download the package to unpack it.
After extraction, navigate to the folder and run the executable file:

```
cd Downloads
cd pyramid_builder_linux
chmod +x pyramid_builder_linux.x86_64
./pyramid_builder_linux.x86_64
```

#### 5. DiabloRL

![Diablo ascii RPG game][11]

If you're an avid gamer, you must have heard about Blizzard's Diablo 1. It is undoubtedly a good game.

You get the chance to play a unique rendition of the game – as an ASCII game. DiabloRL is a turn-based rogue-like game that is insanely good. You get to choose from a variety of classes (Warrior, Sorcerer, or Rogue). Every class results in a different gameplay experience with a different set of stats.

Of course, personal preference will differ – but it's a decent “unmake” of Diablo. What do you think?

#### 6. Ninvaders

![Ninvaders terminal game for Linux][12]

Ninvaders is one of the best ASCII games just because it's such a simple arcade game to kill time with.

You have to defend against a horde of invaders – just finish them off before they get to you. It sounds very simple – but it is a challenging game.

##### How to install Ninvaders?

Similar to Curse of War, you can find this in the official repository. So, just type in this command to install it:

```
sudo apt install ninvaders
```

#### 7. Empire

![Empire terminal game][13]

A real-time strategy game for which you will need an active Internet connection. I'm personally not a fan of real-time strategy games, but if you are, you should really check out the [guide][14] to playing this game – because it can be very challenging to learn.

The rectangle contains cities, land, and water. You need to expand your city with an army, ships, planes, and other resources. By expanding quickly, you will be able to capture other cities by destroying them before they make a move.

##### How to install Empire?

Installing this is very simple; just type in the following command:

```
sudo apt install empire
```

#### 8. Nudoku

![Nudoku is a terminal version game of Sudoku][15]

Love Sudoku? Well, you have Nudoku – a clone of it. A perfect time-killing ASCII game while you relax.

It presents you with three difficulty levels – easy, normal, and hard. If you want to put up a challenge against the computer, the hard difficulty will be perfect! If you just want to chill, go for the easy one.

##### How to install Nudoku?

It's very easy to get it installed; just type in the following command in the terminal:

```
sudo apt install nudoku
```

#### 9. Nethack

A dungeons-and-dragons-style ASCII game that is one of the best out there. I believe it's already one of your favorites if you knew about ASCII games for Linux in general.

It features a lot of different levels (about 45) and comes packed with a bunch of weapons, scrolls, potions, armor, rings, and gems. You can also choose permadeath as your mode to play it.

It's not just about killing here – you've got a lot to explore.

##### How to install Nethack?

Simply follow the command below to install it:

```
sudo apt install nethack
```

#### 10. ASCII Jump

![ascii jump game][16]

ASCII Jump is a dead simple game where you have to slide along a variety of tracks – while jumping, changing position, and moving as long as you can to cover the maximum distance.

It's really amazing how this ASCII game looks (visually), even though it seems so simple. You can start with the training mode and then proceed to the world cup.
You also get to choose your competitors and the hills on which you want to start the game.

##### How to install ASCII Jump?

To install the game, just type the following command:

```
sudo apt install asciijump
```

#### 11. Bastet

![Bastet is tetris game in ascii form][17]

Let's just not pay any attention to the name – it's actually a fun clone of the Tetris game.

You shouldn't expect it to be just another ordinary Tetris game – it will present you with the worst possible bricks to play with. Have fun!

##### How to install Bastet?

Open the terminal and type in the following command:

```
sudo apt install bastet
```

#### 12. Bombardier

![Bombardier game in ascii form][18]

Bombardier is yet another simple ASCII game that will keep you hooked.

Here, you have a helicopter (or whatever you'd like to call your aircraft) that drops lower with every cycle, and you need to throw bombs in order to destroy the blocks/buildings under you. The game also adds a pinch of humor in the messages it displays when you destroy a block. It is fun.

##### How to install Bombardier?

Bombardier is available in the official repository, so just type in the following in the terminal to install it:

```
sudo apt install bombardier
```

#### 13. Angband

![Angband ascii game][19]

A cool dungeon exploration game with a neat interface. You can see all the vital information on a single screen while you explore the game.

It offers different races to pick your character from. You can be an Elf, a Hobbit, a Dwarf, or something else – there are nearly a dozen to choose from. Remember that you need to defeat the lord of darkness at the end – so make every possible upgrade to your weapon and get ready.

##### How to install Angband?

Simply type in the following command:

```
sudo apt install angband
```

#### 14. GNU Chess

![GNU Chess is a chess game that you can play in Linux terminal][20]

How can you not play chess? It is my favorite strategy game!

But GNU Chess can be tough to play unless you know the algebraic notation used to describe moves. Of course, being an ASCII game, it does not offer point-and-click interaction – you type in the notation for your move and it displays the result (while it waits for the computer to think of its next move).

##### How to install GNU Chess?

If you're aware of the algebraic notation of chess, enter the following command to install it from the terminal:

```
sudo apt install gnuchess
```

#### Some Honorable Mentions

As I mentioned earlier, we've tried to recommend the best ASCII games (and also the ones that are the easiest to install on your Linux machine).

However, there are some iconic ASCII games that deserve attention but require a tad more effort to install (you will get the source code, and you need to build and install it yourself).

Some of those games are:

+ [Cataclysm: Dark Days Ahead][22]
+ [Brogue][23]
+ [Dwarf Fortress][24]

To build them, you should follow our [guide to installing software from source code][21].

### Wrapping Up

Which of the ASCII games mentioned seems perfect for you? Did we miss any of your favorites?

Let us know your thoughts in the comments below.
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/best-ascii-games/ + +作者:[Ankush Das][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/ankush/ +[b]: https://github.com/lujun9972 +[1]: https://itsfoss.com/best-command-line-games-linux/ +[2]: https://www.libsdl.org/ +[3]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/best-ascii-games-featured.png?resize=800%2C450&ssl=1 +[4]: http://a-nikolaev.github.io/curseofwar/ +[5]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/curseofwar-ascii-game.jpg?fit=800%2C479&ssl=1 +[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/ascii-sector-game.jpg?fit=800%2C424&ssl=1 +[7]: http://www.asciisector.net/download/ +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/doom-rl-ascii-game.jpg?ssl=1 +[9]: https://drl.chaosforge.org/downloads +[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/pyramid-builder-ascii-game.jpg?fit=800%2C509&ssl=1 +[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/diablo-rl-ascii-game.jpg?ssl=1 +[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/ninvaders-ascii-game.jpg?fit=800%2C426&ssl=1 +[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/empire-ascii-game.jpg?fit=800%2C570&ssl=1 +[14]: http://www.wolfpackempire.com/infopages/Guide.html +[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/nudoku-ascii-game.jpg?fit=800%2C434&ssl=1 +[16]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/ascii-jump.jpg?fit=800%2C566&ssl=1 +[17]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/bastet-tetris-clone-ascii.jpg?fit=800%2C465&ssl=1 +[18]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/bombardier.jpg?fit=800%2C571&ssl=1 +[19]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/angband-ascii-game.jpg?ssl=1 +[20]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/gnuchess-ascii-game.jpg?ssl=1 +[21]: https://itsfoss.com/install-software-from-source-code/ +[22]: https://github.com/CleverRaven/Cataclysm-DDA +[23]: https://sites.google.com/site/broguegame/ +[24]: http://www.bay12games.com/dwarves/index.html + diff --git a/sources/tech/20181127 Bio-Linux- A stable, portable scientific research Linux distribution.md b/sources/tech/20181127 Bio-Linux- A stable, portable scientific research Linux distribution.md new file mode 100644 index 0000000000..a38acec9da --- /dev/null +++ b/sources/tech/20181127 Bio-Linux- A stable, portable scientific research Linux distribution.md @@ -0,0 +1,79 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (Bio-Linux: A stable, portable scientific research Linux distribution) +[#]: via: (https://opensource.com/article/18/11/bio-linux) +[#]: author: (Matt Calverley https://opensource.com/users/mattcalverley) +[#]: url: ( ) + +Bio-Linux: A stable, portable scientific research Linux distribution +====== +Linux distro's integrated software approach offers powerful bioinformatic data analysis with a familiar look and feel. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LIFE_science.png?itok=WDKARWGV) + +Bio-Linux was introduced and detailed in a [Nature Biotechnology paper in July 2006][1]. 
The distribution was a group effort by the Natural Environment Research Council in the UK. As the creators and authors point out, the analysis demands of high-throughput “-omic” (genomic, proteomic, metabolomic) science has necessitated the development of integrated computing solutions to analyze the resultant mountains of experimental data. + +From this need, Bio-Linux was born. The distribution, [according to its creators][2], serves as a “free bioinformatics workstation platform that can be installed on anything from a laptop to a large server.” The current distro version, Bio-Linux 8, is built on an Ubuntu 14.04 LTS base. Thus, the general look and feel of Bio-Linux is similar to that of Ubuntu. + +In my own work as a research immunologist, I can attest to both the need for and success of the integrated software approach in Bio-Linux's design and development. Bio-Linux functions as a true turnkey solution to data pipeline requirements of modern science. As the website mentions, Bio-Linux includes [more than 250 pre-installed software packages][3], many of which are specific to the requirements of bioinformatic data analysis. + +The power of this approach becomes immediately evident when you try to duplicate the software installation process under another operating system. Integrating all software components and installing all required dependencies is immensely time-consuming, and in some instances is not even possible outside of the Linux operating system. The Bio-Linux distro provides a portable, stable, integrated environment with pre-installed software sufficient to begin a vast array of bioinformatic analysis tasks. + +By now you’re probably saying, “I’m sold—how do I get this amazing distro?” + +I’m glad you asked. I'll start by saying that there is excellent documentation on the Bio-Linux website. This [documentation][4] covers both installation instructions and a very thorough overview of using the distro. + +The distro can be installed and run locally, run off a CD/DVD or USB, installed on a server, or run out of a virtual machine environment. To begin the installation process for local installation, [download the disk image or ISO][5] for the Bio-Linux distro. The disk image is a 3.3GB file, and depending on your internet download speed, this may be a good time to get a cup of coffee or take a nice nap. + +Once the ISO has been downloaded, the Bio-Linux developers recommend using [UNetBootin][6], a freely available cross-platform software package used to make bootable USBs. There is a link provided for UNetBootin on the Bio-Linux website. I can attest to the effectiveness of UNetBootin in both Mac and Linux operating systems. + +On Unix family operating systems (Mac OS and Linux), it is also possible to make a bootable USB from the command line using the `dd `command: + +``` +sudo umount “USB location” + +sudo dd bs=4M if=”ISO location” of =”USB location” conv=fdatasync +``` +Regardless of the method you use, this might be another good time for a coffee break. + +At this point in my installation, UNetBootin appeared to freeze at the `squashfs` file transfer during bootable USB creation. However, a quick check of the Ubuntu disks application confirmed that the file was still being written to the USB. In other words, be patient—it takes quite some time to make the bootable USB. + +Once you’ve had your coffee and you have a finished USB in hand, you are ready to use Bio-Linux. 
As the Bio-Linux website points out, if you are trying to use a bootable USB with a Mac computer (particularly newer hardware versions), you may not be able to boot from the USB. There are workarounds, but they involve configuring the system for dual boot. Likewise, on Windows-based machines, it may be necessary to make changes to the boot order and possibly the secure boot settings for the machine from within BIOS. + +From this point, how you use the distro is up to you. You can run the distro from the USB to test it. You can install the distro to your computer. You can even follow the instructions on the Bio-Linux website to make a VM instance of the distro or run it on a server. Regardless of how you use it, you have a high-powered bioinformatic data analysis workstation at your disposal. + +Maybe you have a professional need for such a workstation, but even if you never use Bio-Linux as a professional researcher, it could provide a great resource for biology teaching professionals at all levels to introduce students to modern bioinformatics principles. For the price of a laptop and a USB, every school can have an in silico teaching resource to complement classroom lessons in the “-omics” age. Your only limitations are your creativity and the performance of your hardware. + +### More on Linux + +As an open source operating system with strong community support, the Linux kernel shares many of the strengths common to other successful open source software endeavors. Linux tends to be both stable and amenable to customization. It is also fairly hardware-agnostic, capable of running alongside other operating systems on a wide array of hardware configurations. In fact, installing Linux is a common method of regaining usability from dated hardware that is incapable of running other modern operating systems. Linux is also highly portable and can be run from any bootable external storage device, such as a USB drive, without the need to permanently install the operating system. + +It is this combination of stability, customizability, and portability that initially drew me to Linux. Each Linux operating system variant is referred to as a distribution (or distro), and it seems as though there is a Linux distribution for every imaginable computing scenario or desire. The options can actually be rather intimidating, and I suspect they may often discourage people from trying Linux. + +“How many different distributions can there possibly be?” you might wonder. If you have a few minutes, or even a few hours, have a look at [DistroWatch.com][7]. As its name implies, this site is devoted to the cataloging of all things Linux distribution-related. For visual learners, there is an amazing [Linux family tree][8] that really puts it into perspective. + +While [entire books][9] are devoted to the topic of Linux distributions, the differences often depend on what software is included in the base installation, how the software is managed, and graphical differences affecting the “look and feel” of the distribution. Certainly, there are also subtleties of hardware compatibility, speed, and stability. 
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/bio-linux + +作者:[Matt Calverley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mattcalverley +[b]: https://github.com/lujun9972 +[1]: https://www.nature.com/articles/nbt0706-801 +[2]: http://environmentalomics.org/bio-linux/ +[3]: http://environmentalomics.org/bio-linux-software-list/ +[4]: http://nebc.nerc.ac.uk/downloads/courses/Bio-Linux/bl8_latest.pdf +[5]: http://environmentalomics.org/bio-linux-download/ +[6]: https://unetbootin.github.io/ +[7]: https://distrowatch.com/ +[8]: https://distrowatch.com/images/other/distro-family-tree.png +[9]: https://www.amazon.com/Introducing-Linux-Distros-Dieguez-Castro/dp/1484213939 diff --git a/sources/tech/20181128 Building custom documentation workflows with Sphinx.md b/sources/tech/20181128 Building custom documentation workflows with Sphinx.md new file mode 100644 index 0000000000..7d9137fa40 --- /dev/null +++ b/sources/tech/20181128 Building custom documentation workflows with Sphinx.md @@ -0,0 +1,126 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (Building custom documentation workflows with Sphinx) +[#]: via: (https://opensource.com/article/18/11/building-custom-workflows-sphinx) +[#]: author: ([Mark Meyer](https://opensource.com/users/ofosos)) +[#]: url: ( ) + +Building custom documentation workflows with Sphinx +====== +Create documentation the way that works best for you. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/programming-code-keyboard-laptop.png?itok=pGfEfu2S) + +[Sphinx][1] is a popular application for creating documentation, similar to JavaDoc or Jekyll. However, Sphinx's reStructured Text input allows for a higher degree of customization than those other tools. + +This tutorial will explain how to customize Sphinx to suit your workflow. You can follow along using sample code on [GitHub][2]. + +### Some definitions + +Sphinx goes far beyond just enabling you to style text with predefined tags. It allows you to shape and automate your documentation by defining new roles and directives. A role is a single word element that usually is rendered inline in your documentation, while a directive can contain more complex content. These can be contained in a domain. + +A Sphinx domain is a collection of directives and roles as well as a few other things, such as an index definition. Your next Sphinx domain could be a specific programming language (Sphinx was developed to create Python's documentation). Or you might have a command line tool that implements the same command pattern (e.g., **tool \--args**) over and over. You can document it with a custom domain, adding directives and indexes along the way. + +Here's an example from our **recipe** domain: + +``` +The recipe contains `tomato` and `cilantro`. + +.. rcp:recipe:: TomatoSoup +  :contains: tomato cilantro salt pepper   + +  This recipe is a tasty tomato soup, combine all ingredients +  and cook. +``` + +Now that we've defined the recipe **TomatoSoup** , we can reference it anywhere in our documentation using the custom role **refef**. For example: + +``` +You can use the :rcp:reref:`TomatoSoup` recipe to feed your family. 
+``` + +This enables our recipes to show up in two indices: the first lists all recipes, and the second lists all recipes by ingredient. + +### What's in a domain? + +A Sphinx domain is a specialized container that ties together roles, directives, and indices, among other things. The domain has a name ( **rcp** ) to address its components in the documentation source. It announces its existence to Sphinx in the **setup()** method of the package. From there, Sphinx can find roles and directives, since these are part of the domain. + +This domain also serves as the central catalog of objects in this sample. Using initial data, it defines two variables, **objects** and **obj2ingredient**. These contain a list of all objects defined (all recipes) and a hash that maps a canonical ingredient name to the list of objects. + +``` +initial_data = { +    'objects': [],  # object list +    'obj2ingredient': {},  # ingredient -> [objects] +} +``` + +The way we name objects is common across our extension. For each object created, the canonical name is **rcp. .**, where **< typename>** is the Python type of the object, and **< objectname>** is the name the documentation writer gives the object. This enables the extension to use different object types that share the same name. + +Having a canonical name and central place for our objects is a huge advantage. Both our indices and our cross-referencing code use this feature. + +### Custom roles and directives + +In our example, **.. rcp:recipe::** indicates a custom directive. You might think it's overly specific to create custom syntax for these items, but it illustrates the degree of customization you can get in Sphinx. This provides rich markup that structures documents and leads to better docs. Specialization allows us to extract information from our docs. + +Our definition for this directive will provide minimal formatting, but it will be functional. + +``` +class RecipeNode(ObjectDescription): +  """A custom node that describes a recipe.""" + +  required_arguments = 1 + +  option_spec = { +    'contains': rst.directives.unchanged_required +  } +``` + +For this directive, **required_arguments** tells Sphinx to expect one parameter, the recipe name. **option_spec** lists the optional arguments, including their names. Finally, **has_content** specifies that there will be more reStructured Text as a child to this node. + +We also implement multiple methods: + + * **handle_signature()** implements parsing the signature of the directive and passes on the object's name and type to its superclass + * **add_taget_and_index()** adds a target (to link to) and an entry to the index for this node + + + +### Creating indices + +Both **IngredientIndex** and **RecipeIndex** are derived from Sphinx's **Index** class. They implement custom logic to generate a tuple of values that define the index. Note that **RecipeIndex** is a degenerate index that has only one entry. Extending it to cover more object types—and moving from a **RecipeDomain** to a **CookbookDomain** —is not yet part of the code. + +Both indices use the method **generate()** to do their work. This method combines the information from our domain, sorts it, and returns it in a list structure that will be accepted by Sphinx. See the [Sphinx Domain API][3] page for more information. + +The first time you visit the Domain API page, you may be a little overwhelmed by the structure. But our ingredient index is just a list of tuples, like **('tomato', 'TomatoSoup', 'test', 'rec-TomatoSoup',...)**. 
+ +### Referencing recipes + +Adding cross-references is not difficult (but it's also not a given). Add an **XRefRole** to the domain and implement the method **resolve_xref()**. Having a custom role to reference a type allows us to unambiguously reference any object, even if two objects have the same name. If you look at the parameters of **resolve_xref()** in **Domain** , you'll see **typ** and **target**. These define the cross-reference type and its target name. We'll use **target** to resolve our destination from our domain's **objects** because we currently have only one type of node. + +We can add the cross-reference role to **RecipeDomain** in the following way: + +``` +roles = { +    'reref': XRefRole() +} +``` + +There's nothing for us to implement. Defining a working **resolve_xref()** and attaching an **XRefRole** to the domain is all you need to do. + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/building-custom-workflows-sphinx + +作者:[Mark Meyer][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ofosos +[b]: https://github.com/lujun9972 +[1]: http://www.sphinx-doc.org/en/master/ +[2]: https://github.com/ofosos/sphinxrecipes +[3]: https://www.sphinx-doc.org/en/master/extdev/domainapi.html#sphinx.domains.Index.generate diff --git a/sources/tech/20181128 How to test your network with PerfSONAR.md b/sources/tech/20181128 How to test your network with PerfSONAR.md new file mode 100644 index 0000000000..9e9e66ef62 --- /dev/null +++ b/sources/tech/20181128 How to test your network with PerfSONAR.md @@ -0,0 +1,148 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (How to test your network with PerfSONAR) +[#]: via: (https://opensource.com/article/18/11/how-test-your-network-perfsonar) +[#]: author: (Jessica Repka https://opensource.com/users/jrepka) +[#]: url: ( ) + +How to test your network with PerfSONAR +====== +Set up a single-node configuration to measure your network performance. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/command_line_prompt.png?itok=wbGiJ_yg) + +[PerfSONAR][1] is a network measurement toolkit collection for testing and sharing data on end-to-end network perfomance. + +The overall benefit of using network measurement tools like PerfSONAR is they can find issues before they become a large elephant in the room that nobody wants to talk about. Specifically, with the right answers from the right tools, patching can become more stringent, network traffic can be shaped to speed connections across the board, and the network infrastructure design can be improved. + +PerfSONAR is licensed under the open source Apache 2.0 license, which makes it more affordable than most tools that do this type of analysis, a key advantage given constrained network infrastructure budgets. + +### PerfSONAR versions + +Several versions of PerfSONAR are available: + + * **Perfsonar-tools:** The command line client version meant for on-demand testing. + * **Perfsonar-testpoint:** Adds automated testing and central management testing to PerfSONAR-tools. It has an archiving feature, but the archive must be set to an external node. 
+ * **Perfsonar-core:** Includes everything in the testpoint software, but with local rather than external archiving. + * **Perfsonar-toolkit:** The core software; it includes a web UI with systemwide security settings. + * **Perfsonar-centralmanagement:** A completely separate version of PerfSONAR that uses mass grids of nodes to display results. It also has a feature to push out task templates to every node that is sending measurements back to the central host. + + + +This tutorial will use **PerfSonar-toolkit** ; the tools used in this software include [iPerf, iPerf3][2], and [OWAMP][3]. + +### Requirements + + * **Recommended operating system:** CentOS/RHEL7 + * **ISO:** [Downloading][4] the full installation ISO is the fastest way to get the software up and running. While there is a [Debian version][5], it is much harder and more complicated to use. + * **Minimum hardware requirements:** 2 cores and 4GB RAM + * **Recommended hardware:** 200GB HDD, 4 cores, 6GB of RAM + + + +### Installing and configuring PerfSONAR + +The installation is a quick CentOS install where you pick your timezone and configuration for the hard drive and user. I suggest using hard drive autoconfiguration, as you only need to choose "Install Toolkit" and follow the prompts from there. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image1_welcome.png) +Select your language. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image4_language.png) +Select a destination. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image3_destination.png) +After base installation, you see the Linux login screen. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image5a_linuxlogin.png) +After you log in, you are prompted to create a user ID and password to log into PerfSONAR's web frontend—make sure to remember your login information. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image5_createuser.png) +You're also asked to disable SSH access for root and create a new user for sudo; just follow the steps to create the new user. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image17_sudouser.png) +You can use a provisioning service to automatically provide an IP address and hostname. Otherwise, you will have to set the hostname (optional) and configure the IP address. + +### Log into the web frontend + +Once the base configuration is complete, you can log into the web frontend via **** or ****. The web frontend will appear with the name or IP address of the device you just set up, the list of tools used, a test result area, host information, global node directory, and on-demand testing. + +These options appear on the right-hand side of the web page. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image13_ondemandtesting.png) +![](https://opensource.com/sites/default/files/uploads/perfsonar_image1_frontend.png) + +For a single configuration mode, you will need another node to test with. To get one, click on the global node [Lookup Service Directory][6] link, which will bring you to a list of available nodes. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image20_lookupservicemap1.png) + +Pick an external node from the pScheduler Server list on the left. (I picked ESnet's Atlanta testing server.) 
+![](https://opensource.com/sites/default/files/uploads/perfsonar_image10_selectnode.png) + +Configure the node by clicking the Log In button and entering the user ID and password you created during base configuration. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image8_login.png) + +Next, choose Configuration. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image14_chooseconfig.png) + +This takes you to the configuration page, where you can add tests to other nodes by clicking Test, then clicking +Test. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image6_config.png) + +After you click +Test, you'll see a pop-up window with some drop-down options. For this tutorial, I used One-Way Active Measurement Protocol (OWAMP) testing for one-way latency against the ESnet Atlanta node that is IPv4. + +#### Side bar + + * The OWAMP measures unidirectional characteristics such as one-way delay and one-way loss. High-precision measurement of these one-way IP performance metrics became possible with wider availability of good time sources (such as GPS and CDMA). OWAMP enables the interoperability of these measurements. + * IPv4 is a fourth version of the Internet Protocol, which today is the main protocol to most of the internet. IPv4 protocol defines the rules for the operation of computer networks on the packet-exchange principle. This is a low-level protocol that is responsible for the connection between the nodes of the network on the basis of IP Addresses. + * The IPv4 node is a perfsonar testing node that only does network testing using the IPv4 protocols. The perfsonar testing node you connect to is the same application that is built in this documentation. + + + +The drop-down should use the server's main interface. Confirm that the test is enabled (the Test Status switch will be green) and click the OK button at the bottom of the window. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image9_addtest.png) + +Once you have added the test information, click the Save button at the bottom of the page. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image18_savetestinfo.png) + +You will see information about all of the scheduled tests and the hosts they are testing. You can add more hosts to the test by clicking the Settings icon in the Actions column. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image16_scheduledtests.png) + +The testing intervals are automatically set according to the recommended settings. If the test frequency increases, the tests will still run OK, but your hard drive may fill up with data more quickly. + +Once the test finishes, click View Public Dashboard to see the data that's returned. Note that it may take anywhere from five minutes to several hours to access the first sets of data. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image19_viewpublicdash.png) + +The public dashboard shows a high-level summary dataset. If you want more information, click Details. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image2_details.png) + +You'll see a larger graph and have the option to expand the graph over a year as data is collected. +![](https://opensource.com/sites/default/files/uploads/perfsonar_image7_expandedgraph.png) + +PerfSONAR is now up, running, and testing the network. You can also test with two nodes inside your network (or one internal network node and one external node). + +### What can you learn about your network? 
+ +In the time I've been using PerfSONAR, I've already uncovered the following issues: + + * Asymmetrical throughput + * Fiber outages + * Speed on circuit not meeting contractual agreement + * Internal network slowdowns due to misconfigurations + * Incorrect routes + + + +Have you used PerfSONAR or a similar tool? What benefits have you seen? + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/11/how-test-your-network-perfsonar + +作者:[Jessica Repka][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jrepka +[b]: https://github.com/lujun9972 +[1]: https://www.perfsonar.net/ +[2]: https://iperf.fr/ +[3]: http://software.internet2.edu/owamp/ +[4]: http://downloads.perfsonar.net/toolkit/pS-Toolkit-4.1.3-CentOS7-FullInstall-x86_64-2018Oct24.iso +[5]: http://docs.perfsonar.net/install_options.html# +[6]: http://stats.es.net/ServicesDirectory/ diff --git a/sources/tech/20181129 The Top Command Tutorial With Examples For Beginners.md b/sources/tech/20181129 The Top Command Tutorial With Examples For Beginners.md new file mode 100644 index 0000000000..df932ebb83 --- /dev/null +++ b/sources/tech/20181129 The Top Command Tutorial With Examples For Beginners.md @@ -0,0 +1,192 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (The Top Command Tutorial With Examples For Beginners) +[#]: via: (https://www.ostechnix.com/the-top-command-tutorial-with-examples-for-beginners/) +[#]: author: ([SK](https://www.ostechnix.com/author/sk/)) +[#]: url: ( ) + +The Top Command Tutorial With Examples For Beginners +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/top-command-720x340.png) + +As a Linux administrator, you may need to need to know some basic details of your Linux system, such as the currently running processes, average system load, cpu and memory usage etc., at some point. Thankfully, we have a command line utility called **“top”** to get such details. The top command is a well-known and most widely used utility to display dynamic real-time information about running processes in Unix-like operating systems. In this brief tutorial, we are going to see some common use cases of top command. + +### Top Command Examples + +**Monitor all processes** + +To start monitoring the running processes, simply run the top command without any options: + +``` +$ top +``` + +Sample output: + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/top-command-1.png) + +As you see in the above screenshot, top command displays the list of processes in multiple columns. Each column displays details such as pid, user, cpu usage, memory usage. Apart from the list of processes, you will also see the brief stats about average system load, number of tasks, cpu usage, memory usage and swap usage on the top. + +Here is the explanation of the parameters mentioned above. + + * **PID** – Process id of the task. + * **USER** – Username of the the task’s owner. + * **PR** – Priority of the task. + * **NI** – Nice value of the task. If the nice value is negative, the process gets higher priority. If the nice value is positive, the priority is low. Refer [**this guide**][1] to know more about nice. + * **VIRT** – Total amount of virtual memory used by the task. 
+ * **RES** – Resident Memory Size, the non-swapped physical memory a task is currently using. + * **SHR** – Shared Memory Size. The amount of shared memory used by a task. + * **S** – The status of the process (S=sleep R=running Z=zombie). + * **%CPU** – CPU usage. The task’s share of the elapsed CPU time since the last screen update, expressed as a percentage of total CPU time. + * **%MEM** – Memory Usage. A task’s currently resident share of available physical memory. + * **TIME+** – Total CPU time used by the task since it has started, precise to the hundredths of a second. + * **COMMAND** – Name of the running program. + + + +**Display path of processes** + +If you want to see the absolute path of the running processes, just press **‘c’**. Now you will see the actual path of the programs under the COMMAND column in the below screenshot. + +![][3] + +**Monitor processes owned by a specific user** + +If you run top command without any options, it will list all running processes owned by all users. How about displaying processes owned by a specific user? It is easy! To show the processes owned by a given user, for example **sk** , simply run: + +``` +$ top -u sk +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/11/top-command-3.png) + +**Do not show idle/zombie processes** + +Instead of viewing all processes, you can simply ignore the idle or zombie processes. The following command will not show any idle or zombie processes: + +``` +$ top -i +``` + +**Monitor processes with PID** + +If you know the PID of any processes, for example 21180, you can monitor that process using **-p** flag. + +``` +$ top -p 21180 +``` + +You can specify multiple PIDs with comma-separated values. + +**Monitor processes with process name** + +I don’t know PID, but know only the process name. How to monitor it? Simple! + +``` +$ top -p $(pgrep -d ',' firefox) +``` + +Here, **firefox** is the process name and **‘pgrep -d’** picks the respective PID from the process name. + +**Display processes by CPU usage** + +Sometimes, you might want to display processes sorted by CPU usage. If so, use the following command: + +``` +$ top -o %CPU +``` + +![][4] + +The processes with higher CPU usage will be displayed on the top. Alternatively, you sort the processes by CPU usage by pressing **SHIFT+p**. + +**Display processes by Memory usage** + +Similarly, to order processes by memory usage, the command would be: + +``` +$ top -o %MEM +``` + +**Renice processes** + +You can change the priority of a process at any time using the option **‘r’**. Run the top command and press **r** and type the PID of a process to change its priority. + +![][5] + +Here, **‘r’** refers renice. + +**Set update interval** + +Top program has an option to specify the delay between screen updates. If want to change the delay-time, say 5 seconds, run: + +``` +$ top -d 5 +``` + +The default value is **3.0** seconds. + +If you already started the top command, just press **‘d’** and type delay-time and hit ENTER key. + +![][6] + +**Set number of iterations (repetition)** + +By default, top command will keep running until you press **q** to exit. However, you can set the number of iterations after which top will end. For instance, to exit top command automatically after 5 iterations, run: + +``` +$ top -n 5 +``` + +**Kill running processes** + +To kill a running process, simply press **‘k’** and type its PID and hit ENTER key. + +![][7] + +Top command supports few other options as well. 
For example, press **‘z’** to switch between mono and color output. It will help you to easily highlight running processes. + +![][8] + +Press **‘h’** to view all available keyboard shortcuts and help section. + +To quit top, just press **q**. + +At this stage, you will have a basic understanding of top command. For more details, refer man pages. + +``` +$ man top +``` + +As you can see, using Top command to monitor the running processes isn’t that hard. Top command is easy to learn and use! + +And, that’s all for now. More good stuffs to come. Stay tuned! + +Cheers! + + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/the-top-command-tutorial-with-examples-for-beginners/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/change-priority-process-linux/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-2.png +[4]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-4.png +[5]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-8.png +[6]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-7.png +[7]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-5.png +[8]: http://www.ostechnix.com/wp-content/uploads/2018/11/top-command-6.png diff --git a/sources/tech/20181202 How To Customize The GNOME 3 Desktop.md b/sources/tech/20181202 How To Customize The GNOME 3 Desktop.md new file mode 100644 index 0000000000..91c16e4e99 --- /dev/null +++ b/sources/tech/20181202 How To Customize The GNOME 3 Desktop.md @@ -0,0 +1,266 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (How To Customize The GNOME 3 Desktop?) +[#]: via: (https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) +[#]: url: ( ) + +How To Customize The GNOME 3 Desktop? +====== + +We have got many emails from user to write an article about GNOME 3 desktop customization but we don’t get a time to write this topic. + +I was using Ubuntu operating system since long time in my primary laptop and i got bored so, i would like to test some other distro which is related to Arch Linux. + +I prefer to go with Majaro so, i have installed Manjaro 18.0 with GNOME 3 desktop in my laptop. + +I’m customizing my desktop, how i want it. So, i would like to take this opportunity to write up this article in detailed way to help others. + +This article helps others to customize their desktop without headache. + +I’m not going to include all my customization and i will be adding a necessary things which will be mandatory and useful for Linux desktop users. + +If you feel some tweak is missing in this article, i would request you to mention that in comment sections. It will be very helpful for other users. + +### 1) How to Launch Activities Overview in GNOME 3 Desktop? + +The Activities Overview will display all the running applications or launched/opened windows by clicking `Super Key` or by clicking `Activities` button in the topmost left corner. 
+ +It allows you to launch a new applications, switch windows, and move windows between workspaces. + +You can simply exit the Activities Overview by choosing the following any of the one actions like selecting a window, application or workspace, or by pressing the `Super Key` or `Esc Key`. + +Activities Overview Screenshot. +![][2] + +### 2) How to Resize Windows in GNOME 3 Desktop? + +The Launched windows can be maximized, unmaximized and snapped to one side of the screen (Left or Right) by using the following key combinations. + + * `Super Key+Down Arrow:` To unmaximize the window. + * `Super Key+Up Arrow:` To maximize the window. + * `Super Key+Right Arrow:` To fill a window in the right side of the half screen. + * `Super Key+Left Arrow:` To fill a window in the left side of the half screen + + + +Use `Super Key+Down Arrow` to unmaximize the window. +![][3] + +Use `Super Key+Up Arrow` to maximize the window. +![][4] + +Use `Super Key+Right Arrow` to fill a window in the right side of the half screen. +![][5] + +Use `Super Key+Left Arrow` to fill a window in the left side of the half screen. +![][6] + +This feature will help you to view two applications at a time a.k.a splitting screen. +![][7] + +### 3) How to Display Applications in GNOME 3 Desktop? + +Click on the `Show Application Grid` button in the Dash to display all the installed applications on your system. +![][8] + +### 4) How to Add Applications on Dash in GNOME 3 Desktop? + +To speed up your day to day activity you may want to add frequently used application into Dash or Drag the application launcher to the Dash. + +It will allow you to directly launch your favorite applications without searching them. To do so, simply right click on it and use the option `Add to Favorites`. +![][9] + +To remove a application launcher a.k.a favorite from Dash, either drag it from the Dash to the grid button or simply right click on it and use the option `Remove from Favorites`. +![][10] + +### 5) How to Switch Between Workspaces in GNOME 3 Desktop? + +Workspaces allow you to group windows together. It will helps you to segregate your work properly. If you are working on Multiple things and you want to group each work and related things separately then it will be very handy and perfect option for you. + +You can switch workspaces in two ways, Open the Activities Overview and select a workspace from the right-hand side or use the following key combinations. + + * Use `Ctrl+Alt+Up` Switch to the workspace above. + * Use `Ctrl+Alt+Down` Switch to the workspace below. + + + +![][11] + +### 6) How to Switch Between Applications (Application Switcher) in GNOME 3 Desktop? + +Use either `Alt+Tab` or `Super+Tab` to switch between applications. To launch Application Switcher, use either `Alt+Tab` or `Super+Tab`. + +Once launched, just keep holding the Alt or Super key and hit the tab key to move to the next application from left to right order. + +### 7) How to Add UserName to Top Panel in GNOME 3 Desktop? + +If you would like to add your UserName to Top Panel then install the following [Add Username to Top Panel][12] GNOME Extension. +![][13] + +### 8) How to Add Microsoft Bing’s wallpaper in GNOME 3 Desktop? + +Install the following [Bing Wallpaper Changer][14] GNOME shell extension to change your wallpaper every day to Microsoft Bing’s wallpaper. +![][15] + +### 9) How to Enable Night Light in GNOME 3 Desktop? 
+ +Night light app is one of the famous app which reduces strain on the eyes by turning your screen a dim yellow from blue light after sunset. + +It is available in smartphones. The other known apps for the same purpose are flux and **[redshift][16]**. + +To enable this, navigate to **System Settings** >> **Devices** >> **Displays** and turn Nigh Light on. +![][17] + +Once it’s enabled and status icon will be placed on the top panel. +![][18] + +### 10) How to Show the Battery Percentage in GNOME 3 Desktop? + +Battery percentage will show you the exact battery usage. To enable this follow the below steps. + +Start GNOME Tweaks >> **Top Bar** >> **Battery Percentage** and switch it on. +![][19] + +After modification you can able to see the battery percentage icon on the top panel. +![][20] + +### 11) How to Enable Mouse Right Click in GNOME 3 Desktop? + +By default right click is disabled on GNOME 3 desktop environment. To enable this follow the below steps. + +Start GNOME Tweaks >> **Keyboard & Mouse** >> Mouse Click Emulation and select “Area” option. +![][21] + +### 12) How to Enable Minimize On Click in GNOME 3 Desktop? + +Enable one-click minimize feature which will help us to minimize opened window without using minimize option. + +``` +$ gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize' +``` + +### 13) How to Customize Dock in GNOME 3 Desktop? + +If you would like to change your Dock similar to Deepin desktop or Mac then use the following set of commands. + +``` +$ gsettings set org.gnome.shell.extensions.dash-to-dock dock-position BOTTOM +$ gsettings set org.gnome.shell.extensions.dash-to-dock extend-height false +$ gsettings set org.gnome.shell.extensions.dash-to-dock transparency-mode FIXED +$ gsettings set org.gnome.shell.extensions.dash-to-dock dash-max-icon-size 50 +``` + +![][22] + +### 14) How to Show Desktop in GNOME 3 Desktop? + +By default `Super Key+D` shortcut doesn’t show your desktop. To configure this follow the below steps. + +Settings >> **Devices** >> **Keyboard** >> Click **Hide all normal windows** under Navigation then Press `Super Key+D` finally hit `Set` button to enable it. +![][23] + +### 15) How to Customize Date and Time Format? + +By default GNOME 3 shows date and time with `Sun 04:48`. It’s not clear and if you want to get the output with following format `Sun Dec 2 4:49 AM` follow the below steps. + +**For Date Modification:** Start GNOME Tweaks >> **Top Bar** and enable `Weekday` option under Clock. +![][24] + +**For Time Modification:** Settings >> **Details** >> **Date & Time** then choose `AM/PM` option in the time format. +![][25] + +After modification you can able to see the date and time format same as below. +![][26] + +### 16) How to Permanently Disable Unused Services in Boot? + +In my case, i’m not going to use **Bluetooth** & **cpus a.k.a Printer service**. Hence, disabling these services on my laptop. To disable services on Arch based systems use **[Pacman Package Manager][27]**. +For Bluetooth + +``` +$ sudo systemctl stop bluetooth.service +$ sudo systemctl disable bluetooth.service +$ sudo systemctl mask bluetooth.service +$ systemctl status bluetooth.service +``` + +For cups + +``` +$ sudo systemctl stop org.cups.cupsd.service +$ sudo systemctl disable org.cups.cupsd.service +$ sudo systemctl mask org.cups.cupsd.service +$ systemctl status org.cups.cupsd.service +``` + +Finally verify whether these services are disabled or not in the boot using the following command. 
If you want to double confirm this, you can reboot once and check the same. Navigate to the following link to know more about **[systemctl][28]** usage, + +``` +$ systemctl list-unit-files --type=service | grep enabled +[email protected] enabled +dbus-org.freedesktop.ModemManager1.service enabled +dbus-org.freedesktop.NetworkManager.service enabled +dbus-org.freedesktop.nm-dispatcher.service enabled +display-manager.service enabled +gdm.service enabled +[email protected] enabled +linux-module-cleanup.service enabled +ModemManager.service enabled +NetworkManager-dispatcher.service enabled +NetworkManager-wait-online.service enabled +NetworkManager.service enabled +systemd-fsck-root.service enabled-runtime +tlp-sleep.service enabled +tlp.service enabled +``` + +### 17) Install Icons & Themes in GNOME 3 Desktop? + +Bunch of Icons and Themes are available for GNOME Desktop so, choose the desired **[GTK Themes][29]** and **[Icons Themes][30]** for you. To configure this further, navigate to the below links which makes your Desktop more elegant. + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-customize-the-gnome-3-desktop/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[2]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-overview-screenshot.jpg +[3]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-unmaximize-the-window.jpg +[4]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-maximize-the-window.jpg +[5]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-right-side.jpg +[6]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-fill-a-window-left-side.jpg +[7]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-activities-split-screen.jpg +[8]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-applications.jpg +[9]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-applications-on-dash.jpg +[10]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-remove-applications-from-dash.jpg +[11]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-workspaces-screenshot.jpg +[12]: https://extensions.gnome.org/extension/1108/add-username-to-top-panel/ +[13]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-username-to-top-panel.jpg +[14]: https://extensions.gnome.org/extension/1262/bing-wallpaper-changer/ +[15]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-add-microsoft-bings-wallpaper.jpg +[16]: https://www.2daygeek.com/install-redshift-reduce-prevent-protect-eye-strain-night-linux/ +[17]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light.jpg +[18]: 
https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-night-light-1.jpg +[19]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage.jpg +[20]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-display-battery-percentage-1.jpg +[21]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-mouse-right-click.jpg +[22]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-dock-customization.jpg +[23]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-enable-show-desktop.jpg +[24]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date.jpg +[25]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-time.jpg +[26]: https://www.2daygeek.com/wp-content/uploads/2018/12/how-to-customize-the-gnome-3-desktop-customize-date-time.jpg +[27]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[28]: https://www.2daygeek.com/sysvinit-vs-systemd-cheatsheet-systemctl-command-usage/ +[29]: https://www.2daygeek.com/category/gtk-theme/ +[30]: https://www.2daygeek.com/category/icon-theme/ diff --git a/sources/tech/20181203 ANGRYsearch - Quick Search GUI Tool for Linux.md b/sources/tech/20181203 ANGRYsearch - Quick Search GUI Tool for Linux.md new file mode 100644 index 0000000000..7c8952549f --- /dev/null +++ b/sources/tech/20181203 ANGRYsearch - Quick Search GUI Tool for Linux.md @@ -0,0 +1,108 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (ANGRYsearch – Quick Search GUI Tool for Linux) +[#]: via: (https://itsfoss.com/angrysearch/) +[#]: author: (John Paul https://itsfoss.com/author/john/) +[#]: url: ( ) + +ANGRYsearch – Quick Search GUI Tool for Linux +====== + +A search application is one of the most important tools you can have on your computer. Most are slow to indexes your system and find results. However, today we will be looking at an application that can display results as you type. Today, we will be looking at ANGRYsearch. + +### What is ANGRYsearch? + +![][1] +Newly installed ANGRYsearch + +[ANGRYsearch][2] is a Python-based application that delivers results as you type your search query. The overall idea and design of the application are both inspired by [Everything][3] a search tool for Windows. (I discovered Everything ad couple of years ago and install it wherever I use Windows.) + +ANGRYsearch is able to display the search results so quickly because it only indexes filenames. After you install ANGRYsearch, you create a database of filenames by indexing your system. ANGRYsearch then quickly filters filenames as you type your query. + +Even though there is not much to ANGRYsearch, there are several things you can do to customize the experience. First, ANGRYsearch has two different display modes: lite and full. Lite mode only shows the filename and path. Full mode displays filename, path, size, and date of the last modification. Full mode, obviously, takes longer to display. The default is lite mode. In order to switch to full mode, you need to edit the config file at `~/.config/angrysearch/angrysearch.conf`. In that file change the `angrysearch_lite` value to false. + +ANGRYsearch also has three different search modes: fast, slow, and regex. 
Fast mode displays filenames that start with your search term. For example, if you had a folder full of the latest releases of a bunch of Linux distros and you searched “Ubuntu”, ANGRYsearch would display Ubuntu, Ubuntu Mate, Ubuntu Budgie, but not Kubuntu, Xubuntu, or Lubuntu. Fast mode is on by default and can be turned off by unchecking the checkbox next to the “update” button. Slow mode is slightly slower (obviously), but it will display files that have your search term anywhere in their name. In the previous example, ANGRYsearch would show all Ubuntu distros. Regex mode is the slowest and most precise. It uses [regular expressions][4] and is case insensitive. Regex mode is activated by pressing F8. + +You can also tell ANGRYsearch to ignore certain folders when it indexes your system. Just click the “update” button and enter the names of the folders you want to be ignored in the space provided. You can also choose from several icon themes, though it doesn’t make that much difference. + +![][5]Fast mode results + +### Installing ANGRYsearch on Linux + +ANGRYsearch is available in the [Arch User Repository][6]. It has also been packaged for [Fedora and openSUSE][7]. + +To install on other distros, follow these instructions. Instructions are written for a Debian or Ubuntu based system. + +ANGRYsearch depends on `python3-pyqt5` and`xdg-utils` so you will need to install them first. Most distros have `xdg-utils`already installed. + +`sudo apt install python3-pyqt5` + +Next. download the latest version (1.0.1). + +`wget https://github.com/DoTheEvo/ANGRYsearch/archive/v1.0.1.zip` + +Now, unzip the archive file. + +`unzip v1.0.1.zip` + +Next, we will navigate to the new folder (ANGRYsearch-1.0.1) and run the installer. + +`cd ANGRYsearch-1.0.1` + +`chmod +x install.sh` + +`sudo ./install.sh` + +The installation process is very quick, so don’t be surprised when a new command line is displayed as soon as you hit `Enter`. + +The first time that you start ANGRYsearch, you will need to index your system. ANGRYsearch does not automatically keep its database updated. You can use `crontab` to schedule a system scan. + +To open a text editor to create a new cronjob, use `crontab -e`. To make sure that the ANGRYsearch database is updated every 6 hours, use this command `0 */6 选题模板.txt 中文排版指北.md core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LCTT翻译规范.md LICENSE published README.md scripts sources translated 选题模板.txt 中文排版指北.md core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LCTT翻译规范.md LICENSE published README.md scripts sources translated 选题模板.txt 中文排版指北.md core.md Dict.md lctt2014.md lctt2016.md lctt2018.md LCTT翻译规范.md LICENSE published README.md scripts sources translated /usr/share/angrysearch/angrysearch_update_database.py`. `crontab` does not run the job if it is powered off when the timer does off. In some case, you may need to manually update the database, but it should not take long. + +![][8]ANGRYsearch update/options menu + +### Experience + +In the past, I was always frustrated by how painfully slow it was to search my computer. I knew that Windows had the Everything app, but I thought Linux out of luck. It didn’t even occur to me to look for something similar on Linux. I’m glad I accidentally stumbled upon ANGRYsearch. + +I know there will be quite a few people complaining that ANGRYsearch only searches filenames, but most of the time that is all I need. Thankfully, most of the time I only need to remember part of the name to find what I am looking for. 
+ +The only thing that annoys me about ANGRYsearch is that fact that it does not automatically update its database. You’d think there would be a way for the installer to create a cron job when you install it. + +![][9]Slow mode results + +### Final Thoughts + +Since ANGRYsearch is basically a Linux port of one of my favorite Windows apps, I’m pretty happy with it. I plan to install it on all my systems going forward. + +I know that I have ragged on other Linux apps for not being packaged for easy install, but I can’t do the same for ANGRYsearch. The installation process is pretty easy. I would definitely recommend it for Linux noobs. + +Have you ever used [ANGRYsearch][2]? If not, what is your favorite Linux search application? Let us know in the comments below. + +If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][10]. + +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/angrysearch/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/angrysearch3.jpg?resize=800%2C627&ssl=1 +[2]: https://github.com/dotheevo/angrysearch/ +[3]: https://www.voidtools.com/ +[4]: http://www.aivosto.com/articles/regex.html +[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/angrysearch1.jpg?resize=800%2C627&ssl=1 +[6]: https://aur.archlinux.org/packages/angrysearch/ +[7]: https://software.opensuse.org/package/angrysearch +[8]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/angrysearch2.jpg?resize=800%2C626&ssl=1 +[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/angrysearch4.jpg?resize=800%2C627&ssl=1 +[10]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20181204 4 Unique Terminal Emulators for Linux.md b/sources/tech/20181204 4 Unique Terminal Emulators for Linux.md new file mode 100644 index 0000000000..04110b670e --- /dev/null +++ b/sources/tech/20181204 4 Unique Terminal Emulators for Linux.md @@ -0,0 +1,169 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (4 Unique Terminal Emulators for Linux) +[#]: via: (https://www.linux.com/blog/learn/2018/12/4-unique-terminals-linux) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +4 Unique Terminal Emulators for Linux +====== +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_main.jpg?itok=e6av-5VO) +Let’s face it, if you’re a Linux administrator, you’re going to work with the command line. To do that, you’ll be using a terminal emulator. Most likely, your distribution of choice came pre-installed with a default terminal emulator that gets the job done. But this is Linux, so you have a wealth of choices to pick from, and that ideology holds true for terminal emulators as well. In fact, if you open up your distribution’s GUI package manager (or search from the command line), you’ll find a trove of possible options. Of those, many are pretty straightforward tools; however, some are truly unique. + +In this article, I’ll highlight four such terminal emulators, that will not only get the job done, but do so while making the job a bit more interesting or fun. 
So, let’s take a look at these terminals. + +### Tilda + +[Tilda][1] is designed for Gtk and is a member of the cool drop-down family of terminals. That means the terminal is always running in the background, ready to drop down from the top of your monitor (such as Guake and Yakuake). What makes Tilda rise above many of the others is the number of configuration options available for the terminal (Figure 1). +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_1.jpg?itok=bra6qb6X) + +Tilda can be installed from the standard repositories. On a Ubuntu- (or Debian-) based distribution, the installation is as simple as: + +``` +sudo apt-get install tilda -y +``` + +Once installed, open Tilda from your desktop menu, which will also open the configuration window. Configure the app to suit your taste and then close the configuration window. You can then open and close Tilda by hitting the F1 hotkey. One caveat to using Tilda is that, after the first run, you won’t find any indication as to how to reach the configuration wizard. No worries. If you run the command tilda -C it will open the configuration window, while still retaining the options you’ve previously set. + +Available options include: + + * Terminal size and location + + * Font and color configurations + + * Auto Hide + + * Title + + * Custom commands + + * URL Handling + + * Transparency + + * Animation + + * Scrolling + + * And more + + + + +What I like about these types of terminals is that they easily get out of the way when you don’t need them and are just a button click away when you do. For those that hop in and out of the terminal, a tool like Tilda is ideal. + +### Aterm + +Aterm holds a special place in my heart, as it was one of the first terminals I used that made me realize how flexible Linux was. This was back when AfterStep was my window manager of choice (which dates me a bit) and I was new to the command line. What Aterm offered was a terminal emulator that was highly customizable, while helping me learn the ins and outs of using the terminal (how to add options and switches to a command). “How?” you ask. Because Aterm never had a GUI for customization. To run Aterm with any special options, it had to run as a command. For example, say you want to open Aterm with transparency enabled, green text, white highlights, and no scroll bar. To do this, issue the command: + +``` +aterm -tr -fg green -bg white +xb +``` + +The end result (with the top command running for illustration) would look like that shown in Figure 2. + +![Aterm][3] + +Figure 2: Aterm with a few custom options. + +[Used with permission][4] + +Of course, you must first install Aterm. Fortunately, the application is still found in the standard repositories, so installing on the likes of Ubuntu is as simple as: + +``` +sudo apt-get install aterm -y +``` + +If you want to always open Aterm with those options, your best bet is to create an alias in your ~/.bashrc file like so: + +``` +alias=”aterm -tr -fg green -bg white +sb” +``` + +Save that file and, when you issue the command aterm, it will always open with those options. For more about creating aliases, check out [this tutorial][5]. + +### Eterm + +Eterm is the second terminal that really showed me how much fun the Linux command line could be. Eterm is the default terminal emulator for the Enlightenment desktop. When I eventually migrated from AfterStep to Enlightenment (back in the early 2000s), I was afraid I’d lose out on all those cool aesthetic options. 
### Eterm

Eterm is the second terminal that really showed me how much fun the Linux command line could be. Eterm is the default terminal emulator for the Enlightenment desktop. When I eventually migrated from AfterStep to Enlightenment (back in the early 2000s), I was afraid I'd lose out on all those cool aesthetic options. That turned out not to be the case. In fact, Eterm offered plenty of unique options, while making the task easier with a terminal toolbar. With Eterm, you can easily select from a large number of background images (should you want one - Figure 3) by selecting from the Background > Pixmap menu entry.

![Eterm][7]

Figure 3: Selecting from one of the many background images for Eterm.

[Used with permission][4]

There are a number of other options to configure (such as font size, map alerts, toggle scrollbar, brightness, contrast, and gamma of background images, and more). The one thing you want to make sure of, after you've configured Eterm to suit your tastes, is to click Eterm > Save User Settings (otherwise, all settings will be lost when you close the app).

Eterm can be installed from the standard repositories, with a command such as:

```
sudo apt-get install eterm
```

### Extraterm

[Extraterm][8] should probably win a few awards for coolest feature set of any terminal window project available today. The most unique feature of Extraterm is the ability to wrap commands in color-coded frames (blue for successful commands and red for failed commands - Figure 4).

![Extraterm][10]

Figure 4: Extraterm showing two failed command frames.

[Used with permission][4]

When you run a command, Extraterm will wrap the command in an isolated frame. If the command succeeds, the frame will be outlined in blue. Should the command fail, the frame will be outlined in red.

Extraterm cannot be installed via the standard repositories. In fact, the only way to run Extraterm on Linux (at the moment) is to [download the precompiled binary][11] from the project's GitHub page, extract the file, change into the newly created directory, and issue the command ./extraterm.

Once the app is running, to enable frames you must first enable bash integration. To do that, open Extraterm and then right-click anywhere in the window to reveal the popup menu. Scroll until you see the entry for Inject Bash shell Integration (Figure 5). Select that entry and you can then begin using the frames option.

![Extraterm][13]

Figure 5: Injecting Bash integration for Extraterm.

[Used with permission][4]

If you run a command and don't see a frame appear, you probably have to create a new frame for the command (as Extraterm only ships with a few default frames). To do that, click on the Extraterm menu button (three horizontal lines in the top right corner of the window), select Settings, and then click the Frames tab. In this window, scroll down and click the New Rule button. You can then add a command you want to work with the frames option (Figure 6).

![frames][15]

Figure 6: Adding a new rule for frames.

[Used with permission][4]

If, after this, you still don't see frames appearing, download the extraterm-commands file from the [Download page][11], extract the file, change into the newly created directory, and issue the command sh setup_extraterm_bash.sh. That should enable frames for Extraterm.

There are plenty more options available for Extraterm. I'm convinced that once you start playing around with this new take on the terminal window, you won't want to go back to the standard terminal. Hopefully the developer will make this app available to the standard repositories soon (as it could easily become one of the most popular terminal windows in use).
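For reference, here are those manual steps collected in one place. The archive names below are placeholders, since they change from release to release; substitute the names of the files you actually downloaded:

```
# Binary package: extract, enter the new directory, and launch (placeholder name)
unzip extraterm-linux-x64.zip
cd extraterm-*/
./extraterm

# Commands package: only needed if frames still do not appear (placeholder name)
unzip extraterm-commands.zip
cd extraterm-commands-*/
sh setup_extraterm_bash.sh
```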
### And Many More

As you probably expected, there are quite a lot of terminals available for Linux. These four represent (at least for me) four unique takes on the task, each of which does a great job of helping you run the commands every Linux admin needs to run. If you aren't satisfied with one of these, give your package manager a look to see what's available. You are sure to find something that works perfectly for you.

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/2018/12/4-unique-terminals-linux

作者:[Jack Wallen][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/users/jlwallen
[b]: https://github.com/lujun9972
[1]: http://tilda.sourceforge.net/tildadoc.php
[2]: https://www.linux.com/files/images/terminals2jpg
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_2.jpg?itok=gBkRLwDI (Aterm)
[4]: https://www.linux.com/licenses/category/used-permission
[5]: https://www.linux.com/blog/learn/2018/12/aliases-diy-shell-commands
[6]: https://www.linux.com/files/images/terminals3jpg
[7]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_3.jpg?itok=RVPTJAtK (Eterm)
[8]: http://extraterm.org
[9]: https://www.linux.com/files/images/terminals4jpg
[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_4.jpg?itok=2n01qdwO (Extraterm)
[11]: https://github.com/sedwards2009/extraterm/releases
[12]: https://www.linux.com/files/images/terminals5jpg
[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_5.jpg?itok=FdaE1Mpf (Extraterm)
[14]: https://www.linux.com/files/images/terminals6jpg
[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/terminals_6.jpg?itok=lQ1Zv5wq (frames)
diff --git a/sources/tech/20181206 How to view XML files in a web browser.md b/sources/tech/20181206 How to view XML files in a web browser.md
new file mode 100644
index 0000000000..6060c792e2
--- /dev/null
+++ b/sources/tech/20181206 How to view XML files in a web browser.md
@@ -0,0 +1,109 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to view XML files in a web browser)
[#]: via: (https://opensource.com/article/18/12/xml-browser)
[#]: author: (Greg Pittman https://opensource.com/users/greg-p)

How to view XML files in a web browser
======
Turn XML files into something more useful.
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh)

Once you learn that HTML is a form of XML, you might wonder what would happen if you tried to view an XML file in a browser. The results are quite disappointing—Firefox shows you a banner at the top of the page that says, "This XML file does not appear to have any style information associated with it. The document tree is shown below." The document tree looks like the file would look in an editor:
![](https://opensource.com/sites/default/files/uploads/xml_menu.png)
This is the beginning of the **menu.xml** file for the online manual that comes with [Scribus][1], to which I'm a contributor. Although you see blue text, the entries are not clickable links.
I wanted to be able to view this in a regular browser, since sometimes I need to go back and forth from the canvas in Scribus to the manual to figure out how to do something (maybe to see if I need to edit the manual to straighten out some misinformation or to add some missing information).

The way to help a browser know what to do with these XML tags is by using XSLT—Extensible Stylesheet Language Transformations. In a broad sense, you could use XSLT to transform XML to a variety of outputs, or even HTML to XML. Here I want to use it to present the XML tags to a browser as suitable HTML.

One slight modification needs to happen to the XML file:

![](https://opensource.com/sites/default/files/uploads/xml_modified-menu.png)

Adding this second line to the file tells the browser to look for a file named **scribus-manual.xsl** for the style information. The more important part is to create this XSL file. Here is the core of **scribus-manual.xsl** for the Scribus manual, with the deeper nesting levels elided:

```
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:template match="/">
<html>
<head>
<title>Scribus Online Manual</title>
<style type="text/css">
h2 { font-family: sans-serif; }
h3 { font-family: sans-serif; }
h4 { font-family: sans-serif; }
</style>
</head>
<body>
<!-- graphical heading, built from images that ship with the
     documentation (image name is illustrative) -->
<table width="100%">
<tr>
<td><img src="images/docheader.png"/></td>
</tr>
</table>
<h2>Scribus Online Manual</h2>
<!-- menu.xml nests submenuitem elements; the text and file
     attribute names follow the structure shown above -->
<xsl:for-each select="menu/submenuitem">
  <h3><a href="{@file}"><xsl:value-of select="@text"/></a></h3>
  <xsl:for-each select="submenuitem">
    <h4>• <a href="{@file}"><xsl:value-of select="@text"/></a></h4>
    <xsl:for-each select="submenuitem">
      <p>• <a href="{@file}"><xsl:value-of select="@text"/></a></p>
    </xsl:for-each>
  </xsl:for-each>
</xsl:for-each>
</body>
</html>
</xsl:template>
</xsl:stylesheet>
```

This looks a lot more like HTML, and you can see it contains a number of HTML tags. After some preliminary tags and some particulars about displaying H2, H3, and H4 tags, you see a Table tag. This adds a graphical heading at the top of the page and uses some images already in the documentation files.

After this, you get into the process of dissecting the various **submenuitem** tags, trying to create the nested listing structure as it appears in Scribus when you view the manual. One feature I did not try to duplicate is the ability to collapse and expand **submenuitem** areas. As you can imagine, it takes some time to sort through the number of nested lists you need to create, but when I finished, here is how it looked:

![](https://opensource.com/sites/default/files/uploads/xml_scribusmenuinbrowser.png)

This minimal editing to **menu.xml** does not interfere with Scribus' ability to show the manual in its own browser. I put this modified **menu.xml** file and the **scribus-manual.xsl** in the English documentation folder for 1.5.x versions of Scribus, so anyone using these versions can simply point their browser to the **menu.xml** file and it should show up just like you see above.

A much bigger chore I took on a few years ago was to create a version of the ICD10 (International Classification of Diseases, version 10) when it came out. Many changes were made from the previous version (ICD9) to 10. These are important since these codes must be used for diagnostic purposes in medical practice. You can easily download XML files from the US [Centers for Medicare and Medicaid][2] website since it is public information, but—just as with the Scribus manual—these files are hard to use.

Here is the beginning of the tabular listing of diseases:

![](https://opensource.com/sites/default/files/uploads/xml_tabular_begin.png)

One of the features I created was the color coding used in the listing shown here:

![](https://opensource.com/sites/default/files/uploads/xml_tabular_body.png)

As with **menu.xml**, the only editing I did in this **Tabular.xml** file was to add **`<?xml-stylesheet type="text/xsl" href="tabular.xsl"?>`** as the second line of the file. I started this project with the 2014 version, and I was quite pleased to find that the original **tabular.xsl** stylesheet worked perfectly when the 2016 version came out, which is the last one I worked on. The **Tabular.xml** file is 8.4MB, quite large for a plaintext file. It takes a few seconds to load into a browser, but once it's loaded, navigation is fast.

While you may not often have to deal with an XML file in this way, if you do, I hope this article shows that your file can easily be turned into something much more usable.
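Incidentally, you do not need a browser just to check that a stylesheet works. If the xsltproc utility from libxslt is installed (most distributions package it), you can apply the XSL to the XML on the command line and inspect the resulting HTML directly:

```
xsltproc scribus-manual.xsl menu.xml > menu.html
```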
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/xml-browser + +作者:[Greg Pittman][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/greg-p +[b]: https://github.com/lujun9972 +[1]: https://www.scribus.net/ +[2]: https://www.cms.gov/ diff --git a/sources/tech/20181207 5 Screen Recorders for the Linux Desktop.md b/sources/tech/20181207 5 Screen Recorders for the Linux Desktop.md new file mode 100644 index 0000000000..4dd47e948a --- /dev/null +++ b/sources/tech/20181207 5 Screen Recorders for the Linux Desktop.md @@ -0,0 +1,177 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 Screen Recorders for the Linux Desktop) +[#]: via: (https://www.linux.com/blog/intro-to-linux/2018/12/5-screen-recorders-linux-desktop) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +5 Screen Recorders for the Linux Desktop +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screen-record.png?itok=tKWx29k8) + +There are so many reasons why you might need to record your Linux desktop. The two most important are for training and for support. If you are training users, a video recording of the desktop can go a long way to help them understand what you are trying to impart. Conversely, if you’re having trouble with one aspect of your Linux desktop, recording a video of the shenanigans could mean the difference between solving the problem and not. But what tools are available for the task? Fortunately, for every Linux user (regardless of desktop), there are options available. I want to highlight five of my favorite screen recorders for the Linux desktop. Among these five, you are certain to find one that perfectly meets your needs. I will only be focusing on those screen recorders that save as video. What video format you prefer may or may not dictate which tool you select. + +And, without further ado, let’s get on with the list. + +### Simple Screen Recorder + +I’m starting out with my go-to screen recorder. I use [Simple Screen Recorder][1] on a daily basis, and it never lets me down. This particular take on the screen recorder is available for nearly every flavor of Linux and is, as the name implies, very simple to use. With Simple Screen Recorder you can select a single window, a portion of the screen, or the entire screen to record. One of the best features of Simple Screen Recorder is the ability to save profiles (Figure 1), which allows you to configure the input for a recording (including scaling, frame rate, width, height, left edge and top edge spacing, and more). By saving profiles, you can easily use a specific profile to meet a unique need, without having to go through the customization every time. This is handy for those who do a lot of screen recording, with different input variables for specific jobs. + +![Simple Screen Recorder ][3] + +Figure 1: Simple Screen Recorder input profile window. 
[Used with permission][4]

Simple Screen Recorder also:

  * Records audio input

  * Allows you to pause and resume recording

  * Offers a preview during recording

  * Allows for the selection of video containers and codecs

  * Adds timestamp to file name (optional)

  * Includes hotkey recording and sound notifications

  * Works well on slower machines

  * And much more




Simple Screen Recorder is one of the most reliable screen recording tools I have found for the Linux desktop. Simple Screen Recorder can be installed from the standard repositories on many desktops, or via easy to follow instructions on the [application download page][5].

### Gtk-recordmydesktop

The next entry, [gtk-recordmydesktop][6], doesn't give you nearly the options found in Simple Screen Recorder, but it does offer a command line component (for those who prefer not working with a GUI). The simplicity that comes along with this tool also means you are limited to a specific video output format (.ogv). That doesn't mean gtk-recordmydesktop is without appeal. In fact, there are a few features that make this option in the genre fairly appealing. First and foremost, it's very simple to use. Second, the record window automatically gets out of your way while you record (as opposed to Simple Screen Recorder, where you need to minimize the recording window when recording full screen). Another feature found in gtk-recordmydesktop is the ability to have the recording follow the mouse (Figure 2).

![gtk-recordmydesktop][8]

Figure 2: Some of the options for gtk-recordmydesktop.

[Used with permission][4]

Unfortunately, the follow the mouse feature doesn't always work as expected, so chances are you'll be using the tool without this interesting option. In fact, if you opt to go the gtk-recordmydesktop route, you should understand the GUI frontend isn't nearly as reliable as the command line version of the tool. From the command line, you could record a specific position of the screen like so:

```
recordmydesktop -x X_POS -y Y_POS --width WIDTH --height HEIGHT -o FILENAME.ogv
```

where:

  * X_POS is the offset on the X axis

  * Y_POS is the offset on the Y axis

  * WIDTH is the width of the screen to be recorded

  * HEIGHT is the height of the screen to be recorded

  * FILENAME is the name of the file to be saved




To find out more about the command line options, issue the command man recordmydesktop and read through the manual page.

### Kazam

If you're looking for a bit more than just a recorded screencast, you might want to give Kazam a go. Not only can you record a standard screen video (with the usual—albeit limited amount of—bells and whistles), you can also take screenshots and even broadcast video to YouTube Live (Figure 3).

![Kazam][10]

Figure 3: Setting up YouTube Live broadcasting in Kazam.

[Used with permission][4]

Kazam falls in line with gtk-recordmydesktop, when it comes to features. In other words, it's slightly limited in what it can do. However, that doesn't mean you shouldn't give Kazam a go. In fact, Kazam might be one of the best screen recorders out there for new Linux users, as this app is pretty much point and click all the way. But if you're looking for serious bells and whistles, look away.
+ +The version of Kazam, with broadcast goodness, can be found in the following repository: + +``` +ppa:sylvain-pineau/kazam +``` + +For Ubuntu (and Ubuntu-based distributions), install with the following commands: + +``` +sudo apt-add-repository ppa:sylvain-pineau/kazam + +sudo apt-get update + +sudo apt-get install kazam -y +``` + +### Vokoscreen + +The [Vokoscreen][11] recording app is for new-ish users who need more options. Not only can you configure the output format and the video/audio codecs, you can also configure it to work with a webcam (Figure 4). + +![Vokoscreen][13] + +Figure 4: Configuring a web cam for a Vokoscreen screen recording. + +[Used with permission][4] + +As with most every screen recording tool, Vokoscreen allows you to specify what on your screen to record. You can record the full screen (even selecting which display on multi-display setups), window, or area. Vokoscreen also allows you to select a magnification level (200x200, 400x200, or 600x200). The magnification level makes for a great tool to highlight a specific section of the screen (the magnification window follows your mouse). + +Like all the other tools, Vokoscreen can be installed from the standard repositories or cloned from its [GitHub repository][14]. + +### OBS Studio + +For many, [OBS Studio][15] will be considered the mack daddy of all screen recording tools. Why? Because OBS Studio is as much a broadcasting tool as it is a desktop recording tool. With OBS Studio, you can broadcast to YouTube, Smashcast, Mixer.com, DailyMotion, Facebook Live, Restream.io, LiveEdu.tv, Twitter, and more. In fact, OBS Studio should seriously be considered the de facto standard for live broadcasting the Linux desktop. + +Upon installation (the software is only officially supported for Ubuntu Linux 14.04 and newer), you will be asked to walk through an auto-configuration wizard, where you setup your streaming service (Figure 5). This is, of course, optional; however, if you’re using OBS Studio, chances are this is exactly why, so you won’t want to skip out on configuring your default stream. + +![OBS Studio][17] + +Figure 5: Configuring your streaming service for OBS Studio. + +[Used with permission][4] + +I will warn you: OBS Studio isn’t exactly for the faint of heart. Plan on spending a good amount of time getting the streaming service up and running and getting up to speed with the tool. But for anyone needing such a solution for the Linux desktop, OBS Studio is what you want. Oh … it can also record your desktop screencast and save it locally. + +### There’s More Where That Came From + +This is a short list of screen recording solutions for Linux. Although there are plenty more where this came from, you should be able to fill all your desktop recording needs with one of these five apps. + +Learn more about Linux through the free ["Introduction to Linux" ][18]course from The Linux Foundation and edX. 
+ +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/intro-to-linux/2018/12/5-screen-recorders-linux-desktop + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: http://www.maartenbaert.be/simplescreenrecorder/ +[2]: /files/images/screenrecorder1jpg +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_1.jpg?itok=hZJ5xugI (Simple Screen Recorder ) +[4]: /licenses/category/used-permission +[5]: http://www.maartenbaert.be/simplescreenrecorder/#download +[6]: http://recordmydesktop.sourceforge.net/about.php +[7]: /files/images/screenrecorder2jpg +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_2.jpg?itok=TEGXaVYI (gtk-recordmydesktop) +[9]: /files/images/screenrecorder3jpg +[10]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_3.jpg?itok=cvtFjxen (Kazam) +[11]: https://github.com/vkohaupt/vokoscreen +[12]: /files/images/screenrecorder4jpg +[13]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/screenrecorder_4.jpg?itok=c3KVS954 (Vokoscreen) +[14]: https://github.com/vkohaupt/vokoscreen.git +[15]: https://obsproject.com/ +[16]: /files/images/desktoprecorder5jpg +[17]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/desktoprecorder_5.jpg?itok=xyM-dCa7 (OBS Studio) +[18]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20181207 Automatic continuous development and delivery of a hybrid mobile app.md b/sources/tech/20181207 Automatic continuous development and delivery of a hybrid mobile app.md new file mode 100644 index 0000000000..c513f36017 --- /dev/null +++ b/sources/tech/20181207 Automatic continuous development and delivery of a hybrid mobile app.md @@ -0,0 +1,102 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Automatic continuous development and delivery of a hybrid mobile app) +[#]: via: (https://opensource.com/article/18/12/hybrid-mobile-app-development) +[#]: author: (Angelo Manganiello https://opensource.com/users/amanganiello90) + +Automatic continuous development and delivery of a hybrid mobile app +====== +Hybrid apps are a good middle ground between native and web apps. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/idea_innovation_mobile_phone.png?itok=RqVtvxkd) + +Offering a mobile app is essentially a business requirement for organizations today. One of the first steps in developing an app is to understand the different types—native, hybrid (or cross-platform), and web—so you can decide which one will best meet your needs. + +### Native is better, right? + +**Native apps** represent the vast majority of applications that people download every day. Native applications are developed specifically for an operating system. Thus, a native iOS application will not work on an Android system and vice versa. To develop a native app, you need to know two things: + + 1. How to develop in a specific programming language (e.g., Swift for Apple devices; Java for Android) + 2. 
The app will not work for other platforms



Even though native apps will work only on the platform they're developed for, they have several notable advantages over hybrid and web apps:

  * Increased speed, reliability, and responsiveness and higher resolution, all of which provide a better user experience
  * May work offline/without internet service
  * Easier access to all phone features (e.g., accelerometer, camera, microphone)



### But my business is still linked to the web…

Most companies have focused their resources on web development and now want to enter the mobile market. But many don't have the right technical resources to develop a native app for each platform. For these companies, **hybrid** development is the right choice. In this model, developers can use their existing frontend skills to develop a single, cross-platform mobile app.

![Hybrid mobile apps][2]

Hybrid apps are a good middle ground: they're faster and less expensive to develop than native apps, and they offer more possibilities than web apps. The tradeoffs are they don't perform as well as native apps and developers can't maintain their existing tight focus on web development (as they could with web apps).

If you already are a fan of the [Angular][3] cross-platform development framework, I recommend trying the [Ionic][4] framework, which "lets web developers build, test, and deploy cross-platform hybrid mobile apps." I see Ionic as an extension of the [Apache Cordova][5] framework, which enables a normal web app (JS, HTML, or CSS) to run as a mobile app in a container. Ionic uses the base Cordova features that support the Angular development for its user interface.

The advantage of this approach is simple: the Angular paradigm is maintained, so developers can continue writing [TypeScript][6] files but target a build for Android, iOS, and Windows by properly configuring the development environment. It also provides two important tools:

  * An appealing design and widgets that are very similar to a native app's, so your hybrid app will look less "web"
  * Cordova Plugins allow the app to communicate with all phone features



### What about the Node.js backend?

The programming world likes to standardize, which is why hybrid apps are so popular. Frontend developers' common skills are useful in the mobile world. But if we have a technology stack for the user interface, why not focus on a single backend with the same programming paradigm?

This makes [Node.js][7] an appealing option. Node.js is a JavaScript runtime built on the Chrome V8 JavaScript engine. It can make the API development backend very fast and easy, and it integrates fully with web technologies. You can develop a Cordova plugin, using your Node.js backend, internally in your hybrid app, as I did with the [nodejs-cordova-plugin][8]. This plugin, following the Cordova guidelines, integrates a mobile-compatible version of the Node.js platform to provide a full-stack mobile app.

If you need a simple CRUD Node.js backend, you can use my [API][9] [node generator][9] that generates an app using a [MongoDB][10] embedded database.

![Cordova Full Stack application][12]

### Deploying your app

Open source offers everything you need to deploy your app in the best way. You just need a GitHub repository and a good continuous integration tool. I recommend [Travis-ci][13], an excellent tool that allows you to build and deploy your product for every commit.

Travis-ci is a hosted alternative to the better-known [Jenkins][14]. Like with Jenkins, you have to configure your pipeline through a configuration file (in this case a **.travis.yml** file) in your GitHub repo. See the [.travis.yml file][15] in my repository as an example.
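To make the idea concrete, here is a sketch of the kind of steps such a pipeline runs on every commit for a hybrid app like this one. These commands are illustrative assumptions, not copied from the linked **.travis.yml**; the authoritative list lives in that file:

```
# Illustrative pipeline steps for a hybrid (Ionic/Cordova) app
npm install                          # restore project dependencies
npm test                             # run the test suite; a failure stops the build
ionic cordova build android --prod   # produce the artifact to deliver
```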
![](https://opensource.com/sites/default/files/uploads/3-travis-ci-process.png)

In addition, this pipeline automatically delivers and installs your app on [Appetize.io][16], a web-based iOS simulator and Android emulator, for testing.

You can learn more in the [Cordova Android][17] section of my GitHub repository.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/12/hybrid-mobile-app-development

作者:[Angelo Manganiello][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/amanganiello90
[b]: https://github.com/lujun9972
[1]: /file/416441
[2]: https://opensource.com/sites/default/files/uploads/1-title.png (Hybrid mobile apps)
[3]: https://angular.io/
[4]: https://ionicframework.com/
[5]: https://cordova.apache.org/
[6]: https://www.typescriptlang.org/
[7]: https://nodejs.org/
[8]: https://github.com/fullStackApp/nodejs-cordova-plugin
[9]: https://github.com/fullStackApp/generator-full-stack-api
[10]: https://www.mongodb.com/
[11]: /file/416351
[12]: https://opensource.com/sites/default/files/uploads/2-cordova-full-stack-app.png (Cordova Full Stack application)
[13]: https://travis-ci.org/
[14]: https://jenkins.io/
[15]: https://github.com/amanganiello90/java-angular-web-app/blob/master/.travis.yml
[16]: https://appetize.io/
[17]: https://github.com/amanganiello90/java-angular-web-app#cordova-android
diff --git a/sources/tech/20181211 How To Benchmark Linux Commands And Programs From Commandline.md b/sources/tech/20181211 How To Benchmark Linux Commands And Programs From Commandline.md
new file mode 100644
index 0000000000..3962e361f3
--- /dev/null
+++ b/sources/tech/20181211 How To Benchmark Linux Commands And Programs From Commandline.md
@@ -0,0 +1,265 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Benchmark Linux Commands And Programs From Commandline)
[#]: via: (https://www.ostechnix.com/how-to-benchmark-linux-commands-and-programs-from-commandline/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)

How To Benchmark Linux Commands And Programs From Commandline
======

![](https://www.ostechnix.com/wp-content/uploads/2018/12/benchmark-720x340.png)

A while ago, I wrote a guide about the [**alternatives to 'top', the command line utility**][1]. Some of the users asked me which one among those tools is best and on what basis (features, contributors, years active, page requests, etc.) I compared those tools. They also asked me to share the benchmarking results if I had any. Unfortunately, I didn't even know how to benchmark programs at that time. While searching for simple and easy-to-use benchmarking tools to compare Linux programs, I stumbled upon two utilities named **'Bench'** and **'Hyperfine'**. These are simple and easy-to-use command line tools to benchmark Linux commands and programs on Unix-like systems.

### 1\. Bench Tool
The **'Bench'** utility benchmarks one or more given commands/programs using **Haskell's criterion** library and displays the output statistics in an easy-to-understand format. This tool can be helpful when you need to compare similar programs based on the benchmarking results. We can also export the results to HTML format or CSV or templated output.

#### Installing Bench Utility

The bench utility can be installed in three methods.

**1\. Using Linuxbrew**

We can install the Bench utility using the Linuxbrew package manager. If you haven't installed Linuxbrew yet, refer to the following link.

After installing Linuxbrew, run the following command to install Bench:

```
$ brew install bench
```

**2\. Using Haskell's stack tool**

First, install Haskell as described in the following link.

And then, run the following commands to install Bench.

```
$ stack setup

$ stack install bench
```

The 'stack' tool will install bench to **~/.local/bin** or something similar. Make sure that the installation directory is on your executable search path before using the bench tool. You will be reminded to do this even if you forget.

**3\. Using Nix package manager**

Another way to install Bench is using the **Nix** package manager. Install Nix as shown in the link below.

After installing Nix, install the Bench tool using the command:

```
$ nix-env -i bench
```

#### Benchmark Linux Commands And Programs Using Bench

It is time to start benchmarking the programs.

For instance, let me show you the benchmark result of the 'ls -al' command.

```
$ bench 'ls -al'
```

**Sample output:**

![](https://www.ostechnix.com/wp-content/uploads/2018/12/Benchmark-commands-1.png)

You must quote the commands when you use flags/options with them.

Similarly, you can benchmark any program installed in your system. The following commands show the benchmarking results of the 'htop' and 'ptop' programs.

```
$ bench htop

$ bench ptop
```
![](https://www.ostechnix.com/wp-content/uploads/2018/12/Benchmark-commands-2-1.png)
The Bench tool can benchmark multiple programs at once as well. Here is the benchmarking result of the ls, htop, and ptop programs.

```
$ bench ls htop ptop
```

Sample output:
![](https://www.ostechnix.com/wp-content/uploads/2018/12/Benchmark-commands-3.png)

We can also export the benchmark result to an HTML file like below.

```
$ bench htop --output example.html
```

To export the result to CSV, just run:

```
$ bench htop --csv FILE
```

View the help section:

```
$ bench --help
```

### 2\. Hyperfine Benchmark Tool

**Hyperfine** is yet another command line benchmarking tool inspired by the 'Bench' tool which we just discussed above. It is a free, open source, cross-platform benchmarking program written in the **Rust** programming language. It has a few additional features compared to the Bench tool, as listed below.

  * Statistical analysis across multiple runs.
  * Support for arbitrary shell commands.
  * Constant feedback about the benchmark progress and current estimates.
  * Perform warmup runs before the actual benchmark.
  * Cache-clearing commands can be set up before each timing run.
  * Statistical outlier detection.
  * Export benchmark results to various formats, such as CSV, JSON, Markdown.
  * Parameterized benchmarks.



#### Installing Hyperfine

We can install Hyperfine using any one of the following methods.

**1\. Using Linuxbrew**

```
$ brew install hyperfine
```
**2\. Using Cargo**

Make sure you have installed Rust as described in the following link.

After installing Rust, run the following command to install Hyperfine via Cargo:

```
$ cargo install hyperfine
```

**3\. Using AUR helper programs**

Hyperfine is available in [**AUR**][2]. So, you can install it on Arch-based systems using any helper programs, such as [**YaY**][3], like below.

```
$ yay -S hyperfine
```

**4\. Download and install the binaries**

Hyperfine is available as binaries for Debian-based systems. Download the latest .deb binary file from the [**releases page**][4] and install it using the 'dpkg' package manager. As of writing this guide, the latest version was **1.4.0**.

```
$ wget https://github.com/sharkdp/hyperfine/releases/download/v1.4.0/hyperfine_1.4.0_amd64.deb

$ sudo dpkg -i hyperfine_1.4.0_amd64.deb

$ sudo apt install -f
```

#### Benchmark Linux Commands And Programs Using Hyperfine

To run a benchmark using Hyperfine, simply run it along with the program/command as shown below.

```
$ hyperfine 'ls -al'
```

![](https://www.ostechnix.com/wp-content/uploads/2018/12/hyperfine-1.png)

Benchmark multiple commands/programs:

```
$ hyperfine htop ptop
```

Sample output:

![](https://www.ostechnix.com/wp-content/uploads/2018/12/hyperfine-2.png)

As you can see at the end of the output, Hyperfine mentions **'htop ran 1.96 times faster than ptop'**, so we can immediately conclude that htop performs better than ptop. This will help you quickly find which program performs better when benchmarking multiple programs. We don't get this detailed output in the Bench utility, though.

Hyperfine will automatically determine the number of runs to perform for each command. By default, it will perform at least **10 benchmarking runs**. If you want to set the **minimum number of runs** (e.g., 5 runs), use the `-m`/`--min-runs` option like below:

```
$ hyperfine --min-runs 5 htop ptop
```

Or,

```
$ hyperfine -m 5 htop ptop
```

Similarly, to perform the **maximum number of runs** for each command, the command would be:

```
$ hyperfine --max-runs 5 htop ptop
```

Or,

```
$ hyperfine -M 5 htop ptop
```

We can even perform an **exact number of runs** for each command using the following command:

```
$ hyperfine -r 5 htop ptop
```

As you may know, if the program execution time is limited by disk I/O, the benchmarking results can be heavily influenced by disk caches and whether they are cold or warm. Luckily, Hyperfine has the options to perform a certain number of program executions before performing the actual benchmark.

To perform NUM warmup runs (e.g., 3) before the actual benchmark, use the `-w`/`--warmup` option like below:

```
$ hyperfine --warmup 3 htop
```

Just like the Bench utility, Hyperfine also allows us to export the benchmark results to a given file. We can export the results to CSV, JSON, and Markdown formats.

For instance, to export the results in Markdown format, use the following command:

```
$ hyperfine htop ptop --export-markdown FILE
```
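Two of the Hyperfine features listed earlier, cache-clearing commands and parameterized benchmarks, deserve a quick illustration as well. Here is a sketch; the 'grep' and 'make' commands are just stand-in workloads:

```
$ hyperfine --prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' 'grep -R TODO .'

$ hyperfine --parameter-scan threads 1 8 'make -j {threads}'
```

The first command flushes the disk cache before each timing run, so you measure cold-cache performance; the second runs the same benchmark once for every value of 'threads' from 1 to 8.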
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-benchmark-linux-commands-and-programs-from-commandline/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: https://www.ostechnix.com/some-alternatives-to-top-command-line-utility-you-might-want-to-know/ +[2]: https://aur.archlinux.org/packages/hyperfine +[3]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/ +[4]: https://github.com/sharkdp/hyperfine/releases diff --git a/sources/tech/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md b/sources/tech/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md new file mode 100644 index 0000000000..e1e6a7f25e --- /dev/null +++ b/sources/tech/20181212 TLP - An Advanced Power Management Tool That Improve Battery Life On Linux Laptop.md @@ -0,0 +1,745 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (TLP – An Advanced Power Management Tool That Improve Battery Life On Linux Laptop) +[#]: via: (https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +TLP – An Advanced Power Management Tool That Improve Battery Life On Linux Laptop +====== + +Laptop battery is highly optimized for Windows OS, that i had realized when i was using Windows OS in my laptop but it’s not same for Linux. + +Over the years Linux has improved a lot for battery optimization but still we need make some necessary things to improve laptop battery life in Linux. + +When i think about battery life, i got few options for that but i felt TLP is a better solutions for me so, i’m going with it. + +In this tutorial we are going to discuss about TLP in details to improve battery life. + +We had written three articles previously in our site about **[laptop battery saving utilities][1]** for Linux **[PowerTOP][2]** and **[Battery Charging State][3]**. + +### What is TLP? + +[TLP][4] is a free opensource advanced power management tool that improve your battery life without making any configuration change. + +Since it comes with a default configuration already optimized for battery life, so you may just install and forget it. + +Also, it is highly customizable to fulfill your specific requirements. TLP is a pure command line tool with automated background tasks. It does not contain a GUI. + +TLP runs on every laptop brand. Setting the battery charge thresholds is available for IBM/Lenovo ThinkPads only. + +All TLP settings are stored in `/etc/default/tlp`. The default configuration provides optimized power saving out of the box. + +The following TLP settings is available for customization and you need to make the necessary changes accordingly if you want it. 
+ +### TLP Features + + * Kernel laptop mode and dirty buffer timeouts + * Processor frequency scaling including “turbo boost” / “turbo core” + * Limit max/min P-state to control power dissipation of the CPU + * HWP energy performance hints + * Power aware process scheduler for multi-core/hyper-threading + * Processor performance versus energy savings policy (x86_energy_perf_policy) + * Hard disk advanced power magement level (APM) and spin down timeout (per disk) + * AHCI link power management (ALPM) with device blacklist + * PCIe active state power management (PCIe ASPM) + * Runtime power management for PCI(e) bus devices + * Radeon graphics power management (KMS and DPM) + * Wifi power saving mode + * Power off optical drive in drive bay + * Audio power saving mode + * I/O scheduler (per disk) + * USB autosuspend with device blacklist/whitelist (input devices excluded automatically) + * Enable or disable integrated wifi, bluetooth or wwan devices upon system startup and shutdown + * Restore radio device state on system startup (from previous shutdown). + * Radio device wizard: switch radios upon network connect/disconnect and dock/undock + * Disable Wake On LAN + * Integrated WWAN and bluetooth state is restored after suspend/hibernate + * Untervolting of Intel processors – requires kernel with PHC-Patch + * Battery charge thresholds – ThinkPads only + * Recalibrate battery – ThinkPads only + + + +### How to Install TLP in Linux + +TLP package is available in most of the distributions official repository so, use the distributions **[Package Manager][5]** to install it. + +For **`Fedora`** system, use **[DNF Command][6]** to install TLP. + +``` +$ sudo dnf install tlp tlp-rdw +``` + +ThinkPads require an additional packages. + +``` +$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm +$ sudo dnf install http://repo.linrunner.de/fedora/tlp/repos/releases/tlp-release.fc$(rpm -E %fedora).noarch.rpm +$ sudo dnf install akmod-tp_smapi akmod-acpi_call kernel-devel +``` + +Install smartmontool to display S.M.A.R.T. data in tlp-stat. + +``` +$ sudo dnf install smartmontools +``` + +For **`Debian/Ubuntu`** systems, use **[APT-GET Command][7]** or **[APT Command][8]** to install TLP. + +``` +$ sudo apt install tlp tlp-rdw +``` + +ThinkPads require an additional packages. + +``` +$ sudo apt-get install tp-smapi-dkms acpi-call-dkms +``` + +Install smartmontool to display S.M.A.R.T. data in tlp-stat. + +``` +$ sudo apt-get install smartmontools +``` + +When the official package becomes outdated for Ubuntu based systems then use the following PPA repository which provides an up-to-date version. Run the following commands to install TLP using the PPA. + +``` +$ sudo apt-get install tlp tlp-rdw +``` + +For **`Arch Linux`** based systems, use **[Pacman Command][9]** to install TLP. + +``` +$ sudo pacman -S tlp tlp-rdw +``` + +ThinkPads require an additional packages. + +``` +$ pacman -S tp_smapi acpi_call +``` + +Install smartmontool to display S.M.A.R.T. data in tlp-stat. + +``` +$ sudo pacman -S smartmontools +``` + +Enable TLP & TLP-Sleep service on boot for Arch Linux based systems. + +``` +$ sudo systemctl enable tlp.service +$ sudo systemctl enable tlp-sleep.service +``` + +You should also mask the following services to avoid conflicts and assure proper operation of TLP’s radio device switching options for Arch Linux based systems. 
+ +``` +$ sudo systemctl mask systemd-rfkill.service +$ sudo systemctl mask systemd-rfkill.socket +``` + +For **`RHEL/CentOS`** systems, use **[YUM Command][10]** to install TLP. + +``` +$ sudo yum install tlp tlp-rdw +``` + +Install smartmontool to display S.M.A.R.T. data in tlp-stat. + +``` +$ sudo yum install smartmontools +``` + +For **`openSUSE Leap`** system, use **[Zypper Command][11]** to install TLP. + +``` +$ sudo zypper install TLP +``` + +Install smartmontool to display S.M.A.R.T. data in tlp-stat. + +``` +$ sudo zypper install smartmontools +``` + +After successfully TLP installed, use the following command to start the service. + +``` +$ systemctl start tlp.service +``` + +To show battery information. + +``` +$ sudo tlp-stat -b +or +$ sudo tlp-stat --battery + +--- TLP 1.1 -------------------------------------------- + ++++ Battery Status +/sys/class/power_supply/BAT0/manufacturer = SMP +/sys/class/power_supply/BAT0/model_name = L14M4P23 +/sys/class/power_supply/BAT0/cycle_count = (not supported) +/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh] +/sys/class/power_supply/BAT0/energy_full = 48850 [mWh] +/sys/class/power_supply/BAT0/energy_now = 48850 [mWh] +/sys/class/power_supply/BAT0/power_now = 0 [mW] +/sys/class/power_supply/BAT0/status = Full + +Charge = 100.0 [%] +Capacity = 81.4 [%] +``` + +To show disk information. + +``` +$ sudo tlp-stat -d +or +$ sudo tlp-stat --disk + +--- TLP 1.1 -------------------------------------------- + ++++ Storage Devices +/dev/sda: + Model = WDC WD10SPCX-24HWST1 + Firmware = 02.01A02 + APM Level = 128 + Status = active/idle + Scheduler = mq-deadline + + Runtime PM: control = on, autosuspend_delay = (not available) + + SMART info: + 4 Start_Stop_Count = 18787 + 5 Reallocated_Sector_Ct = 0 + 9 Power_On_Hours = 606 [h] + 12 Power_Cycle_Count = 1792 + 193 Load_Cycle_Count = 25775 + 194 Temperature_Celsius = 31 [°C] + + ++++ AHCI Link Power Management (ALPM) +/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm + ++++ AHCI Host Controller Runtime Power Management +/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on +``` + +To show PCI device information. 
+ +``` +$ sudo tlp-stat -e +or +$ sudo tlp-stat --pcie + +--- TLP 1.1 -------------------------------------------- + ++++ Runtime Power Management +Device blacklist = (not configured) +Driver blacklist = amdgpu nouveau nvidia radeon pcieport + +/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore) +/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915) +/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd) +/sys/bus/pci/devices/0000:00:16.0/power/control = auto (0x078000, Communication controller, mei_me) +/sys/bus/pci/devices/0000:00:17.0/power/control = auto (0x010601, SATA controller, ahci) +/sys/bus/pci/devices/0000:00:1c.0/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1c.2/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1c.3/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1d.0/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1f.0/power/control = auto (0x060100, ISA bridge, no driver) +/sys/bus/pci/devices/0000:00:1f.2/power/control = auto (0x058000, Memory controller, no driver) +/sys/bus/pci/devices/0000:00:1f.3/power/control = auto (0x040300, Audio device, snd_hda_intel) +/sys/bus/pci/devices/0000:00:1f.4/power/control = auto (0x0c0500, SMBus, i801_smbus) +/sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030200, 3D controller, nouveau) +/sys/bus/pci/devices/0000:07:00.0/power/control = auto (0x080501, SD Host controller, sdhci-pci) +/sys/bus/pci/devices/0000:08:00.0/power/control = auto (0x028000, Network controller, iwlwifi) +/sys/bus/pci/devices/0000:09:00.0/power/control = auto (0x020000, Ethernet controller, r8168) +/sys/bus/pci/devices/0000:0a:00.0/power/control = auto (0x010802, Non-Volatile memory controller, nvme) +``` + +To show graphics card information. + +``` +$ sudo tlp-stat -g +or +$ sudo tlp-stat --graphics + +--- TLP 1.1 -------------------------------------------- + ++++ Intel Graphics +/sys/module/i915/parameters/enable_dc = -1 (use per-chip default) +/sys/module/i915/parameters/enable_fbc = 1 (enabled) +/sys/module/i915/parameters/enable_psr = 0 (disabled) +/sys/module/i915/parameters/modeset = -1 (use per-chip default) +``` + +To show Processor information. 
+ +``` +$ sudo tlp-stat -p +or +$ sudo tlp-stat --processor + +--- TLP 1.1 -------------------------------------------- + ++++ Processor +CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz + +/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu2/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu2/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu3/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu3/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu4/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu4/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu4/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu5/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu5/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu5/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu5/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu5/cpufreq/scaling_max_freq = 3500000 [kHz] 
+/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu6/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu6/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu6/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu6/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu6/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu7/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu7/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu7/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu7/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%] +/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%] +/sys/devices/system/cpu/intel_pstate/no_turbo = 0 +/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%] +/sys/devices/system/cpu/intel_pstate/num_pstates = 28 + +x86_energy_perf_policy: program not installed. + +/sys/module/workqueue/parameters/power_efficient = Y +/proc/sys/kernel/nmi_watchdog = 0 + ++++ Undervolting +PHC kernel not available. +``` + +To show system data information. + +``` +$ sudo tlp-stat -s +or +$ sudo tlp-stat --system + +--- TLP 1.1 -------------------------------------------- + ++++ System Info +System = LENOVO Lenovo ideapad Y700-15ISK 80NV +BIOS = CDCN35WW +Release = "Manjaro Linux" +Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64 +/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b +Init system = systemd +Boot mode = BIOS (CSM, Legacy) + ++++ TLP Status +State = enabled +Last run = 11:04:00 IST, 596 sec(s) ago +Mode = battery +Power source = battery +``` + +To show temperatures and fan speed information. + +``` +$ sudo tlp-stat -t +or +$ sudo tlp-stat --temp + +--- TLP 1.1 -------------------------------------------- + ++++ Temperatures +CPU temp = 36 [°C] +Fan speed = (not available) +``` + +To show USB device data information. + +``` +$ sudo tlp-stat -u +or +$ sudo tlp-stat --usb + +--- TLP 1.1 -------------------------------------------- + ++++ USB +Autosuspend = disabled +Device whitelist = (not configured) +Device blacklist = (not configured) +Bluetooth blacklist = disabled +Phone blacklist = disabled +WWAN blacklist = enabled + +Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub) +Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo) +Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid) +Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. 
(btusb) +Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub) +``` + +To show warnings. + +``` +$ sudo tlp-stat -w +or +$ sudo tlp-stat --warn + +--- TLP 1.1 -------------------------------------------- + +No warnings detected. +``` + +Status report with configuration and all active settings. + +``` +$ sudo tlp-stat + +--- TLP 1.1 -------------------------------------------- + ++++ Configured Settings: /etc/default/tlp +TLP_ENABLE=1 +TLP_DEFAULT_MODE=AC +TLP_PERSISTENT_DEFAULT=0 +DISK_IDLE_SECS_ON_AC=0 +DISK_IDLE_SECS_ON_BAT=2 +MAX_LOST_WORK_SECS_ON_AC=15 +MAX_LOST_WORK_SECS_ON_BAT=60 +CPU_HWP_ON_AC=balance_performance +CPU_HWP_ON_BAT=balance_power +SCHED_POWERSAVE_ON_AC=0 +SCHED_POWERSAVE_ON_BAT=1 +NMI_WATCHDOG=0 +ENERGY_PERF_POLICY_ON_AC=performance +ENERGY_PERF_POLICY_ON_BAT=power +DISK_DEVICES="sda sdb" +DISK_APM_LEVEL_ON_AC="254 254" +DISK_APM_LEVEL_ON_BAT="128 128" +SATA_LINKPWR_ON_AC="med_power_with_dipm max_performance" +SATA_LINKPWR_ON_BAT="med_power_with_dipm max_performance" +AHCI_RUNTIME_PM_TIMEOUT=15 +PCIE_ASPM_ON_AC=performance +PCIE_ASPM_ON_BAT=powersave +RADEON_POWER_PROFILE_ON_AC=default +RADEON_POWER_PROFILE_ON_BAT=low +RADEON_DPM_STATE_ON_AC=performance +RADEON_DPM_STATE_ON_BAT=battery +RADEON_DPM_PERF_LEVEL_ON_AC=auto +RADEON_DPM_PERF_LEVEL_ON_BAT=auto +WIFI_PWR_ON_AC=off +WIFI_PWR_ON_BAT=on +WOL_DISABLE=Y +SOUND_POWER_SAVE_ON_AC=0 +SOUND_POWER_SAVE_ON_BAT=1 +SOUND_POWER_SAVE_CONTROLLER=Y +BAY_POWEROFF_ON_AC=0 +BAY_POWEROFF_ON_BAT=0 +BAY_DEVICE="sr0" +RUNTIME_PM_ON_AC=on +RUNTIME_PM_ON_BAT=auto +RUNTIME_PM_DRIVER_BLACKLIST="amdgpu nouveau nvidia radeon pcieport" +USB_AUTOSUSPEND=0 +USB_BLACKLIST_BTUSB=0 +USB_BLACKLIST_PHONE=0 +USB_BLACKLIST_PRINTER=1 +USB_BLACKLIST_WWAN=1 +RESTORE_DEVICE_STATE_ON_STARTUP=0 + ++++ System Info +System = LENOVO Lenovo ideapad Y700-15ISK 80NV +BIOS = CDCN35WW +Release = "Manjaro Linux" +Kernel = 4.19.6-1-MANJARO #1 SMP PREEMPT Sat Dec 1 12:21:26 UTC 2018 x86_64 +/proc/cmdline = BOOT_IMAGE=/boot/vmlinuz-4.19-x86_64 root=UUID=69d9dd18-36be-4631-9ebb-78f05fe3217f rw quiet resume=UUID=a2092b92-af29-4760-8e68-7a201922573b +Init system = systemd +Boot mode = BIOS (CSM, Legacy) + ++++ TLP Status +State = enabled +Last run = 11:04:00 IST, 684 sec(s) ago +Mode = battery +Power source = battery + ++++ Processor +CPU model = Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz + +/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu1/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu1/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu1/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu1/cpufreq/energy_performance_available_preferences = default performance 
balance_performance balance_power power + +/sys/devices/system/cpu/cpu2/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu2/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu2/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu2/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu2/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu2/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu3/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu3/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu3/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu3/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu3/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu3/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu4/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu4/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu4/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu4/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu4/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu4/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu5/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu5/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu5/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu5/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu5/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu5/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu6/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu6/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu6/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu6/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu6/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu6/cpufreq/energy_performance_available_preferences = default performance balance_performance balance_power power + +/sys/devices/system/cpu/cpu7/cpufreq/scaling_driver = intel_pstate +/sys/devices/system/cpu/cpu7/cpufreq/scaling_governor = powersave +/sys/devices/system/cpu/cpu7/cpufreq/scaling_available_governors = performance powersave +/sys/devices/system/cpu/cpu7/cpufreq/scaling_min_freq = 800000 [kHz] +/sys/devices/system/cpu/cpu7/cpufreq/scaling_max_freq = 3500000 [kHz] +/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_preference = balance_power +/sys/devices/system/cpu/cpu7/cpufreq/energy_performance_available_preferences = default 
performance balance_performance balance_power power + +/sys/devices/system/cpu/intel_pstate/min_perf_pct = 22 [%] +/sys/devices/system/cpu/intel_pstate/max_perf_pct = 100 [%] +/sys/devices/system/cpu/intel_pstate/no_turbo = 0 +/sys/devices/system/cpu/intel_pstate/turbo_pct = 33 [%] +/sys/devices/system/cpu/intel_pstate/num_pstates = 28 + +x86_energy_perf_policy: program not installed. + +/sys/module/workqueue/parameters/power_efficient = Y +/proc/sys/kernel/nmi_watchdog = 0 + ++++ Undervolting +PHC kernel not available. + ++++ Temperatures +CPU temp = 42 [°C] +Fan speed = (not available) + ++++ File System +/proc/sys/vm/laptop_mode = 2 +/proc/sys/vm/dirty_writeback_centisecs = 6000 +/proc/sys/vm/dirty_expire_centisecs = 6000 +/proc/sys/vm/dirty_ratio = 20 +/proc/sys/vm/dirty_background_ratio = 10 + ++++ Storage Devices +/dev/sda: + Model = WDC WD10SPCX-24HWST1 + Firmware = 02.01A02 + APM Level = 128 + Status = active/idle + Scheduler = mq-deadline + + Runtime PM: control = on, autosuspend_delay = (not available) + + SMART info: + 4 Start_Stop_Count = 18787 + 5 Reallocated_Sector_Ct = 0 + 9 Power_On_Hours = 606 [h] + 12 Power_Cycle_Count = 1792 + 193 Load_Cycle_Count = 25777 + 194 Temperature_Celsius = 31 [°C] + + ++++ AHCI Link Power Management (ALPM) +/sys/class/scsi_host/host0/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host1/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host2/link_power_management_policy = med_power_with_dipm +/sys/class/scsi_host/host3/link_power_management_policy = med_power_with_dipm + ++++ AHCI Host Controller Runtime Power Management +/sys/bus/pci/devices/0000:00:17.0/ata1/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata2/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata3/power/control = on +/sys/bus/pci/devices/0000:00:17.0/ata4/power/control = on + ++++ PCIe Active State Power Management +/sys/module/pcie_aspm/parameters/policy = powersave + ++++ Intel Graphics +/sys/module/i915/parameters/enable_dc = -1 (use per-chip default) +/sys/module/i915/parameters/enable_fbc = 1 (enabled) +/sys/module/i915/parameters/enable_psr = 0 (disabled) +/sys/module/i915/parameters/modeset = -1 (use per-chip default) + ++++ Wireless +bluetooth = on +wifi = on +wwan = none (no device) + +hci0(btusb) : bluetooth, not connected +wlp8s0(iwlwifi) : wifi, connected, power management = on + ++++ Audio +/sys/module/snd_hda_intel/parameters/power_save = 1 +/sys/module/snd_hda_intel/parameters/power_save_controller = Y + ++++ Runtime Power Management +Device blacklist = (not configured) +Driver blacklist = amdgpu nouveau nvidia radeon pcieport + +/sys/bus/pci/devices/0000:00:00.0/power/control = auto (0x060000, Host bridge, skl_uncore) +/sys/bus/pci/devices/0000:00:01.0/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:02.0/power/control = auto (0x030000, VGA compatible controller, i915) +/sys/bus/pci/devices/0000:00:14.0/power/control = auto (0x0c0330, USB controller, xhci_hcd) +/sys/bus/pci/devices/0000:00:16.0/power/control = auto (0x078000, Communication controller, mei_me) +/sys/bus/pci/devices/0000:00:17.0/power/control = auto (0x010601, SATA controller, ahci) +/sys/bus/pci/devices/0000:00:1c.0/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1c.2/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1c.3/power/control = auto (0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1d.0/power/control = auto 
(0x060400, PCI bridge, pcieport) +/sys/bus/pci/devices/0000:00:1f.0/power/control = auto (0x060100, ISA bridge, no driver) +/sys/bus/pci/devices/0000:00:1f.2/power/control = auto (0x058000, Memory controller, no driver) +/sys/bus/pci/devices/0000:00:1f.3/power/control = auto (0x040300, Audio device, snd_hda_intel) +/sys/bus/pci/devices/0000:00:1f.4/power/control = auto (0x0c0500, SMBus, i801_smbus) +/sys/bus/pci/devices/0000:01:00.0/power/control = auto (0x030200, 3D controller, nouveau) +/sys/bus/pci/devices/0000:07:00.0/power/control = auto (0x080501, SD Host controller, sdhci-pci) +/sys/bus/pci/devices/0000:08:00.0/power/control = auto (0x028000, Network controller, iwlwifi) +/sys/bus/pci/devices/0000:09:00.0/power/control = auto (0x020000, Ethernet controller, r8168) +/sys/bus/pci/devices/0000:0a:00.0/power/control = auto (0x010802, Non-Volatile memory controller, nvme) + ++++ USB +Autosuspend = disabled +Device whitelist = (not configured) +Device blacklist = (not configured) +Bluetooth blacklist = disabled +Phone blacklist = disabled +WWAN blacklist = enabled + +Bus 002 Device 001 ID 1d6b:0003 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 3.0 root hub (hub) +Bus 001 Device 003 ID 174f:14e8 control = auto, autosuspend_delay_ms = 2000 -- Syntek (uvcvideo) +Bus 001 Device 002 ID 17ef:6053 control = on, autosuspend_delay_ms = 2000 -- Lenovo (usbhid) +Bus 001 Device 004 ID 8087:0a2b control = auto, autosuspend_delay_ms = 2000 -- Intel Corp. (btusb) +Bus 001 Device 001 ID 1d6b:0002 control = auto, autosuspend_delay_ms = 0 -- Linux Foundation 2.0 root hub (hub) + ++++ Battery Status +/sys/class/power_supply/BAT0/manufacturer = SMP +/sys/class/power_supply/BAT0/model_name = L14M4P23 +/sys/class/power_supply/BAT0/cycle_count = (not supported) +/sys/class/power_supply/BAT0/energy_full_design = 60000 [mWh] +/sys/class/power_supply/BAT0/energy_full = 51690 [mWh] +/sys/class/power_supply/BAT0/energy_now = 50140 [mWh] +/sys/class/power_supply/BAT0/power_now = 12185 [mW] +/sys/class/power_supply/BAT0/status = Discharging + +Charge = 97.0 [%] +Capacity = 86.2 [%] +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/tlp-increase-optimize-linux-laptop-battery-life/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/check-laptop-battery-status-and-charging-state-in-linux-terminal/ +[2]: https://www.2daygeek.com/powertop-monitors-laptop-battery-usage-linux/ +[3]: https://www.2daygeek.com/monitor-laptop-battery-charging-state-linux/ +[4]: https://linrunner.de/en/tlp/docs/tlp-linux-advanced-power-management.html +[5]: https://www.2daygeek.com/category/package-management/ +[6]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[7]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[9]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[10]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ diff --git a/sources/tech/20181213 
Podman and user namespaces- A marriage made in heaven.md b/sources/tech/20181213 Podman and user namespaces- A marriage made in heaven.md
new file mode 100644
index 0000000000..adc14c6111
--- /dev/null
+++ b/sources/tech/20181213 Podman and user namespaces- A marriage made in heaven.md
@@ -0,0 +1,145 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Podman and user namespaces: A marriage made in heaven)
+[#]: via: (https://opensource.com/article/18/12/podman-and-user-namespaces)
+[#]: author: (Daniel J Walsh https://opensource.com/users/rhatdan)
+
+Podman and user namespaces: A marriage made in heaven
+======
+Learn how to use Podman to run containers in separate user namespaces.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/architecture_structure_planning_design_.png?itok=KL7dIDct)
+
+[Podman][1], part of the [libpod][2] library, enables users to manage pods, containers, and container images. In my last article, I wrote about [Podman as a more secure way to run containers][3]. Here, I'll explain how to use Podman to run containers in separate user namespaces.
+
+I have always thought of [user namespace][4], primarily developed by Red Hat's Eric Biederman, as a great feature for separating containers. User namespace allows you to specify a user identifier (UID) and group identifier (GID) mapping to run your containers. This means you can run as UID 0 inside the container and UID 100000 outside the container. If your container processes escape the container, the kernel will treat them as UID 100000. Not only that, but any file object owned by a UID that isn't mapped into the user namespace will be treated as owned by "nobody" (65534, kernel.overflowuid), and the container process will not be allowed access unless the object is accessible by "other" (world readable/writable).
+
+If you have a file owned by "real" root with permissions [660][5], and the container processes in the user namespace attempt to read it, they will be prevented from accessing it and will see the file as owned by nobody.
+
+### An example
+
+Here's how that might work. First, I create a file in my system owned by root.
+
+```
+$ sudo bash -c "echo Test > /tmp/test"
+$ sudo chmod 600 /tmp/test
+$ sudo ls -l /tmp/test
+-rw-------. 1 root root 5 Dec 17 16:40 /tmp/test
+```
+
+Next, I volume-mount the file into a container running with a user namespace map 0:100000:5000.
+
+```
+$ sudo podman run -ti -v /tmp/test:/tmp/test:Z --uidmap 0:100000:5000 fedora sh
+# id
+uid=0(root) gid=0(root) groups=0(root)
+# ls -l /tmp/test
+-rw-rw----. 1 nobody nobody 8 Nov 30 12:40 /tmp/test
+# cat /tmp/test
+cat: /tmp/test: Permission denied
+```
+
+The **\--uidmap** setting above tells Podman to map a range of 5000 UIDs inside the container, starting with UID 100000 outside the container (so the range is 100000-104999) to a range starting at UID 0 inside the container (so the range is 0-4999). Inside the container, if my process is running as UID 1, it is 100001 on the host.
+
+Since the real UID=0 is not mapped into the container, any file owned by root will be treated as owned by nobody. Even if the process inside the container has **CAP_DAC_OVERRIDE**, it can't override this protection. **DAC_OVERRIDE** enables root processes to read/write any file on the system, even files that are not owned by them and are not world readable or writable.
+
+User namespace capabilities are not the same as capabilities on the host.
They are namespaced capabilities. This means my container root has capabilities only within the container—really only across the range of UIDs that were mapped into the user namespace. If a container process escaped the container, it wouldn't have any capabilities over UIDs not mapped into the user namespace, including UID=0. Even if the processes could somehow enter another container, they would not have those capabilities if the container uses a different range of UIDs. + +Note that SELinux and other technologies also limit what would happen if a container process broke out of the container. + +### Using `podman top` to show user namespaces + +We have added features to **podman top** to allow you to examine the usernames of processes running inside a container and identify their real UIDs on the host. + +Let's start by running a sleep container using our UID mapping. + +``` +$ sudo podman run --uidmap 0:100000:5000 -d fedora sleep 1000 +``` + +Now run **podman top** : + +``` +$ sudo podman top --latest user huser +USER   HUSER +root   100000 + +$ ps -ef | grep sleep +100000   21821 21809  0 08:04 ?         00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 +``` + +Notice **podman top** reports that the user process is running as root inside the container but as UID 100000 on the host (HUSER). Also the **ps** command confirms that the sleep process is running as UID 100000. + +Now let's run a second container, but this time we will choose a separate UID map starting at 200000. + +``` +$ sudo podman run --uidmap 0:200000:5000 -d fedora sleep 1000 +$ sudo podman top --latest user huser +USER   HUSER +root   200000 + +$ ps -ef | grep sleep +100000   21821 21809  0 08:04 ?         00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 +200000   23644 23632  1 08:08 ?         00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 +``` + +Notice that **podman top** reports the second container is running as root inside the container but as UID=200000 on the host. + +Also look at the **ps** command—it shows both sleep processes running: one as 100000 and the other as 200000. + +This means running the containers inside separate user namespaces gives you traditional UID separation between processes, which has been the standard security tool of Linux/Unix from the beginning. + +### Problems with user namespaces + +For several years, I've advocated user namespace as the security tool everyone wants but hardly anyone has used. The reason is there hasn't been any filesystem support or a shifting file system. + +In containers, you want to share the **base** image between lots of containers. The examples above use the Fedora base image in each example. Most of the files in the Fedora image are owned by real UID=0. If I run a container on this image with the user namespace 0:100000:5000, by default it sees all of these files as owned by nobody, so we need to shift all of these UIDs to match the user namespace. For years, I've wanted a mount option to tell the kernel to remap these file UIDs to match the user namespace. Upstream kernel storage developers continue to investigate and make progress on this feature, but it is a difficult problem. + + +Podman can use different user namespaces on the same image because of automatic [chowning][6] built into [containers/storage][7] by a team led by Nalin Dahyabhai. 
Podman uses containers/storage, and the first time Podman uses a container image in a new user namespace, container/storage "chowns" (i.e., changes ownership for) all files in the image to the UIDs mapped in the user namespace and creates a new image. Think of this as the **fedora:0:100000:5000** image. + +When Podman runs another container on the image with the same UID mappings, it uses the "pre-chowned" image. When I run the second container on 0:200000:5000, containers/storage creates a second image, let's call it **fedora:0:200000:5000**. + +Note if you are doing a **podman build** or **podman commit** and push the newly created image to a container registry, Podman will use container/storage to reverse the shift and push the image with all files chowned back to real UID=0. + +This can cause a real slowdown in creating containers in new UID mappings since the **chown** can be slow depending on the number of files in the image. Also, on a normal [OverlayFS][8], every file in the image gets copied up. The normal Fedora image can take up to 30 seconds to finish the chown and start the container. + +Luckily, the Red Hat kernel storage team, primarily Vivek Goyal and Miklos Szeredi, added a new feature to OverlayFS in kernel 4.19. The feature is called **metadata only copy-up**. If you mount an overlay filesystem with **metacopy=on** as a mount option, it will not copy up the contents of the lower layers when you change file attributes; the kernel creates new inodes that include the attributes with references pointing at the lower-level data. It will still copy up the contents if the content changes. This functionality is available in the Red Hat Enterprise Linux 8 Beta, if you want to try it out. + +This means container chowning can happen in a couple of seconds, and you won't double the storage space for each container. + +This makes running containers with tools like Podman in separate user namespaces viable, greatly increasing the security of the system. + +### Going forward + +I want to add a new flag, like **\--userns=auto** , to Podman that will tell it to automatically pick a unique user namespace for each container you run. This is similar to the way SELinux works with separate multi-category security (MCS) labels. If you set the environment variable **PODMAN_USERNS=auto** , you won't even need to set the flag. + +Podman is finally allowing users to run containers in separate user namespaces. Tools like [Buildah][9] and [CRI-O][10] will also be able to take advantage of user namespaces. For CRI-O, however, Kubernetes needs to understand which user namespace will run the container engine, and the upstream is working on that. + +In my next article, I will explain how to run Podman as non-root in a user namespace. 
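+
+In the meantime, if you want to check ahead of time whether your kernel's OverlayFS supports the metadata-only copy-up feature discussed above, the sketch below may help. It is only an illustration: the `/sys` module-parameter path is how recent kernels expose the option, and the mount directories are placeholders, not paths that Podman itself creates.
+
+```
+# Check whether the running kernel's OverlayFS advertises metacopy support
+$ cat /sys/module/overlay/parameters/metacopy
+N
+
+# Experiment with metadata-only copy-up on a hand-built overlay mount
+# (/lower, /upper, /work, and /merged are placeholder directories)
+$ sudo mount -t overlay overlay \
+    -o metacopy=on,lowerdir=/lower,upperdir=/upper,workdir=/work /merged
+```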
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/podman-and-user-namespaces + +作者:[Daniel J Walsh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rhatdan +[b]: https://github.com/lujun9972 +[1]: https://podman.io/ +[2]: https://github.com/containers/libpod +[3]: https://opensource.com/article/18/10/podman-more-secure-way-run-containers +[4]: http://man7.org/linux/man-pages/man7/user_namespaces.7.html +[5]: https://chmodcommand.com/chmod-660/ +[6]: https://en.wikipedia.org/wiki/Chown +[7]: https://github.com/containers/storage +[8]: https://en.wikipedia.org/wiki/OverlayFS +[9]: https://buildah.io/ +[10]: http://cri-o.io/ diff --git a/sources/tech/20181214 Tips for using Flood Element for performance testing.md b/sources/tech/20181214 Tips for using Flood Element for performance testing.md new file mode 100644 index 0000000000..90994b0724 --- /dev/null +++ b/sources/tech/20181214 Tips for using Flood Element for performance testing.md @@ -0,0 +1,180 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Tips for using Flood Element for performance testing) +[#]: via: (https://opensource.com/article/18/12/tips-flood-element-testing) +[#]: author: (Nicole van der Hoeven https://opensource.com/users/nicolevanderhoeven) + +Tips for using Flood Element for performance testing +====== +Get started with this powerful, intuitive load testing tool. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn) + +In case you missed it, there’s a new performance test tool on the block: [Flood Element][1]. It’s a scalable, browser-based tool that allows you to write scripts in JavaScript that interact with web pages like a real user would. + +Browser Level Users is a [newer approach to load testing][2] that overcomes many of the common challenges we hear about traditional methods of testing. It offers: + + * Scripting that is akin to common functional tools like Selenium and easier to learn + * More realistic results that are based on true browser performance rather than API response + * The ability to test against all components of your web app, including things like JavaScript that are rendered via the browser + + + +Given the above benefits, it’s a no-brainer to check out Flood Element for your web load testing, especially if you have struggled with existing tools like JMeter or HP LoadRunner. + +Pairing Element with [Flood][3] turns it into a pretty powerful load test tool. We have a [great guide here][4] that you can follow if you’d like to get started. I’ve been using and testing Element for several months now, and I’d like to share some tips I’ve learned along the way. + +### Initializing your script + +You can always start from scratch, but the quickest way to get started is to type `element init myfirstelementtest` from your terminal, filling in your preferred project name. + +You’ll then be asked to type the title of your test as well as the URL you’d like to script against. After a minute, you’ll see that a new directory has been created: + +![](https://opensource.com/sites/default/files/uploads/image_1_-_new_directory.png) + +Element will automatically create a file called **test.ts**. 
This file contains the skeleton of a script, along with some sample code to help you find a button and then click on it. But before you open it, let’s move on to… + +### Choosing the right text editor + +Scripting in Element is already pretty simple, but two things that help are syntax highlighting and code completion. Syntax highlighting will greatly improve the experience of learning a new test tool like Element, and code completion will make your scripting lightning-fast as you become more experienced. My text editor of choice is [Visual Studio Code][5], which has both of those features. It’s slick and clean, and it does the job. + +Syntax highlighting is when the text editor intelligently changes the font color of your code according to its role in the programming language you’re using. Here’s a screenshot of the **test.ts** file we generated earlier in VS Code to show you what I mean: + +![](https://opensource.com/sites/default/files/uploads/image_2_test.ts_.png) + +This makes it easier to make sense of the code at a glance: Comments are in green, values and labels are in orange, etc. + +Code completion is when you start to type something, and VS Code helpfully opens a context menu with suggestions for methods you can use. + +![][6] + +I love this because it means I don’t need to remember the exact name of the method. It also suggests names of variables you’ve already defined and highlights code that doesn’t make sense. This will help to make your tests more maintainable and readable for others, which is a great benefit as you look to scale your testing out in the future. + +![](https://opensource.com/sites/default/files/image-4-element-visible-copy.gif) + +### Taking screenshots + +One of the most powerful features of Element is its ability to take screenshots. I find it immensely useful when debugging because sometimes it’s just easier to see what’s going on visually. With protocol-based tools, debugging can be a much more involved and technical process. + +There are two ways to take screenshots in Element: + + 1. Add a setting to automatically take a screenshot when an error is encountered. You can do this by setting `screenshotOnFailure` to "true" in `TestSettings`: + + + +``` +export const settings: TestSettings = { +        device: Device.iPadLandscape, +        userAgent: 'flood-chrome-test', +        clearCache: true, +        disableCache: true, +        screenshotOnFailure: true, +} +``` + + 2. Explicitly take a screenshot at a particular point in the script. You can do this by adding + + + +``` +await browser.takeScreenshot() +``` + +to your code. + +### Viewing screenshots + +Once you’ve taken screenshots within your tests, you will probably want to view them and know that they will be stored for future safekeeping. Whether you are running your test locally on have uploaded it to Flood to run with increased concurrency, Flood Element has you covered. + +**Locally run tests** + +Screenshots will be saved as .jpg files in a timestamped folder corresponding to your run. It should look something like this: **…myfirstelementtest/tmp/element-results/test/2018-11-20T135700.595Z/flood/screenshots/**. The screenshots will be uniquely named so that new screenshots, even for the same step, don’t overwrite older ones. + +However, I rarely need to look up the screenshots in that folder because I prefer to see them in iTerm2 for MacOS. iTerm is an alternative to the terminal that works particularly well with Element. 
When you take a screenshot, iTerm actually shows it in-line: + +![](https://opensource.com/sites/default/files/uploads/image_5_iterm_inline.png) + +**Tests run in Flood** + +Running an Element script on Flood is ideal when you need larger concurrency. Rather than accessing your screenshot locally, Flood will centralize the images into your account, so the images remain even after the cloud load injectors are destroyed. You can get to the screenshot files by downloading Archived Results: + +![](https://opensource.com/sites/default/files/image_6_archived_results.png) + +You can also click on a step on the dashboard to see a filmstrip of your test: + +![](https://opensource.com/sites/default/files/uploads/image_7_filmstrip_view.png) + +### Using logs + +You may need to check out the logs for more technical debugging, especially when the screenshots don’t tell the whole story. Again, whether you are running your test locally or have uploaded it to Flood to run with increased concurrency, Flood Element has you covered. + +**Locally run tests** + +You can print to the console by typing, for example: `console.log('orderValues = ’ + orderValues)` + +This will print the value of the variable `orderValues` at that point in the script. You would see this in your terminal if you’re running Element locally. + +**Tests run in Flood** + +If you’re running the script on Flood, you can either download the log (in the same Archived Results zipped file mentioned earlier) or click on the Logs tab: + +![](https://opensource.com/sites/default/files/uploads/image_8_logs_tab.png) + +### Fun with flags + +Element comes with a few flags that give you more control over how the script is run locally. Here are a few of my favorites: + +**Headless flag** + +When in doubt, run Element in non-headless mode to see the script actually opening the web app on Chrome and interacting with the page. This is only possible locally, but there’s nothing like actually seeing for yourself what’s happening in real time instead of relying on screenshots and logs after the fact. To enable this mode, add the flag when running your test: + +``` +element run myfirstelementtest.ts --no-headless +``` + +**Watch flag** + +Element will automatically close the browser window when it encounters an error or finishes the iteration. Adding `--watch` will leave the browser window open and then monitor the script. As soon as the script is saved, it will automatically run it in the same window from the beginning. Simply add this flag like the above example: + +``` +--watch +``` + +**Dev tools flag** + +This opens a browser instance and runs the script with the Chrome Dev Tools open, allowing you to find locators for the next action you want to script. Simply add this flag as in the first example: + +``` +--dev-tools +``` + +For more flags, use `element run --help`. + +### Try Element + +You’ve just gotten a crash course on Flood Element and are ready to get started. [Download Element][1] to start writing functional test scripts and reusing them as load test scripts on Flood. If you don’t have a Flood account, you can easily sign up for a free trial [on the Flood website][7]. + +We’re proud to contribute to the open source community and can’t wait to have you try this new addition to the Flood line. 
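+
+As a parting cheat sheet, here is the kind of local debugging loop that the flags above tend to combine into. Note that chaining the flags like this is my own habit rather than something prescribed by the documentation, so double-check `element run --help` on your version:
+
+```
+# Re-run the script in a visible browser window every time the file is saved
+$ element run test.ts --no-headless --watch
+
+# Open a visible browser with Chrome Dev Tools to help find locators
+$ element run test.ts --no-headless --dev-tools
+```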
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/18/12/tips-flood-element-testing
+
+作者:[Nicole van der Hoeven][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://opensource.com/users/nicolevanderhoeven
+[b]: https://github.com/lujun9972
+[1]: https://element.flood.io/
+[2]: https://flood.io/blog/why-you-should-load-test-with-browsers/
+[3]: https://flood.io/
+[4]: https://help.flood.io/getting-started-with-load-testing/step-by-step-guide-flood-element
+[5]: https://code.visualstudio.com/
+[6]: https://flood.io/wp-content/uploads/2018/11/vscode-codecompletion2.gif
+[7]: https://flood.io/load-performance-testing-tool/free-load-testing-trial/
diff --git a/sources/tech/20181216 Schedule a visit with the Emacs psychiatrist.md b/sources/tech/20181216 Schedule a visit with the Emacs psychiatrist.md
new file mode 100644
index 0000000000..6d72cda348
--- /dev/null
+++ b/sources/tech/20181216 Schedule a visit with the Emacs psychiatrist.md
@@ -0,0 +1,62 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Schedule a visit with the Emacs psychiatrist)
+[#]: via: (https://opensource.com/article/18/12/linux-toy-eliza)
+[#]: author: (Jason Baker https://opensource.com/users/jason-baker)
+
+Schedule a visit with the Emacs psychiatrist
+======
+Eliza is a natural language processing chatbot hidden inside of one of Linux’s most popular text editors.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-eliza.png?itok=3ioiBik_)
+
+Welcome to another day of the 24-day-long Linux command-line toys advent calendar. If this is your first visit to the series, you might be asking yourself what a command-line toy even is. We’re figuring that out as we go, but generally, it could be a game, or any simple diversion that helps you have fun at the terminal.
+
+Some of you will have seen various selections from our calendar before, but we hope there’s at least one new thing for everyone.
+
+Today’s selection is a hidden gem inside of Emacs: Eliza, the Rogerian psychotherapist, a terminal toy ready to listen to everything you have to say.
+
+A brief aside: While this toy is amusing, your health is no laughing matter. Please take care of yourself this holiday season, physically and mentally, and if stress and anxiety from the holidays are having a negative impact on your wellbeing, please consider seeing a professional for guidance. It really can help.
+
+To launch [Eliza][1], first, you’ll need to launch Emacs. There’s a good chance Emacs is already installed on your system, but if it’s not, it’s almost certainly in your default repositories.
+
+Since I’ve been pretty fastidious about keeping this series in the terminal, launch Emacs with the **-nw** flag to keep it within your terminal emulator.
+
+```
+$ emacs -nw
+```
+
+Inside of Emacs, type M-x doctor to launch Eliza. For those of you, like me, from a Vim background who have no idea what this means, just hit escape, type x and then type doctor. Then, share all of your holiday frustrations.
+
+Eliza goes way back, all the way to the mid-1960s at the MIT Artificial Intelligence Lab. [Wikipedia][2] has a rather fascinating look at her history.
+
+Eliza isn’t the only amusement inside of Emacs.
Check out the [manual][3] for a whole list of fun toys. + + +![Linux toy: eliza animated][5] + +Do you have a favorite command-line toy that you think I ought to profile? We're running out of time, but I'd still love to hear your suggestions. Let me know in the comments below, and I'll check it out. And let me know what you thought of today's amusement. + +Be sure to check out yesterday's toy, [Head to the arcade in your Linux terminal with this Pac-man clone][6], and come back tomorrow for another! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/linux-toy-eliza + +作者:[Jason Baker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jason-baker +[b]: https://github.com/lujun9972 +[1]: https://www.emacswiki.org/emacs/EmacsDoctor +[2]: https://en.wikipedia.org/wiki/ELIZA +[3]: https://www.gnu.org/software/emacs/manual/html_node/emacs/Amusements.html +[4]: /file/417326 +[5]: https://opensource.com/sites/default/files/uploads/linux-toy-eliza-animated.gif (Linux toy: eliza animated) +[6]: https://opensource.com/article/18/12/linux-toy-myman diff --git a/sources/tech/20181217 6 tips and tricks for using KeePassX to secure your passwords.md b/sources/tech/20181217 6 tips and tricks for using KeePassX to secure your passwords.md new file mode 100644 index 0000000000..ad688a7820 --- /dev/null +++ b/sources/tech/20181217 6 tips and tricks for using KeePassX to secure your passwords.md @@ -0,0 +1,78 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (6 tips and tricks for using KeePassX to secure your passwords) +[#]: via: (https://opensource.com/article/18/12/keepassx-security-best-practices) +[#]: author: (Michael McCune https://opensource.com/users/elmiko) + +6 tips and tricks for using KeePassX to secure your passwords +====== +Get more out of your password manager by following these best practices. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/security-lock-password.jpg?itok=KJMdkKum) + +Our increasingly interconnected digital world makes security an essential and common discussion topic. We hear about [data breaches][1] with alarming regularity and are often on our own to make informed decisions about how to use technology securely. Although security is a deep and nuanced topic, there are some easy daily habits you can keep to reduce your attack surface. + +Securing passwords and account information is something that affects anyone today. Technologies like [OAuth][2] help make our lives simpler by reducing the number of accounts we need to create, but we are still left with a staggering number of places where we need new, unique information to keep our records secure. An easy way to deal with the increased mental load of organizing all this sensitive information is to use a password manager like [KeePassX][3]. + +In this article, I will explain the importance of keeping your password information secure and offer suggestions for getting the most out of KeePassX. For an introduction to KeePassX and its features, I highly recommend Ricardo Frydman's article "[Managing passwords in Linux with KeePassX][4]." + +### Why are unique passwords important? 
+ +Using a different password for each account is the first step in ensuring that your accounts are not vulnerable to shared information leaks. Generating new credentials for every account is time-consuming, and it is extremely common for people to fall into the trap of using the same password on several accounts. The main problem with reusing passwords is that you increase the number of accounts an attacker could access if one of them experiences a credential breach. + +It may seem like a burden to create new credentials for each account, but the few minutes you spend creating and recording this information will pay for itself many times over in the event of a data breach. This is where password management tools like KeePassX are invaluable for providing convenience and reliability in securing your logins. + +### 3 tips for getting the most out of KeePassX + +I have been using KeePassX to manage my password information for many years, and it has become a primary resource in my digital toolbox. Overall, it's fairly simple to use, but there are a few best practices I've learned that I think are worth highlighting. + + 1. Add the direct login URL for each account entry. KeePassX has a very convenient shortcut to open the URL listed with an entry. (It's Control+Shift+U on Linux.) When creating a new account entry for a website, I spend some time to locate the site's direct login URL. Although most websites have a login widget in their navigation toolbars, they also usually have direct pages for login forms. By putting this URL into the URL field on the account entry setup form, I can use the shortcut to directly open the login page in my browser. + +![](https://opensource.com/sites/default/files/uploads/keepassx-tip1.png) + + 2. Use the Notes field to record extra security information. In addition to passwords, most websites will ask several questions to create additional authentication factors for an account. I use the Notes sections in my account entries to record these additional factors. + +![](https://opensource.com/sites/default/files/uploads/keepassx-tip2.png) + + 3. Turn on automatic database locking. In the **Application Settings** under the **Tools** menu, there is an option to lock the database after a period of inactivity. Enabling this option is a good common-sense measure, similar to enabling a password-protected screen lock, that will help ensure your password database is not left open and unprotected if someone else gains access to your computer. + +![](https://opensource.com/sites/default/files/uploads/keepassx_application-settings.png) + +### Food for thought + +Protecting your accounts with better password practices and daily habits is just the beginning. Once you start using a password manager, you need to consider issues like protecting the password database file and ensuring you don't forget or lose the master credentials. + +The cloud-native world of disconnected devices and edge computing makes having a central password store essential. The practices and methodologies you adopt will help minimize your risk while you explore and work in the digital world. + + 1. Be aware of retention policies when storing your database in the cloud. KeePassX's database has an open format used by several tools on multiple platforms. Sooner or later, you will want to transfer your database to another device. As you do this, consider the medium you will use to transfer the file. The best option is to use some sort of direct transfer between devices, but this is not always convenient. 
Always think about where the database file might be stored as it winds its way through the information superhighway; an email may get cached on a server, an object store may move old files to a trash folder. Learn about these interactions for the platforms you are using before deciding where and how you will share your database file. + + 2. Consider the source of truth for your database while you're making edits. After you share your database file between devices, you might need to create accounts for new services or change information for existing services while using a device. To ensure your information is always correct across all your devices, you need to make sure any edits you make on one device end up in all copies of the database file. There is no easy solution to this problem, but you might think about making all edits from a single device or storing the master copy in a location where all your devices can make edits. + + 3. Do you really need to know your passwords? This is more of a philosophical question that touches on the nature of memorable passwords, convenience, and secrecy. I hardly look at passwords as I create them for new accounts; in most cases, I don't even click the "Show Password" checkbox. There is an idea that you can be more secure by not knowing your passwords, as it would be impossible to compel you to provide them. This may seem like a worrisome idea at first, but consider that you can recover or reset passwords for most accounts through alternate verification methods. When you consider that you might want to change your passwords on a semi-regular basis, it almost makes more sense to treat them as ephemeral information that can be regenerated or replaced. + + + + +Here are a few more ideas to consider as you develop your best practices. + +I hope these tips and tricks have helped expand your knowledge of password management and KeePassX. You can find tools that support the KeePass database format on nearly every platform. If you are not currently using a password manager or have never tried KeePassX, I highly recommend doing so now! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/keepassx-security-best-practices + +作者:[Michael McCune][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/elmiko +[b]: https://github.com/lujun9972 +[1]: https://vigilante.pw/ +[2]: https://en.wikipedia.org/wiki/OAuth +[3]: https://www.keepassx.org/ +[4]: https://opensource.com/business/16/5/keepassx diff --git a/sources/tech/20181218 Insync- The Hassleless Way of Using Google Drive on Linux.md b/sources/tech/20181218 Insync- The Hassleless Way of Using Google Drive on Linux.md new file mode 100644 index 0000000000..c10e7ae4ed --- /dev/null +++ b/sources/tech/20181218 Insync- The Hassleless Way of Using Google Drive on Linux.md @@ -0,0 +1,137 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Insync: The Hassleless Way of Using Google Drive on Linux) +[#]: via: (https://itsfoss.com/insync-linux-review/) +[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/) + +Insync: The Hassleless Way of Using Google Drive on Linux +====== + +Using Google Drive on Linux is a pain and you probably already know that. 
There is no official desktop client of Google Drive for Linux. It’s been [more than six years since Google promised Google Drive on Linux][1] but it doesn’t seem to be happening.
+
+In the absence of the official Google Drive client on Linux, you have no option other than trying the alternatives. I have already discussed a number of [tools that allow you to use Google Drive on Linux][2]. One of those tools is [Insync][3], and in my opinion, this is your best bet for a native Google Drive experience on desktop Linux.
+
+Note that Insync is not open source software. Heck, it is not even free to use.
+
+But it has so many features that it becomes an essential tool for those Linux users who rely heavily on Google Drive.
+
+I briefly discussed Insync in the old article about [Google Drive and Linux][2]. In this article, I’ll discuss Insync features in detail.
+
+### Insync brings native Google Drive experience to Linux desktop
+
+![Use insync to access Google Drive in Linux][4]
+
+The core competency of Insync is syncing your Google Drive, but the app is much more than that. It has features to help you maximize and control your productivity, your Google Drive and your files, such as:
+
+  * Cross-platform access (supports Linux, Windows and macOS)
+  * Easy access to multiple Google Drive accounts
+  * Choose your syncing location. Sync files to your hard drive, external drives and NAS!
+  * Support for features like file matching, symlinks and an ignore list
+
+
+
+Let me show you some of the main features in action:
+
+#### Cross-platform in the true sense
+
+Insync claims to run the same app across all operating systems, i.e., Linux, Windows, and macOS. That means you can access the same UI across different OSes, making it easy for you to manage your files across multiple machines.
+
+![The UI of Insync and the default location of the Insync folder.][5]The UI of Insync and the default location of the Insync folder.
+
+#### Multiple Google account management
+
+The Insync interface allows you to manage multiple Google Drive accounts seamlessly. You can easily switch between several accounts just by clicking your Google account.
+
+![Switching between multiple Google accounts in Insync][6]Switching between multiple Google accounts
+
+#### Custom sync folders
+
+Customize the way you sync your files and folders. You can easily set your syncing destination anywhere on your machine, including external drives and network drives.
+
+![Customize sync location in Insync][7]Customize sync location
+
+The selective syncing mode also allows you to easily select the files and folders you’d want to sync (or unsync) on your local machine. This includes selectively syncing files within folders.
+
+![Selective synchronization in Insync][8]Selective synchronization
+
+It has features like file matching and an ‘ignore list’ to help you filter out files you don’t want to sync or files that you already have on your machine.
+
+![File matching feature in Insync][9]Avoids duplication of files
+
+The ‘ignore list’ allows you to set rules to exclude certain types of files from synchronization.
+
+![Selective syncing based on rules in Insync][10]Selective syncing based on rules
+
+If you prefer to work from the desktop, the “Add to Insync” feature allows you to add any local file to your Drive.
+
+![Sync files right from your desktop][11]Sync files right from your desktop
+
+Insync also supports symlinks for those with workflows that use symbolic links.
To learn more about Insync and symlinks, you can refer to [this article.][12] + +#### Exclusive features for Linux + +Insync supports the most commonly used 64-bit Linux distributions like **Ubuntu, Debian and Fedora**. You can check out the full list of distribution support [here][13]. + +Insync also has [headless][14] support for those looking to sync through the command line interface. This is perfect if you use a distro that is not fully supported by the GUI app or if you are working with servers or if you simply prefer the CLI. + +![Insync CLI][15]Command Line Interface + +You can learn more about installing and running Insync headless [here][16]. + +### Insync pricing and special discount + +Insync is a premium tool and it comes with a [price tag][17]. You have 2 licenses to choose from: + + * **Prime** is priced at $29.99 per Google account. You’ll get access to: cross-platform syncing, multiple accounts access and **support**. + * **Teams** is priced at $49.99 per Google account. You’ll be able to access all the Prime features + Team Drives syncing + + + +It’s a one-time fee which means once you buy it, you don’t have to pay it again. In a world where everything is paid monthly, it’s refreshing to pay for software that is still one-time! + +Each Google account has a 15-day free trial that will allow you to test the full suite of features, including [Team Drives][18] syncing. + +If you think it’s a bit expensive for your budget, I have good news for you. As an It’s FOSS reader, you get Insync at 25% discount. + +Just use the code ITSFOSS25 at checkout time and you will get 25% immediate discount on any license. Isn’t it cool? + +If you are not certain yet, you can try Insync free for 15 days. And if you think it’s worth the money, purchase the license with **ITSFOSS25** coupon code. + +You can download Insync from their website. + +I have used Insync from the time when it was available for free and I have always liked it. They have added more features over the time and improved its UI and performance. Overall, it’s a nice-to-have application if you use Google Drive a lot and do not mind paying for the efforts of the developers. 
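+
+If you plan to try the headless mode mentioned above, the snippet below is roughly what driving Insync from a server looks like. Treat the subcommand names as assumptions on my part; they are from memory of the 1.x command line client, and the CLI documentation linked above is the authoritative reference.
+
+```
+# Start the headless sync daemon on the server (subcommands may differ by version)
+$ insync-headless start
+
+# Ask the daemon what it is currently doing
+$ insync-headless get_status
+```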
+
+--------------------------------------------------------------------------------
+
+via: https://itsfoss.com/insync-linux-review/
+
+作者:[Abhishek Prakash][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://itsfoss.com/author/abhishek/
+[b]: https://github.com/lujun9972
+[1]: https://abevoelker.github.io/how-long-since-google-said-a-google-drive-linux-client-is-coming/
+[2]: https://itsfoss.com/use-google-drive-linux/
+[3]: https://www.insynchq.com
+[4]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/12/google-drive-linux-insync.jpeg?resize=800%2C450&ssl=1
+[5]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_interface.jpeg?fit=800%2C501&ssl=1
+[6]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_multiple_google_account.jpeg?ssl=1
+[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_folder_settings.png?ssl=1
+[8]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_selective_sync.png?ssl=1
+[9]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_file_matching.jpeg?ssl=1
+[10]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_ignore_list_1.png?ssl=1
+[11]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2018/11/add-to-insync-shortcut.jpeg?ssl=1
+[12]: https://help.insynchq.com/key-features-and-syncing-explained/syncing-superpowers/using-symlinks-on-google-drive-with-insync
+[13]: https://www.insynchq.com/downloads
+[14]: https://en.wikipedia.org/wiki/Headless_software
+[15]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/11/insync_cli.jpeg?fit=800%2C478&ssl=1
+[16]: https://help.insynchq.com/installation-on-windows-linux-and-macos/advanced/linux-controlling-insync-via-command-line-cli
+[17]: https://www.insynchq.com/pricing
+[18]: https://gsuite.google.com/learning-center/products/drive/get-started-team-drive/#!/
diff --git a/sources/tech/20181219 PowerTOP – Monitors Power Usage and Improve Laptop Battery Life in Linux.md b/sources/tech/20181219 PowerTOP – Monitors Power Usage and Improve Laptop Battery Life in Linux.md
new file mode 100644
index 0000000000..a615ffc73a
--- /dev/null
+++ b/sources/tech/20181219 PowerTOP – Monitors Power Usage and Improve Laptop Battery Life in Linux.md
@@ -0,0 +1,411 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (PowerTOP – Monitors Power Usage and Improve Laptop Battery Life in Linux)
+[#]: via: (https://www.2daygeek.com/powertop-monitors-laptop-battery-usage-linux/)
+[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
+
+PowerTOP – Monitors Power Usage and Improve Laptop Battery Life in Linux
+======
+
+We all know that most of us, perhaps 80-90%, have migrated from desktop PCs to laptops.
+
+But one thing we all want from a laptop is long battery life; we want to use every drop of the available power.
+
+So it’s good to know where our power is going and where it is being wasted.
+
+You can use the powertop utility to see what’s drawing power when your system’s not plugged in.
+
+You need to run the powertop utility in a terminal with superuser privileges.
+
+It will access the hardware and measure power usage.
+
+### What is PowerTOP
+
+PowerTOP is a Linux tool to diagnose issues with power consumption and power management.
+
+It was developed by Intel to enable various power-saving modes in kernel, userspace, and hardware.
+ +In addition to being a diagnostic tool, PowerTOP also has an interactive mode where the user can experiment various power management settings for cases where the Linux distribution has not enabled these settings. + +It is possible to monitor processes and show which of them are utilizing the CPU and wake it from its Idle-States, allowing to identify applications with particular high power demands. + +### How to Install PowerTOP + +PowerTOP package is available in most of the distributions official repository so, use the distributions **[Package Manager][1]** to install it. + +For **`Fedora`** system, use **[DNF Command][2]** to install PowerTOP. + +``` +$ sudo dnf install powertop +``` + +For **`Debian/Ubuntu`** systems, use **[APT-GET Command][3]** or **[APT Command][4]** to install PowerTOP. + +``` +$ sudo apt install powertop +``` + +For **`Arch Linux`** based systems, use **[Pacman Command][5]** to install PowerTOP. + +``` +$ sudo pacman -S powertop +``` + +For **`RHEL/CentOS`** systems, use **[YUM Command][6]** to install PowerTOP. + +``` +$ sudo yum install powertop +``` + +For **`openSUSE Leap`** system, use **[Zypper Command][7]** to install PowerTOP. + +``` +$ sudo zypper install powertop +``` + +### How To Access PowerTOP + +PowerTOP requires super user privilege so, run as root to use PowerTOP utility on your Linux system. + +By default it shows `Overview` tab where we can see the power usage consumption for all the devices. Also shows your system wakeups seconds. + +``` +$ sudo powertop + +PowerTOP v2.9 Overview Idle stats Frequency stats Device stats Tunables + +The battery reports a discharge rate of 12.6 W +The power consumed was 259 J +The estimated remaining time is 1 hours, 52 minutes + +Summary: 1692.9 wakeups/second, 0.0 GPU ops/seconds, 0.0 VFS ops/sec and 54.9% CPU use + + Usage Events/s Category Description + 9.3 ms/s 529.4 Timer tick_sched_timer + 378.5 ms/s 139.8 Process [PID 2991] /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -prefsLen 8314 -prefMapSize 173895 -schedulerPrefs 00 + 7.5 ms/s 141.7 Timer hrtimer_wakeup + 3.3 ms/s 102.7 Process [PID 1527] /usr/lib/firefox/firefox --new-window + 11.6 ms/s 69.1 Process [PID 1568] /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -prefsLen 1 -prefMapSize 173895 -schedulerPrefs 0001, + 6.2 ms/s 59.0 Process [PID 1496] /usr/lib/firefox/firefox --new-window + 2.1 ms/s 59.6 Process [PID 2466] /usr/lib/firefox/firefox -contentproc -childID 3 -isForBrowser -prefsLen 5814 -prefMapSize 173895 -schedulerPrefs 00 + 1.8 ms/s 52.3 Process [PID 2052] /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -prefsLen 5814 -prefMapSize 173895 -schedulerPrefs 00 + 1.8 ms/s 50.8 Process [PID 3034] /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -prefsLen 8314 -prefMapSize 173895 -schedulerPrefs 00 + 3.6 ms/s 48.4 Process [PID 3009] /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -prefsLen 8314 -prefMapSize 173895 -schedulerPrefs 00 + 7.5 ms/s 46.2 Process [PID 2996] /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -prefsLen 8314 -prefMapSize 173895 -schedulerPrefs 00 + 25.2 ms/s 33.6 Process [PID 1528] /usr/lib/firefox/firefox --new-window + 5.7 ms/s 32.2 Interrupt [7] sched(softirq) + 2.1 ms/s 32.2 Process [PID 1811] /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -prefsLen 5814 -prefMapSize 173895 -schedulerPrefs 00 + 19.7 ms/s 25.0 Process [PID 1794] /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -prefsLen 5814 
-prefMapSize 173895 -schedulerPrefs 00 + 1.9 ms/s 31.5 Process [PID 1596] /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -prefsLen 1 -prefMapSize 173895 -schedulerPrefs 0001, + 3.1 ms/s 29.9 Process [PID 1535] /usr/lib/firefox/firefox --new-window + 7.1 ms/s 28.2 Process [PID 1488] /usr/lib/firefox/firefox --new-window + 1.8 ms/s 29.5 Process [PID 1762] /usr/lib/firefox/firefox -contentproc -childID 3 -isForBrowser -prefsLen 5814 -prefMapSize 173895 -schedulerPrefs 00 + 8.8 ms/s 23.3 Process [PID 1121] /usr/bin/gnome-shell + 1.2 ms/s 21.8 Process [PID 1657] /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 920 -prefMapSize 173895 -schedulerPrefs 000 + 13.3 ms/s 13.9 Process [PID 1746] /usr/lib/firefox/firefox -contentproc -childID 3 -isForBrowser -prefsLen 5814 -prefMapSize 173895 -schedulerPrefs 00 + 2.7 ms/s 11.1 Process [PID 3410] /usr/lib/gnome-terminal-server + 3.8 ms/s 10.8 Process [PID 1057] /usr/lib/Xorg vt2 -displayfd 3 -auth /run/user/1000/gdm/Xauthority -nolisten tcp -background none -noreset -keeptty + 3.1 ms/s 9.8 Process [PID 1629] /usr/lib/firefox/firefox -contentproc -childID 2 -isForBrowser -prefsLen 920 -prefMapSize 173895 -schedulerPrefs 000 + 0.9 ms/s 6.7 Interrupt [136] xhci_hcd + 278.0 us/s 6.4 Process [PID 414] [irq/141-iwlwifi] + 128.7 us/s 5.7 Process [PID 1] /sbin/init + 118.5 us/s 5.2 Process [PID 10] [rcu_preempt] + 49.0 us/s 4.7 Interrupt [0] HI_SOFTIRQ + 459.3 us/s 3.1 Interrupt [142] i915 + 2.1 ms/s 2.3 Process [PID 3451] powertop + 8.4 us/s 2.7 kWork intel_atomic_helper_free_state_ + 1.2 ms/s 1.8 kWork intel_atomic_commit_work + 374.2 us/s 2.1 Interrupt [9] acpi + 42.1 us/s 1.8 kWork intel_atomic_cleanup_work + 3.5 ms/s 0.25 kWork delayed_fput + 238.0 us/s 1.5 Process [PID 907] /usr/lib/upowerd + 17.7 us/s 1.5 Timer intel_uncore_fw_release_timer + 26.4 us/s 1.4 Process [PID 576] [i915/signal:0] + 19.8 us/s 1.3 Timer watchdog_timer_fn + 1.1 ms/s 0.00 Process [PID 206] [kworker/7:2] + 2.4 ms/s 0.00 Interrupt [1] timer(softirq) + 13.4 us/s 0.9 Process [PID 9] [ksoftirqd/0] + + Exit | / Navigate | +``` + +The powertop output looks similar to the above screenshot, it will be slightly different based on your hardware. This have many screen you can switch between screen the using `Tab` and `Shift+Tab` button. + +### Idle Stats Tab + +It displays various information about the processor. 
+ +``` +PowerTOP v2.9 Overview Idle stats Frequency stats Device stats Tunables + + + Package | Core | CPU 0 CPU 4 + | | C0 active 6.7% 7.2% + | | POLL 0.0% 0.1 ms 0.0% 0.1 ms + | | C1E 1.2% 0.2 ms 1.6% 0.3 ms +C2 (pc2) 7.5% | | +C3 (pc3) 25.2% | C3 (cc3) 0.7% | C3 0.5% 0.2 ms 0.6% 0.1 ms +C6 (pc6) 0.0% | C6 (cc6) 7.1% | C6 6.6% 0.5 ms 6.3% 0.5 ms +C7 (pc7) 0.0% | C7 (cc7) 59.8% | C7s 0.0% 0.0 ms 0.0% 0.0 ms +C8 (pc8) 0.0% | | C8 33.9% 1.6 ms 32.3% 1.5 ms +C9 (pc9) 0.0% | | C9 2.1% 3.4 ms 0.7% 2.8 ms +C10 (pc10) 0.0% | | C10 39.5% 4.7 ms 41.4% 4.7 ms + + | Core | CPU 1 CPU 5 + | | C0 active 8.3% 7.2% + | | POLL 0.0% 0.0 ms 0.0% 0.1 ms + | | C1E 1.3% 0.2 ms 1.4% 0.3 ms + | | + | C3 (cc3) 0.5% | C3 0.5% 0.2 ms 0.4% 0.2 ms + | C6 (cc6) 6.0% | C6 5.3% 0.5 ms 4.7% 0.5 ms + | C7 (cc7) 59.3% | C7s 0.0% 0.8 ms 0.0% 1.0 ms + | | C8 27.2% 1.5 ms 23.8% 1.4 ms + | | C9 1.6% 3.0 ms 0.5% 3.0 ms + | | C10 44.5% 4.7 ms 52.2% 4.6 ms + + | Core | CPU 2 CPU 6 + | | C0 active 11.2% 8.4% + | | POLL 0.0% 0.0 ms 0.0% 0.0 ms + | | C1E 1.4% 0.4 ms 1.3% 0.3 ms + | | + | C3 (cc3) 0.3% | C3 0.2% 0.1 ms 0.4% 0.2 ms + | C6 (cc6) 4.0% | C6 3.7% 0.5 ms 4.3% 0.5 ms + | C7 (cc7) 54.2% | C7s 0.0% 0.0 ms 0.0% 1.0 ms + | | C8 20.0% 1.5 ms 20.7% 1.4 ms + | | C9 1.0% 3.4 ms 0.4% 3.8 ms + | | C10 48.8% 4.6 ms 52.3% 5.0 ms + + | Core | CPU 3 CPU 7 + | | C0 active 8.8% 8.1% + | | POLL 0.0% 0.1 ms 0.0% 0.0 ms + | | C1E 1.2% 0.2 ms 1.2% 0.2 ms + | | + | C3 (cc3) 0.6% | C3 0.6% 0.2 ms 0.4% 0.2 ms + | C6 (cc6) 7.0% | C6 7.5% 0.5 ms 4.4% 0.5 ms + | C7 (cc7) 56.8% | C7s 0.0% 0.0 ms 0.0% 0.9 ms + | | C8 29.4% 1.4 ms 23.8% 1.4 ms + | | C9 1.1% 2.7 ms 0.7% 3.9 ms + | | C10 41.0% 4.0 ms 50.0% 4.8 ms + + + Exit | / Navigate | +``` + +### Frequency Stats Tab + +It displays the frequency of CPU. + +``` +PowerTOP v2.9 Overview Idle stats Frequency stats Device stats Tunables + + + Package | Core | CPU 0 CPU 4 + | | Average 930 MHz 1101 MHz +Idle | Idle | Idle + + | Core | CPU 1 CPU 5 + | | Average 1063 MHz 979 MHz + | Idle | Idle + + | Core | CPU 2 CPU 6 + | | Average 976 MHz 942 MHz + | Idle | Idle + + | Core | CPU 3 CPU 7 + | | Average 924 MHz 957 MHz + | Idle | Idle + +``` + +### Device Stats Tab + +It displays power usage information against only devices. + +``` +PowerTOP v2.9 Overview Idle stats Frequency stats Device stats Tunables + + +The battery reports a discharge rate of 13.8 W +The power consumed was 280 J + + Usage Device name + 46.7% CPU misc + 46.7% DRAM + 46.7% CPU core + 19.0% Display backlight + 0.0% Audio codec hwC0D0: Realtek + 0.0% USB device: Lenovo EasyCamera (160709000341) + 100.0% PCI Device: Intel Corporation HD Graphics 530 + 100.0% Radio device: iwlwifi + 100.0% PCI Device: O2 Micro, Inc. SD/MMC Card Reader Controller + 100.0% PCI Device: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Host Bridge/DRAM Registers + 100.0% USB device: Lenovo Wireless Optical Mouse N100 + 100.0% PCI Device: Intel Corporation Wireless 8260 + 100.0% PCI Device: Intel Corporation HM170/QM170 Chipset SATA Controller [AHCI Mode] + 100.0% Radio device: btusb + 100.0% PCI Device: Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #4 + 100.0% USB device: xHCI Host Controller + 100.0% PCI Device: Intel Corporation 100 Series/C230 Series Chipset Family USB 3.0 xHCI Controller + 100.0% PCI Device: Realtek Semiconductor Co., Ltd. 
RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller + 100.0% PCI Device: Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #3 + 100.0% PCI Device: Samsung Electronics Co Ltd NVMe SSD Controller SM951/PM951 + 100.0% PCI Device: Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #2 + 100.0% PCI Device: Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #9 + 100.0% PCI Device: Intel Corporation 100 Series/C230 Series Chipset Family SMBus + 26.1 pkts/s Network interface: wlp8s0 (iwlwifi) + 0.0% USB device: usb-device-8087-0a2b + 0.0% runtime-reg-dummy + 0.0% Audio codec hwC0D2: Intel + 0.0 pkts/s Network interface: enp9s0 (r8168) + 0.0% PCI Device: Intel Corporation 100 Series/C230 Series Chipset Family Power Management Controller + 0.0% PCI Device: Intel Corporation HM170 Chipset LPC/eSPI Controller + 0.0% PCI Device: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) + 0.0% PCI Device: Intel Corporation 100 Series/C230 Series Chipset Family MEI Controller #1 + 0.0% PCI Device: NVIDIA Corporation GM107M [GeForce GTX 960M] + 0.0% I2C Adapter (i2c-8): nvkm-0000:01:00.0-bus-0005 + 0.0% runtime-PNP0C14:00 + 0.0% PCI Device: Intel Corporation 100 Series/C230 Series Chipset Family HD Audio Controller + 0.0% runtime-PNP0C0C:00 + 0.0% USB device: xHCI Host Controller + 0.0% runtime-ACPI000C:00 + 0.0% runtime-regulatory.0 + 0.0% runtime-PNP0C14:01 + 0.0% runtime-vesa-framebuffer.0 + 0.0% runtime-coretemp.0 + 0.0% runtime-alarmtimer + + Exit | / Navigate | +``` + +### Tunables Stats Tab + +This tab is important area that provides suggestions to optimize your laptop battery. + +``` +PowerTOP v2.9 Overview Idle stats Frequency stats Device stats Tunables + + +>> Bad Enable SATA link power management for host2 + Bad Enable SATA link power management for host3 + Bad Enable SATA link power management for host0 + Bad Enable SATA link power management for host1 + Bad VM writeback timeout + Bad Autosuspend for USB device Lenovo Wireless Optical Mouse N100 [1-2] + Good Bluetooth device interface status + Good Enable Audio codec power management + Good NMI watchdog should be turned off + Good Runtime PM for I2C Adapter i2c-7 (nvkm-0000:01:00.0-bus-0002) + Good Autosuspend for unknown USB device 1-11 (8087:0a2b) + Good Runtime PM for I2C Adapter i2c-3 (i915 gmbus dpd) + Good Autosuspend for USB device Lenovo EasyCamera [160709000341] + Good Runtime PM for I2C Adapter i2c-1 (i915 gmbus dpc) + Good Runtime PM for I2C Adapter i2c-12 (nvkm-0000:01:00.0-bus-0009) + Good Autosuspend for USB device xHCI Host Controller [usb1] + Good Runtime PM for I2C Adapter i2c-13 (nvkm-0000:01:00.0-aux-000a) + Good Runtime PM for I2C Adapter i2c-2 (i915 gmbus dpb) + Good Runtime PM for I2C Adapter i2c-8 (nvkm-0000:01:00.0-bus-0005) + Good Runtime PM for I2C Adapter i2c-15 (nvkm-0000:01:00.0-aux-000c) + Good Runtime PM for I2C Adapter i2c-16 (nvkm-0000:01:00.0-aux-000d) + Good Runtime PM for I2C Adapter i2c-5 (nvkm-0000:01:00.0-bus-0000) + Good Runtime PM for I2C Adapter i2c-0 (SMBus I801 adapter at 6040) + Good Runtime PM for I2C Adapter i2c-11 (nvkm-0000:01:00.0-bus-0008) + Good Runtime PM for I2C Adapter i2c-14 (nvkm-0000:01:00.0-aux-000b) + Good Autosuspend for USB device xHCI Host Controller [usb2] + Good Runtime PM for I2C Adapter i2c-9 (nvkm-0000:01:00.0-bus-0006) + Good Runtime PM for I2C Adapter i2c-10 (nvkm-0000:01:00.0-bus-0007) + Good Runtime PM for I2C Adapter i2c-6 (nvkm-0000:01:00.0-bus-0001) + 
Good Runtime PM for PCI Device Intel Corporation 100 Series/C230 Series Chipset Family HD Audio Controller + Good Runtime PM for PCI Device Intel Corporation 100 Series/C230 Series Chipset Family USB 3.0 xHCI Controller + Good Runtime PM for PCI Device Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor Host Bridge/DRAM Registers + Good Runtime PM for PCI Device Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #9 + Good Runtime PM for PCI Device Intel Corporation HD Graphics 530 + Good Runtime PM for PCI Device Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller + Good Runtime PM for PCI Device Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #3 + Good Runtime PM for PCI Device O2 Micro, Inc. SD/MMC Card Reader Controller + Good Runtime PM for PCI Device Intel Corporation HM170 Chipset LPC/eSPI Controller + Good Runtime PM for PCI Device Intel Corporation 100 Series/C230 Series Chipset Family MEI Controller #1 + Good Runtime PM for PCI Device Samsung Electronics Co Ltd NVMe SSD Controller SM951/PM951 + Good Runtime PM for PCI Device Intel Corporation HM170/QM170 Chipset SATA Controller [AHCI Mode] + Good Runtime PM for PCI Device Intel Corporation 100 Series/C230 Series Chipset Family Power Management Controller + Good Runtime PM for PCI Device Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #2 + Good Runtime PM for PCI Device Intel Corporation Wireless 8260 + Good Runtime PM for PCI Device Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) + Good Runtime PM for PCI Device Intel Corporation 100 Series/C230 Series Chipset Family PCI Express Root Port #4 + Good Runtime PM for PCI Device Intel Corporation 100 Series/C230 Series Chipset Family SMBus + Good Runtime PM for PCI Device NVIDIA Corporation GM107M [GeForce GTX 960M] + + Exit | Toggle tunable | Window refresh +``` + +### How To Generate PowerTop HTML Report + +Run the following command to generate the PowerTop HTML report. + +``` +$ sudo powertop --html=powertop.html +modprobe cpufreq_stats failedLoaded 100 prior measurements +Cannot load from file /var/cache/powertop/saved_parameters.powertop +File will be loaded after taking minimum number of measurement(s) with battery only +RAPL device for cpu 0 +RAPL Using PowerCap Sysfs : Domain Mask f +RAPL device for cpu 0 +RAPL Using PowerCap Sysfs : Domain Mask f +Devfreq not enabled +glob returned GLOB_ABORTED +Cannot load from file /var/cache/powertop/saved_parameters.powertop +File will be loaded after taking minimum number of measurement(s) with battery only +Preparing to take measurements +To show power estimates do 182 measurement(s) connected to battery only +Taking 1 measurement(s) for a duration of 20 second(s) each. +PowerTOP outputing using base filename powertop.html +``` + +Navigate to `file:///home/daygeek/powertop.html` file to access the generated PowerTOP HTML report. +![][9] + +### Auto-Tune mode + +This feature sets all tunable options from `BAD` to `GOOD` which increase the laptop battery life in Linux. 
+
+```
+$ sudo powertop --auto-tune
+modprobe cpufreq_stats failed
+Loaded 210 prior measurements
+Cannot load from file /var/cache/powertop/saved_parameters.powertop
+File will be loaded after taking minimum number of measurement(s) with battery only
+RAPL device for cpu 0
+RAPL Using PowerCap Sysfs : Domain Mask f
+RAPL device for cpu 0
+RAPL Using PowerCap Sysfs : Domain Mask f
+Devfreq not enabled
+glob returned GLOB_ABORTED
+Cannot load from file /var/cache/powertop/saved_parameters.powertop
+File will be loaded after taking minimum number of measurement(s) with battery only
+To show power estimates do 72 measurement(s) connected to battery only
+Leaving PowerTOP
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/powertop-monitors-laptop-battery-usage-linux/
+
+作者:[Vinoth Kumar][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/vinoth/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/category/package-management/
+[2]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[3]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[4]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[6]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[7]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[8]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[9]: https://www.2daygeek.com/wp-content/uploads/2015/07/powertop-html-output.jpg
diff --git a/sources/tech/20181220 Getting started with Prometheus.md b/sources/tech/20181220 Getting started with Prometheus.md
new file mode 100644
index 0000000000..79704addb7
--- /dev/null
+++ b/sources/tech/20181220 Getting started with Prometheus.md
@@ -0,0 +1,166 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Getting started with Prometheus)
+[#]: via: (https://opensource.com/article/18/12/introduction-prometheus)
+[#]: author: (Michael Zamot https://opensource.com/users/mzamot)
+
+Getting started with Prometheus
+======
+Learn to install and write queries for the Prometheus monitoring and alerting system.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_sysadmin_cloud.png?itok=sUciG0Cn)
+
+[Prometheus][1] is an open source monitoring and alerting system that directly scrapes metrics from agents running on the target hosts and stores the collected samples centrally on its server. Metrics can also be pushed using plugins like **collectd_exporter**; although this is not Prometheus' default behavior, it may be useful in some environments where hosts are behind a firewall or prohibited from opening ports by security policy.
+
+Prometheus, a project of the [Cloud Native Computing Foundation][2], scales up using a federation model, which enables one Prometheus server to scrape another Prometheus server. This allows creation of a hierarchical topology, where a central system or higher-level Prometheus server can scrape aggregated data already collected from subordinate instances.
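+
+As a concrete illustration of that hierarchy, a minimal sketch of a federation scrape job on the higher-level server might look like the following (the job name, `match[]` selector, and target address are illustrative assumptions, not values from this article; federation is served on the `/federate` endpoint):
+
+```
+scrape_configs:
+  - job_name: 'federate'
+    honor_labels: true
+    metrics_path: '/federate'
+    params:
+      'match[]':
+        - '{job="webservers"}'    # forward only these series upward
+    static_configs:
+      - targets: ['subordinate-prometheus:9090']
+```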
+
+Besides the Prometheus server, its most common components are its [Alertmanager][3] and its exporters.
+
+Alerting rules can be created within Prometheus and configured to send custom alerts to Alertmanager. Alertmanager then processes and handles these alerts, including sending notifications through different mechanisms like email or third-party services like [PagerDuty][4].
+
+Prometheus' exporters can be libraries, processes, devices, or anything else that exposes the metrics that will be scraped by Prometheus. The metrics are available at the endpoint **/metrics**, which allows Prometheus to scrape them directly without needing an agent. The tutorial in this article uses **node_exporter** to expose the target hosts' hardware and operating system metrics. Exporters' outputs are plaintext and highly readable, which is one of Prometheus' strengths.
+
+In addition, you can configure [Grafana][5] to use Prometheus as a backend to provide data visualization and dashboarding functions.
+
+### Making sense of Prometheus' configuration file
+
+The number of seconds between scrapes of **/metrics** controls the granularity of the time-series database. This is defined in the configuration file as the **scrape_interval** parameter, which by default is set to 60 seconds.
+
+Targets are set for each scrape job in the **scrape_configs** section. Each job has its own name and a set of labels that can help filter and categorize targets and make them easier to identify. One job can have many targets.
+
+### Installing Prometheus
+
+In this tutorial, for simplicity, we will install a Prometheus server and **node_exporter** with Docker, which should already be installed and configured properly on your system. For a more in-depth, automated method, I recommend Steve Ovens' article [How to use Ansible to set up system monitoring with Prometheus][6].
+
+Before starting, create the Prometheus configuration file **prometheus.yml** in your work directory as follows (note that the indentation matters: **static_configs** must be nested directly under its job):
+
+```
+global:
+  scrape_interval:     15s
+  evaluation_interval: 15s
+
+scrape_configs:
+  - job_name: 'prometheus'
+    static_configs:
+      - targets: ['localhost:9090']
+
+  - job_name: 'webservers'
+    static_configs:
+      - targets: ['<target-IP>:9100']
+```
+
+Start Prometheus with Docker by running the following command:
+
+```
+$ sudo docker run -d -p 9090:9090 \
+    -v /path/to/prometheus.yml:/etc/prometheus/prometheus.yml \
+    prom/prometheus
+```
+
+By default, the Prometheus server will use port 9090. If this port is already in use, you can change it by adding the parameter **\--web.listen-address="<IP>:<port>"** at the end of the previous command.
+
+On the machine you want to monitor, download and run the **node_exporter** container by using the following command:
+
+```
+$ sudo docker run -d -v "/proc:/host/proc" -v "/sys:/host/sys" \
+    -v "/:/rootfs" --net="host" prom/node-exporter \
+    --path.procfs /host/proc --path.sysfs /host/sys \
+    --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
+```
+
+For the purposes of this learning exercise, you can install **node_exporter** and Prometheus on the same machine. Please note that it's not wise to run **node_exporter** under Docker in production; this is for testing purposes only.
+
+To verify that **node_exporter** is running, open your browser and navigate to **http://<target-IP>:9100/metrics**. All the metrics collected will be displayed; these are the same metrics Prometheus will scrape.
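+
+For orientation, the page is plain text in the Prometheus exposition format; a few illustrative lines might look like this (the metric names are real node_exporter metrics, but the sample values here are made up):
+
+```
+# HELP node_cpu_seconds_total Seconds the cpus spent in each mode.
+# TYPE node_cpu_seconds_total counter
+node_cpu_seconds_total{cpu="0",mode="idle"} 86218.42
+node_memory_MemAvailable_bytes 2.552512512e+09
+```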
+
+![](https://opensource.com/sites/default/files/uploads/check-node_exporter.png)
+
+To verify the Prometheus server installation, open your browser and navigate to **http://localhost:9090**.
+
+You should see the Prometheus interface. Click on **Status** and then **Targets**. Under State, you should see your machines listed as **UP**.
+
+![](https://opensource.com/sites/default/files/uploads/targets-up.png)
+
+### Using Prometheus queries
+
+It's time to get familiar with [PromQL][7], Prometheus' query syntax, and its graphing web interface. Go to **http://localhost:9090/graph** on your Prometheus server. You will see a query editor and two tabs: Graph and Console.
+
+Prometheus stores all data as time series, identifying each one with a metric name. For example, the metric **node_filesystem_avail_bytes** shows the available filesystem space. The metric's name can be used in the expression box to select all of the time series with this name and produce an instant vector. If desired, these time series can be filtered using selectors and labels (a set of key-value pairs), for example:
+
+```
+node_filesystem_avail_bytes{fstype="ext4"}
+```
+
+When filtering, you can match "exactly equal" ( **=** ), "not equal" ( **!=** ), "regex-match" ( **=~** ), and "do not regex-match" ( **!~** ). The following examples illustrate this:
+
+To filter **node_filesystem_avail_bytes** to show both ext4 and XFS filesystems:
+
+```
+node_filesystem_avail_bytes{fstype=~"ext4|xfs"}
+```
+
+To exclude a match:
+
+```
+node_filesystem_avail_bytes{fstype!="xfs"}
+```
+
+You can also get a range of samples back from the current time by using square brackets. You can use **s** to represent seconds, **m** for minutes, **h** for hours, **d** for days, **w** for weeks, and **y** for years. When using time ranges, the vector returned will be a range vector.
+
+For example, the following command produces the samples from five minutes ago to the present:
+
+```
+node_memory_MemAvailable_bytes[5m]
+```
+
+Prometheus also includes functions to allow advanced queries, such as this:
+
+```
+100 * (1 - avg by(instance)(irate(node_cpu_seconds_total{job='webservers',mode='idle'}[5m])))
+```
+
+Notice how the labels are used to filter the job and the mode. The metric **node_cpu_seconds_total** returns a counter, and the **irate()** function calculates the per-second rate of change based on the last two data points of the range interval (meaning the range can be smaller than five minutes). To calculate the overall CPU usage, you can use the idle mode of the **node_cpu_seconds_total** metric. The idle percent of a processor is the opposite of a busy processor, so the **irate** value is subtracted from 1. To make it a percentage, multiply it by 100.
+
+![](https://opensource.com/sites/default/files/uploads/cpu-usage.png)
+
+### Learn more
+
+Prometheus is a powerful, scalable, lightweight, and easy-to-use and easy-to-deploy monitoring tool that is indispensable for every system administrator and developer. For these and other reasons, many companies are implementing Prometheus as part of their infrastructure.
+ +To learn more about Prometheus and its functions, I recommend the following resources: + ++ About [PromQL][8] ++ What [node_exporters collects][9] ++ [Prometheus functions][10] ++ [4 open source monitoring tools][11] ++ [Now available: The open source guide to DevOps monitoring tools][12] + + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/introduction-prometheus + +作者:[Michael Zamot][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/mzamot +[b]: https://github.com/lujun9972 +[1]: https://prometheus.io/ +[2]: https://www.cncf.io/ +[3]: https://prometheus.io/docs/alerting/alertmanager/ +[4]: https://en.wikipedia.org/wiki/PagerDuty +[5]: https://grafana.com/ +[6]: https://opensource.com/article/18/3/how-use-ansible-set-system-monitoring-prometheus +[7]: https://prometheus.io/docs/prometheus/latest/querying/basics/ +[8]: https://prometheus.io/docs/prometheus/latest/querying/basics/ +[9]: https://github.com/prometheus/node_exporter#collectors +[10]: https://prometheus.io/docs/prometheus/latest/querying/functions/ +[11]: https://opensource.com/article/18/8/open-source-monitoring-tools +[12]: https://opensource.com/article/18/8/now-available-open-source-guide-devops-monitoring-tools diff --git a/sources/tech/20181221 Large files with Git- LFS and git-annex.md b/sources/tech/20181221 Large files with Git- LFS and git-annex.md new file mode 100644 index 0000000000..2e7b9a9b74 --- /dev/null +++ b/sources/tech/20181221 Large files with Git- LFS and git-annex.md @@ -0,0 +1,145 @@ +[#]: collector: (lujun9972) +[#]: translator: (runningwater) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Large files with Git: LFS and git-annex) +[#]: via: (https://anarc.at/blog/2018-12-21-large-files-with-git/) +[#]: author: (Anarc.at https://anarc.at/) + +Large files with Git: LFS and git-annex +====== + +Git does not handle large files very well. While there is work underway to handle large repositories through the [commit graph work][2], Git's internal design has remained surprisingly constant throughout its history, which means that storing large files into Git comes with a significant and, ultimately, prohibitive performance cost. Thankfully, other projects are helping Git address this challenge. This article compares how Git LFS and git-annex address this problem and should help readers pick the right solution for their needs. + +### The problem with large files + +As readers probably know, Linus Torvalds wrote Git to manage the history of the kernel source code, which is a large collection of small files. Every file is a "blob" in Git's object store, addressed by its cryptographic hash. A new version of that file will store a new blob in Git's history, with no deduplication between the two versions. The pack file format can store binary deltas between similar objects, but if many objects of similar size change in a repository, that algorithm might fail to properly deduplicate. In practice, large binary files (say JPEG images) have an irritating tendency of changing completely when even the smallest change is made, which makes delta compression useless. + +There have been different attempts at fixing this in the past. 
In 2006, Torvalds worked on [improving the pack-file format][3] to reduce object duplication between the index and the pack files. Those changes were eventually reverted because, as Nicolas Pitre [put it][4]: "that extra loose object format doesn't appear to be worth it anymore". + +Then in 2009, [Caca Labs][5] worked on improving the `fast-import` and `pack-objects` Git commands to do special handling for big files, in an effort called [git-bigfiles][6]. Some of those changes eventually made it into Git: for example, since [1.7.6][7], Git will stream large files directly to a pack file instead of holding them all in memory. But files are still kept forever in the history. + +An example of trouble I had to deal with is for the Debian security tracker, which follows all security issues in the entire Debian history in a single file. That file is around 360,000 lines for a whopping 18MB. The resulting repository takes 1.6GB of disk space and a local clone takes 21 minutes to perform, mostly taken up by Git resolving deltas. Commit, push, and pull are noticeably slower than a regular repository, taking anywhere from a few seconds to a minute depending one how old the local copy is. And running annotate on that large file can take up to ten minutes. So even though that is a simple text file, it's grown large enough to cause significant problems for Git, which is otherwise known for stellar performance. + +Intuitively, the problem is that Git needs to copy files into its object store to track them. Third-party projects therefore typically solve the large-files problem by taking files out of Git. In 2009, Git evangelist Scott Chacon released [GitMedia][8], which is a Git filter that simply takes large files out of Git. Unfortunately, there hasn't been an official release since then and it's [unclear][9] if the project is still maintained. The next effort to come up was [git-fat][10], first released in 2012 and still maintained. But neither tool has seen massive adoption yet. If I would have to venture a guess, it might be because both require manual configuration. Both also require a custom server (rsync for git-fat; S3, SCP, Atmos, or WebDAV for GitMedia) which limits collaboration since users need access to another service. + +### Git LFS + +That was before GitHub [released][11] Git Large File Storage (LFS) in August 2015. Like all software taking files out of Git, LFS tracks file hashes instead of file contents. So instead of adding large files into Git directly, LFS adds a pointer file to the Git repository, which looks like this: + +``` +version https://git-lfs.github.com/spec/v1 +oid sha256:4d7a214614ab2935c943f9e0ff69d22eadbb8f32b1258daaa5e2ca24d17e2393 +size 12345 +``` + +LFS then uses Git's smudge and clean filters to show the real file on checkout. Git only stores that small text file and does so efficiently. The downside, of course, is that large files are not version controlled: only the latest version of a file is kept in the repository. + +Git LFS can be used in any repository by installing the right hooks with `git lfs install` then asking LFS to track any given file with `git lfs track`. This will add the file to the `.gitattributes` file which will make Git run the proper LFS filters. It's also possible to add patterns to the `.gitattributes` file, of course. 
For example, this will make sure Git LFS will track MP3 and ZIP files: + +``` +$ cat .gitattributes +*.mp3 filter=lfs -text +*.zip filter=lfs -text +``` + +After this configuration, we use Git normally: `git add`, `git commit`, and so on will talk to Git LFS transparently. + +The actual files tracked by LFS are copied to a path like `.git/lfs/objects/{OID-PATH}`, where `{OID-PATH}` is a sharded file path of the form `OID[0:2]/OID[2:4]/OID` and where `OID` is the content's hash (currently SHA-256) of the file. This brings the extra feature that multiple copies of the same file in the same repository are automatically deduplicated, although in practice this rarely occurs. + +Git LFS will copy large files to that internal storage on `git add`. When a file is modified in the repository, Git notices, the new version is copied to the internal storage, and the pointer file is updated. The old version is left dangling until the repository is pruned. + +This process only works for new files you are importing into Git, however. If a Git repository already has large files in its history, LFS can fortunately "fix" repositories by retroactively rewriting history with [git lfs migrate][12]. This has all the normal downsides of rewriting history, however --- existing clones will have to be reset to benefit from the cleanup. + +LFS also supports [file locking][13], which allows users to claim a lock on a file, making it read-only everywhere except in the locking repository. This allows users to signal others that they are working on an LFS file. Those locks are purely advisory, however, as users can remove other user's locks by using the `--force` flag. LFS can also [prune][14] old or unreferenced files. + +The main [limitation][15] of LFS is that it's bound to a single upstream: large files are usually stored in the same location as the central Git repository. If it is hosted on GitHub, this means a default quota of 1GB storage and bandwidth, but you can purchase additional "packs" to expand both of those quotas. GitHub also limits the size of individual files to 2GB. This [upset][16] some users surprised by the bandwidth fees, which were previously hidden in GitHub's cost structure. + +While the actual server-side implementation used by GitHub is closed source, there is a [test server][17] provided as an example implementation. Other Git hosting platforms have also [implemented][18] support for the LFS [API][19], including GitLab, Gitea, and BitBucket; that level of adoption is something that git-fat and GitMedia never achieved. LFS does support hosting large files on a server other than the central one --- a project could run its own LFS server, for example --- but this will involve a different set of credentials, bringing back the difficult user onboarding that affected git-fat and GitMedia. + +Another limitation is that LFS only supports pushing and pulling files over HTTP(S) --- no SSH transfers. LFS uses some [tricks][20] to bypass HTTP basic authentication, fortunately. This also might change in the future as there are proposals to add [SSH support][21], resumable uploads through the [tus.io protocol][22], and other [custom transfer protocols][23]. + +Finally, LFS can be slow. Every file added to LFS takes up double the space on the local filesystem as it is copied to the `.git/lfs/objects` storage. The smudge/clean interface is also slow: it works as a pipe, but buffers the file contents in memory each time, which can be prohibitive with files larger than available memory. 
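+
+To pull the pieces above together, a typical first-time Git LFS setup might look roughly like this (the pattern and file name are just examples):
+
+```
+$ git lfs install              # installs the smudge/clean filter hooks
+$ git lfs track "*.zip"        # records the pattern in .gitattributes
+$ git add .gitattributes big-archive.zip
+$ git commit -m "Track ZIP files with LFS"
+$ git push                     # the large blob is uploaded to the LFS server
+```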
+ +### git-annex + +The other main player in large file support for Git is git-annex. We [covered the project][24] back in 2010, shortly after its first release, but it's certainly worth discussing what has changed in the eight years since Joey Hess launched the project. + +Like Git LFS, git-annex takes large files out of Git's history. The way it handles this is by storing a symbolic link to the file in `.git/annex`. We should probably credit Hess for this innovation, since the Git LFS storage layout is obviously inspired by git-annex. The original design of git-annex introduced all sorts of problems however, especially on filesystems lacking symbolic-link support. So Hess has implemented different solutions to this problem. Originally, when git-annex detected such a "crippled" filesystem, it switched to [direct mode][25], which kept files directly in the work tree, while internally committing the symbolic links into the Git repository. This design turned out to be a little confusing to users, including myself; I have managed to shoot myself in the foot more than once using this system. + +Since then, git-annex has adopted a different v7 mode that is also based on smudge/clean filters, which it called "[unlocked files][26]". Like Git LFS, unlocked files will double disk space usage by default. However it is possible to reduce disk space usage by using "thin mode" which uses hard links between the internal git-annex disk storage and the work tree. The downside is, of course, that changes are immediately performed on files, which means previous file versions are automatically discarded. This can lead to data loss if users are not careful. + +Furthermore, git-annex in v7 mode suffers from some of the performance problems affecting Git LFS, because both use the smudge/clean filters. Hess actually has [ideas][27] on how the smudge/clean interface could be improved. He proposes changing Git so that it stops buffering entire files into memory, allows filters to access the work tree directly, and adds the hooks he found missing (for `stash`, `reset`, and `cherry-pick`). Git-annex already implements some tricks to work around those problems itself but it would be better for those to be implemented in Git natively. + +Being more distributed by design, git-annex does not have the same "locking" semantics as LFS. Locking a file in git-annex means protecting it from changes, so files need to actually be in the "unlocked" state to be editable, which might be counter-intuitive to new users. In general, git-annex has some of those unusual quirks and interfaces that often come with more powerful software. + +And git-annex is much more powerful: it not only addresses the "large-files problem" but goes much further. For example, it supports "partial checkouts" --- downloading only some of the large files. I find that especially useful to manage my video, music, and photo collections, as those are too large to fit on my mobile devices. Git-annex also has support for location tracking, where it knows how many copies of a file exist and where, which is useful for archival purposes. And while Git LFS is only starting to look at transfer protocols other than HTTP, git-annex already supports a [large number][28] through a [special remote protocol][29] that is fairly easy to implement. 
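+
+To make the comparison concrete, a basic git-annex session might look like the following sketch (the repository description and remote name here are illustrative, not from the text above):
+
+```
+$ git init && git annex init "laptop"
+$ git annex add big-video.mp4          # checksum the file; the work tree keeps a symlink/pointer
+$ git commit -m "Add video"
+$ git annex copy big-video.mp4 --to=backup   # send the content to another remote
+$ git annex whereis big-video.mp4      # location tracking: list known copies
+$ git annex drop big-video.mp4         # only succeeds if enough other copies exist
+```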
+ +"Large files" is therefore only scratching the surface of what git-annex can do: I have used it to build an [archival system for remote native communities in northern Québec][30], while others have built a [similar system in Brazil][31]. It's also used by the scientific community in projects like [GIN][32] and [DataLad][33], which manage terabytes of data. Another example is the [Japanese American Legacy Project][34] which manages "upwards of 100 terabytes of collections, transporting them from small cultural heritage sites on USB drives". + +Unfortunately, git-annex is not well supported by hosting providers. GitLab [used to support it][35], but since it implemented Git LFS, it [dropped support for git-annex][36], saying it was a "burden to support". Fortunately, thanks to git-annex's flexibility, it may eventually be possible to treat [LFS servers as just another remote][37] which would make git-annex capable of storing files on those servers again. + +### Conclusion + +Git LFS and git-annex are both mature and well maintained programs that deal efficiently with large files in Git. LFS is easier to use and is well supported by major Git hosting providers, but it's less flexible than git-annex. + +Git-annex, in comparison, allows you to store your content anywhere and espouses Git's distributed nature more faithfully. It also uses all sorts of tricks to save disk space and improve performance, so it should generally be faster than Git LFS. Learning git-annex, however, feels like learning Git: you always feel you are not quite there and you can always learn more. It's a double-edged sword and can feel empowering for some users and terrifyingly hard for others. Where you stand on the "power-user" scale, along with project-specific requirements will ultimately determine which solution is the right one for you. + +Ironically, after thorough evaluation of large-file solutions for the Debian security tracker, I ended up proposing to rewrite history and [split the file by year][38] which improved all performance markers by at least an order of magnitude. As it turns out, keeping history is critical for the security team so any solution that moves large files outside of the Git repository is not acceptable to them. Therefore, before adding large files into Git, you might want to think about organizing your content correctly first. But if large files are unavoidable, the Git LFS and git-annex projects allow users to keep using most of their current workflow. + +> This article [first appeared][39] in the [Linux Weekly News][40]. 
+ +-------------------------------------------------------------------------------- + +via: https://anarc.at/blog/2018-12-21-large-files-with-git/ + +作者:[Anarc.at][a] +选题:[lujun9972][b] +译者:[runningwater](https://github.com/runningwater) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://anarc.at/ +[b]: https://github.com/lujun9972 +[1]: https://anarc.at/blog/ +[2]: https://github.com/git/git/blob/master/Documentation/technical/commit-graph.txt +[3]: https://public-inbox.org/git/Pine.LNX.4.64.0607111010320.5623@g5.osdl.org/ +[4]: https://public-inbox.org/git/alpine.LFD.0.99.0705091422130.24220@xanadu.home/ +[5]: http://caca.zoy.org/ +[6]: http://caca.zoy.org/wiki/git-bigfiles +[7]: https://public-inbox.org/git/7v8vsnz2nc.fsf@alter.siamese.dyndns.org/ +[8]: https://github.com/alebedev/git-media +[9]: https://github.com/alebedev/git-media/issues/15 +[10]: https://github.com/jedbrown/git-fat +[11]: https://blog.github.com/2015-04-08-announcing-git-large-file-storage-lfs/ +[12]: https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-migrate.1.ronn +[13]: https://github.com/git-lfs/git-lfs/wiki/File-Locking +[14]: https://github.com/git-lfs/git-lfs/blob/master/docs/man/git-lfs-prune.1.ronn +[15]: https://github.com/git-lfs/git-lfs/wiki/Limitations +[16]: https://medium.com/@megastep/github-s-large-file-storage-is-no-panacea-for-open-source-quite-the-opposite-12c0e16a9a91 +[17]: https://github.com/git-lfs/lfs-test-server +[18]: https://github.com/git-lfs/git-lfs/wiki/Implementations%0A +[19]: https://github.com/git-lfs/git-lfs/tree/master/docs/api +[20]: https://github.com/git-lfs/git-lfs/blob/master/docs/api/authentication.md +[21]: https://github.com/git-lfs/git-lfs/blob/master/docs/proposals/ssh_adapter.md +[22]: https://tus.io/ +[23]: https://github.com/git-lfs/git-lfs/blob/master/docs/custom-transfers.md +[24]: https://lwn.net/Articles/419241/ +[25]: http://git-annex.branchable.com/direct_mode/ +[26]: https://git-annex.branchable.com/tips/unlocked_files/ +[27]: http://git-annex.branchable.com/todo/git_smudge_clean_interface_suboptiomal/ +[28]: http://git-annex.branchable.com/special_remotes/ +[29]: http://git-annex.branchable.com/special_remotes/external/ +[30]: http://isuma-media-players.readthedocs.org/en/latest/index.html +[31]: https://github.com/RedeMocambos/baobaxia +[32]: https://web.gin.g-node.org/ +[33]: https://www.datalad.org/ +[34]: http://www.densho.org/ +[35]: https://docs.gitlab.com/ee/workflow/git_annex.html +[36]: https://gitlab.com/gitlab-org/gitlab-ee/issues/1648 +[37]: https://git-annex.branchable.com/todo/LFS_API_support/ +[38]: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=908678#52 +[39]: https://lwn.net/Articles/774125/ +[40]: http://lwn.net/ diff --git a/sources/tech/20181222 How to detect automatically generated emails.md b/sources/tech/20181222 How to detect automatically generated emails.md new file mode 100644 index 0000000000..2ccaeddeee --- /dev/null +++ b/sources/tech/20181222 How to detect automatically generated emails.md @@ -0,0 +1,144 @@ +[#]: collector: (lujun9972) +[#]: translator: (wyxplus) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How to detect automatically generated emails) +[#]: via: (https://arp242.net/weblog/autoreply.html) +[#]: author: (Martin Tournoij https://arp242.net/) + +How to detect automatically generated emails +====== + +### How to detect automatically generated emails + + +When you send out an 
auto-reply from an email system, you want to take care not to send replies to automatically generated emails. At best, you will get a useless delivery failure. At worst, you will get an infinite email loop and a world of chaos.
+
+Turns out that reliably detecting automatically generated emails is not always easy. Here are my observations based on writing a detector for this and scanning about 100,000 emails with it (an extensive personal archive and a company archive).
+
+### Auto-submitted header
+
+Defined in [RFC 3834][1].
+
+This is the ‘official’ standard way to indicate your message is an auto-reply. You should **not** send a reply if `Auto-Submitted` is present and has a value other than `no`.
+
+### X-Auto-Response-Suppress header
+
+Defined [by Microsoft][2].
+
+This header is used by Microsoft Exchange, Outlook, and perhaps some other products. Many newsletters and such also set this. You should **not** send a reply if `X-Auto-Response-Suppress` contains `DR` (“Suppress delivery reports”), `AutoReply` (“Suppress auto-reply messages other than OOF notifications”), or `All`.
+
+### List-Id and List-Unsubscribe headers
+
+Defined in [RFC 2919][3].
+
+You usually don’t want to send auto-replies to mailing lists or newsletters. Pretty much all mailing lists and most newsletters set at least one of these headers. You should **not** send a reply if either of these headers is present. The value is unimportant.
+
+### Feedback-ID header
+
+Defined [by Google][4].
+
+Gmail uses this header to identify mail newsletters, and uses it to generate statistics/reports for owners of those newsletters. You should **not** send a reply if this header is present; the value is unimportant.
+
+### Non-standard ways
+
+The above methods are well-defined and clear (even though some are non-standard). Unfortunately some email systems do not use any of them :-( Here are some additional measures.
+
+#### Precedence header
+
+Not really defined anywhere, mentioned in [RFC 2076][5] where its use is discouraged (but this header is commonly encountered).
+
+Note that checking for the existence of this field is not recommended, as some mails use `normal` and some other (obscure) values (this is not very common though).
+
+My recommendation is to **not** send a reply if the value case-insensitively matches `bulk`, `auto_reply`, or `list`.
+
+#### Other obscure headers
+
+A collection of other (somewhat obscure) headers I’ve encountered. I would recommend **not** sending an auto-reply if one of these is set. Most such mails also set one of the above headers, but some don’t (though it’s not very common).
+
+  * `X-MSFBL`; can’t really find a definition (Microsoft header?), but I only have auto-generated mails with this header.
+
+  * `X-Loop`; not really defined anywhere, and somewhat rare, but sometimes it’s set. It’s most often set to the address that should not get emails, but `X-Loop: yes` is also encountered.
+
+  * `X-Autoreply`; fairly rare, and always seems to have a value of `yes`.
+
+#### Email address
+
+Check if the `From` or `Reply-To` headers contain `noreply`, `no-reply`, or `no_reply` (regex: `^no.?reply@`).
+
+#### HTML only
+
+If an email only has an HTML part, but no text part, it’s a good indication this is an auto-generated mail or newsletter. Pretty much all mail clients also set a text part.
+
+#### Delivery failures
+
+Many delivery failure messages don’t really indicate that they’re failures. Some ways to check this:
+
+  * `From` contains `mailer-daemon` or `Mail Delivery Subsystem`
+
+Many mail libraries leave some sort of footprint, and most regular mail clients override this with their own data. Checking for this seems to work fairly well.
+
+  * `X-Mailer: Microsoft CDO for Windows 2000` – Set by some MS software; I can only find it on autogenerated mails. Yes, it’s still used in 2015.
+
+  * `Message-ID` header contains `.JavaMail.` – I’ve found a few (5 on 50k) regular messages with this, but not many; the vast majority (thousands) of messages are newsletters, order confirmations, etc.
+
+  * `^X-Mailer` starts with `PHP`. This should catch both `X-Mailer: PHP/5.5.0` and `X-Mailer: PHPmailer blah blah`. The same as for `JavaMail` applies.
+
+  * `X-Library` presence; only [Indy][6] seems to set this.
+
+  * `X-Mailer` starts with `wdcollect`. Set by some Plesk mails.
+
+  * `X-Mailer` starts with `MIME-tools`.
+
+### Final precaution: limit the number of replies
+
+Even when following all of the above advice, you may still encounter an email program that will slip through. This can be very dangerous, as email systems that simply `IF email THEN send_email` have the potential to cause infinite email loops.
+
+For this reason, I recommend keeping track of which emails you’ve sent an autoreply to and rate limiting this to at most n emails in n minutes. This will break the back-and-forth chain.
+
+We use one email per five minutes, but something less strict will probably also work well.
+
+### What you need to set on your auto-response
+
+The specifics for this will vary depending on what sort of mails you’re sending. This is what we use for auto-reply mails:
+
+```
+Auto-Submitted: auto-replied
+X-Auto-Response-Suppress: All
+Precedence: auto_reply
+```
+
+### Feedback
+
+You can mail me at [martin@arp242.net][7] or [create a GitHub issue][8] for feedback, questions, etc.
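+
+As a closing illustration, here is a rough sketch of how some of the header rules above could be combined in code. This is my own illustrative Go (assuming the standard `net/mail` and `strings` imports), not an existing library, and it deliberately covers only a subset of the rules:
+
+```
+// isAutoGenerated applies a subset of the checks described above.
+func isAutoGenerated(h mail.Header) bool {
+	if v := h.Get("Auto-Submitted"); v != "" && !strings.EqualFold(v, "no") {
+		return true // RFC 3834: any value other than "no"
+	}
+	for _, k := range []string{"List-Id", "List-Unsubscribe", "Feedback-ID"} {
+		if h.Get(k) != "" {
+			return true // mailing-list / newsletter markers
+		}
+	}
+	switch strings.ToLower(h.Get("Precedence")) {
+	case "bulk", "auto_reply", "list":
+		return true
+	}
+	return false
+}
+```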
+
+--------------------------------------------------------------------------------
+
+via: https://arp242.net/weblog/autoreply.html
+
+作者:[Martin Tournoij][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://arp242.net/
+[b]: https://github.com/lujun9972
+[1]: http://tools.ietf.org/html/rfc3834
+[2]: https://msdn.microsoft.com/en-us/library/ee219609(v=EXCHG.80).aspx
+[3]: https://tools.ietf.org/html/rfc2919
+[4]: https://support.google.com/mail/answer/6254652?hl=en
+[5]: http://www.faqs.org/rfcs/rfc2076.html
+[6]: http://www.indyproject.org/index.en.aspx
+[7]: mailto:martin@arp242.net
+[8]: https://github.com/Carpetsmoker/arp242.net/issues/new
diff --git a/sources/tech/20181224 An Introduction to Go.md b/sources/tech/20181224 An Introduction to Go.md
new file mode 100644
index 0000000000..5989b6c913
--- /dev/null
+++ b/sources/tech/20181224 An Introduction to Go.md
@@ -0,0 +1,278 @@
+[#]: collector: (lujun9972)
+[#]: translator: (LazyWolfLin)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (An Introduction to Go)
+[#]: via: (https://blog.jak-linux.org/2018/12/24/introduction-to-go/)
+[#]: author: (Julian Andres Klode https://blog.jak-linux.org/)
+
+An Introduction to Go
+======
+
+(What follows is an excerpt from my master’s thesis, almost all of section 2.1, quickly introducing Go to people familiar with CS)
+
+Go is an imperative programming language for concurrent programming created at and mainly developed by Google, initially mostly by Robert Griesemer, Rob Pike, and Ken Thompson. Design of the language started in 2007, and an initial version was released in 2009; the first stable version, 1.0, was released in 2012.
+
+Go has a C-like syntax (without a preprocessor), garbage collection, and, like its predecessors developed at Bell Labs – Newsqueak (Rob Pike), Alef (Phil Winterbottom), and Inferno (Pike, Ritchie, et al.) – provides built-in support for concurrency using so-called goroutines and channels, a form of co-routines, based on the idea of Hoare’s ‘Communicating Sequential Processes’.
+
+Go programs are organised in packages. A package is essentially a directory containing Go files. All files in a package share the same namespace, and there are two visibilities for symbols in a package: Symbols starting with an upper case character are visible to other packages, others are private to the package:
+
+```
+func PublicFunction() {
+	fmt.Println("Hello world")
+}
+
+func privateFunction() {
+	fmt.Println("Hello package")
+}
+```
+
+### Types
+
+Go has a fairly simple type system: There is no subtyping (but there are conversions), no generics, no polymorphic functions, and there are only a few basic categories of types:
+
+  1. base types: `int`, `int64`, `int8`, `uint`, `float32`, `float64`, etc.
+
+  2. `struct`
+
+  3. `interface` \- a set of methods
+
+  4. `map[K]V` \- a map from a key type to a value type
+
+  5. `[number]Type` \- an array of some element type
+
+  6. `[]Type` \- a slice (pointer to an array, with length and capacity) of some type
+
+  7. `chan Type` \- a thread-safe queue
+
+  8. pointer `*T` to some other type
+
+  9. functions
+
+  10. named type - aliases for other types that may have associated methods:
+
+```
+type T struct { foo int }
+type TPtr *T
+type TNum int
+```
+
+Named types are mostly distinct from their underlying types, so you cannot assign them to each other, but some operators like `+` do work on objects of named types with an underlying numerical type (so you could add two `TNum` values in the example above).
+
+Maps, slices, and channels are reference-like types - they essentially are structs containing pointers. Other types are passed by value (copied), including arrays (which have a fixed length and are copied).
+
+#### Conversions
+
+Conversions are similar to casts in C and other languages. They are written like this:
+
+```
+TypeName(value)
+```
+
+#### Constants
+
+Go has “untyped” literals and constants.
+
+```
+1 // untyped integer literal
+const foo = 1 // untyped integer constant
+const foo int = 1 // int constant
+```
+
+Untyped values are classified into the following categories: `UntypedBool`, `UntypedInt`, `UntypedRune`, `UntypedFloat`, `UntypedComplex`, `UntypedString`, and `UntypedNil` (Go calls them basic kinds; other basic kinds are available for the concrete types like `uint8`). An untyped value can be assigned to a named type derived from a base type; for example:
+
+```
+type someType int
+
+const untyped = 2 // UntypedInt
+const bar someType = untyped // OK: untyped can be assigned to someType
+const typed int = 2 // int
+const bar2 someType = typed // error: int cannot be assigned to someType
+```
+
+### Interfaces and ‘objects’
+
+As mentioned before, interfaces are a set of methods. Go is not an object-oriented language per se, but it has some support for associating methods with named types: When declaring a function, a receiver can be provided - a receiver is an additional function argument that is passed before the function and involved in the function lookup, like this:
+
+```
+type SomeType struct { ... }
+
+func (s *SomeType) MyMethod() {
+}
+
+func main() {
+	var s SomeType
+	s.MyMethod()
+}
+```
+
+An object implements an interface if it implements all methods; for example, the following interface `MyMethoder` is implemented by `*SomeType` (note the pointer), and values of `*SomeType` can thus be used as values of `MyMethoder`. The most basic interface is `interface{}`, that is an interface with an empty method set - any object satisfies that interface.
+
+```
+type MyMethoder interface {
+	MyMethod()
+}
+```
+
+There are some restrictions on valid receiver types; for example, while a named type could be a pointer (for example, `type MyIntPointer *int`), such a type is not a valid receiver type.
+
+### Control flow
+
+Go provides three primary statements for control flow: `if`, `switch`, and `for`. The statements are fairly similar to their equivalents in other C-like languages, with some exceptions:
+
+  * There are no parentheses around conditions, so it is `if a == b {}`, not `if (a == b) {}`. The braces are mandatory.
+
+  * All of them can have initialisers, like this
+
+`if result, err := someFunction(); err == nil { // use result }`
+
+  * The `switch` statement can use arbitrary expressions in cases
+
+  * The `switch` statement can switch over nothing (equals switching over true)
+
+  * Cases do not fall through by default (no `break` needed); use `fallthrough` at the end of a block to fall through.
+
+  * The `for` loop can loop over ranges: `for key, val := range map { do something }`
+
+
+
+### Goroutines
+
+The keyword `go` spawns a new goroutine, a concurrently executed function. It can be used with any function call, even a function literal:
+
+```
+func main() {
+    ...
+    go func() {
+        ...
+    }()
+
+    go some_function(some_argument)
+}
+```
+
+### Channels
+
+Goroutines are often combined with channels to provide an extended form of Communicating Sequential Processes. A channel is a concurrent-safe queue, and can be buffered or unbuffered:
+
+```
+var unbuffered = make(chan int) // sending blocks until value has been read
+var buffered = make(chan int, 5) // may have up to 5 unread values queued
+```
+
+The `<-` operator is used to communicate with a single channel.
+
+```
+valueReadFromChannel := <- channel
+otherChannel <- valueToSend
+```
+
+The `select` statement allows communication with multiple channels:
+
+```
+select {
+  case incoming := <- inboundChannel:
+    // A new message for me
+  case outgoingChannel <- outgoing:
+    // Could send a message, yay!
+}
+```
+
+### The `defer` statement
+
+Go provides a `defer` statement that allows a function call to be scheduled for execution when the function exits. It can be used for resource clean-up, for example:
+
+```
+func myFunc(someFile io.ReadCloser) {
+	defer someFile.Close()
+	/* Do stuff with file */
+}
+```
+
+It is of course possible to use function literals as the function to call, and any variables can be used as usual when writing the call.
+
+### Error handling
+
+Go does not provide exceptions or structured error handling. Instead, it handles errors by returning them in a second or later return value:
+
+```
+func Read(p []byte) (n int, err error)
+
+// Built-in type:
+type error interface {
+	Error() string
+}
+```
+
+Errors have to be checked in the code, or can be assigned to `_`:
+
+```
+n0, _ := Read(buffer) // ignore error
+n, err := Read(buffer)
+if err != nil {
+	return err
+}
+```
+
+There are two functions to quickly unwind and recover the call stack, though: `panic()` and `recover()`. When `panic()` is called, the call stack is unwound, and any deferred functions are run as usual. When a deferred function invokes `recover()`, the unwinding stops, and the value given to `panic()` is returned. If we are unwinding normally and not due to a panic, `recover()` simply returns `nil`. In the example below, a function is deferred and any `error` value that is given to `panic()` will be recovered and stored in an error return value. Libraries sometimes use that approach to make highly recursive code like parsers more readable, while still maintaining the usual error return value for public functions.
+
+```
+func Function() (err error) {
+	defer func() {
+		s := recover()
+		switch s := s.(type) { // type switch
+		case nil:
+			// no panic: nothing to recover
+		case error:
+			err = s // s has type error now
+		default:
+			panic(s)
+		}
+	}() // note the (): the deferred literal must actually be called
+	// ... code that may call panic(someError) ...
+	return err
+}
+```
+
+### Arrays and slices
+
+As mentioned before, an array is a value type and a slice is a pointer into an array, created either by slicing an existing array or by using `make()` to create a slice, which will create an anonymous array to hold the elements.
+ +``` +slice1 := make([]int, 2, 5) // 5 elements allocated, 2 initialized to 0 +slice2 := array[:] // sliced entire array +slice3 := array[1:] // slice of array without first element +``` + +There are some more possible combinations for the slicing operator than mentioned above, but this should give a good first impression. + +A slice can be used as a dynamically growing array, using the `append()` function. + +``` +slice = append(slice, value1, value2) +slice = append(slice, arrayOrSlice...) +``` + +Slices are also used internally to represent variable parameters in variable length functions. + +### Maps + +Maps are simple key-value stores and support indexing and assigning. They are not thread-safe. + +``` +someValue := someMap[someKey] +someValue, ok := someMap[someKey] // ok is false if key not in someMap +someMap[someKey] = someValue +``` +-------------------------------------------------------------------------------- + +via: https://blog.jak-linux.org/2018/12/24/introduction-to-go/ + +作者:[Julian Andres Klode][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.jak-linux.org/ +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20181224 Turn GNOME to Heaven With These 23 GNOME Extensions.md b/sources/tech/20181224 Turn GNOME to Heaven With These 23 GNOME Extensions.md new file mode 100644 index 0000000000..e49778eab7 --- /dev/null +++ b/sources/tech/20181224 Turn GNOME to Heaven With These 23 GNOME Extensions.md @@ -0,0 +1,288 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Turn GNOME to Heaven With These 23 GNOME Extensions) +[#]: via: (https://fosspost.org/tutorials/turn-gnome-to-heaven-with-these-23-gnome-extensions) +[#]: author: (M.Hanny Sabbagh https://fosspost.org/author/mhsabbagh) + +Turn GNOME to Heaven With These 23 GNOME Extensions +====== + +GNOME Shell is one of the most used desktop interfaces on the Linux desktop. It’s part of the GNOME project and is considered to be the next generation of the old classic GNOME 2.x interface. GNOME Shell was first released in 2011 carrying a lot of features, including GNOME Shell extensions feature. + +GNOME Extensions are simply extra functionality that you can add to your interface, they can be panel extensions, performance extensions, quick access extensions, productivity extensions or for any other type of usage. They are all free and open source of course; you can install them with a single click **from your web browser** actually. + +### How To Install GNOME Extensions? + +You main way to install GNOME extensions will be via the extensions.gnome.org website. It’s an official platform belonging to GNOME where developers publish their extensions easily so that users can install them in a single click. + +In order to for this to work, you’ll need two things: + + 1. Browser Add-on: You’ll need to install a browser add-on that allows the website to communicate with your local GNOME desktop. You install it from [here for Firefox][1], or [here for Chrome][2] or [here for Opera][3]. + + 2. Native Connector: You still need another part to allow your system to accept installing files locally from your web browser. To install this component, you must install the `chrome-gnome-shell` package. Do not be deceived! Although the package name is containing “chrome”, it also works on Firefox too. 
To install it on Debian/Ubuntu/Mint, run the following command in a terminal:

```
sudo apt install chrome-gnome-shell
```

For Fedora:

```
sudo dnf install chrome-gnome-shell
```

For Arch:

```
sudo pacman -S chrome-gnome-shell
```

After you have installed the two components above, you can easily install extensions from the GNOME extensions website.

### How to Configure GNOME Extensions Settings?

Many of these extensions have a settings window that you can use to adjust the extension's preferences. It is worth opening an extension's options at least once, so that you know everything it can do for you.

To do this, head to the [installed extensions page on the GNOME website][4], and you'll see a small options button near every extension that offers one:

![Screenshot 2018 12 24 20 50 55 41][5]

Clicking it will display a window from which you can see the possible settings:

![Screenshot 2018 12 24 20 51 29 43][6]
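One aside that the extensions website does not mention: the list of enabled extensions is stored in a gsettings key, so you can also inspect or change it from a terminal. The key below is the standard one used by GNOME Shell; the UUID in the second command is only an example (it belongs to the User Themes extension described further down), so substitute the UUIDs actually installed on your system:

```
# List the currently enabled extensions (an array of UUIDs)
gsettings get org.gnome.shell enabled-extensions

# Overwrite the list to enable a specific set of extensions
gsettings set org.gnome.shell enabled-extensions \
  "['user-theme@gnome-shell-extensions.gcampax.github.com']"
```

This can be handy for scripting a fresh installation, or for recovering when a misbehaving extension makes the Shell UI hard to use.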
Read on below for our list of recommended extensions!

### General Extensions

#### 1\. User Themes

![Screenshot from 2018 12 23 12 30 20 45][7]

This is the first must-install extension on the GNOME Shell interface; it simply allows you to change the desktop theme using the Tweak tool. After installation, run gnome-tweak-tool and you'll be able to change your desktop theme.

Installation link:

#### 2\. Dash to Panel

![Screenshot from 2018 12 24 21 16 11 47][8]

Converts the GNOME top bar into a taskbar with many added features, such as favorite icons, moving the clock to the right, and adding currently open windows to the panel. (Make sure not to install this one together with other extensions below that provide the same functionality.)

Installation link:

#### 3\. Desktop Icons

![gnome shell screenshot SSP3UZ 49][9]

Restores desktop icons to GNOME. Still in continuous development.

Installation link:

#### 4\. Dash to Dock

![Screenshot from 2018 12 24 21 50 07 51][10]

If you are a fan of the Unity interface, then this extension may help you. It simply adds a dock, very similar to Unity's, to the left or right side of the screen. You can customize that dock however you like.

Installation link:

### Productivity Extensions

#### 5\. Todo.txt

![screenshot_570_5X5YkZb][11]

If you like to stay productive, you can use this extension to add simple to-do list functionality to your desktop. It uses the [syntax][12] from todotxt.com; you can add unlimited to-dos, mark them as complete or remove them, and change their position, besides modifying or backing up the todo.txt file manually.

Installation link:

#### 6\. Screenshot Tool

![Screenshot from 2018 12 24 21 04 14 54][13]

Easily take a screenshot of your desktop or a specific area, with the possibility of auto-uploading it to imgur.com and auto-saving the link to the clipboard! A very useful extension.

Installation link:

#### 7\. OpenWeather

![screenshot_750][14]

If you would like to check the weather forecast every day, then this extension is the right one for you. It simply adds an applet to the top panel that fetches weather data from openweathermap.org or forecast.io; it supports all countries and cities around the world, and it also shows the wind and humidity.

Installation link:

#### 8 & 9\. Search Providers Extensions

![Screenshot from 2018 12 24 21 29 41 57][15]

In GNOME, you can add what's known as “search providers” to the shell, meaning that when you type something in the search box, you'll automatically be able to search those websites (the search providers) using the same text you entered, and see the results directly from your shell!

YouTube Search Provider:

Wikipedia Search Provider:

### Workflow Extensions

#### 10\. No Title Bar

![Screenshot 20181224210737 59][16]

This extension simply removes the title bar from all maximized windows and moves its contents into the top GNOME panel. This way, you save a complete horizontal strip of your screen, leaving more space for your work!

Installation Link:

#### 11\. Applications Menu

![Screenshot 2018 12 23 13 58 07 61][17]

This extension simply adds a classic applications menu next to the “Activities” button in the corner. By using it, you can browse the installed applications and categories without needing to use the dash or the search feature, which saves you time. (Check the “No Topleft Hot Corner” extension below for a better experience.)

Installation link:

#### 12\. Places Status Indicator

![screenshot_8_1][18]

This indicator places itself near the Activities button. It allows you to access your home folder and sub-folders easily using a menu, and you can also browse the available devices and networks with it.

Installation link:

#### 13\. Window List

![Screenshot from 2016-08-12 08-05-48][19]

Officially supported by the GNOME team, this extension adds a bottom panel to the desktop which allows you to navigate between open windows easily; it also includes a workspace indicator to switch between workspaces.

Installation link:

#### 14\. Frippery Panel Favorites

![screenshot_4][20]

This extension adds your favorite applications and programs to the panel near the Activities button, allowing you to launch them more quickly with just one click. To add or remove applications, simply modify your favorites (the same applications that appear in the left panel when you click the Activities button will appear here).

Installation link:

#### 15\. TopIcons

![Screenshot 20181224211009 66][21]

These extensions restore the system tray to the top GNOME panel. Very much needed in cases where applications depend heavily on a tray icon.

For GNOME 3.28, installation link:

For GNOME 3.30, installation link:

#### 16\. Clipboard Indicator

![Screenshot 20181224214626 68][22]

A clipboard manager is simply an application that manages all the copy and paste operations you do on your system and saves them into a history, so that you can access them later whenever you want.

This extension does exactly that, plus many other cool features that you can explore.

Installation link:

### Other Extensions

#### 17\. Frippery Move Clock

![screenshot_2][23]

If you are one of those people who like alignment a lot and prefer the panel divided into just two parts, then you may like this extension. What it simply does is move the clock from the middle of the GNOME Shell panel to the right, near the other applets on the panel, which makes the panel look more organized.

Installation link:
#### 18\. No Topleft Hot Corner

If you don't like the dash opening whenever you move the mouse to the top-left corner, you can disable that behavior easily using this extension. You can, of course, still click the Activities button if you want to open the dash view (or press the Super key on the keyboard); only the hot corner is disabled.

Installation link:

#### 19\. No Annoyance

Simply removes the “window is ready” notification each time a new window is opened.

Installation link:

#### 20\. EasyScreenCast

![Screenshot 20181224214219 71][24]

If you would like to quickly take a screencast of your desktop, then this extension may help you. By simply choosing the type of recording you want, you'll be able to take screencasts at any time. You can also configure advanced options for the extension, such as the pipeline and many other things.

Installation link:

#### 21\. Removable Drive Menu

![Screenshot 20181224214131 73][25]

Adds an icon to the top bar which shows you a list of your currently attached removable drives.

Installation link:

#### 22\. BottomPanel

![Screenshot 20181224214419 75][26]

As its title says, it simply moves the top GNOME bar to the bottom of the screen.

Installation link:

#### 23\. Unite

If you would like a single extension to do most of the above tasks, then the Unite extension can help you. It adds panel favorites, removes the title bar, moves the clock, allows you to change the location of the panel, and offers many other features, all from this one extension!

Installation link:

### Conclusion

This was our list of some great GNOME Shell extensions to try out. Of course, you don't (and shouldn't!) install all of these; pick just what you need for your own usage. As you can see, you can convert GNOME into almost any form you would like, but be careful about RAM usage (the more extensions you use, the more resources the shell will consume).

What other GNOME Shell extensions do you use? What do you think of this list?
+ + +-------------------------------------------------------------------------------- + +via: https://fosspost.org/tutorials/turn-gnome-to-heaven-with-these-23-gnome-extensions + +作者:[M.Hanny Sabbagh][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fosspost.org/author/mhsabbagh +[b]: https://github.com/lujun9972 +[1]: https://addons.mozilla.org/en/firefox/addon/gnome-shell-integration/ +[2]: https://chrome.google.com/webstore/detail/gnome-shell-integration/gphhapmejobijbbhgpjhcjognlahblep +[3]: https://addons.opera.com/en/extensions/details/gnome-shell-integration/ +[4]: https://extensions.gnome.org/local/ +[5]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot_2018-12-24_20-50-55.png?resize=850%2C359&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 42) +[6]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot_2018-12-24_20-51-29.png?resize=850%2C462&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 44) +[7]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-23-12-30-20.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 46) +[8]: https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-16-11.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 48) +[9]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/gnome-shell-screenshot-SSP3UZ.png?resize=850%2C492&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 50) +[10]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-50-07.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 52) +[11]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_570_5X5YkZb.png?resize=478%2C474&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 53) +[12]: https://github.com/ginatrapani/todo.txt-cli/wiki/The-Todo.txt-Format +[13]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-04-14.png?resize=715%2C245&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 55) +[14]: https://i2.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_750.jpg?resize=648%2C276&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 56) +[15]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-from-2018-12-24-21-29-41.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 58) +[16]: https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224210737-380x95.png?resize=380%2C95&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 60) +[17]: https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot_2018-12-23_13-58-07.png?resize=524%2C443&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 62) +[18]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_8_1.png?resize=247%2C620&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 63) +[19]: https://i1.wp.com/fosspost.org/wp-content/uploads/2016/08/Screenshot-from-2016-08-12-08-05-48.png?resize=850%2C478&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 64) +[20]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_4.png?resize=414%2C39&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 65) +[21]: 
https://i0.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224211009-631x133.png?resize=631%2C133&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 67)
[22]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214626-520x443.png?resize=520%2C443&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 69)
[23]: https://i0.wp.com/fosspost.org/wp-content/uploads/2016/08/screenshot_2.png?resize=388%2C26&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 70)
[24]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214219-327x328.png?resize=327%2C328&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 72)
[25]: https://i1.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214131-366x199.png?resize=366%2C199&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 74)
[26]: https://i2.wp.com/fosspost.org/wp-content/uploads/2018/12/Screenshot-20181224214419-830x143.png?resize=830%2C143&ssl=1 (Turn GNOME to Heaven With These 23 GNOME Extensions 76)
diff --git a/sources/tech/20181226 -Review- Polo File Manager in Linux.md b/sources/tech/20181226 -Review- Polo File Manager in Linux.md
new file mode 100644
index 0000000000..cf763850cf
--- /dev/null
+++ b/sources/tech/20181226 -Review- Polo File Manager in Linux.md
@@ -0,0 +1,139 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: ([Review] Polo File Manager in Linux)
[#]: via: (https://itsfoss.com/polo-file-manager/)
[#]: author: (John Paul https://itsfoss.com/author/john/)

[Review] Polo File Manager in Linux
======

We are all familiar with file managers. It's that piece of software that allows you to access your directories and files in a GUI.

Most of us use the default file manager included with our desktop of choice. The creator of [Polo][1] hopes to get you to use his file manager by adding extra features, but hides the good ones behind a paywall.

![][2]Polo file manager

### What is Polo file manager?

According to its [website][1], Polo is an “advanced file manager for Linux written in [Vala][3]”. Further down the page, Polo is referred to as a “modern, light-weight file manager for Linux with support for multiple panes and tabs; support for archives, and much more.”

It is from the same developer (Tony George) who has given us some of the most popular applications for desktop Linux: the [Timeshift backup][4] tool, [Conky Manager][5], the [Aptik][6] backup tool for applications, and more. Polo is the latest offering from Tony.

Note that Polo is still in the beta stage of development, which means the first stable version of the software is not out yet.

### Features of Polo file manager

![Polo File Manager in Ubuntu Linux][7]Polo File Manager in Ubuntu Linux

It's true that Polo has a bunch of neat features that most file managers don't have. However, the really neat features are only available if you donate more than $10 to the project or sign up for the creator's Patreon. I will be separating the free features from the features that require the “donation plugin”.

![Cloud storage support in Polo file manager][8]Support for cloud storage

#### Free Features

 * Multiple Panes – Single-pane, dual-pane (vertical or horizontal split) and quad-pane layouts.
 * Multiple Views – List view, Icon view, Tiled view, and Media view
 * Device Manager – Devices popup displays the list of connected devices with options to mount and unmount
 * Archive Support – Support for browsing archives as normal folders. Supports creation of archives in multiple formats with advanced compression settings.
 * Checksum & Hashing – Generate and compare MD5, SHA1, SHA2-256 and SHA2-512 checksums
 * Built-in [Fish shell][9]
 * Support for [cloud storage][10], such as Dropbox, Google Drive, Amazon Drive, Amazon S3, Backblaze B2, Hubi, Microsoft OneDrive, OpenStack Swift, and Yandex Disk
 * Compare files
 * Analyses disk usage
 * KVM support
 * Connect to FTP, SFTP, SSH and Samba servers

![Dual pane view of Polo file manager][11]Polo in dual pane view

#### Donation/Paywall Features

 * Write ISO to USB Device
 * Image optimization and adjustment tools
   * Optimize PNG
   * Reduce JPEG Quality
   * Remove Color
   * Reduce Color
   * Boost Color
   * Set as Wallpaper
   * Rotate
   * Resize
   * Convert to PNG, JPEG, TIFF, BMP, ICO and more
 * PDF tools
   * Split
   * Merge
   * Add and Remove Password
   * Reduce File Size
   * Uncompress
   * Remove Colors
   * Rotate
   * Optimize
 * Video Download via [youtube-dl][12]

### Installing Polo

Let's see how to install the Polo file manager on various Linux distributions.

#### 1\. Ubuntu based distributions

For all Ubuntu based systems (Ubuntu, Linux Mint, Elementary OS, etc.), you can install Polo via the [official PPA][13]. Not sure what a PPA is? [Read about PPAs here][14].

`sudo apt-add-repository -y ppa:teejee2008/ppa`
`sudo apt-get update`
`sudo apt-get install polo-file-manager`

#### 2\. Arch based distributions

For all Arch-based systems (Arch, Manjaro, ArchLabs, etc.), you can install Polo from the [Arch User Repository][15].

#### 3\. Other Distros

For all other distros, you can download and use the [.RUN installer][16] to set up Polo.

### Thoughts on Polo

I've installed tons of different distros and never had a problem with the default file manager. (I've probably used Thunar and Caja the most.) The free version of Polo doesn't contain any features that would make me switch. As for the paid features, I already use a number of applications that accomplish the same things.

One final note: the paid version of Polo is supposed to help fund development of the project. However, [according to GitHub][17], the last commit on Polo was three months ago. That's quite a big interval of inactivity for software that is still in the beta stage of development.

Have you ever used [Polo][1]? If not, what is your favorite Linux file manager? Let us know in the comments below.

If you found this article interesting, please take a minute to share it on social media, Hacker News or [Reddit][18].
+ +-------------------------------------------------------------------------------- + +via: https://itsfoss.com/polo-file-manager/ + +作者:[John Paul][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://itsfoss.com/author/john/ +[b]: https://github.com/lujun9972 +[1]: https://teejee2008.github.io/polo/ +[2]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo.jpg?fit=800%2C500&ssl=1 +[3]: https://en.wikipedia.org/wiki/Vala_(programming_language +[4]: https://itsfoss.com/backup-restore-linux-timeshift/ +[5]: https://itsfoss.com/conky-gui-ubuntu-1304/ +[6]: https://github.com/teejee2008/aptik +[7]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo-file-manager-in-ubuntu.jpeg?resize=800%2C450&ssl=1 +[8]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo-coud-options.jpg?fit=800%2C795&ssl=1 +[9]: https://fishshell.com/ +[10]: https://itsfoss.com/cloud-services-linux/ +[11]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/polo-dual-pane.jpg?fit=800%2C520&ssl=1 +[12]: https://itsfoss.com/download-youtube-linux/ +[13]: https://launchpad.net/~teejee2008/+archive/ubuntu/ppa +[14]: https://itsfoss.com/ppa-guide/ +[15]: https://aur.archlinux.org/packages/polo +[16]: https://github.com/teejee2008/polo/releases +[17]: https://github.com/teejee2008/polo +[18]: http://reddit.com/r/linuxusersgroup diff --git a/sources/tech/20181227 Asciinema - Record And Share Your Terminal Sessions On The Fly.md b/sources/tech/20181227 Asciinema - Record And Share Your Terminal Sessions On The Fly.md new file mode 100644 index 0000000000..20bcfbe26d --- /dev/null +++ b/sources/tech/20181227 Asciinema - Record And Share Your Terminal Sessions On The Fly.md @@ -0,0 +1,312 @@ +[#]: collector: (lujun9972) +[#]: translator: (bestony) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Asciinema – Record And Share Your Terminal Sessions On The Fly) +[#]: via: (https://www.2daygeek.com/linux-asciinema-record-your-terminal-sessions-share-them-on-web/) +[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/) + +Asciinema – Record And Share Your Terminal Sessions On The Fly +====== + +This is known topic and we had already written so many article about this topic. + +Even today also we are going to discuss about the same topic. + +Other tools are works locally but Asciinema works in both way like local and web. + +I mean we can share the recording on the web. + +By default everyone prefer history command to review/recall the previously entered commands in terminal. + +But unfortunately, that shows only the commands that we ran and doesn’t shows the commands output which was performed previously. + +There are many utilities available in Linux to record the terminal session activity. + +Also, we had written about few utilities in the past and today also we are going to discuss about the same kind of topic. + +If you would like to check other utilities to record your Linux terminal session activity then you can give a try to **[Script Command][1]** , **[Terminalizer Tool][2]** and **[Asciinema Tool][3]**. + +But if you are looking for **[GIF Recorder][4]** then try **[Gifine][5]** , **[Kgif][6]** and **[Peek][7]** utilities. + +### What is Asciinema + +asciinema is a free and open source solution for recording terminal sessions and sharing them on the web. 
When you run asciinema rec in your terminal, the recording starts, capturing all output that is printed to your terminal while you are issuing shell commands.

When the recording finishes (by hitting `Ctrl-D` or typing `exit`), the captured output is uploaded to the asciinema.org website and prepared for playback on the web.

The Asciinema project is built of several complementary pieces, such as the asciinema command line tool, the API at asciinema.org, and a JavaScript player.

Asciinema was inspired by the script and scriptreplay commands.

### How to Install Asciinema In Linux

It is written in Python, and installation via pip is the recommended method to install Asciinema on Linux.

Make sure you have installed the python-pip package on your system. If not, use the following command to install it.

For Debian/Ubuntu users, use **[Apt Command][8]** or **[Apt-Get Command][9]** to install the pip package.

```
$ sudo apt install python-pip
```

For Archlinux users, use **[Pacman Command][10]** to install the pip package.

```
$ sudo pacman -S python-pip
```

For Fedora users, use **[DNF Command][11]** to install the pip package.

```
$ sudo dnf install python-pip
```

For CentOS/RHEL users, use **[YUM Command][12]** to install the pip package.

```
$ sudo yum install python-pip
```

For openSUSE users, use **[Zypper Command][13]** to install the pip package.

```
$ sudo zypper install python-pip
```

Finally, run the following **[pip command][14]** to install the Asciinema tool on Linux.

```
$ sudo pip3 install asciinema
```

### How to Record Your Terminal Session Using Asciinema

Once you have successfully installed Asciinema, just run the following command to start recording.

```
$ asciinema rec 2g-test
asciinema: recording asciicast to 2g-test
asciinema: press "ctrl-d" or type "exit" when you're done
```
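As a side note, recent asciinema releases also accept a few useful options at recording time. The `-t` and `-i` flags below exist in asciinema 2.x; the title text and the 2-second idle limit are example values:

```
# Give the recording a title and trim idle periods longer than 2 seconds
$ asciinema rec -t "My demo" -i 2 2g-test
```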
+ +``` +$ free + total used free shared buff/cache available +Mem: 15867 2783 10537 1264 2546 11510 +Swap: 17454 0 17454 + +$ hostnamectl + Static hostname: daygeek-Y700 + Icon name: computer-laptop + Chassis: laptop + Machine ID: 31bdeb7b833547368d230a2025d475bc + Boot ID: c84f7e6f39394d1f8fdc4bcaa251aee2 + Operating System: Manjaro Linux + Kernel: Linux 4.19.8-2-MANJARO + Architecture: x86-64 + +$ uname -a +Linux daygeek-Y700 4.19.8-2-MANJARO #1 SMP PREEMPT Sat Dec 8 14:45:36 UTC 2018 x86_64 GNU/Linux + +$ lscpu +Architecture: x86_64 +CPU op-mode(s): 32-bit, 64-bit +Byte Order: Little Endian +Address sizes: 39 bits physical, 48 bits virtual +CPU(s): 8 +On-line CPU(s) list: 0-7 +Thread(s) per core: 2 +Core(s) per socket: 4 +Socket(s): 1 +NUMA node(s): 1 +Vendor ID: GenuineIntel +CPU family: 6 +Model: 94 +Model name: Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz +Stepping: 3 +CPU MHz: 800.047 +CPU max MHz: 3500.0000 +CPU min MHz: 800.0000 +BogoMIPS: 5186.00 +Virtualization: VT-x +L1d cache: 32K +L1i cache: 32K +L2 cache: 256K +L3 cache: 6144K +NUMA node0 CPU(s): 0-7 +Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp flush_l1d +``` + +Once you have done, simple press `CTRL+D` or type `exit` to stop the recording. The result will be saved in the same directory. + +``` +$ exit +exit +asciinema: recording finished +asciinema: asciicast saved to 2g-test +``` + +If you would like to save the output in the different directory then mention the path where you want to save the file. + +``` +$ asciinema rec /opt/session-record/2g-test1 +``` + +We can play the recorded session using the following command. + +``` +$ asciinema play 2g-test +``` + +We can play the recorded session with double speed. + +``` +$ asciinema play -s 2 2g-test +``` + +Alternatively we can play the recorded session with normal speed with idle time limited to 2 seconds. + +``` +$ asciinema play -i 2 2g-test +``` + +### How To Share the Recorded Session on The Web + +If you would like to share the recorded session with your friends, just run the following command which upload the recording to asciinema.org and provide you the unique link. + +It will be automatically archived 7 days after upload. + +``` +$ asciinema upload 2g-test +View the recording at: + + https://asciinema.org/a/jdJrxhDLboeyrhzZRHsve0x8i + +This installation of asciinema recorder hasn't been linked to any asciinema.org +account. All unclaimed recordings (from unknown installations like this one) +are automatically archived 7 days after upload. 
If you want to preserve all recordings made on this machine, connect this
installation with asciinema.org account by opening the following link:

 https://asciinema.org/connect/10cd4f24-45b6-4f64-b737-ae0e5d12baf8
```

![][16]

If you would like to share the recorded session on social media, just click the `Share` button at the bottom of the page.

If anyone wants to download the recording, just click the `Download` button at the bottom of the page to save it on your system.

### How to Manage Recordings on the asciinema.org Site

If you want to preserve all recordings made on this machine, connect this installation with an asciinema.org account by opening the following link and following the instructions.

```
https://asciinema.org/connect/10cd4f24-45b6-4f64-b737-ae0e5d12baf8
```

If you have already recorded an asciicast but don't see it in your profile on the asciinema.org website, just run the `asciinema auth` command in your terminal to claim it.

```
$ asciinema auth

Open the following URL in a web browser to link your install ID with your asciinema.org user account:

https://asciinema.org/connect/10cd4f24-45b6-4f64-b737-ae0e5d12baf8

This will associate all recordings uploaded from this machine (past and future ones) to your account, and allow you to manage them (change title/theme, delete) at asciinema.org.
```

![][17]

Run the following command if you would like to upload the file directly to asciinema.org instead of saving it locally.

```
$ asciinema rec
asciinema: recording asciicast to /tmp/tmp6kuh4247-ascii.cast
asciinema: press "ctrl-d" or type "exit" when you're done
```

Just run the following command to start recording.

```
$ asciinema rec 2g-test
asciinema: recording asciicast to 2g-test
asciinema: press "ctrl-d" or type "exit" when you're done
```

For testing purposes, run a few commands and see whether everything works fine.

```
$ free
 total used free shared buff/cache available
Mem: 15867 2783 10537 1264 2546 11510
Swap: 17454 0 17454

$ hostnamectl
 Static hostname: daygeek-Y700
 Icon name: computer-laptop
 Chassis: laptop
 Machine ID: 31bdeb7b833547368d230a2025d475bc
 Boot ID: c84f7e6f39394d1f8fdc4bcaa251aee2
 Operating System: Manjaro Linux
 Kernel: Linux 4.19.8-2-MANJARO
 Architecture: x86-64

$ uname -a
Linux daygeek-Y700 4.19.8-2-MANJARO #1 SMP PREEMPT Sat Dec 8 14:45:36 UTC 2018 x86_64 GNU/Linux
```

Once you are done, simply press `CTRL+D` or type `exit` to stop the recording, then hit the `Enter` key to upload the recording to the asciinema.org website.

It will take a few seconds to generate the unique URL for your uploaded recording. Once it is done, you will see results like those below.
+ +``` +$ exit +exit +asciinema: recording finished +asciinema: press "enter" to upload to asciinema.org, "ctrl-c" to save locally + +View the recording at: + + https://asciinema.org/a/b7bu5OhuCy2vUH7M8RRPjsSxg +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-asciinema-record-your-terminal-sessions-share-them-on-web/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/script-command-record-save-your-terminal-session-activity-linux/ +[2]: https://www.2daygeek.com/terminalizer-a-tool-to-record-your-terminal-and-generate-animated-gif-images/ +[3]: https://www.2daygeek.com/Asciinema-record-your-terminal-sessions-as-svg-animations-in-linux/ +[4]: https://www.2daygeek.com/category/gif-recorder/ +[5]: https://www.2daygeek.com/gifine-create-animated-gif-vedio-recorder-linux-mint-debian-ubuntu/ +[6]: https://www.2daygeek.com/kgif-create-animated-gif-file-active-window-screen-recorder-capture-arch-linux-mint-fedora-ubuntu-debian-opensuse-centos/ +[7]: https://www.2daygeek.com/peek-create-animated-gif-screen-recorder-capture-arch-linux-mint-fedora-ubuntu/ +[8]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[9]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[10]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[11]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[12]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[13]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[14]: https://www.2daygeek.com/install-pip-manage-python-packages-linux/ +[15]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[16]: https://www.2daygeek.com/wp-content/uploads/2018/12/linux-asciinema-record-your-terminal-sessions-share-web-1.png +[17]: https://www.2daygeek.com/wp-content/uploads/2018/12/linux-asciinema-record-your-terminal-sessions-share-web-3.png diff --git a/sources/tech/20181227 Linux commands for measuring disk activity.md b/sources/tech/20181227 Linux commands for measuring disk activity.md new file mode 100644 index 0000000000..badda327dd --- /dev/null +++ b/sources/tech/20181227 Linux commands for measuring disk activity.md @@ -0,0 +1,252 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Linux commands for measuring disk activity) +[#]: via: (https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html) +[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/) + +Linux commands for measuring disk activity +====== +![](https://images.idgesg.net/images/article/2018/12/tape-measure-100782593-large.jpg) +Linux systems provide a handy suite of commands for helping you see how busy your disks are, not just how full. In this post, we examine five very useful commands for looking into disk activity. 
Two of the commands (iostat and ioping) may have to be added to your system, and these same two commands require you to use sudo privileges, but all five commands provide useful ways to view disk activity.

Probably one of the easiest and most obvious of these commands is **dstat**.

### dstat

In spite of the fact that the **dstat** command begins with the letter "d", it provides stats on a lot more than just disk activity. If you want to view just disk activity, you can use the **-d** option. As shown below, you'll get a continuous list of disk read/write measurements until you stop the display with a ^c. Note that after the first report, each subsequent row in the display will report disk activity in the following time interval, and the default is only one second.

```
$ dstat -d
-dsk/total-
 read writ
 949B 73k
 65k 0 <== first second
 0 24k <== second second
 0 16k
 0 0 ^C
```

Including a number after the -d option will set the interval to that number of seconds.

```
$ dstat -d 10
-dsk/total-
 read writ
 949B 73k
 65k 81M <== first ten seconds
 0 21k <== second ten seconds
 0 9011B ^C
```

Notice that the reported data may be shown in a number of different units — e.g., M (megabytes), k (kilobytes), and B (bytes).

Without options, the dstat command is going to show you a lot of other information as well — indicating how the CPU is spending its time, displaying network and paging activity, and reporting on interrupts and context switches.

```
$ dstat
You did not select any stats, using -cdngy by default.
--total-cpu-usage-- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai stl| read writ| recv send| in out | int csw
 0 0 100 0 0| 949B 73k| 0 0 | 0 3B| 38 65
 0 0 100 0 0| 0 0 | 218B 932B| 0 0 | 53 68
 0 1 99 0 0| 0 16k| 64B 468B| 0 0 | 64 81 ^C
```

The dstat command provides valuable insights into overall Linux system performance, pretty much replacing a collection of older tools, such as vmstat, netstat, iostat, and ifstat, with a flexible and powerful command that combines their features. For more insight into the other information that the dstat command can provide, refer to this post on the [dstat][1] command.

### iostat

The iostat command helps monitor system input/output device loading by observing the time the devices are active in relation to their average transfer rates. It's sometimes used to evaluate the balance of activity between disks.

```
$ iostat
Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU)

avg-cpu: %user %nice %system %iowait %steal %idle
 0.07 0.01 0.03 0.05 0.00 99.85

Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn
loop0 0.00 0.00 0.00 1048 0
loop1 0.00 0.00 0.00 365 0
loop2 0.00 0.00 0.00 1056 0
loop3 0.00 0.01 0.00 16169 0
loop4 0.00 0.00 0.00 413 0
loop5 0.00 0.00 0.00 1184 0
loop6 0.00 0.00 0.00 1062 0
loop7 0.00 0.00 0.00 5261 0
sda 1.06 0.89 72.66 2837453 232735080
sdb 0.00 0.02 0.00 48669 40
loop8 0.00 0.00 0.00 1053 0
loop9 0.01 0.01 0.00 18949 0
loop10 0.00 0.00 0.00 56 0
loop11 0.00 0.00 0.00 7090 0
loop12 0.00 0.00 0.00 1160 0
loop13 0.00 0.00 0.00 108 0
loop14 0.00 0.00 0.00 3572 0
loop15 0.01 0.01 0.00 20026 0
loop16 0.00 0.00 0.00 24 0
```

Of course, all the stats provided on Linux loop devices can clutter the display when you want to focus solely on your disks. The command, however, does provide the **-p** option, which allows you to just look at your disks — as shown in the commands below.
+ +``` +$ iostat -p sda +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +avg-cpu: %user %nice %system %iowait %steal %idle + 0.07 0.01 0.03 0.05 0.00 99.85 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 1.06 0.89 72.54 2843737 232815784 +sda1 1.04 0.88 72.54 2821733 232815784 +``` + +Note that **tps** refers to transfers per second. + +You can also get iostat to provide repeated reports. In the example below, we're getting measurements every five seconds by using the **-d** option. + +``` +$ iostat -p sda -d 5 +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 1.06 0.89 72.51 2843749 232834048 +sda1 1.04 0.88 72.51 2821745 232834048 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 0.80 0.00 11.20 0 56 +sda1 0.80 0.00 11.20 0 56 +``` + +If you prefer to omit the first (stats since boot) report, add a **-y** to your command. + +``` +$ iostat -p sda -d 5 -y +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sda 0.80 0.00 11.20 0 56 +sda1 0.80 0.00 11.20 0 56 +``` + +Next, we look at our second disk drive. + +``` +$ iostat -p sdb +Linux 4.18.0-041800-generic (butterfly) 12/26/2018 _x86_64_ (2 CPU) + +avg-cpu: %user %nice %system %iowait %steal %idle + 0.07 0.01 0.03 0.05 0.00 99.85 + +Device tps kB_read/s kB_wrtn/s kB_read kB_wrtn +sdb 0.00 0.02 0.00 48669 40 +sdb2 0.00 0.00 0.00 4861 40 +sdb1 0.00 0.01 0.00 35344 0 +``` + +### iotop + +The **iotop** command is top-like utility for looking at disk I/O. It gathers I/O usage information provided by the Linux kernel so that you can get an idea which processes are most demanding in terms in disk I/O. In the example below, the loop time has been set to 5 seconds. The display will update itself, overwriting the previous output. + +``` +$ sudo iotop -d 5 +Total DISK READ: 0.00 B/s | Total DISK WRITE: 1585.31 B/s +Current DISK READ: 0.00 B/s | Current DISK WRITE: 12.39 K/s + TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND +32492 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.12 % [kworker/u8:1-ev~_power_efficient] + 208 be/3 root 0.00 B/s 1585.31 B/s 0.00 % 0.11 % [jbd2/sda1-8] + 1 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % init splash + 2 be/4 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [kthreadd] + 3 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_gp] + 4 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [rcu_par_gp] + 8 be/0 root 0.00 B/s 0.00 B/s 0.00 % 0.00 % [mm_percpu_wq] +``` + +### ioping + +The **ioping** command is an altogether different type of tool, but it can report disk latency — how long it takes a disk to respond to requests — and can be helpful in diagnosing disk problems. + +``` +$ sudo ioping /dev/sda1 +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=1 time=960.2 us (warmup) +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=2 time=841.5 us +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=3 time=831.0 us +4 KiB <<< /dev/sda1 (block device 111.8 GiB): request=4 time=1.17 ms +^C +--- /dev/sda1 (block device 111.8 GiB) ioping statistics --- +3 requests completed in 2.84 ms, 12 KiB read, 1.05 k iops, 4.12 MiB/s +generated 4 requests in 3.37 s, 16 KiB, 1 iops, 4.75 KiB/s +min/avg/max/mdev = 831.0 us / 947.9 us / 1.17 ms / 158.0 us +``` + +### atop + +The **atop** command, like **top** provides a lot of information on system performance, including some stats on disk activity. 
+ +``` +ATOP - butterfly 2018/12/26 17:24:19 37d3h13m------ 10ed +PRC | sys 0.03s | user 0.01s | #proc 179 | #zombie 0 | #exit 6 | +CPU | sys 1% | user 0% | irq 0% | idle 199% | wait 0% | +cpu | sys 1% | user 0% | irq 0% | idle 99% | cpu000 w 0% | +CPL | avg1 0.00 | avg5 0.00 | avg15 0.00 | csw 677 | intr 470 | +MEM | tot 5.8G | free 223.4M | cache 4.6G | buff 253.2M | slab 394.4M | +SWP | tot 2.0G | free 2.0G | | vmcom 1.9G | vmlim 4.9G | +DSK | sda | busy 0% | read 0 | write 7 | avio 1.14 ms | +NET | transport | tcpi 4 | tcpo stall 8 | udpi 1 | udpo 0swout 2255 | +NET | network | ipi 10 | ipo 7 | ipfrw 0 | deliv 60.67 ms | +NET | enp0s25 0% | pcki 10 | pcko 8 | si 1 Kbps | so 3 Kbp0.73 ms | + + PID SYSCPU USRCPU VGROW RGROW ST EXC THR S CPUNR CPU CMD 1/1673e4 | + 3357 0.01s 0.00s 672K 824K -- - 1 R 0 0% atop + 3359 0.01s 0.00s 0K 0K NE 0 0 E - 0% + 3361 0.00s 0.01s 0K 0K NE 0 0 E - 0% + 3363 0.01s 0.00s 0K 0K NE 0 0 E - 0% +31357 0.00s 0.00s 0K 0K -- - 1 S 1 0% bash + 3364 0.00s 0.00s 8032K 756K N- - 1 S 1 0% sleep + 2931 0.00s 0.00s 0K 0K -- - 1 I 1 0% kworker/u8:2-e + 3356 0.00s 0.00s 0K 0K -E 0 0 E - 0% + 3360 0.00s 0.00s 0K 0K NE 0 0 E - 0% + 3362 0.00s 0.00s 0K 0K NE 0 0 E - 0% +``` + +If you want to look at _just_ the disk stats, you can easily manage that with a command like this: + +``` +$ atop | grep DSK +$ atop | grep DSK +DSK | sda | busy 0% | read 122901 | write 3318e3 | avio 0.67 ms | +DSK | sdb | busy 0% | read 1168 | write 103 | avio 0.73 ms | +DSK | sda | busy 2% | read 0 | write 92 | avio 2.39 ms | +DSK | sda | busy 2% | read 0 | write 94 | avio 2.47 ms | +DSK | sda | busy 2% | read 0 | write 99 | avio 2.26 ms | +DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms | +DSK | sda | busy 2% | read 0 | write 94 | avio 2.43 ms | +DSK | sda | busy 2% | read 0 | write 92 | avio 2.43 ms | +^C +``` + +### Being in the know with disk I/O + +Linux provides enough commands to give you good insights into how hard your disks are working and help you focus on potential problems or slowdowns. Hopefully, one of these commands will tell you just what you need to know when it's time to question disk performance. Occasional use of these commands will help ensure that especially busy or slow disks will be obvious when you need to check them. + +Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind. 
+ +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3330497/linux/linux-commands-for-measuring-disk-activity.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3291616/linux/examining-linux-system-performance-with-dstat.html +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20181231 Easily Upload Text Snippets To Pastebin-like Services From Commandline.md b/sources/tech/20181231 Easily Upload Text Snippets To Pastebin-like Services From Commandline.md new file mode 100644 index 0000000000..58b072f2fc --- /dev/null +++ b/sources/tech/20181231 Easily Upload Text Snippets To Pastebin-like Services From Commandline.md @@ -0,0 +1,259 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Easily Upload Text Snippets To Pastebin-like Services From Commandline) +[#]: via: (https://www.ostechnix.com/how-to-easily-upload-text-snippets-to-pastebin-like-services-from-commandline/) +[#]: author: (SK https://www.ostechnix.com/author/sk/) + +Easily Upload Text Snippets To Pastebin-like Services From Commandline +====== + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-720x340.png) + +Whenever there is need to share the code snippets online, the first one probably comes to our mind is Pastebin.com, the online text sharing site launched by **Paul Dixon** in 2002. Now, there are several alternative text sharing services available to upload and share text snippets, error logs, config files, a command’s output or any sort of text files. If you happen to share your code often using various Pastebin-like services, I do have a good news for you. Say hello to **Wgetpaste** , a command line BASH utility to easily upload text snippets to pastebin-like services. Using Wgetpaste script, anyone can quickly share text snippets to their friends, colleagues, or whoever wants to see/use/review the code from command line in Unix-like systems. + +### Installing Wgetpaste + +Wgetpaste is available in Arch Linux [Community] repository. To install it on Arch Linux and its variants like Antergos and Manjaro Linux, just run the following command: + +``` +$ sudo pacman -S wgetpaste +``` + +For other distributions, grab the source code from [**Wgetpaste website**][1] and install it manually as described below. + +First download the latest Wgetpaste tar file: + +``` +$ wget http://wgetpaste.zlin.dk/wgetpaste-2.28.tar.bz2 +``` + +Extract it: + +``` +$ tar -xvjf wgetpaste-2.28.tar.bz2 +``` + +It will extract the contents of the tar file in a folder named “wgetpaste-2.28”. + +Go to that directory: + +``` +$ cd wgetpaste-2.28/ +``` + +Copy the wgetpaste binary to your $PATH, for example **/usr/local/bin/**. + +``` +$ sudo cp wgetpaste /usr/local/bin/ +``` + +Finally, make it executable using command: + +``` +$ sudo chmod +x /usr/local/bin/wgetpaste +``` + +### Upload Text Snippets To Pastebin-like Services + +Uploading text snippets using Wgetpaste is trivial. Let me show you a few examples. + +**1\. 
Upload text files**

To upload any text file using Wgetpaste, just run:

```
$ wgetpaste mytext.txt
```

This command will upload the contents of the mytext.txt file.

Sample output:

```
Your paste can be seen here: https://paste.pound-python.org/show/eO0aQjTgExP0wT5uWyX7/
```

![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-1.png)

You can share the pastebin URL via any medium, like mail, message, WhatsApp or IRC. Whoever has this URL can visit it and view the contents of the text file in a web browser of their choice.

Here are the contents of the mytext.txt file in a web browser:

![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-2.png)

You can also use the **'tee'** command to display what is being pasted, instead of uploading it blindly.

To do so, use the **-t** option like below.

```
$ wgetpaste -t mytext.txt
```

![][3]

**2\. Upload text snippets to different services**

By default, Wgetpaste will upload the text snippets to the **poundpython** (https://paste.pound-python.org/) service.

To view the list of supported services, run:

```
$ wgetpaste -S
```

Sample output:

```
Services supported: (case sensitive):
Name: | Url:
=============|=================
bpaste | https://bpaste.net/
codepad | http://codepad.org/
dpaste | http://dpaste.com/
gists | https://api.github.com/gists
*poundpython | https://paste.pound-python.org/
```

Here, ***** indicates the default service.

As you can see, Wgetpaste currently supports five text sharing services. I didn't try all of them, but I believe all services will work.

To upload the contents to other services, for example **bpaste.net**, use the **-s** option like below.

```
$ wgetpaste -s bpaste mytext.txt
Your paste can be seen here: https://bpaste.net/show/5199e127e733
```

**3\. Read input from stdin**

Wgetpaste can also read the input from stdin.

```
$ uname -a | wgetpaste
```

This command will upload the output of the 'uname -a' command.

**4\. Upload the COMMAND and the output of COMMAND together**

Sometimes, you may need to paste a COMMAND and its output. To do so, specify the contents of the command within quotes like below.

```
$ wgetpaste -c 'ls -l'
```

This will upload the command 'ls -l' along with its output to the pastebin service.

This can be useful when you want to let others clearly know what exact command you just ran and what its output was.

![][4]

As you can see in the output, I ran the 'ls -l' command.

**5\. Upload system log files, config files**

Like I already said, we can upload any sort of text file from your system, not just an ordinary text file, such as log files or a specific command's output. Say for example, you just updated your Arch Linux box and ended up with a broken system. You ask your colleague how to fix it, and s/he wants to read the pacman.log file. Here is the command to upload the contents of the pacman.log file:

```
$ wgetpaste /var/log/pacman.log
```

Share the pastebin URL with your colleague, so s/he can review the pacman.log and may help you fix the problem by reviewing the log file.

Usually, the contents of log files might be too long, and you don't want to share them all. In such cases, just use the **cat** command to read the output, use the **tail** command with the **-n** switch to define the number of lines to share, and finally pipe the output to Wgetpaste as shown below.
+ +``` +$ cat /var/log/pacman.log | tail -n 50 | wgetpaste +``` + +The above command will upload only the **last 50 lines** of pacman.log file. + +**6\. Convert input url to tinyurl** + +By default, Wgetpaste will display the full pastebin URL in the output. If you want to convert the input URL to a tinyurl, just use **-u** option. + +``` +$ wgetpaste -u mytext.txt +Your paste can be seen here: http://tinyurl.com/y85d8gtz +``` + +**7. Set language +** + +By default, Wgetpaste will upload text snippets in **plain text**. + +To list languages supported by the specified service, use **-L** option. + +``` +$ wgetpaste -L +``` + +This command will list all languages supported by default service i.e **poundpython** (). + +We can change this using **-l** option. + +``` +$ wgetpaste -l Bash mytext.txt +``` + +**8\. Disable syntax highlighting or html in the output** + +As I mentioned above, the text snippets will be displayed in a specific language format (plaintext, Bash etc.). + +You can, however, change this behaviour to display the raw text snippets using **-r** option. + +``` +$ wgetpaste -r mytext.txt +Your raw paste can be seen here: https://paste.pound-python.org/raw/CUJhQ3jEmr2UvfmD2xCL/ +``` + +![](https://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-5.png) + +As you can see in the above output, there is no syntax highlighting, no html formatting. Just a raw output. + +**9\. Change Wgetpaste defaults** + +All Defaults values (DEFAULT_{NICK,LANGUAGE,EXPIRATION}[_${SERVICE}] and DEFAULT_SERVICE) can be changed globally in **/etc/wgetpaste.conf** or per user in **~/.wgetpaste.conf** files. These files, however, are not available by default in my system. I guess we need to manually create them. The developer has given the sample contents for both files [**here**][5] and [**here**][6]. Just create these files manually with given sample contents and modify the parameters accordingly to change Wgetpaste defaults. + +**10\. Getting help** + +To display the help section, run: + +``` +$ wgetpaste -h +``` + +And, that’s all for now. Hope this was useful. We will publish more useful content in the days to come. Stay tuned! + +On behalf of **OSTechNix** , I wish you all a very **Happy New Year 2019**. I am grateful to all our readers, contributors, and mentors for supporting us from the beginning of our journey. We couldn’t come this far without your support and guidance. Thank you everyone! Have a great year ahead!! + +Cheers! 
+ + + +-------------------------------------------------------------------------------- + +via: https://www.ostechnix.com/how-to-easily-upload-text-snippets-to-pastebin-like-services-from-commandline/ + +作者:[SK][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.ostechnix.com/author/sk/ +[b]: https://github.com/lujun9972 +[1]: http://wgetpaste.zlin.dk/ +[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[3]: http://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-3.png +[4]: http://www.ostechnix.com/wp-content/uploads/2018/12/wgetpaste-4.png +[5]: http://wgetpaste.zlin.dk/zlin.conf +[6]: http://wgetpaste.zlin.dk/wgetpaste.example diff --git a/sources/tech/20181231 Troubleshooting hardware problems in Linux.md b/sources/tech/20181231 Troubleshooting hardware problems in Linux.md new file mode 100644 index 0000000000..dcc89034db --- /dev/null +++ b/sources/tech/20181231 Troubleshooting hardware problems in Linux.md @@ -0,0 +1,141 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Troubleshooting hardware problems in Linux) +[#]: via: (https://opensource.com/article/18/12/troubleshooting-hardware-problems-linux) +[#]: author: (Daniel Oh https://opensource.com/users/daniel-oh) + +Troubleshooting hardware problems in Linux +====== +Learn what's causing your Linux hardware to malfunction so you can get it back up and running quickly. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003499_01_other11x_cc.png?itok=I_kCDYj0) + +[Linux servers][1] run mission-critical business applications in many different types of infrastructures including physical machines, virtualization, private cloud, public cloud, and hybrid cloud. It's important for Linux sysadmins to understand how to manage Linux hardware infrastructure—including software-defined functionalities related to [networking][2], storage, Linux containers, and multiple tools on Linux servers. + +It can take some time to troubleshoot and solve hardware-related issues on Linux. Even highly experienced sysadmins sometimes spend hours working to solve mysterious hardware and software discrepancies. + +The following tips should make it quicker and easier to troubleshoot hardware in Linux. Many different things can cause problems with Linux hardware; before you start trying to diagnose them, it's smart to learn about the most common issues and where you're most likely to find them. + +### Quick-diagnosing devices, modules, and drivers + +The first step in troubleshooting usually is to display a list of the hardware installed on your Linux server. You can obtain detailed information on the hardware using **ls** commands such as **[lspci][3]** , **[lsblk][4]** , **[lscpu][5]** , and **[lsscsi][6]**. For example, here is output of the **lsblk** command: + +``` +# lsblk +NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT +xvda    202:0    0  50G  0 disk +├─xvda1 202:1    0   1M  0 part +└─xvda2 202:2    0  50G  0 part / +xvdb    202:16   0  20G  0 disk +└─xvdb1 202:17   0  20G  0 part +``` + +If the **ls** commands don't reveal any errors, use init processes (e.g., **systemd** ) to see how the Linux server is working. **systemd** is the most popular init process for bootstrapping user spaces and controlling multiple system processes. 
For example, here is output of the **systemctl status** command:

```
# systemctl status
● bastion.f347.internal
    State: running
     Jobs: 0 queued
   Failed: 0 units
    Since: Wed 2018-11-28 01:29:05 UTC; 2 days ago
   CGroup: /
           ├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
           ├─kubepods.slice
           │ ├─kubepods-pod3881728a_f2af_11e8_af77_06af52f87498.slice
           │ │ ├─docker-88b27385f4bae77bba834fbd60a61d19026bae13d18eb147783ae27819c34967.scope
           │ │ │ └─23860 /opt/bridge/bin/bridge --public-dir=/opt/bridge/static --config=/var/console-config/console-c
           │ │ └─docker-a4433f0d523c7e5bc772ee4db1861e4fa56c4e63a2d48f6bc831458c2ce9fd2d.scope
           │ │   └─23639 /usr/bin/pod
....
```

### Digging into multiple logs

**dmesg** lets you spot errors and warnings in the kernel's most recent messages. For example, here is output of the **dmesg | more** command:

```
# dmesg | more
....
[ 1539.027419] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 1539.042726] IPv6: ADDRCONF(NETDEV_UP): veth61f37018: link is not ready
[ 1539.048706] IPv6: ADDRCONF(NETDEV_CHANGE): veth61f37018: link becomes ready
[ 1539.055034] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 1539.098550] device veth61f37018 entered promiscuous mode
[ 1541.450207] device veth61f37018 left promiscuous mode
[ 1542.493266] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
[ 9965.292788] SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
[ 9965.449401] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[ 9965.462738] IPv6: ADDRCONF(NETDEV_UP): vetheacc333c: link is not ready
[ 9965.468942] IPv6: ADDRCONF(NETDEV_CHANGE): vetheacc333c: link becomes ready
....
```

You can also look at all Linux system logs in the **/var/log/messages** file, which is where you'll find errors related to specific issues. It's worthwhile to monitor the messages via the **tail** command in real time when you make modifications to your hardware, such as mounting an extra disk or adding an Ethernet network interface. For example, here is output of the **tail -f /var/log/messages** command:

```
# tail -f /var/log/messages
Dec  1 13:20:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain in-addr.arpa
Dec  1 13:20:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain cluster.local
Dec  1 13:21:03 bastion dnsmasq[30201]: setting upstream servers from DBus
Dec  1 13:21:03 bastion dnsmasq[30201]: using nameserver 192.199.0.2#53
Dec  1 13:21:03 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain in-addr.arpa
Dec  1 13:21:03 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain cluster.local
Dec  1 13:21:33 bastion dnsmasq[30201]: setting upstream servers from DBus
Dec  1 13:21:33 bastion dnsmasq[30201]: using nameserver 192.199.0.2#53
Dec  1 13:21:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain in-addr.arpa
Dec  1 13:21:33 bastion dnsmasq[30201]: using nameserver 127.0.0.1#53 for domain cluster.local
```

### Analyzing networking functions

You may have hundreds of thousands of cloud-native applications serving business services in a complex networking environment; these may include virtualization, multiple clouds, and hybrid cloud.
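
Before reaching for anything heavier, a couple of quick probes usually establish whether basic reachability and name resolution are intact. The targets below are illustrative (the nameserver address comes from the dnsmasq log above); substitute hosts that matter in your environment:

```
# ping -c 3 192.199.0.2         # is the upstream nameserver reachable?
# dig +short opensource.com     # does DNS resolution return an address?
# traceroute -n 192.199.0.2     # where along the path do packets stop?
```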
Beyond those quick probes, you should analyze whether networking connectivity is working correctly as part of your troubleshooting. Useful commands to figure out networking functions on the Linux server include **ip addr**, **traceroute**, **nslookup**, **dig**, and **ping**, among others. For example, here is output of the **ip addr show** command:

```
# ip addr show
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 9001 qdisc mq state UP group default qlen 1000
    link/ether 06:af:52:f8:74:98 brd ff:ff:ff:ff:ff:ff
    inet 192.199.0.169/24 brd 192.199.0.255 scope global noprefixroute dynamic eth0
       valid_lft 3096sec preferred_lft 3096sec
    inet6 fe80::4af:52ff:fef8:7498/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:67:fb:1a:a2 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:67ff:fefb:1aa2/64 scope link
       valid_lft forever preferred_lft forever
....
```

### In conclusion

Troubleshooting Linux hardware requires considerable knowledge, including how to use powerful command-line tools and figure out system logs. You should also know how to diagnose the kernel space, which is where you can find the root cause of many hardware problems. Keep in mind that hardware issues in Linux may come from many different sources, including devices, modules, drivers, BIOS, networking, and even plain old hardware malfunctions.

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/12/troubleshooting-hardware-problems-linux

作者:[Daniel Oh][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/daniel-oh
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/18/5/what-linux-server
[2]: https://opensource.com/article/18/11/intro-software-defined-networking
[3]: https://linux.die.net/man/8/lspci
[4]: https://linux.die.net/man/8/lsblk
[5]: https://linux.die.net/man/1/lscpu
[6]: https://linux.die.net/man/8/lsscsi
diff --git a/sources/tech/20190102 How To Display Thumbnail Images In Terminal.md b/sources/tech/20190102 How To Display Thumbnail Images In Terminal.md
new file mode 100644
index 0000000000..3c4105e13f
--- /dev/null
+++ b/sources/tech/20190102 How To Display Thumbnail Images In Terminal.md
@@ -0,0 +1,186 @@
[#]: collector: (lujun9972)
[#]: translator: ( WangYueScream)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Display Thumbnail Images In Terminal)
[#]: via: (https://www.ostechnix.com/how-to-display-thumbnail-images-in-terminal/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)

How To Display Thumbnail Images In Terminal
====== 
![](https://www.ostechnix.com/wp-content/uploads/2019/01/lsix-720x340.png)

A while ago, we discussed [**Fim**][1], a lightweight CLI image viewer application used to display various types of images, such as bmp, gif, jpeg, and png, from the command line.
Today, I stumbled upon a similar utility named **‘lsix’**. It is like the ‘ls’ command in Unix-like systems, but for images only. lsix is a simple CLI utility designed to display thumbnail images in the Terminal using **Sixel** graphics. For those wondering, Sixel, short for six pixels, is a type of bitmap graphics format. lsix uses **ImageMagick**, so almost all file formats supported by ImageMagick will work fine.

### Features

Concerning the features of lsix, we can list the following:

  * Automatically detects if your Terminal supports Sixel graphics or not. If your Terminal doesn’t support Sixel, it will notify you to enable it.
  * Automatically detects the terminal background color. It uses terminal escape sequences to try to figure out the foreground and background colors of your Terminal application and will display the thumbnails clearly.
  * If there are more images in the directory, usually >21, lsix will display those images one row at a time, so you need not wait for the entire montage to be created.
  * Works well over SSH, so you can manipulate images stored on your remote web server without much hassle.
  * It supports non-bitmap graphics, such as .svg, .eps, .pdf, .xcf etc.
  * Written in BASH, so it works on almost all Linux distros.



### Installing lsix

Since lsix uses ImageMagick, make sure you have installed it. It is available in the default repositories of most Linux distributions. For example, on Arch Linux and its variants like Antergos and Manjaro Linux, ImageMagick can be installed using the command:

```
$ sudo pacman -S imagemagick
```

On Debian, Ubuntu, Linux Mint:

```
$ sudo apt-get install imagemagick
```

lsix doesn’t require any installation as it is just a BASH script. Just download it and move it to your $PATH. It’s that simple.

Download the latest lsix version from the project’s GitHub page. I am going to download the lsix archive file using the command:

```
$ wget https://github.com/hackerb9/lsix/archive/master.zip
```

Extract the downloaded zip file:

```
$ unzip master.zip
```

This command will extract all contents into a folder named ‘lsix-master’. Copy the lsix binary from this directory to your $PATH, for example /usr/local/bin/.

```
$ sudo cp lsix-master/lsix /usr/local/bin/
```

Finally, make the lsix binary executable:

```
$ sudo chmod +x /usr/local/bin/lsix
```

That’s it. Now it is time to display thumbnails in the terminal itself.

Before you start using lsix, **make sure your Terminal supports Sixel graphics**.

The developer has developed lsix on an Xterm in **vt340 emulation mode**. However, he claims that lsix should work on any Sixel-compatible Terminal.

Xterm supports Sixel graphics, but it isn’t enabled by default.

You can launch Xterm with Sixel mode enabled using the following command (from another Terminal):

```
$ xterm -ti vt340
```

Alternatively, you can make vt340 the default terminal type for Xterm as described below.

Edit the **.Xresources** file (if it is not available, just create it):

```
$ vi .Xresources
```

Add the following line:

```
xterm*decTerminalID : vt340
```

Press **ESC** and type **:wq** to save and close the file.

Finally, run the following command to apply the changes:

```
$ xrdb -merge .Xresources
```

Now Xterm will start with Sixel mode enabled at every launch by default.

### Display Thumbnail Images In Terminal

Launch Xterm (don’t forget to start it with vt340 mode). Here is how Xterm looks on my system.

![](https://www.ostechnix.com/wp-content/uploads/2019/01/xterm-1.png)
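
lsix does this Sixel detection for you, but you can also run a similar check by hand. The script below is a rough sketch of the same idea, not part of lsix itself: it sends the terminal a primary Device Attributes query and looks for the parameter `4` (Sixel graphics) in the reply. The exact reply string varies between terminals, so treat it as a hint rather than a guarantee:

```
#!/bin/bash
# Ask the terminal for its device attributes. A Sixel-capable
# terminal answers something like ESC[?64;1;2;4;...c where the
# "4" parameter indicates Sixel support.
printf '\033[c' > /dev/tty
IFS=';?c' read -rs -t 1 -d 'c' -a reply < /dev/tty
for param in "${reply[@]}"; do
    if [ "$param" = "4" ]; then
        echo "This terminal reports Sixel support."
        exit 0
    fi
done
echo "No Sixel support reported."
```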
Like I already stated, lsix is a very simple utility. It doesn’t have any command line flags or configuration files. All you have to do is just pass the path of your file as an argument like below.

```
$ lsix ostechnix/logo.png
```

![](https://www.ostechnix.com/wp-content/uploads/2019/01/lsix-4.png)

If you run it without a path, it will display the thumbnail images in your current working directory. I have a few files in a directory named **ostechnix**.

To display the thumbnails in this directory, just run:

```
$ lsix
```

![](https://www.ostechnix.com/wp-content/uploads/2019/01/lsix-1.png)

See? The thumbnails of all files are displayed in the terminal itself.

If you use the ‘ls’ command, you would see only the filenames, not the thumbnails.

![][3]

You can also display a specific image or a group of images of a specific type using wildcards.

For example, to display a single image, just mention the full path of the image like below.

```
$ lsix girl.jpg
```

![](https://www.ostechnix.com/wp-content/uploads/2019/01/lsix-2.png)

To display all images of a specific type, say PNG, use the wildcard character like below.

```
$ lsix *.png
```

![][4]

For JPEG type images, the command would be:

```
$ lsix *jpg
```

The thumbnail image quality is surprisingly good. I thought lsix would just display blurry thumbnails. I was wrong. The thumbnails are clearly visible just like in graphical image viewers.

And, that’s all for now. As you can see, lsix is very similar to the ‘ls’ command, but only for displaying thumbnails. If you deal with a lot of images at work, lsix might be quite handy. Give it a try and let us know your thoughts on this utility in the comment section below. If you know any similar tools, please suggest them as well. I will check and update this guide.

More good stuff to come. Stay tuned!

Cheers!

--------------------------------------------------------------------------------

via: https://www.ostechnix.com/how-to-display-thumbnail-images-in-terminal/

作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/how-to-display-images-in-the-terminal/
[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[3]: http://www.ostechnix.com/wp-content/uploads/2019/01/ls-command-1.png
[4]: http://www.ostechnix.com/wp-content/uploads/2019/01/lsix-3.png
diff --git a/sources/tech/20190102 Using Yarn on Ubuntu and Other Linux Distributions.md b/sources/tech/20190102 Using Yarn on Ubuntu and Other Linux Distributions.md
new file mode 100644
index 0000000000..71555454f5
--- /dev/null
+++ b/sources/tech/20190102 Using Yarn on Ubuntu and Other Linux Distributions.md
@@ -0,0 +1,265 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Using Yarn on Ubuntu and Other Linux Distributions)
[#]: via: (https://itsfoss.com/install-yarn-ubuntu)
[#]: author: (Abhishek Prakash https://itsfoss.com/author/abhishek/)

Using Yarn on Ubuntu and Other Linux Distributions
====== 

**This quick tutorial shows you the official way of installing the Yarn package manager on Ubuntu and Debian Linux.
You’ll also learn some basic Yarn commands and the steps to remove Yarn completely.**

[Yarn][1] is an open source JavaScript package manager developed by Facebook. It is an alternative, or should I say an improvement, to the popular npm package manager. [Facebook’s developer team][2] created Yarn to overcome the shortcomings of [npm][3]. Facebook claims that Yarn is faster, more reliable and more secure than npm.

Like npm, Yarn provides you a way to automate the process of installing, updating, configuring, and removing packages retrieved from a global registry.

The advantage of Yarn is that it is faster, as it caches every package it downloads so it doesn’t need to download it again. It also parallelizes operations to maximize resource utilization. Yarn also uses [checksums to verify the integrity][4] of every installed package before its code is executed, and it guarantees that an install that worked on one system will work exactly the same way on any other system.

If you are [using nodejs on Ubuntu][5], you probably already have npm installed on your system. In that case, you can use npm to install Yarn globally in the following manner:

```
sudo npm install yarn -g
```

However, I would recommend using the official way to install Yarn on Ubuntu/Debian.

### Installing Yarn on Ubuntu and Debian [The Official Way]

![Yarn JS][6]

The instructions mentioned here should be applicable to all versions of Ubuntu, such as Ubuntu 18.04 and 16.04. The same set of instructions is also valid for Debian and other Debian-based distributions.

Since the tutorial uses curl to add the GPG key of the Yarn project, it would be a good idea to verify whether you have curl installed already or not.

```
sudo apt install curl
```

The above command will install curl if it wasn’t installed already. Now that you have curl, you can use it to add the GPG key of the Yarn project in the following fashion:

```
curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
```

After that, add the repository to your sources list so that you can easily upgrade the Yarn package in the future with the rest of the system updates:

```
sudo sh -c 'echo "deb https://dl.yarnpkg.com/debian/ stable main" >> /etc/apt/sources.list.d/yarn.list'
```

You are all set now. [Update Ubuntu][7] or your Debian system to refresh the list of available packages and then install yarn:

```
sudo apt update
sudo apt install yarn
```

This will install Yarn along with nodejs. Once the process completes, verify that Yarn has been installed successfully. You can do that by checking the Yarn version.

```
yarn --version
```

For me, it showed an output like this:

```
yarn --version
1.12.3
```

This means that I have Yarn version 1.12.3 installed on my system.

### Using Yarn

I presume that you have a basic understanding of JavaScript programming and how dependencies work. I am not going to go into detail here. I’ll show you some of the basic Yarn commands that will help you get started with it.

#### Creating a new project with Yarn

Like npm, Yarn also works with a package.json file. This is where you add your dependencies. The packages for the dependencies are cached in the node_modules directory in the root directory of your project.

In the root directory of your project, run the following command to generate a fresh package.json file. It will ask you a number of questions; you can skip them or go with the defaults by pressing enter.
```
yarn init
yarn init v1.12.3
question name (test_yarn): test_yarn_proect
question version (1.0.0): 0.1
question description: Test Yarn
question entry point (index.js):
question repository url:
question author: abhishek
question license (MIT):
question private:
success Saved package.json
Done in 82.42s.
```

With this, you get a package.json file of this sort:

```
{
  "name": "test_yarn_proect",
  "version": "0.1",
  "description": "Test Yarn",
  "main": "index.js",
  "author": "abhishek",
  "license": "MIT"
}
```

Now that you have the package.json, you can either manually edit it to add or remove package dependencies or use Yarn commands (preferred).

#### Adding dependencies with Yarn

You can add a dependency on a certain package in the following fashion:

```
yarn add <package_name>
```

For example, if you want to use [Lodash][8] in your project, you can add it using Yarn like this:

```
yarn add lodash
yarn add v1.12.3
info No lockfile found.
[1/4] Resolving packages…
[2/4] Fetching packages…
[3/4] Linking dependencies…
[4/4] Building fresh packages…
success Saved lockfile.
success Saved 1 new dependency.
info Direct dependencies
└─ lodash@4.17.11
info All dependencies
└─ lodash@4.17.11
Done in 2.67s.
```

And you can see that this dependency has been added automatically to the package.json file:

```
{
  "name": "test_yarn_proect",
  "version": "0.1",
  "description": "Test Yarn",
  "main": "index.js",
  "author": "abhishek",
  "license": "MIT",
  "dependencies": {
    "lodash": "^4.17.11"
  }
}
```

By default, Yarn will add the latest version of a package as the dependency. If you want to use a specific version, you may specify it while adding.

As always, you can also update the package.json file manually.

#### Upgrading dependencies with Yarn

You can upgrade a particular dependency to its latest version with the following command:

```
yarn upgrade <package_name>
```

It will see if the package in question has a newer version and will update it accordingly.

You can also change the version of an already added dependency by specifying it explicitly, in the form `yarn upgrade <package_name>@<version>`.

You can also upgrade all the dependencies of your project to their latest version with one single command:

```
yarn upgrade
```

It will check the versions of all the dependencies and will update them if there are any newer versions.

#### Removing dependencies with Yarn

You can remove a package from the dependencies of your project in this way:

```
yarn remove <package_name>
```

#### Install all project dependencies

If you made any changes to the package.json file, you should run either

```
yarn
```

or

```
yarn install
```

to install all the dependencies at once.

### How to remove Yarn from Ubuntu or Debian

I’ll complete this tutorial by mentioning the steps to remove Yarn from your system if you used the above steps to install it. If you ever realize that you don’t need Yarn anymore, you can remove it.

Use the following command to remove Yarn and its dependencies.

```
sudo apt purge yarn
```

You should also remove the Yarn repository from the repository list:

```
sudo rm /etc/apt/sources.list.d/yarn.list
```

The optional next step is to remove the GPG key you had added to the trusted keys. But for that, you need to know the key.
You can get that using the apt-key command:

```
$ sudo apt-key list
Warning: apt-key output should not be parsed (stdout is not a terminal)
pub   rsa4096 2016-10-05 [SC]
      72EC F46A 56B4 AD39 C907  BBB7 1646 B01B 86E5 0310
uid           [ unknown] Yarn Packaging <yarn@dan.cx>
sub   rsa4096 2016-10-05 [E]
sub   rsa4096 2019-01-02 [S] [expires: 2020-02-02]
```

The key here is the last 8 characters of the GPG key’s fingerprint in the line starting with pub.

So, in my case, the key is 86E50310 and I’ll remove it using this command:

```
sudo apt-key del 86E50310
```

You’ll see an OK in the output and the GPG key of the Yarn package will be removed from the list of GPG keys your system trusts.

I hope this tutorial helped you to install Yarn on Ubuntu, Debian, Linux Mint, elementary OS etc. I provided some basic Yarn commands to get you started along with the complete steps to remove Yarn from your system.

I hope you liked this tutorial and if you have any questions or suggestions, please feel free to leave a comment below.


--------------------------------------------------------------------------------

via: https://itsfoss.com/install-yarn-ubuntu

作者:[Abhishek Prakash][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/abhishek/
[b]: https://github.com/lujun9972
[1]: https://yarnpkg.com/lang/en/
[2]: https://code.fb.com/
[3]: https://www.npmjs.com/
[4]: https://itsfoss.com/checksum-tools-guide-linux/
[5]: https://itsfoss.com/install-nodejs-ubuntu/
[6]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/yarn-js-ubuntu-debian.jpeg?resize=800%2C450&ssl=1
[7]: https://itsfoss.com/update-ubuntu/
[8]: https://lodash.com/
diff --git a/sources/tech/20190104 Search, Study And Practice Linux Commands On The Fly.md b/sources/tech/20190104 Search, Study And Practice Linux Commands On The Fly.md
new file mode 100644
index 0000000000..fa92d3450a
--- /dev/null
+++ b/sources/tech/20190104 Search, Study And Practice Linux Commands On The Fly.md
@@ -0,0 +1,223 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Search, Study And Practice Linux Commands On The Fly!)
[#]: via: (https://www.ostechnix.com/search-study-and-practice-linux-commands-on-the-fly/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)

Search, Study And Practice Linux Commands On The Fly!
====== 

![](https://www.ostechnix.com/wp-content/uploads/2019/01/tldr-720x340.png)

The title may look sketchy and click-baity, so allow me to explain what this tutorial is about. Let us say you want to download an archive file, extract it and move the file from one location to another from the command line. As per the above scenario, we may need at least three Linux commands, one for downloading the file, one for extracting the downloaded file and one for moving the file. If you’re an intermediate or advanced Linux user, you could do this easily with a one-liner or a script in a few seconds. But if you are a noob who doesn’t know much about Linux commands, you might need a little help.

Of course, a quick Google search may yield many results. Or, you could use [**man pages**][1].
But some man pages are really long and comprehensive, and lack useful examples. You might need to scroll down for quite a long time when you’re looking for particular information on specific flags/options. Thankfully, there are some [**good alternatives to man pages**][2], which focus mostly on practical commands. One such good alternative is **TLDR pages**. Using TLDR pages, we can quickly and easily learn a Linux command with practical examples. To access the TLDR pages, we require a TLDR client. There are many clients available. Today, we are going to learn about one such client named **“Tldr++”**.

Tldr++ is a fast and interactive tldr client written in the **Go** programming language. Unlike the other tldr clients, it is fully interactive. That means you can pick a command, read all the examples, and immediately run any command without having to retype or copy/paste each command in the Terminal. Still don’t get it? No problem. Read on to learn and practice Linux commands on the fly.

### Install Tldr++

Installing Tldr++ is very simple. Download the latest Tldr++ version from the [**releases page**][3]. Extract it and move the tldr binary to your $PATH.

```
$ wget https://github.com/isacikgoz/tldr/releases/download/v0.5.0/tldr_0.5.0_linux_amd64.tar.gz

$ tar xzf tldr_0.5.0_linux_amd64.tar.gz

$ sudo mv tldr /usr/local/bin

$ sudo chmod +x /usr/local/bin/tldr
```

Now, run the ‘tldr’ binary to populate the tldr pages on your local system.

```
$ tldr
```

Sample output:

```
Enumerating objects: 6, done.
Counting objects: 100% (6/6), done.
Compressing objects: 100% (6/6), done.
Total 18157 (delta 0), reused 3 (delta 0), pack-reused 18151
Successfully cloned into: /home/sk/.local/share/tldr
```

![](https://www.ostechnix.com/wp-content/uploads/2019/01/tldr-2.png)

Tldr++ is available in the AUR. If you’re on Arch Linux, you can install it using any AUR helper, for example [**YaY**][4]. Make sure you have removed any existing tldr client from your system and run the following command to install Tldr++.

```
$ yay -S tldr++
```

Alternatively, you can build from source as described below. Since Tldr++ is written in Go, make sure you have installed it on your Linux box. If it isn’t installed yet, refer to the following guide.

+ [How To Install Go Language In Linux](https://www.ostechnix.com/install-go-language-linux/)

After installing Go, run the following command to install Tldr++.

```
$ go get -u github.com/isacikgoz/tldr
```

This command will download the contents of the tldr repository into a folder named **‘go’** in the current working directory. Now run the ‘tldr’ binary to populate all tldr pages on your local system using the command:

```
$ go/bin/tldr
```

Sample output:

![][6]

Finally, copy the tldr binary to your $PATH.

```
$ sudo mv tldr /usr/local/bin
```

It is time to see some examples.

### Tldr++ Usage

Type the ‘tldr’ command without any options to display all command examples in alphabetical order.

![][7]

Use the **UP/DOWN arrows** to navigate through the commands, type any letters to search, or type a command name to view the examples of that respective command. Press **?** for more and **Ctrl+c** to return/exit.

To display the example commands of a specific command, for example **apt**, simply do:

```
$ tldr apt
```

![][8]

Choose any example command from the list and hit ENTER. You will see a **\*** symbol before the selected command.
For example, I chose the first command, i.e. ‘sudo apt update’. Now, it will ask you whether to continue or not. If the command is correct, just type ‘y’ to continue and type your sudo password to run the selected command.

![][9]

See? You don’t need to copy/paste or type the actual command in the Terminal. Just choose it from the list and run it on the fly!

Hundreds of Linux command examples are available in the Tldr pages. You can choose one or two commands per day and learn them thoroughly. Keep up this practice every day to learn as much as you can.

### Learn And Practice Linux Commands On The Fly Using Tldr++

Now think of the scenario that I mentioned in the first paragraph. You want to download a file, extract it, move it to a different location and make it executable. Let us see how to do it interactively using the Tldr++ client.

**Step 1 – Download a file from the Internet**

To download a file from the command line, we mostly use the **‘curl’** or **‘wget’** commands. Let me use ‘wget’ to download the file. To open the tldr page of the wget command, just run:

```
$ tldr wget
```

Here are the examples of the wget command.

![](https://www.ostechnix.com/wp-content/uploads/2019/01/wget-tldr.png)

You can use the **UP/DOWN** arrows to go through the list of commands. Once you find the command you want, press ENTER. Here I chose the first command.

Now, enter the path of the file to download.

![](https://www.ostechnix.com/wp-content/uploads/2019/01/tldr-3.png)

You will then be asked to confirm whether it is the correct command or not. If the command is correct, simply type ‘yes’ or ‘y’ to start downloading the file.

![][10]

We have downloaded the file. Let us go ahead and extract it.

**Step 2 – Extract the downloaded archive**

We downloaded a **tar.gz** file, so I am going to open the ‘tar’ tldr page.

```
$ tldr tar
```

You will see the list of example commands. Go through the examples, find the command that is suitable for extracting a tar.gz (gzipped archive) file and hit the ENTER key. In our case, it is the third command.

![][11]

Now, you will be prompted to enter the path of the tar.gz file. Just type the path and hit the ENTER key. Tldr++ supports smart file suggestions. That means it will suggest the file name automatically as you type. Just press the TAB key for auto-completion.

![][12]

If you downloaded the file to some other location, just type the full path, for example **/home/sk/Downloads/tldr_0.5.0_linux_amd64.tar.gz**.

Once you enter the path of the file to extract, press ENTER and then type ‘y’ to confirm.

![][13]

**Step 3 – Move a file from one location to another**

We extracted the archive. Now we need to move the file to another location. To move files from one location to another, we use the ‘mv’ command. So, let me open the tldr page for the mv command.

```
$ tldr mv
```

Choose the correct command to move files from one location to another. In our case, the first command will work, so let me choose it.

![][14]

Type the path of the file that you want to move, enter the destination path and hit the ENTER key.

![][15]

**Note:** Type **y!** or **yes!** to run the command with **sudo** privileges.

As you see in the above screenshot, I moved the file named **‘tldr’** to **‘/usr/local/bin/’**.

For more details, refer to the project’s GitHub page given at the end.

### Conclusion

Don’t get me wrong. **Man pages are great!** There is no doubt about it.
But, as I already said, many man pages are comprehensive and don’t have useful examples. There is no way I could memorize all the lengthy commands with tricky flags. Sometimes I spent a lot of time on man pages and remained clueless. The Tldr pages helped me to find what I needed within a few minutes. Also, we use some commands once in a while and then forget them completely. The Tldr pages, on the other hand, actually help when it comes to commands we rarely use. The Tldr++ client makes this task much easier with smart user interaction. Give it a go and let us know what you think about this tool in the comment section below.

And, that’s all. More good stuff to come. Stay tuned!

Good luck!


--------------------------------------------------------------------------------

via: https://www.ostechnix.com/search-study-and-practice-linux-commands-on-the-fly/

作者:[SK][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.ostechnix.com/author/sk/
[b]: https://github.com/lujun9972
[1]: https://www.ostechnix.com/learn-use-man-pages-efficiently/
[2]: https://www.ostechnix.com/3-good-alternatives-man-pages-every-linux-user-know/
[3]: https://github.com/isacikgoz/tldr/releases
[4]: https://www.ostechnix.com/yay-found-yet-another-reliable-aur-helper/
[5]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[6]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-1.png
[7]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-11.png
[8]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-12.png
[9]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-13.png
[10]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-4.png
[11]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-6.png
[12]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-7.png
[13]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-8.png
[14]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-9.png
[15]: http://www.ostechnix.com/wp-content/uploads/2019/01/tldr-10.png
diff --git a/sources/tech/20190105 Setting up an email server, part 1- The Forwarder.md b/sources/tech/20190105 Setting up an email server, part 1- The Forwarder.md
new file mode 100644
index 0000000000..c6c520e339
--- /dev/null
+++ b/sources/tech/20190105 Setting up an email server, part 1- The Forwarder.md
@@ -0,0 +1,224 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Setting up an email server, part 1: The Forwarder)
[#]: via: (https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/)
[#]: author: (Julian Andres Klode https://blog.jak-linux.org/)

Setting up an email server, part 1: The Forwarder
====== 

This week, I’ve been working on rolling out mail services on my server. I had started on a mail server setup at the end of November, while the server was not yet in use, but only worked on it for about two days and then let it rest.

As my old shared hosting account expired on January 1, I had to move mail forwarding duties over to the new server. Yes, forwarding - I do plan to move hosting the actual email too, but at the moment it’s “just” forwarding to gmail.

### The Software

As you might know from the web server story, my server runs on Ubuntu 18.04.
I set up a mail server on this system using

  * [Postfix][1] for SMTP duties (warning, they oddly do not have an https page)
  * [rspamd][2] for spam filtering, and signing DKIM / ARC
  * [bind9][3] for DNS resolving
  * [postsrsd][4] for SRS



You might wonder why bind9 is in there. It turns out that DNS blacklists used by spam filters block the caching DNS servers you usually use, so you have to run your own recursive DNS server. Ubuntu offers you the choice between bind9 and dnsmasq in main, and it seems like bind9 is more appropriate here than dnsmasq.

### Setting up postfix

Most of the postfix configuration is fairly standard. So, let’s skip TLS configuration and outbound SMTP setups (this is email, and while they support TLS, it’s all optional, so let’s not bother that much here).

The most important part is the restrictions in `main.cf`.

First of all, relay restrictions prevent us from relaying emails to weird domains:

```
# Relay Restrictions
smtpd_relay_restrictions = reject_non_fqdn_recipient reject_unknown_recipient_domain permit_mynetworks permit_sasl_authenticated defer_unauth_destination
```

We also only accept mails from hosts that know their own fully qualified name:

```
# Helo restrictions (hosts not having a proper fqdn)
smtpd_helo_required = yes
smtpd_helo_restrictions = permit_mynetworks reject_invalid_helo_hostname reject_non_fqdn_helo_hostname reject_unknown_helo_hostname
```

We also don’t like clients (other servers) that send data too early, or have an unknown hostname:

```
smtpd_data_restrictions = reject_unauth_pipelining
smtpd_client_restrictions = permit_mynetworks reject_unknown_client_hostname
```

I also set up a custom apparmor profile that’s pretty loose; I plan to migrate to the one in the apparmor git eventually, but it needs more testing and some cleanup.

### Sender rewriting scheme

For SRS using postsrsd, we define the `SRS_DOMAIN` in `/etc/default/postsrsd` and then configure postfix to talk to it:

```
# Handle SRS for forwarding
recipient_canonical_maps = tcp:localhost:10002
recipient_canonical_classes= envelope_recipient,header_recipient

sender_canonical_maps = tcp:localhost:10001
sender_canonical_classes = envelope_sender
```

This has a minor issue: it also rewrites the `Return-Path` when it delivers emails locally, but as I am only forwarding, I’ll worry about that later.

### rspamd basics

rspamd is a great spam filtering system. It uses a small core written in C and a bunch of Lua plugins, such as:

  * IP score, which keeps track of how good a specific server was in the past
  * Replies, which can check whether an email is a reply to another one
  * DNS blacklisting
  * DKIM and ARC validation and signing
  * DMARC validation
  * SPF validation



It also has a nice web UI:

![rspamd web ui status][5]

rspamd web ui status

![rspamd web ui investigating a spam message][6]

rspamd web ui investigating a spam message

Setting up rspamd is quite easy. You basically just drop a bunch of configuration overrides into `/etc/rspamd/local.d` and you’re done. Heck, it mostly works out of the box. There’s a fancy `rspamadm configwizard` too.

What you do want for rspamd is a redis server. redis is needed in [many places][7], such as rate limiting, greylisting, dmarc, reply tracking, ip scoring, and neural networks.
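
Pointing rspamd at redis is a one-line override. The redis documentation linked above describes a global `servers` setting that covers all modules at once; assuming redis listens on localhost with its default port and no password, `local.d/redis.conf` only needs:

```
# /etc/rspamd/local.d/redis.conf
servers = "127.0.0.1";
```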
I made a few changes to the defaults:

  * I enabled subject rewriting instead of adding headers, so spam mail subjects get `[SPAM]` prepended, in `local.d/actions.conf`:

```
reject = 15;
rewrite_subject = 6;
add_header = 6;
greylist = 4;
subject = "[SPAM] %s";
```

  * I set `autolearn = true;` in `local.d/classifier-bayes.conf` to make it learn that an email that has a score of at least 15 (those that are rejected) is spam, and emails with negative scores are ham.

  * I set `extended_spam_headers = true;` in `local.d/milter_headers.conf` to get a report from rspamd in the headers showing the score and how it came to be.




### ARC setup

[ARC][8] is the ‘Authenticated Received Chain’ and is currently a DMARC working group work item. It allows forwarders / mailing lists to authenticate their forwarding of the emails and the checks they have performed.

rspamd is capable of validating and signing emails with ARC, but I’m not sure how much influence ARC has on gmail at the moment, for example.

There are three parts to setting up ARC:

  1. Generate a DKIM key pair (use `rspamadm dkim_keygen`)
  2. Set up rspamd to sign incoming emails using the private key
  3. Add a DKIM `TXT` record for the public key. `rspamadm` helpfully tells you what it looks like.



For step two, what we need to do is configure `local.d/arc.conf`. You can basically use the example configuration from the [rspamd page][9]; the key point for signing incoming email is to specify `sign_inbound = true;` and `use_domain_sign_inbound = "recipient";` (FWIW, none of these options are documented, they are fairly new, and nobody updated the documentation for them).

My configuration looks like this at the moment:

```
# If false, messages with empty envelope from are not signed
allow_envfrom_empty = true;
# If true, envelope/header domain mismatch is ignored
allow_hdrfrom_mismatch = true;
# If true, multiple from headers are allowed (but only first is used)
allow_hdrfrom_multiple = false;
# If true, username does not need to contain matching domain
allow_username_mismatch = false;
# If false, messages from authenticated users are not selected for signing
auth_only = true;
# Default path to key, can include '$domain' and '$selector' variables
path = "${DBDIR}/arc/$domain.$selector.key";
# Default selector to use
selector = "arc";
# If false, messages from local networks are not selected for signing
sign_local = true;
#
sign_inbound = true;
# Symbol to add when message is signed
symbol_signed = "ARC_SIGNED";
# Whether to fallback to global config
try_fallback = true;
# Domain to use for ARC signing: can be "header" or "envelope"
use_domain = "header";
use_domain_sign_inbound = "recipient";
# Whether to normalise domains to eSLD
use_esld = true;
# Whether to get keys from Redis
use_redis = false;
# Hash for ARC keys in Redis
key_prefix = "ARC_KEYS";
```

This would also sign any outgoing email, but I’m not sure that’s necessary - my understanding is that we only care about ARC when forwarding/receiving incoming emails, not when sending them (at least that’s what gmail does).

### Other Issues

There are a few other things to keep in mind when running your own mail server.
I probably don’t know them all yet, but here we go: + + * You must have a fully qualified hostname resolving to a public IP address + + * Your public IP address must resolve back to the fully qualified host name + + * Again, you should run a recursive DNS resolver so your DNS blacklists work (thanks waldi for pointing that out) + + * Setup an SPF record. Mine looks like this: + +`jak-linux.org. 3600 IN TXT "v=spf1 +mx ~all"` + +this states that all my mail servers may send email, but others probably should not (a softfail). Not having an SPF record can punish you; for example, rspamd gives missing SPF and DKIM a score of 1. + + * All of that software is sandboxed using AppArmor. Makes you question its security a bit less! + + + + +### Source code, outlook + +As always, you can find the Ansible roles on [GitHub][10]. Feel free to point out bugs! 😉 + +In the next installment of this series, we will be looking at setting up Dovecot, and configuring DKIM. We probably also want to figure out how to run notmuch on the server, keep messages in matching maildirs, and have my laptop synchronize the maildir and notmuch state with the server. Ugh, sounds like a lot of work. + +-------------------------------------------------------------------------------- + +via: https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/ + +作者:[Julian Andres Klode][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://blog.jak-linux.org/ +[b]: https://github.com/lujun9972 +[1]: http://www.postfix.org/ +[2]: https://rspamd.com/ +[3]: https://www.isc.org/downloads/bind/ +[4]: https://github.com/roehling/postsrsd +[5]: https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/rspamd-status.png +[6]: https://blog.jak-linux.org/2019/01/05/setting-up-an-email-server-part1/rspamd-spam.png +[7]: https://rspamd.com/doc/configuration/redis.html +[8]: http://arc-spec.org/ +[9]: https://rspamd.com/doc/modules/arc.html +[10]: https://github.com/julian-klode/ansible.jak-linux.org diff --git a/sources/tech/20190107 Aliases- To Protect and Serve.md b/sources/tech/20190107 Aliases- To Protect and Serve.md new file mode 100644 index 0000000000..783c59dc41 --- /dev/null +++ b/sources/tech/20190107 Aliases- To Protect and Serve.md @@ -0,0 +1,176 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Aliases: To Protect and Serve) +[#]: via: (https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve) +[#]: author: (Paul Brown https://www.linux.com/users/bro66) + +Aliases: To Protect and Serve +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/prairie-path_1920.jpg?itok=wRARsM7p) + +Happy 2019! Here in the new year, we’re continuing our series on aliases. By now, you’ve probably read our [first article on aliases][1], and it should be quite clear how they are the easiest way to save yourself a lot of trouble. You already saw, for example, that they helped with muscle-memory, but let's see several other cases in which aliases come in handy. + +### Aliases as Shortcuts + +One of the most beautiful things about Linux's shells is how you can use zillions of options and chain commands together to carry out really sophisticated operations in one fell swoop. 
All right, maybe beauty is in the eye of the beholder, but let's agree that this feature is practical.

The downside is that you often come up with recipes that are hard to remember or cumbersome to type. Say space on your hard disk is at a premium and you want to do some New Year's cleaning. Your first step may be to look for stuff to get rid of in your home directory. One criterion you could apply is to look for stuff you don't use anymore. `ls` can help with that:

```
ls -lct
```

The instruction above shows the details of each file and directory (`-l`) and also shows when each item's status was last changed (`-c`). It then orders the list from most recently changed to least recently changed (`-t`).

Is this hard to remember? You probably don't use the `-c` and `-t` options every day, so perhaps. In any case, defining an alias like

```
alias lt='ls -lct'
```

will make it easier.

Then again, you may want to have the list show the oldest files first:

```
alias lo='lt -F | tac'
```

![aliases][3]

Figure 1: The lt and lo aliases in action.

[Used with permission][4]

There are a few interesting things going on here. First, we are using an alias (`lt`) to create another alias -- which is perfectly okay. Second, we are passing a new parameter to `lt` (which, in turn, gets passed to `ls` through the definition of the `lt` alias).

The `-F` option appends special symbols to the names of items to better differentiate regular files (that get no symbol) from executable files (that get an `*`), files from directories (that end in `/`), and all of the above from links, symbolic and otherwise (that end in an `@` symbol). The `-F` option is a throwback to the days when terminals were monochrome and there was no other way to easily see the difference between items. You use it here because, when you pipe the output from `lt` through to `tac`, you lose the colors from `ls`.

The third thing to pay attention to is the use of piping. Piping happens when you pass the output from an instruction to another instruction. The second instruction can then use that output as its own input. In many shells (including Bash), you pipe something using the pipe symbol (`|`).

In this case, you are piping the output from `lt -F` into `tac`. `tac`'s name is a bit of a joke. You may have heard of `cat`, the instruction that was nominally created to con _cat_ enate files together, but that in practice is used to print out the contents of a file to the terminal. `tac` does the same, but prints out the contents it receives in reverse order. Get it? `cat` and `tac`. Developers, you so funny!

The thing is, both `cat` and `tac` can also print out stuff piped over from another instruction, in this case, a list of files ordered chronologically.

So... after that digression, what comes out of the other end is the list of files and directories of the current directory in inverse order of freshness.

The final thing you have to bear in mind is that, while `lt` will work on the current directory and any other directory...

```
# This will work:
lt
# And so will this:
lt /some/other/directory
```

... `lo` will only work with the current directory:

```
# This will work:
lo
# But this won't:
lo /some/other/directory
```

This is because Bash expands aliases into their components. When you type this:

```
lt /some/other/directory
```

Bash REALLY runs this:

```
ls -lct /some/other/directory
```

which is a valid Bash command.
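
If you want to check what Bash will substitute before running an alias, the `type` builtin prints the expansion; here it is for the `lt` alias defined above:

```
$ type lt
lt is aliased to `ls -lct'
```

The expansion of `lt` is a complete, valid command on its own.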
However, if you type this:

```
lo /some/other/directory
```

Bash tries to run this:

```
ls -lct -F | tac /some/other/directory
```

which is not a valid instruction, mainly because _/some/other/directory_ is a directory, and `cat` and `tac` don't do directories.

### More Alias Shortcuts

  * `alias lll='ls -R'` prints out the contents of a directory and then drills down and prints out the contents of its subdirectories and the subdirectories of the subdirectories, and so on and so forth. It is a way of seeing everything you have under a directory.

  * `alias mkdir='mkdir -pv'` lets you make directories within directories all in one go. With the base form of `mkdir`, to make a new directory containing a subdirectory you have to do this:

```
mkdir newdir
mkdir newdir/subdir
```

Or this:

```
mkdir -p newdir/subdir
```

while with the alias you would only have to do this:

```
mkdir newdir/subdir
```

Your new `mkdir` will also tell you what it is doing while it is creating new directories.




### Aliases as Safeguards

The other thing aliases are good for is as safeguards against erasing or overwriting your files accidentally. At this stage you have probably heard the legendary story about the new Linux user who ran:

```
rm -rf /
```

as root, and nuked the whole system. Then there's the user who decided that:

```
rm -rf /some/directory/ *
```

was a good idea and erased the complete contents of their home directory. Notice how easy it is to overlook that space separating the directory path and the `*`.

Both things can be avoided with the `alias rm='rm -i'` alias. The `-i` option makes `rm` ask the user whether that is what they really want to do, giving you a second chance before wreaking havoc in your file system.

The same goes for `cp`, which can overwrite a file without telling you anything. Create an alias like `alias cp='cp -i'` and stay safe!

### Next Time

We are moving more and more into scripting territory. Next time, we'll take the next logical step and see how combining instructions on the command line gives you really interesting and sophisticated solutions to everyday admin problems.
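
One last practical note: aliases defined at the prompt disappear when you close the shell. To keep the shortcuts and safeguards from this article across sessions, add them to your `~/.bashrc`. A minimal sketch collecting the aliases defined above:

```
# ~/.bashrc -- aliases from this article
alias lt='ls -lct'        # newest first, by status-change time
alias lo='lt -F | tac'    # oldest first (current directory only)
alias lll='ls -R'         # recursive listing
alias mkdir='mkdir -pv'   # create parent directories, verbosely
alias rm='rm -i'          # ask before deleting
alias cp='cp -i'          # ask before overwriting
```

Open a new terminal, or run `source ~/.bashrc`, for them to take effect.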

--------------------------------------------------------------------------------

via: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve

作者:[Paul Brown][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.linux.com/users/bro66
[b]: https://github.com/lujun9972
[1]: https://www.linux.com/blog/learn/2019/1/aliases-protect-and-serve
[2]: https://www.linux.com/files/images/fig01png-0
[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/fig01_0.png?itok=crqTm_va (aliases)
[4]: https://www.linux.com/licenses/category/used-permission
diff --git a/sources/tech/20190107 Different Ways To Update Linux Kernel For Ubuntu.md b/sources/tech/20190107 Different Ways To Update Linux Kernel For Ubuntu.md
new file mode 100644
index 0000000000..32a6a7dd3e
--- /dev/null
+++ b/sources/tech/20190107 Different Ways To Update Linux Kernel For Ubuntu.md
@@ -0,0 +1,232 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Different Ways To Update Linux Kernel For Ubuntu)
[#]: via: (https://www.ostechnix.com/different-ways-to-update-linux-kernel-for-ubuntu/)
[#]: author: (SK https://www.ostechnix.com/author/sk/)

Different Ways To Update Linux Kernel For Ubuntu
====== 

![](https://www.ostechnix.com/wp-content/uploads/2019/01/ubuntu-linux-kernel-720x340.png)

In this guide, we have given 7 different ways to update the Linux kernel for Ubuntu. Among the 7 methods, five require a system reboot to apply the new kernel and two don’t. Before updating the Linux kernel, it is **highly recommended to back up your important data!** All methods mentioned here are tested on Ubuntu OS only. We are not sure if they will work on other Ubuntu flavors (e.g. Xubuntu) and Ubuntu derivatives (e.g. Linux Mint).

### Part A – Kernel Updates with reboot

The following methods require you to reboot your system to apply the new Linux kernel. All of them are recommended for personal or testing systems. Again, please back up your important data, configuration files and any other important stuff from your Ubuntu system.

#### Method 1 – Update the Linux Kernel with dpkg (The manual way)

This method helps you to manually download and install the latest available Linux kernel from the **[kernel.ubuntu.com][1]** website. If you want to install the most recent version (either stable or release candidate), this method will help. Download the Linux kernel version from the above link. As of writing this guide, the latest available version was **5.0-rc1** and the latest stable version was **v4.20**.

![][3]

Click on the Linux kernel version link of your choice and find the section for your architecture (‘Build for XXX’). In that section, download the two files with these patterns (where X.Y.Z is the highest version):

  1. linux-image-*X.Y.Z*-generic-*.deb
  2. linux-modules-X.Y.Z*-generic-*.deb



In a terminal, change directory to where the files are and run this command to manually install the kernel:

```
$ sudo dpkg --install *.deb
```

Reboot to use the new kernel:

```
$ sudo reboot
```

Check that the kernel is as expected:

```
$ uname -r
```

For step by step instructions, please check the section titled “Install Linux Kernel 4.15 LTS On DEB based systems” in the following guide.

+ [Install Linux Kernel 4.15 In RPM And DEB Based Systems](https://www.ostechnix.com/install-linux-kernel-4-15-rpm-deb-based-systems/)

The above guide is specifically written for the 4.15 version. However, all the steps are the same for installing the latest versions too.

**Pros:** No internet needed (you can download the Linux kernel from any system).

**Cons:** Manual update. Reboot necessary.
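
If a manually installed kernel misbehaves, you can boot the previous kernel from the GRUB menu and then remove the offending packages with dpkg again. The version string below follows the X.Y.Z placeholder convention used above; list the installed packages first and substitute the exact names you find:

```
$ dpkg --list | grep linux-image
$ sudo dpkg --purge linux-image-X.Y.Z-generic linux-modules-X.Y.Z-generic
```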
#### Method 2 – Update the Linux Kernel with apt-get (The recommended method)

This is the recommended way to install the latest Linux kernel on Ubuntu-like systems. Unlike the previous method, this method downloads and installs the latest kernel version from the official Ubuntu repositories instead of the **kernel.ubuntu.com** website.

To update the whole system including the kernel, just do:

```
$ sudo apt-get update

$ sudo apt-get upgrade
```

If you want to update the kernel only, run:

```
$ sudo apt-get upgrade linux-image-generic
```

**Pros:** Simple. Recommended method.

**Cons:** Internet necessary. Reboot necessary.

Updating the kernel from the official repositories will mostly work out of the box without any problems. If it is a production system, this is the recommended way to update the kernel.

Methods 1 and 2 require user intervention to update the Linux kernel. The following methods (3, 4 & 5) are mostly automated.

#### Method 3 – Update the Linux Kernel with Ukuu

**Ukuu** is a Gtk GUI and command line tool that downloads the latest mainline Linux kernel from **kernel.ubuntu.com** and installs it automatically on your Ubuntu desktop and server editions. Ukuu not only simplifies the process of manually downloading and installing new kernels, but also helps you to safely remove old and unnecessary kernels. For more details, refer to the following guide.

+ [Ukuu – An Easy Way To Install And Upgrade Linux Kernel In Ubuntu-based Systems](https://www.ostechnix.com/ukuu-an-easy-way-to-install-and-upgrade-linux-kernel-in-ubuntu-based-systems/)

**Pros:** Easy to install and use. Automatically installs the mainline kernel.

**Cons:** Internet necessary. Reboot necessary.

#### Method 4 – Update the Linux Kernel with UKTools

Just like Ukuu, **UKTools** also fetches the latest stable kernel from the **kernel.ubuntu.com** site and installs it automatically on Ubuntu and its derivatives like Linux Mint. More details about UKTools can be found in the link given below.

+ [UKTools – Upgrade Latest Linux Kernel In Ubuntu And Derivatives](https://www.ostechnix.com/uktools-upgrade-latest-linux-kernel-in-ubuntu-and-derivatives/)

**Pros:** Simple. Automated.

**Cons:** Internet necessary. Reboot necessary.

#### Method 5 – Update the Linux Kernel with Linux Kernel Utilities

**Linux Kernel Utilities** is yet another program that makes the process of updating the Linux kernel easy on Ubuntu-like systems. It is actually a set of BASH shell scripts used to compile and/or update the latest Linux kernels for Debian and derivatives. It consists of three utilities: one for manually compiling and installing a kernel from the [**http://www.kernel.org**][4] sources, another for downloading and installing pre-compiled kernels, and a third for removing old kernels. For more details, please have a look at the following link.

+ [Linux Kernel Utilities – Scripts To Compile And Update Latest Linux Kernel For Debian And Derivatives](https://www.ostechnix.com/linux-kernel-utilities-scripts-compile-update-latest-linux-kernel-debian-derivatives/)

**Pros:** Simple. Automated.

**Cons:** Internet necessary. Reboot necessary.
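
Whichever of these reboot-based methods you use, old kernels tend to accumulate in /boot over time. Independent of the tools above, the standard apt way to prune kernels that are no longer needed on Ubuntu is:

```
$ sudo apt autoremove --purge
```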
+
+Methods 1 and 2 require user intervention to update the Linux Kernel. The following methods (3, 4 & 5) are mostly automated.
+
+#### Method 3 – Update the Linux Kernel with Ukuu
+
+**Ukuu** is a Gtk GUI and command line tool that downloads the latest mainline Linux kernel from **kernel.ubuntu.com**, and installs it automatically on your Ubuntu desktop and server editions. Ukuu not only simplifies the process of manually downloading and installing new Kernels, but also helps you safely remove old and unnecessary Kernels. For more details, refer to the following guide.
+
++ [Ukuu – An Easy Way To Install And Upgrade Linux Kernel In Ubuntu-based Systems](https://www.ostechnix.com/ukuu-an-easy-way-to-install-and-upgrade-linux-kernel-in-ubuntu-based-systems/)
+
+**Pros:** Easy to install and use. Automatically installs the mainline Kernel.
+
+**Cons:** Internet necessary. Reboot necessary.
+
+#### Method 4 – Update the Linux Kernel with UKTools
+
+Just like Ukuu, **UKTools** also fetches the latest stable Kernel from the **kernel.ubuntu.com** site and installs it automatically on Ubuntu and its derivatives like Linux Mint. More details about UKTools can be found in the link given below.
+
++ [UKTools – Upgrade Latest Linux Kernel In Ubuntu And Derivatives](https://www.ostechnix.com/uktools-upgrade-latest-linux-kernel-in-ubuntu-and-derivatives/)
+
+**Pros:** Simple. Automated.
+
+**Cons:** Internet necessary. Reboot necessary.
+
+#### Method 5 – Update the Linux Kernel with Linux Kernel Utilities
+
+**Linux Kernel Utilities** is yet another program that makes the process of updating the Linux kernel easy on Ubuntu-like systems. It is actually a set of BASH shell scripts used to compile and/or update the latest Linux kernels for Debian and derivatives. It consists of three utilities: one for manually compiling and installing the Kernel from source from the [**http://www.kernel.org**][4] website, another for downloading and installing pre-compiled Kernels from the **** website, and a third script for removing old kernels. For more details, please have a look at the following link.
+
++ [Linux Kernel Utilities – Scripts To Compile And Update Latest Linux Kernel For Debian And Derivatives](https://www.ostechnix.com/linux-kernel-utilities-scripts-compile-update-latest-linux-kernel-debian-derivatives/)
+
+**Pros:** Simple. Automated.
+
+**Cons:** Internet necessary. Reboot necessary.
+
+
+### Part B – Kernel Updates without reboot
+
+As I already said, all of the above methods need you to reboot the server before the new kernel is active. If they are personal systems or testing machines, you could simply reboot and start using the new Kernel. But what if they are production systems that require zero downtime? No problem. This is where **livepatching** comes in handy!
+
+**Livepatching** (or hot patching) allows you to install Linux updates or patches without rebooting, keeping your server at the latest security level without any downtime. This is attractive for 'always-on' servers, such as web hosts and gaming servers; in fact, any situation where the server needs to stay on all the time. Linux vendors maintain patches only for security fixes, so this approach is best when security is your main concern.
+
+The following two methods don't require a system reboot and are useful for updating the Linux Kernel on production and mission-critical Ubuntu servers.
+
+#### Method 6 – Update the Linux Kernel with Canonical Livepatch Service
+
+![][5]
+
+[**Canonical Livepatch Service**][6] applies Kernel updates, patches and security hotfixes automatically without rebooting your Ubuntu systems. It reduces downtime and keeps them secure. Canonical Livepatch Service can be set up either during or after installation. If you are using desktop Ubuntu, the Software Updater will automatically check for kernel patches and notify you. On a console-based system, it is up to you to run apt-get update regularly. It will install kernel security patches only when you run the command "apt-get upgrade", hence it is semi-automatic.
+
+Livepatch is free for three systems. If you have more than three, you need to upgrade to the enterprise support solution named the **Ubuntu Advantage** suite. This suite includes **Kernel Livepatching** and other services such as:
+
+ * Extended Security Maintenance – critical security updates after Ubuntu end-of-life.
+ * Landscape – the systems management tool for using Ubuntu at scale.
+ * Knowledge Base – A private collection of articles and tutorials written by Ubuntu experts.
+ * Phone and web-based support.
+
+
+
+**Cost**
+
+Ubuntu Advantage includes three paid plans, namely Essential, Standard and Advanced. The basic plan (the Essential plan) starts from **225 USD per year for one physical node** and **75 USD per year for one VPS**. It seems there is no monthly subscription for Ubuntu servers and desktops. You can view detailed information on all plans [**here**][7].
+
+**Pros:** Simple. Semi-automatic. No reboot necessary. Free for 3 systems.
+
+**Cons:** Expensive for 4 or more hosts. No patch rollback.
+
+**Enable Canonical Livepatch Service**
+
+If you want to set up the Livepatch service after installation, just follow these steps.
+
+Get a key at [**https://auth.livepatch.canonical.com/**][8].
+
+```
+$ sudo snap install canonical-livepatch
+
+$ sudo canonical-livepatch enable your-key
+```
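+
+Once it's enabled, you can confirm that the service is active and see which patches have been applied with the status command that ships with the snap:
+
+```
+$ canonical-livepatch status --verbose
+```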
+
+#### Method 7 – Update the Linux Kernel with KernelCare
+
+![][9]
+
+[**KernelCare**][10] is the newest of all the live patching solutions. It is the product of [CloudLinux][11]. KernelCare runs on Ubuntu and other flavors of Linux. It checks for patch releases every 4 hours and will install them without confirmation. Patches can be rolled back if there are problems.
+
+**Cost**
+
+Fees, per server: **4 USD per month**, **45 USD per year**.
+
+Compared to Ubuntu Livepatch, KernelCare seems very cheap and affordable. The good thing is that **monthly subscriptions are also available**. Another notable feature is that it supports other Linux distributions, such as Red Hat, CentOS, Debian, Oracle Linux and Amazon Linux, as well as virtualization platforms like OpenVZ, Proxmox etc.
+
+You can read all the features and benefits of KernelCare [**here**][12] and check all available plan details [**here**][13].
+
+**Pros:** Simple. Fully automated. Wide OS coverage. Patch rollback. No reboot necessary. Free license for non-profit organizations. Low cost.
+
+**Cons:** Not free (except for a 30-day trial).
+
+**Enable KernelCare Service**
+
+Get a 30-day trial key at [**https://cloudlinux.com/kernelcare-free-trial5**][14].
+
+Run the following commands to enable KernelCare and register the key.
+
+```
+$ sudo wget -qq -O - https://repo.cloudlinux.com/kernelcare/kernelcare_install.sh | bash
+
+$ sudo /usr/bin/kcarectl --register KEY
+```
+
+If you're looking for an affordable and reliable commercial service to keep the Linux Kernel on your Linux servers updated, KernelCare is a good way to go.
+
+*With inputs from **Paul A. Jacobs**, a Technical Evangelist and Content Writer from Cloud Linux.*
+
+And that's all for now. Hope this was useful. If you believe any other tools/methods should be included in this list, feel free to let us know in the comment section below. I will check and update this guide accordingly.
+
+More good stuff to come. Stay tuned!
+
+Cheers!
+
+
+
+--------------------------------------------------------------------------------
+
+via: https://www.ostechnix.com/different-ways-to-update-linux-kernel-for-ubuntu/
+
+作者:[SK][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.ostechnix.com/author/sk/
+[b]: https://github.com/lujun9972
+[1]: http://kernel.ubuntu.com/~kernel-ppa/mainline/
+[2]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[3]: http://www.ostechnix.com/wp-content/uploads/2019/01/Ubuntu-mainline-kernel.png
+[4]: http://www.kernel.org
+[5]: http://www.ostechnix.com/wp-content/uploads/2019/01/Livepatch.png
+[6]: https://www.ubuntu.com/livepatch
+[7]: https://www.ubuntu.com/support/plans-and-pricing
+[8]: https://auth.livepatch.canonical.com/
+[9]: http://www.ostechnix.com/wp-content/uploads/2019/01/KernelCare.png
+[10]: https://www.kernelcare.com/
+[11]: https://www.cloudlinux.com/
+[12]: https://www.kernelcare.com/update-kernel-linux/
+[13]: https://www.kernelcare.com/pricing/
+[14]: https://cloudlinux.com/kernelcare-free-trial5
diff --git a/sources/tech/20190107 DriveSync - Easy Way to Sync Files Between Local And Google Drive from Linux CLI.md b/sources/tech/20190107 DriveSync - Easy Way to Sync Files Between Local And Google Drive from Linux CLI.md
new file mode 100644
index 0000000000..6552cc3905
--- /dev/null
+++ b/sources/tech/20190107 DriveSync - Easy Way to Sync Files Between Local And Google Drive from Linux CLI.md
@@ -0,0 +1,239 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (DriveSync – Easy Way to Sync Files Between Local And Google Drive from Linux CLI)
+[#]: via: (https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+DriveSync – Easy Way to Sync Files Between Local And Google Drive from Linux CLI
+======
+
+Google Drive is one of the best cloud storage services when compared with other offerings.
+
+It's an application used by millions of users on a daily basis.
+
+It allows users to access their files anywhere, irrespective of the device.
+
+We can upload, download & share documents, photos, files, spreadsheets, etc. with anyone, securely.
+
+We have already written a few articles on the 2daygeek website about mapping Google Drive in Linux.
+
+If you would like to check those out, navigate to the following links.
+
+The GNOME desktop offers an easy way to **[Integrate Google Drive Using Gnome Nautilus File Manager in Linux][1]** without any headache.
+
+Also, you can give the **[Google Drive Ocamlfuse Client][2]** a try.
+
+### What's DriveSync?
+
+[DriveSync][3] is a utility that synchronizes your files between the local system and Google Drive via the command line.
+
+It downloads new remote files, uploads new local files to your Drive, and deletes or updates files both locally and on Drive if they have changed in one place.
+
+It allows blacklisting or whitelisting of files and folders that should not / should be synced.
+
+It is written in the Ruby scripting language, so make sure your system has Ruby installed. If it's not installed, install it as a prerequisite for DriveSync.
+
+### DriveSync Features
+
+ * Downloads new remote files
+ * Uploads new local files
+ * Deletes or updates files both locally and on Drive
+ * Allows a blacklist to disable sync for files and folders
+ * Automates the sync using a cron job
+ * Allows us to set the file upload/download size (default 512MB)
+ * Allows us to modify the timeout threshold
+
+
+
+### How to Install Ruby Scripting Language in Linux?
+
+Ruby is an interpreted scripting language for quick and easy object-oriented programming. It has many features to process text files and to do system management tasks (like in Perl). It is simple, straight-forward, and extensible.
+
+It's available in the official repositories of all Linux distributions. Hence, we can easily install it with the help of the distribution's official **[Package Manager][4]**.
+
+For **`Fedora`** systems, use the **[DNF Command][5]** to install Ruby.
+
+```
+$ sudo dnf install ruby rubygem-bundler
+```
+
+For **`Debian/Ubuntu`** systems, use the **[APT-GET Command][6]** or **[APT Command][7]** to install Ruby.
+
+```
+$ sudo apt install ruby ruby-bundler
+```
+
+For **`Arch Linux`** based systems, use the **[Pacman Command][8]** to install Ruby.
+
+```
+$ sudo pacman -S ruby ruby-bundler
+```
+
+For **`RHEL/CentOS`** systems, use the **[YUM Command][9]** to install Ruby.
+
+```
+$ sudo yum install ruby ruby-bundler
+```
+
+For **`openSUSE Leap`** systems, use the **[Zypper Command][10]** to install Ruby.
+
+```
+$ sudo zypper install ruby ruby-bundler
+```
+
+### How to Install DriveSync in Linux?
+
+DriveSync installation is also easy. Follow the procedure below to get it done.
+
+```
+$ git clone https://github.com/MStadlmeier/drivesync.git
+$ cd drivesync/
+$ bundle install
+```
+
+### How to Set Up DriveSync in Linux?
+
+We have now successfully installed DriveSync, but we still need to perform a few steps before using it.
+
+Run the following command to set it up and sync the files.
+
+```
+$ ruby drivesync.rb
+```
+
+When you run the above command, you will get the below URL.
+![][12]
+
+Navigate to the given URL in your preferred web browser and follow the instructions. It will open a Google sign-in page. Enter your credentials, then hit the Sign in button.
+![][13]
+
+Input your password.
+![][14]
+
+Hit the **`Allow`** button to allow DriveSync to access your Google Drive.
+![][15]
+
+Finally, it will give you an authorization code.
+![][16]
+
+Just copy and paste it into the terminal and hit **`Enter`** to start the sync.
+![][17]
+
+Yes, it's syncing the files from Google Drive to my local folder.
+
+```
+$ ruby drivesync.rb
+Warning: Could not find config file at /home/daygeek/.drivesync/config.yml . Creating default config...
+Open the following URL in the browser and enter the resulting code after authorization
+https://accounts.google.com/o/oauth2/auth?access_type=offline&approval_prompt=force&client_id=xxxxxxxxxxxxxxxxxxxxxxxxxxxx.apps.googleusercontent.com&include_granted_scopes=true&redirect_uri=urn:ietf:wg:oauth:2.0:oob&response_type=code&scope=https://www.googleapis.com/auth/drive
+4/ygAxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+Local folder is 1437 files behind and 0 files ahead of remote
+Starting sync at 2019-01-06 19:48:49 +0530
+Downloading file 2018-07-31-17-48-54-635_1533039534635_XXXPM0534X_ITRV.zip ...
+Downloading file 5459XXXXXXXXXX25_11-03-2018.PDF ...
+Downloading file 2g-image-design/new-design-28-Mar-2018/new-base-format-icon-theme.svg ...
+Downloading file 2g-image-design/new-design-28-Mar-2018/2g-banner-format.svg ...
+Downloading file 2g-image-design/new-design-28-Mar-2018/new-base-format.svg ...
+Downloading file documents/Magesh_Resume_Updated_26_Mar_2018.doc ...
+Downloading file documents/Magesh_Resume_updated-new.doc ...
+Downloading file documents/Aadhaar-Thanu.PNG ...
+Downloading file documents/Aadhaar-Magesh.PNG ...
+Downloading file documents/Copy of PP-docs.pdf ...
+Downloading file EAadhaar_2189821080299520170807121602_25082017123052_172991.pdf ...
+Downloading file Tanisha/VID_20170223_113925.mp4 ...
+Downloading file Tanisha/VID_20170224_073234.mp4 ...
+Downloading file Tanisha/VID_20170304_170457.mp4 ...
+Downloading file Tanisha/IMG_20170225_203243.jpg ...
+Downloading file Tanisha/IMG_20170226_123949.jpg ...
+Downloading file Tanisha/IMG_20170226_123953.jpg ...
+Downloading file Tanisha/IMG_20170304_184227.jpg ...
+.
+.
+.
+Sync complete.
+```
+
+It will create the **`drive`** folder under **`/home/user/Documents/`** and sync all the files in it.
+![][18]
+
+The DriveSync configuration files are located at **`/home/user/.drivesync/`** if you installed it in your **home** directory.
+
+```
+$ ls -lh ~/.drivesync/
+total 176K
+-rw-r--r-- 1 daygeek daygeek 1.9K Jan  6 19:42 config.yml
+-rw-r--r-- 1 daygeek daygeek 170K Jan  6 21:31 manifest
+```
+
+You can make your changes by modifying the **`config.yml`** file.
+
+### How to Verify Whether Sync is Working Fine or Not?
+
+To test this, we are going to create a new folder called **`2g-docs-2019`** and add an image file to it. Once that's done, run the **`drivesync.rb`** command again.
+
+```
+$ ruby drivesync.rb
+Local folder is 0 files behind and 1 files ahead of remote
+Starting sync at 2019-01-06 21:59:32 +0530
+Uploading file 2g-docs-2019/Asciinema - Record And Share Your Terminal Activity On The Web.png ...
+```
+
+Yes, it has been synced to Google Drive. The same can be verified through the web browser.
+![][19]
+
+Create the cron job below to enable automatic syncing. The following cron job runs every minute.
+
+```
+$ vi crontab
+*/1 * * * * ruby ~/drivesync/drivesync.rb
+```
+
+I have added one more file to test this. Yes, it succeeded.
+
+```
+Jan 07 09:36:01 daygeek-Y700 crond[590]: (daygeek) RELOAD (/var/spool/cron/daygeek)
+Jan 07 09:36:01 daygeek-Y700 crond[20942]: pam_unix(crond:session): session opened for user daygeek by (uid=0)
+Jan 07 09:36:01 daygeek-Y700 CROND[20943]: (daygeek) CMD (ruby ~/drivesync/drivesync.rb)
+Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Local folder is 0 files behind and 1 files ahead of remote)
+Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Starting sync at 2019-01-07 09:36:26 +0530)
+Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Uploading file 2g-docs-2019/Check CPU And HDD Temperature In Linux.png ...)
+Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT ()
+Jan 07 09:36:29 daygeek-Y700 CROND[20942]: (daygeek) CMDOUT (Sync complete.)
+Jan 07 09:36:29 daygeek-Y700 CROND[20942]: pam_unix(crond:session): session closed for user daygeek
+```
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/mount-access-setup-google-drive-in-linux/
+[2]: https://www.2daygeek.com/mount-access-google-drive-on-linux-with-google-drive-ocamlfuse-client/
+[3]: https://github.com/MStadlmeier/drivesync
+[4]: https://www.2daygeek.com/category/package-management/
+[5]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
+[6]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
+[7]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
+[8]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/
+[9]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/
+[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/
+[11]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-1.jpg
+[13]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-2.png
+[14]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-3.png
+[15]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-4.png
+[16]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-5.png
+[17]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-6.jpg
+[18]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-7.jpg
+[19]: https://www.2daygeek.com/wp-content/uploads/2019/01/drivesync-easy-way-to-sync-files-between-local-and-google-drive-from-linux-cli-8.png
diff --git a/sources/tech/20190107 How to manage your media with Kodi.md b/sources/tech/20190107 How to manage your media with Kodi.md
new file mode 100644
index 0000000000..cea446c5b0
--- /dev/null
+++ b/sources/tech/20190107 How to manage your media with Kodi.md
@@ -0,0 +1,303 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (How to manage your media with Kodi)
+[#]: via: (https://opensource.com/article/19/1/manage-your-media-kodi)
+[#]: author: (Steve Ovens https://opensource.com/users/stratusss)
+
+How to manage your media with Kodi
+======
+
+Get control over your home media content with Kodi media player software.
+
+If you, like me, like to own your own data, chances are you also like to purchase movies and TV shows on Blu-Ray or DVD discs. And you may also like to make [ISOs][1] of the videos to keep exact digital copies, as I do.
+
+For a little while, it might be manageable to have a bunch of files stored in some sort of directory structure. However, as your collection grows, you may want features like resuming from a specific spot; keeping track of where you left off watching a video (i.e., its watched status); storing episode or movie summaries and movie trailers; buying media in multiple languages; or just having a sane way to play all those ISOs you ripped.
+
+This is where Kodi comes in.
+
+### What is Kodi?
+
+Modern [Kodi][2] is the successor to Xbox Media Player, which was discontinued way back in 2003. In June 2004, Xbox Media Center (XBMC) was born. For over three years, XBMC remained on the Xbox. Then in 2007, work began in earnest to port the media player over to Linux.
+
+![](https://opensource.com/sites/default/files/uploads/00_xbmc_500x300.png)
+
+Aside from some uninteresting technical history, things remained fairly stable, and XBMC grew in prominence. By 2014, XBMC had a thriving community, and its core functionality grew to include playing games, streaming content from the web, and connecting to mobile devices. This, combined with legal issues involving Xbox in the name, led the team behind XBMC to rename it Kodi. Kodi is now branded as an "entertainment hub that brings all your digital media together into a beautiful and user-friendly package."
+
+Today, Kodi has an extensible interface that has allowed the open source community to build new functionality using plugins. Note that, as with all open source software, Kodi's developers are not responsible for the ecosystem's plugins.
+
+### How do I start?
+
+For Ubuntu-based distributions, Kodi is just a few short commands away:
+
+```
+sudo apt install software-properties-common
+sudo add-apt-repository ppa:team-xbmc/ppa
+sudo apt update
+sudo apt install kodi
+```
+
+In Arch Linux, you can install the latest version from the community repo:
+
+```
+sudo pacman -S kodi
+```
+
+Packages were maintained for Fedora 26 by RPM Fusion (referenced in the [Kodi documentation][3]).
I tried it on Fedora 29, and it was quite unstable. I'm sure that this will improve over time, but my experience is that Fedora 29 is not the ideal platform for Kodi. + +### OK, it's installed… now what? + +Before we proceed, note that I am making two assumptions about your media content: + + 1. You have your own local, legally attained content. + 2. You have already transferred this content from your DVDs, Blu-Rays, or another digital distribution source to your local directory or network. + + + +Kodi uses a scraping service to pull down TV and movie metadata. For Kodi to match things appropriately, I recommend adopting a directory and file-naming structure similar to this: + +``` +Utopia +├── Utopia.S01.dvd_rip.x264 +│   ├── Utopia.S01E01.dvd_rip.x264.mkv +│   ├── Utopia.S01E02.dvd_rip.x264.mkv +│   ├── Utopia.S01E03.dvd_rip.x264.mkv +│   ├── Utopia.S01E04.dvd_rip.x264.mkv +│   ├── Utopia.S01E05.dvd_rip.x264.mkv +│   ├── Utopia.S01E06.dvd_rip.x264.mkv +└── Utopia.S02.dvd_rip.x264 +    ├── Utopia.S02E01.dvd_rip.x264.mkv +    ├── Utopia.S02E02.dvd_rip.x264.mkv +    ├── Utopia.S02E03.dvd_rip.x264.mkv +    ├── Utopia.S02E04.dvd_rip.x264.mkv +    ├── Utopia.S02E05.dvd_rip.x264.mkv +    └── Utopia.S02E06.dvd_rip.x264.mkv +``` + +I put the source (my DVD) and the codec (x264) in the title, but these are optional. For a TV series, you can include the episode title in the filename if you like. The important part is **SxxExx** , which stands for Season and Episode. This is how Kodi (and by extension the scrapers) can identify your media. + +Assuming you have organized your media like this, let's do some basic Kodi configuration. + +### Add video sources + +Adding video sources is a simple, six-step process: + + 1. Enter the files section + 2. Select **Files** + 3. Click **Add source** + 4. Browse to your source + 5. Define the video content type + 6. Refresh the metadata + + + +If you're impatient, feel free to navigate these steps on your own. But if you want details, keep reading. + +When you first launch Kodi, you'll see the home screen below. Click **Enter files section**. It doesn't matter whether you do this under Movies (as shown here) or TV shows. + +![](https://opensource.com/sites/default/files/uploads/01_fresh_kodi_main_screen.png) + +Next, select the **Videos** folder, click **Files** , and choose **Add videos**. + +![](https://opensource.com/sites/default/files/uploads/02_videos_folder.png) + +![](https://opensource.com/sites/default/files/uploads/03_add_videos.png) + +Either click on **None** and start typing the path to your files or click **Browse** and use the file navigation. + +![](https://opensource.com/sites/default/files/uploads/04_browse_video_source.png) + +![](https://opensource.com/sites/default/files/uploads/05_add_video_source_name.png) + +As you can see in this screenshot, I added my local **Videos** directory. You can set some default options through **Browse** , such as specifying your home folder and any drives you have mounted—maybe on a network file system (NFS), universal plug and play (UPnP) device, Windows Network ([SMB/CIFS][4]), or [zeroconf][5]. I won't cover most of these, as they are outside the scope of this article, but we will use NFS later for one of Kodi's advanced features. + +After you select your path and click OK, identify the type of content you're working with. + +![](https://opensource.com/sites/default/files/uploads/06_define_video_content.png) + +Next, Kodi prompts you to refresh the metadata for the content in the selected directory. 
This is how Kodi knows what videos you have and their synopsis, cast information, thumbnails, fan art, etc. Select **Yes** , and you can watch the video-scanning progress in the top right-hand corner. + +![](https://opensource.com/sites/default/files/uploads/07_refresh.png) + +![](https://opensource.com/sites/default/files/uploads/08_active_scan_in_progress.png) + +When the scan completes, you'll see lots of useful information, such as video overviews and season and episode descriptions for TV shows. + +![](https://opensource.com/sites/default/files/uploads/09_screen_after_scan.png) + +![](https://opensource.com/sites/default/files/uploads/10_show_description.png) + +You can use the same process for other types of content, such as music or music videos. + +### Increase functionality with add-ons + +One of the most interesting things about open source projects is that the community often extends them well beyond their initial scope. Kodi has a very robust add-on infrastructure. Most of them are produced by Kodi fans who want to extend its default functionality, and sometimes companies (such as the [Plex][6] content streaming service) release official plugins. Be very careful about adding plugins from untrusted sources. Just because you find an add-on on the internet does not mean it is safe! + +**Be warned:** Add-ons are not supported by Kodi's core team! + +Having said that, there are many useful add-ons that are worth your consideration. In my house, we use Kodi for local playback and Plex when we want to access our content outside the house—with one exception. One of our rooms has a poor WiFi signal. I rip my Blu-Rays to very large MKV files (usually 20–40GB each), and the WiFi (and therefore Kodi) can't handle the files without stuttering. Although you can (and we have) dug into some of the advanced buffering options, even those tweaks have proved insufficient with very large files. Since we already have a Plex server that can transcode content, we solved our problem with a Kodi add-on. + +To show how to install an add-on, I'll use Plex as an example. First, click on **Add-ons** in the side panel and select **Enter add-on browser**. Either use the search function or scroll down until you find Plex. + +![](https://opensource.com/sites/default/files/uploads/11_addons.png) + +Select the Plex add-on and click the **Install** button in the lower right-hand corner. + +![](https://opensource.com/sites/default/files/uploads/13_install_plex_addon.png) + +Once the download completes, you can access Plex on the main Kodi screen under **Add-ons**. + +![](https://opensource.com/sites/default/files/uploads/14_addons_finished_installing.png) + +There are several ways to configure an add-on. Some add-ons, such as NHL TV, are configured via a menu accessed by right-clicking the add-on and selecting Configure. Others, such as Plex, display a configuration walk-through when they launch. If an add-on doesn't seem to be configured when you first launch it, try right-clicking its menu and see if a settings option is available there. + +### Coordinating metadata across Kodi devices + +In our house, we have multiple machines that run Kodi. By default, Kodi tracks metadata, such as a video's watched status and show information, locally. Therefore, content updates on one machine won't appear on any other machine—unless you configure all your Kodi devices to store metadata inside an SQL database (which is a feature Kodi supports). This technique is not particularly difficult, but it is more advanced. 
If you're willing to put in the effort, here's how to do it.
+
+#### Before you begin
+
+There are a few things you need to know before configuring shared status for Kodi.
+
+ 1. All content must be on a network share ([Samba][7], NFS, etc.).
+ 2. All content must be mounted via the network protocol, even if the disks are local to the machine. That means that no matter where the content is physically located, each client must be configured to use a network fileshare source.
+ 3. You need to be running an SQL-style database. Kodi's official guide walks through MySQL, but I chose MariaDB.
+ 4. All clients need to have the database port open (port 3306 in the case of MySQL/MariaDB) or the firewalls disabled.
+ 5. All clients must be running the same version of Kodi
+
+
+
+#### Install and configure the database
+
+If you're running Ubuntu, you can install MariaDB with the following commands:
+
+```
+sudo apt update
+sudo apt install mariadb-server -y
+```
+
+I am running MariaDB on an Arch Linux machine. The [Arch Wiki][8] documents the initial setup process well, but I'll summarize it here.
+
+To install, issue the following command:
+
+```
+sudo pacman -S mariadb
+```
+
+Most distributions of MariaDB will have the same setup commands. I recommend that you understand what the commands do, but you can safely take the defaults if you're in a home environment.
+
+```
+sudo systemctl start mariadb
+sudo mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
+sudo mysql_secure_installation
+```
+
+Next, edit the MariaDB config file. This file is different depending on your distribution. On Ubuntu, you want to edit **/etc/mysql/mariadb.conf.d/50-server.cnf**. On Arch, the file is either **/etc/my.cnf** or **/etc/mysql/my.cnf**. Locate the line that says **bind-address = 127.0.0.1** and change it to your desired Ethernet port's IP address or to **bind-address = 0.0.0.0** if you want it to listen on all interfaces.
+
+Restart the service so the change will take effect:
+
+```
+sudo systemctl restart mariadb
+```
+
+#### Configure Kodi and MariaDB/MySQL
+
+To enable Kodi to write to the database, one of two things needs to happen: You can create the database yourself, or you can let Kodi do it for you. In this case, since the only database on this system is for Kodi, I'll create a user with the rights to create any databases that Kodi requires. Do NOT do this if the machine runs more than one database.
+
+```
+mysql -u root -p
+CREATE USER 'kodi' IDENTIFIED BY 'kodi';
+GRANT ALL ON *.* TO 'kodi';
+flush privileges;
+\q
+```
+
+This grants the user all rights—essentially enabling it to act as a root user. For my purposes, this is fine.
+
+Next, on each Kodi device where you want to share metadata, create the following file: **/home/<user>/.kodi/userdata/advancedsettings.xml**. This file can contain a lot of very advanced, tweakable settings. My devices have these settings:
+
+```
+<advancedsettings>
+    <videodatabase>
+        <type>mysql</type>
+        <host>mysql-arch.example.com</host>
+        <port>3306</port>
+        <user>kodi</user>
+        <pass>kodi</pass>
+    </videodatabase>
+    <videolibrary>
+        <importwatchedstate>true</importwatchedstate>
+        <importresumepoint>true</importresumepoint>
+    </videolibrary>
+    <cache>
+        <buffermode>1</buffermode>
+        <memorysize>322122547</memorysize>
+        <readfactor>20</readfactor>
+    </cache>
+</advancedsettings>
+```
+
+The **<cache>** section—which sets how much of a file Kodi will buffer over the network—is optional in this scenario. See the [Kodi wiki][9] for a full breakdown of this file and its options.
+
+Once the configuration is complete, it's a good idea to close and reopen Kodi to make sure the settings are applied.
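+
+If a client has trouble reaching the shared library after the restart, a quick way to rule out connectivity problems is to query the database server directly from the client machine. This sketch assumes the mysql (or mariadb) client package is installed there; the host, user, and password match the advancedsettings.xml above:
+
+```
+mysql -u kodi -pkodi -h mysql-arch.example.com -e "SELECT VERSION();"
+```
+
+If that prints a version string, networking and credentials are fine, and any remaining issue lies in Kodi's own configuration.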
+The final step is configuring all the Kodi clients to use the same network share for all their content. Only one client needs to scrape/refresh the metadata if everything is created successfully. When data is collected, you should see that Kodi creates a new database on your SQL server:
+
+```
+[kodi@kodi-mysql ~]$ mysql -u root -p
+Enter password:
+Welcome to the MariaDB monitor.  Commands end with ; or \g.
+Your MariaDB connection id is 180
+Server version: 10.1.37-MariaDB MariaDB Server
+
+Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
+
+Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
+
+MariaDB [(none)]> show databases;
++--------------------+
+| Database           |
++--------------------+
+| MyVideos107        |
+| information_schema |
+| mysql              |
+| performance_schema |
++--------------------+
+4 rows in set (0.00 sec)
+```
+
+### Wrapping up
+
+This article walked through how to get up and running with the basic functionality of Kodi. You should be able to add content and pull down metadata to make browsing your media more convenient.
+
+You also know how to search for, install, and potentially configure add-ons for additional features. Be extra careful when downloading add-ons, as they are provided by the community at large and not the core developers. It's best to use add-ons only from organizations or companies you trust.
+
+And you know a bit about sharing metadata across multiple devices. You've been introduced to **advancedsettings.xml**; hopefully it has piqued your interest. Kodi has a lot of dials and knobs to turn, and you can squeeze a lot of performance and functionality out of the platform with enough experimentation.
+
+Are you interested in doing more tweaking? What are some of your favorite add-ons or settings? Do you want to know how to change the user interface? What are some of your favorite skins? Let me know in the comments!
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/manage-your-media-kodi
+
+作者:[Steve Ovens][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://opensource.com/users/stratusss
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/ISO_image
+[2]: https://kodi.tv/
+[3]: https://kodi.wiki/view/HOW-TO:Install_Kodi_for_Linux#Fedora
+[4]: https://en.wikipedia.org/wiki/Server_Message_Block
+[5]: https://en.wikipedia.org/wiki/Zero-configuration_networking
+[6]: https://www.plex.tv
+[7]: https://www.samba.org/
+[8]: https://wiki.archlinux.org/index.php/MySQL
+[9]: https://kodi.wiki/view/Advancedsettings.xml
diff --git a/sources/tech/20190107 Testing isn-t everything.md b/sources/tech/20190107 Testing isn-t everything.md
new file mode 100644
index 0000000000..b2a2daaaac
--- /dev/null
+++ b/sources/tech/20190107 Testing isn-t everything.md
@@ -0,0 +1,135 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Testing isn't everything)
+[#]: via: (https://arp242.net/weblog/testing.html)
+[#]: author: (Martin Tournoij https://arp242.net/)
+
+Testing isn't everything
+======
+
+This is adapted from a discussion about [Want to write good unit tests in go? Don't panic… or should you?][1] While this mainly talks about Go, a lot of the points also apply to other languages.
+ +Some of the most difficult code I’ve worked with is code that is “easily testable”. Code that abstracts everything to the point where you have no idea what’s going on, just so that it can add a “unit test” to what would otherwise be a very straightforward function. DHH called this [Test-induced design damage][2]. + +Testing is just one tool to make sure that your program works, out of several. Another very important tool is writing code in such a way that it is easy to understand and reason about (“simplicity”). + +Books that advocate extensive testing – such as Robert C. Martin’s Clean Code – were written, in part, as a response to ever more complex programs, where you read 1,000 lines of code but still had no idea what’s going on. I recently had to port a simple Java “emoji replacer” (😂 ➙ 😂) to Go. To ensure compatibility I looked up the im­ple­men­ta­tion. It was a whole bunch of classes, factories, and whatnot which all just resulted in calling a regexp on a string. 🤷 + +In dynamic languages like Ruby and Python tests are important for a different reason, as something like this will “work” just fine: + +``` +if condition: + print('w00t') +else: + nonexistent_function() +``` + +Except of course if that `else` branch is entered. It’s easy to typo stuff, or mix stuff up. + +In Go, both of these problems are less of a concern. It has a good static type system, and the focus is on simple straightforward code that is easy to comprehend. Even for a number of dynamic languages there are optional typing systems (function annotations in Python, TypeScript for JavaScript). + +Sometimes you can do a straightforward implementation that doesn’t sacrifice anything for testability; great! But sometimes you have to strike a balance. For some code, not adding a unit test is fine. + +Intensive focus on “unit tests” can be incredibly damaging to a code base. Some codebases have a gazillion unit tests, which makes any change excessively time-consuming as you’re fixing up a whole bunch of tests for even trivial changes. Often times a lot of these tests are just duplicates; adding tests to every layer of a simple CRUD HTTP endpoint is a common example. In many apps it’s fine to just rely on a single integration test. + +Stuff like SQL mocks is another great example. It makes code more complex, harder to change, all so we can say we added a “unit test” to `select * from foo where x=?`. The worst part is, it doesn’t even test anything other than verifying you didn’t typo an SQL query. As soon as the test starts doing anything useful, such as verifying that it actually returns the correct rows from the database, the Unit Test purists will start complaining that it’s not a True Unit Test™ and that You’re Doing It Wrong™. +For most queries, the integration tests and/or manual tests are fine, and extensive SQL mocks are entirely superfluous at best, and harmful at worst. + +There are exceptions, of course; if you’ve got a lot of `if cond { q += "more sql" }` then adding SQL mocks to verify the correctness of that logic might be a good idea. Even in those cases a “non-unit unit test” (e.g. one that just accesses the database) is still a viable option. Integration tests are also still an option. A lot of applications don’t have those kind of complex queries anyway. + +One important reason for the focus on unit tests is to ensure test code runs fast. This was a response to massive test harnesses that take a day to run. This, again, is not really a problem in Go. 
All integration tests I’ve written run in a reasonable amount of time (several seconds at most, usually faster). The test cache introduced in Go 1.10 makes it even less of a concern. + +Last year a coworker refactored our ETag-based caching library. The old code was very straightforward and easy to understand, and while I’m not claiming it was guaranteed bug-free, it did work very well for a long time. + +It should have been written with some tests in place, but it wasn’t (I didn’t write the original version). Note that the code was not completely untested, as we did have integration tests. + +The refactored version is much more complex. Aside from the two weeks lost on refactoring a working piece of code to … another working piece of code (topic for another post), I’m not so convinced it’s actually that much better. I consider myself a reasonably accomplished and experienced programmer, with a reasonable knowledge and experience in Go. I think that in general, based on feedback from peers and performance reviews, I am at least a programmer of “average” skill level, if not more. + +If an average programmer has trouble comprehending what is in essence a handful of simple functions because there are so many layers of abstractions, then something has gone wrong. The refactor traded one tool to verify correctness (simplicity) with another (testing). Simplicity is hardly a guarantee to ensure correctness, but neither are unit tests. Ideally, we should do both. + +Postscript: the refactor introduced a bug and removed a feature that was useful, but is now harder to add, not in the least because the code is much more complex. + +All units working correctly gives exactly zero guarantees that the program is working correctly. A lot of logic errors won’t be caught because the logic consists of several units working together. So you need integration tests, and if the integration tests duplicate half of your unit tests, then why bother with those unit tests? + +Test Driven Development (TDD) is also just one tool. It works well for some problems; not so much for others. In particular, I think that “forced to write code in tiny units” can be terribly harmful in some cases. Some code is just a serial script which says “do this, and then that, and then this”. Splitting that up in a whole bunch of “tiny units” can greatly reduce how easy the code is to understand, and thus harder to verify that it is correct. + +I’ve had to fix some Ruby code where everything was in tiny units – there is a strong culture of TDD in the Ruby community – and even though the units were easy to understand I found it incredibly hard to understand the application logic. If everything is split in “tiny units” then understanding how everything fits together to create an actual program that does something useful will be much harder. + +You see the same friction in the old microkernel vs. monolithic kernel debate, or the more recent microservices vs. monolithic app one. In principle splitting everything up in small parts sounds like a great idea, but in practice it turns out that making all the small parts work together is a very hard problem. A hybrid approach seems to work best for kernels and app design, balancing the ad­van­tages and downsides of both approaches. I think the same applies to code. + +To be clear, I am not against unit tests or TDD and claiming we should all gung-go cowboy code our way through life 🤠. I write unit tests and practice TDD, when it makes sense. 
My point is that unit tests and TDD are not the solution to every single last problem and should not be applied indiscriminately. This is why I use words such as "some" and "often" so frequently.
+
+This brings me to the topic of testing frameworks. I have never understood what problem libraries such as [goblin][3] are solving. How is this:
+
+```
+Expect(err).To(nil)
+Expect(out).To(test.wantOut)
+```
+
+An improvement over this?
+
+```
+if err != nil {
+    t.Fatal(err)
+}
+
+if out != tt.want {
+    t.Errorf("out: %q\nwant: %q", out, tt.want)
+}
+```
+
+What's wrong with `if` and `==`? Why do we need to abstract it? Note that with table-driven tests you're only typing these checks once, so you're saving just a few lines here.
+
+[Ginkgo][4] is even worse. It takes a very simple, straightforward, and understandable piece of code and doesn't just abstract `if`; it also chops up the execution into several different functions (`BeforeEach()` and `DescribeTable()`).
+
+This is known as Behaviour-driven development (BDD). I am not entirely sure what to think of BDD. I am skeptical, but I've never properly used it in a large project so I'm hesitant to just dismiss it. Note that I said "properly": most projects don't really use BDD, they just use a library with a BDD syntax and shoehorn their testing code into that. That's ad-hoc BDD, or faux-BDD.
+
+Whatever merits BDD may have, they are not present simply because your testing code vaguely resembles BDD-style syntax. This on its own demonstrates that BDD is perhaps not a great idea for many projects.
+
+I think there are real problems with these BDD(-ish) test tools, as they obfuscate what you're actually doing. No matter what, testing remains a matter of getting the output of a function and checking if that matches what you expected. No testing methodology is going to change that fundamental. The more layers you add on top of that, the harder it will be to debug.
+
+When determining if something is "easy", my prime concern is not how easy something is to write, but how easy something is to debug when things fail. I will gladly spend a bit more effort writing things if that makes things a lot easier to debug.
+
+All code – including testing code – can fail in confusing, surprising, and unexpected ways (a "bug"), and then you're expected to debug that code. The more complex the code, the harder it is to debug.
+
+You should expect all code – including testing code – to go through several debugging cycles. Note that with debugging cycle I don't mean "there is a bug in the code you need to fix", but rather "I need to look at this code to fix the bug".
+
+In general, I already find testing code harder to debug than regular code, as the "code surface" tends to be larger. You have the testing code and the actual implementation code to think of. That's a lot more than just thinking of the implementation code.
+
+Adding these abstractions means you will now also have to think about that, too! This might be okay if the abstractions would reduce the scope of what you have to think about, which is a common reason to add abstractions in regular code, but it doesn't. It just adds more things to think about.
+
+So these are exactly the wrong kind of abstractions: they wrap and obfuscate, rather than separate concerns and reduce the scope.
+ +If you’re interested in soliciting contributions from other people in open source projects then making your tests understandable is a very important concern (it’s also important in business context, but a bit less so, as you’ve got actual time to train people). + +Seeing PRs with “here’s the code, it works, but I couldn’t figure out the tests, plz halp!” is not uncommon; and I’m fairly sure that at least a few people never even bothered to submit PRs just because they got stuck on the tests. I know I have. + +There is one open source project that I contributed to, and would like to contribute more to, but don’t because it’s just too hard to write and run tests. Every change is “write working code in 15 minutes, spend 45 minutes dealing with tests”. It’s … no fun at all. + +Writing good software is hard. I’ve got some ideas on how to do it, but don’t have a comprehensive view. I’m not sure if anyone really does. I do know that “always add unit tests” and “always practice TDD” isn’t the answer, in spite of them being useful concepts. To give an analogy: most people would agree that a free market is a good idea, but at the same time even most libertarians would agree it’s not the complete solution to every single problem (well, [some do][5], but those ideas are … rather misguided). + +You can mail me at [martin@arp242.net][6] or [create a GitHub issue][7] for feedback, questions, etc. + +-------------------------------------------------------------------------------- + +via: https://arp242.net/weblog/testing.html + +作者:[Martin Tournoij][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://arp242.net/ +[b]: https://github.com/lujun9972 +[1]: https://medium.com/@jens.neuse/want-to-write-good-unit-tests-in-go-dont-panic-or-should-you-ba3eb5bf4f51 +[2]: http://david.heinemeierhansson.com/2014/test-induced-design-damage.html +[3]: https://github.com/franela/goblin +[4]: https://github.com/onsi/ginkgo +[5]: https://en.wikipedia.org/wiki/Murray_Rothbard#Children's_rights_and_parental_obligations +[6]: mailto:martin@arp242.net +[7]: https://github.com/Carpetsmoker/arp242.net/issues/new diff --git a/sources/tech/20190108 Create your own video streaming server with Linux.md b/sources/tech/20190108 Create your own video streaming server with Linux.md new file mode 100644 index 0000000000..24dd44524d --- /dev/null +++ b/sources/tech/20190108 Create your own video streaming server with Linux.md @@ -0,0 +1,301 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Create your own video streaming server with Linux) +[#]: via: (https://opensource.com/article/19/1/basic-live-video-streaming-server) +[#]: author: (Aaron J.Prisk https://opensource.com/users/ricepriskytreat) + +Create your own video streaming server with Linux +====== +Set up a basic live streaming server on a Linux or BSD operating system. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/shortcut_command_function_editing_key.png?itok=a0sEc5vo) + +Live video streaming is incredibly popular—and it's still growing. Platforms like Amazon's Twitch and Google's YouTube boast millions of users that stream and consume countless hours of live and recorded media. These services are often free to use but require you to have an account and generally hold your content behind advertisements. 
Some people don't need their videos to be available to the masses or just want more control over their content. Thankfully, with the power of open source software, anyone can set up a live streaming server. + +### Getting started + +In this tutorial, I'll explain how to set up a basic live streaming server with a Linux or BSD operating system. + +This leads to the inevitable question of system requirements. These can vary, as there are a lot of variables involved with live streaming, such as: + + * **Stream quality:** Do you want to stream in high definition or will standard definition fit your needs? + * **Viewership:** How many viewers are you expecting for your videos? + * **Storage:** Do you plan on keeping saved copies of your video stream? + * **Access:** Will your stream be private or open to the world? + + + +There are no set rules when it comes to system requirements, so I recommend you experiment and find what works best for your needs. I installed my server on a virtual machine with 4GB RAM, a 20GB hard drive, and a single Intel i7 processor core. + +This project uses the Real-Time Messaging Protocol (RTMP) to handle audio and video streaming. There are other protocols available, but I chose RTMP because it has broad support. As open standards like WebRTC become more compatible, I would recommend that route. + +It's also very important to know that "live" doesn't always mean instant. A video stream must be encoded, transferred, buffered, and displayed, which often adds delays. The delay can be shortened or lengthened depending on the type of stream you're creating and its attributes. + +### Setting up a Linux server + +You can use many different distributions of Linux, but I prefer Ubuntu, so I downloaded the [Ubuntu Server][1] edition for my operating system. If you prefer your server to have a graphical user interface (GUI), feel free to use [Ubuntu Desktop][2] or one of its many flavors. Then, I fired up the Ubuntu installer on my computer or virtual machine and chose the settings that best matched my environment. Below are the steps I took. + +Note: Because this is a server, you'll probably want to set some static network settings. + +![](https://opensource.com/sites/default/files/uploads/stream-server_profilesetup.png) + +After the installer finishes and your system reboots, you'll be greeted with a lovely new Ubuntu system. As with any newly installed operating system, install any updates that are available: + +``` +sudo apt update +sudo apt upgrade +``` + +This streaming server will use the very powerful and versatile Nginx web server, so you'll need to install it: + +``` +sudo apt install nginx +``` + +Then you'll need to get the RTMP module so Nginx can handle your media stream: + +``` +sudo add-apt-repository universe +sudo apt install libnginx-mod-rtmp +``` + +Adjust your web server's configuration so it can accept and deliver your media stream. + +``` +sudo nano /etc/nginx/nginx.conf +``` + +Scroll to the bottom of the configuration file and add the following code: + +``` +rtmp { +        server { +                listen 1935; +                chunk_size 4096; + +                application live { +                        live on; +                        record off; +                } +        } +} +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_config.png) + +Save the config. Because I'm a heretic, I use [Nano][3] for editing configuration files. 
In Nano, you can save your config by pressing **Ctrl+X** , **Y** , and then **Enter.** + +This is a very minimal config that will create a working streaming server. You'll add to this config later, but this is a great starting point. + +However, before you can begin your first stream, you'll need to restart Nginx with its new configuration: + +``` +sudo systemctl restart nginx +``` + +### Setting up a BSD server + +If you're of the "beastie" persuasion, getting a streaming server up and running is also devilishly easy. + +Head on over to the [FreeBSD][4] website and download the latest release. Fire up the FreeBSD installer on your computer or virtual machine and go through the initial steps and choose settings that best match your environment. Since this is a server, you'll likely want to set some static network settings. + +After the installer finishes and your system reboots, you should have a shiny new FreeBSD system. Like any other freshly installed system, you'll likely want to get everything updated (from this step forward, make sure you're logged in as root): + +``` +pkg update +pkg upgrade +``` + +I install [Nano][3] for editing configuration files: + +``` +pkg install nano +``` + +This streaming server will use the very powerful and versatile Nginx web server. You can build Nginx using the excellent ports system that FreeBSD boasts. + +First, update your ports tree: + +``` +portsnap fetch +portsnap extract +``` + +Browse to the Nginx ports directory: + +``` +cd /usr/ports/www/nginx +``` + +And begin building Nginx by running: + +``` +make install +``` + +You'll see a screen asking what modules to include in your Nginx build. For this project, you'll need to add the RTMP module. Scroll down until the RTMP module is selected and press **Space**. Then Press **Enter** to proceed with the rest of the build and installation. + +Once Nginx has finished installing, it's time to configure it for streaming purposes. + +First, add an entry into **/etc/rc.conf** to ensure the Nginx server starts when your system boots: + +``` +nano /etc/rc.conf +``` + +Add this text to the file: + +``` +nginx_enable="YES" +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_streamingconfig.png) + +Next, create a webroot directory from where Nginx will serve its content. I call mine **stream** : + +``` +cd /usr/local/www/ +mkdir stream +chmod -R 755 stream/ +``` + +Now that you have created your stream directory, configure Nginx by editing its configuration file: + +``` +nano /usr/local/etc/nginx/nginx.conf +``` + +Load your streaming modules at the top of the file: + +``` +load_module /usr/local/libexec/nginx/ngx_stream_module.so; +load_module /usr/local/libexec/nginx/ngx_rtmp_module.so; +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_modules.png) + +Under the **Server** section, change the webroot location to match the one you created earlier: + +``` +Location / { +root /usr/local/www/stream +} +``` + +![](https://opensource.com/sites/default/files/uploads/stream-server_webroot.png) + +And finally, add your RTMP settings so Nginx will know how to handle your media streams: + +``` +rtmp { +        server { +                listen 1935; +                chunk_size 4096; + +                application live { +                        live on; +                        record off; +                } +        } +} +``` + +Save the config. 
In Nano, you can do this by pressing **Ctrl+X** , **Y** , and then **Enter.**
+
+As you can see, this is a very minimal config that will create a working streaming server. Later, you'll add to this config, but this will provide you with a great starting point.
+
+However, before you can begin your first stream, you'll need to restart Nginx with its new config:
+
+```
+service nginx restart
+```
+
+### Set up your streaming software
+
+#### Broadcasting with OBS
+
+Now that your server is ready to accept your video streams, it's time to set up your streaming software. This tutorial uses the powerful and open source Open Broadcaster Software (OBS).
+
+Head over to the [OBS website][5] and find the build for your operating system and install it. Once OBS launches, you should see a first-time-run wizard that will help you configure OBS with the settings that best fit your hardware.
+
+![](https://opensource.com/sites/default/files/uploads/stream-server_autoconfig.png)
+
+OBS isn't capturing anything because you haven't supplied it with a source. For this tutorial, you'll just capture your desktop for the stream. Simply click the **+** button under **Source** , choose **Screen Capture** , and select which desktop you want to capture.
+
+Click OK, and you should see OBS mirroring your desktop.
+
+Now it's time to send your newly configured video stream to your server. In OBS, click **File** > **Settings**. Click on the **Stream** section, and set **Stream Type** to **Custom Streaming Server**.
+
+In the URL box, enter the prefix **rtmp://** followed by the IP address of your streaming server followed by **/live**. For example, **rtmp://IP-ADDRESS/live**.
+
+Next, you'll probably want to enter a Stream key—a special identifier required to view your stream. Enter whatever key you want (and can remember) in the **Stream key** box.
+
+![](https://opensource.com/sites/default/files/uploads/stream-server_streamkey.png)
+
+Click **Apply** and then **OK**.
+
+Now that OBS is configured to send your stream to your server, you can start your first stream. Click **Start Streaming**.
+
+If everything worked, you should see the button change to **Stop Streaming** and some bandwidth metrics will appear at the bottom of OBS.
+
+![](https://opensource.com/sites/default/files/uploads/stream-server_metrics.png)
+
+If you receive an error, double-check Stream Settings in OBS for misspellings. If everything looks good, there could be another issue preventing it from working.
+
+### Viewing your stream
+
+A live video isn't much good if no one is watching it, so be your first viewer!
+
+There are a multitude of open source media players that support RTMP, but the most well-known is probably [VLC media player][6].
+
+After you install and launch VLC, open your stream by clicking on **Media** > **Open Network Stream**. Enter the path to your stream, adding the Stream Key you set up in OBS, then click **Play**. For example, **rtmp://IP-ADDRESS/live/SECRET-KEY**.
+
+You should now be viewing your very own live video stream!
+
+![](https://opensource.com/sites/default/files/uploads/stream-server_livevideo.png)
+
+### Where to go next?
+
+This is a very simple setup that will get you off the ground. Here are two other features you likely will want to use.
+
+ * **Limit access:** The next step you might want to take is to limit access to your server, as the default setup allows anyone to stream to and from the server.
### Where to go next?

This is a very simple setup that will get you off the ground. Here are two other features you likely will want to use.

  * **Limit access:** The next step you might want to take is to limit access to your server, as the default setup allows anyone to stream to and from the server. There are a variety of ways to set this up, such as an operating system firewall, an [.htaccess file][7], or even the [built-in access controls in the RTMP module][8] (see the sketch after the recording example below).

  * **Record streams:** This simple Nginx configuration will only stream and won't save your videos, but this is easy to add. In the Nginx config, under the RTMP section, set up the recording options and the location where you want to save your videos. Make sure the path you set exists and Nginx is able to write to it.

```
application live {
    live on;
    record all;
    record_path /var/www/html/recordings;
    record_unique on;
}
```
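As a hedged sketch of the third option named in the **Limit access** item above, the RTMP module ships **allow**/**deny** directives that can restrict who may publish a stream while still letting anyone play it. The directive names come from the module's documentation (link [8]); replace the address with that of your own broadcasting machine:

```
application live {
    live on;
    # only the local machine may publish a stream
    allow publish 127.0.0.1;
    deny publish all;
    # anyone may watch
    allow play all;
}
```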
The world of live streaming is constantly evolving, and if you're interested in more advanced uses, there are lots of other great resources you can find floating around the internet. Good luck and happy streaming!

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/basic-live-video-streaming-server

作者:[Aaron J.Prisk][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ricepriskytreat
[b]: https://github.com/lujun9972
[1]: https://www.ubuntu.com/download/server
[2]: https://www.ubuntu.com/download/desktop
[3]: https://www.nano-editor.org/
[4]: https://www.freebsd.org/
[5]: https://obsproject.com/
[6]: https://www.videolan.org/vlc/index.html
[7]: https://httpd.apache.org/docs/current/howto/htaccess.html
[8]: https://github.com/arut/nginx-rtmp-module/wiki/Directives#access
diff --git a/sources/tech/20190108 How ASLR protects Linux systems from buffer overflow attacks.md b/sources/tech/20190108 How ASLR protects Linux systems from buffer overflow attacks.md
new file mode 100644
index 0000000000..41d4e47acc
--- /dev/null
+++ b/sources/tech/20190108 How ASLR protects Linux systems from buffer overflow attacks.md
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How ASLR protects Linux systems from buffer overflow attacks)
[#]: via: (https://www.networkworld.com/article/3331199/linux/what-does-aslr-do-for-linux.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

How ASLR protects Linux systems from buffer overflow attacks
======

![](https://images.idgesg.net/images/article/2019/01/shuffling-cards-100784640-large.jpg)

Address Space Layout Randomization (ASLR) is a memory-protection process for operating systems that guards against buffer-overflow attacks. It helps to ensure that the memory addresses associated with running processes on systems are not predictable, thus flaws or vulnerabilities associated with these processes will be more difficult to exploit.

ASLR is used today on Linux, Windows, and MacOS systems. It was first implemented on Linux in 2005. In 2007, the technique was deployed on Microsoft Windows and MacOS. While ASLR provides the same function on each of these operating systems, it is implemented differently on each one.

The effectiveness of ASLR is dependent on the entirety of the address space layout remaining unknown to the attacker. In addition, only executables that are compiled as Position Independent Executable (PIE) programs can claim the maximum protection from the ASLR technique, because all sections of their code are loaded at random locations. PIE machine code will execute properly regardless of its absolute address.

**[ Also see: [Invaluable tips and tricks for troubleshooting Linux][1] ]**

### ASLR limitations

In spite of ASLR making exploitation of system vulnerabilities more difficult, its role in protecting systems is limited. It's important to understand that ASLR:

  * Doesn't _resolve_ vulnerabilities, but makes exploiting them more of a challenge
  * Doesn't track or report vulnerabilities
  * Doesn't offer any protection for binaries that are not built with ASLR support
  * Isn't immune to circumvention

### How ASLR works

ASLR increases the control-flow integrity of a system by making it more difficult for an attacker to execute a successful buffer-overflow attack, randomizing the offsets it uses in memory layouts.

ASLR works considerably better on 64-bit systems, as these systems provide much greater entropy (randomization potential).

### Is ASLR working on your Linux system?

Either of the two commands shown below will tell you whether ASLR is enabled on your system.

```
$ cat /proc/sys/kernel/randomize_va_space
2
$ sysctl -a --pattern randomize
kernel.randomize_va_space = 2
```

The value (2) shown in the commands above indicates that ASLR is working in full randomization mode. The value shown will be one of the following:

```
0 = Disabled
1 = Conservative Randomization
2 = Full Randomization
```

If you disable ASLR and run the commands below, you should notice that the addresses shown in the **ldd** output below are all the same in the successive **ldd** commands. The **ldd** command works by loading the shared objects and showing where they end up in memory.

```
$ sudo sysctl -w kernel.randomize_va_space=0 <== disable
[sudo] password for shs:
kernel.randomize_va_space = 0
$ ldd /bin/bash
    linux-vdso.so.1 (0x00007ffff7fd1000) <== same addresses
    libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007ffff7c69000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ffff7c63000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7a79000)
    /lib64/ld-linux-x86-64.so.2 (0x00007ffff7fd3000)
$ ldd /bin/bash
    linux-vdso.so.1 (0x00007ffff7fd1000) <== same addresses
    libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007ffff7c69000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007ffff7c63000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ffff7a79000)
    /lib64/ld-linux-x86-64.so.2 (0x00007ffff7fd3000)
```

If the value is set back to **2** to enable ASLR, you will see that the addresses will change each time you run the command.
```
$ sudo sysctl -w kernel.randomize_va_space=2 <== enable
[sudo] password for shs:
kernel.randomize_va_space = 2
$ ldd /bin/bash
    linux-vdso.so.1 (0x00007fff47d0e000) <== first set of addresses
    libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007f1cb7ce0000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f1cb7cda000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f1cb7af0000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f1cb8045000)
$ ldd /bin/bash
    linux-vdso.so.1 (0x00007ffe1cbd7000) <== second set of addresses
    libtinfo.so.6 => /lib/x86_64-linux-gnu/libtinfo.so.6 (0x00007fed59742000)
    libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fed5973c000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fed59552000)
    /lib64/ld-linux-x86-64.so.2 (0x00007fed59aa7000)
```

### Attempting to bypass ASLR

In spite of its advantages, attempts to bypass ASLR are not uncommon and seem to fall into several categories:

  * Using address leaks
  * Gaining access to data relative to particular addresses
  * Exploiting implementation weaknesses that allow attackers to guess addresses when entropy is low or when the ASLR implementation is faulty
  * Using side channels of hardware operation

### Wrap-up

ASLR is of great value, especially when run on 64-bit systems and implemented properly. While not immune from circumvention attempts, it does make exploitation of system vulnerabilities considerably more difficult. Here is a reference that can provide a lot more detail [on the Effectiveness of Full-ASLR on 64-bit Linux][2], and here is a paper on one circumvention effort to [bypass ASLR][3] using branch predictors.

Join the Network World communities on [Facebook][4] and [LinkedIn][5] to comment on topics that are top of mind.

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3331199/linux/what-does-aslr-do-for-linux.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html
[2]: https://cybersecurity.upv.es/attacks/offset2lib/offset2lib-paper.pdf
[3]: http://www.cs.ucr.edu/~nael/pubs/micro16.pdf
[4]: https://www.facebook.com/NetworkWorld/
[5]: https://www.linkedin.com/company/network-world
diff --git a/sources/tech/20190108 How To Understand And Identify File types in Linux.md b/sources/tech/20190108 How To Understand And Identify File types in Linux.md
new file mode 100644
index 0000000000..c1c4ca4c0a
--- /dev/null
+++ b/sources/tech/20190108 How To Understand And Identify File types in Linux.md
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How To Understand And Identify File types in Linux)
[#]: via: (https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

How To Understand And Identify File types in Linux
======

As we all know, everything in Linux is a file, including hard disks, graphics cards, etc.
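To make that idea concrete before diving in, here is a quick illustration; this example is an addition to the article, and the sizes and dates shown are placeholders that will differ on your system. Note the leading "b" and "c" characters, which the table below explains:

```
$ ls -l /dev/sda /dev/null
brw-rw---- 1 root disk 8, 0 Jan  6 10:30 /dev/sda
crw-rw-rw- 1 root root 1, 3 Jan  6 10:30 /dev/null
```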
When you are navigating the Linux filesystem, most of the files you come across are regular files and directories.

But Linux has other file types as well, serving different purposes, which fall into five further categories.

So, it is very important to understand the file types in Linux; that knowledge helps you in many ways.

If you doubt this, just work through the complete article and you will see how important it is.

If you do not understand the file types, you cannot make changes to them without fear.

If you make the wrong changes, you can damage your system very badly, so be careful when you do that.

Files are very important in Linux because all the devices and daemons are represented as files in a Linux system.

### How Many Types of Files are Available in Linux?

To my knowledge, a total of 7 file types are available in Linux, across 3 major categories. The details are below.

  * Regular File
  * Directory File
  * Special Files (this category contains five types of files)
    * Link File
    * Character Device File
    * Socket File
    * Named Pipe File
    * Block File

Refer to the table below for a better understanding of file types in Linux.

| Symbol | Meaning |
| ------ | ------- |
| - | Regular File. It starts with a hyphen `-`. |
| d | Directory File. It starts with the English alphabet letter `d`. |
| l | Link File. It starts with the English alphabet letter `l`. |
| c | Character Device File. It starts with the English alphabet letter `c`. |
| s | Socket File. It starts with the English alphabet letter `s`. |
| p | Named Pipe File. It starts with the English alphabet letter `p`. |
| b | Block File. It starts with the English alphabet letter `b`. |

### Method-1: Manual Way to Identify File types in Linux

If you have good knowledge of Linux, you can easily identify the file types with the help of the table above.

#### How to view the Regular files in Linux?

Use the below command to view the regular files in Linux. Regular files are available everywhere in the Linux filesystem.
The regular files color is `WHITE`.

```
# ls -la | grep ^-
-rw-------. 1 mageshm mageshm 1394 Jan 18 15:59 .bash_history
-rw-r--r--. 1 mageshm mageshm 18 May 11 2012 .bash_logout
-rw-r--r--. 1 mageshm mageshm 176 May 11 2012 .bash_profile
-rw-r--r--. 1 mageshm mageshm 124 May 11 2012 .bashrc
-rw-r--r--. 1 root root 26 Dec 27 17:55 liks
-rw-r--r--. 1 root root 104857600 Jan 31 2006 test100.dat
-rw-r--r--. 1 root root 104874307 Dec 30 2012 test100.zip
-rw-r--r--. 1 root root 11536384 Dec 30 2012 test10.zip
-rw-r--r--. 1 root root 61 Dec 27 19:05 test2-bzip2.txt
-rw-r--r--. 1 root root 61 Dec 31 14:24 test3-bzip2.txt
-rw-r--r--. 1 root root 60 Dec 27 19:01 test-bzip2.txt
```

#### How to view the Directory files in Linux?

Use the below command to view the directory files in Linux. Directory files are available everywhere in the Linux filesystem. The directory files color is `BLUE`.

```
# ls -la | grep ^d
drwxr-xr-x. 3 mageshm mageshm 4096 Dec 31 14:24 links/
drwxrwxr-x. 2 mageshm mageshm 4096 Nov 16 15:44 perl5/
drwxr-xr-x. 2 mageshm mageshm 4096 Nov 16 15:37 public_ftp/
drwxr-xr-x. 3 mageshm mageshm 4096 Nov 16 15:37 public_html/
```

#### How to view the Link files in Linux?

Use the below command to view the link files in Linux. Link files are available everywhere in the Linux filesystem.
There are two types of link file: the soft link and the hard link. The link files color is `LIGHT TURQUOISE`.

```
# ls -la | grep ^l
lrwxrwxrwx. 
1 root root 31 Dec 7 15:11 s-link-file -> /links/soft-link/test-soft-link
lrwxrwxrwx. 1 root root 38 Dec 7 15:12 s-link-folder -> /links/soft-link/test-soft-link-folder
```

#### How to view the Character Device files in Linux?

Use the below command to view the character device files in Linux. Character device files are available only in a specific location.

They live under the `/dev` directory. The character device files color is `YELLOW`.

```
# ls -la | grep ^c
crw-------. 1 root root 5, 1 Jan 28 14:05 console
crw-rw----. 1 root root 10, 61 Jan 28 14:05 cpu_dma_latency
crw-rw----. 1 root root 10, 62 Jan 28 14:05 crash
crw-rw----. 1 root root 29, 0 Jan 28 14:05 fb0
crw-rw-rw-. 1 root root 1, 7 Jan 28 14:05 full
crw-rw-rw-. 1 root root 10, 229 Jan 28 14:05 fuse
```

#### How to view the Block files in Linux?

Use the below command to view the block files in Linux. Block files are available only in a specific location.
They live under the `/dev` directory. The block files color is `YELLOW`.

```
# ls -la | grep ^b
brw-rw----. 1 root disk 7, 0 Jan 28 14:05 loop0
brw-rw----. 1 root disk 7, 1 Jan 28 14:05 loop1
brw-rw----. 1 root disk 7, 2 Jan 28 14:05 loop2
brw-rw----. 1 root disk 7, 3 Jan 28 14:05 loop3
brw-rw----. 1 root disk 7, 4 Jan 28 14:05 loop4
```

#### How to view the Socket files in Linux?

Use the below command to view the socket files in Linux. Socket files are available only in a specific location.
The socket files color is `PINK`.

```
# ls -la | grep ^s
srw-rw-rw- 1 root root 0 Jan 5 16:36 system_bus_socket
```

#### How to view the Named Pipe files in Linux?

Use the below command to view the named pipe files in Linux. Named pipe files are available only in a specific location. The named pipe files color is `YELLOW`.

```
# ls -la | grep ^p
prw-------. 1 root root 0 Jan 28 14:06 replication-notify-fifo|
prw-------. 1 root root 0 Jan 28 14:06 stats-mail|
```

### Method-2: How to Identify File types in Linux Using file Command?

The file command allows us to determine various file types in Linux. There are three sets of tests, performed in this order, to identify file types: filesystem tests, magic tests, and language tests.

#### How to view the Regular files in Linux Using file Command?

Simply enter the file command in your terminal, followed by a regular file. The file command will read the given file's contents and display exactly what kind of file it is.

That is why we see a different result for each regular file. See the various results below.

```
# file 2daygeek_access.log
2daygeek_access.log: ASCII text, with very long lines

# file powertop.html
powertop.html: HTML document, ASCII text, with very long lines

# file 2g-test
2g-test: JSON data

# file powertop.txt
powertop.txt: HTML document, UTF-8 Unicode text, with very long lines

# file 2g-test-05-01-2019.tar.gz
2g-test-05-01-2019.tar.gz: gzip compressed data, last modified: Sat Jan 5 18:22:20 2019, from Unix, original size 450560
```

#### How to view the Directory files in Linux Using file Command?

Simply enter the file command in your terminal, followed by a directory. See the results below.

```
# file Pictures/
Pictures/: directory
```

#### How to view the Link files in Linux Using file Command?

Simply enter the file command in your terminal, followed by a link file. See the results below.
+ +``` +# file log +log: symbolic link to /run/systemd/journal/dev-log +``` + +#### How to view the Character Device files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Character Device file. See the results below. + +``` +# file vcsu +vcsu: character special (7/64) +``` + +#### How to view the Block files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Block file. See the results below. + +``` +# file sda1 +sda1: block special (8/1) +``` + +#### How to view the Socket files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Socket file. See the results below. + +``` +# file system_bus_socket +system_bus_socket: socket +``` + +#### How to view the Named Pipe files in Linux Using file Command? + +Simple enter the file command on your terminal and followed by Named Pipe file. See the results below. + +``` +# file pipe-test +pipe-test: fifo (named pipe) +``` + +### Method-3: How to Identify File types in Linux Using stat Command? + +The stat command allow us to check file types or file system status. This utility giving more information than file command. It shows lot of information about the given file such as Size, Block Size, IO Block Size, Inode Value, Links, File permission, UID, GID, File Access, Modify and Change time details. + +#### How to view the Regular files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Regular file. + +``` +# stat 2daygeek_access.log + File: 2daygeek_access.log + Size: 14406929 Blocks: 28144 IO Block: 4096 regular file +Device: 10301h/66305d Inode: 1727555 Links: 1 +Access: (0644/-rw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek) +Access: 2019-01-03 14:05:26.430328867 +0530 +Modify: 2019-01-03 14:05:26.460328868 +0530 +Change: 2019-01-03 14:05:26.460328868 +0530 + Birth: - +``` + +#### How to view the Directory files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Directory file. See the results below. + +``` +# stat Pictures/ + File: Pictures/ + Size: 4096 Blocks: 8 IO Block: 4096 directory +Device: 10301h/66305d Inode: 1703982 Links: 3 +Access: (0755/drwxr-xr-x) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek) +Access: 2018-11-24 03:22:11.090000828 +0530 +Modify: 2019-01-05 18:27:01.546958817 +0530 +Change: 2019-01-05 18:27:01.546958817 +0530 + Birth: - +``` + +#### How to view the Link files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Link file. See the results below. + +``` +# stat /dev/log + File: /dev/log -> /run/systemd/journal/dev-log + Size: 28 Blocks: 0 IO Block: 4096 symbolic link +Device: 6h/6d Inode: 278 Links: 1 +Access: (0777/lrwxrwxrwx) Uid: ( 0/ root) Gid: ( 0/ root) +Access: 2019-01-05 16:36:31.033333447 +0530 +Modify: 2019-01-05 16:36:30.766666768 +0530 +Change: 2019-01-05 16:36:30.766666768 +0530 + Birth: - +``` + +#### How to view the Character Device files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Character Device file. See the results below. 
+ +``` +# stat /dev/vcsu + File: /dev/vcsu + Size: 0 Blocks: 0 IO Block: 4096 character special file +Device: 6h/6d Inode: 16 Links: 1 Device type: 7,40 +Access: (0660/crw-rw----) Uid: ( 0/ root) Gid: ( 5/ tty) +Access: 2019-01-05 16:36:31.056666781 +0530 +Modify: 2019-01-05 16:36:31.056666781 +0530 +Change: 2019-01-05 16:36:31.056666781 +0530 + Birth: - +``` + +#### How to view the Block files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Block file. See the results below. + +``` +# stat /dev/sda1 + File: /dev/sda1 + Size: 0 Blocks: 0 IO Block: 4096 block special file +Device: 6h/6d Inode: 250 Links: 1 Device type: 8,1 +Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 994/ disk) +Access: 2019-01-05 16:36:31.596666806 +0530 +Modify: 2019-01-05 16:36:31.596666806 +0530 +Change: 2019-01-05 16:36:31.596666806 +0530 + Birth: - +``` + +#### How to view the Socket files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Socket file. See the results below. + +``` +# stat /var/run/dbus/system_bus_socket + File: /var/run/dbus/system_bus_socket + Size: 0 Blocks: 0 IO Block: 4096 socket +Device: 15h/21d Inode: 576 Links: 1 +Access: (0666/srw-rw-rw-) Uid: ( 0/ root) Gid: ( 0/ root) +Access: 2019-01-05 16:36:31.823333482 +0530 +Modify: 2019-01-05 16:36:31.810000149 +0530 +Change: 2019-01-05 16:36:31.810000149 +0530 + Birth: - +``` + +#### How to view the Named Pipe files in Linux Using stat Command? + +Simple enter the stat command on your terminal and followed by Named Pipe file. See the results below. + +``` +# stat pipe-test + File: pipe-test + Size: 0 Blocks: 0 IO Block: 4096 fifo +Device: 10301h/66305d Inode: 1705583 Links: 1 +Access: (0644/prw-r--r--) Uid: ( 1000/ daygeek) Gid: ( 1000/ daygeek) +Access: 2019-01-06 02:00:03.040394731 +0530 +Modify: 2019-01-06 02:00:03.040394731 +0530 +Change: 2019-01-06 02:00:03.040394731 +0530 + Birth: - +``` +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/how-to-understand-and-identify-file-types-in-linux/ + +作者:[Magesh Maruthamuthu][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/magesh/ +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20190109 Automating deployment strategies with Ansible.md b/sources/tech/20190109 Automating deployment strategies with Ansible.md new file mode 100644 index 0000000000..175244e760 --- /dev/null +++ b/sources/tech/20190109 Automating deployment strategies with Ansible.md @@ -0,0 +1,152 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Automating deployment strategies with Ansible) +[#]: via: (https://opensource.com/article/19/1/automating-deployment-strategies-ansible) +[#]: author: (Jario da Silva Junior https://opensource.com/users/jairojunior) + +Automating deployment strategies with Ansible +====== +Use automation to eliminate time sinkholes due to repetitive tasks and unplanned work. 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M) + +When you examine your technology stack from the bottom layer to the top—hardware, operating system (OS), middleware, and application—with their respective configurations, it's clear that changes are far more frequent as you go up in the stack. Your hardware will hardly change, your OS has a long lifecycle, and your middleware will keep up with the application's needs, but even if your release cycle is long (weeks or months), your applications will be the most volatile. + +![](https://opensource.com/sites/default/files/uploads/osdc-deployment-strategies.png) + +In [The Practice of System and Network Administration][1], the authors categorize the biggest "time sinkholes" in IT as manual/non-standard provisioning of OSes and application deployments. These time sinkholes will consume you with repetitive tasks or unplanned work. + +How so? Let's say you provision a new server without Network Time Protocol (NTP) properly configured, and a small percentage of your requests—in a cluster of dozens of servers—start to behave strangely because an application uses some sort of scheduler that relies on correct time. When you look at it like this, it is an easy problem to fix, but how long it would it take your team figure it out? Incidents or unplanned work consume a lot of your time and, even worse, your greatest talents. Should you really be wasting time investigating production systems like this? Wouldn't it be better to set this server aside and automatically provision a new one from scratch? + +What about manual deployment? Imagine 20 binaries deployed across a farm or nodes with their respective configuration files? How error-prone is this? Inevitably, it will eventually end up in unplanned work. + +The [State of DevOps Report 2018][2] introduces the stages of DevOps adoption, and it's no surprise that Stage 0 includes deployment automation and reuse of deployment patterns, while Stage 1 and 2 focus on standardization of your infrastructure stack to reduce inconsistencies across your environment. + +Note that, more than once, I have seen an ops team using this "standardization" as an excuse to limit a development team's ability to deliver, forcing them to use a hammer on something that is definitely not a nail. Don't do it; the price is extremely high. + +The lesson to be learned here is that lack of automation not only increases your lead time but also the rate of problems in your process and the amount of unplanned work you face. If you've read [The Phoenix Project][3], you know this is the root of all evil in any value stream, and if you don't get rid of it, it will eventually kill your business. + +When trying to fill the biggest time sinkholes, why not start with automating operating system installation? We could, but the results would take longer to appear since new virtual machines are not created as frequently as applications are deployed. In other words, this may not free up the time we need to power our initiative, so it could die prematurely. + +Still not convinced? Smaller and more frequent releases are also extremely positive from the development side. Let's explain a little further… + +### Deploy ≠ Release + +The first thing to understand is that, although they're used interchangeably, deployment and release do **NOT** mean the same thing. 
Release refers to providing the user with a new version, while deployment is the technical process of deploying the new version. Let's focus on the technical process of deployment.

### Tasks, groups, and Ansible

We need to understand the deployment process from the beginning to the end, including everything in the middle—the tasks, which servers are involved in the process, and which steps are executed—to avoid falling into the pitfalls described by Mattias Geniar in [Automating the unknown][4].

#### Tasks

The steps commonly executed in a regular deployment process include:

  * Deploy application(s)/database(s) or database(s) change(s)
  * Stop/start services and monitoring
  * Add/remove the server from our load balancers
  * Verify application state—is it ready to serve requests?
  * Manual approval—is it necessary?

For some people, automating the deployment process but leaving a manual approval step is like riding a bike with training wheels. As someone once told me: "It's better to ride with training wheels than not ride at all."

What if a tool doesn't include an API or a command-line interface (CLI) to enable task automation? Well, maybe it's time to think about changing tools. There are many open source application servers, databases, monitoring systems, and load balancers that are easily automated—thanks in large part to the [Unix way][5]. When adopting a new technology, eliminate options that cannot be automated and use your creativity to support your legacy technologies. For example, I've seen people versioning network appliance configuration files and updating them using FTP.

And guess what? It's a wonderful time to adopt open source tools. The recent [Accelerate: State of DevOps][6] report found that open source technologies are in predominant use in high-performance organizations. The logic is pretty simple: open source projects function in a "Darwinist" model, where those that do not adapt and evolve will die for lack of a user base or contributions. Feedback is paramount to software evolution.

#### Groups

To identify the groups of servers to target for automation, think about the tasks you want to automate, such as those that:

  * Deploy application(s)/database(s) or database change(s)
  * Stop/start services and monitoring
  * Add/remove server(s) from load balancer(s)
  * Verify application state—is it ready to serve requests?

#### The playbook

A high-level deployment process could be:

  1. Stop monitoring (to avoid false-positives)
  2. Remove the server from the load balancer (to prevent the user from receiving an error code)
  3. Stop the service (to enable a graceful shutdown)
  4. Deploy the new version of the application
  5. Wait for the application to be ready to receive new requests
  6. Execute steps 3, 2, and 1.
  7. Do the same for the next N servers.
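Before looking at the playbook itself, it may help to see how such groups could be laid out. The following INI inventory is a hedged sketch rather than part of the original article; the group names (**monitoring**, **lbserver**, **appserver**) and hostnames are assumptions chosen to match the `groups.monitoring` and `groups.lbserver` references used in the playbook below:

```
[monitoring]
nagios01.example.com

[lbserver]
haproxy01.example.com

[appserver]
app01.example.com
app02.example.com
```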
Having documentation of your process is nice, but having an executable documenting your deployment is better! Here's what steps 1–5 would look like in Ansible for a fully open source stack:

```
- name: Disable alerts
  nagios:
    action: disable_alerts
    host: "{{ inventory_hostname }}"
    services: webserver
  delegate_to: "{{ item }}"
  loop: "{{ groups.monitoring }}"

- name: Disable servers in the LB
  haproxy:
    host: "{{ inventory_hostname }}"
    state: disabled
    backend: app
  delegate_to: "{{ item }}"
  loop: "{{ groups.lbserver }}"

- name: Stop the service
  service: name=httpd state=stopped

- name: Deploy a new version
  unarchive: src=app.tar.gz dest=/var/www/app

- name: Verify application state
  uri:
    url: "http://{{ inventory_hostname }}/app/healthz"
    status_code: 200
  retries: 5
```

### Why Ansible?

There are other alternatives for application deployment, but the things that make Ansible an excellent choice include:

  * Multi-tier orchestration (i.e., **delegate_to**) allowing you to orderly target different groups of servers: monitoring, load balancer, application server, database, etc.
  * Rolling upgrade (i.e., serial) to control how changes are made (e.g., 1 by 1, N by N, X% at a time, etc.)
  * Error control, **max_fail_percentage** and **any_errors_fatal**—is my process all-in or will it tolerate fails?
  * A vast library of modules for:
    * Monitoring (e.g., Nagios, Zabbix, etc.)
    * Load balancers (e.g., HAProxy, F5, Netscaler, Cisco, etc.)
    * Services (e.g., service, command, file)
    * Deployment (e.g., copy, unarchive)
    * Programmatic verifications (e.g., command, Uniform Resource Identifier)

--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/automating-deployment-strategies-ansible

作者:[Jario da Silva Junior][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/jairojunior
[b]: https://github.com/lujun9972
[1]: https://www.amazon.com/Practice-System-Network-Administration-Enterprise/dp/0321919165/ref=dp_ob_title_bk
[2]: https://puppet.com/resources/whitepaper/state-of-devops-report
[3]: https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/0988262592
[4]: https://ma.ttias.be/automating-unknown/
[5]: https://en.wikipedia.org/wiki/Unix_philosophy
[6]: https://cloudplatformonline.com/2018-state-of-devops.html
diff --git a/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md b/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md
new file mode 100644
index 0000000000..3bad5ba969
--- /dev/null
+++ b/sources/tech/20190109 GoAccess - A Real-Time Web Server Log Analyzer And Interactive Viewer.md
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (GoAccess – A Real-Time Web Server Log Analyzer And Interactive Viewer)
[#]: via: (https://www.2daygeek.com/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer/)
[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)

GoAccess – A Real-Time Web Server Log Analyzer And Interactive Viewer
======

Analyzing a log file is a big headache for Linux administrators, as a server captures a great deal of information in its logs.

Most newbies and L1 administrators do not know how to analyze them.
If you know how to analyze logs well, you will be a legend among *nix system administrators.

There are many tools available in Linux to analyze logs easily.

GoAccess is one of the tools that allows users to analyze web server logs easily.

We will discuss the GoAccess tool in detail in this article.

### What is GoAccess?

GoAccess is a real-time web log analyzer and interactive viewer that runs in a terminal on *nix systems or through your browser.

GoAccess has minimal requirements; it's written in C and requires only ncurses.

It supports Apache, Nginx and Lighttpd logs. It provides fast and valuable HTTP statistics for system administrators that require a visual server report on the fly.

GoAccess parses the specified web log file and outputs the data to the X terminal and browser.

GoAccess was designed to be a fast, terminal-based log analyzer. Its core idea is to quickly analyze and view web server statistics in real time without needing to use your browser.

Terminal output is the default, and it can also generate a complete, self-contained, real-time HTML report, as well as JSON and CSV reports.

GoAccess allows any custom log format, and the following predefined log format options are included, among others: the Combined Log Format (XLF/ELF) for Apache and Nginx, and the Common Log Format (CLF) for Apache.

### GoAccess Features

  * **`Completely Real Time:`** All the metrics are updated every 200 ms on the terminal and every second on the HTML output.
  * **`Track Application Response Time:`** Track the time taken to serve the request. Extremely useful if you want to track pages that are slowing down your site.
  * **`Visitors:`** Determine the amount of hits, visitors, bandwidth, and metrics for the slowest running requests, by hour or date.
  * **`Metrics per Virtual Host:`** Have multiple virtual hosts (server blocks)? It features a panel that displays which virtual host is consuming most of the web server resources.

### How to Install GoAccess?

I would advise users to install GoAccess from their distribution's official repository with the help of its package manager. It is available in the official repositories of most distributions.

As we know, a standard release distribution carries a slightly outdated package, while rolling release distributions always include the latest package.

If you are running a standard release distribution, I would suggest you check alternative options, such as a PPA or the official GoAccess maintainer repository, to get the latest package.

For **`Debian/Ubuntu`** systems, use **[APT-GET Command][1]** or **[APT Command][2]** to install GoAccess on your systems.

```
# apt install goaccess
```

To get the latest GoAccess package, use the GoAccess official repository below.

```
$ echo "deb https://deb.goaccess.io/ $(lsb_release -cs) main" | sudo tee -a /etc/apt/sources.list.d/goaccess.list
$ wget -O - https://deb.goaccess.io/gnugpg.key | sudo apt-key add -
$ sudo apt-get update
$ sudo apt-get install goaccess
```

For **`RHEL/CentOS`** systems, use **[YUM Package Manager][3]** to install GoAccess on your systems.

```
# yum install goaccess
```

For the **`Fedora`** system, use **[DNF Package Manager][4]** to install GoAccess on your system.

```
# dnf install goaccess
```

For **`ArchLinux/Manjaro`** based systems, use **[Pacman Package Manager][5]** to install GoAccess on your systems.
+ +``` +# pacman -S goaccess +``` + +For **`openSUSE Leap`** system, use **[Zypper Package Manager][6]** to install GoAccess on your system. + +``` +# zypper install goaccess + +# zypper ar -f obs://server:http + +# zypper ref && zypper in goaccess +``` + +### How to Use GoAccess? + +After successful installation of GoAccess. Just enter the goaccess command and followed by the web server log location to view it. + +``` +# goaccess [options] /path/to/Web Server/access.log + +# goaccess /var/log/apache/2daygeek_access.log +``` + +When you execute the above command, it will ask you to select the **Log Format Configuration**. +![][8] + +I had tested this with Apache access log. The Apache log is splitted in fifteen section. The details are below. The main section shows the summary about the fifteen section. + +The below screenshots included four sessions such as Unique Visitors, Requested files, Static Requests, Not found URLs. +![][9] + +The below screenshots included four sessions such as Visitor Hostnames and IPs, Operating Systems, Browsers, Time Distribution. +![][10] + +The below screenshots included four sessions such as Referrers URLs, Referring Sites, Google’s search engine results, HTTP status codes. +![][11] + +If you would like to generate a html report, use the following format. + +Initially i got an error when i was trying to generate the html report. + +``` +# goaccess 2daygeek_access.log -a > report.html + +GoAccess - version 1.3 - Nov 23 2018 11:28:19 +Config file: No config file used + +Fatal error has occurred +Error occurred at: src/parser.c - parse_log - 2764 +No time format was found on your conf file.Parsing... [0] [0/s] +``` + +It says “No time format was found on your conf file”. To overcome this issue, add the “COMBINED” log format option on it. + +``` +# goaccess -f 2daygeek_access.log --log-format=COMBINED -o 2daygeek.html +Parsing...[0,165] [50,165/s] +``` + +![][12] + +GoAccess allows you to access and analyze the real-time log filtering and parsing. + +``` +# tail -f /var/log/apache/2daygeek_access.log | goaccess - +``` + +For more details navigate to man or help page. 
+ +``` +# man goaccess +or +# goaccess --help +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[2]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[3]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[4]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[5]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[6]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[7]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[8]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-1.png +[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-2.png +[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-3.png +[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-4.png +[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/goaccess-a-real-time-web-server-log-analyzer-and-interactive-viewer-5.png diff --git a/sources/tech/20190110 5 useful Vim plugins for developers.md b/sources/tech/20190110 5 useful Vim plugins for developers.md new file mode 100644 index 0000000000..2b5b9421d4 --- /dev/null +++ b/sources/tech/20190110 5 useful Vim plugins for developers.md @@ -0,0 +1,369 @@ +[#]: collector: (lujun9972) +[#]: translator: (pityonline) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (5 useful Vim plugins for developers) +[#]: via: (https://opensource.com/article/19/1/vim-plugins-developers) +[#]: author: (Ricardo Gerardi https://opensource.com/users/rgerardi) + +5 useful Vim plugins for developers +====== +Expand Vim's capabilities and improve your workflow with these five plugins for writing code. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/web_browser_desktop_devlopment_design_system_computer.jpg?itok=pfqRrJgh) + +I have used [Vim][1] as a text editor for over 20 years, but about two years ago I decided to make it my primary text editor. I use Vim to write code, configuration files, blog articles, and pretty much everything I can do in plaintext. Vim has many great features and, once you get used to it, you become very productive. + +I tend to use Vim's robust native capabilities for most of what I do, but there are a number of plugins developed by the open source community that extend Vim's capabilities, improve your workflow, and make you even more productive. + +Following are five plugins that are useful when using Vim to write code in any programming language. + +### 1. Auto Pairs + +The [Auto Pairs][2] plugin helps insert and delete pairs of characters, such as brackets, parentheses, or quotation marks. 
This is very useful for writing code, since most programming languages use pairs of characters in their syntax—such as parentheses for function calls or quotation marks for string definitions. + +In its most basic functionality, Auto Pairs inserts the corresponding closing character when you type an opening character. For example, if you enter a bracket **[** , Auto-Pairs automatically inserts the closing bracket **]**. Conversely, if you use the Backspace key to delete the opening bracket, Auto Pairs deletes the corresponding closing bracket. + +If you have automatic indentation on, Auto Pairs inserts the paired character in the proper indented position when you press Return/Enter, saving you from finding the correct position and typing the required spaces or tabs. + +Consider this Go code block for instance: + +``` +package main + +import "fmt" + +func main() { +    x := true +    items := []string{"tv", "pc", "tablet"} + +    if x { +        for _, i := range items +    } +} +``` + +Inserting an opening curly brace **{** after **items** and pressing Return/Enter produces this result: + +``` +package main + +import "fmt" + +func main() { +    x := true +    items := []string{"tv", "pc", "tablet"} + +    if x { +        for _, i := range items  { +            | (cursor here) +        } +    } +} +``` + +Auto Pairs offers many other options (which you can read about on [GitHub][3]), but even these basic features will save time. + +### 2. NERD Commenter + +The [NERD Commenter][4] plugin adds code-commenting functions to Vim, similar to the ones found in an integrated development environment (IDE). With this plugin installed, you can select one or several lines of code and change them to comments with the press of a button. + +NERD Commenter integrates with the standard Vim [filetype][5] plugin, so it understands several programming languages and uses the appropriate commenting characters for single or multi-line comments. + +The easiest way to get started is by pressing **Leader+Space** to toggle the current line between commented and uncommented. The standard Vim Leader key is the **\** character. + +In Visual mode, you can select multiple lines and toggle their status at the same time. NERD Commenter also understands counts, so you can provide a count n followed by the command to change n lines together. + +Other useful features are the "Sexy Comment," triggered by **Leader+cs** , which creates a fancy comment block using the multi-line comment character. For example, consider this block of code: + +``` +package main + +import "fmt" + +func main() { +    x := true +    items := []string{"tv", "pc", "tablet"} + +    if x { +        for _, i := range items { +            fmt.Println(i) +        } +    } +} +``` + +Selecting all the lines in **function main** and pressing **Leader+cs** results in the following comment block: + +``` +package main + +import "fmt" + +func main() { +/* + *    x := true + *    items := []string{"tv", "pc", "tablet"} + * + *    if x { + *        for _, i := range items { + *            fmt.Println(i) + *        } + *    } + */ +} +``` + +Since all the lines are commented in one block, you can uncomment the entire block by toggling any of the lines of the block with **Leader+Space**. + +NERD Commenter is a must-have for any developer using Vim to write code. + +### 3. VIM Surround + +The [Vim Surround][6] plugin helps you "surround" existing text with pairs of characters (such as parentheses or quotation marks) or tags (such as HTML or XML tags). 
It's similar to Auto Pairs but, instead of working while you're inserting text, it's more useful when you're editing text.

For example, if you have the following sentence:

```
"Vim plugins are awesome !"
```

You can remove the quotation marks around the sentence by pressing the combination **ds"** while your cursor is anywhere between the quotation marks:

```
Vim plugins are awesome !
```

You can also change the double quotation marks to single quotation marks with the command **cs"'**:

```
'Vim plugins are awesome !'
```

Or replace them with brackets by pressing **cs'[**:

```
[ Vim plugins are awesome ! ]
```

While it's a great help for text objects, this plugin really shines when working with HTML or XML tags. Consider the following HTML line:

```
<p>
    Vim plugins are awesome !
</p>
```

You can emphasize the word "awesome" by pressing the combination **ysiw`<em>`** while the cursor is anywhere on that word:

```
<p>
    Vim plugins are <em>awesome</em> !
</p>
```

Notice that the plugin is smart enough to use the proper closing tag **`</em>`**.

Vim Surround can also indent text and add tags in their own lines using **ySS**. For example, if you have:

```
<p>
    Vim plugins are <em>awesome</em> !
</p>
```

Add a **div** tag with this combination: **ySS`<div>`**, and notice that the paragraph line is indented automatically:

```
<div>
    <p>
        Vim plugins are <em>awesome</em> !
    </p>
</div>
```

Vim Surround has many other options. Give it a try—and consult [GitHub][7] for additional information.

### 4. Vim Gitgutter

The [Vim Gitgutter][8] plugin is useful for anyone using Git for version control. It shows the output of **git diff** as symbols in the "gutter"—the sign column where Vim presents additional information, such as line numbers. For example, consider the following as the committed version in Git:

```
  1 package main
  2
  3 import "fmt"
  4
  5 func main() {
  6     x := true
  7     items := []string{"tv", "pc", "tablet"}
  8
  9     if x {
 10         for _, i := range items {
 11             fmt.Println(i)
 12         }
 13     }
 14 }
```

After making some changes, Vim Gitgutter displays the following symbols in the gutter:

```
    1 package main
    2
    3 import "fmt"
    4
_   5 func main() {
    6     items := []string{"tv", "pc", "tablet"}
    7
~   8     if len(items) > 0 {
    9         for _, i := range items {
   10             fmt.Println(i)
+  11             fmt.Println("------")
   12         }
   13     }
   14 }
```

The **-** symbol shows that a line was deleted between lines 5 and 6. The **~** symbol shows that line 8 was modified, and the symbol **+** shows that line 11 was added.

In addition, Vim Gitgutter allows you to navigate between "hunks"—individual changes made in the file—with **[c** and **]c**, or even stage individual hunks for commit by pressing **Leader+hs**.

This plugin gives you immediate visual feedback of changes, and it's a great addition to your toolbox if you use Git.
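One practical note on responsiveness: by default, the gutter symbols refresh only after Vim's **updatetime** interval has elapsed (4 seconds out of the box), because the plugin piggybacks on that core Vim option. Lowering it, as the plugin's README suggests, makes the feedback feel immediate; the exact value below is a matter of taste:

```
" in ~/.vimrc: refresh the sign column more often (value in milliseconds)
set updatetime=100
```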
### 5. VIM Fugitive

[Vim Fugitive][9] is another great plugin for anyone incorporating Git into the Vim workflow. It's a Git wrapper that allows you to execute Git commands directly from Vim and integrates with Vim's interface. This plugin has many features—check its [GitHub][10] page for more information.

Here's a basic Git workflow example using Vim Fugitive. Considering the changes we've made to the Go code block in section 4, you can use **git blame** by typing the command **:Gblame**:

```
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│    1 package main
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│    2
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│    3 import "fmt"
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│    4
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│_   5 func main() {
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│    6     items := []string{"tv", "pc", "tablet"}
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│    7
00000000 (Not Committed Yet 2018-12-05 18:55:00 -0500)│~   8     if len(items) > 0 {
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│    9         for _, i := range items {
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│   10             fmt.Println(i)
00000000 (Not Committed Yet 2018-12-05 18:55:00 -0500)│+  11             fmt.Println("------")
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│   12         }
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│   13     }
e9949066 (Ricardo Gerardi   2018-12-05 18:17:19 -0500)│   14 }
```

You can see that lines 8 and 11 have not been committed. Check the repository status by typing **:Gstatus**:

```
  1 # On branch master
  2 # Your branch is up to date with 'origin/master'.
  3 #
  4 # Changes not staged for commit:
  5 #   (use "git add <file>..." to update what will be committed)
  6 #   (use "git checkout -- <file>..." to discard changes in working directory)
  7 #
  8 #       modified:   vim-5plugins/examples/test1.go
  9 #
 10 no changes added to commit (use "git add" and/or "git commit -a")
--------------------------------------------------------------------------------------------------------
    1 package main
    2
    3 import "fmt"
    4
_   5 func main() {
    6     items := []string{"tv", "pc", "tablet"}
    7
~   8     if len(items) > 0 {
    9         for _, i := range items {
   10             fmt.Println(i)
+  11             fmt.Println("------")
   12         }
   13     }
   14 }
```

Vim Fugitive opens a split window with the result of **git status**. You can stage a file for commit by pressing the **-** key on the line with the name of the file. You can reset the status by pressing **-** again. The message updates to reflect the new status:

```
  1 # On branch master
  2 # Your branch is up to date with 'origin/master'.
  3 #
  4 # Changes to be committed:
  5 #   (use "git reset HEAD <file>..." to unstage)
  6 #
  7 #       modified:   vim-5plugins/examples/test1.go
  8 #
--------------------------------------------------------------------------------------------------------
    1 package main
    2
    3 import "fmt"
    4
_   5 func main() {
    6     items := []string{"tv", "pc", "tablet"}
    7
~   8     if len(items) > 0 {
    9         for _, i := range items {
   10             fmt.Println(i)
+  11             fmt.Println("------")
   12         }
   13     }
   14 }
```

Now you can use the command **:Gcommit** to commit the changes. Vim Fugitive opens another split that allows you to enter a commit message:

```
  1 vim-5plugins: Updated test1.go example file
  2 # Please enter the commit message for your changes. Lines starting
  3 # with '#' will be ignored, and an empty message aborts the commit.
  4 #
  5 # On branch master
  6 # Your branch is up to date with 'origin/master'.
  7 #
  8 # Changes to be committed:
  9 #       modified:   vim-5plugins/examples/test1.go
 10 #
```

Save the file with **:wq** to complete the commit:

```
[master c3bf80f] vim-5plugins: Updated test1.go example file
 1 file changed, 2 insertions(+), 2 deletions(-)
Press ENTER or type command to continue
```

You can use **:Gstatus** again to see the result and **:Gpush** to update the remote repository with the new commit.

```
  1 # On branch master
  2 # Your branch is ahead of 'origin/master' by 1 commit.
  3 #   (use "git push" to publish your local commits)
  4 #
  5 nothing to commit, working tree clean
```

If you like Vim Fugitive and want to learn more, the GitHub repository has links to screencasts showing additional functionality and workflows. Check it out!

### What's next?

These Vim plugins help developers write code in any programming language. There are two other categories of plugins to help developers: code-completion plugins and syntax-checker plugins. They are usually related to specific programming languages, so I will cover them in a follow-up article.

Do you have another Vim plugin you use when writing code? Please share it in the comments below.
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/vim-plugins-developers + +作者:[Ricardo Gerardi][a] +选题:[lujun9972][b] +译者:[pityonline](https://github.com/pityonline) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/rgerardi +[b]: https://github.com/lujun9972 +[1]: https://www.vim.org/ +[2]: https://www.vim.org/scripts/script.php?script_id=3599 +[3]: https://github.com/jiangmiao/auto-pairs +[4]: https://github.com/scrooloose/nerdcommenter +[5]: http://vim.wikia.com/wiki/Filetype.vim +[6]: https://www.vim.org/scripts/script.php?script_id=1697 +[7]: https://github.com/tpope/vim-surround +[8]: https://github.com/airblade/vim-gitgutter +[9]: https://www.vim.org/scripts/script.php?script_id=2975 +[10]: https://github.com/tpope/vim-fugitive diff --git a/sources/tech/20190111 Build a retro gaming console with RetroPie.md b/sources/tech/20190111 Build a retro gaming console with RetroPie.md new file mode 100644 index 0000000000..eedac575c9 --- /dev/null +++ b/sources/tech/20190111 Build a retro gaming console with RetroPie.md @@ -0,0 +1,82 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Build a retro gaming console with RetroPie) +[#]: via: (https://opensource.com/article/19/1/retropie) +[#]: author: (Jay LaCroix https://opensource.com/users/jlacroix) + +Build a retro gaming console with RetroPie +====== +Play your favorite classic Nintendo, Sega, and Sony console games on Linux. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/open_gaming_games_roundup_news.png?itok=KM0ViL0f) + +The most common question I get on [my YouTube channel][1] and in person is what my favorite Linux distribution is. If I limit the answer to what I run on my desktops and laptops, my answer will typically be some form of an Ubuntu-based Linux distro. My honest answer to this question may surprise many. My favorite Linux distribution is actually [RetroPie][2]. + +As passionate as I am about Linux and open source software, I'm equally passionate about classic gaming, specifically video games produced in the '90s and earlier. I spend most of my surplus income on older games, and I now have a collection of close to a thousand games for over 20 gaming consoles. In my spare time, I raid flea markets, yard sales, estate sales, and eBay buying games for various consoles, including almost every iteration made by Nintendo, Sega, and Sony. There's something about classic games that I adore, a charm that seems lost in games released nowadays. + +Unfortunately, collecting retro games has its fair share of challenges. Cartridges with memory for save files will lose their charge over time, requiring the battery to be replaced. While it's not hard to replace save batteries (if you know how), it's still time-consuming. Games on CD-ROMs are subject to disc rot, which means that even if you take good care of them, they'll still lose data over time and become unplayable. Also, sometimes it's difficult to find replacement parts for some consoles. This wouldn't be so much of an issue if the majority of classic games were available digitally, but the vast majority are never re-released on a digital platform. + +### Gaming on RetroPie + +RetroPie is a great project and an asset to retro gaming enthusiasts like me. 
RetroPie is a Raspbian-based distribution designed for use on the Raspberry Pi (though it is possible to get it working on other platforms, such as a PC). RetroPie boots into a graphical interface that is completely controllable via a gamepad or joystick and allows you to easily manage digital copies (ROMs) of your favorite games. You can scrape information from the internet to organize your collection better and manage lists of favorite games, and the entire interface is very user-friendly and efficient. From the interface, you can launch directly into a game, then exit the game by pressing a combination of buttons on your gamepad. You rarely need a keyboard, unless you have to enter your WiFi password or manually edit configuration files.

I use RetroPie to host a digital copy of every physical game I own in my collection. When I purchase a game from a local store or eBay, I also download the ROM. As a collector, this is very convenient. If I don't have a particular physical console within arm's reach, I can boot up RetroPie and enjoy a game quickly without having to connect cables or clean cartridge contacts. There's still something to be said about playing a game on the original hardware, but if I'm pressed for time, RetroPie is very convenient. I also don't have to worry about dead save batteries, dirty cartridge contacts, disc rot, or any of the other issues collectors like me have to regularly deal with. I simply play the game.

Also, RetroPie allows me to be very clever and utilize my technical know-how to achieve additional functionality that's not normally available. For example, I have three RetroPies set up, each of them synchronizing their files between each other by leveraging [Syncthing][3], a popular open source file synchronization tool. The synchronization happens automatically, and it means I can start a game on one television and continue in the same place on another unit since the save files are included in the synchronization. To take it a step further, I also back up my save and configuration files to [Backblaze B2][4], so I'm protected if an SD card becomes defective.

### Setting up RetroPie

Setting up RetroPie is very easy, and if you've ever set up a Raspberry Pi Linux distribution before (such as Raspbian), the process is essentially the same: you simply download the IMG file, flash it to your SD card with a tool such as [Etcher][5], and insert the card into your Raspberry Pi. Then plug in an AC adapter and gamepad and hook it up to your television via HDMI. Optionally, you can buy a case to protect your RetroPie from outside elements and add visual appeal. Here is a listing of things you'll need to get started:

  * Raspberry Pi board (Model 3B+ or higher recommended)
  * SD card (16GB or larger recommended)
  * A USB gamepad
  * UL-listed micro USB power adapter, at least 2.5 amp

If you choose to add the optional Raspberry Pi case, I recommend the Super NES and Super Famicom themed cases from [RetroFlag][6]. Not only do these cases look cool, but they also have fully functioning power and reset buttons. This means you can configure the reset and power buttons to directly trigger the operating system's halt process, rather than abruptly terminating power. This definitely makes for a more professional experience, but it does require the installation of a special script. The instructions are on [RetroFlag's GitHub page][7]. Be wary: there are many cases available on Amazon and eBay of varying quality.
Some of them are cheap knock-offs of RetroFlag cases, and others are just lower quality overall. In fact, even cases by RetroFlag vary in quality—I had some power-distribution issues with the NES-themed case that made for an unstable experience. If in doubt, I've found that RetroFlag's Super NES and Super Famicom themed cases work very well.

### Adding games

When you boot RetroPie for the first time, it will resize the filesystem to ensure you have full access to the available space on your SD card and allow you to set up your gamepad. I can't give you links for game ROMs, so I'll leave that part up to you to figure out. When you've found them, simply add them to the RetroPie SD card in the designated folder, which is located under **/home/pi/RetroPie/roms/**. You can use your favorite tool for transferring the ROMs to the Pi, such as [SCP][8] in a terminal (see the sketch at the end of this article), [WinSCP][9], [Samba][10], etc. Once you've added the games, you can rescan them by pressing Start and choosing the option to restart EmulationStation. The rescan updates EmulationStation's game inventory, so when it restarts, it should automatically add menu entries for the ROMs you've added; if you skip that step, EmulationStation won't list any newly added games you copy over.

Regarding the games' performance, your mileage will vary depending on which consoles you're emulating. For example, I've noticed that Sega Dreamcast games barely run at all, and most Nintendo 64 games will run sluggishly with a bad framerate. Many PlayStation Portable (PSP) games also perform inconsistently. However, all of the 8-bit and 16-bit consoles emulate seemingly perfectly—I haven't run into a single 8-bit or 16-bit game that doesn't run well. Surprisingly, games designed for the original PlayStation run great for me, which is quite a feat considering the Raspberry Pi's modest hardware.

Overall, RetroPie's performance is great, but the Raspberry Pi is not as powerful as a gaming PC, so adjust your expectations accordingly.

### Conclusion

RetroPie is a fantastic open source project dedicated to preserving classic games and an asset to game collectors everywhere. Having a digital copy of my physical game collection is extremely convenient. If I were to tell my childhood self that one day I could have an entire game collection on one device, I probably wouldn't believe it. But RetroPie has become a staple in my household and provides hours of fun and enjoyment.

If you want to see the parts I mentioned as well as a quick installation overview, I have [a video][11] on [my YouTube channel][12] that goes over the process and shows off some gameplay at the end.
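
As promised above, here is a quick sketch of the SCP route for copying games over. The hostname is an assumption ("retropie" is the image's usual default, but yours may differ), and the console subfolder must match the system the ROMs belong to:

```
# Copy a few NES ROMs to a RetroPie box over SSH (a sketch; adjust the
# host and paths to your setup). ROM folders are organized per system
# under /home/pi/RetroPie/roms/ (nes, snes, megadrive, and so on).
scp ~/Downloads/*.nes pi@retropie:/home/pi/RetroPie/roms/nes/
```

After the copy finishes, restart EmulationStation as described above so the new games are scanned into the menu.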
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/retropie + +作者:[Jay LaCroix][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jlacroix +[b]: https://github.com/lujun9972 +[1]: https://www.youtube.com/channel/UCxQKHvKbmSzGMvUrVtJYnUA +[2]: https://retropie.org.uk/ +[3]: https://syncthing.net/ +[4]: https://www.backblaze.com/b2/cloud-storage.html +[5]: https://www.balena.io/etcher/ +[6]: https://www.amazon.com/shop/learnlinux.tv?listId=1N9V89LEH5S8K +[7]: https://github.com/RetroFlag/retroflag-picase +[8]: https://en.wikipedia.org/wiki/Secure_copy +[9]: https://winscp.net/eng/index.php +[10]: https://www.samba.org/ +[11]: https://www.youtube.com/watch?v=D8V-KaQzsWM +[12]: http://www.youtube.com/c/LearnLinuxtv diff --git a/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md b/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md new file mode 100644 index 0000000000..fbd8b9d120 --- /dev/null +++ b/sources/tech/20190111 Top 5 Linux Distributions for Productivity.md @@ -0,0 +1,170 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top 5 Linux Distributions for Productivity) +[#]: via: (https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity) +[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen) + +Top 5 Linux Distributions for Productivity +====== + +![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_main.jpg?itok=2IKyg_7_) + +I have to confess, this particular topic is a tough one to address. Why? First off, Linux is a productive operating system by design. Thanks to an incredibly reliable and stable platform, getting work done is easy. Second, to gauge effectiveness, you have to consider what type of work you need a productivity boost for. General office work? Development? School? Data mining? Human resources? You see how this question can get somewhat complicated. + +That doesn’t mean, however, that some distributions aren’t able to do a better job of configuring and presenting that underlying operating system into an efficient platform for getting work done. Quite the contrary. Some distributions do a much better job of “getting out of the way,” so you don’t find yourself in a work-related hole, having to dig yourself out and catch up before the end of day. These distributions help strip away the complexity that can be found in Linux, thereby making your workflow painless. + +Let’s take a look at the distros I consider to be your best bet for productivity. To help make sense of this, I’ve divided them into categories of productivity. That task itself was challenging, because everyone’s productivity varies. For the purposes of this list, however, I’ll look at: + + * General Productivity: For those who just need to work efficiently on multiple tasks. + + * Graphic Design: For those that work with the creation and manipulation of graphic images. + + * Development: For those who use their Linux desktops for programming. + + * Administration: For those who need a distribution to facilitate their system administration tasks. + + * Education: For those who need a desktop distribution to make them more productive in an educational environment. 

Yes, there are more categories to be had, many of which can get very niche-y, but these five should fill most of your needs.

### General Productivity

For general productivity, you won't get much more efficient than [Ubuntu][1]. The primary reason for choosing Ubuntu for this category is the seamless integration of apps, services, and desktop. You might be wondering why I didn't choose Linux Mint for this category. Because Ubuntu now defaults to the GNOME desktop, it gains the added advantage of GNOME Extensions (Figure 1).

![GNOME Clipboard][3]

Figure 1: The GNOME Clipboard Indicator extension in action.

[Used with permission][4]

These extensions go a very long way to aid in boosting productivity (so Ubuntu gets the nod over Mint). But Ubuntu didn't just accept a vanilla GNOME desktop. Instead, they tweaked it to make it slightly more efficient and user-friendly, out of the box. And because Ubuntu contains just the right mixture of default, out-of-the-box apps (that just work), it makes for a nearly perfect platform for productivity.

Whether you need to write a paper, work on a spreadsheet, code a new app, work on your company website, create marketing images, administer a server or network, or manage human resources from within your company HR tool, Ubuntu has you covered. The Ubuntu desktop distribution also doesn't require the user to jump through many hoops to get things working … it simply works (and quite well). Finally, thanks to its Debian base, Ubuntu makes installing third-party apps incredibly easy.

Although Ubuntu tends to be the go-to for nearly every list of "top distributions for X," it's very hard to argue against this particular distribution topping the list of general productivity distributions.

### Graphic Design

If you're looking to up your graphic design productivity, you can't go wrong with [Fedora Design Suite][5]. This Fedora respin was created by the team responsible for all Fedora-related artwork. Although the default selection of apps isn't a massive collection of tools, those it does include are geared specifically for the creation and manipulation of images.

With apps like GIMP, Inkscape, Darktable, Krita, Entangle, Blender, Pitivi, Scribus, and more (Figure 2), you'll find everything you need to get your image editing jobs done and done well. But Fedora Design Suite doesn't end there. This desktop platform also includes a bevy of tutorials that cover countless subjects for many of the installed applications. For anyone trying to be as productive as possible, this is some seriously handy information to have at the ready. I will say, however, the tutorial entry in the GNOME Favorites is nothing more than a link to [this page][6].

![Fedora Design Suite Favorites][8]

Figure 2: The Fedora Design Suite Favorites menu includes plenty of tools for getting your graphic design on.

[Used with permission][4]

Those who work with a digital camera will certainly appreciate the inclusion of the Entangle app, which allows you to control your DSLR from the desktop.

### Development

Nearly all Linux distributions are great platforms for programmers. However, one particular distribution stands out, above the rest, as one of the most productive tools you'll find for the task. That OS comes from [System76][9] and it's called [Pop!_OS][10]. Pop!_OS is tailored specifically for creators, but not of the artistic type. Instead, Pop!_OS is geared toward creators who specialize in developing, programming, and making.
If you need an environment that is not only perfectly suited for your development work, but includes a desktop that's sure to get out of your way, you won't find a better option than Pop!_OS (Figure 3).

What might surprise you (given how "young" this operating system is) is that Pop!_OS is also one of the most stable GNOME-based platforms you'll ever use. This means Pop!_OS isn't just for creators and makers, but anyone looking for a solid operating system. One thing that many users will greatly appreciate with Pop!_OS is that you can download an ISO specifically for your video hardware. If you have Intel hardware, [download][10] the version for Intel/AMD. If your graphics card is NVIDIA, download that specific release. Either way, you are sure to get a solid platform on which to create your masterpiece.

![Pop!_OS][12]

Figure 3: The Pop!_OS take on GNOME Overview.

[Used with permission][4]

Interestingly enough, with Pop!_OS, you won't find much in the way of pre-installed development tools. You won't find an included IDE or many other dev tools. You can, however, find all the development tools you need in the Pop Shop.

### Administration

If you're looking for one of the most productive distributions for admin tasks, look no further than [Debian][13]. Why? Because Debian is not only incredibly reliable, it's one of those distributions that gets out of your way better than most others. Debian is the perfect combination of ease of use and unlimited possibility. On top of that, because this is the distribution on which so many others are based, you can bet if there's an admin tool you need for a task, it's available for Debian. Of course, we're talking about general admin tasks, which means most of the time you'll be using a terminal window to SSH into your servers (Figure 4) or a browser to work with web-based GUI tools on your network. Why bother making use of a desktop that's going to add layers of complexity (such as SELinux in Fedora, or YaST in openSUSE)? Instead, choose simplicity.

![Debian][15]

Figure 4: SSH'ing into a remote server on Debian.

[Used with permission][4]

And because you can select which desktop you want (from GNOME, Xfce, KDE, Cinnamon, MATE, LXDE), you can be sure to have the interface that best matches your work habits.

### Education

If you are a teacher or student, or otherwise involved in education, you need the right tools to be productive. Once upon a time, there existed the likes of Edubuntu, a distribution that reliably appeared near the top of education-related lists. However, that distro hasn't been updated since it was based on Ubuntu 14.04. Fortunately, there's a new education-based distribution ready to take that title, based on openSUSE. This spin is called [openSUSE:Education-Li-f-e][16] (Linux For Education - Figure 5), and is based on openSUSE Leap 42.1 (so it is slightly out of date).
+ +openSUSE:Education-Li-f-e includes tools like: + + * Brain Workshop - A dual n-back brain exercise + + * GCompris - An educational software suite for young children + + * gElemental - A periodic table viewer + + * iGNUit - A general purpose flash card program + + * Little Wizard - Development environment for children based on Pascal + + * Stellarium - An astronomical sky simulator + + * TuxMath - An math tutor game + + * TuxPaint - A drawing program for young children + + * TuxType - An educational typing tutor for children + + * wxMaxima - A cross platform GUI for the computer algebra system + + * Inkscape - Vector graphics program + + * GIMP - Graphic image manipulation program + + * Pencil - GUI prototyping tool + + * Hugin - Panorama photo stitching and HDR merging program + + +![Education][18] + +Figure 5: The openSUSE:Education-Li-f-e distro has plenty of tools to help you be productive in or for school. + +[Used with permission][4] + +Also included with openSUSE:Education-Li-f-e is the [KIWI-LTSP Server][19]. The KIWI-LTSP Server is a flexible, cost effective solution aimed at empowering schools, businesses, and organizations all over the world to easily install and deploy desktop workstations. Although this might not directly aid the student to be more productive, it certainly enables educational institutions be more productive in deploying desktops for students to use. For more information on setting up KIWI-LTSP, check out the openSUSE [KIWI-LTSP quick start guide][20]. + +Learn more about Linux through the free ["Introduction to Linux" ][21]course from The Linux Foundation and edX. + +-------------------------------------------------------------------------------- + +via: https://www.linux.com/blog/learn/2019/1/top-5-linux-distributions-productivity + +作者:[Jack Wallen][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.linux.com/users/jlwallen +[b]: https://github.com/lujun9972 +[1]: https://www.ubuntu.com/ +[2]: /files/images/productivity1jpg +[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_1.jpg?itok=yxez3X1w (GNOME Clipboard) +[4]: /licenses/category/used-permission +[5]: https://labs.fedoraproject.org/en/design-suite/ +[6]: https://fedoraproject.org/wiki/Design_Suite/Tutorials +[7]: /files/images/productivity2jpg +[8]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_2.jpg?itok=ke0b8qyH (Fedora Design Suite Favorites) +[9]: https://system76.com/ +[10]: https://system76.com/pop +[11]: /files/images/productivity3jpg-0 +[12]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_3_0.jpg?itok=8UkCUfsD (Pop!_OS) +[13]: https://www.debian.org/ +[14]: /files/images/productivity4jpg +[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_4.jpg?itok=c9yD3Xw2 (Debian) +[16]: https://en.opensuse.org/openSUSE:Education-Li-f-e +[17]: /files/images/productivity5jpg +[18]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/productivity_5.jpg?itok=oAFtV8nT (Education) +[19]: https://en.opensuse.org/Portal:KIWI-LTSP +[20]: https://en.opensuse.org/SDB:KIWI-LTSP_quick_start +[21]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux diff --git a/sources/tech/20190113 Editing Subtitles in Linux.md b/sources/tech/20190113 Editing Subtitles in 
Linux.md
new file mode 100644
index 0000000000..1eaa6a68fd
--- /dev/null
+++ b/sources/tech/20190113 Editing Subtitles in Linux.md
@@ -0,0 +1,168 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Editing Subtitles in Linux)
[#]: via: (https://itsfoss.com/editing-subtitles)
[#]: author: (Shirish https://itsfoss.com/author/shirish/)

Editing Subtitles in Linux
======

I have been a lover of world cinema and regional movies for decades. Subtitles are the essential tool that has enabled me to enjoy the best movies in various languages and from various countries.

If you enjoy watching movies with subtitles, you might have noticed that sometimes the subtitles are not synced or not correct.

Did you know that you can edit subtitles and make them better? Let me show you some basic subtitle editing in Linux.

![Editing subtitles in Linux][1]

### Extracting subtitles from closed captions data

Around 2012 or 2013, I came to know of a tool called [CCExtractor][2]. As time passed, it has become one of the vital tools for me, especially if I come across a media file which has the subtitle embedded in it.

CCExtractor analyzes video files and produces independent subtitle files from the closed captions data.

CCExtractor is a cross-platform, free and open source tool. The tool has matured quite a bit from its formative years and has been part of [GSOC][3] and Google Code-in now and [then][4].

The tool, to put it simply, is more or less a set of scripts which work one after another in a serialized order to give you an extracted subtitle.

You can follow the installation instructions for CCExtractor on [this page][5].

After installing it, when you want to extract subtitles from a media file, run:

```
ccextractor <media file>
```

The command reports what it finds as it works. It basically scans the media file. In this case, it found that the media file is in Malayalam and that the media container is an [.mkv][6] container. It extracted the subtitle file with the same name as the video file, adding _eng to it.

CCExtractor is a wonderful tool which can be used to enhance subtitles, along with Subtitle Editor, which I will cover in the next section.

> Interesting Read: There is an interesting synopsis of subtitles at [vicaps][7] which explains why subtitles are so important to us. It goes into quite a bit of detail of movie-making as well, for those interested in such topics.

### Editing subtitles with the Subtitle Editor tool

You probably are aware that most subtitles are in the [.srt format][8]. The beautiful thing about this format is that it is plain text: numbered entries, each with a time range and the subtitle text. That means you can load an .srt file into your text editor and make little fixes in it.

The subtitle I am working with here is from a pretty old German movie called [The Cabinet of Dr. Caligari (1920)][9].

Subtitle Editor is a wonderful tool when it comes to editing subtitles. It can be used to manipulate time durations, adjust the frame rate of the subtitle file to be in sync with the media file, change the duration of breaks in between, and much more. I'll share some basic subtitle editing here.

![][10]

First install subtitleeditor the same way you installed ccextractor, using your favorite installation method.
In Debian, you can use this command:

```
sudo apt install subtitleeditor
```

When you have it installed, let's look at some of the common scenarios where you need to edit a subtitle.

#### Manipulating frame rates to sync with the media file

If you find that the subtitles are not synced with the video, one of the reasons could be a difference between the frame rates of the video file and the subtitle file.

How do you find out the frame rates of these files, then?

To get the frame rate of a video file, you can use the mediainfo tool. You may need to install it first using your distribution's package manager.

Using mediainfo is simple:

```
$ mediainfo somefile.mkv | grep Frame
 Format settings : CABAC / 4 Ref Frames
 Format settings, ReFrames : 4 frames
 Frame rate mode : Constant
 Frame rate : 25.000 FPS
 Bits/(Pixel*Frame) : 0.082
 Frame rate : 46.875 FPS (1024 SPF)
```

Now you can see that the frame rate of the video file is 25.000 FPS. The other frame rate we see is for the audio. While I could explain why particular frame rates are used in video encoding, audio encoding, etc., it would be a different subject matter; there is a lot of history associated with it.

Next is to find out the frame rate of the subtitle file, and this part is slightly more complicated.

Usually, subtitles come zipped. Unzip the .zip archive to get the subtitle file, which ends in .srt. Along with it, there is usually also a .info file with the same name, which sometimes holds the frame rate of the subtitle.

If not, it is usually a good idea to download the subtitle from a site that lists that frame rate information. For this specific German file, I will be using [OpenSubtitles.org][11].

As you can see in the link, the frame rate of the subtitle is 23.976 FPS. Quite obviously, it won't play well with my video file, whose frame rate is 25.000 FPS.

In such cases, you can change the frame rate of the subtitle file using the Subtitle Editor tool:

Select all the contents of the subtitle file with CTRL+A. Go to Timings -> Change Framerate and change the frame rate from 23.976 fps to 25.000 fps, or to whatever rate is desired. Save the changed file.

![synchronize frame rates of subtitles in Linux][12]

#### Changing the starting position of a subtitle file

Sometimes the above method is enough; sometimes, though, it is not.

You might find cases where the start of the subtitle file is different from that of the movie or media file, while the frame rate is the same.

In such cases, do the following:

Select all the contents of the subtitle file with CTRL+A. Go to Timings -> Move Subtitle.

![Move subtitles using Subtitle Editor on Linux][13]

Change the new starting position of the subtitle file. Save the changed file.

![Move subtitles using Subtitle Editor in Linux][14]

If you want to be more accurate, use [mpv][15] to watch the movie or media file: clicking on the timing bar, which shows how much of the movie or media file has elapsed, also reveals the timestamp down to fractions of a second.

I usually like to be as precise as possible, but it is difficult in mpv, as human reaction time is imprecise. If I want to be extremely accurate, I use something like [Audacity][16], but that is another ball game altogether, as you can do so much more with it. That may be something to explore in a future blog post as well.
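
If you would rather script a fixed shift than click around, here is a small sketch using ffmpeg, which can read and write .srt files. Note that ffmpeg is a separate tool you may need to install; it is not part of Subtitle Editor:

```
# Delay every subtitle in input.srt by 2.5 seconds and write the result
# to shifted.srt; a negative offset makes the subtitles appear earlier.
# Assumes ffmpeg is installed.
ffmpeg -itsoffset 2.5 -i input.srt shifted.srt
```

Check the shifted file against the video in mpv before replacing your original subtitle.
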
#### Manipulating Duration

Sometimes even doing both is not enough, and you have to shrink or extend durations to make everything sync with the media file. This is one of the more tedious tasks, as you have to individually fix the duration of each sentence. It can be necessary especially if you have variable frame rates in the media file (nowadays rare, but you still get such files).

In such a scenario, you may have to edit the durations manually; automation is not possible. The best way is either to fix the video file (not possible without degrading the video quality) or to get the video from another source at a higher quality and then [transcode][17] it with the settings you prefer. That, again, is a major undertaking I could shed some light on in some future blog post.

### Conclusion

What I have shared above is more or less about improving existing subtitle files. If you were to start from scratch, you would need loads of time. I haven't covered that at all because a movie or any video material of, say, an hour can easily take anywhere from 4-6 hours or even more, depending upon the skills of the subtitler: patience, context, jargon, accents, whether they are a native English speaker, translation skill, etc., all of which makes a difference to the quality of the subtitle.

I hope you find this interesting, and that from now on you'll handle your subtitles slightly better. If you have any suggestions to add, please leave a comment below.


--------------------------------------------------------------------------------

via: https://itsfoss.com/editing-subtitles

作者:[Shirish][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/shirish/
[b]: https://github.com/lujun9972
[1]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/editing-subtitles-in-linux.jpeg?resize=800%2C450&ssl=1
[2]: https://www.ccextractor.org/
[3]: https://itsfoss.com/best-open-source-internships/
[4]: https://www.ccextractor.org/public:codein:google_code-in_2018
[5]: https://github.com/CCExtractor/ccextractor/wiki/Installation
[6]: https://en.wikipedia.org/wiki/Matroska
[7]: https://www.vicaps.com/blog/history-of-silent-movies-and-subtitles/
[8]: https://en.wikipedia.org/wiki/SubRip#SubRip_text_file_format
[9]: https://www.imdb.com/title/tt0010323/
[10]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2018/12/subtitleeditor.jpg?ssl=1
[11]: https://www.opensubtitles.org/en/search/sublanguageid-eng/idmovie-4105
[12]: https://i1.wp.com/itsfoss.com/wp-content/uploads/2019/01/subtitleeditor-frame-rate-sync.jpg?resize=800%2C450&ssl=1
[13]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/Move-subtitles-Caligiri.jpg?resize=800%2C450&ssl=1
[14]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/move-subtitles.jpg?ssl=1
[15]: https://itsfoss.com/mpv-video-player/
[16]: https://www.audacityteam.org/
[17]: https://en.wikipedia.org/wiki/Transcoding
[18]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/editing-subtitles-in-linux.jpeg?fit=800%2C450&ssl=1
diff --git a/sources/tech/20190116 Zipping files on Linux- the many variations and how to use them.md b/sources/tech/20190116 Zipping files on Linux- the many variations and how to use them.md
new file mode 100644
index 0000000000..fb98f78b06
--- /dev/null
+++ b/sources/tech/20190116 Zipping files on Linux- the many variations and how to use them.md
@@ -0,0 +1,324 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Zipping files on Linux: the many variations and how to use them)
[#]: via: (https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-the-many-variations-and-how-to-use-them.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

Zipping files on Linux: the many variations and how to use them
======
![](https://images.idgesg.net/images/article/2019/01/zipper-100785364-large.jpg)

Some of us have been zipping files on Unix and Linux systems for many decades — to save some disk space and package files together for archiving. Even so, there are some interesting variations on zipping that not all of us have tried. So, in this post, we're going to look at standard zipping and unzipping as well as some other interesting zipping options.

### The basic zip command

First, let's look at the basic **zip** command. It uses what is essentially the same compression algorithm as **gzip**, but there are a couple of important differences. For one thing, the gzip command is used only for compressing a single file, whereas zip can both compress files and join them together into an archive. For another, the gzip command zips "in place". In other words, it replaces the original file with the compressed one rather than leaving the original alongside the compressed copy. Here's an example of gzip at work:

```
$ gzip onefile
$ ls -l
-rw-rw-r-- 1 shs shs 10514 Jan 15 13:13 onefile.gz
```

And here's zip. Notice how this command requires that a name be provided for the zipped archive, whereas gzip simply uses the original file name and adds the .gz extension.

```
$ zip twofiles.zip file*
 adding: file1 (deflated 82%)
 adding: file2 (deflated 82%)
$ ls -l
-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1
-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip
```

Notice also that the original files are still sitting there.

The amount of disk space that is saved (i.e., the degree of compression obtained) will depend on the content of each file. The variation in the example below is considerable.

```
$ zip mybin.zip ~/bin/*
 adding: bin/1 (deflated 26%)
 adding: bin/append (deflated 64%)
 adding: bin/BoD_meeting (deflated 18%)
 adding: bin/cpuhog1 (deflated 14%)
 adding: bin/cpuhog2 (stored 0%)
 adding: bin/ff (deflated 32%)
 adding: bin/file.0 (deflated 1%)
 adding: bin/loop (deflated 14%)
 adding: bin/notes (deflated 23%)
 adding: bin/patterns (stored 0%)
 adding: bin/runme (stored 0%)
 adding: bin/tryme (deflated 13%)
 adding: bin/tt (deflated 6%)
```

### The unzip command

The **unzip** command will recover the contents from a zip file and, as you'd likely suspect, leave the zip file intact, whereas a similar gunzip command would leave only the uncompressed file.

```
$ unzip twofiles.zip
Archive: twofiles.zip
 inflating: file1
 inflating: file2
$ ls -l
-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1
-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2
-rw-rw-r-- 1 shs shs 21289 Jan 15 13:35 twofiles.zip
```

### The zipcloak command

The **zipcloak** command encrypts a zip file, prompting you to enter a password twice (to help ensure you don't "fat finger" it) and leaves the file in place. You can expect the file size to vary a little from the original.
+ +``` +$ zipcloak twofiles.zip +Enter password: +Verify password: +encrypting: file1 +encrypting: file2 +$ ls -l +total 204 +-rw-rw-r-- 1 shs shs 58021 Jan 15 13:25 file1 +-rw-rw-r-- 1 shs shs 58933 Jan 15 13:34 file2 +-rw-rw-r-- 1 shs shs 21313 Jan 15 13:46 twofiles.zip <== slightly larger than + unencrypted version +``` + +Keep in mind that the original files are still sitting there unencrypted. + +### The zipdetails command + +The **zipdetails** command is going to show you details — a _lot_ of details about a zipped file, likely a lot more than you care to absorb. Even though we're looking at an encrypted file, zipdetails does display the file names along with file modification dates, user and group information, file length data, etc. Keep in mind that this is all "metadata." We don't see the contents of the files. + +``` +$ zipdetails twofiles.zip + +0000 LOCAL HEADER #1 04034B50 +0004 Extract Zip Spec 14 '2.0' +0005 Extract OS 00 'MS-DOS' +0006 General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +0008 Compression Method 0008 'Deflated' +000A Last Mod Time 4E2F6B24 'Tue Jan 15 13:25:08 2019' +000E CRC F1B115BD +0012 Compressed Length 00002904 +0016 Uncompressed Length 0000E2A5 +001A Filename Length 0005 +001C Extra Length 001C +001E Filename 'file1' +0023 Extra ID #0001 5455 'UT: Extended Timestamp' +0025 Length 0009 +0027 Flags '03 mod access' +0028 Mod Time 5C3E2584 'Tue Jan 15 13:25:08 2019' +002C Access Time 5C3E27BB 'Tue Jan 15 13:34:35 2019' +0030 Extra ID #0002 7875 'ux: Unix Extra Type 3' +0032 Length 000B +0034 Version 01 +0035 UID Size 04 +0036 UID 000003E8 +003A GID Size 04 +003B GID 000003E8 +003F PAYLOAD + +2943 LOCAL HEADER #2 04034B50 +2947 Extract Zip Spec 14 '2.0' +2948 Extract OS 00 'MS-DOS' +2949 General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +294B Compression Method 0008 'Deflated' +294D Last Mod Time 4E2F6C56 'Tue Jan 15 13:34:44 2019' +2951 CRC EC214569 +2955 Compressed Length 00002913 +2959 Uncompressed Length 0000E635 +295D Filename Length 0005 +295F Extra Length 001C +2961 Filename 'file2' +2966 Extra ID #0001 5455 'UT: Extended Timestamp' +2968 Length 0009 +296A Flags '03 mod access' +296B Mod Time 5C3E27C4 'Tue Jan 15 13:34:44 2019' +296F Access Time 5C3E27BD 'Tue Jan 15 13:34:37 2019' +2973 Extra ID #0002 7875 'ux: Unix Extra Type 3' +2975 Length 000B +2977 Version 01 +2978 UID Size 04 +2979 UID 000003E8 +297D GID Size 04 +297E GID 000003E8 +2982 PAYLOAD + +5295 CENTRAL HEADER #1 02014B50 +5299 Created Zip Spec 1E '3.0' +529A Created OS 03 'Unix' +529B Extract Zip Spec 14 '2.0' +529C Extract OS 00 'MS-DOS' +529D General Purpose Flag 0001 + [Bit 0] 1 'Encryption' + [Bits 1-2] 1 'Maximum Compression' +529F Compression Method 0008 'Deflated' +52A1 Last Mod Time 4E2F6B24 'Tue Jan 15 13:25:08 2019' +52A5 CRC F1B115BD +52A9 Compressed Length 00002904 +52AD Uncompressed Length 0000E2A5 +52B1 Filename Length 0005 +52B3 Extra Length 0018 +52B5 Comment Length 0000 +52B7 Disk Start 0000 +52B9 Int File Attributes 0001 + [Bit 0] 1 Text Data +52BB Ext File Attributes 81B40000 +52BF Local Header Offset 00000000 +52C3 Filename 'file1' +52C8 Extra ID #0001 5455 'UT: Extended Timestamp' +52CA Length 0005 +52CC Flags '03 mod access' +52CD Mod Time 5C3E2584 'Tue Jan 15 13:25:08 2019' +52D1 Extra ID #0002 7875 'ux: Unix Extra Type 3' +52D3 Length 000B +52D5 Version 01 +52D6 UID Size 04 +52D7 UID 000003E8 +52DB GID Size 04 +52DC GID 000003E8 + +52E0 CENTRAL HEADER #2 02014B50 +52E4 Created Zip Spec 1E 
'3.0'
52E5 Created OS 03 'Unix'
52E6 Extract Zip Spec 14 '2.0'
52E7 Extract OS 00 'MS-DOS'
52E8 General Purpose Flag 0001
 [Bit 0] 1 'Encryption'
 [Bits 1-2] 1 'Maximum Compression'
52EA Compression Method 0008 'Deflated'
52EC Last Mod Time 4E2F6C56 'Tue Jan 15 13:34:44 2019'
52F0 CRC EC214569
52F4 Compressed Length 00002913
52F8 Uncompressed Length 0000E635
52FC Filename Length 0005
52FE Extra Length 0018
5300 Comment Length 0000
5302 Disk Start 0000
5304 Int File Attributes 0001
 [Bit 0] 1 Text Data
5306 Ext File Attributes 81B40000
530A Local Header Offset 00002943
530E Filename 'file2'
5313 Extra ID #0001 5455 'UT: Extended Timestamp'
5315 Length 0005
5317 Flags '03 mod access'
5318 Mod Time 5C3E27C4 'Tue Jan 15 13:34:44 2019'
531C Extra ID #0002 7875 'ux: Unix Extra Type 3'
531E Length 000B
5320 Version 01
5321 UID Size 04
5322 UID 000003E8
5326 GID Size 04
5327 GID 000003E8

532B END CENTRAL HEADER 06054B50
532F Number of this disk 0000
5331 Central Dir Disk no 0000
5333 Entries in this disk 0002
5335 Total Entries 0002
5337 Size of Central Dir 00000096
533B Offset to Central Dir 00005295
533F Comment Length 0000
Done
```

### The zipgrep command

The **zipgrep** command is going to use a grep-type feature to locate particular content in your zipped files. If the file is encrypted, you will need to enter the password provided for the encryption for each file you want to examine. If you only want to check the contents of a single file from the archive, add its name to the end of the zipgrep command as shown below.

```
$ zipgrep hazard twofiles.zip file1
[twofiles.zip] file1 password:
Certain pesticides should be banned since they are hazardous to the environment.
```

### The zipinfo command

The **zipinfo** command provides information on the contents of a zipped file, whether encrypted or not. This includes the file names, sizes, dates and permissions.

```
$ zipinfo twofiles.zip
Archive: twofiles.zip
Zip file size: 21313 bytes, number of entries: 2
-rw-rw-r-- 3.0 unx 58021 Tx defN 19-Jan-15 13:25 file1
-rw-rw-r-- 3.0 unx 58933 Tx defN 19-Jan-15 13:34 file2
2 files, 116954 bytes uncompressed, 20991 bytes compressed: 82.1%
```

### The zipnote command

The **zipnote** command can be used to extract comments from zip archives or add them. To display comments, just run the command with the name of the archive. If no comments have been added previously, you will see something like this:

```
$ zipnote twofiles.zip
@ file1
@ (comment above this line)
@ file2
@ (comment above this line)
@ (zip file comment below this line)
```

If you want to add comments, write the output from the zipnote command to a file:

```
$ zipnote twofiles.zip > comments
```

Next, edit the file you've just created, inserting your comments above the **(comment above this line)** lines. Then add the comments using a zipnote command like this one:

```
$ zipnote -w twofiles.zip < comments
```

### The zipsplit command

The **zipsplit** command can be used to break a zip archive into multiple zip archives when the original file is too large — maybe because you're trying to add one of the files to a small thumb drive. The easiest way to do this seems to be to specify the max size for each of the zipped file portions. This size must be large enough to accommodate the largest included file.
+ +``` +$ zipsplit -n 12000 twofiles.zip +2 zip files will be made (100% efficiency) +creating: twofile1.zip +creating: twofile2.zip +$ ls twofile*.zip +-rw-rw-r-- 1 shs shs 10697 Jan 15 14:52 twofile1.zip +-rw-rw-r-- 1 shs shs 10702 Jan 15 14:52 twofile2.zip +-rw-rw-r-- 1 shs shs 21377 Jan 15 14:27 twofiles.zip +``` + +Notice how the extracted files are sequentially named "twofile1" and "twofile2". + +### Wrap-up + +The **zip** command, along with some of its zipping compatriots, provide a lot of control over how you generate and work with compressed file archives. + +**[ Also see:[Invaluable tips and tricks for troubleshooting Linux][1] ]** + +Join the Network World communities on [Facebook][2] and [LinkedIn][3] to comment on topics that are top of mind. + +-------------------------------------------------------------------------------- + +via: https://www.networkworld.com/article/3333640/linux/zipping-files-on-linux-the-many-variations-and-how-to-use-them.html + +作者:[Sandra Henry-Stocker][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/ +[b]: https://github.com/lujun9972 +[1]: https://www.networkworld.com/article/3242170/linux/invaluable-tips-and-tricks-for-troubleshooting-linux.html +[2]: https://www.facebook.com/NetworkWorld/ +[3]: https://www.linkedin.com/company/network-world diff --git a/sources/tech/20190119 Get started with Roland, a random selection tool for the command line.md b/sources/tech/20190119 Get started with Roland, a random selection tool for the command line.md new file mode 100644 index 0000000000..edf787447b --- /dev/null +++ b/sources/tech/20190119 Get started with Roland, a random selection tool for the command line.md @@ -0,0 +1,90 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Get started with Roland, a random selection tool for the command line) +[#]: via: (https://opensource.com/article/19/1/productivity-tools-roland) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney)) + +Get started with Roland, a random selection tool for the command line +====== + +Get help making hard choices with Roland, the seventh in our series on open source tools that will make you more productive in 2019. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/dice_tabletop_board_gaming_game.jpg?itok=y93eW7HN) + +There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way. + +Here's the seventh of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019. + +### Roland + +By the time the workday has ended, often the only thing I want to think about is hitting the couch and playing the video game of the week. But even though my professional obligations stop at the end of the workday, I still have to manage my household. Laundry, pet care, making sure my teenager has what he needs, and most important: deciding what to make for dinner. 

Like many people, I often suffer from [decision fatigue][1], and I make less-than-healthy choices for dinner based on speed, ease of preparation, and (quite frankly) whatever causes me the least stress.

![](https://opensource.com/sites/default/files/uploads/roland-1.png)

[Roland][2] makes planning my meals much easier. Roland is a Perl application designed for tabletop role-playing games. It picks randomly from a list of items, such as monsters and hirelings. In essence, Roland does the same thing at the command line that a game master does when rolling physical dice to look up things in a table from the Game Master's Big Book of Bad Things to Do to Players.

With minor modifications, Roland can do so much more. For example, just by adding a table, I can enable Roland to help me choose what to cook for dinner.

The first step is installing Roland and all its dependencies.

```
git clone git@github.com:rjbs/Roland.git
cpan install Getopt::Long::Descriptive Moose \
   namespace::autoclean List::AllUtils Games::Dice \
   Sort::ByExample Data::Bucketeer Text::Autoformat \
   YAML::XS
cd Roland
```

Next, I create a YAML document named **dinner** and enter all our meal options.

```
type: list
pick: 1
items:
 - "frozen pizza"
 - "chipotle black beans"
 - "huevos rancheros"
 - "nachos"
 - "pork roast"
 - "15 bean soup"
 - "roast chicken"
 - "pot roast"
 - "grilled cheese sandwiches"
```

Running the command **bin/roland dinner** will read the file and pick one of the options.

![](https://opensource.com/sites/default/files/uploads/roland-2.png)

I like to plan for the week ahead so I can shop for all my ingredients in advance. The **pick** value determines how many items to choose from the list, and right now, **pick** is set to 1. If I want to plan a full week's dinner menu, I can just change **pick: 1** to **pick: 7** and it will give me a week's worth of dinners. You can also use the **-m** command line option to manually enter the choices.

![](https://opensource.com/sites/default/files/uploads/roland-3.png)

You can also do fun things with Roland, like adding a file named **8ball** with some classic phrases.

![](https://opensource.com/sites/default/files/uploads/roland-4.png)

You can create all kinds of files to help with common decisions that seem so stressful after a long day of work. And even if you don't use it for that, you can still use it to decide which devious trap to set up for tonight's game.
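
To see how little ceremony a new table needs, here is a sketch of a hypothetical **chores** file that reuses the same YAML schema as the **dinner** example above (type, pick, items), created and rolled from the shell:

```
# A hypothetical "chores" table using the same schema as the dinner
# example; pick: 2 draws two items at once.
cat > chores <<'EOF'
type: list
pick: 2
items:
 - "laundry"
 - "dishes"
 - "vacuum the living room"
 - "clean the litter box"
 - "water the plants"
EOF

bin/roland chores
```

Every decision file follows the same pattern, so building up a small library of them takes only minutes.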
+ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/productivity-tools-roland + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney (Kevin Sonney) +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Decision_fatigue +[2]: https://github.com/rjbs/Roland diff --git a/sources/tech/20190121 Akira- The Linux Design Tool We-ve Always Wanted.md b/sources/tech/20190121 Akira- The Linux Design Tool We-ve Always Wanted.md new file mode 100644 index 0000000000..bd58eca5bf --- /dev/null +++ b/sources/tech/20190121 Akira- The Linux Design Tool We-ve Always Wanted.md @@ -0,0 +1,92 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Akira: The Linux Design Tool We’ve Always Wanted?) +[#]: via: (https://itsfoss.com/akira-design-tool) +[#]: author: (Ankush Das https://itsfoss.com/author/ankush/) + +Akira: The Linux Design Tool We’ve Always Wanted? +====== + +Let’s make it clear, I am not a professional designer – but I’ve used certain tools on Windows (like Photoshop, Illustrator, etc.) and [Figma][1] (which is a browser-based interface design tool). I’m sure there are a lot more design tools available for Mac and Windows. + +Even on Linux, there is a limited number of dedicated [graphic design tools][2]. A few of these tools like [GIMP][3] and [Inkscape][4] are used by professionals as well. But most of them are not considered professional grade, unfortunately. + +Even if there are a couple more solutions – I’ve never come across a native Linux application that could replace [Sketch][5], Figma, or Adobe **** XD. Any professional designer would agree to that, isn’t it? + +### Is Akira going to replace Sketch, Figma, and Adobe XD on Linux? + +Well, in order to develop something that would replace those awesome proprietary tools – [Alessandro Castellani][6] – came up with a [Kickstarter campaign][7] by teaming up with a couple of experienced developers – +[Alberto Fanjul][8], [Bilal Elmoussaoui][9], and [Felipe Escoto][10]. + +So, yes, Akira is still pretty much just an idea- with a working prototype of its interface (as I observed in their [live stream session][11] via Kickstarter recently). + +### If it does not exist, why the Kickstarter campaign? + +![][12] + +The aim of the Kickstarter campaign is to gather funds in order to hire the developers and take a few months off to dedicate their time in order to make Akira possible. + +Nonetheless, if you want to support the project, you should know some details, right? + +Fret not, we asked a couple of questions in their livestream session – let’s get into it… + +### Akira: A few more details + +![Akira prototype interface][13] +Image Credits: Kickstarter + +As the Kickstarter campaign describes: + +> The main purpose of Akira is to offer a fast and intuitive tool to **create Web and Mobile interfaces** , more like **Sketch** , **Figma** , or **Adobe XD** , with a completely native experience for Linux. + +They’ve also written a detailed description as to how the tool will be different from Inkscape, Glade, or QML Editor. Of course, if you want all the technical details, [Kickstarter][7] is the way to go. 
But, before that, let's take a look at what they had to say when I asked some questions about Akira.

Q: If you consider your project similar to what Figma offers, why should one consider installing Akira instead of using the web-based tool? Is it just going to be a clone of those tools, offering a native Linux experience, or is there something really interesting that will encourage users to switch (other than being an open source solution)?

**Akira:** A native experience on Linux is always better and faster in comparison to a web-based Electron app. Also, the hardware configuration matters if you choose to utilize Figma, but Akira will be light on system resources and you will still be able to do similar stuff without needing to go online.

Q: Let's assume that it becomes the open source solution that Linux users have been waiting for (with features similar to those offered by proprietary tools). What are your plans to sustain it? Do you plan to introduce any pricing plans, or rely on donations?

**Akira:** The project will mostly rely on donations (something like the [Krita Foundation][14] could be an idea). But there will be no "pro" pricing plans; it will be available for free and it will be an open source project.

So, with the responses I got, it definitely seems to be something promising that we should probably support.

### Wrapping Up

What do you think about Akira? Is it just going to remain a concept? Or do you hope to see it in action?

Let us know your thoughts in the comments below.

![][15]

--------------------------------------------------------------------------------

via: https://itsfoss.com/akira-design-tool

作者:[Ankush Das][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://itsfoss.com/author/ankush/
[b]: https://github.com/lujun9972
[1]: https://www.figma.com/
[2]: https://itsfoss.com/best-linux-graphic-design-software/
[3]: https://itsfoss.com/gimp-2-10-release/
[4]: https://inkscape.org/
[5]: https://www.sketchapp.com/
[6]: https://github.com/Alecaddd
[7]: https://www.kickstarter.com/projects/alecaddd/akira-the-linux-design-tool/description
[8]: https://github.com/albfan
[9]: https://github.com/bilelmoussaoui
[10]: https://github.com/Philip-Scott
[11]: https://live.kickstarter.com/alessandro-castellani/live-stream/the-current-state-of-akira
[12]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/akira-design-tool-kickstarter.jpg?resize=800%2C451&ssl=1
[13]: https://i2.wp.com/itsfoss.com/wp-content/uploads/2019/01/akira-mockup.png?ssl=1
[14]: https://krita.org/en/about/krita-foundation/
[15]: https://i0.wp.com/itsfoss.com/wp-content/uploads/2019/01/akira-design-tool-kickstarter.jpg?fit=812%2C458&ssl=1
diff --git a/sources/tech/20190121 Get started with TaskBoard, a lightweight kanban board.md b/sources/tech/20190121 Get started with TaskBoard, a lightweight kanban board.md
new file mode 100644
index 0000000000..e77e5e3b1c
--- /dev/null
+++ b/sources/tech/20190121 Get started with TaskBoard, a lightweight kanban board.md
@@ -0,0 +1,59 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with TaskBoard, a lightweight kanban board)
[#]: via: (https://opensource.com/article/19/1/productivity-tool-taskboard)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))

Get started with TaskBoard, a lightweight kanban board
======
Check out the ninth tool in our series on open source tools that will make you more productive in 2019.

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/checklist_hands_team_collaboration.png?itok=u82QepPk)

There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way.

Here's the ninth of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019.

### TaskBoard

As I wrote in the [second article][1] in this series, [kanban boards][2] are pretty popular these days. And not all kanban boards are created equal. [TaskBoard][3] is a PHP application that is easy to set up on an existing web server and has a feature set that makes it easy to use and manage.

![](https://opensource.com/sites/default/files/uploads/taskboard-1.png)

[Installation][4] is as simple as unzipping the files on your web server, running a script or two, and making sure the correct directories are accessible. The first time you start it up, you're presented with a login form, and then it's time to start adding users and making boards. Board creation options include adding the columns you want to use and setting the default color of the cards. You can also assign users to boards so everyone sees only the boards they need to see.

User management is lightweight, and all accounts are local to the server. You can set a default board for everyone on the server, and users can set their own default boards, too. These options can be useful when someone works on one board more than others.

![](https://opensource.com/sites/default/files/uploads/taskboard-2.png)

TaskBoard also allows you to create automatic actions, which are actions taken upon changes to user assignment, columns, or card categories. Although TaskBoard is not as powerful as some other kanban apps, you can set up automatic actions to make cards more visible for board users, clear due dates, and auto-assign new cards to people as needed. For example, in the screenshot below, if a card is assigned to the "admin" user, its color is changed to red, and when a card is assigned to my user, its color is changed to teal. I've also added an action to clear an item's due date if it's added to the "To-Do" column and to auto-assign cards to my user when that happens.

![](https://opensource.com/sites/default/files/uploads/taskboard-3.png)

The cards are very straightforward. While they don't have a start date, they do have end dates and a points field. Points can be used for estimating the time needed, effort required, or just general priority. Using points is optional, but if you are using TaskBoard for scrum planning or other agile techniques, it is a really handy feature. You can also filter the view by users and categories. This can be helpful on a team with multiple work streams going on, as it allows a team lead or manager to get status information about progress or a person's workload.

![](https://opensource.com/sites/default/files/uploads/taskboard-4.png)

If you need a reasonably lightweight kanban board, check out TaskBoard.
It installs quickly, has some nice features, and is very, very easy to use. It's also flexible enough to be used for development teams, personal task tracking, and a whole lot more.


--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/productivity-tool-taskboard

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[译者ID](https://github.com/译者ID)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ksonney (Kevin Sonney)
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/19/1/productivity-tool-wekan
[2]: https://en.wikipedia.org/wiki/Kanban
[3]: https://taskboard.matthewross.me/
[4]: https://taskboard.matthewross.me/docs/
diff --git a/sources/tech/20190121 How to Resize OpenStack Instance (Virtual Machine) from Command line.md b/sources/tech/20190121 How to Resize OpenStack Instance (Virtual Machine) from Command line.md
new file mode 100644
index 0000000000..e235cabdbf
--- /dev/null
+++ b/sources/tech/20190121 How to Resize OpenStack Instance (Virtual Machine) from Command line.md
@@ -0,0 +1,149 @@
[#]: collector: (lujun9972)
[#]: translator: ( )
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (How to Resize OpenStack Instance (Virtual Machine) from Command line)
[#]: via: (https://www.linuxtechi.com/resize-openstack-instance-command-line/)
[#]: author: (Pradeep Kumar http://www.linuxtechi.com/author/pradeep/)

How to Resize OpenStack Instance (Virtual Machine) from Command line
====== 

For a cloud administrator, resizing an instance or changing the resources of a virtual machine is one of the most common tasks.

![](https://www.linuxtechi.com/wp-content/uploads/2019/01/Resize-openstack-instance.jpg)

In an OpenStack environment, there are scenarios where a cloud user has spun up a VM using some flavor (like m1.small) whose root partition disk size is 20 GB, but at some point the user wants to extend the root partition size to 40 GB. Resizing a VM's root partition can be accomplished with the resize option of the nova command. During the resize, we need to specify a new flavor that includes a disk size of 40 GB.

**Note:** Once you extend instance resources like RAM, CPU and disk using the resize option in OpenStack, you can't reduce them.

**Read More on** : [**How to Create and Delete Virtual Machine(VM) from Command line in OpenStack**][1]

In this tutorial I will demonstrate how to resize an OpenStack instance from the command line. Let's assume I have an existing instance named “ **test_resize_vm** ”, its associated flavor is “m1.small”, and its root partition disk size is 20 GB.
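In this walkthrough the target flavor (“m1.medium”, with a 40 GB disk) already exists in the cloud. If your environment has no flavor of the size you need, an admin can define one up front. A minimal sketch that would reproduce the “m1.medium” flavor shown in the flavor list below; the ID and sizes mirror that list, so adjust them for your own cloud:

```
# Run as an admin user; the values mirror the m1.medium flavor used below
:~# openstack flavor create --id 3 --ram 4096 --disk 40 --vcpus 2 m1.medium
```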
Execute the below command from the controller node to check on which compute host our VM “test_resize_vm” is provisioned, and its flavor details:

```
:~# openstack server show test_resize_vm | grep -E "flavor|hypervisor"
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-57    |
| flavor                               | m1.small (2)  |
:~#
```

Log in to the VM as well and check the root partition size:

```
[root@test_resize_vm ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1      xfs        20G  885M   20G   5% /
devtmpfs       devtmpfs  900M     0  900M   0% /dev
tmpfs          tmpfs     920M     0  920M   0% /dev/shm
tmpfs          tmpfs     920M  8.4M  912M   1% /run
tmpfs          tmpfs     920M     0  920M   0% /sys/fs/cgroup
tmpfs          tmpfs     184M     0  184M   0% /run/user/1000
[root@test_resize_vm ~]# echo "test file for resize operation" > demofile
[root@test_resize_vm ~]# cat demofile
test file for resize operation
[root@test_resize_vm ~]#
```

Get the available flavor list using the below command:

```
:~# openstack flavor list
+--------------------------------------+-----------------+-------+------+-----------+-------+-----------+
| ID                                   | Name            |   RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------------+-------+------+-----------+-------+-----------+
| 2                                    | m1.small        |  2048 |   20 |         0 |     1 | True      |
| 3                                    | m1.medium       |  4096 |   40 |         0 |     2 | True      |
| 4                                    | m1.large        |  8192 |   80 |         0 |     4 | True      |
| 5                                    | m1.xlarge       | 16384 |  160 |         0 |     8 | True      |
+--------------------------------------+-----------------+-------+------+-----------+-------+-----------+
```

So we will use the flavor “m1.medium” for the resize operation. Run the below nova command to resize “test_resize_vm”.

Syntax: # nova resize {VM_Name} {flavor_id} --poll

```
:~# nova resize test_resize_vm 3 --poll
Server resizing... 
100% complete
Finished
:~#
```

Now confirm the resize operation using the “ **openstack server resize --confirm** ” command:

```
~# openstack server list | grep -i test_resize_vm
| 1d56f37f-94bd-4eef-9ff7-3dccb4682ce0 | test_resize_vm | VERIFY_RESIZE |private-net=10.20.10.51                                  |
:~#
```

As we can see in the above command output, the current status of the VM is “ **VERIFY_RESIZE** ”. Execute the below command to confirm the resize:

```
~# openstack server resize --confirm 1d56f37f-94bd-4eef-9ff7-3dccb4682ce0
~#
```

After the resize confirmation, the status of the VM becomes active. Now re-verify the hypervisor and flavor details for the VM:

```
:~# openstack server show test_resize_vm | grep -E "flavor|hypervisor"
| OS-EXT-SRV-ATTR:hypervisor_hostname  | compute-58   |
| flavor                               | m1.medium (3)|
```

Log in to your VM now and verify the root partition size:

```
[root@test_resize_vm ~]# df -Th
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/vda1      xfs        40G  887M   40G   3% /
devtmpfs       devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs          tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs          tmpfs     1.9G  8.4M  1.9G   1% /run
tmpfs          tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
tmpfs          tmpfs     380M     0  380M   0% /run/user/1000
[root@test_resize_vm ~]# cat demofile
test file for resize operation
[root@test_resize_vm ~]#
```

This confirms that the VM root partition has been resized successfully.

**Note:** If for some reason the resize operation was not successful and you want to revert the VM back to its previous state, then run the following command:

```
# openstack server resize --revert {instance_uuid}
```

If you have noticed the “ **openstack server show** ” command output, the VM was migrated from compute-57 to compute-58 after the resize. This is the default behavior of the “nova resize” command (i.e., nova resize will migrate the instance to another compute node and then resize it based on the flavor details).

If you have only one compute node, nova resize will not work out of the box, but we can make it work by changing the below parameter in the nova.conf file on the compute node.

Log in to the compute node and verify the parameter value. If “ **allow_resize_to_same_host** ” is set to False, change it to True and restart the nova compute service.

**Read More on** [**OpenStack Deployment using Devstack on CentOS 7 / RHEL 7 System**][2]

That's all from this tutorial; if it helps you technically, please do share your feedback and comments.
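As a quick appendix to the single-compute-node note above, the nova.conf change would look roughly like this. This is only a sketch: it assumes the default config path and the RHEL/CentOS service name (on Debian/Ubuntu the unit is usually nova-compute):

```
# On the compute node, check the current value
$ sudo grep allow_resize_to_same_host /etc/nova/nova.conf
allow_resize_to_same_host=False

# Set it to True (add the line under the [DEFAULT] section if it is missing)
$ sudo sed -i 's/^allow_resize_to_same_host.*/allow_resize_to_same_host=True/' /etc/nova/nova.conf

# Restart the compute service so the change takes effect
$ sudo systemctl restart openstack-nova-compute
```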
+ +-------------------------------------------------------------------------------- + +via: https://www.linuxtechi.com/resize-openstack-instance-command-line/ + +作者:[Pradeep Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: http://www.linuxtechi.com/author/pradeep/ +[b]: https://github.com/lujun9972 +[1]: https://www.linuxtechi.com/create-delete-virtual-machine-command-line-openstack/ +[2]: https://www.linuxtechi.com/openstack-deployment-devstack-centos-7-rhel-7/ diff --git a/sources/tech/20190122 Dcp (Dat Copy) - Easy And Secure Way To Transfer Files Between Linux Systems.md b/sources/tech/20190122 Dcp (Dat Copy) - Easy And Secure Way To Transfer Files Between Linux Systems.md new file mode 100644 index 0000000000..b6499932ae --- /dev/null +++ b/sources/tech/20190122 Dcp (Dat Copy) - Easy And Secure Way To Transfer Files Between Linux Systems.md @@ -0,0 +1,177 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Dcp (Dat Copy) – Easy And Secure Way To Transfer Files Between Linux Systems) +[#]: via: (https://www.2daygeek.com/dcp-dat-copy-secure-way-to-transfer-files-between-linux-systems/) +[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/) + +Dcp (Dat Copy) – Easy And Secure Way To Transfer Files Between Linux Systems +====== + +Linux has native command to perform this task nicely using scp and rsync. However, we need to try new things. + +Also, we need to encourage the developers who is working new things with different concept and new technology. + +We also written few articles about these kind of topic, you can navigate those by clicking the below appropriate links. + +Those are **[OnionShare][1]** , **[Magic Wormhole][2]** , **[Transfer.sh][3]** and **ffsend**. + +### What’s Dcp? + +[dcp][4] copies files between hosts on a network using the peer-to-peer Dat network. + +dcp can be seen as an alternative to tools like scp, removing the need to configure SSH access between hosts. + +This lets you transfer files between two remote hosts, without you needing to worry about the specifics of how said hosts reach each other and regardless of whether hosts are behind NATs. + +dcp requires zero configuration and is secure, fast, and peer-to-peer. Also, this is not production-ready software. Use at your own risk. + +### What’s Dat Protocol? + +Dat is a peer-to-peer protocol. A community-driven project powering a next-generation Web. + +### How dcp works: + +dcp will create a dat archive for a specified set of files or directories and, using the generated public key, lets you download said archive from a second host. + +Any data shared over the network is encrypted using the public key of the archive, meaning data access is limited to those who have access to said key. + +### dcp Use cases: + + * Send files to multiple colleagues – just send the generated public key via chat and they can receive the files on their machine. + * Sync files between two physical computers on your local network, without needing to set up SSH access. + * Easily send files to a friend without needing to create a zip and upload it the cloud. + * Copy files to a remote server when you have shell access but not SSH, for example on a kubernetes pod. + * Share files between Linux/macOS and Windows, which isn’t exactly known for great SSH support. 
### How To Install NodeJS & npm in Linux?

The dcp package is written in the JavaScript programming language, so we need to install NodeJS as a prerequisite. Use the following command to install NodeJS in Linux.

For **`Fedora`** systems, use **[DNF Command][5]** to install NodeJS & npm.

```
$ sudo dnf install nodejs npm
```

For **`Debian/Ubuntu`** systems, use **[APT-GET Command][6]** or **[APT Command][7]** to install NodeJS & npm.

```
$ sudo apt install nodejs npm
```

For **`Arch Linux`** based systems, use **[Pacman Command][8]** to install NodeJS & npm.

```
$ sudo pacman -S nodejs npm
```

For **`RHEL/CentOS`** systems, use **[YUM Command][9]** to install NodeJS & npm.

```
$ sudo yum install epel-release
$ sudo yum install nodejs npm
```

For **`openSUSE Leap`** systems, use **[Zypper Command][10]** to install NodeJS & npm.

```
$ sudo zypper install nodejs6
```

### How To Install dcp in Linux?

Once you have installed NodeJS, use the following npm command to install dcp.

npm is a package manager for the JavaScript programming language. It is the default package manager for the JavaScript runtime environment Node.js.

```
# npm i -g dat-cp
```

### How to Send Files Through dcp?

Enter the files or folders you want to transfer after the dcp command. There is no need to mention the destination machine name.

```
# dcp [File Name Which You Want To Transfer]
```

It will generate a dat archive for the given file when you run the dcp command. Once it's done, it will generate a public key at the bottom of the output.

### How To Receive Files Through dcp?

Enter the generated public key on the remote server to receive the files or folders.

```
# dcp [Public Key]
```

To recursively copy directories.

```
# dcp [Folder Name Which You Want To Transfer] -r
```

In the following example, we are going to transfer a single file.
![][12]

Output for the above file transfer.
![][13]

If you want to send more than one file, use the following format.
![][14]

Output for the above file transfer.
![][15]

To recursively copy directories.
![][16]

Output for the above folder transfer.
![][17]

It won't allow you to download the files or folders a second time. That means once the files or folders have been downloaded, the link expires immediately.
![][18]

Check the help output to learn about other options.
+ +``` +# dcp --help +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/dcp-dat-copy-secure-way-to-transfer-files-between-linux-systems/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/onionshare-secure-way-to-share-files-sharing-tool-linux/ +[2]: https://www.2daygeek.com/wormhole-securely-share-files-from-linux-command-line/ +[3]: https://www.2daygeek.com/transfer-sh-easy-fast-way-share-files-over-internet-from-command-line/ +[4]: https://github.com/tom-james-watson/dat-cp +[5]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ +[6]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ +[7]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ +[8]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ +[9]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ +[10]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ +[11]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/Dcp-Dat-Copy-Easy-And-Secure-Way-To-Transfer-Files-Between-Linux-Systems-1.png +[13]: https://www.2daygeek.com/wp-content/uploads/2019/01/Dcp-Dat-Copy-Easy-And-Secure-Way-To-Transfer-Files-Between-Linux-Systems-2.png +[14]: https://www.2daygeek.com/wp-content/uploads/2019/01/Dcp-Dat-Copy-Easy-And-Secure-Way-To-Transfer-Files-Between-Linux-Systems-3.jpg +[15]: https://www.2daygeek.com/wp-content/uploads/2019/01/Dcp-Dat-Copy-Easy-And-Secure-Way-To-Transfer-Files-Between-Linux-Systems-4.jpg +[16]: https://www.2daygeek.com/wp-content/uploads/2019/01/Dcp-Dat-Copy-Easy-And-Secure-Way-To-Transfer-Files-Between-Linux-Systems-6.jpg +[17]: https://www.2daygeek.com/wp-content/uploads/2019/01/Dcp-Dat-Copy-Easy-And-Secure-Way-To-Transfer-Files-Between-Linux-Systems-7.jpg +[18]: https://www.2daygeek.com/wp-content/uploads/2019/01/Dcp-Dat-Copy-Easy-And-Secure-Way-To-Transfer-Files-Between-Linux-Systems-5.jpg diff --git a/sources/tech/20190122 Get started with Go For It, a flexible to-do list application.md b/sources/tech/20190122 Get started with Go For It, a flexible to-do list application.md new file mode 100644 index 0000000000..56dde41884 --- /dev/null +++ b/sources/tech/20190122 Get started with Go For It, a flexible to-do list application.md @@ -0,0 +1,60 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Get started with Go For It, a flexible to-do list application) +[#]: via: (https://opensource.com/article/19/1/productivity-tool-go-for-it) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney)) + +Get started with Go For It, a flexible to-do list application +====== +Go For It, the tenth in our series on open source tools that will make you more productive in 2019, builds on the Todo.txt system to help you get more things done. 
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/coffee_cafe_brew_laptop_desktop.jpg?itok=G-n1o1-o) + +There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way. + +Here's the tenth of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019. + +### Go For It + +Sometimes what a person needs to be productive isn't a fancy kanban board or a set of notes, but a simple, straightforward to-do list. Something that is as basic as "add item to list, check it off when done." And for that, the [plain-text Todo.txt system][1] is possibly one of the easiest to use, and it's supported on almost every system out there. + +![](https://opensource.com/sites/default/files/uploads/go-for-it_1_1.png) + +[Go For It][2] is a simple, easy-to-use graphical interface for Todo.txt. It can be used with an existing file, if you are already using Todo.txt, and will create both a to-do and a done file if you aren't. It allows drag-and-drop ordering of tasks, allowing users to organize to-do items in the order they want to execute them. It also supports priorities, projects, and contexts, as outlined in the [Todo.txt format guidelines][3]. And, it can filter tasks by context or project simply by clicking on the project or context in the task list. + +![](https://opensource.com/sites/default/files/uploads/go-for-it_2.png) + +At first, Go For It may look the same as just about any other Todo.txt program, but looks can be deceiving. The real feature that sets Go For It apart is that it includes a built-in [Pomodoro Technique][4] timer. Select the task you want to complete, switch to the Timer tab, and click Start. When the task is done, simply click Done, and it will automatically reset the timer and pick the next task on the list. You can pause and restart the timer as well as click Skip to jump to the next task (or break). It provides a warning when 60 seconds are left for the current task. The default time for tasks is set at 25 minutes, and the default time for breaks is set at five minutes. You can adjust this in the Settings screen, as well as the location of the directory containing your Todo.txt and done.txt files. + +![](https://opensource.com/sites/default/files/uploads/go-for-it_3.png) + +Go For It's third tab, Done, allows you to look at the tasks you've completed and clean them out when you want. Being able to look at what you've accomplished can be very motivating and a good way to get a feel for where you are in a longer process. + +![](https://opensource.com/sites/default/files/uploads/go-for-it_4.png) + +It also has all of Todo.txt's other advantages. Go For It's list is accessible by other programs that use the same format, including [Todo.txt's original command-line tool][5] and any [add-ons][6] you've installed. + +Go For It seeks to be a simple tool to help manage your to-do list and get those items done. If you already use Todo.txt, Go For It is a fantastic addition to your toolkit, and if you don't, it's a really good way to start using one of the simplest and most flexible systems available. 
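Since the underlying store is just a plain-text file, anything that can write a line in the Todo.txt format can feed Go For It. A small sketch, assuming your list lives at ~/todo/todo.txt (the actual location is whatever you picked in Go For It's Settings screen):

```
# Append a prioritized task with a project (+) and a context (@) tag
$ echo "(A) Write conference talk +linuxconfau @laptop" >> ~/todo/todo.txt

# Go For It, todo.sh, and any other Todo.txt tool will all see the same task
$ cat ~/todo/todo.txt
(A) Write conference talk +linuxconfau @laptop
```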
+ + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/productivity-tool-go-for-it + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney (Kevin Sonney) +[b]: https://github.com/lujun9972 +[1]: http://todotxt.org/ +[2]: http://manuel-kehl.de/projects/go-for-it/ +[3]: https://github.com/todotxt/todo.txt +[4]: https://en.wikipedia.org/wiki/Pomodoro_Technique +[5]: https://github.com/todotxt/todo.txt-cli +[6]: https://github.com/todotxt/todo.txt-cli/wiki/Todo.sh-Add-on-Directory diff --git a/sources/tech/20190122 How To Copy A File-Folder From A Local System To Remote System In Linux.md b/sources/tech/20190122 How To Copy A File-Folder From A Local System To Remote System In Linux.md new file mode 100644 index 0000000000..6de6cd173f --- /dev/null +++ b/sources/tech/20190122 How To Copy A File-Folder From A Local System To Remote System In Linux.md @@ -0,0 +1,398 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (How To Copy A File/Folder From A Local System To Remote System In Linux?) +[#]: via: (https://www.2daygeek.com/linux-scp-rsync-pscp-command-copy-files-folders-in-multiple-servers-using-shell-script/) +[#]: author: (Prakash Subramanian https://www.2daygeek.com/author/prakash/) + +How To Copy A File/Folder From A Local System To Remote System In Linux? +====== + +Copying a file from one server to another server or local to remote is one of the routine task for Linux administrator. + +If anyone says no, i won’t accept because this is one of the regular activity wherever you go. + +It can be done in many ways and we are trying to cover all the possible options. + +You can choose the one which you would prefer. Also, check other commands as well that may help you for some other purpose. + +I have tested all these commands and script in my test environment so, you can use this for your routine work. + +By default every one go with SCP because it’s one of the native command that everyone use for file copy. But commands which is listed in this article are be smart so, give a try if you would like to try new things. + +This can be done in below four ways easily. + + * **`SCP:`** scp copies files between hosts on a network. It uses ssh for data transfer, and uses the same authentication and provides the same security as ssh. + * **`RSYNC:`** rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. + * **`PSCP:`** pscp is a program for copying files in parallel to a number of hosts. It provides features such as passing a password to scp, saving output to files, and timing out. + * **`PRSYNC:`** prsync is a program for copying files in parallel to a number of hosts. It provides features such as passing a password to ssh, saving output to files, and timing out. + + + +### Method-1: Copy Files/Folders From A Local System To Remote System In Linux Using SCP Command? + +scp command allow us to copy files/folders from a local system to remote system. + +We are going to copy the `output.txt` file from my local system to `2g.CentOS.com` remote system under `/opt/backup` directory. 
+ +``` +# scp output.txt root@2g.CentOS.com:/opt/backup + +output.txt 100% 2468 2.4KB/s 00:00 +``` + +We are going to copy two files `output.txt` and `passwd-up.sh` files from my local system to `2g.CentOS.com` remote system under `/opt/backup` directory. + +``` +# scp output.txt passwd-up.sh root@2g.CentOS.com:/opt/backup + +output.txt 100% 2468 2.4KB/s 00:00 +passwd-up.sh 100% 877 0.9KB/s 00:00 +``` + +We are going to copy the `shell-script` directory from my local system to `2g.CentOS.com` remote system under `/opt/backup` directory. + +This will copy the `shell-script` directory and associated files under `/opt/backup` directory. + +``` +# scp -r /home/daygeek/2g/shell-script/ [email protected]:/opt/backup/ + +output.txt 100% 2468 2.4KB/s 00:00 +ovh.sh 100% 76 0.1KB/s 00:00 +passwd-up.sh 100% 877 0.9KB/s 00:00 +passwd-up1.sh 100% 7 0.0KB/s 00:00 +server-list.txt 100% 23 0.0KB/s 00:00 +``` + +### Method-2: Copy Files/Folders From A Local System To Multiple Remote System In Linux Using Shell Script with scp Command? + +If you would like to copy the same file into multiple remote servers then create the following small shell script to achieve this. + +To do so, get the servers list and add those into `server-list.txt` file. Make sure you have to update the servers list into `server-list.txt` file. Each server should be in separate line. + +Finally mention the file location which you want to copy like below. + +``` +# file-copy.sh + +#!/bin/sh +for server in `more server-list.txt` +do + scp /home/daygeek/2g/shell-script/output.txt [email protected]$server:/opt/backup +done +``` + +Once you done, set an executable permission to password-update.sh file. + +``` +# chmod +x file-copy.sh +``` + +Finally run the script to achieve this. + +``` +# ./file-copy.sh + +output.txt 100% 2468 2.4KB/s 00:00 +output.txt 100% 2468 2.4KB/s 00:00 +``` + +Use the following script to copy the multiple files into multiple remote servers. + +``` +# file-copy.sh + +#!/bin/sh +for server in `more server-list.txt` +do + scp /home/daygeek/2g/shell-script/output.txt passwd-up.sh [email protected]$server:/opt/backup +done +``` + +The below output shows all the files twice as this copied into two servers. + +``` +# ./file-cp.sh + +output.txt 100% 2468 2.4KB/s 00:00 +passwd-up.sh 100% 877 0.9KB/s 00:00 +output.txt 100% 2468 2.4KB/s 00:00 +passwd-up.sh 100% 877 0.9KB/s 00:00 +``` + +Use the following script to copy the directory recursively into multiple remote servers. + +``` +# file-copy.sh + +#!/bin/sh +for server in `more server-list.txt` +do + scp -r /home/daygeek/2g/shell-script/ [email protected]$server:/opt/backup +done +``` + +Output for the above script. + +``` +# ./file-cp.sh + +output.txt 100% 2468 2.4KB/s 00:00 +ovh.sh 100% 76 0.1KB/s 00:00 +passwd-up.sh 100% 877 0.9KB/s 00:00 +passwd-up1.sh 100% 7 0.0KB/s 00:00 +server-list.txt 100% 23 0.0KB/s 00:00 + +output.txt 100% 2468 2.4KB/s 00:00 +ovh.sh 100% 76 0.1KB/s 00:00 +passwd-up.sh 100% 877 0.9KB/s 00:00 +passwd-up1.sh 100% 7 0.0KB/s 00:00 +server-list.txt 100% 23 0.0KB/s 00:00 +``` + +### Method-3: Copy Files/Folders From A Local System To Multiple Remote System In Linux Using PSCP Command? + +pscp command directly allow us to perform the copy to multiple remote servers. + +Use the following pscp command to copy a single file to remote server. + +``` +# pscp.pssh -H 2g.CentOS.com /home/daygeek/2g/shell-script/output.txt /opt/backup + +[1] 18:46:11 [SUCCESS] 2g.CentOS.com +``` + +Use the following pscp command to copy a multiple files to remote server. 
+ +``` +# pscp.pssh -H 2g.CentOS.com /home/daygeek/2g/shell-script/output.txt ovh.sh /opt/backup + +[1] 18:47:48 [SUCCESS] 2g.CentOS.com +``` + +Use the following pscp command to copy a directory recursively to remote server. + +``` +# pscp.pssh -H 2g.CentOS.com -r /home/daygeek/2g/shell-script/ /opt/backup + +[1] 18:48:46 [SUCCESS] 2g.CentOS.com +``` + +Use the following pscp command to copy a single file to multiple remote servers. + +``` +# pscp.pssh -h server-list.txt /home/daygeek/2g/shell-script/output.txt /opt/backup + +[1] 18:49:48 [SUCCESS] 2g.CentOS.com +[2] 18:49:48 [SUCCESS] 2g.Debian.com +``` + +Use the following pscp command to copy a multiple files to multiple remote servers. + +``` +# pscp.pssh -h server-list.txt /home/daygeek/2g/shell-script/output.txt passwd-up.sh /opt/backup + +[1] 18:50:30 [SUCCESS] 2g.Debian.com +[2] 18:50:30 [SUCCESS] 2g.CentOS.com +``` + +Use the following pscp command to copy a directory recursively to multiple remote servers. + +``` +# pscp.pssh -h server-list.txt -r /home/daygeek/2g/shell-script/ /opt/backup + +[1] 18:51:31 [SUCCESS] 2g.Debian.com +[2] 18:51:31 [SUCCESS] 2g.CentOS.com +``` + +### Method-4: Copy Files/Folders From A Local System To Multiple Remote System In Linux Using rsync Command? + +Rsync is a fast and extraordinarily versatile file copying tool. It can copy locally, to/from another host over any remote shell, or to/from a remote rsync daemon. + +Use the following rsync command to copy a single file to remote server. + +``` +# rsync -avz /home/daygeek/2g/shell-script/output.txt [email protected]:/opt/backup + +sending incremental file list +output.txt + +sent 598 bytes received 31 bytes 1258.00 bytes/sec +total size is 2468 speedup is 3.92 +``` + +Use the following pscp command to copy a multiple files to remote server. + +``` +# rsync -avz /home/daygeek/2g/shell-script/output.txt passwd-up.sh root@2g.CentOS.com:/opt/backup + +sending incremental file list +output.txt +passwd-up.sh + +sent 737 bytes received 50 bytes 1574.00 bytes/sec +total size is 2537 speedup is 3.22 +``` + +Use the following rsync command to copy a single file to remote server overh ssh. + +``` +# rsync -avzhe ssh /home/daygeek/2g/shell-script/output.txt root@2g.CentOS.com:/opt/backup + +sending incremental file list +output.txt + +sent 598 bytes received 31 bytes 419.33 bytes/sec +total size is 2.47K speedup is 3.92 +``` + +Use the following pscp command to copy a directory recursively to remote server over ssh. This will copy only files not the base directory. + +``` +# rsync -avzhe ssh /home/daygeek/2g/shell-script/ root@2g.CentOS.com:/opt/backup + +sending incremental file list +./ +output.txt +ovh.sh +passwd-up.sh +passwd-up1.sh +server-list.txt + +sent 3.85K bytes received 281 bytes 8.26K bytes/sec +total size is 9.12K speedup is 2.21 +``` + +### Method-5: Copy Files/Folders From A Local System To Multiple Remote System In Linux Using Shell Script with rsync Command? + +If you would like to copy the same file into multiple remote servers then create the following small shell script to achieve this. + +``` +# file-copy.sh + +#!/bin/sh +for server in `more server-list.txt` +do + rsync -avzhe ssh /home/daygeek/2g/shell-script/ root@2g.CentOS.com$server:/opt/backup +done +``` + +Output for the above shell script. 
+ +``` +# ./file-copy.sh + +sending incremental file list +./ +output.txt +ovh.sh +passwd-up.sh +passwd-up1.sh +server-list.txt + +sent 3.86K bytes received 281 bytes 8.28K bytes/sec +total size is 9.13K speedup is 2.21 + +sending incremental file list +./ +output.txt +ovh.sh +passwd-up.sh +passwd-up1.sh +server-list.txt + +sent 3.86K bytes received 281 bytes 2.76K bytes/sec +total size is 9.13K speedup is 2.21 +``` + +### Method-6: Copy Files/Folders From A Local System To Multiple Remote System In Linux Using Shell Script with scp Command? + +In the above two shell script, we need to mention the file and folder location as a prerequiesties but here i did a small modification that allow the script to get a file or folder as a input. It could be very useful when you want to perform the copy multiple times in a day. + +``` +# file-copy.sh + +#!/bin/sh +for server in `more server-list.txt` +do +scp -r $1 root@2g.CentOS.com$server:/opt/backup +done +``` + +Run the shell script and give the file name as a input. + +``` +# ./file-copy.sh output1.txt + +output1.txt 100% 3558 3.5KB/s 00:00 +output1.txt 100% 3558 3.5KB/s 00:00 +``` + +### Method-7: Copy Files/Folders From A Local System To Multiple Remote System In Linux With Non-Standard Port Number? + +Use the below shell script to copy a file or folder if you are using Non-Standard port. + +If you are using `Non-Standard` port, make sure you have to mention the port number as follow for SCP command. + +``` +# file-copy-scp.sh + +#!/bin/sh +for server in `more server-list.txt` +do +scp -P 2222 -r $1 root@2g.CentOS.com$server:/opt/backup +done +``` + +Run the shell script and give the file name as a input. + +``` +# ./file-copy.sh ovh.sh + +ovh.sh 100% 3558 3.5KB/s 00:00 +ovh.sh 100% 3558 3.5KB/s 00:00 +``` + +If you are using `Non-Standard` port, make sure you have to mention the port number as follow for rsync command. + +``` +# file-copy-rsync.sh + +#!/bin/sh +for server in `more server-list.txt` +do +rsync -avzhe 'ssh -p 2222' $1 root@2g.CentOS.com$server:/opt/backup +done +``` + +Run the shell script and give the file name as a input. 
+ +``` +# ./file-copy-rsync.sh passwd-up.sh +sending incremental file list +passwd-up.sh + +sent 238 bytes received 35 bytes 26.00 bytes/sec +total size is 159 speedup is 0.58 + +sending incremental file list +passwd-up.sh + +sent 238 bytes received 35 bytes 26.00 bytes/sec +total size is 159 speedup is 0.58 +``` +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/linux-scp-rsync-pscp-command-copy-files-folders-in-multiple-servers-using-shell-script/ + +作者:[Prakash Subramanian][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/prakash/ +[b]: https://github.com/lujun9972 diff --git a/sources/tech/20190123 Dockter- A container image builder for researchers.md b/sources/tech/20190123 Dockter- A container image builder for researchers.md new file mode 100644 index 0000000000..359d0c1d1e --- /dev/null +++ b/sources/tech/20190123 Dockter- A container image builder for researchers.md @@ -0,0 +1,121 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Dockter: A container image builder for researchers) +[#]: via: (https://opensource.com/article/19/1/dockter-image-builder-researchers) +[#]: author: (Nokome Bentley https://opensource.com/users/nokome) + +Dockter: A container image builder for researchers +====== +Dockter supports the specific requirements of researchers doing data analysis, including those using R. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/building_skyscaper_organization.jpg?itok=Ir5epxm8) + +Dependency hell is ubiquitous in the world of software for research, and this affects research transparency and reproducibility. Containerization is one solution to this problem, but it creates new challenges for researchers. Docker is gaining popularity in the research community—but using it efficiently requires solid Dockerfile writing skills. + +As a part of the [Stencila][1] project, which is a platform for creating, collaborating on, and sharing data-driven content, we are developing [Dockter][2], an open source tool that makes it easier for researchers to create Docker images for their projects. Dockter scans a research project's source code, generates a Dockerfile, and builds a Docker image. It has a range of features that allow flexibility and can help researchers learn more about working with Docker. + +Dockter also generates a JSON file with information about the software environment (based on [CodeMeta][3] and [Schema.org][4]) to enable further processing and interoperability with other tools. + +Several other projects create Docker images from source code and/or requirements files, including: [alibaba/derrick][5], [jupyter/repo2docker][6], [Gueils/whales][7], [o2r-project/containerit][8]; [openshift/source-to-image][9], and [ViDA-NYU/reprozip][10]. Dockter is similar to repo2docker, containerit, and ReproZip in that it is aimed at researchers doing data analysis (and supports R), whereas most other tools are aimed at software developers (and don't support R). 
+ +Dockter differs from these projects principally in that it: + + * Performs static code analysis for multiple languages to determine package requirements + * Uses package databases to determine package system dependencies and generate linked metadata (containerit does this for R) + * Installs language package dependencies quicker (which can be useful during research projects where dependencies often change) + * By default but optionally, installs Stencila packages so that Stencila client interfaces can execute code in the container + + + +### Dockter's features + +Following are some of the ways researchers can use Dockter. + +#### Generating Docker images from code + +Dockter scans a research project folder and builds a Docker image for it. If the folder already has a Dockerfile, Dockter will build the image from that. If not, Dockter will scan the source code files in the folder and generate one. Dockter currently handles R, Python, and Node.js source code. The .dockerfile (with the dot at the beginning) it generates is fully editable so users can take over from Dockter and carry on with editing the file as they see fit. + +If the folder contains an R package [DESCRIPTION][11] file, Dockter will install the R packages listed under Imports into the image. If the folder does not contain a DESCRIPTION file, Dockter will scan all the R files in the folder for package import or usage statements and create a .DESCRIPTION file. + +If the folder contains a [requirements.txt][12] file for Python, Dockter will copy it into the Docker image and use [pip][13] to install the specified packages. If the folder does not contain either of those files, Dockter will scan all the folder's .py files for import statements and create a .requirements.txt file. + +If the folder contains a [package.json][14] file, Dockter will copy it into the Docker image and use npm to install the specified packages. If the folder does not contain a package.json file, Dockter will scan all the folder's .js files for require calls and create a .package.json file. + +#### Capturing system requirements automatically + +One of the headaches researchers face when hand-writing Dockerfiles is figuring out which system dependencies their project needs. Often this involves a lot of trial and error. Dockter automatically checks if any dependencies (or dependencies of dependencies, or dependencies of…) require system packages and installs those into the image. No more trial and error cycles of build, fail, add dependency, repeat… + +#### Reinstalling language packages faster + +If you have ever built a Docker image, you know it can be frustrating waiting for all your project's dependencies to reinstall when you add or remove just one. + +This happens because of Docker's layered filesystem: When you update a requirements file, Docker throws away all the subsequent layers—including the one where you previously installed your dependencies. That means all the packages have to be reinstalled. + +Dockter takes a different approach. It leaves the installation of language packages to the language package managers: Python's pip, Node.js's npm, and R's install.packages. These package managers are good at the job they were designed for: checking which packages need to be updated and updating only them. The result is much faster rebuilds, especially for R packages, which often involve compilation. + +Dockter does this by looking for a special **# dockter** comment in a Dockerfile. 
Instead of throwing away layers, it executes all instructions after this comment in the same layer—thereby reusing packages that were previously installed. + +#### Generating structured metadata for a project + +Dockter uses [JSON-LD][15] as its internal data structure. When it parses a project's source code, it generates a JSON-LD tree using vocabularies from schema.org and CodeMeta. + +Dockter also fetches metadata on a project's dependencies, which could be used to generate a complete software citation for the project. + +### Easy to pick up, easy to throw away + +Dockter is designed to make it easier to get started creating Docker images for your project. But it's also designed to not get in your way or restrict you from using bare Docker. You can easily and individually override any of the steps Dockter takes to build an image. + + * **Code analysis:** To stop Dockter from doing code analysis and specify your project's package dependencies, just remove the leading **.** (dot) from the .DESCRIPTION, .requirements.txt, or .package.json files. + + * **Dockerfile generation:** Dockter aims to generate readable Dockerfiles that conform to best practices. They include comments on what each section does and are a good way to start learning how to write your own Dockerfiles. To stop Dockter from generating a .Dockerfile and start editing it yourself, just rename it Dockerfile (without the leading dot). + + + + +### Install Dockter + +[Dockter is available][16] as pre-compiled, standalone command line tool or as a Node.js package. Click [here][17] for a demo. + +We welcome and encourage all [contributions][18]! + +A longer version of this article is available on the project's [GitHub page][19]. + +Aleksandra Pawlik will present [Building reproducible computing environments: a workshop for non-experts][20] at [linux.conf.au][21], January 21-25 in Christchurch, New Zealand. 
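To make the layer trick described above a bit more tangible, here is a shell peek at how such a generated .Dockerfile might be split in two by the special comment. This is a hand-written sketch of the idea, not Dockter's literal output:

```
$ cat .Dockerfile
FROM ubuntu:18.04

# Layers above the marker are cached by Docker as usual
RUN apt-get update && apt-get install -y python3 python3-pip

# dockter
# Everything after the special comment above is executed in one layer,
# so changing a single Python package does not rebuild everything else
COPY .requirements.txt requirements.txt
RUN pip3 install -r requirements.txt
```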
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/dockter-image-builder-researchers + +作者:[Nokome Bentley][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nokome +[b]: https://github.com/lujun9972 +[1]: https://stenci.la/ +[2]: https://stencila.github.io/dockter/ +[3]: https://codemeta.github.io/index.html +[4]: http://Schema.org +[5]: https://github.com/alibaba/derrick +[6]: https://github.com/jupyter/repo2docker +[7]: https://github.com/Gueils/whales +[8]: https://github.com/o2r-project/containerit +[9]: https://github.com/openshift/source-to-image +[10]: https://github.com/ViDA-NYU/reprozip +[11]: http://r-pkgs.had.co.nz/description.html +[12]: https://pip.readthedocs.io/en/1.1/requirements.html +[13]: https://pypi.org/project/pip/ +[14]: https://docs.npmjs.com/files/package.json +[15]: https://json-ld.org/ +[16]: https://github.com/stencila/dockter/releases/ +[17]: https://asciinema.org/a/pOHpxUqIVkGdA1dqu7bENyxZk?size=medium&cols=120&autoplay=1 +[18]: https://github.com/stencila/dockter/blob/master/CONTRIBUTING.md +[19]: https://github.com/stencila/dockter +[20]: https://2019.linux.conf.au/schedule/presentation/185/ +[21]: https://linux.conf.au/ diff --git a/sources/tech/20190123 GStreamer WebRTC- A flexible solution to web-based media.md b/sources/tech/20190123 GStreamer WebRTC- A flexible solution to web-based media.md new file mode 100644 index 0000000000..bb7e129ff3 --- /dev/null +++ b/sources/tech/20190123 GStreamer WebRTC- A flexible solution to web-based media.md @@ -0,0 +1,108 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (GStreamer WebRTC: A flexible solution to web-based media) +[#]: via: (https://opensource.com/article/19/1/gstreamer) +[#]: author: (Nirbheek Chauhan https://opensource.com/users/nirbheek) + +GStreamer WebRTC: A flexible solution to web-based media +====== +GStreamer's WebRTC implementation eliminates some of the shortcomings of using WebRTC in native apps, server applications, and IoT devices. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-Internet_construction_9401467_520x292_0512_dc.png?itok=RPkPPtDe) + +Currently, [WebRTC.org][1] is the most popular and feature-rich WebRTC implementation. It is used in Chrome and Firefox and works well for browsers, but the Native API and implementation have several shortcomings that make it a less-than-ideal choice for uses outside of browsers, including native apps, server applications, and internet of things (IoT) devices. + +Last year, our company ([Centricular][2]) made an independent implementation of a Native WebRTC API available in GStreamer 1.14. This implementation is much easier to use and more flexible than the WebRTC.org Native API, is transparently compatible with WebRTC.org, has been tested with all browsers, and is already in production use. + +### What are GStreamer and WebRTC? 
+ +[GStreamer][3] is an open source, cross-platform multimedia framework and one of the easiest and most flexible ways to implement any application that needs to play, record, or transform media-like data across a diverse scale of devices and products, including embedded (IoT, in-vehicle infotainment, phones, TVs, etc.), desktop (video/music players, video recording, non-linear editing, video conferencing, [VoIP][4] clients, browsers, etc.), servers (encode/transcode farms, video/voice conferencing servers, etc.), and [more][5]. + +The main feature that makes GStreamer the go-to multimedia framework for many people is its pipeline-based model, which solves one of the hardest problems in API design: catering to applications of varying complexity; from the simplest one-liners and quick solutions to those that need several hundreds of thousands of lines of code to implement their full feature set. If you want to learn how to use GStreamer, [Jan Schmidt's tutorial][6] from [LCA 2018][7] is a good place to start. + +[WebRTC][8] is a set of draft specifications that build upon existing [RTP][9], [RTCP][10], [SDP][11], [DTLS][12], [ICE][13], and other real-time communication (RTC) specifications and define an API for making them accessible using browser JavaScript (JS) APIs. + +People have been doing real-time communication over [IP][14] for [decades][15] with the protocols WebRTC builds upon. WebRTC's real innovation was creating a bridge between native applications and web apps by defining a standard yet flexible API that browsers can expose to untrusted JavaScript code. + +These specifications are [constantly being improved][16], which, combined with the ubiquitous nature of browsers, means WebRTC is fast becoming the standard choice for video conferencing on all platforms and for most applications. + +### **Everything is great, let's build amazing apps!** + +Not so fast, there's more to the story! For web apps, the [PeerConnection API][17] is [everywhere][18]. There are some browser-specific quirks, and the API keeps changing, but the [WebRTC JS adapter][19] handles most of that. Overall, the web app experience is mostly 👍. + +Unfortunately, for native code or applications that need more flexibility than a sandboxed JavaScript app can achieve, there haven't been a lot of great options. + +[Libwebrtc][20] (Google's implementation), [Janus][21], [Kurento][22], and [OpenWebRTC][23] have traditionally been the main contenders, but each implementation has its own inflexibilities, shortcomings, and constraints. + +Libwebrtc is still the most mature implementation, but it is also the most difficult to work with. Since it's embedded inside Chrome, it's a moving target and the project [is quite difficult to build and integrate][24]. These are all obstacles for native or server app developers trying to quickly prototype and experiment with things. + +Also, WebRTC was not built for multimedia, so the lower layers get in the way of non-browser use cases and applications. It is quite painful to do anything other than the default "set raw media, transmit" and "receive from remote, get raw media." This means if you want to use your own filters or hardware-specific codecs or sinks/sources, you end up having to fork libwebrtc. + +[**OpenWebRTC**][23] by Ericsson was the first attempt to rectify this situation. It was built on top of GStreamer. 
Its target audience was app developers, and it fit the bill quite well as a proof of concept—even though it used a custom API and some of the architectural decisions made it quite inflexible for most other uses. However, after an initial flurry of activity around the project, momentum petered out, the project failed to gather a community, and it is now effectively dead. Full disclosure: Centricular worked with Ericsson to polish some of the rough edges around the project immediately prior to its public release. + +### WebRTC in GStreamer + +GStreamer's WebRTC implementation gives you full control, as it does with any other [GStreamer pipeline][25]. + +As we said, the WebRTC standards build upon existing standards and protocols that serve similar purposes. GStreamer has supported almost all of them for a while now because they were being used for real-time communication, live streaming, and many other IP-based applications. This led Ericsson to choose GStreamer as the base for its OpenWebRTC project. + +Combined with the [SRTP][26] and DTLS plugins that were written during OpenWebRTC's development, it means that the implementation is built upon a solid and well-tested base, and implementing WebRTC features does not involve as much code-from-scratch work as one might presume. However, WebRTC is a large collection of standards, and reaching feature-parity with libwebrtc is an ongoing task. + +Due to decisions made while architecting WebRTCbin's internals, the API follows the PeerConnection specification quite closely. Therefore, almost all its missing features involve writing code that would plug into clearly defined sockets. For instance, since the GStreamer 1.14 release, the following features have been added to the WebRTC implementation and will be available in the next release of the GStreamer WebRTC: + + * Forward error correction + * RTP retransmission (RTX) + * RTP BUNDLE + * Data channels over SCTP + + + +We believe GStreamer's API is the most flexible, versatile, and easy to use WebRTC implementation out there, and it will only get better as time goes by. Bringing the power of pipeline-based multimedia manipulation to WebRTC opens new doors for interesting, unique, and highly efficient applications. If you'd like to demo the technology and play with the code, build and run [these demos][27], which include C, Rust, Python, and C# examples. + +Matthew Waters will present [GStreamer WebRTC—The flexible solution to web-based media][28] at [linux.conf.au][29], January 21-25 in Christchurch, New Zealand. 
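If you want to get a feel for the pipeline model described above before diving into the WebRTC demos, the gst-launch-1.0 tool lets you assemble pipelines straight from the shell. A minimal sketch, assuming GStreamer 1.x with its base and good plugin sets installed:

```
# A test video source rendered to a window: three elements linked with "!"
$ gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink

# The same idea, but encoding to VP8 and streaming RTP over UDP
$ gst-launch-1.0 videotestsrc ! videoconvert ! vp8enc deadline=1 \
    ! rtpvp8pay ! udpsink host=127.0.0.1 port=5000
```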
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/gstreamer + +作者:[Nirbheek Chauhan][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nirbheek +[b]: https://github.com/lujun9972 +[1]: http://webrtc.org/ +[2]: https://www.centricular.com/ +[3]: https://gstreamer.freedesktop.org/documentation/application-development/introduction/gstreamer.html +[4]: https://en.wikipedia.org/wiki/Voice_over_IP +[5]: https://wiki.ligo.org/DASWG/GstLAL +[6]: https://www.youtube.com/watch?v=ZphadMGufY8 +[7]: http://lca2018.linux.org.au/ +[8]: https://en.wikipedia.org/wiki/WebRTC +[9]: https://en.wikipedia.org/wiki/Real-time_Transport_Protocol +[10]: https://en.wikipedia.org/wiki/RTP_Control_Protocol +[11]: https://en.wikipedia.org/wiki/Session_Description_Protocol +[12]: https://en.wikipedia.org/wiki/Datagram_Transport_Layer_Security +[13]: https://en.wikipedia.org/wiki/Interactive_Connectivity_Establishment +[14]: https://en.wikipedia.org/wiki/Internet_Protocol +[15]: https://en.wikipedia.org/wiki/Session_Initiation_Protocol +[16]: https://datatracker.ietf.org/wg/rtcweb/documents/ +[17]: https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection +[18]: https://caniuse.com/#feat=rtcpeerconnection +[19]: https://github.com/webrtc/adapter +[20]: https://github.com/aisouard/libwebrtc +[21]: https://janus.conf.meetecho.com/ +[22]: https://www.kurento.org/kurento-architecture +[23]: https://en.wikipedia.org/wiki/OpenWebRTC +[24]: https://webrtchacks.com/building-webrtc-from-source/ +[25]: https://gstreamer.freedesktop.org/documentation/application-development/introduction/basics.html +[26]: https://en.wikipedia.org/wiki/Secure_Real-time_Transport_Protocol +[27]: https://github.com/centricular/gstwebrtc-demos/ +[28]: https://linux.conf.au/schedule/presentation/143/ +[29]: https://linux.conf.au/ diff --git a/sources/tech/20190123 Mind map yourself using FreeMind and Fedora.md b/sources/tech/20190123 Mind map yourself using FreeMind and Fedora.md new file mode 100644 index 0000000000..146f95752a --- /dev/null +++ b/sources/tech/20190123 Mind map yourself using FreeMind and Fedora.md @@ -0,0 +1,81 @@ +[#]: collector: (lujun9972) +[#]: translator: ( ) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Mind map yourself using FreeMind and Fedora) +[#]: via: (https://fedoramagazine.org/mind-map-yourself-using-freemind-and-fedora/) +[#]: author: (Paul W. Frields https://fedoramagazine.org/author/pfrields/) + +Mind map yourself using FreeMind and Fedora +====== +![](https://fedoramagazine.org/wp-content/uploads/2019/01/freemind-816x345.jpg) + +A mind map of yourself sounds a little far-fetched at first. Is this process about neural pathways? Or telepathic communication? Not at all. Instead, a mind map of yourself is a way to describe yourself to others visually. It also shows connections among the characteristics you use to describe yourself. It’s a useful way to share information with others in a clever but also controllable way. You can use any mind map application for this purpose. This article shows you how to get started using [FreeMind][1], available in Fedora. + +### Get the application + +The FreeMind application has been around a while. 
While the UI is a bit dated and could use a refresh, it’s a powerful app that offers many options for building mind maps. And of course it’s 100% open source. There are other mind mapping apps available for Fedora and Linux users, as well. Check out [this previous article that covers several mind map options][2]. + +Install FreeMind from the Fedora repositories using the Software app if you’re running Fedora Workstation. Or use this [sudo][3] command in a terminal: + +``` +$ sudo dnf install freemind +``` + +You can launch the app from the GNOME Shell Overview in Fedora Workstation. Or use the application start service your desktop environment provides. FreeMind shows you a new, blank map by default: + +![][4] +FreeMind initial (blank) mind map + +A map consists of linked items or descriptions — nodes. When you think of something related to a node you want to capture, simply create a new node connected to it. + +### Mapping yourself + +Click in the initial node. Replace it with your name by editing the text and hitting **Enter**. You’ve just started your mind map. + +What would you think of if you had to fully describe yourself to someone? There are probably many things to cover. How do you spend your time? What do you enjoy? What do you dislike? What do you value? Do you have a family? All of this can be captured in nodes. + +To add a node connection, select the existing node, and hit **Insert** , or use the “light bulb” icon for a new child node. To add another node at the same level as the new child, use **Enter**. + +Don’t worry if you make a mistake. You can use the **Delete** key to remove an unwanted node. There’s no rules about content. Short nodes are best, though. They allow your mind to move quickly when creating the map. Concise nodes also let viewers scan and understand the map easily later. + +This example uses nodes to explore each of these major categories: + +![][5] +Personal mind map, first level + +You could do another round of iteration for each of these areas. Let your mind freely connect ideas to generate the map. Don’t worry about “getting it right.” It’s better to get everything out of your head and onto the display. Here’s what a next-level map might look like. + +![][6] +Personal mind map, second level + +You could expand on any of these nodes in the same way. Notice how much information you can quickly understand about John Q. Public in the example. + +### How to use your personal mind map + +This is a great way to have team or project members introduce themselves to each other. You can apply all sorts of formatting and color to the map to give it personality. These are fun to do on paper, of course. But having one on your Fedora system means you can always fix mistakes, or even make changes as you change. + +Have fun exploring your personal mind map! + + + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/mind-map-yourself-using-freemind-and-fedora/ + +作者:[Paul W. 
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出
+
+[a]: https://fedoramagazine.org/author/pfrields/
+[b]: https://github.com/lujun9972
+[1]: http://freemind.sourceforge.net/wiki/index.php/Main_Page
+[2]: https://fedoramagazine.org/three-mind-mapping-tools-fedora/
+[3]: https://fedoramagazine.org/howto-use-sudo/
+[4]: https://fedoramagazine.org/wp-content/uploads/2019/01/Screenshot-from-2019-01-19-15-17-04-1024x736.png
+[5]: https://fedoramagazine.org/wp-content/uploads/2019/01/Screenshot-from-2019-01-19-15-32-38-1024x736.png
+[6]: https://fedoramagazine.org/wp-content/uploads/2019/01/Screenshot-from-2019-01-19-15-38-00-1024x736.png
diff --git a/sources/tech/20190124 ODrive (Open Drive) - Google Drive GUI Client For Linux.md b/sources/tech/20190124 ODrive (Open Drive) - Google Drive GUI Client For Linux.md
new file mode 100644
index 0000000000..71a91ec3d8
--- /dev/null
+++ b/sources/tech/20190124 ODrive (Open Drive) - Google Drive GUI Client For Linux.md
@@ -0,0 +1,127 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (ODrive (Open Drive) – Google Drive GUI Client For Linux)
+[#]: via: (https://www.2daygeek.com/odrive-open-drive-google-drive-gui-client-for-linux/)
+[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)
+
+ODrive (Open Drive) – Google Drive GUI Client For Linux
+======
+
+We have discussed this kind of topic many times before, but I will give a brief introduction here anyway.
+
+As of now, there is no official Google Drive client for Linux, so we need to use unofficial clients.
+
+There are many applications available on Linux for Google Drive integration, and each one comes with its own set of features.
+
+We have written a few articles about this on our website in the past: **[DriveSync][1]**, **[Google Drive Ocamlfuse Client][2]**, and **[Mount Google Drive in Linux Using Nautilus File Manager][3]**.
+
+Today we are going to discuss the same topic again; the utility in question is called ODrive.
+
+### What’s ODrive?
+
+ODrive stands for Open Drive. It’s a GUI client for Google Drive written in the Electron framework.
+
+Its simple GUI allows users to integrate Google Drive in just a few steps.
+
+### How To Install & Setup ODrive on Linux?
+
+Since the developer offers an AppImage package, there is no difficulty installing ODrive on Linux.
+
+Simply download the latest ODrive AppImage package from the developer’s GitHub page using the **wget** command.
+
+```
+$ wget https://github.com/liberodark/ODrive/releases/download/0.1.3/odrive-0.1.3-x86_64.AppImage
+```
+
+Then set the executable file permission on the ODrive AppImage file.
+
+```
+$ chmod +x odrive-0.1.3-x86_64.AppImage
+```
+
+Finally, run the ODrive AppImage file to launch the ODrive GUI for further setup.
+
+```
+$ ./odrive-0.1.3-x86_64.AppImage
+```
+
+You should see a window like the one below when you run the above command. Just hit the **`Next`** button to continue the setup.
+![][5]
+
+Click the **`Connect`** link to add a Google Drive account.
+![][6]
+
+Enter the email ID of the Google Drive account you want to set up.
+![][7]
+
+Enter the password for the given email ID.
+![][8]
+
+Allow ODrive (Open Drive) to access your Google account.
+![][9]
+
+By default, it will choose the folder location. You can change it if you want to use a specific one.
+![][10]
+
+Finally, hit the **`Synchronize`** button to start downloading the files from Google Drive to your local system.
+![][11]
+
+Synchronizing is in progress.
+![][12]
+
+Once synchronizing is complete, it shows you that all the files have been downloaded.
+![][13]
+
+All the files were downloaded into the chosen directory.
+![][14]
+
+If you want to sync new files from the local system to Google Drive, start `ODrive` from the application menu. It won’t actually open a window, but it will keep running in the background, as you can see using the ps command.
+
+```
+$ ps -df | grep odrive
+```
+
+![][15]
+
+New files are automatically synced once you add them to the Google Drive folder, as you can verify from the notification menu. Yes, I can see that one file was synced to Google Drive.
+![][16]
+
+The GUI does not load after the sync, and I’m not sure about this behavior. I will check with the developer and add an update based on their input.
+
+--------------------------------------------------------------------------------
+
+via: https://www.2daygeek.com/odrive-open-drive-google-drive-gui-client-for-linux/
+
+作者:[Magesh Maruthamuthu][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://www.2daygeek.com/author/magesh/
+[b]: https://github.com/lujun9972
+[1]: https://www.2daygeek.com/drivesync-google-drive-sync-client-for-linux/
+[2]: https://www.2daygeek.com/mount-access-google-drive-on-linux-with-google-drive-ocamlfuse-client/
+[3]: https://www.2daygeek.com/mount-access-setup-google-drive-in-linux/
+[4]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
+[5]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-1.png
+[6]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-2.png
+[7]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-3.png
+[8]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-4.png
+[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-5.png
+[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-6.png
+[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-7.png
+[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-8a.png
+[13]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-9.png
+[14]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-11.png
+[15]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-9b.png
+[16]: https://www.2daygeek.com/wp-content/uploads/2019/01/odrive-open-drive-google-drive-gui-client-for-linux-10.png
diff --git a/sources/tech/20190124 Orpie- A command-line reverse Polish notation calculator.md b/sources/tech/20190124 Orpie- A command-line reverse Polish notation calculator.md
new file mode 100644
index 0000000000..10e666f625
--- /dev/null
+++ b/sources/tech/20190124 Orpie- A command-line reverse Polish notation calculator.md
@@ -0,0 +1,128 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Orpie: A command-line reverse Polish notation calculator)
+[#]: via: (https://opensource.com/article/19/1/orpie)
+[#]: author: (Peter Faller https://opensource.com/users/peterfaller)
+
+Orpie: A command-line reverse Polish notation calculator
+======
+Orpie is a scientific calculator that functions much like early, well-loved HP calculators.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/calculator_money_currency_financial_tool.jpg?itok=2QMa1y8c)
+Orpie is a text-mode [reverse Polish notation][1] (RPN) calculator for the Linux console. It works very much like the early, well-loved Hewlett-Packard calculators.
+
+### Installing Orpie
+
+RPM and DEB packages are available for most distributions, so installation is just a matter of using either:
+
+```
+$ sudo apt install orpie
+```
+
+or
+
+```
+$ sudo yum install orpie
+```
+
+Orpie has a comprehensive man page; new users may want to have it open in another terminal window as they get started. Orpie can be customized for each user by editing the **~/.orpierc** configuration file. The [orpierc(5)][2] man page describes the contents of this file, and **/etc/orpierc** describes the default configuration.
+
+### Starting up
+
+Start Orpie by typing **orpie** at the command line. The main screen shows context-sensitive help on the left and the stack on the right. The cursor, where you enter numbers you want to calculate, is at the bottom-right corner.
+
+![](https://opensource.com/sites/default/files/uploads/orpie_start.png)
+
+### Example calculation
+
+For a simple example, let's calculate the factorial of **5** (2 × 3 × 4 × 5). First the long way:
+
+| Keys | Result |
+| --------- | --------- |
+| 2 | Push 2 onto the stack |
+| 3 | Push 3 onto the stack |
+| * | Multiply to get 6 |
+| 4 | Push 4 onto the stack |
+| * | Multiply to get 24 |
+| 5 | Push 5 onto the stack |
+| * | Multiply to get 120 |
+
+Note that the multiplication happens as soon as you type `*`. If you hit **Enter** after `*`, Orpie will duplicate the value at position 1 on the stack. (If this happens, you can drop the duplicate with `\`.)
+
+Equivalent sequences are:
+
+| Keys | Result |
+| ------------- | ------------- |
+| 2 3 * 4 * 5 * | Faster! |
+| 2 3 4 5 * * * | Same result |
+| 5 ' fact | Fastest: Use the built-in function |
+
+Observe that when you enter **'**, the left pane changes to show matching functions as you type. In the example above, typing **fa** is enough to get the **fact** function. Orpie offers many functions—experiment by typing **'** and a few letters to see what's available.
+
+![](https://opensource.com/sites/default/files/uploads/orpie_functions.png)
+
+Note that each operation replaces one or more values on the stack. If you want to store the value at position 1 in the stack, key in (for example) **@factot** and **S'**. To retrieve the value, key in (for example) **@factot** then **;** (if you want to see it; otherwise just leave **@factot** as the value for the next calculation).
+
+### Constants and units
+
+Orpie understands units and predefines many useful scientific constants. For example, to calculate the energy in a blue light photon at 400nm, calculate **E=hc/(400nm)**. The key sequences are:

+| Keys | Result |
+| -------------- | -------------- |
+| C c | Get the speed of light in m/s |
+| C h | Get Planck's constant in Js |
+| * | Calculate h*c |
+| 400 9 n _ m | Input 400 × 10^-9 m |
+| / | Do the division and get the result: 4.966 × 10^-19 J |
+
+Like choosing functions after typing **'**, typing **C** shows matching constants based on what you type.
+
+![](https://opensource.com/sites/default/files/uploads/orpie_constants.png)
+
+### Matrices
+
+Orpie can also do operations with matrices. For example, to multiply two 2x2 matrices:
+
+| Keys | Result |
+| -------- | -------- |
+| [ 1 , 2 [ 3 , 4 | Stack contains the matrix [[ 1, 2 ][ 3, 4 ]] |
+| [ 1 , 0 [ 1 , 1 | Push the multiplier matrix onto the stack |
+| * | The result is: [[ 3, 2 ][ 7, 4 ]] |
+
+Note that the **]** characters are automatically inserted—entering **[** starts a new row.
+
+### Complex numbers
+
+Orpie can also calculate with complex numbers. They can be entered or displayed in either polar or rectangular form. You can toggle between the polar and rectangular display using the **p** key, and between degrees and radians using the **r** key. For example, to multiply **3 + 4i** by **4 + 4i**:
+
+| Keys | Result |
+| -------- | -------- |
+| ( 3 , 4 | The stack contains (3, 4) |
+| ( 4 , 4 | Push (4, 4) |
+| * | Get the result: (-4, 28) |
+
+Note that as you go, the results are kept on the stack so you can observe intermediate results in a lengthy calculation.
+
+![](https://opensource.com/sites/default/files/uploads/orpie_final.png)
+
+### Quitting Orpie
+
+You can exit from Orpie by typing **Q**. Your state is saved, so the next time you start Orpie, you'll find the stack as you left it.
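+
+As a closing sanity check, here is the photon-energy calculation from the constants section worked out by hand. This is a sketch using rounded values for h and c, close to the constants Orpie provides:
+
+```
+E = \frac{hc}{\lambda}
+  = \frac{(6.626 \times 10^{-34}\,\mathrm{J\,s})(2.998 \times 10^{8}\,\mathrm{m/s})}{400 \times 10^{-9}\,\mathrm{m}}
+  \approx 4.966 \times 10^{-19}\,\mathrm{J}
+```
+
+The result matches what Orpie left on the stack, which is a handy way to confirm the key sequence was entered correctly.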
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/orpie
+
+作者:[Peter Faller][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://opensource.com/users/peterfaller
+[b]: https://github.com/lujun9972
+[1]: https://en.wikipedia.org/wiki/Reverse_Polish_notation
+[2]: https://github.com/pelzlpj/orpie/blob/master/doc/orpierc.5
diff --git a/sources/tech/20190124 Understanding Angle Brackets in Bash.md b/sources/tech/20190124 Understanding Angle Brackets in Bash.md
new file mode 100644
index 0000000000..063eec3fd0
--- /dev/null
+++ b/sources/tech/20190124 Understanding Angle Brackets in Bash.md
@@ -0,0 +1,154 @@
+[#]: collector: (lujun9972)
+[#]: translator: (HankChow)
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Understanding Angle Brackets in Bash)
+[#]: via: (https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash)
+[#]: author: (Paul Brown https://www.linux.com/users/bro66)
+
+Understanding Angle Brackets in Bash
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/architecture-1839450_1920.jpg?itok=ra6XonD3)
+
+[Bash][1] gives you many important commands, like `ls`, `cd`, and `mv`, as well as regular tools such as `grep`, `awk`, and `sed`. But, it is equally important to know the punctuation marks -- [the glue in the shape of dots][2], commas, brackets, and quotes -- that allow you to transform and push data from one place to another. Take angle brackets (`< >`), for example.
+
+### Pushing Around
+
+If you are familiar with other programming and scripting languages, you may have used `<` and `>` as logical operators to check in a condition whether one value is larger or smaller than another. If you have ever written HTML, you have used angle brackets to enclose tags.
+
+In shell scripting, you can also use angle brackets to push data from place to place, for example, to a file:
+
+```
+ls > dir_content.txt
+```
+
+In this example, instead of showing the contents of the directory on the command line, `>` tells the shell to copy it into a file called _dir_content.txt_. If _dir_content.txt_ doesn't exist, Bash will create it for you, but if _dir_content.txt_ already exists and is not empty, you will overwrite whatever it contained, so be careful!
+
+You can avoid overwriting existing content by tacking the new stuff onto the end of the old stuff. For that you use `>>` (instead of `>`):
+
+```
+ls $HOME > dir_content.txt; wc -l dir_content.txt >> dir_content.txt
+```
+
+This line stores the list of contents of your home directory into _dir_content.txt_. You then count the number of lines in _dir_content.txt_ (which gives you the number of items in the directory) with [`wc -l`][3] and you tack that value onto the end of the file.
+
+After running the command line on my machine, this is what my _dir_content.txt_ file looks like:
+
+```
+Applications
+bin
+cloud
+Desktop
+Documents
+Downloads
+Games
+ISOs
+lib
+logs
+Music
+OpenSCAD
+Pictures
+Public
+Templates
+test_dir
+Videos
+17 dir_content.txt
+```
+
+The mnemonic here is to look at `>` and `>>` as arrows. In fact, the arrows can point the other way, too. Say you have a file called _CBActors_ containing some names of actors and the number of films by the Coen brothers they have been in. Something like this:
+
+```
+John Goodman 5
+John Turturro 3
+George Clooney 2
+Frances McDormand 6
+Steve Buscemi 5
+Jon Polito 4
+Tony Shalhoub 3
+James Gandolfini 1
+```
+
+Something like
+
+```
+sort < CBActors # Do this
+Frances McDormand 6 # And you get this
+George Clooney 2
+James Gandolfini 1
+John Goodman 5
+John Turturro 3
+Jon Polito 4
+Steve Buscemi 5
+Tony Shalhoub 3
+```
+
+will [sort][4] the list alphabetically. But then again, you don't need `<` here since `sort` already expects a file anyway, so `sort CBActors` will work just as well.
+
+However, if you need to see who is the Coens' favorite actor, you can check with:
+
+```
+while read name surname films; do echo $films $name $surname >> filmsfirst; done < CBActors
+```
+
+Or, to make that a bit more readable:
+
+```
+while read name surname films;\
+  do
+    echo $films $name $surname >> filmsfirst;\
+  done < CBActors
+```
+
+Let's break this down, shall we?
+
+  * the [`while ...; do ... done`][5] structure is a loop. The instructions between `do` and `done` are repeatedly executed while a condition is met, in this case...
+  * ... the [`read`][6] instruction has lines to read. `read` reads from the standard input and will continue reading until there is nothing more to read...
+  * ... And as standard input is fed in via `<` and comes from _CBActors_, that means the `while` loop will loop until the last line of _CBActors_ is piped into the loop.
+  * Getting back to `read` for a sec, the tool is clever enough to see that there are three distinct fields separated by spaces on each line of the file. That allows you to put the first field from each line in the `name` variable, the second in `surname`, and the third in `films`. This comes in handy later, on the line that says `echo $films $name $surname >> filmsfirst;\`, allowing you to reorder the fields and push them into a file called _filmsfirst_.
+
+
+
+At the end of all that, you have a file called _filmsfirst_ that looks like this:
+
+```
+5 John Goodman
+3 John Turturro
+2 George Clooney
+6 Frances McDormand
+5 Steve Buscemi
+4 Jon Polito
+3 Tony Shalhoub
+1 James Gandolfini
+```
+
+which you can now use with `sort`:
+
+```
+sort -r filmsfirst
+```
+
+to see who is the Coens' favorite actor. Yes, it is Frances McDormand. (The [`-r`][4] option reverses the sort, so McDormand ends up on top).
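+
+To tie the two directions together, here is a minimal sketch (assuming the same _CBActors_ file as above) that feeds the file into the loop with `<` and pipes the reordered lines straight into `sort`, skipping the intermediate file entirely:
+
+```
+# Feed CBActors into the loop with <, reorder the fields, then pipe
+# the output into sort; -rn sorts the film counts numerically in
+# reverse, so the Coens' favorite actor comes out on top.
+while read name surname films; do
+    echo "$films $name $surname"
+done < CBActors | sort -rn | head -n 1    # prints: 6 Frances McDormand
+```
+
+We'll look at more angles on this topic next time!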
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/learn/2019/1/understanding-angle-brackets-bash
+
+作者:[Paul Brown][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://www.linux.com/users/bro66
+[b]: https://github.com/lujun9972
+[1]: https://www.linux.com/blog/2019/1/bash-shell-utility-reaches-50-milestone
+[2]: https://www.linux.com/blog/learn/2019/1/linux-tools-meaning-dot
+[3]: https://linux.die.net/man/1/wc
+[4]: https://linux.die.net/man/1/sort
+[5]: http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-7.html
+[6]: https://linux.die.net/man/2/read
diff --git a/sources/tech/20190124 ffsend - Easily And Securely Share Files From Linux Command Line Using Firefox Send Client.md b/sources/tech/20190124 ffsend - Easily And Securely Share Files From Linux Command Line Using Firefox Send Client.md
new file mode 100644
index 0000000000..fcbdd3c5c7
--- /dev/null
+++ b/sources/tech/20190124 ffsend - Easily And Securely Share Files From Linux Command Line Using Firefox Send Client.md
@@ -0,0 +1,330 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (ffsend – Easily And Securely Share Files From Linux Command Line Using Firefox Send Client)
+[#]: via: (https://www.2daygeek.com/ffsend-securely-share-files-folders-from-linux-command-line-using-firefox-send-client/)
+[#]: author: (Vinoth Kumar https://www.2daygeek.com/author/vinoth/)
+
+ffsend – Easily And Securely Share Files From Linux Command Line Using Firefox Send Client
+======
+
+Linux users have traditionally preferred scp or rsync for copying files and folders.
+
+However, many new options keep coming to Linux because it's open source; anyone can develop secure software for Linux.
+
+We have written about several tools of this kind on our website in the past: **[OnionShare][1]**, **[Magic Wormhole][2]**, **[Transfer.sh][3]**, and **[Dcp – Dat Copy][4]**.
+
+Today we are going to discuss another utility in the same category, called ffsend.
+
+### What's ffsend?
+
+[ffsend][5] is a command line Firefox Send client that allows users to send and receive files and folders from the command line.
+
+It lets us easily and securely share files and directories through a safe, private, and encrypted link, using a single simple command.
+
+Files are shared using the Send service, and the allowed file size is up to 2GB.
+
+Others can download these files with this tool or through their web browser.
+
+All files are always encrypted on the client, and secrets are never shared with the remote host.
+
+Additionally, you can add a password to a file upload.
+
+Uploaded files are removed after being downloaded (the default download count is 1, configurable up to 10) or after 24 hours. This makes sure that your files do not remain online forever.
+
+This tool is currently in the alpha phase. Use at your own risk. Also, only limited installation options are available right now.
+
+### ffsend Features:
+
+  * Fully featured and friendly command line tool
+  * Upload and download files and directories securely
+  * Always encrypted on the client
+  * Additional password protection, generation and configurable download limits
+  * Built-in file and directory archiving and extraction
+  * History tracking your files for easy management
+  * Ability to use your own Send host
+  * Inspect or delete shared files
+  * Accurate error reporting
+  * Low memory footprint, due to encryption and download/upload streaming
+  * Intended to be used in scripts without interaction
+
+
+
+### How To Install ffsend in Linux?
+
+There are no distribution packages except for Debian and Arch Linux systems. However, we can easily get this utility by downloading the appropriate prebuilt binary for our operating system and architecture.
+
+Run the following command to download the latest available version for your operating system.
+
+```
+$ wget https://github.com/timvisee/ffsend/releases/download/v0.1.2/ffsend-v0.1.2-linux-x64.tar.gz
+```
+
+Extract the tar archive using the following command.
+
+```
+$ tar -xvf ffsend-v0.1.2-linux-x64.tar.gz
+```
+
+Run the following command to identify the directories in your PATH variable.
+
+```
+$ echo $PATH
+/home/daygeek/.cargo/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/lib/jvm/default/bin:/usr/bin/site_perl:/usr/bin/vendor_perl:/usr/bin/core_perl
+```
+
+As noted above, just move the executable file into one of the directories in your PATH.
+
+```
+$ sudo mv ffsend /usr/local/sbin
+```
+
+Run the `ffsend` command alone to get the basic usage information.
+
+```
+$ ffsend
+ffsend 0.1.2
+Usage: ffsend [FLAGS] ...
+
+Easily and securely share files from the command line.
+A fully featured Firefox Send client.
+
+Missing subcommand. Here are the most used:
+    ffsend upload ...
+    ffsend download ...
+
+To show all subcommands, features and other help:
+    ffsend help [SUBCOMMAND]
+```
+
+Arch Linux users can easily install it with the help of an **[AUR Helper][6]**, as this package is available in the AUR repository.
+
+```
+$ yay -S ffsend
+```
+
+For **`Debian/Ubuntu`** systems, use the **[DPKG Command][7]** to install ffsend.
+
+```
+$ wget https://github.com/timvisee/ffsend/releases/download/v0.1.2/ffsend_0.1.2_amd64.deb
+$ sudo dpkg -i ffsend_0.1.2_amd64.deb
+```
+
+### How To Send A File Using ffsend?
+
+It's not complicated. You can easily send a file using a simple syntax.
+
+**Syntax:**
+
+```
+$ ffsend upload [/Path/to/the/file/name]
+```
+
+In the following example, we are going to upload a file called `passwd-up1.sh`. Once you upload the file, you will get a unique share URL.
+
+```
+$ ffsend upload passwd-up1.sh --copy
+Upload complete
+Share link: https://send.firefox.com/download/a4062553f4/#yy2_VyPaUMG5HwXZzYRmpQ
+```
+
+![][9]
+
+Just download from the above unique URL to get the file on any remote system.
+
+**Syntax:**
+
+```
+$ ffsend download [Generated URL]
+```
+
+Output of the above command:
+
+```
+$ ffsend download https://send.firefox.com/download/a4062553f4/#yy2_VyPaUMG5HwXZzYRmpQ
+Download complete
+```
+
+![][10]
+
+Use the following syntax to upload a directory.
+
+```
+$ ffsend upload [/Path/to/the/Directory] --copy
+```
+
+In this example, we are going to upload the `2g` directory.
+
+```
+$ ffsend upload /home/daygeek/2g --copy
+You've selected a directory, only a single file may be uploaded.
+Archive the directory into a single file? [Y/n]: y
+Archiving...
+
+Upload complete
+Share link: https://send.firefox.com/download/90aa5cfe67/#hrwu6oXZRG2DNh8vOc3BGg
+```
+
+Just download from the generated unique URL to get the folder on any remote system.
+
+```
+$ ffsend download https://send.firefox.com/download/90aa5cfe67/#hrwu6oXZRG2DNh8vOc3BGg
+You're downloading an archive, extract it into the selected directory? [Y/n]: y
+Extracting...
+Download complete
+```
+
+ffsend already sends files through a safe, private, and encrypted link. However, if you would like an additional layer of security on your side, you can add a password to a file.
+
+```
+$ ffsend upload file-copy-rsync.sh --copy --password
+Password:
+Upload complete
+Share link: https://send.firefox.com/download/0742d24515/#P7gcNiwZJ87vF8cumU71zA
+```
+
+It will prompt for the password when you try to download the file on the remote system.
+
+```
+$ ffsend download https://send.firefox.com/download/0742d24515/#P7gcNiwZJ87vF8cumU71zA
+This file is protected with a password.
+Password:
+Download complete
+```
+
+You can also limit the number of downloads by providing a download count while uploading a file.
+
+```
+$ ffsend upload file-copy-scp.sh --copy --downloads 10
+Upload complete
+Share link: https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw
+```
+
+Just download from the unique URL to get the file on any remote system.
+
+```
+ffsend download https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw
+Download complete
+```
+
+If you want to see more details about a file, use the following format. It shows you the file name, file size, download counts, and when it will expire.
+
+**Syntax:**
+
+```
+$ ffsend info [Generated URL]
+
+$ ffsend info https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw
+ID: 23cb923c4e
+Name: file-copy-scp.sh
+Size: 115 B
+MIME: application/x-sh
+Downloads: 3 of 10
+Expiry: 23h58m (86280s)
+```
+
+You can view your transaction history using the following command.
+
+```
+$ ffsend history
+# LINK EXPIRY
+1 https://send.firefox.com/download/23cb923c4e/#LVg6K0CIb7Y9KfJRNZDQGw 23h57m
+2 https://send.firefox.com/download/0742d24515/#P7gcNiwZJ87vF8cumU71zA 23h55m
+3 https://send.firefox.com/download/90aa5cfe67/#hrwu6oXZRG2DNh8vOc3BGg 23h52m
+4 https://send.firefox.com/download/a4062553f4/#yy2_VyPaUMG5HwXZzYRmpQ 23h46m
+5 https://send.firefox.com/download/74ff30e43e/#NYfDOUp_Ai-RKg5g0fCZXw 23h44m
+6 https://send.firefox.com/download/69afaab1f9/#5z51_94jtxcUCJNNvf6RcA 23h43m
+```
+
+If you don't want the link anymore, you can delete it.
+
+**Syntax:**
+
+```
+$ ffsend delete [Generated URL]
+
+$ ffsend delete https://send.firefox.com/download/69afaab1f9/#5z51_94jtxcUCJNNvf6RcA
+File deleted
+```
+
+Alternatively, this can be done in the Firefox browser by opening the page <https://send.firefox.com>.
+
+Just drag and drop a file to upload it.
+![][11]
+
+Once the transfer completes, it will show you that it is 100% done.
+![][12]
+
+To check other possible options, see the man page or the help output.
+
+```
+$ ffsend --help
+ffsend 0.1.2
+Tim Visee
+Easily and securely share files from the command line.
+A fully featured Firefox Send client.
+ +USAGE: + ffsend [FLAGS] [OPTIONS] [SUBCOMMAND] + +FLAGS: + -f, --force Force the action, ignore warnings + -h, --help Prints help information + -i, --incognito Don't update local history for actions + -I, --no-interact Not interactive, do not prompt + -q, --quiet Produce output suitable for logging and automation + -V, --version Prints version information + -v, --verbose Enable verbose information and logging + -y, --yes Assume yes for prompts + +OPTIONS: + -H, --history Use the specified history file [env: FFSEND_HISTORY] + -t, --timeout Request timeout (0 to disable) [env: FFSEND_TIMEOUT] + -T, --transfer-timeout Transfer timeout (0 to disable) [env: FFSEND_TRANSFER_TIMEOUT] + +SUBCOMMANDS: + upload Upload files [aliases: u, up] + download Download files [aliases: d, down] + debug View debug information [aliases: dbg] + delete Delete a shared file [aliases: del] + exists Check whether a remote file exists [aliases: e] + help Prints this message or the help of the given subcommand(s) + history View file history [aliases: h] + info Fetch info about a shared file [aliases: i] + parameters Change parameters of a shared file [aliases: params] + password Change the password of a shared file [aliases: pass, p] + +The public Send service that is used as default host is provided by Mozilla. +This application is not affiliated with Mozilla, Firefox or Firefox Send. +``` + +-------------------------------------------------------------------------------- + +via: https://www.2daygeek.com/ffsend-securely-share-files-folders-from-linux-command-line-using-firefox-send-client/ + +作者:[Vinoth Kumar][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.2daygeek.com/author/vinoth/ +[b]: https://github.com/lujun9972 +[1]: https://www.2daygeek.com/onionshare-secure-way-to-share-files-sharing-tool-linux/ +[2]: https://www.2daygeek.com/wormhole-securely-share-files-from-linux-command-line/ +[3]: https://www.2daygeek.com/transfer-sh-easy-fast-way-share-files-over-internet-from-command-line/ +[4]: https://www.2daygeek.com/dcp-dat-copy-secure-way-to-transfer-files-between-linux-systems/ +[5]: https://github.com/timvisee/ffsend +[6]: https://www.2daygeek.com/category/aur-helper/ +[7]: https://www.2daygeek.com/dpkg-command-to-manage-packages-on-debian-ubuntu-linux-mint-systems/ +[8]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 +[9]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-1.png +[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-2.png +[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-3.png +[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/ffsend-easily-and-securely-share-files-from-linux-command-line-using-firefox-send-client-4.png diff --git a/sources/tech/20190125 PyGame Zero- Games without boilerplate.md b/sources/tech/20190125 PyGame Zero- Games without boilerplate.md new file mode 100644 index 0000000000..f60c2b3407 --- /dev/null +++ b/sources/tech/20190125 PyGame Zero- Games without boilerplate.md @@ -0,0 +1,99 @@ +[#]: collector: (lujun9972) +[#]: translator: (xiqingongzi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: 
url: ( )
+[#]: subject: (PyGame Zero: Games without boilerplate)
+[#]: via: (https://opensource.com/article/19/1/pygame-zero)
+[#]: author: (Moshe Zadka https://opensource.com/users/moshez)
+
+PyGame Zero: Games without boilerplate
+======
+Say goodbye to boring boilerplate in your game development with PyGame Zero.
+![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/python3-game.png?itok=jG9UdwC3)
+
+Python is a good beginner programming language. And games are a good beginner project: they are visual, self-motivating, and fun to show off to friends and family. However, the most common library to write games in Python, [PyGame][1], can be frustrating for beginners because forgetting seemingly small details can easily lead to nothing rendering.
+
+Until people understand why all the parts are there, they treat many of them as "mindless boilerplate"—magic paragraphs that need to be copied and pasted into their program to make it work.
+
+[PyGame Zero][2] is intended to bridge that gap by putting a layer of abstraction over PyGame so it requires literally no boilerplate.
+
+When we say literally, we mean it.
+
+This is a valid PyGame Zero file:
+
+```
+# This comment is here for clarity reasons
+```
+
+We can put it in a **game.py** file and run:
+
+```
+$ pgzrun game.py
+```
+
+This will show a window and run a game loop that can be shut down by closing the window or interrupting the program with **CTRL-C**.
+
+This will, sadly, be a boring game. Nothing happens.
+
+To make it slightly more interesting, we can draw a different background:
+
+```
+def draw():
+    screen.fill((255, 0, 0))
+```
+
+This will make the background red instead of black. But it is still a boring game. Nothing is happening. We can make it slightly more interesting:
+
+```
+colors = [0, 0, 0]
+
+def draw():
+    screen.fill(tuple(colors))
+
+def update():
+    colors[0] = (colors[0] + 1) % 256
+```
+
+This will make a window that starts black, becomes brighter and brighter red, then goes back to black, over and over again.
+
+The **update** function updates parameters, while the **draw** function renders the game based on these parameters.
+
+However, there is no way for the player to interact with the game! Let's try something else:
+
+```
+colors = [0, 0, 0]
+
+def draw():
+    screen.fill(tuple(colors))
+
+def update():
+    colors[0] = (colors[0] + 1) % 256
+
+def on_key_down(key, mod, unicode):
+    colors[1] = (colors[1] + 1) % 256
+```
+
+Now pressing keys on the keyboard will increase the "greenness."
+
+These comprise the three important parts of a game loop: respond to user input, update parameters, and re-render the screen.
+
+PyGame Zero offers much more, including functions for drawing sprites and playing sound clips.
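+
+For instance, here is a minimal sketch of those sprite and sound helpers. The asset names are assumptions for illustration: PyGame Zero looks for **images/alien.png** and **sounds/eep.wav** next to the game file by convention, so substitute whatever files you have:
+
+```
+# Assumes images/alien.png and sounds/eep.wav exist next to game.py.
+alien = Actor("alien")        # loads images/alien.png
+alien.pos = 100, 56
+
+def draw():
+    screen.clear()
+    alien.draw()
+
+def on_mouse_down(pos):
+    if alien.collidepoint(pos):
+        sounds.eep.play()     # plays sounds/eep.wav
+        alien.x += 10         # nudge the sprite to the right
+```
+
+Try it out and see what type of game you can come up with!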
+
+--------------------------------------------------------------------------------
+
+via: https://opensource.com/article/19/1/pygame-zero
+
+作者:[Moshe Zadka][a]
+选题:[lujun9972][b]
+译者:[xiqingongzi](https://github.com/xiqingongzi)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://opensource.com/users/moshez
+[b]: https://github.com/lujun9972
+[1]: https://www.pygame.org/news
+[2]: https://pygame-zero.readthedocs.io/en/stable/
diff --git a/sources/tech/20190125 Top 5 Linux Distributions for Development in 2019.md b/sources/tech/20190125 Top 5 Linux Distributions for Development in 2019.md
new file mode 100644
index 0000000000..b3e2de22ba
--- /dev/null
+++ b/sources/tech/20190125 Top 5 Linux Distributions for Development in 2019.md
@@ -0,0 +1,161 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Top 5 Linux Distributions for Development in 2019)
+[#]: via: (https://www.linux.com/blog/2019/1/top-5-linux-distributions-development-2019)
+[#]: author: (Jack Wallen https://www.linux.com/users/jlwallen)
+
+Top 5 Linux Distributions for Development in 2019
+======
+
+![](https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev-main.jpg?itok=DEe9pYtb)
+
+One of the most popular tasks undertaken on Linux is development. With good reason: Businesses rely on Linux. Without Linux, technology simply wouldn't meet the demands of today's ever-evolving world. Because of that, developers are constantly working to improve the environments with which they work. One way to manage such improvements is to have the right platform to start with. Thankfully, this is Linux, so you always have a plethora of choices.
+
+But sometimes, too many choices can be a problem in and of itself. Which distribution is right for your development needs? That, of course, depends on what you're developing, but certain distributions just make sense to use as a foundation for your task. I'll highlight five distributions I consider the best for developers in 2019.
+
+### Ubuntu
+
+Let's not mince words here. Although the Linux Mint faithful are an incredibly loyal group (with good reason, their distro of choice is fantastic), Ubuntu Linux gets the nod here. Why? Because, thanks to the likes of [AWS][1], Ubuntu is one of the most deployed server operating systems. That means developing on an Ubuntu desktop distribution makes for a much easier translation to Ubuntu Server. And because Ubuntu makes it incredibly easy to develop for, work with, and deploy containers, it makes perfect sense that you'd want to work with this platform. Couple that with Ubuntu's inclusion of Snap Packages, and Canonical's operating system gets yet another boost in popularity.
+
+But it's not just about what you can do with Ubuntu, it's how easily you can do it. For nearly every task, Ubuntu is an incredibly easy distribution to use. And because Ubuntu is so popular, chances are every tool and IDE you want to work with can be easily installed from the Ubuntu Software GUI (Figure 1).
+
+![Ubuntu][3]
+
+Figure 1: Developer tools found in the Ubuntu Software tool.
+
+[Used with permission][4]
+
+If you're looking for ease of use, simplicity of migration, and plenty of available tools, you cannot go wrong with Ubuntu as a development platform.
+
+### openSUSE
+
+There's a very specific reason why I add openSUSE to this list. Not only is it an outstanding desktop distribution, it's also one of the best rolling releases you'll find on the market. So if you're wanting to develop with and release for the most recent software available, [openSUSE Tumbleweed][5] should be one of your top choices. If you want to leverage the latest releases of your favorite IDEs, if you always want to make sure you're developing with the most recent libraries and toolkits, Tumbleweed is your platform.
+
+But openSUSE doesn't just offer a rolling release distribution. If you'd rather make use of a standard release platform, [openSUSE Leap][6] is what you want.
+
+Of course, it's not just about standard or rolling releases. The openSUSE platform also has a Kubernetes-specific release, called [Kubic][7], which is based on Kubernetes atop openSUSE MicroOS. But even if you aren't developing for Kubernetes, you'll find plenty of software and tools to work with.
+
+And openSUSE also offers the ability to select your desktop environment, or (should you choose) a generic desktop or server (Figure 2).
+
+![openSUSE][9]
+
+Figure 2: The openSUSE Tumbleweed installation in action.
+
+[Used with permission][4]
+
+### Fedora
+
+Using Fedora as a development platform just makes sense. Why? The distribution itself seems geared toward developers. With a regular, six-month release cycle, developers can be sure they won't be working with out-of-date software for long. This can be important when you need the most recent tools and libraries. And if you're developing for enterprise-level businesses, Fedora makes for an ideal platform, as it is the upstream for Red Hat Enterprise Linux. What that means is the transition to RHEL should be painless. That's important, especially if you hope to bring your project to a much larger market (one with deeper pockets than a desktop-centric target).
+
+Fedora also offers one of the best GNOME experiences you'll come across (Figure 3). This translates to a very stable and fast desktop.
+
+![GNOME][11]
+
+Figure 3: The GNOME desktop on Fedora.
+
+[Used with permission][4]
+
+But if GNOME isn't your jam, you can opt to install one of the [Fedora spins][12] (which includes KDE, XFCE, LXQT, Mate-Compiz, Cinnamon, LXDE, and SOAS).
+
+### Pop!_OS
+
+I'd be remiss if I didn't include [System76][13]'s platform, customized specifically for their hardware (although it does work fine on other hardware). Why would I include such a distribution, especially one that doesn't really venture far away from the Ubuntu platform on which it is based? Primarily because this is the distribution you want if you plan on purchasing a desktop or laptop from System76. But why would you do that (especially given that Linux works on nearly all off-the-shelf hardware)? Because System76 sells outstanding hardware. With the release of their Thelio desktop, you have available one of the most powerful desktop computers on the market. If you're developing seriously large applications (especially ones that lean heavily on very large databases or require a lot of processing power for compilation), why not go for the best? And since Pop!_OS is perfectly tuned for System76 hardware, this is a no-brainer.
+
+Since Pop!_OS is based on Ubuntu, you'll have all the tools available to the base platform at your fingertips (Figure 4).
+
+![Pop!_OS][15]
+
+Figure 4: The Anjuta IDE running on Pop!_OS.
+
+[Used with permission][4]
+
+Pop!_OS also defaults to encrypted drives, so you can trust your work will be safe from prying eyes (should your hardware fall into the wrong hands).
+
+### Manjaro
+
+For anyone who likes the idea of developing on Arch Linux, but doesn't want to have to jump through all the hoops of installing and working with Arch Linux, there's Manjaro. Manjaro makes it easy to have an Arch Linux-based distribution up and running (as easily as installing and using, say, Ubuntu).
+
+But what makes Manjaro developer-friendly (besides enjoying that Arch-y goodness at the base) is how many different flavors you'll find available for download. From the [Manjaro download page][16], you can grab the following flavors:
+
+  * GNOME
+
+  * XFCE
+
+  * KDE
+
+  * OpenBox
+
+  * Cinnamon
+
+  * I3
+
+  * Awesome
+
+  * Budgie
+
+  * Mate
+
+  * Xfce Developer Preview
+
+  * KDE Developer Preview
+
+  * GNOME Developer Preview
+
+  * Architect
+
+  * Deepin
+
+
+
+Of note are the developer editions (which are geared toward testers and developers), the Architect edition (which is for users who want to build Manjaro from the ground up), and the Awesome edition (Figure 5), which is for developers dealing with everyday tasks. The one caveat to using Manjaro is that, like any rolling release, the code you develop today may not work tomorrow. Because of this, you need to think with a certain level of agility. Of course, if you're not developing for Manjaro (or Arch), and you're doing more generic (or web) development, that will only affect you if the tools you use are updated and no longer work for you. Chances of that happening, however, are slim. And like with most Linux distributions, you'll find a ton of developer tools available for Manjaro.
+
+![Manjaro][18]
+
+Figure 5: The Manjaro Awesome Edition is great for developers.
+
+[Used with permission][4]
+
+Manjaro also supports the Arch User Repository (a community-driven repository for Arch users), which includes cutting-edge software and libraries, as well as proprietary applications like [Unity Editor][19] or yEd. A word of warning, however, about the Arch User Repository: It was discovered that the AUR contained software considered to be malicious. So, if you opt to work with that repository, do so carefully and at your own risk.
+
+### Any Linux Will Do
+
+Truth be told, if you're a developer, just about any Linux distribution will work. This is especially true if you do most of your development from the command line. But if you prefer a good GUI running on top of a reliable desktop, give one of these distributions a try; they will not disappoint.
+
+Learn more about Linux through the free ["Introduction to Linux"][20] course from The Linux Foundation and edX.
+
+--------------------------------------------------------------------------------
+
+via: https://www.linux.com/blog/2019/1/top-5-linux-distributions-development-2019
+
+作者:[Jack Wallen][a]
+选题:[lujun9972][b]
+译者:[译者ID](https://github.com/译者ID)
+校对:[校对者ID](https://github.com/校对者ID)
+
+本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

+[a]: https://www.linux.com/users/jlwallen
+[b]: https://github.com/lujun9972
+[1]: https://aws.amazon.com/
+[2]: https://www.linux.com/files/images/dev1jpg
+[3]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev_1.jpg?itok=7QJQWBKi (Ubuntu)
+[4]: https://www.linux.com/licenses/category/used-permission
+[5]: https://en.opensuse.org/Portal:Tumbleweed
+[6]: https://en.opensuse.org/Portal:Leap
+[7]: https://software.opensuse.org/distributions/tumbleweed
+[8]: /files/images/dev2jpg
+[9]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev_2.jpg?itok=1GJmpr1t (openSUSE)
+[10]: /files/images/dev3jpg
+[11]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev_3.jpg?itok=_6Ki4EOo (GNOME)
+[12]: https://spins.fedoraproject.org/
+[13]: https://system76.com/
+[14]: /files/images/dev4jpg
+[15]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev_4.jpg?itok=nNG2Ax24 (Pop!_OS)
+[16]: https://manjaro.org/download/
+[17]: /files/images/dev5jpg
+[18]: https://www.linux.com/sites/lcom/files/styles/rendered_file/public/dev_5.jpg?itok=RGfF2UEi (Manjaro)
+[19]: https://unity3d.com/unity/editor
+[20]: https://training.linuxfoundation.org/linux-courses/system-administration-training/introduction-to-linux
diff --git a/sources/tech/20190125 Using Antora for your open source documentation.md b/sources/tech/20190125 Using Antora for your open source documentation.md
new file mode 100644
index 0000000000..3df2862ba1
--- /dev/null
+++ b/sources/tech/20190125 Using Antora for your open source documentation.md
@@ -0,0 +1,208 @@
+[#]: collector: (lujun9972)
+[#]: translator: ( )
+[#]: reviewer: ( )
+[#]: publisher: ( )
+[#]: url: ( )
+[#]: subject: (Using Antora for your open source documentation)
+[#]: via: (https://fedoramagazine.org/using-antora-for-your-open-source-documentation/)
+[#]: author: (Adam Šamalík https://fedoramagazine.org/author/asamalik/)
+
+Using Antora for your open source documentation
+======
+![](https://fedoramagazine.org/wp-content/uploads/2019/01/antora-816x345.jpg)
+
+Are you looking for an easy way to write and publish technical documentation? Let me introduce [Antora][1] — an open source documentation site generator. Simple enough for a tiny project, but also complex enough to cover large documentation sites such as [Fedora Docs][2].
+
+With sources stored in git, written in the simple yet powerful AsciiDoc markup language, and static HTML as the output, Antora makes writing, collaborating on, and publishing your documentation a no-brainer.
+
+### The basic concepts
+
+Before we build a simple site, let's have a look at some of the core concepts Antora uses to make the world a happier place. Or, at least, to build a documentation website.
+
+#### Organizing the content
+
+All sources that are used to build your documentation site are stored in a **git repository**. Or multiple ones — potentially owned by different people. For example, at the time of writing, the Fedora Docs had its sources stored in 24 different repositories owned by different groups having their own rules around contributions.
+
+The content in Antora is organized into **components**, usually representing different areas of your project, or, well, different components of the software you're documenting — such as the backend, the UI, etc. Components can be independently versioned, and each component gets a separate space on the docs site with its own menu.
+
+Components can be optionally broken down into so-called **modules**. Modules are mostly invisible on the site, but they allow you to organize your sources into logical groups, and even store each in a different git repository if that's something you need to do. We use this in Fedora Docs to separate [the Release Notes, the Installation Guide, and the System Administrator Guide][3] into three different source repositories with their own rules, while preserving a single view in the UI.
+
+What's great about this approach is that, to some extent, the way your sources are physically structured is not reflected on the site.
+
+#### Virtual catalog
+
+When assembling the site, Antora builds a **virtual catalog** of all pages, assigning a [unique ID][4] to each one based on its name and the component, the version, and module it belongs to. The page ID is then used to generate URLs for each page, and for internal links as well. So, to some extent, the source repository structure doesn't really matter as far as the site is concerned.
+
+As an example, if we'd for some reason decided to merge all the 24 repositories of Fedora Docs into one, nothing on the site would change. Well, except the "Edit this page" link on every page that would suddenly point to this one repository.
+
+#### Independent UI
+
+We've covered the content, but how is it going to look?
+
+Documentation sites generated with Antora use a so-called [UI bundle][5] that defines the look and feel of your site. The UI bundle holds all graphical assets such as CSS, images, etc. to make your site look beautiful.
+
+It is expected that the UI will be developed independently of the documentation content, and that's exactly what Antora supports.
+
+#### Putting it all together
+
+Having sources distributed in multiple repositories might raise a question: How do you build the site? The answer is: [Antora Playbook][6].
+
+Antora Playbook is a file that points to all the source repositories and the UI bundle. It also defines additional metadata such as the name of your site.
+
+The Playbook is the only file you need to have locally available in order to build the site. Everything else gets fetched automatically as a part of the build process.
+
+### Building a site with Antora
+
+Demo time! To build a minimal site, you need three things:
+
+ 1. At least one component holding your AsciiDoc sources.
+ 2. An Antora Playbook.
+ 3. A UI bundle.
+
+
+
+The good news is that the nice people behind Antora provide [example Antora sources][7] we can try right away.
+
+#### The Playbook
+
+Let's first have a look at [the Playbook][8]:
+
+```
+site:
+  title: Antora Demo Site
+# the 404 page and sitemap files only get generated when the url property is set
+  url: https://example.org/docs
+  start_page: component-b::index.adoc
+content:
+  sources:
+    - url: https://gitlab.com/antora/demo/demo-component-a.git
+      branches: master
+    - url: https://gitlab.com/antora/demo/demo-component-b.git
+      branches: [v2.0, v1.0]
+      start_path: docs
+ui:
+  bundle:
+    url: https://gitlab.com/antora/antora-ui-default/-/jobs/artifacts/master/raw/build/ui-bundle.zip?job=bundle-stable
+    snapshot: true
+```
+
+As we can see, the Playbook defines some information about the site, lists the content repositories, and points to the UI bundle.
+
+There are two repositories: the [demo-component-a][9] with a single branch, and the [demo-component-b][10] with two branches, each representing a different version.
+
+#### Components
+
+The minimal source repository structure is nicely demonstrated in the [demo-component-a][9] repository:
+
+```
+antora.yml    <- component metadata
+modules/
+  ROOT/       <- the default module
+    nav.adoc  <- menu definition
+    pages/    <- a directory with all the .adoc sources
+      source1.adoc
+      source2.adoc
+      ...
+```
+
+The following **antora.yml** contains metadata for this component, such as the name and the version of the component and the starting page, and it also points to a menu definition file:
+
+```
+name: component-a
+title: Component A
+version: 1.5.6
+start_page: ROOT:inline-text-formatting.adoc
+nav:
+  - modules/ROOT/nav.adoc
+```
+
+The menu definition file is a simple list that defines the structure of the menu and the content. It uses the [page ID][4] to identify each page.
+
+```
+* xref:inline-text-formatting.adoc[Basic Inline Text Formatting]
+* xref:special-characters.adoc[Special Characters & Symbols]
+* xref:admonition.adoc[Admonition]
+* xref:sidebar.adoc[Sidebar]
+* xref:ui-macros.adoc[UI Macros]
+* Lists
+** xref:lists/ordered-list.adoc[Ordered List]
+** xref:lists/unordered-list.adoc[Unordered List]
+```
+
+And finally, there's the actual content under **modules/ROOT/pages/** — you can see the repository for examples, or the AsciiDoc syntax reference.
+
+#### The UI bundle
+
+For the UI, we'll be using the example UI provided by the project.
+
+Going into the details of the Antora UI would be beyond the scope of this article, but if you're interested, please see the [Antora UI documentation][5] for more info.
+
+#### Building the site
+
+Note: We'll be using Podman to run Antora in a container. You can [learn about Podman on the Fedora Magazine][11].
+
+To build the site, we only need to call Antora on the Playbook file.
+
+The easiest way to get Antora at the moment is to use the container image provided by the project. You can get it by running:
+
+```
+$ podman pull antora/antora
+```
+
+Let's get the playbook repository:
+
+```
+$ git clone https://gitlab.com/antora/demo/demo-site.git
+$ cd demo-site
+```
+
+And run Antora using the following command:
+
+```
+$ podman run --rm -it -v $(pwd):/antora:z antora/antora site.yml
+```
+
+The site will be available in the **public** directory. You can either open it in your web browser directly, or start a local web server using:
+
+```
+$ cd public
+$ python3 -m http.server 8080
+```
+
+Your site will be available on <http://localhost:8080>.
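+
+One last piece worth a look: the pages themselves are plain AsciiDoc files under **modules/ROOT/pages/**. As a minimal sketch, a hypothetical **source1.adoc** (the title and the cross-reference target here are made-up examples) could look like this:
+
+```
+= My First Page
+
+Regular paragraphs are written as plain text.
+
+// Cross-references to other pages use the page ID, just like the menu:
+xref:source2.adoc[See the second page]
+```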
+ + +-------------------------------------------------------------------------------- + +via: https://fedoramagazine.org/using-antora-for-your-open-source-documentation/ + +作者:[Adam Šamalík][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://fedoramagazine.org/author/asamalik/ +[b]: https://github.com/lujun9972 +[1]: https://antora.org/ +[2]: http://docs.fedoraproject.org/ +[3]: https://docs.fedoraproject.org/en-US/fedora/f29/ +[4]: https://docs.antora.org/antora/2.0/page/page-id/#structure +[5]: https://docs.antora.org/antora-ui-default/ +[6]: https://docs.antora.org/antora/2.0/playbook/ +[7]: https://gitlab.com/antora/demo +[8]: https://gitlab.com/antora/demo/demo-site/blob/master/site.yml +[9]: https://gitlab.com/antora/demo/demo-component-a +[10]: https://gitlab.com/antora/demo/demo-component-b +[11]: https://fedoramagazine.org/running-containers-with-podman/ diff --git a/sources/tech/20190126 Get started with Tint2, an open source taskbar for Linux.md b/sources/tech/20190126 Get started with Tint2, an open source taskbar for Linux.md new file mode 100644 index 0000000000..e8afdbb417 --- /dev/null +++ b/sources/tech/20190126 Get started with Tint2, an open source taskbar for Linux.md @@ -0,0 +1,59 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Get started with Tint2, an open source taskbar for Linux) +[#]: via: (https://opensource.com/article/19/1/productivity-tool-tint2) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney)) + +Get started with Tint2, an open source taskbar for Linux +====== + +Tint2, the 14th in our series on open source tools that will make you more productive in 2019, offers a consistent user experience with any window manager. + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tools_hardware_purple.png?itok=3NdVoYhl) + +There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way. + +Here's the 14th of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019. + +### Tint2 + +One of the best ways for me to be more productive is to use a clean interface with as little distraction as possible. As a Linux user, this means using a minimal window manager like [Openbox][1], [i3][2], or [Awesome][3]. Each has customization options that make me more efficient. The one thing that slows me down is that none has a consistent configuration, so I have to tweak and re-tune my window manager constantly. + +![](https://opensource.com/sites/default/files/uploads/tint2-1.png) + +[Tint2][4] is a lightweight panel and taskbar that provides a consistent experience with any window manager. It is included with most distributions, so it is as easy to install as any other package. + +It includes two programs, Tint2 and Tint2conf. At first launch, Tint2 starts with its default layout and theme. The default configuration includes multiple web browsers, the tint2conf program, a taskbar, and a system tray. 
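+
+By the way, Tint2 is usually launched by your window manager's autostart mechanism rather than by hand. As a minimal sketch for Openbox (the file path is Openbox's convention; other window managers have their own equivalents):
+
+```
+# ~/.config/openbox/autostart
+# Start the panel in the background when the session comes up.
+tint2 &
+```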
+ +![](https://opensource.com/sites/default/files/uploads/tint2-2.png) + +Launching the configuration tool allows you to select from the included themes and customize the top, bottom, and sides of the screen. I recommend starting with the theme that is closest to what you want and customizing from there. + +![](https://opensource.com/sites/default/files/uploads/tint2-3.png) + +Within the themes, you can customize where panel items are placed as well as background and font options for every item on the panel. You can also add and remove items from the launcher. + +![](https://opensource.com/sites/default/files/uploads/tint2-4.png) + +Tint2 is a lightweight taskbar that helps you get to the tools you need quickly and efficiently. It is highly customizable, unobtrusive (unless the user wants it not to be), and compatible with almost any window manager on a Linux desktop. + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/productivity-tool-tint2 + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney (Kevin Sonney) +[b]: https://github.com/lujun9972 +[1]: http://openbox.org/wiki/Main_Page +[2]: https://i3wm.org/ +[3]: https://awesomewm.org/ +[4]: https://gitlab.com/o9000/tint2 diff --git a/sources/tech/20190127 Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops.md b/sources/tech/20190127 Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops.md new file mode 100644 index 0000000000..78f31a6b94 --- /dev/null +++ b/sources/tech/20190127 Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops.md @@ -0,0 +1,55 @@ +[#]: collector: (lujun9972) +[#]: translator: (geekpi) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops) +[#]: via: (https://opensource.com/article/19/1/productivity-tool-edex-ui) +[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney)) + +Get started with eDEX-UI, a Tron-influenced terminal program for tablets and desktops +====== +Make work more fun with eDEX-UI, the 15th in our series on open source tools that will make you more productive in 2019. +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/button_push_open_keyboard_file_organize.png?itok=KlAsk1gx) + +There seems to be a mad rush at the beginning of every year to find ways to be more productive. New Year's resolutions, the itch to start the year off right, and of course, an "out with the old, in with the new" attitude all contribute to this. And the usual round of recommendations is heavily biased towards closed source and proprietary software. It doesn't have to be that way. + +Here's the 15th of my picks for 19 new (or new-to-you) open source tools to help you be more productive in 2019. + +### eDEX-UI + +I was 11 years old when [Tron][1] was in movie theaters. I cannot deny that, despite the fantastical nature of the film, it had an impact on my career choice later in life. + +![](https://opensource.com/sites/default/files/uploads/edex-ui-1.png) + +[eDEX-UI][2] is a cross-platform terminal program designed for tablets and desktops that was inspired by the user interface in Tron. 
It has five terminals in a tabbed interface, so it is easy to switch between tasks, as well as useful displays of system information. + +At launch, eDEX-UI goes through a boot sequence with information about the ElectronJS system it is based on. After the boot, eDEX-UI shows system information, a file browser, a keyboard (for tablets), and the main terminal tab. The other four tabs (labeled EMPTY) don't have anything loaded and will start a shell when you click on one. The default shell in eDEX-UI is Bash (if you are on Windows, you will likely have to change it to either PowerShell or cmd.exe). + +![](https://opensource.com/sites/default/files/uploads/edex-ui-2.png) + +Changing directories in the file browser will change directories in the active terminal and vice-versa. The file browser does everything you'd expect, including opening associated applications when you click on a file. The one exception is eDEX-UI's settings.json file (in .config/eDEX-UI by default), which opens the configuration editor instead. This allows you to set the shell command for the terminals, change the theme, and modify several other settings for the user interface. Themes are also stored in the configuration directory and, since they are also JSON files, creating a custom theme is pretty straightforward. + +![](https://opensource.com/sites/default/files/uploads/edex-ui-3.png) + +eDEX-UI allows you to run five terminals with full emulation. The default terminal type is xterm-color, meaning it has full-color support. One thing to be aware of is that the keys light up on the keyboard while you type, so if you're using eDEX-UI on a tablet, the keyboard could present a security risk in environments where people can see the screen. It is better to use a theme without the keyboard on those devices, although it does look pretty cool when you are typing. + +![](https://opensource.com/sites/default/files/uploads/edex-ui-4.png) + +While eDEX-UI supports only five terminal windows, that has been more than enough for me. On a tablet, eDEX-UI gives me that cyberspace feel without impacting my productivity. On a desktop, eDEX-UI allows all of that and lets me look cool in front of my co-workers. 
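Because the configuration is plain JSON, you can even script your tweaks. Below is a small, hypothetical Python sketch that switches the shell setting; the config path matches the default mentioned above, but treat the key name and value as assumptions and check them against your own settings.json first:

```
#!/usr/bin/env python3
# tweak_edex.py: point eDEX-UI at a different shell
# (illustrative sketch; the "shell" key is an assumption, so verify it
# against the settings.json on your machine before relying on it)
import json
from pathlib import Path

config = Path.home() / ".config" / "eDEX-UI" / "settings.json"

settings = json.loads(config.read_text())
settings["shell"] = "/usr/bin/zsh"  # hypothetical value
config.write_text(json.dumps(settings, indent=2))
print("Updated", config)
```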
+ +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/19/1/productivity-tool-edex-ui + +作者:[Kevin Sonney][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/译者ID) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/ksonney (Kevin Sonney) +[b]: https://github.com/lujun9972 +[1]: https://en.wikipedia.org/wiki/Tron +[2]: https://github.com/GitSquared/edex-ui diff --git a/translated/talk/20171222 10 keys to quick game development.md b/translated/talk/20171222 10 keys to quick game development.md new file mode 100644 index 0000000000..c41f66bfc7 --- /dev/null +++ b/translated/talk/20171222 10 keys to quick game development.md @@ -0,0 +1,99 @@ +快速开发游戏的十个关键 +====== +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/computer_keyboard_laptop_development_code_woman.png?itok=vbYz6jjb) + +十月早些时候,Opensource.com赞助的 [Open Jam][1] 入职仪式为处于世界各地的团队设立了45个入口。这些队伍只有三天时间用开源软件制作出一个游戏来参与角逐,并 [取前三][2]。 + +我们在大学为每一位愿意参与的人举办了我们自己的 Open Jam 活动。周末预留了计算机实验室教大家使用开源软件——游戏引擎:[Godot][3],音乐:[LMMS][4] ,2D画面: [GIMP][5] 和3D画面 [Blender][6] ——来创作游戏和相关组件。活动产出了三个游戏: [Loathsome][7], [Lost Artist][8], 和 [Paint Rider][9] (我做的)。 + +总的说来,我从游戏创作和游戏开发中,学到了十课关于游戏引擎、代码和快速游戏开发。 + +## 1\. 限定规模 + +很容易想要去做一个规模宏大的冒险游戏或者比拟你最喜欢的游戏的东西。追求高于游戏创作之外的东西可能很酷,如果你有体会,但不要高估自己拥有的时间。我欣赏游戏创作的一点是强制你快速将一个游戏从概念阶段变成最终产品,因为你的时间非常有限。这也就是限定规模如此重要。 + +Open Jam 的主题是“留下痕迹”,题目一出来,我和朋友就开始讨论什么样的游戏合题意。有个想法就是做玩家能在敌人身上留下伤痕的3D拳击游戏。我几乎没有做3D游戏的经验,我想做好的话,在我甚至还没发掘出可玩性之前,就得花太多时间在学习如何让痕迹合理和打击有效。 + +## 2\. 尽早可玩 + +对游戏创作我最中肯的建议就是这。试着做出核心部件,快速写出代码,这样你就可以测试并决定它是否值得做成一个完整的游戏。不应该只剩几个小时截止了,才让你的游戏可玩。像 Open Jam 这样的三天创作,最好少花时间在实现概念上。 + +## 3\. 保持简单 + +你想加入的每个特性都会延长整个开发时间。因为你不能迅速使之运行,所以无从得知提交一个新特性是否会消耗大量时间。街机风格的高分作品往往会在游戏创作中表现良好,它们天生就很简单。一旦核心部分完成,你可以开始加入特性并润色,无需担心最后游戏是否功能强大。 + +## 4\. 从其他游戏获取灵感 + +可能你想做出完全原创的作品,但作品的原型极其有用。原型将节省重复劳动的时间,因为你已经知道什么有趣。告诉自己实践的经验越多,越容易做出包含自己想法的大型游戏,自然你也能从再创作其他人的作品中很好地练习。 + +考虑到 Open Jam 的“留下痕迹”主题,我觉得创作一个玩的时候可以留下颜料痕迹的游戏会很有趣,这样也可以看到你留下的标记。我记得这款老式动画游戏 [Line Rider 2 Beta][10] (后来叫 Paint Rider),而且知道玩的时候按住 Control 键可以画出痕迹的彩蛋。我简化了概念,甚至只需要一个按键来垂直移动。(更像老式飞机游戏)。大概一两个小时的创作,我有了基本模型,用一个按钮上下移动和留下小黑圈的痕迹。 + +## 5\. 不要忽视可得性 + +确保尽可能多的人能玩你的游戏。某个提交到 Open Jam 的游戏是虚拟现实游戏。尽管那很酷,但几乎没有人可以玩,因为拥有VR设备的人不多。所幸它的开发者并不期望取得好名次,只是想练手。但如果你想和人们分享你的游戏(或者赢得游戏创作),注意可得性是很重要的。 + +Godot (和其他大多数游戏引擎)允许你在所有主流平台发布游戏。提交游戏时,特别是在 [Itch.io][11],有浏览器版本将支持大多数人玩。但尽你所能去发布在更多的平台和开放系统上。我甚至试着在移动端发布 Paint Rider ,但技术有限。 + +## 6\. 不要做得太难 + +如果游戏需要花费过多精力去学或者玩,你将失去一部分玩家。这照应了保持简单和限定规模,在游戏计划阶段非常重要。再次重申,想做一个宏大的游戏花上十天半个月开发很容易;难的是做出好玩、简单的游戏。 + +给妈妈介绍了 Paint Rider 之后,她很快开始玩起来,我认为不需要跟她说明更多。 + +## 7\. 不用太整洁 + +如果你习惯于花时间在设计每处图案和确保代码可复用、可适应,试着放松一点。如果你花太多时间考虑设计,当你最后到了可以玩游戏的时候,你可能发现游戏不是很有趣,那时候就来不及修改了。 + +这过程也适用于简化更严格的游戏:快速码出验证概念性展示模型直到找出值得做成完整游戏的,然后你潜心建立完美的代码基础来支持它。游戏创作的开发游戏就像快速码出可验证的理念。 + +## 8\. 但也不要太随意 + +另一方面, [意大利面式代码][12] 容易失控,即使游戏开发没有大量代码。还好大多是游戏引擎用脑中的设计图建成。就拿 Godot 的[信号][13] 功能来说,节点可以发送数据信息给它们“连上了”的节点——这是你的设计自动成型的[观测图][14]。 只要你知道如何利用游戏引擎的特性,就可以快速写代码,你的代码也不会特别难读。 + +## 9\. 取得反馈 + +向人们展示你正在做的。让他们试一试并看看他们说些啥。看看他们如何玩你的游戏,找找他们有没有发现你期望之外的事。如果游戏创作有[争论][15] 频道或者类似的,把你的游戏放上去,人们会反馈你的想法。 Paint Rider 的定义功能之一是画布循环,所以你可以看到之前留下来的画。在有人问我为什么这个游戏没有之前,我甚至没有考虑那个设置。 + +团队协作的话,确保有其他可以传递周围反馈的人参与这个开发。 + +而且不要忘了用相同的方式帮助其他人;如果你在玩其他人游戏的时候发现了有助于你游戏的东西,这就是双赢。 + +## 10\. 
哪里找资源

所有素材都自己做真的会拖你后腿。Open Jam 期间,当我忙于添加新特性和修漏洞时,我注意到 Loathsome 的开发者花了大量时间在绘制主要角色上。你可以简化游戏的美术风格,并使用一些看起来、听起来还不错的现成素材,不必事事亲力亲为。试着在 [Creative Commons][16] 上寻找素材,或者去 [Anttis Instrumentals][17] 这样的免费音乐站点找找。或者,可行的话,组一个有专职美术或音乐人的团队。

其他你可能觉得有用的软件还有 [Krita][18],一款适合数字绘画的开源 2D 图像软件,尤其是在你有绘图板的情况下;还有 [sfxr][19],一款游戏音效生成软件,有很多可调的参数,但正如它的开发者所说:“基本用法就是不停按随机按钮。”(Paint Rider 的所有音效都是用 Sfxr 做的。)你也可以看看 [Calinou][20] 整理的丰富而有条理的开源游戏开发软件列表。

你参加过 Open Jam 或者其他游戏创作营吗?有别的建议,或者对我没讲到的地方有疑问吗?有的话,请在评论中分享。

--------------------------------------------------------------------------------

via: https://opensource.com/article/17/12/10-keys-rapid-open-source-game-development

作者:[Ryan Estes][a]
译者:[XYenChi](https://github.com/XYenChi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/figytuna
[1]:https://itch.io/jam/open-jam-1
[2]:https://opensource.com/article/17/11/open-jam
[3]:https://godotengine.org/
[4]:https://lmms.io/
[5]:https://www.gimp.org/
[6]:https://www.blender.org/
[7]:https://astropippin.itch.io/loathsome
[8]:https://masonraus.itch.io/lost-artist
[9]:https://figytuna.itch.io/paint-rider
[10]:http://www.andkon.com/arcade/racing/lineriderbeta2/
[11]:https://itch.io/
[12]:https://en.wikipedia.org/wiki/Spaghetti_code
[13]:http://kidscancode.org/blog/2017/03/godot_101_07/
[14]:https://en.wikipedia.org/wiki/Observer_pattern
[15]:https://discordapp.com/
[16]:https://creativecommons.org/
[17]:http://www.soundclick.com/bands/default.cfm?bandID=1277008
[18]:https://krita.org/en/
[19]:http://www.drpetter.se/project_sfxr.html
[20]:https://notabug.org/Calinou/awesome-gamedev/src/master/README.md
diff --git a/translated/talk/20180409 5 steps to building a cloud that meets your users- needs.md b/translated/talk/20180409 5 steps to building a cloud that meets your users- needs.md
deleted file mode 100644
index 09ac1aced2..0000000000
--- a/translated/talk/20180409 5 steps to building a cloud that meets your users- needs.md
+++ /dev/null
@@ -1,108 +0,0 @@
-
构建满足客户需求的一套云环境的 5 个步骤
======

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/rh_003601_05_mech_osyearbook2016_cloud_cc.png?itok=XSV7yR9e)

这篇文章是与 [Ian Teksbury][1] 共同完成的。

无论你如何定义,云不过是你的用户用来为组织创造价值的又一个工具。在谈论新的范式或技术(云两者兼是)时,人们很容易被它的新特性分散注意力。对话很快就会演变成一份由无穷无尽的问题引出的功能愿望清单,其中很多你可能已经考虑过:

 * 是公有云、私有云还是混合云?
 * 用虚拟机还是容器,还是两者都用?
 * 会提供自助服务吗?
 * 从开发到生产是完全自动化,还是需要手动操作?
 * 我们能创建得多快?
 * 工具 X、Y、Z 怎么样?

这样的清单还可以列很长。

进行现代化(或者叫数字化转型,随你怎么称呼)的通常做法,是从回答高级管理层的这类高层次问题开始,而这种方法的结果是可以预见的:失败。经过大范围的调研,并且花费数月甚至数年时间部署了最炫的新技术之后,这套新的云从未被真正使用,陷入荒废,直到最终被丢弃,或者被遗忘在数据中心和预算的某个角落里。

这是因为,无论你交付的是什么工具,都不是用户想要或需要的。更糟糕的是,它可能还是一整套工具,而用户真正需要的只是其中一个;于是只要出现更新、更炫、更能满足需求的工具,这一套就都会被用户抛弃。

### 议题聚焦

问题出在关注点上:传统上,关注点一直在工具上。但工具本身并不能为组织增加价值;真正创造价值的,是终端用户拿它做了什么。你需要把注意力从创建云本身(例如技术和工具)转移到你的人员和用户身上。

事实上,除了“使用工具的用户(而不是工具本身)才是驱动价值的因素”之外,把注意力聚焦在用户身上还有其他理由。工具是给用户用来解决问题、创造价值的,因此,如果工具不能满足用户的需求,它们就不会被使用。如果你交付的工具不是用户喜欢的,他们就不会去用,这是人的天性。

数十年来,IT 产业只为用户提供一种解决方案,因为可选项只有一两个,用户没有能力改变这一点。现在情况已经不同了。我们生活在一个技术选择极为丰富的世界里。不给用户选择的做法已经行不通了:他们在个人的科技生活中有得选,也希望在工作中同样有得选。现在的用户见多识广,知道一定存在比你所提供的更好的选择。

因此,除了物理上管控最严密的场所之外,没有什么办法能阻止他们按自己的意愿行事,这就是我们所说的“影子 IT”。如果你的组织的安全和管控策略严格到了这种程度,许多员工会感到灰心丧气,并离职去能提供更好环境的公司。

基于以上所有原因,你必须牢记:昂贵又费时的云项目,一定要首先围绕终端用户来设计。

### 创建满足用户需求的云的五步流程

既然我们已经知道了为什么,接下来我们来讨论一下怎么做。你该如何为终端用户创建一个云?你怎样把注意力从技术转移到使用技术的用户身上?

根据以往的经验,我们知道最好的方法包含两个要点:从用户那里及时获得反馈,并在创建过程中与用户反复互动。

你的云环境将随着你的组织不断发展。下面的五个步骤将帮助你创建满足用户需求的云环境。

#### 1\. 识别谁将是你的用户

在开始向用户提问之前,你首先要弄清楚谁会是你新云环境的用户。他们可能包括要在云上构建应用的开发者;可能是运营、维护或者构建这套云的运维团队;还可能是保护组织安全的安全团队。在第一次迭代时,把用户范围缩小到人数较少的几个小组,以防被大量反馈淹没;让识别出的每个小组指派两名代表(一主一辅)。这会让你的第一次交付在规模和时间上都保持精小。

#### 2\. 和你的用户面对面交谈,收集有价值的输入

获得用户反馈的最佳途径是直接交流。群发邮件征求意见,应答者只会是自行选择的一小部分人——前提是你还能收到回复。小组讨论会有帮助,但当人们面对的是私下的、专注的听众时,往往会更加坦诚。

和你的第一批用户安排面对面的单独访谈,并问他们以下问题:

 * 为了完成你的任务,你需要什么?
 * 为了完成你的任务,你想要什么?
 * 你现在最头疼的技术问题是什么?
 * 你现在最头疼的策略或流程是什么?
 * 关于你的需求,你有哪些想法、愿望或者痛点?

这些问题只是指导性的,不一定适合每个小组。你不应该只问这些问题,它们应该引向更深层次的讨论。确保让用户知道,他们所说和所问的一切都会被认真对待;无论是消极的还是积极的,所有反馈都有帮助。这些对话将帮助你确定开发的优先级。

收集这种个性化的反馈,也是保持初始用户群较小的另一个原因:和每个用户交流会花费你大量时间,但我们发现这是非常值得的投入。

#### 3\. 设计并交付你的解决方案的第一个版本

一旦收到初始用户的反馈,就可以开始设计并交付一部分功能了。我们不推荐一次性交付整个解决方案。设计和交付的周期要短,以避免这样的错误:花一年时间做出一个自以为是的解决方案,却被用户拒之门外,因为它对他们毫无用处。创建云所需的工具取决于你的组织及其特定需求。只需确保你的解决方案建立在用户反馈的基础上,以小块功能的方式交付,并经常向用户征求反馈。

#### 4\. 询问用户对第一个版本的反馈

太棒了,现在你已经设计并向用户交付了炫酷的新云环境的第一个版本!你没有花一整年去完成它,而是把它拆成了小模块。为什么拆成小模块如此重要呢?因为你要回到用户那里,收集他们对你的设计和已交付功能的反馈。他们喜欢什么?不喜欢什么?你是否正确地处理了他们关注的问题?是否技术功能很强,但流程或策略层面仍有欠缺?

再重申一次,要问的问题取决于你的组织;这里的关键是延续上一阶段的讨论。毕竟你是在为用户创建云环境,所以要确保它对用户有用,并且不浪费大家投入的时间。

#### 5\. 回到第一步

这是一个迭代的过程。你的第一次交付应该快速而小巧,之后的迭代也应该如此。不要指望这个流程走上一遍、两遍甚至三遍就能大功告成。随着持续迭代,你会吸引更多用户,并在这个过程中得到更好的回报;你会从用户那里获得更多支持;你也能够更迅速、更可靠地迭代。最终,你的流程本身也会随之调整,以满足用户的需求。

在这个过程中,用户是最重要的部分,而迭代次之,因为它让你能够回到用户中持续沟通,从而获得更多有用的信息。在每个阶段,记录哪些做法有效、哪些没有达到预期。要自省,要对自己诚实:我们花费的时间带来了最大的价值吗?如果没有,就在下一个阶段尝试不同的做法。每次循环都不要花太多时间,这样做的最大好处是:如果某个做法这次不起作用,你可以很容易地在下一次中调整它,直到找到在你的组织中行之有效的方法。

### 这仅仅是开始

通过与许多客户的合作、从他们那里收集的反馈,以及这个领域同行的经验,我们一次又一次地发现:创建云时最重要的事,就是和你的用户交谈。这看起来显而易见,但令人惊讶的是,许多组织偏离了这个方向,花费数月甚至数年时间构建,最后才发现它对终端用户毫无用处。

现在你已经知道,为什么需要把注意力集中在终端用户身上,并以用户为中心、通过迭代的过程来创建云。剩下的就是我们最喜欢的部分:你亲自动手去做的部分。

这篇文章基于作者将在 [Red Hat Summit 2018][3] 上发表的演讲“[为终端用户设计混合云,否则注定失败][2]”,大会将于 5 月 8 日至 10 日在旧金山举行。

[在 5 月 7 号前注册][3]可以节省 500 美元。在支付页面使用折扣码 **OPEN18** 即可享受优惠。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/4/5-steps-building-your-cloud-correctly

作者:[Cameron Wyatt][a]
译者:[FelixYFZ](https://github.com/FelixYFZ)
校对:[校对者ID](https://github.com/校对者ID)
选题:[lujun9972](https://github.com/lujun9972)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]:https://opensource.com/users/cameronmwyatt
[1]:https://opensource.com/users/itewk
[2]:https://agenda.summit.redhat.com/SessionDetail.aspx?id=154225
[3]:https://www.redhat.com/en/summit/2018
diff --git a/translated/talk/20180809 Two Years With Emacs as a CEO (and now CTO).md b/translated/talk/20180809 Two Years With Emacs as a CEO (and now CTO).md
new file mode 100644
index 0000000000..b25721a59b
--- /dev/null
+++ b/translated/talk/20180809 Two Years With Emacs as a CEO (and now CTO).md
@@ -0,0 +1,87 @@
[#]: collector: (lujun9972)
[#]: translator: (oneforalone)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Two Years With Emacs as a CEO (and now CTO))
[#]: via: (https://www.fugue.co/blog/2018-08-09-two-years-with-emacs-as-a-cto.html)
[#]: author: (Josh Stella 
https://www.fugue.co/blog/author/josh-stella) + +作为 CEO 使用 Emacs 的两年经验之谈(现任 CTO) +====== + +两年前,我写了一篇[博客][1],并取得了一些反响。这让我有点受宠若惊。那篇博客写的是我准备将 Emacs 作为我的主办公软件,当时我还是 CEO,现在已经转为 CTO 了。现在回想起来,我发现我之前不是做程序员就是做软件架构师,而且那时我也喜欢用 Emacs 写代码。重新考虑 Emacs 是一次很不错的尝试,但我不太清楚具体该怎么实现。在网上,那篇博客也是褒贬不一,但是还是有数万的阅读量,所以总的来说,我写的还是不错的。在 [Reddit][2] 和 [HackerNews][3] 上有些令人哭笑不得的回复,说我的手会变形,或者说我会因白色的背景而近视。在这里我可以很肯定的回答 —— 完全没有这回事,相反,我的手腕还因此变得更灵活了。还有一些人担心,说使用 Emacs 会耗费一个 CEO 的精力。把 Fugue 从在家得到的想法变成强大的产品,并有一大批忠实的顾客,我觉得 Emacs 可以让你从复杂的事务中解脱出来。我现在还在用白色的背景。 + +近段时间那篇博客又被翻出来了,并发到了 [HackerNews][4] 上。我收到了大量的跟帖者问我现在怎么样了,所以我写这篇博客来回应他们。在本文中,我还将重点讨论为什么 Emacs 和函数式编程有很高的相关性,以及我们是怎样使用 Emacs 来开发我们的产品 —— Fugue,一个使用函数式编程的自动化的云计算平台。由于我收到了很多反馈,比较有用的是一些细节的详细程度和有关背景色的注解,因此这篇博客比较长,而我确实也需要费点精力来解释我的想法,但这篇文章的主要内容还是反映了我担任 CEO 时处理的事务。而我想在之后更频繁地用 Emacs 写代码,所以需要提前做一些准备。一如既往,本文因人而异,后果自负。 + +### 意外之喜 + +我大部分时间都在不断得处理公司内外沟通。交流是解决问题的唯一方法,但也是反思及思考困难或是复杂问题的敌人。对我来说,作为创业公司的 CEO,最需要的是有时间专注工作而不别打扰。一旦开始投入时间来学习一些命令,Emacs 就很适合这种情况。其他的应用弹出提示,但是配置好了的 Emacs 就可以完全的忽略掉,无论是视觉上还是精神上。除非你想修改,否则的话他不会变,而且没有比空白屏幕和漂亮的字体更干净的界面了。在我不断被打扰的情况下,这种简洁让我能够专注于我在想什么,而不是电脑。好的程序能够默默地对电脑的进行访问。 + +一些人指出,原来的帖子既是对现代图形界面的批判,也是对 Emacs 的赞许。我既不赞同,也不否认。现代的接口,特别是那些以应用程序为中心的方法(相对于以内容为中心的方法),既不是以用户为中心的,也不是面向进程的。Emacs 避免了这种错误,这也是我如此喜欢它的部分原因,而它也带来了其他优点。Emacs 是进入计算机本身的入口,这打开了一扇新世界的大门。它的核心是发现和创造属于自己的道路,对我来说这就是创造的定义。现代电脑的悲哀之处在于,它很大程度上是由带有闪亮界面的黑盒组成的,这些黑盒提供的是瞬间的满足感,而不是真正的满足感。这让我们变成了消费者,而不是技术的创造者。我不在乎你是谁或者你的背景是什么;你可以理解你的电脑,你可以用它做东西。它很有趣,令人满意,而且不是你想的那么难学! + +我们常常低估了环境对我们心理的影响。Emacs 给人一种平静和自由的感觉,而不是紧迫感、烦恼或兴奋——后者是思想和沉思的敌人。我喜欢那些持久的,不碍事的东西,当我花时间去关注它们的时候,它们会给我带来真知灼见。Emacs 满足我的所有这些标准。我每天都使用 Emacs 来创建内容,我也很高兴我很少考虑它。Emacs 确实有一个学习曲线,但不会比学自行车更陡,而且一旦你完成了它,你会得到相应的回报,你就不必再去想它了,它赋予你一种其他工具所没有的自由感。这是一个优雅的工具,来自一个更加文明的时代。我很高兴我们步入了另一个计算机时代,而 Emacs 也将越来越受欢迎。 + +### 放弃用 Emacs 规划日程及处理待办事项 + +在原来的文章中,我花了一些时间介绍如何使用 Org 模式来规划日程。我放弃了使用 Org 模式来处理待办事项之类的,因为我每天都有很多会要开,很多电话要打, 而我也不能让其他人来适应我选的工具,我也没有时间将事务转换或是自动移动到 Org 上 。我们主要是用 Mac shop,使用谷歌日历等,原生的 Mac OS/iOS 工具可以很好的进行协作。我还有支比较旧的笔用来在会议中做笔记,因为我发现在会议中使用笔记本电脑或者说键盘很不礼貌,而且这也限制了我的聆听和思考。因此,我基本上放弃了用 Org 帮我规划日程或安排生活的想法。当然,Org 模式对其他的方面也很有用,它是我编写文档的首选,包括本文。换句话说,我与其作者背道而驰,但它在这方面做得很好。我也希望有一天也有人这么说我们在 Fugue 的工作。 + +### Emacs 在 Fugue 已经扩散 + +我在上篇博客就有说,你可能会喜欢 Emacs,也可能不会。因此,当 Fugue 的文档组将 Emacs 作为标准工具时,我是有点担心的,因为我觉得他们可能是受了我的影响。几年后,我确信他们做出了个正确的选择。那个组长是一个很聪明的程序员,但是那两个编写文档的人却没有怎么接触过技术。我想,如果这是一个经理强加错误工具的案例,我就会得到投诉并去解决,因为 Fugue 有反威权文化,大家不怕惹麻烦,包括我在内。之前的组长去年辞职了,但[文档组][5]现在有了一个灵活的集成的 CI/CD 工具链。并且文档组的人已经成为了 Emacs 的忠实用户。Emacs 有一条学习曲线,但即使很陡,也不会那么陡,翻过后对生产力和总体幸福感都有益。这也提醒我们,学文科的人在技术方面和程序员一样聪明,一样能干,也许不应该那么倾向于技术而产生派别歧视。 + +### 我的手腕得益于我的决定 + +上世纪80年代中期以来,我每天花12个小时左右在电脑前工作,这给我的手腕(以及后背)造成了很大的损伤,在此我强烈安利 Tag Capisco 的椅子。Emacs 和人机工程学键盘的结合让手腕的 [RSI][10](Repetitive Strain Injury/Repetitive Motion Syndrome) 问题消失了,我已经一年多没有想过这种问题了。在那之前,我的手腕每天都会疼,尤其是右手,如果你也遇到这种问题,你就知道这很让人分心和担心。有几个人问过键盘和鼠标的问题,如果你感兴趣的话,我现在用的是[这款键盘][6]。虽然在过去的几年里我主要使用的是真正符合人体工程学的键盘。我已经换成现在的键盘有几个星期了,而且我爱死它了。键帽的形状很神奇,因为你不用看就能知道自己在哪里,而拇指键设计的很合理,尤其是对于 Emacs, Control和Meta是你的固定伙伴。不要再用小指做高度重复的任务了! 
+ +我使用鼠标的次数比使用 Office 和 IDE 时要少得多,这对我有很大帮助,但我还是会用鼠标。我一直在使用外观相当过时,但功能和人体工程学明显优越的轨迹球,这是名副其实的。 + +撇开具体的工具不谈,最重要的一点是,事实证明,一个很棒的键盘,再加上避免使用鼠标,在减少身体的磨损方面很有效。Emacs 是这方面的核心,因为我不需要在菜单上滑动鼠标来完成任务,而且导航键就在我的手指下面。我肯定,手离开标准打字姿势会给我的肌腱造成很大的压力。这因人而异,我也不是医生。 + +### 还没完成大部分配置…… + +有人说我会在界面配置上花很多的时间。我想验证下他们说的对不对,所以我留意了下。我不仅让配置基本上不受影响,关注这个问题还让我意识到我使用的其他工具是多么的耗费我的精力和时间。Emacs 是我用过的维护成本最低的软件。Mac OS 和 Windows 一直要求我更新它,但在我我看来,这远没有 Adobe 套件和 Office 的更新的困恼那么大。我只是偶尔更新 Emacs,但也没什么变化,所以对我来说,它基本上是一个接近于零成本的操作,我高兴什么时候跟新就什么时候更新。 + +有一点然你们失望了,因为许多人想知道我为跟上 Emacs 社区的更新及其输出所做的事情,但是在过去的两年中,我只在配置中添加了一些内容。我认为也是成功的,因为 Emacs 只是一个工具,而不是我的爱好。也就是说,如果你想和我分享,我很乐意听到新的东西。 + +### 期望实现控制云端 + +我们在 Fugue 有很多 Emacs 的粉丝,所以我们有一段时间在用 [Ludwing 模式][7]。Ludwig 是我们用于自动化云基础设施和服务的声明式、功能性的 DSL。最近,Alex Schoof 利用飞机上和晚上的时间来构建 fugue 模式,它在 Fugue CLI 上充当 Emacs 控制台。要是你不熟悉 Fugue,我们会开发一个云自动化和管理工具,它利用函数式编程为用户提供与云的 api 交互的良好体验。它做的不止这些,但它也做了。fugue 模式很酷的原因有很多。它有一个不断报告云基础设备状态的缓冲区,而由于我经常修改这些设备,所以我可以快速看到编码的效果。Fugue 将云工作负载当成进程处理,fugue 模式非常类似于云工作负载的 top 模式。它还允许我执行一些操作,比如创建新的设备或删除过期的东西,而且也不需要太多输入。Fugue 模式只是个雏形,但它非常方便,而我现在也经常使用它。 + +![fugue-mode-edited.gif][8] + +### 模式及监听 + +我添加了一些模式和集成插件,但并不是真正用于 CEO 工作。我喜欢在周末时写写 Haskell 和 Scheme,所以我添加了 haskell 模式和 geiser。Emacs 对具有 REPL 的语言很友好,因为你可以在不同的窗口中运行不同的模式,包括 REPL 和 shell。Geiser 和 Scheme 很配,要是你还没有这样做过,那么用 SICP 工作也不失为一种乐趣,在这个有很多土鳖编程的例子的时代,这可能是一种启发。安装 MIT Scheme 和 geiser,你就会感觉有点像 lore 的符号环境。 + +这就引出了我在 15 年的文章中没有提到的另一个话题:屏幕管理。我喜欢使用用竖屏来写作,我在家里和我的主要办公室都有这个配置。对于编程或混合使用,我喜欢 fuguer 提供的新的超宽显示器。对于宽屏,我更喜欢将屏幕分成三列,中间是主编辑缓冲区,左边是水平分隔的 shell 和 fugue 模式缓冲区,右边是文档缓冲区或另一个或两个编辑缓冲区。这个很简单,首先按 'Ctl-x 3' 两次,然后使用 'Ctl-x =' 使窗口的宽度相等。这将提供三个相等的列,你也可以使用 'Ctl-x 2' 进行水平分割。以下是我的截图。 + +![Emacs Screen Shot][9] + +### 最后一篇 CEO/Emacs 文章…… + +首先,我现在是 Fugue 的 CTO,其次我也想要写一些其他方面的博客,而我现在刚好有时间。我还打算写些更深入的东西,比如说函数式编程、基础结构类型安全,以及我们即将推出一些的新功能,还有一些关于 Fugue 在云上可以做什么。 + +-------------------------------------------------------------------------------- + +via: https://www.fugue.co/blog/2018-08-09-two-years-with-emacs-as-a-cto.html + +作者:[Josh Stella][a] +选题:[lujun9972][b] +译者:[译者ID](https://github.com/oneforalone) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://www.fugue.co/blog/author/josh-stella +[b]: https://github.com/lujun9972 +[1]: https://blog.fugue.co/2015-11-11-guide-to-emacs.html +[2]: https://www.reddit.com/r/emacs/comments/7efpkt/a_ceos_guide_to_emacs/ +[3]: https://news.ycombinator.com/item?id=10642088 +[4]: https://news.ycombinator.com/item?id=15753150 +[5]: https://docs.fugue.co/ +[6]: https://shop.keyboard.io/ +[7]: https://github.com/fugue/ludwig-mode +[8]: https://www.fugue.co/hubfs/Imported_Blog_Media/fugue-mode-edited-1.gif +[9]: https://www.fugue.co/hs-fs/hubfs/Emacs%20Screen%20Shot.png?width=929&name=Emacs%20Screen%20Shot.png +[10]: https://baike.baidu.com/item/RSI/21509642 diff --git a/translated/talk/20181014 How Lisp Became God-s Own Programming Language.md b/translated/talk/20181014 How Lisp Became God-s Own Programming Language.md deleted file mode 100644 index eea75c68de..0000000000 --- a/translated/talk/20181014 How Lisp Became God-s Own Programming Language.md +++ /dev/null @@ -1,122 +0,0 @@ -Lisp 是怎么成为上帝的编程语言的 -====== - -当程序员们谈论各类编程语言的相对优势时,他们通常会采用相当平淡的词措,就好像这些语言是一条工具带上的各种工具似的——有适合写操作系统的,也有适合把其它程序黏在一起来完成特殊工作的。这种讨论方式非常合理;不同语言的能力不同。不声明特定用途就声称某门语言比其他语言更优秀只能导致侮辱性的无用争论。 - -但有一门语言似乎受到和用途无关的特殊尊敬:那就是 Lisp。即使是恨不得给每个说出形如“某某语言比其他所有语言都好”这类话的人都来一拳的键盘远征军们,也会承认Lisp处于另一个层次。 Lisp 超越了用于评判其他语言的实用主义标准,因为普通程序员并不使用 Lisp 编写实用的程序 —— 而且,多半他们永远也不会这么做。然而,人们对 Lisp 
的敬意是如此深厚,甚至于到了这门语言会时而被加上神话属性的程度。大家都喜欢的网络漫画合集 xkcd 就至少在两组漫画中如此描绘过 Lisp:[其中一组漫画][1]中,一个人物得到了某种 Lisp 启示,而这好像使他理解了宇宙的基本构架。在[另一组漫画][2]中,一个穿着长袍的老程序员给他的徒弟递了一沓圆括号,说这是“文明时代的优雅武器”,暗示着 Lisp 就像原力那样拥有各式各样的神秘力量。 - -另一个绝佳例子是 Bob Kanefsky 的滑稽剧插曲,《上帝就在人间》。这部剧叫做《永恒之火》,编写于 1990 年代中期;剧中描述了上帝必然是使用 Lisp 创造世界的种种原因。完整的歌词可以在 [GNU 幽默合集][3]中找到,如下是一段摘抄: - -> 因为上帝用祂的 Lisp 代码 - -> 让树叶充满绿意。 - -> 分形的花儿和递归的根: - -> 我见过的奇技淫巧(hack)之中没什么比这更可爱。 - -> 当我对着雪花深思时, - -> 从未见过两片相同的, - -> 我知道,上帝偏爱那一门 - -> 名字是四个字母的语言。 - -以下这句话我实在不好在人前说;不过,我还是觉得,这样一种“ Lisp 是奥术魔法”的文化模因实在是有史以来最奇异、最迷人的东西。 Lisp 是象牙塔的产物,是人工智能研究的工具;因此,它对于编程界的俗人而言总是陌生的,甚至是带有神秘色彩的。然而,当今的程序员们开始怂恿彼此,[“在你死掉之前至少试一试 Lisp ”][4],就像这是一种令人恍惚入迷的致幻剂似的。尽管 Lisp 是广泛使用的编程语言中第二古老的(只比 Fortran 年轻一岁),程序员们也仍旧在互相怂恿。想象一下,如果你的工作是为某种组织或者团队推广一门新的编程语言的话,忽悠大家让他们相信你的新语言拥有神力难道不是绝佳的策略吗?——但你如何能够做到这一点呢?或者,换句话说,一门编程语言究竟是如何变成人们口中“隐晦知识的载体”的呢? - -Lisp 究竟是怎么成为这样的? - -![Byte 杂志封面,1979年八月。][5] Byte 杂志封面,1979年八月。 - -### 理论 A :公理般的语言 - -Lisp 的创造者 John McCarthy 最初并没有想过把 Lisp 做成优雅、精炼的计算法则结晶。然而,在一两次运气使然的深谋远虑和一系列优化之后, Lisp 的确变成了那样的东西。 Paul Graham —— 我们一会儿之后才会聊到他 —— 曾经这么写,说, McCarthy 通过 Lisp “为编程作出的贡献就像是欧几里得对几何学所做的贡献一般”。人们可能会在 Lisp 中看出更加隐晦的含义——因为 McCarthy 创造 Lisp 时使用的要素实在是过于基础,基础到连弄明白他到底是创造了这门语言、还是发现了这门语言,都是一件难事。 - -最初, McCarthy 产生要造一门语言的想法,是在 1956 年的达特茅斯人工智能夏季研究项目(Darthmouth Summer Research Project on Artificial Intelligence)上。夏季研究项目是个持续数周的学术会议,直到现在也仍旧在举行;它是此类会议之中最早开始举办的会议之一。 McCarthy 当初还是个达特茅斯的数学助教,而“人工智能”这个词事实上就是他建议举办会议时发明的。在整个会议期间大概有十人参加。他们之中包括了 Allen Newell 和 Herbert Simon ,两名隶属于兰德公司和卡内基梅隆大学的学者。这两人不久之前设计了一门语言,叫做IPL。 - -当时,Newell 和 Simon 正试图制作一套能够在命题演算中生成证明的系统。两人意识到,用电脑的原生指令集编写这套系统会非常困难;于是他们决定创造一门语言——原话是“伪代码”,这样,他们就能更加轻松自然地表达这台“逻辑理论机器”的底层逻辑了。这门语言叫做IPL,即“信息处理语言” (Information Processing Language) ;比起我们现在认知中的编程语言,它更像是一种汇编语言的方言。 Newell 和 Simon 提到,当时人们开发的其它“伪代码”都抓着标准数学符号不放——也许他们指的是 Fortran;与此不同的是,他们的语言使用成组的符号方程来表示命题演算中的语句。通常,用 IPL 写出来的程序会调用一系列的汇编语言宏,以此在这些符号方程列表中对表达式进行变换和求值。 - -McCarthy 认为,一门实用的编程语言应该像 Fortran 那样使用代数表达式;因此,他并不怎么喜欢 IPL 。然而,他也认为,在给人工智能领域的一些问题建模时,符号列表会是非常好用的工具——而且在那些涉及演绎的问题上尤其有用。 McCarthy 的渴望最终被诉诸行动;他要创造一门代数的列表处理语言——这门语言会像 Fortran 一样使用代数表达式,但拥有和 IPL 一样的符号列表处理能力。 - -当然,今日的 Lisp 可不像 Fortran。在会议之后的几年中, McCarthy 关于“理想的列表处理语言”的见解似乎在逐渐演化。到 1957 年,他的想法发生了改变。他那时候正在用 Fortran 编写一个能下象棋的程序;越是长时间地使用 Fortran , McCarthy 就越确信其设计中存在不当之处,而最大的问题就是尴尬的“ IF ”声明。为此,他发明了一个替代品,即条件表达式“ true ”;这个表达式会在给定的测试通过时返回子表达式 A ,而在测试未通过时返回子表达式 B ,而且,它只会对返回的子表达式进行求值。在 1958 年夏天,当 McCarthy 设计一个能够求导的程序时,他意识到,他发明的“true”表达式让编写递归函数这件事变得更加简单自然了。也是这个求导问题让 McCarthy 创造了 maplist 函数;这个函数会将其它函数作为参数并将之作用于指定列表的所有元素。在给项数多得叫人抓狂的多项式求导时,它尤其有用。 - -然而,以上的所有这些,在 Fortran 中都是没有的;因此,在1958年的秋天,McCarthy 请来了一群学生来实现 Lisp。因为他那时已经成了一名麻省理工助教,所以,这些学生可都是麻省理工的学生。当 McCarthy 和学生们最终将他的主意变为能运行的代码时,这门语言得到了进一步的简化。这之中最大的改变涉及了 Lisp 的语法本身。最初,McCarthy 在设计语言时,曾经试图加入所谓的“M 表达式”;这是一层语法糖,能让 Lisp 的语法变得类似于 Fortran。虽然 M 表达式可以被翻译为 S 表达式 —— 基础的、“用圆括号括起来的列表”,也就是 Lisp 最著名的特征 —— 但 S 表达式事实上是一种给机器看的低阶表达方法。唯一的问题是,McCarthy 用方括号标记 M 表达式,但他的团队在麻省理工使用的 IBM 026 键盘打孔机的键盘上根本没有方括号。于是 Lisp 团队坚定不移地使用着 S 表达式,不仅用它们表示数据列表,也拿它们来表达函数的应用。McCarthy 和他的学生们还作了另外几样改进,包括将数学符号前置;他们也修改了内存模型,这样 Lisp 实质上就只有一种数据类型了。 - -到 1960 年,McCarthy 发表了他关于 Lisp 的著名论文,《用符号方程表示的递归函数及它们的机器计算》。那时候,Lisp 已经被极大地精简,而这让 McCarthy 意识到,他的作品其实是“一套优雅的数学系统”,而非普通的编程语言。他之后这么写道,对 Lisp 的许多简化使其“成了一种描述可计算函数的方式,而且它比图灵机或者一般情况下用于递归函数理论的递归定义更加简洁”。在他的论文中,他不仅使用 Lisp 作为编程语言,也将它当作一套用于研究递归函数行为方式的表达方法。 - -通过“从一小撮规则中逐步实现出 Lisp”的方式,McCarthy 将这门语言介绍给了他的读者。不久之后,Paul Graham 换用更加易读的写法,在短文[《Lisp 之根》][6](The Roots of Lisp)中再次进行了介绍。在 Graham 的介绍中,他只用了七种基本的运算符、两种函数写法,和几个稍微高级一点的函数(也都使用基本运算符进行定义)。毫无疑问,Lisp 的这种只需使用极少量的基本规则就能完整说明的特点加深了其神秘色彩。Graham 称 McCarthy 的论文为“使计算公理化”的一种尝试。我认为,在思考 Lisp 
的魅力从何而来时,这是一个极好的切入点。其它编程语言都有明显的人工构造痕迹,表现为“While”,“typedef”,“public static void”这样的关键词;而 Lisp 的设计却简直像是纯粹计算逻辑的鬼斧神工。Lisp 的这一性质,以及它和晦涩难懂的“递归函数理论”的密切关系,使它具备了获得如今声望的充分理由。 - -### 理论 B:属于未来的机器 - -Lisp 诞生二十年后,它成了著名的《黑客词典》中所说的,人工智能研究的“母语”。Lisp 在此之前传播迅速,多半是托了语法规律的福 —— 不管在怎么样的电脑上,实现 Lisp 都是一件相对简单直白的事。而学者们之后坚持使用它乃是因为 Lisp 在处理符号表达式这方面有巨大的优势;在那个时代,人工智能很大程度上就意味着符号,于是这一点就显得十分重要。在许多重要的人工智能项目中都能见到 Lisp 的身影。这些项目包括了 [SHRDLU 自然语言程序][8](the SHRDLU natural language program),[Macsyma 代数系统][9](the Macsyma algebra system),和 [ACL2 逻辑系统][10](the ACL2 logic system)。 - -然而,在 1970 年代中期,人工智能研究者们的电脑算力开始不够用了。PDP-10 就是一个典型。这个型号在人工智能学界曾经极受欢迎;但面对这些用 Lisp 写的 AI 程序,它的 18 位内存空间一天比一天显得吃紧。许多的 AI 程序在设计上可以与人互动。要让这些既极度要求硬件性能、又有互动功能的程序在分时系统上优秀发挥,是很有挑战性的。麻省理工的 Peter Deutsch 给出了解决方案:那就是针对 Lisp 程序来特别设计电脑。就像是我那[关于 Chaosnet 的上一篇文章][11]所说的那样,这些 Lisp 计算机(Lisp machines)会给每个用户都专门分配一个为 Lisp 特别优化的处理器。到后来,考虑到硬核 Lisp 程序员的需求,这些计算机甚至还配备上了完全由 Lisp 编写的开发环境。在当时那样一个小型机时代已至尾声而微型机的繁盛尚未完全到来的尴尬时期,Lisp 计算机就是编程精英们的“高性能个人电脑”。 - -有那么一会儿,Lisp 计算机被当成是未来趋势。好几家公司无中生有地出现,追着赶着要把这项技术商业化。其中最成功的一家叫做 Symbolics,由麻省理工 AI 实验室的前成员创立。上世纪八十年代,这家公司生产了所谓的 3600 系列计算机,它们当时在 AI 领域和需要高性能计算的产业中应用极广。3600 系列配备了大屏幕、位图显示、鼠标接口,以及[强大的图形与动画软件][12]。它们都是惊人的机器,能让惊人的程序运行起来。例如,之前在推特上跟我聊过的机器人研究者 Bob Culley,就能用一台 1985 年生产的 Symbolics 3650 写出带有图形演示的寻路算法。他向我解释说,在 1980 年代,位图显示和面向对象编程(能够通过 [Flavors 扩展][13]在 Lisp 计算机上使用)都刚刚出现。Symbolics 站在时代的最前沿。 - -![Bob Culley 的寻路程序。][14] Bob Culley 的寻路程序。 - -而以上这一切导致 Symbolics 的计算机奇贵无比。在 1983 年,一台 Symbolics 3600 能卖 111,000 美金。所以,绝大部分人只可能远远地赞叹 Lisp 计算机的威力,和操作员们用 Lisp 编写程序的奇妙技术 —— 但他们的确发出了赞叹。从 1979 年到 1980 年代末,Byte 杂志曾经多次提到过 Lisp 和 Lisp 计算机。在 1979 年八月发行的、关于 Lisp 的一期特别杂志中,杂志编辑激情洋溢地写道,麻省理工正在开发的计算机配备了“大坨大坨的内存”和“先进的操作系统”;他觉得,这些 Lisp 计算机的前途是如此光明,以至于它们的面世会让 1978 和 1977 年 —— 诞生了 Apple II, Commodore PET,和TRS-80 的两年 —— 显得黯淡无光。五年之后,在1985年,一名 Byte 杂志撰稿人描述了为“复杂精巧、性能强悍的 Symbolics 3670”编写 Lisp 程序的体验,并力劝读者学习 Lisp,称其为“绝大数人工智能工作者的语言选择”,和将来的通用编程语言。 - -我问过 Paul McJones [他在山景(Mountain View)的计算机历史博物馆做了许多 Lisp 的[保存工作][15]],人们是什么时候开始将 Lisp 当作高维生物的赠礼一样谈论的呢?他说,这门语言自有的性质毋庸置疑地促进了这种现象的产生;然而,他也说,Lisp 上世纪六七十年代在人工智能领域得到的广泛应用,很有可能也起到了作用。当 1980 年代到来、Lisp 计算机进入市场时,象牙塔外的某些人由此接触到了 Lisp 的能力,于是传说开始滋生。时至今日,很少有人还记得 Lisp 计算机和 Symbolics 公司;但 Lisp 得以在八十年代一直保持神秘,很大程度上要归功于它们。 - -### 理论 C:学习编程 - -1985 年,两位麻省理工的教授,Harold Abelson 和 Gerald Sussman,外加 Sussman 的妻子,出版了一本叫做《计算机程序的构造和解释》(Structure and Interpretation of Computer Programs)的教科书。这本书用 Scheme(一种 Lisp 方言)向读者们示范如何编程。它被用于教授麻省理工入门编程课程长达二十年之久。出于直觉,我认为 SICP(这是通常而言的标题缩写)倍增了 Lisp 的“神秘要素”。SICP 使用 Lisp 描绘了深邃得几乎可以称之为哲学的编程理念。这些理念非常普适,可以用任意一种编程语言展现;但 SICP 的作者们选择了 Lisp。结果,这本阴阳怪气、卓越不凡、吸引了好几代程序员(还成了一种[奇特的模因][16])的著作臭名远扬之后,Lisp 的声望也顺带被提升了。Lisp 已不仅仅是一如既往的“McCarthy 的优雅表达方式”;它现在还成了“向你传授编程的不传之秘的语言”。 - -SICP 究竟有多奇怪这一点值得好好说;因为我认为,时至今日,这本书的古怪之处和 Lisp 的古怪之处是相辅相成的。书的封面就透着一股古怪。那上面画着一位朝着桌子走去,准备要施法的巫师或者炼金术士。他的一只手里抓着一副测径仪 —— 或者圆规,另一只手上拿着个球,上书“eval”和“apply”。他对面的女人指着桌子;在背景中,希腊字母λ漂浮在半空,释放出光芒。 - -![SICP 封面上的画作][17] SICP 封面上的画作。 - -说真的,这上面画的究竟是怎么一回事?为什么桌子会长着动物的腿?为什么这个女人指着桌子?墨水瓶又是干什么用的?我们是不是该说,这位巫师已经破译了宇宙的隐藏奥秘,而所有这些奥秘就蕴含在 eval/apply 循环和 Lambda 微积分之中?看似就是如此。单单是这张图片,就一定对人们如今谈论 Lisp 的方式产生了难以计量的影响。 - -然而,这本书的内容通常并不比封面正常多少。SICP 跟你读过的所有计算机科学教科书都不同。在引言中,作者们表示,这本书不只教你怎么用 Lisp 编程 —— 它是关于“现象的三个焦点:人的心智,复数的计算机程序,和计算机”的作品。在之后,他们对此进行了解释,描述了他们对如下观点的坚信:编程不该被当作是一种计算机科学的训练,而应该是“程序性认识论”的一种新表达方式。程序是将那些偶然被送入计算机的思想组织起来的全新方法。这本书的第一章简明地介绍了 Lisp,但是之后的绝大部分都在讲述更加抽象的概念。其中包括了对不同编程范式的讨论,对于面向对象系统中“时间”和“一致性”的讨论;在书中的某一处,还有关于通信的基本限制可能会如何带来同步问题的讨论 —— 而这些基本限制在通信中就像是光速不变在相对中一样关键。都是些高深难懂的东西。 - 
-以上这些并不是说这是本糟糕的书;这本书其实棒极了。在我读过的所有作品中,这本书对于重要的编程理念的讨论是最为深刻的;那些理念我琢磨了很久,却一直无力用文字去表达。一本入门编程教科书能如此迅速地开始描述面向对象编程的根本缺陷,和函数式语言“将可变状态降到最少”的优点,实在是一件让人印象深刻的事。而这种描述之后变为了另一种震撼人心的讨论:某种(可能类似于今日的 [RxJS][18] 的)流范式能如何同时具备两者的优秀特性。SICP 用和当初 McCarthy 的 Lisp 论文相似的方式提纯出了高级程序设计的精华。你读完这本书之后,会立即想要将它推荐给你的程序员朋友们;如果他们找到这本书,看到了封面,但最终没有阅读的话,他们就只会记住长着动物腿的桌子上方那神秘的、根本的、给予魔法师特殊能力的、写着 eval/apply 的东西。话说回来,书上这两人的鞋子也让我印象颇深。 - -然而,SICP 最重要的影响恐怕是,它将 Lisp 由一门怪语言提升成了必要教学工具。在 SICP 面世之前,人们互相推荐 Lisp,以学习这门语言为提升编程技巧的途径。1979 年的 Byte 杂志 Lisp 特刊印证了这一事实。之前提到的那位编辑不仅就麻省理工的新计算机大书特书,还说,Lisp 这门语言值得一学,因为它“代表了分析问题的另一种视角”。但 SICP 并未只把 Lisp 作为其它语言的陪衬来使用;SICP 将其作为入门语言。这就暗含了一种论点,那就是,Lisp 是最能把握计算机编程基础的语言。可以认为,如今的程序员们彼此怂恿“在死掉之前至少试试 Lisp”的时候,他们很大程度上是因为 SICP 才这么说的。毕竟,编程语言 [Brainfuck][19] 想必同样也提供了“分析问题的另一种视角”;但人们学习 Lisp 而非学习 Brainfuck,那是因为他们知道,前者的那种视角在二十年中都被看作是极其有用的,有用到麻省理工在给他们的本科生教其它语言之前,必然会先教 Lisp。 - -### Lisp 的回归 - -在 SICP 出版的同一年,Bjarne Stroustrup 公布了 C++ 语言的首个版本,它将面向对象编程带到了大众面前。几年之后,Lisp 计算机市场崩盘,AI 寒冬开始了。在下一个十年的变革中, C++ 和后来的 Java 成了前途无量的语言,而 Lisp 被冷落,无人问津。 - -理所当然地,确定人们对 Lisp 重新燃起热情的具体时间并不可能;但这多半是 Paul Graham 发表他那几篇声称 Lisp 是首选入门语言的短文之后的事了。Paul Graham 是 Y-Combinator 的联合创始人和《黑客新闻》(Hacker News)的创始者,他这几篇短文有很大的影响力。例如,在短文[《胜于平庸》][20](Beating the Averages)中,他声称 Lisp 宏使 Lisp 比其它语言更强。他说,因为他在自己创办的公司 Viaweb 中使用 Lisp,他得以比竞争对手更快地推出新功能。至少,[一部分程序员][21]被说服了。然而,庞大的主流程序员群体并未换用 Lisp。 - -实际上出现的情况是,Lisp 并未流行,但越来越多 Lisp 式的特性被加入到广受欢迎的语言中。Python 有了列表理解。C# 有了 Linq。Ruby……嗯,[Ruby 是 Lisp 的一种][22]。就如 Graham 在2002年提到的那样,“在一系列常用语言中所体现出的‘默认语言’正越发朝着 Lisp 的方向演化”。尽管其它语言变得越来越像 Lisp,Lisp 本身仍然保留了其作为“很少人了解但是大家都该学的神秘语言”的特殊声望。在 1980 年,Lisp 的诞生二十周年纪念日上,McCarthy写道,Lisp 之所以能够存活这么久,是因为它具备“编程语言领域中的某种近似局部最优”。这句话并未充分地表明 Lisp 的真正影响力。Lisp 能够存活超过半个世纪之久,并非因为程序员们一年年地勉强承认它就是最好的编程工具;事实上,即使绝大多数程序员根本不用它,它还是存活了下来。多亏了它的起源和它的人工智能研究用途,说不定还要多亏 SICP 的遗产,Lisp 一直都那么让人着迷。在我们能够想象上帝用其它新的编程语言创造世界之前,Lisp 都不会走下神坛。 - --------------------------------------------------------------------------------- - -via: https://twobithistory.org/2018/10/14/lisp.html - -作者:[Two-Bit History][a] -选题:[lujun9972][b] -译者:[Northurland](https://github.com/Northurland) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://twobithistory.org -[b]: https://github.com/lujun9972 -[1]: https://xkcd.com/224/ -[2]: https://xkcd.com/297/ -[3]: https://www.gnu.org/fun/jokes/eternal-flame.en.html -[4]: https://www.reddit.com/r/ProgrammerHumor/comments/5c14o6/xkcd_lisp/d9szjnc/ -[5]: https://twobithistory.org/images/byte_lisp.jpg -[6]: http://languagelog.ldc.upenn.edu/myl/llog/jmc.pdf -[7]: https://en.wikipedia.org/wiki/Jargon_File -[8]: https://hci.stanford.edu/winograd/shrdlu/ -[9]: https://en.wikipedia.org/wiki/Macsyma -[10]: https://en.wikipedia.org/wiki/ACL2 -[11]: https://twobithistory.org/2018/09/30/chaosnet.html -[12]: https://youtu.be/gV5obrYaogU?t=201 -[13]: https://en.wikipedia.org/wiki/Flavors_(programming_language) -[14]: https://twobithistory.org/images/symbolics.jpg -[15]: http://www.softwarepreservation.org/projects/LISP/ -[16]: https://knowyourmeme.com/forums/meme-research/topics/47038-structure-and-interpretation-of-computer-programs-hugeass-image-dump-for-evidence -[17]: https://twobithistory.org/images/sicp.jpg -[18]: https://rxjs-dev.firebaseapp.com/ -[19]: https://en.wikipedia.org/wiki/Brainfuck -[20]: http://www.paulgraham.com/avg.html -[21]: https://web.archive.org/web/20061004035628/http://wiki.alu.org/Chris-Perkins -[22]: http://www.randomhacks.net/2005/12/03/why-ruby-is-an-acceptable-lisp/ diff --git a/translated/tech/20180403 17 Ways To 
Check Size Of Physical Memory (RAM) In Linux.md b/translated/tech/20180403 17 Ways To Check Size Of Physical Memory (RAM) In Linux.md deleted file mode 100644 index 6e873551b3..0000000000 --- a/translated/tech/20180403 17 Ways To Check Size Of Physical Memory (RAM) In Linux.md +++ /dev/null @@ -1,456 +0,0 @@ -在 Linux 中 17 种方法来查看物理内存(RAM) -======= - -大多数系统管理员在遇到性能问题时会检查 CPU 和内存利用率。 - -Linux 中有许多实用程序可以用于检查物理内存。 - -这些命令有助于我们检查系统中存在的物理 RAM,还允许用户检查各种方面的内存利用率。 - -我们大多数人只知道很少的命令,在本文中我们试图包含所有可能的命令。 - -你可能会想,为什么我想知道所有这些命令,而不是知道一些特定的和例行的命令。 - -不要认为不好或采取负面的方式,因为每个人都有不同的需求和看法,所以,对于那些在寻找其它目的的人,这对于他们非常有帮助。 - -### 什么是 RAM - -计算机内存是能够临时或永久存储信息的物理设备。RAM 代表随机存取存储器,它是一种易失性存储器,用于存储操作系统,软件和硬件使用的信息。 - -有两种类型的内存可供选择: - * 主存 - * 辅助内存 - -主存是计算机的主存储器。CPU 可以直接读取或写入此内存。它固定在电脑的主板上。 - - * **`RAM:`** 随机存取存储器是临时存储。关闭计算机后,此信息将消失。 - * **`ROM:`** 只读存储器是永久存储,即使系统关闭也能保存数据。 - -### 方法-1 : 使用 free 命令 - -free 显示系统中空闲和已用的物理内存和交换内存的总量,以及内核使用的缓冲区和缓存。它通过解析 /proc/meminfo 来收集信息。 - -**建议阅读:** [free – 在 Linux 系统中检查内存使用情况统计(空闲和已用)的标准命令][1] -``` -$ free -m - total used free shared buff/cache available -Mem: 1993 1681 82 81 228 153 -Swap: 12689 1213 11475 - -$ free -g - total used free shared buff/cache available -Mem: 1 1 0 0 0 0 -Swap: 12 1 11 - -``` - -### 方法-2 : 使用 /proc/meminfo 文件 - -/proc/meminfo 是一个虚拟文本文件,它包含有关系统 RAM 使用情况的大量有价值的信息。 - -它报告系统上的空闲和已用内存(物理和交换)的数量。 -``` -$ grep MemTotal /proc/meminfo -MemTotal: 2041396 kB - -$ grep MemTotal /proc/meminfo | awk '{print $2 / 1024}' -1993.55 - -$ grep MemTotal /proc/meminfo | awk '{print $2 / 1024 / 1024}' -1.94683 - -``` - -### 方法-3 : 使用 top 命令 - -​Top 命令是 Linux 中监视实时系统进程的基本命令之一。它显示系统信息和运行的进程信息,如正常运行时间,平均负载,正在运行的任务,登录的用户数,CPU 数量和 CPU 利用率,以及内存和交换信息。运行 top 命令,然后按下 `E` 来使内存利用率以 MB 为单位。 - -**建议阅读:** [TOP 命令示例监视服务器性能][2] -``` -$ top - -top - 14:38:36 up 1:59, 1 user, load average: 1.83, 1.60, 1.52 -Tasks: 223 total, 2 running, 221 sleeping, 0 stopped, 0 zombie -%Cpu(s): 48.6 us, 11.2 sy, 0.0 ni, 39.3 id, 0.3 wa, 0.0 hi, 0.5 si, 0.0 st -MiB Mem : 1993.551 total, 94.184 free, 1647.367 used, 252.000 buff/cache -MiB Swap: 12689.58+total, 11196.83+free, 1492.750 used. 
306.465 avail Mem - - PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND - 9908 daygeek 20 0 2971440 649324 39700 S 55.8 31.8 11:45.74 Web Content -21942 daygeek 20 0 2013760 308700 69272 S 35.0 15.1 4:13.75 Web Content - 4782 daygeek 20 0 3687116 227336 39156 R 14.5 11.1 16:47.45 gnome-shell - -``` - -### 方法-4 : 使用 vmstat 命令 - -vmstat 是一个标准且漂亮的工具,它报告 Linux 系统的虚拟内存统计信息。vmstat 报告有关进程,内存,分页,块 IO,陷阱和 CPU 活动的信息。它有助于 Linux 管理员在故障检修时识别系统瓶颈。 - -**建议阅读:** [vmstat – 一个报告虚拟内存统计信息的标准且漂亮的工具][3] -``` -$ vmstat -s | grep "total memory" - 2041396 K total memory - -$ vmstat -s -S M | egrep -ie 'total memory' - 1993 M total memory - -$ vmstat -s | awk '{print $1 / 1024 / 1024}' | head -1 -1.94683 - -``` - -### 方法-5 : 使用 nmon 命令 - -nmon 是另一个很棒的工具,用于监视各种系统资源,如 CPU,内存,网络,磁盘,文件系统,NFS,top 进程,Power 微分区和 Linux 终端上的资源(Linux 版本和处理器)。 - -只需按下 `m` 键,即可查看内存利用率统计数据(缓存,活动,非活动,缓冲,空闲,以 MB 和百分比为单位)。 - -**建议阅读:** [nmon – Linux 中一个监视系统资源的漂亮的工具][4] -``` -┌nmon─14g──────[H for help]───Hostname=2daygeek──Refresh= 2secs ───07:24.44─────────────────┐ -│ Memory Stats ─────────────────────────────────────────────────────────────────────────────│ -│ RAM High Low Swap Page Size=4 KB │ -│ Total MB 32079.5 -0.0 -0.0 20479.0 │ -│ Free MB 11205.0 -0.0 -0.0 20479.0 │ -│ Free Percent 34.9% 100.0% 100.0% 100.0% │ -│ MB MB MB │ -│ Cached= 19763.4 Active= 9617.7 │ -│ Buffers= 172.5 Swapcached= 0.0 Inactive = 10339.6 │ -│ Dirty = 0.0 Writeback = 0.0 Mapped = 11.0 │ -│ Slab = 636.6 Commit_AS = 118.2 PageTables= 3.5 │ -│───────────────────────────────────────────────────────────────────────────────────────────│ -│ │ -│ │ -│ │ -│ │ -│ │ -│ │ -└───────────────────────────────────────────────────────────────────────────────────────────┘ - -``` - -### 方法-6 : 使用 dmidecode 命令 - -Dmidecode 是一个读取计算机 DMI表内容的工具,它以人类可读的格式显示系统硬件信息。(DMI 代表桌面管理接口,有人说 SMBIOS 代表系统管理 BIOS) - -此表包含系统硬件组件的描述,以及其它有用信息,如序列号,制造商信息,发布日期和 BIOS 修改等。 - -**建议阅读:** -[Dmidecode – 获取 Linux 系统硬件信息的简便方法][5] -``` -# dmidecode -t memory | grep Size: - Size: 8192 MB - Size: No Module Installed - Size: No Module Installed - Size: 8192 MB - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: 8192 MB - Size: No Module Installed - Size: No Module Installed - Size: 8192 MB - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - Size: No Module Installed - -``` - -只打印已安装的 RAM 模块。 -``` - -# dmidecode -t memory | grep Size: | grep -v "No Module Installed" - Size: 8192 MB - Size: 8192 MB - Size: 8192 MB - Size: 8192 MB - -``` - -汇总所有已安装的 RAM 模块。 -``` -# dmidecode -t memory | grep Size: | grep -v "No Module Installed" | awk '{sum+=$2}END{print sum}' -32768 - -``` - -### 方法-7 : 使用 hwinfo 命令 - -hwinfo 代表硬件信息,它是另一个很棒的实用工具,用于探测系统中存在的硬件,并以人类可读的格式显示有关各种硬件组件的详细信息。 - -它报告有关 CPU,RAM,键盘,鼠标,图形卡,声音,存储,网络接口,磁盘,分区,BIOS 和网桥等的信息。 - -**建议阅读:** [hwinfo(硬件信息)– 一个在 Linux 系统上检测系统硬件信息的好工具][6] -``` -$ hwinfo --memory -01: None 00.0: 10102 Main Memory - [Created at memory.74] - Unique ID: rdCR.CxwsZFjVASF - Hardware Class: memory - Model: "Main Memory" - Memory Range: 0x00000000-0x7a4abfff (rw) - Memory Size: 1 GB + 896 MB - Config Status: cfg=new, avail=yes, need=no, active=unknown - -``` - -### 方法-8 : 使用 lshw 命令 - -lshw(代表 Hardware Lister)是一个小巧的工具,可以生成机器上各种硬件组件的详细报告,如内存配置,固件版本,主板配置,CPU 
版本和速度,缓存配置,USB,网卡,显卡,多媒体,打印机,总线速度等。 - -它通过读取 /proc 目录和 DMI 表中的各种文件来生成硬件信息。 - -**建议阅读:** [LSHW (Hardware Lister) – 一个在 Linux 上获取硬件信息的好工具][7] -``` -$ sudo lshw -short -class memory -[sudo] password for daygeek: -H/W path Device Class Description -================================================== -/0/0 memory 128KiB BIOS -/0/1 memory 1993MiB System memory - -``` - -### 方法-9 : 使用 inxi 命令 - -inxi 是一个很棒的工具,它可以检查 Linux 上的硬件信息,并提供了大量的选项来获取 Linux 系统上的所有硬件信息,这些特性是我在 Linux 上的其它工具中从未发现的。它是从 locsmif 编写的古老的但至今看来都异常灵活的 infobash 演化而来的。 - -inxi 是一个脚本,它可以快速显示系统硬件,CPU,驱动程序,Xorg,桌面,内核,GCC 版本,进程,RAM 使用情况以及各种其它有用的信息,还可以用于论坛技术支持和调试工具。 - -**建议阅读:** [inxi – 一个检查 Linux 上硬件信息的好工具][8] -``` -$ inxi -F | grep "Memory" -Info: Processes: 234 Uptime: 3:10 Memory: 1497.3/1993.6MB Client: Shell (bash) inxi: 2.3.37 - -``` - -### 方法-10 : 使用 screenfetch 命令 - -screenFetch 是一个 bash 脚本。它将自动检测你的发行版,并在右侧显示该发行版标识的 ASCII 艺术版本和一些有价值的信息。 - -**建议阅读:** [ScreenFetch – 以 ASCII 艺术标志在终端显示 Linux 系统信息][9] -``` -$ screenfetch - ./+o+- [email protected] - yyyyy- -yyyyyy+ OS: Ubuntu 17.10 artful - ://+//////-yyyyyyo Kernel: x86_64 Linux 4.13.0-37-generic - .++ .:/++++++/-.+sss/` Uptime: 44m - .:++o: /++++++++/:--:/- Packages: 1831 - o:+o+:++.`..```.-/oo+++++/ Shell: bash 4.4.12 - .:+o:+o/. `+sssoo+/ Resolution: 1920x955 - .++/+:+oo+o:` /sssooo. DE: GNOME - /+++//+:`oo+o /::--:. WM: GNOME Shell - \+/+o+++`o++o ++////. WM Theme: Adwaita - .++.o+++oo+:` /dddhhh. GTK Theme: Azure [GTK2/3] - .+.o+oo:. `oddhhhh+ Icon Theme: Papirus-Dark - \+.++o+o``-````.:ohdhhhhh+ Font: Ubuntu 11 - `:o+++ `ohhhhhhhhyo++os: CPU: Intel Core i7-6700HQ @ 2x 2.592GHz - .o:`.syhhhhhhh/.oo++o` GPU: llvmpipe (LLVM 5.0, 256 bits) - /osyyyyyyo++ooo+++/ RAM: 1521MiB / 1993MiB - ````` +oo+++o\: - `oo++. - -``` - -### 方法-11 : 使用 neofetch 命令 - -Neofetch 是一个跨平台且易于使用的命令行(CLI)脚本,它收集你的 Linux 系统信息,并将其作为一张图片显示在终端上,也可以是你的发行版徽标,或者是你选择的任何 ascii 艺术。 - -**建议阅读:** [Neofetch – 以 ASCII 分发标志来显示 Linux 系统信息][10] -``` -$ neofetch - .-/+oossssoo+/-. [email protected] - `:+ssssssssssssssssss+:` -------------- - -+ssssssssssssssssssyyssss+- OS: Ubuntu 17.10 x86_64 - .ossssssssssssssssssdMMMNysssso. Host: VirtualBox 1.2 - /ssssssssssshdmmNNmmyNMMMMhssssss/ Kernel: 4.13.0-37-generic - +ssssssssshmydMMMMMMMNddddyssssssss+ Uptime: 47 mins - /sssssssshNMMMyhhyyyyhmNMMMNhssssssss/ Packages: 1832 -.ssssssssdMMMNhsssssssssshNMMMdssssssss. Shell: bash 4.4.12 -+sssshhhyNMMNyssssssssssssyNMMMysssssss+ Resolution: 1920x955 -ossyNMMMNyMMhsssssssssssssshmmmhssssssso DE: ubuntu:GNOME -ossyNMMMNyMMhsssssssssssssshmmmhssssssso WM: GNOME Shell -+sssshhhyNMMNyssssssssssssyNMMMysssssss+ WM Theme: Adwaita -.ssssssssdMMMNhsssssssssshNMMMdssssssss. Theme: Azure [GTK3] - /sssssssshNMMMyhhyyyyhdNMMMNhssssssss/ Icons: Papirus-Dark [GTK3] - +sssssssssdmydMMMMMMMMddddyssssssss+ Terminal: gnome-terminal - /ssssssssssshdmNNNNmyNMMMMhssssss/ CPU: Intel i7-6700HQ (2) @ 2.591GHz - .ossssssssssssssssssdMMMNysssso. GPU: VirtualBox Graphics Adapter - -+sssssssssssssssssyyyssss+- Memory: 1620MiB / 1993MiB - `:+ssssssssssssssssss+:` - .-/+oossssoo+/-. 
- -``` - -### 方法-12 : 使用 dmesg 命令 - -dmesg(代表显示消息或驱动消息)是大多数类 unix 操作系统上的命令,用于打印内核的消息缓冲区。 -``` -$ dmesg | grep "Memory" -[ 0.000000] Memory: 1985916K/2096696K available (12300K kernel code, 2482K rwdata, 4000K rodata, 2372K init, 2368K bss, 110780K reserved, 0K cma-reserved) -[ 0.012044] x86/mm: Memory block size: 128MB - -``` - -### 方法-13 : 使用 atop 命令 - -Atop 是一个用于 Linux 的 ASCII 全屏系统性能监视工具,它能报告所有服务器进程的活动(即使进程在间隔期间已经完成)。 - -它记录系统和进程活动以进行长期分析(默认情况下,日志文件保存 28 天),通过使用颜色等来突出显示过载的系统资源。它结合可选的内核模块 netatop 显示每个进程或线程的网络活动。 - -**建议阅读:** [Atop – 实时监控系统性能,资源,进程和检查资源利用历史][11] -``` -$ atop -m - -ATOP - ubuntu 2018/03/31 19:34:08 ------------- 10s elapsed -PRC | sys 0.47s | user 2.75s | | | #proc 219 | #trun 1 | #tslpi 802 | #tslpu 0 | #zombie 0 | clones 7 | | | #exit 4 | -CPU | sys 7% | user 22% | irq 0% | | | idle 170% | wait 0% | | steal 0% | guest 0% | | curf 2.59GHz | curscal ?% | -cpu | sys 3% | user 11% | irq 0% | | | idle 85% | cpu001 w 0% | | steal 0% | guest 0% | | curf 2.59GHz | curscal ?% | -cpu | sys 4% | user 11% | irq 0% | | | idle 85% | cpu000 w 0% | | steal 0% | guest 0% | | curf 2.59GHz | curscal ?% | -CPL | avg1 1.98 | | avg5 3.56 | avg15 3.20 | | | csw 14894 | | intr 6610 | | | numcpu 2 | | -MEM | tot 1.9G | free 101.7M | cache 244.2M | dirty 0.2M | buff 6.9M | slab 92.9M | slrec 35.6M | shmem 97.8M | shrss 21.0M | shswp 3.2M | vmbal 0.0M | hptot 0.0M | hpuse 0.0M | -SWP | tot 12.4G | free 11.6G | | | | | | | | | vmcom 7.9G | | vmlim 13.4G | -PAG | scan 0 | steal 0 | | stall 0 | | | | | | | swin 3 | | swout 0 | -DSK | sda | busy 0% | | read 114 | write 37 | KiB/r 21 | KiB/w 6 | | MBr/s 0.2 | MBw/s 0.0 | avq 6.50 | | avio 0.26 ms | -NET | transport | tcpi 11 | tcpo 17 | udpi 4 | udpo 8 | tcpao 3 | tcppo 0 | | tcprs 3 | tcpie 0 | tcpor 0 | udpnp 0 | udpie 0 | -NET | network | ipi 20 | | ipo 33 | ipfrw 0 | deliv 20 | | | | | icmpi 5 | | icmpo 0 | -NET | enp0s3 0% | pcki 11 | pcko 28 | sp 1000 Mbps | si 1 Kbps | so 1 Kbps | | coll 0 | mlti 0 | erri 0 | erro 0 | drpi 0 | drpo 0 | -NET | lo ---- | pcki 9 | pcko 9 | sp 0 Mbps | si 0 Kbps | so 0 Kbps | | coll 0 | mlti 0 | erri 0 | erro 0 | drpi 0 | drpo 0 | - - PID TID MINFLT MAJFLT VSTEXT VSLIBS VDATA VSTACK VSIZE RSIZE PSIZE VGROW RGROW SWAPSZ RUID EUID MEM CMD 1/1 - 2536 - 941 0 188K 127.3M 551.2M 144K 2.3G 281.2M 0K 0K 344K 6556K daygeek daygeek 14% Web Content - 2464 - 75 0 188K 187.7M 680.6M 132K 2.3G 226.6M 0K 0K 212K 42088K daygeek daygeek 11% firefox - 2039 - 4199 6 16K 163.6M 423.0M 132K 3.5G 220.2M 0K 0K 2936K 109.6M daygeek daygeek 11% gnome-shell - 10822 - 1 0 4K 16680K 377.0M 132K 3.4G 193.4M 0K 0K 0K 0K root root 10% java - -``` - -### 方法-14 : 使用 htop 命令 - -htop 是由 Hisham 用 ncurses 库开发的用于 Linux 的交互式进程查看器。与 top 命令相比,htop 有许多特性和选项。 - -**建议阅读:** [使用 Htop 命令监视系统资源][12] -``` -$ htop - - 1 [||||||||||||| 13.0%] Tasks: 152, 587 thr; 1 running - 2 [||||||||||||||||||||||||| 25.0%] Load average: 0.91 2.03 2.66 - Mem[||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||1.66G/1.95G] Uptime: 01:14:53 - Swp[|||||| 782M/12.4G] - - PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command - 2039 daygeek 20 0 3541M 214M 46728 S 36.6 10.8 22:36.77 /usr/bin/gnome-shell - 2045 daygeek 20 0 3541M 214M 46728 S 10.3 10.8 3:02.92 /usr/bin/gnome-shell - 2046 daygeek 20 0 3541M 214M 46728 S 8.3 10.8 3:04.96 /usr/bin/gnome-shell - 6080 daygeek 20 0 807M 37228 24352 S 2.1 1.8 0:11.99 /usr/lib/gnome-terminal/gnome-terminal-server - 2880 daygeek 20 0 2205M 164M 17048 S 2.1 8.3 7:16.50 /usr/lib/firefox/firefox -contentproc 
-childID 6 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51:0|57:128|58:10000|63:0|65:400|66 - 6125 daygeek 20 0 1916M 159M 92352 S 2.1 8.0 2:09.14 /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51:0|57:128|58:10000|63:0|65:400|66 - 2536 daygeek 20 0 2335M 243M 26792 S 2.1 12.2 6:25.77 /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51:0|57:128|58:10000|63:0|65:400|66 - 2653 daygeek 20 0 2237M 185M 20788 S 1.4 9.3 3:01.76 /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51:0|57:128|58:10000|63:0|65:400|66 - -``` - -### 方法-15 : 使用 corefreq 实用程序 - -CoreFreq 是为 Intel 64 位处理器设计的 CPU 监控软件,支持的架构有 Atom,Core2,Nehalem,SandyBridge 和 superior,AMD 家族。(to 校正:这里 OF 最后什么意思) - -CoreFreq 提供了一个框架来以高精确度检索 CPU 数据。 - -**建议阅读:** [CoreFreq – 一个用于 Linux 系统的强大的 CPU 监控工具][13] -``` -$ ./corefreq-cli -k -Linux: -|- Release [4.13.0-37-generic] -|- Version [#42-Ubuntu SMP Wed Mar 7 14:13:23 UTC 2018] -|- Machine [x86_64] -Memory: -|- Total RAM 2041396 KB -|- Shared RAM 99620 KB -|- Free RAM 108428 KB -|- Buffer RAM 8108 KB -|- Total High 0 KB -|- Free High 0 KB - -$ ./corefreq-cli -k | grep "Total RAM" | awk '{print $4 / 1024 }' -1993.55 - -$ ./corefreq-cli -k | grep "Total RAM" | awk '{print $4 / 1024 / 1024}' -1.94683 - -``` - -### 方法-16 : 使用 glances 命令 - -Glances 是用 Python 编写的跨平台基于 curses(LCTT 译注:curses 是一个 Linux/Unix 下的图形函数库)的系统监控工具。我们可以说一物俱全,就像在最小的空间含有最大的信息。它使用 psutil 库从系统中获取信息。 - -Glances 可以监视 CPU,内存,负载,进程列表,网络接口,磁盘 I/O,Raid,传感器,文件系统(和文件夹),Docker,监视器,警报,系统信息,正常运行时间,快速预览(CPU,内存,负载)等。 - -**建议阅读:** [Glances (一物俱全)– 一个 Linux 的高级的实时系统性能监控工具][14] -``` -$ glances - -ubuntu (Ubuntu 17.10 64bit / Linux 4.13.0-37-generic) - IP 192.168.1.6/24 Uptime: 1:08:40 - -CPU [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 90.6%] CPU - 90.6% nice: 0.0% ctx_sw: 4K MEM \ 78.4% active: 942M SWAP - 5.9% LOAD 2-core -MEM [||||||||||||||||||||||||||||||||||||||||||||||||||||||||| 78.0%] user: 55.1% irq: 0.0% inter: 1797 total: 1.95G inactive: 562M total: 12.4G 1 min: 4.35 -SWAP [|||| 5.9%] system: 32.4% iowait: 1.8% sw_int: 897 used: 1.53G buffers: 14.8M used: 749M 5 min: 4.38 - idle: 7.6% steal: 0.0% free: 431M cached: 273M free: 11.7G 15 min: 3.38 - -NETWORK Rx/s Tx/s TASKS 211 (735 thr), 4 run, 207 slp, 0 oth sorted automatically by memory_percent, flat view -docker0 0b 232b -enp0s3 12Kb 4Kb Systemd 7 Services loaded: 197 active: 196 failed: 1 -lo 616b 616b -_h478e48e 0b 232b CPU% MEM% VIRT RES PID USER NI S TIME+ R/s W/s Command - 63.8 18.9 2.33G 377M 2536 daygeek 0 R 5:57.78 0 0 /usr/lib/firefox/firefox -contentproc -childID 1 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 -DefaultGateway 83ms 78.5 10.9 3.46G 217M 2039 daygeek 0 S 21:07.46 0 0 /usr/bin/gnome-shell - 8.5 10.1 2.32G 201M 2464 daygeek 0 S 8:45.69 0 0 /usr/lib/firefox/firefox -new-window -DISK I/O R/s W/s 1.1 8.5 2.19G 170M 2653 daygeek 0 S 2:56.29 0 0 /usr/lib/firefox/firefox -contentproc -childID 4 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 -dm-0 0 0 1.7 7.2 2.15G 143M 2880 daygeek 0 S 7:10.46 0 0 /usr/lib/firefox/firefox -contentproc -childID 6 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 -sda1 9.46M 12K 0.0 4.9 1.78G 97.2M 6125 daygeek 0 S 1:36.57 0 0 /usr/lib/firefox/firefox -contentproc -childID 7 -isForBrowser -intPrefs 6:50|7:-1|19:0|34:1000|42:20|43:5|44:10|51 - -``` - 
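顺带一提,上面这些工具展示的很多数字,归根到底都来自方法-2 中提到的 /proc/meminfo,你也可以在自己的脚本里直接解析它。下面是一个示意性的 Python 小例子(键名就是 /proc/meminfo 中的字段名,数值单位为 kB):

```
#!/usr/bin/env python3
# 解析 /proc/meminfo,打印总内存和空闲内存(kB 换算为 MiB)
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key.strip()] = int(value.split()[0])  # 第一个字段是数值
    return info

m = meminfo()
print("MemTotal: %.1f MiB" % (m["MemTotal"] / 1024))
print("MemFree : %.1f MiB" % (m["MemFree"] / 1024))
```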
-### 方法-17 : 使用 gnome-system-monitor - -系统监视器是一个管理正在运行的进程和监视系统资源的工具。它向你显示正在运行的程序以及耗费的处理器时间,内存和磁盘空间。 -![][16] - - - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/easy-ways-to-check-size-of-physical-memory-ram-in-linux/ - -作者:[Ramya Nuvvula][a] -译者:[MjSeven](https://github.com/MjSeven) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.2daygeek.com/author/ramya/ -[1]:https://www.2daygeek.com/free-command-to-check-memory-usage-statistics-in-linux/ -[2]:https://www.2daygeek.com/top-command-examples-to-monitor-server-performance/ -[3]:https://www.2daygeek.com/linux-vmstat-command-examples-tool-report-virtual-memory-statistics/ -[4]:https://www.2daygeek.com/nmon-system-performance-monitor-system-resources-on-linux/ -[5]:https://www.2daygeek.com/dmidecode-get-print-display-check-linux-system-hardware-information/ -[6]:https://www.2daygeek.com/hwinfo-check-display-detect-system-hardware-information-linux/ -[7]:https://www.2daygeek.com/lshw-find-check-system-hardware-information-details-linux/ -[8]:https://www.2daygeek.com/inxi-system-hardware-information-on-linux/ -[9]:https://www.2daygeek.com/screenfetch-display-linux-systems-information-ascii-distribution-logo-terminal/ -[10]:https://www.2daygeek.com/neofetch-display-linux-systems-information-ascii-distribution-logo-terminal/ -[11]:https://www.2daygeek.com/atop-system-process-performance-monitoring-tool/ -[12]:https://www.2daygeek.com/htop-command-examples-to-monitor-system-resources/ -[13]:https://www.2daygeek.com/corefreq-linux-cpu-monitoring-tool/ -[14]:https://www.2daygeek.com/install-glances-advanced-real-time-linux-system-performance-monitoring-tool-on-centos-fedora-ubuntu-debian-opensuse-arch-linux/ -[15]:data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7 -[16]:https://www.2daygeek.com/wp-content/uploads/2018/03/check-memory-information-using-gnome-system-monitor.png diff --git a/translated/tech/20180614 An introduction to the Tornado Python web app framework.md b/translated/tech/20180614 An introduction to the Tornado Python web app framework.md new file mode 100644 index 0000000000..3816f5138b --- /dev/null +++ b/translated/tech/20180614 An introduction to the Tornado Python web app framework.md @@ -0,0 +1,580 @@ +[#]: collector: (lujun9972) +[#]: translator: (MjSeven) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: subject: (An introduction to the Tornado Python web app framework) +[#]: via: (https://opensource.com/article/18/6/tornado-framework) +[#]: author: (Nicholas Hunt-Walker https://opensource.com/users/nhuntwalker) +[#]: url: ( ) + +Python Web 框架 Tornado 简介 +====== + +在比较 Python 框架的系列文章的第三部分中,我们来了解 Tornado,它是为处理异步进程而构建的。 + +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/tornado.png?itok=kAa3eXIU) + +在这个由四部分组成的系列文章的前两篇中,我们介绍了 [Pyramid][1] 和 [Flask][2] Web 框架。我们已经构建了两次相同的应用程序,看到了一个完全的 DIY 框架和包含更多电池的框架之间的异同。 + +现在让我们来看看另一个稍微不同的选项:[Tornado 框架][3]。Tornado 在很大程度上与 Flask 一样简单,但有一个主要区别:Tornado 是专门为处理异步进程而构建的。在我们本系列所构建的应用程序中,这种特殊的酱料(译者注:这里意思是 Tornado 的异步功能)并不是非常有用,但我们将看到在哪里可以使用它,以及它在更一般的情况下是如何工作的。 + +让我们继续前两篇文章中设置的流程,首先从处理设置和配置。 + +### Tornado 启动和配置 + +如果你一直关注这个系列,那么第一步应该对你来说习以为常。 +``` +$ mkdir tornado_todo +$ cd tornado_todo +$ pipenv install --python 3.6 +$ pipenv shell +(tornado-someHash) $ pipenv install tornado +``` + +创建一个 `setup.py` 文件来安装我们的应用程序相关的东西: +``` +(tornado-someHash) $ touch setup.py +# 
setup.py +from setuptools import setup, find_packages + +requires = [ +    'tornado', +    'tornado-sqlalchemy', +    'psycopg2', +] + +setup( +    name='tornado_todo', +    version='0.0', +    description='A To-Do List built with Tornado', +    author='', +    author_email='', +    keywords='web tornado', +    packages=find_packages(), +    install_requires=requires, +    entry_points={ +        'console_scripts': [ +            'serve_app = todo:main', +        ], +    }, +) +``` + +因为 Tornado 不需要任何外部配置,所以我们可以直接编写 Python 代码来让程序运行。让我们创建 `todo` 目录,并用需要的前几个文件填充它。 +``` +todo/ +    __init__.py +    models.py +    views.py +``` + +就像 Flask 和 Pyramid 一样,Tornado 也有一些基本配置,将放在 `__init__.py` 中。从 `tornado.web` 中,我们将导入 `Application` 对象,它将处理路由和视图的连接,包括数据库(当我们谈到那里时再说)以及运行 Tornado 应用程序所需的其它额外设置。 + +``` +# __init__.py +from tornado.httpserver import HTTPServer +from tornado.options import define, options +from tornado.web import Application + +define('port', default=8888, help='port to listen on') + +def main(): +    """Construct and serve the tornado application.""" +    app = Application() +    http_server = HTTPServer(app) +    http_server.listen(options.port) +``` + +当我们使用 `define` 函数时,我们最终会在 `options` 对象上创建属性。第一个参数位置的任何内容都将是属性的名称,分配给 `default` 关键字参数的内容将是该属性的值。 + +例如,如果我们将属性命名为 `potato` 而不是 `port`,我们可以通过 `options.potato` 访问它的值。 + +在 `HTTPServer` 上调用 `listen` 并不会启动服务器。我们必须再做一步,找一个可以监听请求并返回响应的工作应用程序,我们需要一个输入输出循环。幸运的是,Tornado 以 `tornado.ioloop.IOLoop` 的形式提供了开箱即用的功能。 + +``` +# __init__.py +from tornado.httpserver import HTTPServer +from tornado.ioloop import IOLoop +from tornado.options import define, options +from tornado.web import Application + +define('port', default=8888, help='port to listen on') + +def main(): +    """Construct and serve the tornado application.""" +    app = Application() +    http_server = HTTPServer(app) +    http_server.listen(options.port) +    print('Listening on http://localhost:%i' % options.port) +    IOLoop.current().start() +``` + +我喜欢某种形式的 `print` 声明,告诉我什么时候应用程序正在提供服务,我就是这样子。如果你愿意,可以不使用 `print`。 + +我们以 `IOLoop.current().start()` 开始我们的 I/O 循环。让我们进一步讨论输入,输出和异步性。 + +### Python 中的异步和 I/O 循环的基础知识 + +请允许我提前说明,我绝对,肯定,肯定并且安心地说不是异步编程方面的专家。就像我写的所有内容一样,接下来的内容源于我对这个概念的理解的局限性。因为我是人,可能有很深很深的缺陷。 + +异步程序的主要问题是: + + * 数据如何进来? + * 数据如何出去? + * 什么时候可以在不占用我全部注意力情况下运行某个过程? + +由于[全局解释器锁][4](GIL),Python 被设计为一种单线程语言。对于 Python 程序必须执行的每个任务,其线程执行的全部注意力都集中在该任务的持续时间内。我们的 HTTP 服务器是用 Python 编写的,因此,当接收到数据(如 HTTP 请求)时,服务器的唯一关心的是传入的数据。这意味着,在大多数情况下,无论是程序需要运行还是处理数据,程序都将完全消耗服务器的执行线程,阻止接收其它可能的数据,直到服务器完成它需要做的事情。 + +在许多情况下,这不是太成问题。典型的 Web 请求,响应周期只需要几分之一秒。除此之外,构建 HTTP 服务器的套接字可以维护待处理的传入请求的积压。因此,如果请求在该套接字处理其它内容时进入,则它很可能只是在处理之前稍微排队等待一会。对于低到中等流量的站点,几分之一秒的时间并不是什么大问题,你可以使用多个部署的实例以及 [NGINX][6] 等负载均衡器来为更大的请求负载分配流量。 + +但是,如果你的平均响应时间超过一秒钟,该怎么办?如果你使用来自传入请求的数据来启动一些长时间的过程(如机器学习算法或某些海量数据库查询),该怎么办?现在,你的单线程 Web 服务器开始累积一个无法寻址的积压请求,其中一些请求会因为超时而被丢弃。这不是一种选择,特别是如果你希望你的服务在一段时间内是可靠的。 + +异步 Python 程序登场。重要的是要记住因为它是用 Python 编写的,所以程序仍然是一个单线程进程。除非特别标记,否则在异步程序中仍然会阻塞执行。 + +但是,当异步程序结构正确时,只要你指定某个函数应该具有这样的能力,你的异步 Python 程序就可以“搁置”长时间运行的任务。然后,当搁置的任务完成并准备好恢复时,异步控制器会收到报告,只要在需要时管理它们的执行,而不会完全阻塞对新输入的处理。 + +这有点夸张,所以让我们用一个人类的例子来证明。 + +### 带回家吧 + +我经常发现自己在家里试图完成很多家务,但没有多少时间来做它们。在某一天,积压的家务可能看起来像: + + * 做饭(20 分钟准备,40 分钟烹饪) + * 洗碗(60 分钟) + * 洗涤并擦干衣物(30 分钟洗涤,每次干燥 90 分钟) + * 真空清洗地板(30 分钟) + +如果我是一个传统的同步程序,我会亲自完成每项任务。在我考虑处理任何其他事情之前,每项任务都需要我全神贯注地完成。因为如果没有我的全力关注,什么事情都完成不了。所以我的执行顺序可能如下: + + 1. 完全专注于准备和烹饪食物,包括等待食物烹饪(60 分钟) + 2. 将脏盘子移到水槽中(65 分钟过去了) + 3. 清洗所有盘子(125 分钟过去了) + 4. 开始完全专注于洗衣服,包括等待洗衣机洗完,然后将衣物转移到烘干机,再等烘干机完成( 250 分钟过去了) + 5. 
对地板进行真空吸尘(280 分钟了) + +从头到尾完成所有事情花费了 4 小时 40 分钟。 + +我应该像异步程序一样聪明地工作,而不是努力工作。我的家里到处都是可以为我工作的机器,而不用我一直努力工作。同时,现在我可以将注意力转移真正需要的东西上。 + +我的执行顺序可能看起来像: + + 1. 将衣物放入洗衣机并启动它(5 分钟) + 2. 在洗衣机运行时,准备食物(25 分钟过去了) + 3. 准备好食物后,开始烹饪食物(30 分钟过去了) + 4. 在烹饪食物时,将衣物从洗衣机移到烘干机机中开始烘干(35 分钟过去了) + 5. 当烘干机运行中,且食物仍在烹饪时,对地板进行真空吸尘(65 分钟过去了) + 6. 吸尘后,将食物从炉子中取出并装盘子入洗碗机(70 分钟过去了) + 7. 运行洗碗机(130 分钟完成) + +现在花费的时间下降到 2 小时 10 分钟。即使我允许在作业之间切换花费更多时间(总共 10-20 分钟)。如果我等待着按顺序执行每项任务,我花费的时间仍然只有一半左右。这就是将程序构造为异步的强大功能。 + +#### 那么 I/O 循环在哪里? + +一个异步 Python 程序的工作方式是从某个外部源(输入)获取数据,如果某个进程需要,则将该数据转移到某个外部工作者(输出)进行处理。当外部进程完成时,Python 主程序会收到提醒,然后程序获取外部处理(输入)的结果,并继续这样其乐融融的方式。 + +当数据不在 Python 主程序手中时,主程序就会被释放来处理其它任何事情。包括等待全新的输入(如 HTTP 请求)和处理长时间运行的进程的结果(如机器学习算法的结果,长时间运行的数据库查询)。主程序虽仍然是单线程的,但成了事件驱动的,它对程序处理的特定事件会触发动作。监听这些事件并指示应如何处理它们的主要是 I/O 循环在工作。 + +我知道,我们走了很长的路才得到这个重要的解释,但我希望在这里传达的是,它不是魔术,也不是某种复杂的并行处理或多线程工作。全局解释器锁仍然存在,主程序中任何长时间运行的进程仍然会阻塞其它任何事情的进行,该程序仍然是单线程的。然而,通过将繁琐的工作外部化,我们可以将线程的注意力集中在它需要注意的地方。 + +这有点像我上面的异步任务。当我的注意力完全集中在准备食物上时,它就是我所能做的一切。然而,当我能让炉子帮我做饭,洗碗机帮我洗碗,洗衣机和烘干机帮我洗衣服时,我的注意力就会被释放出来,去做其它事情。当我被提醒,我的一个长时间运行的任务已经完成并准备再次处理时,如果我的注意力是空闲的,我可以获取该任务的结果,并对其做下一步需要做的任何事情。 + +### Tornado 路由和视图 + +尽管经历了在 Python 中讨论异步的所有麻烦,我们还是决定暂不使用它。先来编写一个基本的 Tornado 视图。 + +与我们在 Flask 和 Pyramid 实现中看到的基于函数的视图不同,Tornado 的视图都是基于类的。这意味着我们将不在使用单独的,独立的函数来规定如何处理请求。相反,传入的 HTTP 请求将被捕获并将其分配为我们定义的类的一个属性。然后,它的方法将处理相应的请求类型。 + +让我们从一个基本的视图开始,即在屏幕上打印 "Hello, World"。我们为 Tornado 应用程序构造的每个基于类的视图都必须继承 `tornado.web` 中的 `RequestHandler` 对象。这将设置我们需要(但不想写)的所有底层逻辑来接收请求,同时构造正确格式的 HTTP 响应。 + +``` +from tornado.web import RequestHandler + +class HelloWorld(RequestHandler): +    """Print 'Hello, world!' as the response body.""" + +    def get(self): +        """Handle a GET request for saying Hello World!.""" +        self.write("Hello, world!") +``` + +因为我们要处理 `GET` 请求,所以我们声明(实际上是重写) `get` 方法。我们提供文本或 JSON 可序列化对象,用 `self.write` 写入响应体。之后,我们让 `RequestHandler` 来做在发送响应之前必须完成的其它工作。 + +就目前而言,此视图与 Tornado 应用程序本身并没有实际连接。我们必须回到 `__init__.py`,并稍微更新 `main` 函数。以下是新的内容: + +``` +# __init__.py +from tornado.httpserver import HTTPServer +from tornado.ioloop import IOLoop +from tornado.options import define, options +from tornado.web import Application +from todo.views import HelloWorld + +define('port', default=8888, help='port to listen on') + +def main(): +    """Construct and serve the tornado application.""" +    app = Application([ +        ('/', HelloWorld) +    ]) +    http_server = HTTPServer(app) +    http_server.listen(options.port) +    print('Listening on http://localhost:%i' % options.port) +    IOLoop.current().start() +``` + +#### 我们做了什么 + +我们将 `views.py` 文件中的 `HelloWorld` 视图导入到脚本 `__init__.py` 的顶部。然后我们添加了一个路由-视图对应的列表,作为 `Application` 实例化的第一个参数。每当我们想要在应用程序中声明一个路由时,它必须绑定到一个视图。如果需要,可以对多个路由使用相同的视图,但每个路由必须有一个视图。 + +我们可以通过在 `setup.py` 中启用的 `serve_app` 命令来运行应用程序,从而确保这一切都能正常工作。查看 `http://localhost:8888/` 并看到它显示 "Hello, world!"。 + +当然,在这个领域中我们还能做更多,也将做更多,但现在让我们来讨论模型吧。 + +### 连接数据库 + +如果我们想要保留数据,我们需要连接数据库。与 Flask 一样,我们将使用一个特定于框架的 SQLAchemy 变体,名为 [tornado-sqlalchemy][7]。 + +为什么要使用它而不是 [SQLAlchemy][8] 呢?好吧,其实 `tornado-sqlalchemy` 具有简单 SQLAlchemy 的所有优点,因此我们仍然可以使用通用的 `Base` 声明模型,并使用我们习以为常的所有列数据类型和关系。除了我们已经从习惯中了解到的,`tornado-sqlalchemy` 还为其数据库查询功能提供了一种可访问的异步模式,专门用于与 Tornado 现有的 I/O 循环一起工作。 + +我们通过将 `tornado-sqlalchemy` 和 `psycopg2` 添加到 `setup.py` 到所需包的列表并重新安装包来创建环境。在 `models.py` 中,我们声明了模型。这一步看起来与我们在 Flask 和 Pyramid 中已经看到的完全一样,所以我将跳过全部声明,只列出了 `Task` 模型的必要部分。 + +``` +# 这不是完整的 models.py, 但是足够看到不同点 +from tornado_sqlalchemy import declarative_base + +Base = declarative_base + +class Task(Base): +    # 等等,因为剩下的几乎所有的东西都一样 ... 
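    # 作为示意,下面用注释补全一个可能的字段草图(假设性内容,
    # 沿用本系列前两篇文章中的 Task 模型字段,并非本文的原始代码):
    #
    #     id = Column(Integer, primary_key=True)
    #     name = Column(Unicode, nullable=False)
    #     note = Column(Unicode)
    #     creation_date = Column(DateTime, nullable=False)
    #     due_date = Column(DateTime)
    #     completed = Column(Boolean, default=False)
    #     profile_id = Column(Integer, ForeignKey('profiles.id'))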
```

我们仍然需要将 `tornado-sqlalchemy` 连接到实际应用程序。在 `__init__.py` 中,我们将定义数据库并将其集成到应用程序中。

```
# __init__.py
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.options import define, options
from tornado.web import Application
from todo.views import HelloWorld

# add these
import os
from tornado_sqlalchemy import make_session_factory

define('port', default=8888, help='port to listen on')
factory = make_session_factory(os.environ.get('DATABASE_URL', ''))

def main():
    """Construct and serve the tornado application."""
    app = Application([
        ('/', HelloWorld)
    ],
        session_factory=factory
    )
    http_server = HTTPServer(app)
    http_server.listen(options.port)
    print('Listening on http://localhost:%i' % options.port)
    IOLoop.current().start()
```

就像我们在 Pyramid 中传递的会话工厂一样,我们可以使用 `make_session_factory` 来接收数据库 URL 并生成一个对象,这个对象的唯一目的是为视图提供到数据库的连接。然后我们将新创建的 `factory` 传递给 `Application` 对象,并使用 `session_factory` 关键字参数将它绑定到应用程序中。

最后,初始化和管理数据库的方式与 Flask 和 Pyramid 相同(即,单独的数据库管理脚本、与 `Base` 对象一起工作等)。它们看起来很相似,所以在这里我就不再赘述了。

### 回顾视图

Hello, World 总是适合学习基础知识,但我们需要一些真实的、特定于应用程序的视图。

让我们从 info 视图开始。

```
# views.py
import json
from tornado.web import RequestHandler

class InfoView(RequestHandler):
    """只允许 GET 请求"""
    SUPPORTED_METHODS = ["GET"]

    def set_default_headers(self):
        """设置默认响应头为 json 格式的"""
        self.set_header("Content-Type", 'application/json; charset="utf-8"')

    def get(self):
        """列出这个 API 的路由"""
        routes = {
            'info': 'GET /api/v1',
            'register': 'POST /api/v1/accounts',
            'single profile detail': 'GET /api/v1/accounts/<username>',
            'edit profile': 'PUT /api/v1/accounts/<username>',
            'delete profile': 'DELETE /api/v1/accounts/<username>',
            'login': 'POST /api/v1/accounts/login',
            'logout': 'GET /api/v1/accounts/logout',
            "user's tasks": 'GET /api/v1/accounts/<username>/tasks',
            "create task": 'POST /api/v1/accounts/<username>/tasks',
            "task detail": 'GET /api/v1/accounts/<username>/tasks/<task_id>',
            "task update": 'PUT /api/v1/accounts/<username>/tasks/<task_id>',
            "delete task": 'DELETE /api/v1/accounts/<username>/tasks/<task_id>'
        }
        self.write(json.dumps(routes))
```

有什么改变吗?让我们从上往下看。

我们添加了 `SUPPORTED_METHODS` 类属性,它是一个可迭代对象,代表这个视图所接受的请求方法,其他任何方法都将返回一个 [405][9] 状态码。当我们创建 `HelloWorld` 视图时,我们没有指定它,主要是当时有点懒。如果没有这个类属性,此视图将响应任何试图绑定到该视图的路由的请求。

我们声明了 `set_default_headers` 方法,它设置 HTTP 响应的默认头。我们在这里声明它,以确保我们返回的任何响应的 `"Content-Type"` 都是 `"application/json"`。

我们将 `json.dumps(some_object)` 添加到 `self.write` 的参数中,因为它可以很容易地构建响应主体的内容。

现在已经完成了,我们可以继续将它连接到 `__init__.py` 中的主路由。

```
# __init__.py
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.options import define, options
from tornado.web import Application
from todo.views import InfoView

# 添加这些
import os
from tornado_sqlalchemy import make_session_factory

define('port', default=8888, help='port to listen on')
factory = make_session_factory(os.environ.get('DATABASE_URL', ''))

def main():
    """Construct and serve the tornado application."""
    app = Application([
        ('/', InfoView)
    ],
        session_factory=factory
    )
    http_server = HTTPServer(app)
    http_server.listen(options.port)
    print('Listening on http://localhost:%i' % options.port)
    IOLoop.current().start()
```

我们知道,还需要编写更多的视图和路由。每个都会根据需要放入 `Application` 路由列表中,每个视图还需要一个 
`set_default_headers` 方法。在此基础上,我们还将创建 `send_response` 方法,它的作用是将响应与我们想要给响应设置的任何自定义状态码打包在一起。由于每个视图都需要这两个方法,因此我们可以创建一个包含它们的基类,这样每个视图都可以继承基类。这样,我们只需要编写一次。 + +``` +# views.py +import json +from tornado.web import RequestHandler + +class BaseView(RequestHandler): +    """Base view for this application.""" + +    def set_default_headers(self): +        """Set the default response header to be JSON.""" +        self.set_header("Content-Type", 'application/json; charset="utf-8"') + +    def send_response(self, data, status=200): +        """Construct and send a JSON response with appropriate status code.""" +        self.set_status(status) +        self.write(json.dumps(data)) +``` + +对于我们即将编写的 `TaskListView` 这样的视图,我们还需要一个到数据库的连接。我们需要 `tornado_sqlalchemy` 中的 `SessionMixin` 在每个视图类中添加一个数据库会话。我们可以将它放在 `BaseView` 中,这样,默认情况下,从它继承的每个视图都可以访问数据库会话。 + +``` +# views.py +import json +from tornado_sqlalchemy import SessionMixin +from tornado.web import RequestHandler + +class BaseView(RequestHandler, SessionMixin): +    """Base view for this application.""" + +    def set_default_headers(self): +        """Set the default response header to be JSON.""" +        self.set_header("Content-Type", 'application/json; charset="utf-8"') + +    def send_response(self, data, status=200): +        """Construct and send a JSON response with appropriate status code.""" +        self.set_status(status) +        self.write(json.dumps(data)) +``` + +只要我们修改 `BaseView` 对象,在将数据发布到这个 API 时,我们就应该定位到这里。 + +当 Tornado(从 v.4.5 开始)使用来自客户端的数据并将其组织起来到应用程序中使用时,它会将所有传入数据视为字节串。但是,这里的所有代码都假设使用 Python 3,因此我们希望使用的唯一字符串是 Unicode 字符串。我们可以为这个 `BaseView` 类添加另一个方法,它的工作是将输入数据转换为 Unicode,然后再在视图的其他地方使用。 + +如果我们想要在正确的视图方法中使用它之前转换这些数据,我们可以重写视图类的原生 `prepare` 方法。它的工作是在视图方法运行前运行。如果我们重写 `prepare` 方法,我们可以设置一些逻辑来运行,每当收到请求时,这些逻辑就会执行字节串到 Unicode 的转换。 + +``` +# views.py +import json +from tornado_sqlalchemy import SessionMixin +from tornado.web import RequestHandler + +class BaseView(RequestHandler, SessionMixin): +    """Base view for this application.""" + +    def prepare(self): +        self.form_data = { +            key: [val.decode('utf8') for val in val_list] +            for key, val_list in self.request.arguments.items() +        } + +    def set_default_headers(self): +        """Set the default response header to be JSON.""" +        self.set_header("Content-Type", 'application/json; charset="utf-8"') + +    def send_response(self, data, status=200): +        """Construct and send a JSON response with appropriate status code.""" +        self.set_status(status) +        self.write(json.dumps(data)) +``` + +如果有任何数据进入,它将在 `self.request.arguments` 字典中找到。我们可以通过键访问该数据库,并将其内容(始终是列表)转换为 Unicode。因为这是基于类的视图而不是基于函数的,所以我们可以将修改后的数据存储为一个实例属性,以便以后使用。我在这里称它为 `form_data`,但它也可以被称为 `potato`。关键是我们可以存储提交给应用程序的数据。 + +### 异步视图方法 + +现在我们已经构建了 `BaseaView`,我们可以构建 `TaskListView` 了,它会继承 `BaseaView`。 + +正如你可以从章节标题中看到的那样,以下是所有关于异步性的讨论。`TaskListView` 将处理返回任务列表的 `GET` 请求和用户给定一些表单数据来创建新任务的 `POST` 请求。让我们首先来看看处理 `GET` 请求的代码。 + +``` +# all the previous imports +import datetime +from tornado.gen import coroutine +from tornado_sqlalchemy import as_future +from todo.models import Profile, Task + +# the BaseView is above here +class TaskListView(BaseView): +    """View for reading and adding new tasks.""" +    SUPPORTED_METHODS = ("GET", "POST",) + +    @coroutine +    def get(self, username): +        """Get all tasks for an existing user.""" +        with self.make_session() as session: +            profile = yield as_future(session.query(Profile).filter(Profile.username == 
username).first)
            if profile:
                tasks = [task.to_dict() for task in profile.tasks]
                self.send_response({
                    'username': profile.username,
                    'tasks': tasks
                })
```

这里的第一个主要部分是 `@coroutine` 装饰器,它从 `tornado.gen` 导入。任何与调用栈的正常流程不同步运行的 Python 可调用对象,实际上都是“协程”,即一个可以与其它例程协同运行的例程。在我做家务的例子中,几乎每件家务活都是这样协同进行的例程:有些会阻塞我这个“主例程”(例如,给地板吸尘),但这种阻塞只是妨碍了我去开始或处理其它事情,并没有阻止已经启动的其他例程继续进行。

Tornado 提供了许多方法来构建利用协程的应用程序,包括允许我们给函数调用设置锁、用于同步异步协程的条件,以及手动修改控制 I/O 循环的事件系统。

这里使用 `@coroutine` 装饰器的唯一目的,是允许 `get` 方法把 SQL 查询作为后台进程执行,并在查询完成后恢复,同时不阻止 Tornado I/O 循环去处理其他传入的数据源。这就是这个实现中全部的“异步”之处:带外的数据库查询。显然,如果我们想要展示异步 Web 应用程序的魔力和神奇,那么一个任务列表并不是好的展示方式。

但是,这就是我们正在构建的东西,所以让我们来看看这个方法是如何利用 `@coroutine` 装饰器的。`SessionMixin` 混入到 `BaseView` 声明中,为我们的视图类添加了两个方便的、可感知数据库的属性:`session` 和 `make_session`。它们的名字相似,实现的目标也相当相似。

`self.session` 属性是一个可感知数据库的会话。在请求-响应周期结束时,即视图将响应发送回客户端之前,任何对数据库的更改都会被提交,会话随之关闭。

`self.make_session` 是一个上下文管理器兼生成器,可以动态构建并返回一个全新的会话对象。那个 `self.session` 对象仍然存在,但 `make_session` 无论如何都会另外创建一个新会话。`make_session` 生成器自身还内置了一个功能:当它的上下文(即缩进级别)结束时,提交并关闭它所创建的会话。

如果你查看源代码,会发现赋值给 `self.session` 的对象类型与 `self.make_session` 生成的对象类型之间没有区别,不同之处只在于它们是如何被管理的。

使用 `make_session` 上下文管理器时,生成的会话仅属于该上下文,在这个上下文中开始和结束。你可以使用 `make_session` 上下文管理器在同一个视图中打开、修改、提交以及关闭多个数据库会话。

`self.session` 要简单得多:当你进入视图方法时会话已经打开,在响应被发送回客户端之前会话就已提交。

虽然 [Read the Docs 上的用法片段][10]和 [PyPI 示例][11]都展示了上下文管理器的用法,但它们都没有说明 `self.session` 对象或由 `self.make_session` 生成的会话本质上是不是异步的。直到我们启动查询时,才会接触到内置于 `tornado-sqlalchemy` 中的异步行为。

`tornado-sqlalchemy` 包为我们提供了 `as_future` 函数。它的工作是包装 `tornado-sqlalchemy` 会话所构造的查询并 yield 其返回值。如果视图方法用 `@coroutine` 装饰,那么使用 `yield as_future(query)` 模式就会让被包装的查询成为一个异步的后台进程。I/O 循环会接管,等待查询的返回值以及 `as_future` 所创建的 `future` 对象的解析。

要访问 `as_future(query)` 的结果,你必须对它 `yield`。否则,你只能获得一个未解析的生成器对象,而无法对查询做任何操作。

这个视图方法中的其余内容都是常规套路,与我们在 Flask 和 Pyramid 中看到的内容类似。

`post` 方法看起来非常相似。为了保持一致性,让我们看一下 `post` 方法以及它如何处理用 `BaseView` 构造的 `self.form_data`。

```
@coroutine
def post(self, username):
    """Create a new task."""
    with self.make_session() as session:
        profile = yield as_future(session.query(Profile).filter(Profile.username == username).first)
        if profile:
            due_date = self.form_data['due_date'][0]
            task = Task(
                name=self.form_data['name'][0],
                note=self.form_data['note'][0],
                creation_date=datetime.now(),
                due_date=datetime.strptime(due_date, '%d/%m/%Y %H:%M:%S') if due_date else None,
                completed=self.form_data['completed'][0],
                profile_id=profile.id,
                profile=profile
            )
            session.add(task)
            self.send_response({'msg': 'posted'}, status=201)
```

正如我所说,这符合我们的期望:

  * 与我们在 `get` 方法中看到的查询模式相同
  * 构造一个新的 `Task` 对象的实例,用 `form_data` 的数据填充
  * 添加新的 `Task` 对象(但不提交,因为提交由上下文管理器处理!)到数据库会话
  * 将响应发送给客户端

这样我们就有了 Tornado web 应用程序的基础。其他内容(例如,数据库管理和更多完整应用程序的视图)实际上与我们在 Flask 和 Pyramid 应用程序中看到的相同。

### 关于使用合适的工具完成合适的工作的一点想法

在我们继续浏览这些 Web 框架时,我们开始看到它们都可以有效地处理相同的问题。对于像这样的待办事项列表,任何框架都可以完成这项任务。但是,有些 Web 框架比其它框架更适合某些工作,这具体取决于什么对你来说“更合适”,以及你的需求是什么。

虽然 Tornado 显然能够处理 Pyramid 或 Flask 可以处理的相同工作,但把它用于这样的应用程序实际上是一种浪费,这就像只想去一个街区之外却专门开车出门。是的,它可以完成“出行”的任务,但短途出行并不是你选择汽车而不是自行车或双脚的原因。

根据文档,Tornado 被称为“Python Web 框架和异步网络库”。在 Python Web 框架生态系统中,像它这样的框架并不多见。如果你要完成的工作需要异步性,或者能以任何形式从异步性中获益,就使用 Tornado。如果你的应用程序需要处理多个长期连接,同时又不想牺牲太多性能,选择 Tornado。如果你的应用程序由多个应用组成,并且需要感知线程以准确地处理数据,那就使用 
Tornado。这是它最有效的地方。 + +用你的汽车做“汽车的事情”,使用其他交通工具做其他事情。 + +### 向前看,进行一些深度检查 + +谈到使用合适的工具来完成合适的工作,在选择框架时,请记住应用程序的范围和规模,包括现在和未来。到目前为止,我们只研究了适用于中小型 Web 应用程序的框架。本系列的下一篇也是最后一篇将介绍最受欢迎的 Python 框架之一 Django,它适用于可能会变得更大的大型应用程序。同样,尽管它在技术上能够并且将会处理待办事项列表问题,但请记住,这不是它的真正用途。我们仍然会通过它来展示如何使用它来构建应用程序,但我们必须牢记框架的意图以及它是如何反映在架构中的: + +  * **Flask:** 适用于小型,简单的项目。它可以使我们轻松地构建视图并将它们快速连接到路由,它可以简单地封装在一个文件中。 + +  * **Pyramid:** 适用于可能增长的项目。它包含一些配置来启动和运行。应用程序组件的独立领域可以很容易地划分并构建到任意深度,而不会忽略中央应用程序。 + +  * **Tornado:** 适用于受益于精确和有意识的 I/O 控制的项目。它允许协程,并轻松公开可以控制如何接收请求或发送响应以及何时发生这些操作的方法。 + +  * **Django:**(我们将会看到)意味着可能会变得更大的东西。它有着非常庞大的生态系统,包括大量插件和模块。它非常有主见的配置和管理,以保持所有不同部分在同一条线上。 + +无论你是从本系列的第一篇文章开始阅读,还是稍后才加入的,都要感谢阅读!请随意留下问题或意见。下次再见时,我手里会拿着 Django。 + +### 感谢 Python BDFL + +我必须把功劳归于它应得的地方,非常感谢 [Guido van Rossum][12],不仅仅是因为他创造了我最喜欢的编程语言。 + +在 [PyCascades 2018][13] 期间,我很幸运的不仅给了基于这个文章系列的演讲,而且还被邀请参加了演讲者的晚宴。整个晚上我都坐在 Guido 旁边,不停地问他问题。其中一个问题是,在 Python 中异步到底是如何工作的,但他没有一点大惊小怪,而是花时间向我解释,让我开始理解这个概念。他后来[推特给我][14]发了一条消息:是用于学习异步 Python 的广阔资源。我随后在三个月内阅读了三次,然后写了这篇文章。你真是一个非常棒的人,Guido! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/6/tornado-framework + +作者:[Nicholas Hunt-Walker][a] +选题:[lujun9972][b] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/nhuntwalker +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/18/5/pyramid-framework +[2]: https://opensource.com/article/18/4/flask +[3]: https://tornado.readthedocs.io/en/stable/ +[4]: https://realpython.com/python-gil/ +[5]: https://en.wikipedia.org/wiki/Thread_(computing) +[6]: https://www.nginx.com/ +[7]: https://tornado-sqlalchemy.readthedocs.io/en/latest/ +[8]: https://www.sqlalchemy.org/ +[9]: https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_errors +[10]: https://tornado-sqlalchemy.readthedocs.io/en/latest/#usage +[11]: https://pypi.org/project/tornado-sqlalchemy/#description +[12]: https://www.twitter.com/gvanrossum +[13]: https://www.pycascades.com +[14]: https://twitter.com/gvanrossum/status/956186585493458944 diff --git a/translated/tech/20180730 A single-user, lightweight OS for your next home project - Opensource.com.md b/translated/tech/20180730 A single-user, lightweight OS for your next home project - Opensource.com.md deleted file mode 100644 index 1935158f1a..0000000000 --- a/translated/tech/20180730 A single-user, lightweight OS for your next home project - Opensource.com.md +++ /dev/null @@ -1,65 +0,0 @@ -适用于你下一个家庭项目的单用户轻量级操作系统| Opensource.com -====== -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/robot_arm_artificial_ai.png?itok=8CUU3U_7) - -究竟什么是 RISC OS?嗯,它不是一种新的 Linux。它也不是有些人认为的 Windows。事实上,在 1987 年发布,它比其中任何一个都要老。但你看到它不一定会意识到这一点。 - -点击式图形用户界面在活动程序的底部有一个固定板和一个图标栏。因此,它看起来像 Windows 95,并且比它早了 8 年。 - -这个操作系统最初是为 [Acorn Archimedes][1] 编写的。这台机器中的 Acorn RISC Machines CPU 是全新的硬件,因此需要在其上运行全新的软件。这是最早的 ARM 芯片上的系统,早于任何人想到的 Android 或 [Armbian][2] 之前。 - -虽然 Acorn 桌面最终消失了,但 ARM 芯片继续征服世界。在这里,RISC OS 一直有一个优点 - 通常在嵌入式设备中,你从来没有真正地意识到它。RISC OS 过去长期以来一直是一个完全专有的操作系​​统。但近年来,所有人已经开始将源代码发布到一个名为 [RISC OS Open][3] 的项目中。 - -### 1\. 你可以将它安装在树莓派上 - -树莓派的官方操作系统 [Raspbian][4] 实际上非常棒(但如果你对摆弄不同技术上新奇的东西不敢兴趣,那么你可能最初也不会选择树莓派)。由于 RISC OS 是专门为 ARM 编写的,因此它可以在各种小型计算机上运行,​​包括树莓派的各个型号。 - -### 2\. 
它超轻量级 - -我的树莓派上安装的 RISC 系统占用了几百兆 - 就是在我加载了数十个程序和游戏之后。其中大多数时候不大于 1 兆。 - -如果你真的节俭,RISC OS Pico 可用在 16MB SD 卡上。如果你在嵌入式系统或物联网项目中 hack 某些东西,这是很完美的。当然,16MB 实际上比压缩到 512KB 的老 Archimedes 的 ROM 要多得多。但我想 30 年间内存的发展,我们可以稍微放宽一下了。 - -### 3\. 它非常适合复古游戏 - -当 Archimedes 处于鼎盛时期时,ARM CPU 的速度比 Apple Macintosh 和 Commodore Amiga 中的 Motorola 68000 要快几倍,它也完全吸了新的 386。这使得它成为对游戏开发者有吸引力的一个平台,他们希望用这个星球上最强大的桌面计算机来支撑他们的东西。 - -这些游戏的许多拥有者都非常慷慨,允许业余爱好者免费下载他们的老作品。虽然 RISC OS 和硬件已经发展了,但只需要进行少量的调整就可以让它们运行起来。 - -如果你有兴趣探索这个,[这里有一个指南][5]让这些游戏在你的树莓派上运行。 - -### 4\. 它有 BBC BASIC - -就像过去一样,按下 F12 进入命令行,输入 `*BASIC`,就可以看到一个完整的 BBC BASIC 解释器。 - -对于那些在 80 年代没有接触过的人,请让我解释一下:BBC BASIC 是当时我们很多人的第一个编程语言,因为它专门教孩子如何编码。当时有大量的书籍和杂志文章教我们编写自己的简单但高度可玩的游戏。 - -几十年后,对于一个想要在学校假期做点什么的有技术头脑的孩子而言,在 BBC BASIC 上编写自己的游戏仍然是一个很棒的项目。但很少有孩子在家里有 BBC micro。那么他们应该怎么做呢? - -没问题,你可以在每台家用电脑上运行解释器,但是当别人需要使用它时就不能用了。那么为什么不使用装有 RISC OS 的树莓派呢? - -### 5\. 它是一个简单的单用户操作系统 - -RISC OS 不像 Linux 一样有自己的用户和超级用户访问权限。它有一个用户并可以完全访问整个机器。因此,它可能不是跨企业部署的最佳日常驱动,甚至不适合给爷爷做银行业务。但是,如果你正在寻找可以用来修改和 hack 的东西,那绝对是太棒了。你和机器之间没有那么多,所以你可以直接进去。 - -### 扩展阅读 - -如果你想了解有关此操作系统的更多信息,请查看 [RISC OS Open][3],或者将镜像烧到闪存到卡上并开始使用它。 - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/7/gentle-intro-risc-os - -作者:[James Mawson][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[geekpi](https://github.com/geekpi) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://opensource.com/users/dxmjames -[1]:https://en.wikipedia.org/wiki/Acorn_Archimedes -[2]:https://www.armbian.com/ -[3]:https://www.riscosopen.org/content/ -[4]:https://www.raspbian.org/ -[5]:https://www.riscosopen.org/wiki/documentation/show/Introduction%20to%20RISC%20OS diff --git a/translated/tech/20180814 Top Linux developers- recommended programming books.md b/translated/tech/20180814 Top Linux developers- recommended programming books.md deleted file mode 100644 index 28725009db..0000000000 --- a/translated/tech/20180814 Top Linux developers- recommended programming books.md +++ /dev/null @@ -1,110 +0,0 @@ - -顶级 Linux 开发者推荐的编程书籍 -====== - -毫无疑问,Linux 是由那些拥有深厚计算机知识背景而且才华横溢的程序员发明的。让那些大名鼎鼎的 Linux 程序员向今日的开发者分享一些曾经带领他们登堂入室的好书和技术参考吧,你会不会也读过其中几本呢? - -Linux,毫无争议的属于21世纪的操作系统。虽然Linus Torvalds 在建立开源社区这件事上做了很多工作和社区决策,不过那些网络专家和开发者愿意接受Linux的原因还是因为它卓越的代码质量和高可用性。Torvalds 是个编程天才,同时必须承认他还是得到了很多其他同样极具才华的开发者的无私帮助。 - -就此我咨询了Torvalds 和其他一些顶级Linux开发者,有哪些书籍帮助他们走上了成为顶级开发者的道路,下面请听我一一道来。 - -### 熠熠生辉的 C语言 - -Linux 是在大约90年代开发出来的,与它一起问世的还有其他一些完成基础功能的开源软件。与此相应,那时的开发者使用的工具和语言反映了那个时代的印记。可能[C 语言不再流行了][1],可对于很多已经建功立业的开发者来说,C 语言是他们的第一个实际开发中使用的语言,这一点也在他们推选的对他们有着深远影响的书单中反映出来。 - -Torvalds 说,“你不应该再选用我那个时代使用的语言或者开发方式”,他的开发道路始于BASIC,然后转向机器码(“甚至都不是汇编语言,而是真真正正的’二进制‘机器码”,他解释道),再然后转向汇编语言和 C 语言。 - -“任何人都不应该再从这些语言开始进入开发这条路了”,他补充道。“这些语言中的一些今天已经没有什么意义(如 BASIC 和机器语言)。尽管 C 还是一个主流语言,我也不推荐你从它开始你的开发工作”。 - -并不是他不喜欢 C。不管怎样,Linux 是用[C语言GNU C][2]写就的。“我始终认为 C 是一个伟大的语言,它有着非常简单的语法,对于很多方向的开发都很合适,但是我怀疑你会挫折重重,从你的第一个'Hello World'程序开始到你真正能开发出能用的东西当中有很大一步要走”。他认为,如果用现在的标准,如果作为现在的入门语言的话,从 C语言开始的代价太大。 - -在他那个时代,Torvalds 的唯一选择的书就只能是Brian W. Kernighan 和Dennis M. 
Ritchie 合著的[C 编程语言C Programming Language, 2nd Edition][3],在编程圈内也被尊称为K&R。“这本书简单精炼,但是你要先有编程的背景才能欣赏它”。Torvalds 说到。 - -Torvalds 并不是唯一一个推荐K&R 的开源开发者。以下几位也同样引用了这本他们认为值得推荐的书籍,他们有:Linux 和 Oracle 虚拟化开发副总裁,Wim Coekaerts;Linux 开发者Alan Cox; Google 云 CTO Brian Stevens; Canonical 技术运营部副总裁Pete Graner。 - - -如果你今日还想同 C 语言较量一番的话,Jeremy Allison,Samba 的共同发起人,推荐[21世纪的 C 语言21st Century C: C Tips from the New School][4]。他还建议,同时也去阅读一本比较旧但是写的更详细的[C专家编程Expert C Programming: Deep C Secrets][5]和有着20年历史的[UNIX POSIX多线程编程Programming with POSIX Threads][6]。 - - -### 如果不选C 语言, 那选什么? - - Linux 开发者推荐的书籍自然都是他们认为适合今时今日的开发项目的语言工具。这也折射了开发者自身的个人偏好。例如, Allison认为年轻的开发者应该在[Go 编程语言The Go Programming Language ][7]和[Rust 编程Rust with Programming Rust][8]的帮助下去学习 Go 语言和 Rust 语言。 - - -但是超越编程语言来考虑问题也不无道理(尽管这些书传授了你编程技巧)。今日要做些有意义的开发工作的话,"要从那些已经完成了99%显而易见工作的框架开始,然后你就能围绕着它开始写脚本了", Torvalds 推荐了这种做法。 - - -“坦率来说,语言本身远远没有围绕着它的基础架构重要”,他继续道,“可能你会从 Java 或者Kotlin 开始,但那是因为你想为自己的手机开发一个应用,因此安卓 SDK 成为了最佳的选择,又或者,你对游戏开发感兴趣,你选择了一个游戏开发引擎来开始,而通常它们有着自己的脚本语言”。 - - -这里提及的基础架构包括那些和操作系统本身相关的编程书籍。 -Garner 在读完了大名鼎鼎的 K&R后又拜读了W. Richard Steven 的[Unix 网络编程Unix: Network Programming][10]。特别的是,Steven 的[TCP/IP详解,卷1:协议TCP/IP Illustrated, Volume 1: The Protocols][11]在出版了30年之后仍然被认为是必读的。因为 Linux 开发很大程度上和[和网络基础架构有关][12],Garner 也推荐了很多 O’Reilly 的书,包括[Sendmail][13],[Bash][14],[DNS][15],以及[IMAP/POP][16]。 - -Coekaerts也是Maurice Bach的[UNIX操作系统设计The Design of the Unix Operation System][17]的书迷之一。James Bottomley 也是这本书的推崇者,作为一个 Linux 内核开发者,当 Linux 刚刚问世时James就用Bach 的这本书所传授的知识将它研究了个底朝天。 - -### 软件设计知识永不过时 - -尽管这样说有点太局限在技术领域。Stevens 还是说到,“所有的开发者都应该在开始钻研语法前先研究如何设计,[日常物品的设计The Design of Everyday Things][18]是我的最爱”。 - -Coekaerts 喜欢Kernighan 和 Rob Pike合著的[程序设计实践The Practic of Programming][19]。这本关于设计实践的书当 Coekaerts 还在学校念书的时候还未出版,他说道,“但是我把它推荐给每一个人”。 - - -不管何时,当你问一个长期认真对待开发工作的开发者他最喜欢的计算机书籍时,你迟早会听到一个名字和一本书: -Donald Knuth和他所著的[计算机程序设计艺术(1-4A)The Art of Computer Programming, Volumes 1-4A][20]。Dirk Hohndel,VMware 首席开源官,认为这本书尽管有永恒的价值,但他也承认,“今时今日并非及其有用”。(译注:不代表译者观点) - - -### 读代码。大量的读。 - -编程书籍能教会你很多,也请别错过另外一个在开源社区特有的学习机会:[如何阅读代码Code Reading: The Open Source Perspective][21]。那里有不可计数的代码例子阐述如何解决编程问题(以及如何让你陷入麻烦...)。Stevens 说,谈到磨炼编程技巧,在他的书单里排名第一的“书”是 Unix 的源代码。 - -"也请不要忽略从他人身上学习的各种机会。", Cox道,“我是在一个计算机俱乐部里和其他人一起学的 BASIC,在我看来,这仍然是一个学习的最好办法”,他从[精通 ZX81机器码Mastering machine code on your ZX81][22]这本书和 Honeywell L66 B 编译器手册里学习到了如何编写机器码,但是学习技术这点来说,单纯阅读和与其他开发者在工作中共同学习仍然有着很大的不同。 - - -Cox 说,“我始终认为最好的学习方法是和一群人一起试图去解决你们共同关心的一些问题并从中找到快乐,这和你是5岁还是55岁无关”。 - - -最让我吃惊的是这些顶级 Linux 开发者都是在非常底层级别开始他们的开发之旅的,甚至不是从汇编语言或 C 语言,而是从机器码开始开发。毫无疑问,这对帮助开发者理解计算机在非常微观的底层级别是怎么工作的起了非常大的作用。 - - -那么现在你准备好尝试一下硬核 Linux 开发了吗?Greg Kroah-Hartman,这位 Linux 内核过期分支的维护者,推荐了Steve Oualline 的[实用 C 语言编程Practical C Programming][23]和Samuel harbison 以及Guy Steels 合著的[C语言参考手册C: A Reference Manual][24]。接下来请阅读“[如何进行 Linux 内核开发HOWTO do Linux kernel development][25]”,到这时,就像Kroah-Hartman所说,你已经准备好启程了。 - -于此同时,还请你刻苦学习并大量编码,最后祝你在跟随顶级 Linux 开发者脚步的道路上好运相随。 - - --------------------------------------------------------------------------------- - -via: https://www.hpe.com/us/en/insights/articles/top-linux-developers-recommended-programming-books-1808.html - -作者:[Steven Vaughan-Nichols][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:DavidChenLiang(https://github.com/DavidChenLiang) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]:https://www.hpe.com/us/en/insights/contributors/steven-j-vaughan-nichols.html -[1]:https://www.codingdojo.com/blog/7-most-in-demand-programming-languages-of-2018/ 
-[2]:https://www.gnu.org/software/gnu-c-manual/ -[3]:https://amzn.to/2nhyjEO -[4]:https://amzn.to/2vsL8k9 -[5]:https://amzn.to/2KBbWn9 -[6]:https://amzn.to/2M0rfeR -[7]:https://amzn.to/2nhyrnMe -[8]:http://shop.oreilly.com/product/0636920040385.do -[9]:https://www.hpe.com/us/en/resources/storage/containers-for-dummies.html?jumpid=in_510384402_linuxbooks_containerebook0818 -[10]:https://amzn.to/2MfpbyC -[11]:https://amzn.to/2MpgrTn -[12]:https://www.hpe.com/us/en/insights/articles/how-to-see-whats-going-on-with-your-linux-system-right-now-1807.html -[13]:http://shop.oreilly.com/product/9780596510299.do -[14]:http://shop.oreilly.com/product/9780596009656.do -[15]:http://shop.oreilly.com/product/9780596100575.do -[16]:http://shop.oreilly.com/product/9780596000127.do -[17]:https://amzn.to/2vsCJgF -[18]:https://amzn.to/2APzt3Z -[19]:https://www.amazon.com/Practice-Programming-Addison-Wesley-Professional-Computing/dp/020161586X/ref=as_li_ss_tl?ie=UTF8&linkCode=sl1&tag=thegroovycorpora&linkId=e6bbdb1ca2182487069bf9089fc8107e&language=en_US -[20]:https://amzn.to/2OknFsJ -[21]:https://amzn.to/2M4VVL3 -[22]:https://amzn.to/2OjccJA -[23]:http://shop.oreilly.com/product/9781565923065.do -[24]:https://amzn.to/2OjzgrT -[25]:https://www.kernel.org/doc/html/v4.16/process/howto.html diff --git a/translated/tech/20180907 6.828 lab tools guide.md b/translated/tech/20180907 6.828 lab tools guide.md deleted file mode 100644 index 1396289ad1..0000000000 --- a/translated/tech/20180907 6.828 lab tools guide.md +++ /dev/null @@ -1,200 +0,0 @@ -6.828 实验工具指南 -====== -### 6.828 实验工具指南 - -熟悉你的环境对高效率的开发和调试来说是至关重要的。本文将为你简单概述一下 JOS 环境和非常有用的 GDB 和 QEMU 命令。话虽如此,但你仍然需要去阅读 GDB 和 QEMU 手册,来理解这些强大的工具如何使用。 - -#### 调试小贴士 - -##### 内核 - -GDB 是你的朋友。使用 `qemu-gdb target`(或它的变体 `qemu-gdb-nox`)使 QEMU 等待 GDB 去绑定。下面在调试内核时用到的一些命令,可以去查看 GDB 的资料。 - -如果你遭遇意外的中断、异常、或三重故障,你可以使用 `-d` 参数要求 QEMU 去产生一个详细的中断日志。 - -调试虚拟内存问题时,尝试 QEMU 监视命令 `info mem`(提供内存高级概述)或 `info pg`(提供更多细节内容)。注意,这些命令仅显示**当前**页表。 - -(在实验 4 以后)去调试多个 CPU 时,使用 GDB 的线程相关命令,比如 `thread` 和 `info threads`。 - -##### 用户环境(在实验 3 以后) - -GDB 也可以去调试用户环境,但是有些事情需要注意,因为 GDB 无法区分开多个用户环境或用户环境与内核环境。 - -你可以使用 `make run-name`(或编辑 `kern/init.c` 目录)来指定 JOS 启动的用户环境,为使 QEMU 等待 GDB 去绑定,使用 `run-name-gdb` 的变体。 - -你可以符号化调试用户代码,就像调试内核代码一样,但是你要告诉 GDB,哪个符号表用到符号文件命令上,因为它一次仅能够使用一个符号表。提供的 `.gdbinit` 用于加载内核符号表 `obj/kern/kernel`。对于一个用户环境,这个符号表在它的 ELF 二进制文件中,因此你可以使用 `symbol-file obj/user/name` 去加载它。不要从任何 `.o` 文件中加载符号,因为它们不会被链接器迁移进去(库是静态链接进 JOS 用户二进制文件中的,因此这些符号已经包含在每个用户二进制文件中了)。确保你得到了正确的用户二进制文件;在不同的二直制文件中,库函数被链接为不同的 EIP,而 GDB 并不知道更多的内容! 
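例如,一个最小的符号切换示例(假设你正在用 `make run-hello-gdb` 调试 `user/hello`,其入口函数为 `umain`;具体的符号名以你自己的程序为准):

```
(gdb) symbol-file obj/user/hello
Load new symbol table from "obj/user/hello"? (y or n) y
(gdb) b umain
(gdb) c
```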
- -(在实验 4 以后)因为 GDB 绑定了整个虚拟机,所以它可以将时钟中断看作为一种控制转移。这使得从底层上不可能实现步进用户代码,因为一个时钟中断无形中保证了片刻之后虚拟机可以再次运行。因此可以使用 `stepi` 命令,因为它阻止了中断,但它仅可以步进一个汇编指令。断点一般来说可以正常工作,但要注意,因为你可能在不同的环境(完全不同的一个二进制文件)上遇到同一个 EIP。 - -#### 参考 - -##### JOS makefile - -JOS 的 GNUmakefile 包含了在各种方式中运行的 JOS 的许多假目标。所有这些目标都配置 QEMU 去监听 GDB 连接(`*-gdb` 目标也等待这个连接)。要在运行中的 QEMU 上启动它,只需要在你的实验目录中简单地运行 `gdb ` 即可。我们提供了一个 `.gdbinit` 文件,它可以在 QEMU 中自动指向到 GDB、加载内核符号文件、以及在 16 位和 32 位模式之间切换。退出 GDB 将关闭 QEMU。 - - * `make qemu` - 在一个新窗口中构建所有的东西并使用 VGA 控制台和你的终端中的串行控制台启动 QEMU。想退出时,既可以关闭 VGA 窗口,也可以在你的终端中按 `Ctrl-c` 或 `Ctrl-a x`。 - * `make qemu-nox` - 和 `make qemu` 一样,但仅使用串行控制台来运行。想退出时,按下 `Ctrl-a x`。这种方式在通过 SSH 拨号连接到 Athena 上时非常有用,因为 VGA 窗口会占用许多带宽。 - * `make qemu-gdb` - 和 `make qemu` 一样,但它与任意时间被动接受 GDB 不同,而是暂停第一个机器指令并等待一个 GDB 连接。 - * `make qemu-nox-gdb` - 它是 `qemu-nox` 和 `qemu-gdb` 目标的组合。 - * `make run-nam` - (在实验 3 以后)运行用户程序 _name_。例如,`make run-hello` 运行 `user/hello.c`。 - * `make run-name-nox`,`run-name-gdb`, `run-name-gdb-nox` - (在实验 3 以后)与 `qemu` 目标变量对应的 `run-name` 的变体。 - - - -makefile 也接受几个非常有用的变量: - - * `make V=1 …` -详细模式。输出正在运行的每个命令,包括参数。 - * `make V=1 grade` -在评级测试失败后停止,并将 QEMU 的输出放入 `jos.out` 文件中以备检查。 - * `make QEMUEXTRA=' _args_ ' …` -指定传递给 QEMU 的额外参数。 - - - -##### JOS obj/ - -在构建 JOS 时,makefile 也产生一些额外的输出文件,这些文件在调试时非常有用: - - * `obj/boot/boot.asm`、`obj/kern/kernel.asm`、`obj/user/hello.asm`、等等。 -引导加载器、内核、和用户程序的汇编代码列表。 - * `obj/kern/kernel.sym`、`obj/user/hello.sym`、等等。 -内核和用户程序的符号表。 - * `obj/boot/boot.out`、`obj/kern/kernel`、`obj/user/hello`、等等。 -内核和用户程序链接的 ELF 镜像。它们包含了 GDB 用到的符号信息。 - - - -##### GDB - -完整的 GDB 命令指南请查看 [GDB 手册][1]。下面是一些在 6.828 课程中非常有用的命令,它们中的一些在操作系统开发之外的领域几乎用不到。 - - * `Ctrl-c` -在当前指令处停止机器并打断进入到 GDB。如果 QEMU 有多个虚拟的 CPU,所有的 CPU 都会停止。 - * `c`(或 `continue`) -继续运行,直到下一个断点或 `Ctrl-c`。 - * `si`(或 `stepi`) -运行一个机器指令。 - * `b function` 或 `b file:line`(或 `breakpoint`) -在给定的函数或行上设置一个断点。 - * `b * addr`(或 `breakpoint`) -在 EIP 的 addr 处设置一个断点。 - * `set print pretty` -启用数组和结构的美化输出。 - * `info registers` -输出通用寄存器 `eip`、`eflags`、和段选择器。更多更全的机器寄存器状态转储,查看 QEMU 自己的 `info registers` 命令。 - * `x/ N x addr` -以十六进制显示虚拟地址 addr 处开始的 N 个词的转储。如果 N 省略,默认为 1。addr 可以是任何表达式。 - * `x/ N i addr` -显示从 addr 处开始的 N 个汇编指令。使用 `$eip` 作为 addr 将显示当前指令指针寄存器中的指令。 - * `symbol-file file` -(在实验 3 以后)切换到符号文件 file 上。当 GDB 绑定到 QEMU 后,它并不是虚拟机中进程边界内的一部分,因此我们要去告诉它去使用哪个符号。默认情况下,我们配置 GDB 去使用内核符号文件 `obj/kern/kernel`。如果机器正在运行用户代码,比如是 `hello.c`,你就需要使用 `symbol-file obj/user/hello` 去切换到 hello 的符号文件。 - - - -QEMU 将每个虚拟 CPU 表示为 GDB 中的一个线程,因此你可以使用 GDB 中所有的线程相关的命令去查看或维护 QEMU 的虚拟 CPU。 - - * `thread n` -GDB 在一个时刻只关注于一个线程(即:CPU)。这个命令将关注的线程切换到 n,n 是从 0 开始编号的。 - * `info threads` -列出所有的线程(即:CPU),包括它们的状态(活动还是停止)和它们在什么函数中。 - - - -##### QEMU - -QEMU 包含一个内置的监视器,它能够有效地检查和修改机器状态。想进入到监视器中,在运行 QEMU 的终端中按入 `Ctrl-a c` 即可。再次按下 `Ctrl-a c` 将切换回串行控制台。 - -监视器命令的完整参考资料,请查看 [QEMU 手册][2]。下面是 6.828 课程中用到的一些有用的命令: - - * `xp/ N x paddr` -显示从物理地址 paddr 处开始的 N 个词的十六进制转储。如果 N 省略,默认为 1。这是 GDB 的 `x` 命令模拟的物理内存。 - - * `info registers` -显示机器内部寄存器状态的一个完整转储。实践中,对于段选择器,这将包含机器的 _隐藏_ 段状态和局部、全局、和中断描述符表加任务状态寄存器。隐藏状态是在加载段选择器后,虚拟的 CPU 从 GDT/LDT 中读取的信息。下面是实验 1 中 JOS 内核处于运行中时的 CS 信息和每个字段的含义: -```c - CS =0008 10000000 ffffffff 10cf9a00 DPL=0 CS32 [-R-] -``` - - * `CS =0008` - -代码选择器可见部分。我们使用段 0x8。这也告诉我们参考全局描述符表(0x8 &4=0),并且我们的 CPL(当前权限级别)是 0x8&3=0。 - * `10000000` -这是段基址。线性地址 = 逻辑地址 + 0x10000000。 - * `ffffffff` -这是段限制。访问线性地址 0xffffffff 以上将返回段违规异常。 - * `10cf9a00` -段的原始标志,QEMU 将在接下来的几个字段中解码这些对我们有用的标志。 - * `DPL=0` -段的权限级别。一旦代码以权限 0 运行,它将就能够加载这个段。 - * `CS32` -这是一个 32 位代码段。对于数据段(不要与 DS 寄存器混淆了),另外的值还包括 `DS`,而对于本地描述符表是 `LDT`。 - * `[-R-]` -这个段是只读的。 - - * `info mem` 
-(在实验 2 以后)显示映射的虚拟内存和权限。比如: -``` - ef7c0000-ef800000 00040000 urw - efbf8000-efc00000 00008000 -rw - -``` - -这告诉我们从 0xef7c0000 到 0xef800000 的 0x00040000 字节的内存被映射为读取/写入/用户可访问,而映射在 0xefbf8000 到 0xefc00000 之间的内存权限是读取/写入,但是仅限于内核可访问。 - - * `info pg` -(在实验 2 以后)显示当前页表结构。它的输出类似于 `info mem`,但与页目录条目和页表条目是有区别的,并且为每个条目给了单独的权限。重复的 PTE 和整个页表被折叠为一个单行。例如: -``` - VPN range Entry Flags Physical page - [00000-003ff] PDE[000] -------UWP - [00200-00233] PTE[200-233] -------U-P 00380 0037e 0037d 0037c 0037b 0037a .. - [00800-00bff] PDE[002] ----A--UWP - [00800-00801] PTE[000-001] ----A--U-P 0034b 00349 - [00802-00802] PTE[002] -------U-P 00348 - -``` - -这里各自显示了两个页目录条目、虚拟地址范围 0x00000000 到 0x003fffff 以及 0x00800000 到 0x00bfffff。 所有的 PDE 都存在于内存中、可写入、并且用户可访问,而第二个 PDE 也是可访问的。这些页表中的第二个映射了三个页、虚拟地址范围 0x00800000 到 0x00802fff,其中前两个页是存在于内存中的、可写入、并且用户可访问的,而第三个仅存在于内存中,并且用户可访问。这些 PTE 的第一个条目映射在物理页 0x34b 处。 - - - - -QEMU 也有一些非常有用的命令行参数,使用 `QEMUEXTRA` 变量可以将参数传递给 JOS 的 makefile。 - - * `make QEMUEXTRA='-d int' ...` -记录所有的中断和一个完整的寄存器转储到 `qemu.log` 文件中。你可以忽略前两个日志条目、"SMM: enter" 和 "SMM: after RMS”,因为这些是在进入引导加载器之前生成的。在这之后的日志条目看起来像下面这样: -``` - 4: v=30 e=0000 i=1 cpl=3 IP=001b:00800e2e pc=00800e2e SP=0023:eebfdf28 EAX=00000005 - EAX=00000005 EBX=00001002 ECX=00200000 EDX=00000000 - ESI=00000805 EDI=00200000 EBP=eebfdf60 ESP=eebfdf28 - ... - -``` - -第一行描述了中断。`4:` 只是一个日志记录计数器。`v` 提供了十六进程的向量号。`e` 提供了错误代码。`i=1` 表示它是由一个 `int` 指令(相对一个硬件产生的中断而言)产生的。剩下的行的意思很明显。对于一个寄存器转储而言,接下来看到的就是寄存器信息。 - -注意:如果你运行的是一个 0.15 版本之前的 QEMU,日志将写入到 `/tmp` 目录,而不是当前目录下。 - - - --------------------------------------------------------------------------------- - -via: https://pdos.csail.mit.edu/6.828/2018/labguide.html - -作者:[csail.mit][a] -选题:[lujun9972][b] -译者:[qhwdw](https://github.com/qhwdw) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://pdos.csail.mit.edu -[b]: https://github.com/lujun9972 -[1]: http://sourceware.org/gdb/current/onlinedocs/gdb/ -[2]: http://wiki.qemu.org/download/qemu-doc.html#pcsys_005fmonitor diff --git a/translated/tech/20180928 What containers can teach us about DevOps.md b/translated/tech/20180928 What containers can teach us about DevOps.md deleted file mode 100644 index d514d8ba0b..0000000000 --- a/translated/tech/20180928 What containers can teach us about DevOps.md +++ /dev/null @@ -1,105 +0,0 @@ -容器技术对指导我们 DevOps 的一些启发 -====== - -容器技术的使用支撑了目前 DevOps 三大主要实践:流水线,及时反馈,持续实验与学习以改进。 - -![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/LAW-patent_reform_520x292_10136657_1012_dc.png?itok=Cd2PmDWf) - -容器技术与 DevOps 二者在发展的过程中是互相促进的关系。得益于 DevOps 的设计理念愈发先进,容器生态系统在设计上与组件选择上也有相应发展。同时,由于容器技术在生产环境中的使用,反过来也促进了 DevOps 三大主要实践:[支撑DevOps的三个实践][1]. 
- - -### 工作流 - -**容器中的工作流** - -每个容器都可以看成一个独立的封闭仓库,当你置身其中,不需要管外部的系统环境、集群环境、以及其他基础设施,不管你在里面如何折腾,只要对外提供正常的功能就好。一般来说,容器内运行的应用,一般作为整个应用系统架构的一部分:比如 web API,数据库,任务执行,缓存系统,垃圾回收器等。运维团队一般会限制容器的资源使用,并在此基础上建立完善的容器性能监控服务,从而降低其对基础设施或者下游其他用户的影响。 - -**现实中的工作流** - -那些跟“容器”一样独立工作的团队,也可以借鉴这种限制容器占用资源的策略。因为无论是在现实生活中的工作流(代码发布、构建基础设施,甚至制造[Spacely’s Sprockets][2]等),还是技术中的工作流(开发、测试、试运行、发布)都使用了这样的线性工作流,一旦某个独立的环节或者工作团队出现了问题,那么整个下游都会受到影响,虽然使用我们这种线性的工作流有效降低了工作耦合性。 - -**DevOps 中的工作流** - -DevOps 中的第一条原则,就是掌控整个执行链路的情况,努力理解系统如何协同工作,并理解其中出现的问题如何对整个过程产生影响。为了提高流程的效率,团队需要持续不断的找到系统中可能存在的性能浪费以及忽视的点,并最终修复它们。 - - -> “践行这样的工作流后,可以避免传递一个已知的缺陷到工作流的下游,避免产生一个可能会导致全局性能退化的局部优化,持续优化工作流的性能,持续加深对于系统的理解” - -–Gene Kim, [支撑DevOps的三个实践][3], IT 革命, 2017.4.25 - -### 反馈 - -**容器中的反馈** - -除了限制容器的资源,很多产品还提供了监控和通知容器性能指标的功能,从而了解当容器工作不正常时,容器内部处于什么样的工作状态。比如 目前[流行的][5][Prometheus][4],可以用来从容器和容器集群中收集相应的性能指标数据。容器本身特别适用于分隔应用系统,以及打包代码和其运行环境,但也同时带来不透明的特性,这时从中快速的收集信息,从而解决发生在其内部出现的问题,就显得尤为重要了。 - -**现实中的反馈** - -在现实中,从始至终同样也需要反馈。一个高效的处理流程中,及时的反馈能够快速的定位事情发生的时间。反馈的关键词是“快速”和“相关”。当一个团队处理大量不相关的事件时,那些真正需要快速反馈的重要信息,很容易就被忽视掉,并向下游传递形成更严重的问题。想象下[如果露西和埃塞尔][6]能够很快的意识到:传送带太快了,那么制作出的巧克力可能就没什么问题了(尽管这样就不太有趣了)。 - -**DevOps and feedback** - -DevOps 中的第二条原则,就是快速收集所有的相关有用信息,这样在出现的问题影响到其他开发进程之前,就可以被识别出。DevOps 团队应该努力去“优化下游“,以及快速解决那些可能会影响到之后团队的问题。同工作流一样,反馈也是一个持续的过程,目标是快速的获得重要的信息以及当问题出现后能够及时的响应。 - -> "快速的反馈对于提高技术的质量、可用性、安全性至关重要。" - -–Gene Kim, et al., DevOps 手册:如何在技​​术组织中创造世界级的敏捷性,可靠性和安全性, IT 革命, 2016 - -### 持续实验与学习 - -**容器中的持续实验与学习** - -如何让”持续的实验与学习“更具操作性是一个不小的挑战。容器让我们的开发工程师和运营团队,在不需要掌握太多边缘或难以理解的东西情况下,依然可以安全地进行本地和生产环境的测试,这在之前是难以做到的。即便是一些激进的实验,容器技术仍然让我们轻松地进行版本控制、记录、分享。 - -**现实中的持续实验与学习** - -举个我自己的例子:多年前,作为一个年轻、初出茅庐的系统管理员(仅仅工作三周),我被要求对一个运行某个大学核心IT部门网站的Apache虚拟主机进行更改。由于没有易于使用的测试环境,我直接在生产的站点上进行了配置修改,当时觉得配置没问题就发布了,几分钟后,我隔壁无意中听到了同事说: - -”等会,网站挂了?“ - -“没错,怎么回事?” - -很多人蒙圈了…… - -在被嘲讽之后(真实的嘲讽),我一头扎在工作台上,赶紧撤销我之前的更改。当天下午晚些时候,部门主管 - 我老板的老板的老板来到我的工位上,问发生了什么事。 -“别担心,”她告诉我。“我们不会生你的气,这是一个错误,现在你已经学会了。“ - -而在容器中,这种情形很容易的进行测试,并且也很容易在部署生产环境之前,被那些经验老道的团队成员发现。 - -**DevOps 中的持续实验与学习** - -做实验的初衷是我们每个人都希望通过一些改变从而能够提高一些东西,并勇敢地通过实验来验证我们的想法。对于 DevOps 团队来说,失败无论对团队还是个人来说都是经验,所要不要担心失败。团队中的每个成员不断学习、共享,也会不断提升其所在团队与组织的水平。 - -随着系统变得越来越琐碎,我们更需要将注意力发在特殊的点上:上面提到的两条原则主要关注的是流程的目前全貌,而持续的学习则是关注的则是整个项目、人员、团队、组织的未来。它不仅对流程产生了影响,还对流程中的每个人产生影响。 - -> "无风险的实验让我们能够不懈的改进我们的工作,但也要求我们使用之前没有用过的工作方式" - -–Gene Kim, et al., [凤凰计划:让你了解 IT、DevOps以及如何取得商业成功][7], IT 革命, 2013 - -### 容器技术给我们 DevOps 上的启迪 - -学习如何有效地使用容器可以学习DevOps的三条原则:工作流,反馈以及持续实验和学习。从整体上看应用程序和基础设施,而不是对容器外的东西置若罔闻,教会我们考虑到系统的所有部分,了解其上游和下游影响,打破孤岛,并作为一个团队工作,以提高全局性能和深度 -了解整个系统。通过努力提供及时准确的反馈,我们可以在组织内部创建有效的反馈模式,以便在问题发生影响之前发现问题。 -最后,提供一个安全的环境来尝试新的想法并从中学习,教会我们创造一种文化,在这种文化中,失败一方面促进了我们知识的增长,另一方面通过有根据的猜测,可以为复杂的问题带来新的、优雅的解决方案。 - - - --------------------------------------------------------------------------------- - -via: https://opensource.com/article/18/9/containers-can-teach-us-devops - -作者:[Chris Hermansen][a] -选题:[lujun9972](https://github.com/lujun9972) -译者:[译者ID](https://github.com/littleji) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://opensource.com/users/clhermansen -[1]: https://itrevolution.com/the-three-ways-principles-underpinning-devops/ -[2]: https://en.wikipedia.org/wiki/The_Jetsons -[3]: http://itrevolution.com/the-three-ways-principles-underpinning-devops -[4]: https://prometheus.io/ -[5]: https://opensource.com/article/18/9/prometheus-operational-advantage -[6]: https://www.youtube.com/watch?v=8NPzLBSBzPI -[7]: https://itrevolution.com/book/the-phoenix-project/ diff --git 
a/translated/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md b/translated/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md deleted file mode 100644 index 8184021df9..0000000000 --- a/translated/tech/20181015 How to Enable or Disable Services on Boot in Linux Using chkconfig and systemctl Command.md +++ /dev/null @@ -1,485 +0,0 @@ -如何使用chkconfig和systemctl命令启用或禁用linux服务 -====== - -对于Linux管理员来说这是一个重要(美妙)的话题,所以每个人都必须知道并练习怎样才能更高效的使用它们。 - - - -在Linux中,无论何时当你安装任何带有服务和守护进程的包,系统默认会把这些进程添加到 “init & systemd” 脚本中,不过此时它们并没有被启动 。 - - - -我们需要手动的开启或者关闭那些服务。Linux中有三个著名的且一直在被使用的init系统。 - - - -### 什么是init系统? - - - -在以Linux/Unix 为基础的操作系统上,init (初始化的简称) 是内核引导系统启动过程中第一个启动的进程。 - - - -init的进程id(pid)是1,除非系统关机否则它将会一直在后台运行。 - - - -Init 首先根据 `/etc/inittab` 文件决定Linux运行的级别,然后根据运行级别在后台启动所有其他进程和应用程序。 - - - -BIOS, MBR, GRUB 和内核程序在启动init之前就作为linux的引导程序的一部分开始工作了。 - - - -下面是Linux中可以使用的运行级别(从0~6总共七个运行级别) - - - - * **`0:`** 关机 - - * **`1:`** 单用户模式 - - * **`2:`** 多用户模式(没有NFS) - - * **`3:`** 完全的多用户模式 - - * **`4:`** 系统未使用 - - * **`5:`** 图形界面模式 - - * **`:`** 重启 - - - - - -下面是Linux系统中最常用的三个init系统 - - - - * System V (Sys V) - - * Upstart - - * systemd - - - - - -### 什么是 System V (Sys V)? - - - -System V (Sys V)是类Unix系统第一个传统的init系统之一。init是内核引导系统启动过程中第一支启动的程序 ,它是所有程序的父进程。 - - - -大部分Linux发行版最开始使用的是叫作System V(Sys V)的传统的init系统。在过去的几年中,已经有好几个init系统被发布用来解决标准版本中的设计限制,例如:launchd, the Service Management Facility, systemd 和 Upstart。 - - - -与传统的 SysV init系统相比,systemd已经被几个主要的Linux发行版所采用。 - - - -### 什么是 Upstart? - - - -Upstart 是一个基于事件的/sbin/init守护进程的替代品,它在系统启动过程中处理任务和服务的启动,在系统运行期间监视它们,在系统关机的时候关闭它们。 - - - -它最初是为Ubuntu而设计,但是它也能够完美的部署在其他所有Linux系统中,用来代替古老的System-V。 - - - -Upstart被用于Ubuntu 从 9.10 到 Ubuntu 14.10和基于RHEL 6的系统,之后它被systemd取代。 - - - -### 什么是 systemd? - - - -Systemd是一个新的init系统和系统管理器, 和传统的SysV相比,它可以用于所有主要的Linux发行版。 - - - -systemd 兼容 SysV 和 LSB init脚本。 它可以直接替代Sys V init系统。systemd是被内核启动的第一支程序,它的PID 是1。 - - - -systemd是所有程序的父进程,Fedora 15 是第一个用systemd取代upstart的发行版。systemctl用于命令行,它是管理systemd的守护进程/服务的主要工具,例如:(开启,重启,关闭,启用,禁用,重载和状态) - - - -systemd 使用.service 文件而不是bash脚本 (SysVinit 使用的). systemd将所有守护进程添加到cgroups中排序,你可以通过浏览`/cgroup/systemd` 文件查看系统等级。 - - - -### 如何使用chkconfig命令启用或禁用引导服务? - - - -chkconfig实用程序是一个命令行工具,允许你在指定运行级别下启动所选服务,以及列出所有可用服务及其当前设置。 - - - -此外,它还允许我们从启动中启用或禁用服务。前提是你有超级管理员权限(root或者sudo)运行这个命令。 - - - -所有的服务脚本位于 `/etc/rd.d/init.d`文件中 - - - -### 如何列出运行级别中所有的服务 - - - - `--list` 参数会展示所有的服务及其当前状态 (启用或禁用服务的运行级别) - - - -``` - - # chkconfig --list - - NetworkManager 0:off 1:off 2:on 3:on 4:on 5:on 6:off - - abrt-ccpp 0:off 1:off 2:off 3:on 4:off 5:on 6:off - - abrtd 0:off 1:off 2:off 3:on 4:off 5:on 6:off - - acpid 0:off 1:off 2:on 3:on 4:on 5:on 6:off - - atd 0:off 1:off 2:off 3:on 4:on 5:on 6:off - - auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off - - . - - . 
- -``` - - - -### 如何查看指定服务的状态 - - - -如果你想查看运行级别下某个服务的状态,你可以使用下面的格式匹配出需要的服务。 - - - -比如说我想查看运行级别中`auditd`服务的状态 - - - -``` - - # chkconfig --list| grep auditd - - auditd 0:off 1:off 2:on 3:on 4:on 5:on 6:off - -``` - - - -### 如何在指定运行级别中启用服务 - - - -使用`--level`参数启用指定运行级别下的某个服务,下面展示如何在运行级别3和运行级别5下启用 `httpd` 服务。 - - - -``` - - # chkconfig --level 35 httpd on - -``` - - - -### 如何在指定运行级别下禁用服务 - - - -同样使用 `--level`参数禁用指定运行级别下的服务,下面展示的是在运行级别3和运行级别5中禁用`httpd`服务。 - - - -``` - - # chkconfig --level 35 httpd off - -``` - - - -### 如何将一个新服务添加到启动列表中 - - - -`-–add`参数允许我们添加任何信服务到启动列表中, 默认情况下,新添加的服务会在运行级别2,3,4,5下自动开启。 - - - -``` - - # chkconfig --add nagios - -``` - - - -### 如何从启动列表中删除服务 - - - -可以使用 `--del` 参数从启动列表中删除服务,下面展示的事如何从启动列表中删除Nagios服务。 - - - -``` - - # chkconfig --del nagios - -``` - - - -### 如何使用systemctl命令启用或禁用开机自启服务? - - - -systemctl用于命令行,它是一个基础工具用来管理systemd的守护进程/服务,例如:(开启,重启,关闭,启用,禁用,重载和状态) - - - -所有服务创建的unit文件位与`/etc/systemd/system/`. - - - -### 如何列出全部的服务 - - - -使用下面的命令列出全部的服务(包括启用的和禁用的) - - - -``` - - # systemctl list-unit-files --type=service - - UNIT FILE STATE - - arp-ethers.service disabled - - auditd.service enabled - - [email protected] enabled - - blk-availability.service disabled - - brandbot.service static - - [email protected] static - - chrony-wait.service disabled - - chronyd.service enabled - - cloud-config.service enabled - - cloud-final.service enabled - - cloud-init-local.service enabled - - cloud-init.service enabled - - console-getty.service disabled - - console-shell.service disabled - - [email protected] static - - cpupower.service disabled - - crond.service enabled - - . - - . - - 150 unit files listed. - -``` - - - -使用下面的格式通过正则表达式匹配出你想要查看的服务的当前状态。下面是使用systemctl命令查看`httpd` 服务的状态。 - - - -``` - - # systemctl list-unit-files --type=service | grep httpd - - httpd.service disabled - -``` - - - -### 如何让指定的服务开机自启 - - - -使用下面格式的systemctl命令启用一个指定的服务。启用服务将会创建一个符号链接,如下可见 - - - -``` - - # systemctl enable httpd - - Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service. - -``` - - - -运行下列命令再次确认服务是否被启用。 - - - -``` - - # systemctl is-enabled httpd - - enabled - -``` - - - -### 如何禁用指定的服务 - - - -运行下面的命令禁用服务将会移除你启用服务时所创建的 - - - -``` - - # systemctl disable httpd - - Removed symlink /etc/systemd/system/multi-user.target.wants/httpd.service. 
- -``` - - - -运行下面的命令再次确认服务是否被禁用 - - - -``` - - # systemctl is-enabled httpd - - disabled - -``` - - - -### 如何查看系统当前的运行级别 - - - -使用systemctl命令确认你系统当前的运行级别,'运行级'别仍然由systemd管理,不过,运行级别对于systemd来说是一个历史遗留的概念。所以我建议你全部使用systemctl命令。 - - - -我们当前处于`运行级别3`, 下面显示的是`multi-user.target`。 - - - -``` - - # systemctl list-units --type=target - - UNIT LOAD ACTIVE SUB DESCRIPTION - - basic.target loaded active active Basic System - - cloud-config.target loaded active active Cloud-config availability - - cryptsetup.target loaded active active Local Encrypted Volumes - - getty.target loaded active active Login Prompts - - local-fs-pre.target loaded active active Local File Systems (Pre) - - local-fs.target loaded active active Local File Systems - - multi-user.target loaded active active Multi-User System - - network-online.target loaded active active Network is Online - - network-pre.target loaded active active Network (Pre) - - network.target loaded active active Network - - paths.target loaded active active Paths - - remote-fs.target loaded active active Remote File Systems - - slices.target loaded active active Slices - - sockets.target loaded active active Sockets - - swap.target loaded active active Swap - - sysinit.target loaded active active System Initialization - - timers.target loaded active active Timers - -``` - --------------------------------------------------------------------------------- - - - -via: https://www.2daygeek.com/how-to-enable-or-disable-services-on-boot-in-linux-using-chkconfig-and-systemctl-command/ - - - -作者:[Prakash Subramanian][a] - -选题:[lujun9972][b] - -译者:[way-ww](https://github.com/way-ww) - -校对:[校对者ID](https://github.com/校对者ID) - - - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - - - -[a]: https://www.2daygeek.com/author/prakash/ - -[b]: https://github.com/lujun9972 - diff --git a/translated/tech/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md b/translated/tech/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md deleted file mode 100644 index 8cda009eff..0000000000 --- a/translated/tech/20181030 How Do We Find Out The Installed Packages Came From Which Repository.md +++ /dev/null @@ -1,365 +0,0 @@ -# 我们如何得知安装的包来自哪个仓库? - -有时候你可能想知道安装的软件包来自于哪个仓库。这将帮助你在遇到包冲突问题时进行故障排除。 - -因为[第三方仓库][1]拥有最新版本的软件包,所以有时候当你试图安装一些包的时候会出现兼容性的问题。 - -在 Linux 上一切都是可能的,因为你可以安装一个即使在你的发行版系统上不能使用的包。 - -你也可以安装一个最新版本的包即使你的发行版系统仓库还没有这个版本,怎么做到的呢? - -这就是为什么出现了第三方仓库。它们允许用户从库中安装所有可用的包。 - -几乎所有的发行版系统都允许第三方软件库。一些发行版还会官方推荐一些不会取代基础仓库的第三方仓库,例如 CentOS 官方推荐安装 [EPEL 库][2]。 - -下面是常用的仓库列表和它们的详细信息。 - - * **`CentOS:`** [EPEL][2], [ELRepo][3] 等是 [Centos 社区认证仓库](4)。 - * **`Fedora:`** [RPMfusion repo][5] 是经常被很多 [Fedora][6] 用户使用的仓库。 - * **`ArchLinux:`** ArchLinux 社区仓库包含了来自于 Arch 用户仓库的已经被信任用户 ( Trusted User ) 审核通过的软件包。 - * **`openSUSE:`** [Packman repo][7] 为 openSUSE 提供了各种附加的软件包,特别是但不限于那些在 openSUSE Build Service 应用黑名单上的与多媒体相关的应用和库。它是 openSUSE 软件包的最大外部软件库。 - * **`Ubuntu:`** Personal Package Archives (PPAs) 是一种软件仓库。开发者们可以创建这种仓库来分发他们的软件。你可以在 PPA 导航页面( PPA’s Launchpad page )找到相关信息。同时,你也可以启用 Cananical 合作伙伴软件仓库。 - -### 仓库是什么? - -软件仓库是存储特定的应用程序的软件包的集中场所。 - -所有的 Linux 发行版都在维护他们自己的仓库,并允许用户在他们的机器上获取和安装包。 - -每个厂商都提供了各自的包管理工具来管理它们的仓库,例如搜索、安装、更新、升级、删除等等。 - -大部分 Linux 发行版除了 RHEL 和 SUSE 以外都是免费的。要访问付费的仓库,你需要购买订阅。 - -### 为什么我们需要启用第三方仓库? - -在 Linux 里,并不建议从源代码安装包,因为这样做可能会在升级软件和系统的时候产生很多问题,这也是为什么我们建议从库中安装包而不是从源代码安装。 - -### 在 RHEL/CentOS 系统上我们如何得知安装的软件包来自哪个仓库? 
- -这可以通过很多方法实现。我们会给你所有可能的选择,你可以选择一个对你来说最合适的。 - -### 方法-1:使用 Yum 命令 - -RHEL 和 CentOS 系统使用 RPM 包因此我们能够使用 [Yum 包管理器][8] 来获得信息。 - -YUM 即 Yellodog Updater,Modified 是适用于基于 RPM 的系统例如 Red Hat Enterpise Linux (RHEL)和 CentOS 的一个开源命令行前端包管理工具。 - -Yum 是从发行版仓库和其他第三方库中获取、安装、删除、查询和管理 RPM 包的一个主要工具。 - -``` -# yum info apachetop -Loaded plugins: fastestmirror -Loading mirror speeds from cached hostfile - * epel: epel.mirror.constant.com -Installed Packages -Name : apachetop -Arch : x86_64 -Version : 0.15.6 -Release : 1.el7 -Size : 65 k -Repo : installed -From repo : epel -Summary : A top-like display of Apache logs -URL : https://github.com/tessus/apachetop -License : BSD -Description : ApacheTop watches a logfile generated by Apache (in standard common or - : combined logformat, although it doesn't (yet) make use of any of the extra - : fields in combined) and generates human-parsable output in realtime. -``` - - -**`apachetop`** 包来自 **`epel repo`**。 - -### 方法-2:使用 Yumdb 命令 - -Yumdb info 提供了类似于 yum info 的信息但是它又提供了包校验和数据、类型、用户信息(谁安装的软件包)。从 yum 3.2.26 开始,yum 已经开始在 rpmdatabase 之外存储额外的信息( user 表示软件是用户安装的,dep 表示它是作为依赖项引入的)。 - -``` -# yumdb info lighttpd -Loaded plugins: fastestmirror -lighttpd-1.4.50-1.el7.x86_64 - checksum_data = a24d18102ed40148cfcc965310a516050ed437d728eeeefb23709486783a4d37 - checksum_type = sha256 - command_line = --enablerepo=epel install lighttpd apachetop aria2 atop axel - from_repo = epel - from_repo_revision = 1540756729 - from_repo_timestamp = 1540757483 - installed_by = 0 - origin_url = https://epel.mirror.constant.com/7/x86_64/Packages/l/lighttpd-1.4.50-1.el7.x86_64.rpm - reason = user - releasever = 7 - var_contentdir = centos - var_infra = stock - var_uuid = ce328b07-9c0a-4765-b2ad-59d96a257dc8 -``` - -**`lighttpd`** 包来自 **`epel repo`**。 - -### 方法-3:使用 RPM 命令 - -[RPM 命令][9] 即 Red Hat Package Manager 是一个适用于基于 Red Hat 的系统(例如 RHEL, CentOS, Fedora, openSUSE & Mageia)的强大的命令行包管理工具。 - -这个工具允许你在你的 Linux 系统/服务器上安装、更新、移除、查询和验证软件。RPM 文件具有 .rpm 后缀名。RPM 包是用必需的库和依赖关系构建的,不会与系统上安装的其他包冲突。 - -``` -# rpm -qi apachetop -Name : apachetop -Version : 0.15.6 -Release : 1.el7 -Architecture: x86_64 -Install Date: Mon 29 Oct 2018 06:47:49 AM EDT -Group : Applications/Internet -Size : 67020 -License : BSD -Signature : RSA/SHA256, Mon 22 Jun 2015 09:30:26 AM EDT, Key ID 6a2faea2352c64e5 -Source RPM : apachetop-0.15.6-1.el7.src.rpm -Build Date : Sat 20 Jun 2015 09:02:37 PM EDT -Build Host : buildvm-22.phx2.fedoraproject.org -Relocations : (not relocatable) -Packager : Fedora Project -Vendor : Fedora Project -URL : https://github.com/tessus/apachetop -Summary : A top-like display of Apache logs -Description : -ApacheTop watches a logfile generated by Apache (in standard common or -combined logformat, although it doesn't (yet) make use of any of the extra -fields in combined) and generates human-parsable output in realtime. -``` - -**`apachetop`** 包来自 **`epel repo`**。 - -### Method-4: Using Repoquery Command - -repoquery 是一个从 YUM 库查询信息的程序,类似于 rpm 查询。 -``` -# repoquery -i httpd - -Name : httpd -Version : 2.4.6 -Release : 80.el7.centos.1 -Architecture: x86_64 -Size : 9817285 -Packager : CentOS BuildSystem -Group : System Environment/Daemons -URL : http://httpd.apache.org/ -Repository : updates -Summary : Apache HTTP Server -Source : httpd-2.4.6-80.el7.centos.1.src.rpm -Description : -The Apache HTTP Server is a powerful, efficient, and extensible -web server. -``` - -**`httpd`** 包来自 **`CentOS updates repo`**。 - -### 在 Fedora 系统上我们如何得知安装的包来自哪个仓库? 
- -DNF 是 Dandified yum 的缩写。我们可以说 DNF 是使用 hawkey/libsolv 库作为后端的下一代 yum 包管理器( yum 的分支)。从 Fedora 18 开始 Aleš Kozumplík 开始开发 DNF 并最终在 Fedora 22 上得以应用/启用。 - -[Dnf 命令][10] 用于在 Fedora 22 以及之后的系统上安装、更新、搜索和删除包。它会自动解决依赖并使安装包的过程变得顺畅,不会出现任何问题。 - -``` -$ dnf info tilix -Last metadata expiration check: 27 days, 10:00:23 ago on Wed 04 Oct 2017 06:43:27 AM IST. -Installed Packages -Name : tilix -Version : 1.6.4 -Release : 1.fc26 -Arch : x86_64 -Size : 3.6 M -Source : tilix-1.6.4-1.fc26.src.rpm -Repo : @System -From repo : updates -Summary : Tiling terminal emulator -URL : https://github.com/gnunn1/tilix -License : MPLv2.0 and GPLv3+ and CC-BY-SA -Description : Tilix is a tiling terminal emulator with the following features: - : - : - Layout terminals in any fashion by splitting them horizontally or vertically - : - Terminals can be re-arranged using drag and drop both within and between - : windows - : - Terminals can be detached into a new window via drag and drop - : - Input can be synchronized between terminals so commands typed in one - : terminal are replicated to the others - : - The grouping of terminals can be saved and loaded from disk - : - Terminals support custom titles - : - Color schemes are stored in files and custom color schemes can be created by - : simply creating a new file - : - Transparent background - : - Supports notifications when processes are completed out of view - : - : The application was written using GTK 3 and an effort was made to conform to - : GNOME Human Interface Guidelines (HIG). -``` - -**`tilix`** 包来自 **`Fedora updates repo`**。 - -### 在 openSUSE 系统上我们如何得知安装的包来自哪个仓库? - -Zypper 是一个使用 libzypp 的命令行包管理器。[Zypper 命令][11] 提供了存储库访问、依赖处理、包安装等功能。 -``` -$ zypper info nano - -Loading repository data... -Reading installed packages... - - -Information for package nano: ------------------------------ -Repository : Main Repository (OSS) -Name : nano -Version : 2.4.2-5.3 -Arch : x86_64 -Vendor : openSUSE -Installed Size : 1017.8 KiB -Installed : No -Status : not installed -Source package : nano-2.4.2-5.3.src -Summary : Pico editor clone with enhancements -Description : - GNU nano is a small and friendly text editor. It aims to emulate - the Pico text editor while also offering a few enhancements. -``` - -The **`nano`** package is coming from **`openSUSE Main repo (OSS)`**. -**`nano`** 包来自于 **`openSUSE Main repo(OSS)`**。 - -### 在 ArchLinux 系统上我们如何得知安装的包来自哪个仓库? - -[Pacman 命令][12] 即包管理器工具( package manager utility ),是一个简单的用来安装、构建、删除和管理 Arch Linux 软件包的命令行工具。Pacman 使用 libalpm( Arch Linux Package Managment ( ALPM )library)作为后端来执行所有的操作。 - -``` -# pacman -Ss chromium -extra/chromium 48.0.2564.116-1 - The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser -extra/qt5-webengine 5.5.1-9 (qt qt5) - Provides support for web applications using the Chromium browser project -community/chromium-bsu 0.9.15.1-2 - A fast paced top scrolling shooter -community/chromium-chromevox latest-1 - Causes the Chromium web browser to automatically install and update the ChromeVox screen reader extention. Note: This - package does not contain the extension code. 
-community/fcitx-mozc 2.17.2313.102-1 - Fcitx Module of A Japanese Input Method for Chromium OS, Windows, Mac and Linux (the Open Source Edition of Google Japanese - Input) -``` - -**`chromium`** 包来自 **`ArchLinux extra repo`**。 - -或者,我们可以使用以下选项获得关于包的详细信息。 - -``` -# pacman -Si chromium -Repository : extra -Name : chromium -Version : 48.0.2564.116-1 -Description : The open-source project behind Google Chrome, an attempt at creating a safer, faster, and more stable browser -Architecture : x86_64 -URL : http://www.chromium.org/ -Licenses : BSD -Groups : None -Provides : None -Depends On : gtk2 nss alsa-lib xdg-utils bzip2 libevent libxss icu libexif libgcrypt ttf-font systemd dbus - flac snappy speech-dispatcher pciutils libpulse harfbuzz libsecret libvpx perl perl-file-basedir - desktop-file-utils hicolor-icon-theme -Optional Deps : kdebase-kdialog: needed for file dialogs in KDE - gnome-keyring: for storing passwords in GNOME keyring - kwallet: for storing passwords in KWallet -Conflicts With : None -Replaces : None -Download Size : 44.42 MiB -Installed Size : 172.44 MiB -Packager : Evangelos Foutras -Build Date : Fri 19 Feb 2016 04:17:12 AM IST -Validated By : MD5 Sum SHA-256 Sum Signature -``` - -**`chromium`** 包来自 **`ArchLinux extra repo`**。 - -### 在基于 Debian 的系统上我们如何得知安装的包来自哪个仓库? - -在基于 Debian 的系统例如 Ubuntu,LinuxMint 上可以使用两种方法实现。 - -### 方法-1:使用 apt-cache 命令 - -[apt-cache 命令][13] 可以显示存储在 APT 内部数据库的很多信息。这些信息是一种缓存,因为它们是从列在 source.list 文件里的不同的源中获得的。这个过程发生在 apt 更新操作期间。 - -``` -$ apt-cache policy python3 -python3: - Installed: 3.6.3-0ubuntu2 - Candidate: 3.6.3-0ubuntu3 - Version table: - 3.6.3-0ubuntu3 500 - 500 http://in.archive.ubuntu.com/ubuntu artful-updates/main amd64 Packages - * 3.6.3-0ubuntu2 500 - 500 http://in.archive.ubuntu.com/ubuntu artful/main amd64 Packages - 100 /var/lib/dpkg/status -``` - -**`python3`** 包来自 **`Ubuntu updates repo`**。 - -### 方法-2:使用 apt 命令 - -[APT 命令][14] 即 Advanced Packaging Tool(APT)是 apt-get 命令的替代品,就像 DNF 是如何取代 YUM 一样。它是具有丰富功能的命令行工具并将所有的功能例如 apt-cache、apt-search、dpkg、apt-cdrom、apt-config、apt-ket 等包含在一个命令(APT)中,并且还有几个独特的功能。例如我们可以通过 APT 轻松安装 .dpkg 包但我们不能使用 Apt-Get 命令安装,更多类似的功能都被包含进了 APT 命令。APT-GET 因缺失了很多未被解决的特性而被 apt 取代。 - -``` -$ apt -a show notepadqq -Package: notepadqq -Version: 1.3.2-1~artful1 -Priority: optional -Section: editors -Maintainer: Daniele Di Sarli -Installed-Size: 1,352 kB -Depends: notepadqq-common (= 1.3.2-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2) -Download-Size: 356 kB -APT-Sources: http://ppa.launchpad.net/notepadqq-team/notepadqq/ubuntu artful/main amd64 Packages -Description: Notepad++-like editor for Linux - Text editor with support for multiple programming - languages, multiple encodings and plugin support. 
- -Package: notepadqq -Version: 1.2.0-1~artful1 -Status: install ok installed -Priority: optional -Section: editors -Maintainer: Daniele Di Sarli -Installed-Size: 1,352 kB -Depends: notepadqq-common (= 1.2.0-1~artful1), coreutils (>= 8.20), libqt5svg5 (>= 5.2.1), libc6 (>= 2.14), libgcc1 (>= 1:3.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>= 5.7.0), libqt5network5 (>= 5.2.1), libqt5printsupport5 (>= 5.2.1), libqt5webkit5 (>= 5.6.0~rc), libqt5widgets5 (>= 5.2.1), libstdc++6 (>= 5.2) -Homepage: http://notepadqq.altervista.org -Download-Size: unknown -APT-Manual-Installed: yes -APT-Sources: /var/lib/dpkg/status -Description: Notepad++-like editor for Linux - Text editor with support for multiple programming - languages, multiple encodings and plugin support. -``` -**`notepadqq`** 包来自 **`Launchpad PPA`**。 - --------------------------------------------------------------------------------- - -via: https://www.2daygeek.com/how-do-we-find-out-the-installed-packages-came-from-which-repository/ - -作者:[Prakash Subramanian][a] -选题:[lujun9972][b] -译者:[zianglei](https://github.com/zianglei) -校对:[校对者ID](https://github.com/校对者ID) - -本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - -[a]: https://www.2daygeek.com/author/prakash/ -[b]: https://github.com/lujun9972 -[1]: https://www.2daygeek.com/category/repository/ -[2]: https://www.2daygeek.com/install-enable-epel-repository-on-rhel-centos-scientific-linux-oracle-linux/ -[3]: https://www.2daygeek.com/install-enable-elrepo-on-rhel-centos-scientific-linux/ -[4]: https://www.2daygeek.com/additional-yum-repositories-for-centos-rhel-fedora-systems/ -[5]: https://www.2daygeek.com/install-enable-rpm-fusion-repository-on-centos-fedora-rhel/ -[6]: https://fedoraproject.org/wiki/Third_party_repositories -[7]: https://www.2daygeek.com/install-enable-packman-repository-on-opensuse-leap/ -[8]: https://www.2daygeek.com/yum-command-examples-manage-packages-rhel-centos-systems/ -[9]: https://www.2daygeek.com/rpm-command-examples/ -[10]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/ -[11]: https://www.2daygeek.com/zypper-command-examples-manage-packages-opensuse-system/ -[12]: https://www.2daygeek.com/pacman-command-examples-manage-packages-arch-linux-system/ -[13]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/ -[14]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/ diff --git a/translated/tech/20181207 Plan your own holiday calendar at the Linux command line.md b/translated/tech/20181207 Plan your own holiday calendar at the Linux command line.md new file mode 100644 index 0000000000..6d959d7339 --- /dev/null +++ b/translated/tech/20181207 Plan your own holiday calendar at the Linux command line.md @@ -0,0 +1,133 @@ +[#]: collector: (lujun9972) +[#]: translator: (MjSeven) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Plan your own holiday calendar at the Linux command line) +[#]: via: (https://opensource.com/article/18/12/linux-toy-cal) +[#]: author: (Jason Baker https://opensource.com/users/jason-baker) + +在 Linux 命令行中规划你的假期日历 +====== +将命令链接在一起,构建一个彩色日历,然后在暴风雪中将其拂去。 +![](https://opensource.com/sites/default/files/styles/image-full-size/public/uploads/linux-toy-cal.png?itok=S0F8RY9k) + +欢迎阅读今天推出的 Linux 命令行玩具降临日历。如果这是你第一次访问本系列,你可能会问:什么是命令行玩具。即使我不太确定,但一般来说,它可以是一个游戏或任何简单的娱乐,可以帮助你在终端玩得开心。 + +很可能你们中的一些人之前已经看过我们日历上的各种选择,但我们希望给每个人至少一件新东西。 + +我们在没有创建实际日历的情况下完成了本系列的第 7 
天,所以今天让我们使用命令行工具来做到这一点:**cal**。就其本身而言,**cal** 可能不是最令人惊奇的工具,但我们可以使用其它一些实用程序来为它增添一些趣味。 + +很可能,你的系统上已经安装了 **cal**。要使用它,只需要输入 **cal** 即可。 + +``` +$ cal +    December 2018   +Su Mo Tu We Th Fr Sa +                   1 + 2  3  4  5  6  7  8 + 9 10 11 12 13 14 15 +16 17 18 19 20 21 22 +23 24 25 26 27 28 29 +30 31           +``` + +我们不打算在本文中深入介绍高级用法,因此如果你想了解有关 **cal** 的更多信息,查看 Opensouce.com 社区版主 Don Watkin 的优秀文章 [date 和 cal 命令概述][1]。 + +现在,让我们用一个漂亮的盒子来为它增添趣味,就像我们在上一篇 Linux 玩具文章中介绍的那样。我将使用钻石块,用一点内边距来对齐。 + + +``` +$ cal | boxes -d diamonds -p a1l4t2  +       /\          /\          /\ +    /\//\\/\    /\//\\/\    /\//\\/\ + /\//\\\///\\/\//\\\///\\/\//\\\///\\/\ +//\\\//\/\\///\\\//\/\\///\\\//\/\\///\\ +\\//\/                            \/\\// + \/                                  \/ + /\           December 2018          /\ +//\\      Su Mo Tu We Th Fr Sa      //\\ +\\//                         1      \\// + \/        2  3  4  5  6  7  8       \/ + /\        9 10 11 12 13 14 15       /\ +//\\      16 17 18 19 20 21 22      //\\ +\\//      23 24 25 26 27 28 29      \\// + \/       30 31                      \/ + /\                                  /\ +//\\/\                            /\//\\ +\\///\\/\//\\\///\\/\//\\\///\\/\//\\\// + \/\\///\\\//\/\\///\\\//\/\\///\\\//\/ +    \/\\//\/    \/\\//\/    \/\\//\/ +       \/          \/          \/ +``` + +看起来很不错,但是为了好的测量,让我们把整个东西放到另一个盒子里,为了好玩,这次我们将使用滚动设计。 + +``` +cal | boxes -d diamonds -p a1t2l3 | boxes -a c -d scroll         + / ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \ +|  /~~\                                              /~~\  | +|\ \   |         /\          /\          /\         |   / /| +| \   /|      /\//\\/\    /\//\\/\    /\//\\/\      |\   / | +|  ~~  |   /\//\\\///\\/\//\\\///\\/\//\\\///\\/\   |  ~~  | +|      |  //\\\//\/\\///\\\//\/\\///\\\//\/\\///\\  |      | +|      |  \\//\/                            \/\\//  |      | +|      |   \/                                  \/   |      | +|      |   /\          December 2018           /\   |      | +|      |  //\\     Su Mo Tu We Th Fr Sa       //\\  |      | +|      |  \\//                        1       \\//  |      | +|      |   \/       2  3  4  5  6  7  8        \/   |      | +|      |   /\       9 10 11 12 13 14 15        /\   |      | +|      |  //\\     16 17 18 19 20 21 22       //\\  |      | +|      |  \\//     23 24 25 26 27 28 29       \\//  |      | +|      |   \/      30 31                       \/   |      | +|      |   /\                                  /\   |      | +|      |  //\\/\                            /\//\\  |      | +|      |  \\///\\/\//\\\///\\/\//\\\///\\/\//\\\//  |      | +|      |   \/\\///\\\//\/\\///\\\//\/\\///\\\//\/   |      | +|      |      \/\\//\/    \/\\//\/    \/\\//\/      |      | +|      |         \/          \/          \/         |      | +|      |                                            |      | + \     |~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~|     / +  \   /                                              \   / +   ~~~                                                ~~~ +``` + +完美。现在,事情变得有点疯狂了。我喜欢我们的设计,但我想全力以赴,所以我要给它上色。但是 Opensource.com 员工所在的北卡罗来版纳州罗利办公室,本周末很有可能下雪。所以,让我们享受彩色降临日历,然后用雪擦掉它。 + +关于雪,我抓取了一些 Bash 和 Gawk 的漂亮[代码片段][2],幸亏我发现了 CLIMagic。如果你不熟悉 CLIMagic,去查看他们的[网站][3],在 [Twitter][4] 上关注他们。你会满意的。 + +我们开始吧。让我们清除屏幕,扔掉四四方方的日历,给它上色,等几秒钟,然后用暴风雪把它吹走。这些在终端可以用一行命令完成。 + +``` +$ clear;cal|boxes -d diamonds -p a1t2l3|boxes -a c -d scroll|lolcat;sleep 3;while :;do echo $LINES $COLUMNS $(($RANDOM%$COLUMNS)) $(printf 
"\u2744\n");sleep 0.1;done|gawk '{a[$3]=0;for(x in a) {o=a[x];a[x]=a[x]+1;printf "\033[%s;%sH ",o,x;printf "\033[%s;%sH%s \033[0;0H",a[x],x,$4;}}' +``` + +大功告成。 + +![](https://opensource.com/sites/default/files/uploads/linux-toy-cal-animated.gif) + +要使它在你的系统上工作,你需要所有它引用的实用程序(box, lolcat, gawk 等),还需要使用支持 Unicode 的终端仿真器。 + +你有特别喜欢的命令行小玩具需要我介绍的吗?这个系列要介绍的小玩具大部分已经有了落实,但还预留了几个空位置。请在评论区留言,我会查看的。如果还有空位置,我会考虑介绍它的。如果没有,但如果我得到了一些很好的意见,我会在最后做一些有价值的提及。 + +看看昨天的玩具:[使用 Nyan Cat 在 Linux 命令行休息][5]。记得明天再来! + +-------------------------------------------------------------------------------- + +via: https://opensource.com/article/18/12/linux-toy-cal + +作者:[Jason Baker][a] +选题:[lujun9972][b] +译者:[MjSeven](https://github.com/MjSeven) +校对:[校对者ID](https://github.com/校对者ID) + +本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 + +[a]: https://opensource.com/users/jason-baker +[b]: https://github.com/lujun9972 +[1]: https://opensource.com/article/16/12/using-calendar-function-linux +[2]: http://climagic.org/coolstuff/let-it-snow.html +[3]: http://climagic.org/ +[4]: https://twitter.com/climagic +[5]: https://opensource.com/article/18/12/linux-toy-nyancat diff --git a/translated/tech/20181212 Top 5 configuration management tools.md b/translated/tech/20181212 Top 5 configuration management tools.md new file mode 100644 index 0000000000..d618833492 --- /dev/null +++ b/translated/tech/20181212 Top 5 configuration management tools.md @@ -0,0 +1,120 @@ +[#]: collector: (lujun9972) +[#]: translator: (HankChow) +[#]: reviewer: ( ) +[#]: publisher: ( ) +[#]: url: ( ) +[#]: subject: (Top 5 configuration management tools) +[#]: via: (https://opensource.com/article/18/12/configuration-management-tools) +[#]: author: (Marco Bravo https://opensource.com/users/marcobravo) + +五大最流行的配置管理工具 +====== +在寻找合适的 DevOps 工具之前,你最好要对配置管理工具有一定的了解。 +![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/innovation_lightbulb_gears_devops_ansible.png?itok=TSbmp3_M) + +DevOps 正因为有提高产品质量、缩短产品开发时间等优势,目前备受业界关注,同时也在长足发展当中。 + +[DevOps 的核心价值观][1]是团队文化Culture自动化Automation评估Measurement分享Sharing(CAMS),同时,团队对 DevOps 的执行力也是 DevOps 能否成功的重要因素。 + + * **团队文化**让大家团结一致; + * **自动化**是 DevOps 的基础; + * **评估**保证了及时的改进; + * **分享**让 CAMS 成为一个完整的循环过程。 + + + +DevOps 的另一个思想是任何东西,包括服务器、数据库、网络、日志文件、应用配置、文档、自动化测试、部署流程等,都可以通过代码来管理。 + +在本文中,我主要介绍配置管理的自动化。配置管理工具作为[基础架构即代码Infrastructure as Code][2](IaC)的一部分,支持使用软件进行开发实践,以及通过明文定义的文件来管理数据中心。 + +DevOps 团队只需要通过操作简单的配置文件,就可以实现应用开发中包括版本控制、测试、小型部署、设计模式这些最佳实践。总而言是,配置管理工具实现了通过编写代码来使基础架构管理变得自动化。 + +### 为什么要使用配置管理工具? 
配置管理工具可以提高应用部署和变更的效率,还可以让这些流程变得可重用、可扩展、可预测,甚至让它们维持在期望的状态,从而提高对资产的可控性。

使用配置管理工具的优势还包括:

  * 让代码遵守编码规范,提高代码可读性;
  * 具有幂等性Idempotency,也就是说,无论执行多少次重复的配置管理操作,得到的结果都是一致的;
  * 可以方便地管理分布式系统和大量的远程服务器。

配置管理工具主要分为拉取pull模式和推送push模式。拉取模式是指安装在各台服务器上的代理agent定期从中央存储库central repository拉取最新的配置并应用到对应的服务器上;而推送模式则由中央服务器central server主动向其它服务器推送更新的配置。

### 五大最流行的配置管理工具

目前配置管理工具有很多,不同的配置管理工具都有自己最适合的使用场景。下面我按照字母顺序列出了五个对 DevOps 有明显帮助的配置管理工具:它们都具有开源许可证、使用外部配置文件、支持无人值守运行,并且可以通过脚本自定义运行。下面对它们的介绍都来源于它们的代码库和官网内容。

### Ansible

“Ansible 是一个极其简洁的 IT 自动化平台,可以让你的应用和系统以更简单的方式部署。不需要安装任何代理,只需要使用 SSH 的方式和简单的语言,就可以免去脚本或代码部署应用的过程。”——[GitHub Ansible 代码库][3]

Ansible 是我最喜欢的工具之一,我在几年前就开始使用了。你可以使用 Ansible 在命令行中让多个服务器执行同一个命令,也可以使用 YAML 格式的 playbook 来让它自动执行特定的操作,这让技术团队和非技术团队之间的沟通变得更加明确。简洁、无代理、配置文件对非技术人员友好,是它的几个主要优点。

由于 Ansible 不需要代理,因此对服务器的资源消耗会很少。Ansible 默认使用的推送模式需要借助 SSH 连接,但它也支持拉取模式。[playbook][4] 可以只使用最少的命令集编写,当然也可以扩展为更加精细的自动化任务,包括引入其它角色、变量和模块。

你可以将 Ansible 和其它工具(包括 Ansible Works、Jenkins、RunDeck、[ARA][5] 等)结合起来使用,因为这些工具支持 [playbook 的回溯功能][6],这样就可以很方便地控制整个开发周期中的不同流程。

### CFEngine

“CFEngine 3 是一个流行的开源配置管理系统,它可以为大规模的系统提供自动化配置和维护。”——[GitHub CFEngine 代码库][7]

CFEngine 最早由 Mark Burgess 在 1993 年创建,以科学的方法实现自动化配置管理,目的是降低计算机系统配置中的熵,使其最终收敛到期望的配置状态;他同时还阐述了幂等性,即让系统达到并保持期望状态的能力。Burgess 在 2004 年又提出了[承诺理论Promise Theory][8],这个理论描述了代理之间自发合作的模型。

CFEngine 的最新版本已经用到了承诺理论,在各个服务器上的代理程序会从中央存储库拉取配置。CFEngine 的配置对专业技能要求较高,因此它比较适合技术团队使用。

### Chef

“为整个基础架构在配置管理上带来便利的一个系统集成框架。”——[GitHub Chef 代码库][9]

Chef 通过由 Ruby 编写的“菜谱recipe”来让你的基础架构保持在最新、最兼容的状态,这些“菜谱”描述了一系列资源应该处于的某种状态。Chef 既可以通过客户端-服务端的模式运行,也可以在 [chef-solo][10] 这种独立配置的模式下运行。大部分云提供商都很好地集成了 Chef,因此可以使用它为新机器做自动配置。

Chef 有广泛的用户基础,同时也提供了完备的工具包,让不同技术背景的团队可以通过“菜谱”进行沟通。尽管如此,它仍然算是一个技术导向的工具。

### Puppet

“Puppet 是可以在 Linux、Unix 和 Windows 系统上运行的自动化管理引擎,它可以根据集中的规范来执行诸如添加用户、安装软件包、更新服务器配置等等管理任务。”——[GitHub Puppet 代码库][11]

Puppet 作为一款面向运维工程师和系统管理员的工具,在更多情况下是作为配置管理工具来使用。它通过客户端-服务端的模式工作,使用代理从主服务器获取配置指令。

Puppet 使用声明式语言declarative language或 Ruby 来描述系统配置。它包含了不同的模块,并使用清单文件manifest files记录期望达到的目标状态。Puppet 默认使用推送模式,但也支持拉取模式。

### Salt

“为大规模基础架构或应用程序实现自动化管理的软件。”——[GitHub Salt 代码库][12]

Salt 的专长就是快速收集数据,即使是上万台服务器也能够轻松完成任务。它使用 Python 模块来管理配置信息和执行特定的操作,这些模块可以让 Salt 实现所有远程操作和状态管理。但配置 Salt 模块对技术水平有一定的要求。

Salt 使用客户端-服务端的结构(Salt minion 是客户端,而 Salt master 是服务端),并以 Salt 状态文件记录需要达到的目标状态。

### 总结

DevOps 工具领域一直在发展,因此必须时刻关注其中的最新动态。希望这篇文章能够鼓励读者进一步探索相关的概念和工具。为此,云原生计算基金会Cloud Native Computing Foundation(CNCF)在 [Cloud Native Landscape Project][13] 中也提供了很好的参考案例。

--------------------------------------------------------------------------------

via: https://opensource.com/article/18/12/configuration-management-tools

作者:[Marco Bravo][a]
选题:[lujun9972][b]
译者:[HankChow](https://github.com/HankChow)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/marcobravo
[b]: https://github.com/lujun9972
[1]: https://www.oreilly.com/learning/why-use-terraform
[2]: https://www.oreilly.com/library/view/infrastructure-as-code/9781491924334/ch04.html
[3]: https://github.com/ansible/ansible
[4]: https://opensource.com/article/18/8/ansible-playbooks-you-should-try
[5]: https://github.com/openstack/ara
[6]: https://opensource.com/article/18/5/analyzing-ansible-runs-using-ara
[7]: https://github.com/cfengine/core
[8]: https://en.wikipedia.org/wiki/Promise_theory
[9]: https://github.com/chef/chef
[10]: https://docs.chef.io/chef_solo.html
[11]:
https://github.com/puppetlabs/puppet
[12]: https://github.com/saltstack/salt
[13]: https://github.com/cncf/landscape

diff --git a/translated/tech/20190114 Hegemon - A Modular System And Hardware Monitoring Tool For Linux.md b/translated/tech/20190114 Hegemon - A Modular System And Hardware Monitoring Tool For Linux.md
new file mode 100644
index 0000000000..9dd255cb67
--- /dev/null
+++ b/translated/tech/20190114 Hegemon - A Modular System And Hardware Monitoring Tool For Linux.md
@@ -0,0 +1,139 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Hegemon – A Modular System And Hardware Monitoring Tool For Linux)
[#]: via: (https://www.2daygeek.com/hegemon-a-modular-system-and-hardware-monitoring-tool-for-linux/)
[#]: author: (Magesh Maruthamuthu https://www.2daygeek.com/author/magesh/)

Hegemon - 一个 Linux 中的模块化系统和硬件监控工具
======

我知道每个人都更喜欢使用 **[top 命令][1]**来监控系统利用率。

这是被 Linux 系统管理员大量使用的原生命令之一。

不过,在 Linux 中,任何东西都有替代品。

用于监控的工具也有很多,我更喜欢 **[htop 命令][2]**。

如果你想了解其他替代方案,我建议你逐个打开下面这些工具的链接了解更多信息。

它们有 htop、CorFreq、glances、atop、Dstat、Gtop、Linux Dash、Netdata、Monit 等。

但所有这些都只能监控系统利用率,而不能监控系统硬件。

Hegemon 则允许我们在单个仪表板中同时监控两者。

如果你正在寻找系统硬件监控软件,那么我建议你看下 **[lm_sensors][3]** 和 **[s-tui 压力终端 UI][4]**。

### Hegemon 是什么?

Hegemon 是一个正在开发中的模块化系统监视器,使用安全的 Rust 编写。

它允许用户在单个仪表板中同时监控系统利用率和硬件温度这两类信息。

### Hegemon 目前的特性

  * 监控 CPU 和内存使用情况、温度和风扇速度
  * 展开任何数据流以显示更详细的图表和其他信息
  * 可调整的更新间隔
  * 干净的 MVC 架构,具有良好的代码质量
  * 单元测试

### 计划的特性包括

  * macOS 和 BSD 支持(目前仅支持 Linux)
  * 监控磁盘和网络 I/O、GPU 使用情况(可能)等
  * 选择并重新排序数据流
  * 鼠标控制

### 如何在 Linux 中安装 Hegemon?

Hegemon 需要 Rust 1.26 或更高版本以及 libsensors 的开发文件。因此,请确保在安装 Hegemon 之前安装了这些软件包。

libsensors 库在大多数发行版的官方仓库中都有,使用以下命令即可安装。

对于 **`Debian/Ubuntu`** 系统,使用 **[apt-get 命令][5]** 或 **[apt 命令][6]** 在你的系统上安装 libsensors。

```
# apt install libsensors4-dev
```

对于 **`Fedora`** 系统,使用 **[dnf 包管理器][7]**在你的系统上安装 libsensors。

```
# dnf install lm_sensors-devel
```

运行以下命令安装 Rust 语言,并按照提示进行操作。如果你想查看 **[Rust 安装][8]**的详细教程,可以点击该链接。

```
$ curl https://sh.rustup.rs -sSf | sh
```

如果你已成功安装 Rust,就运行以下命令安装 Hegemon。

```
$ cargo install hegemon
```

### 如何在 Linux 中启动 Hegemon?
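先补充一个小提示(这是本文之外补充的假设场景,原文并未提及):`cargo install` 默认会把编译好的二进制文件放在 `~/.cargo/bin` 目录下。如果你的 shell 提示找不到 `hegemon` 命令,可以先把该目录加入 PATH(以下示例假设你使用的是默认安装路径,且使用 bash 或 zsh):

```
# 假设 cargo 使用的是默认安装路径 ~/.cargo/bin
$ export PATH="$HOME/.cargo/bin:$PATH"
```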
成功安装 Hegemon 包后,运行下面的命令启动它。

```
$ hegemon
```

![][10]

由于缺少 libsensors.so.4 库,我在启动 Hegemon 时遇到了一个错误。

```
$ hegemon
error while loading shared libraries: libsensors.so.4: cannot open shared object file: No such file or directory manjaro
```

我使用的是 Manjaro 18.04,系统中有 libsensors.so 和 libsensors.so.5 这两个共享库,但没有 libsensors.so.4。所以,我创建了以下符号链接来解决这个问题。

```
$ sudo ln -s /usr/lib/libsensors.so /usr/lib/libsensors.so.4
```

这是从我的 Lenovo-Y700 笔记本中截取的示例 gif。
![][11]

默认情况下它仅显示总体摘要,如果想查看详细输出,则需要展开对应的部分。下图是 Hegemon 展开后的显示内容。
![][12]

--------------------------------------------------------------------------------

via: https://www.2daygeek.com/hegemon-a-modular-system-and-hardware-monitoring-tool-for-linux/

作者:[Magesh Maruthamuthu][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.2daygeek.com/author/magesh/
[b]: https://github.com/lujun9972
[1]: https://www.2daygeek.com/top-command-examples-to-monitor-server-performance/
[2]: https://www.2daygeek.com/linux-htop-command-linux-system-performance-resource-monitoring-tool/
[3]: https://www.2daygeek.com/view-check-cpu-hard-disk-temperature-linux/
[4]: https://www.2daygeek.com/s-tui-stress-terminal-ui-monitor-linux-cpu-temperature-frequency/
[5]: https://www.2daygeek.com/apt-get-apt-cache-command-examples-manage-packages-debian-ubuntu-systems/
[6]: https://www.2daygeek.com/apt-command-examples-manage-packages-debian-ubuntu-systems/
[7]: https://www.2daygeek.com/dnf-command-examples-manage-packages-fedora-system/
[8]: https://www.2daygeek.com/how-to-install-rust-programming-language-in-linux/
[9]: data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7
[10]: https://www.2daygeek.com/wp-content/uploads/2019/01/hegemon-a-modular-system-and-hardware-monitoring-tool-for-linux-1.png
[11]: https://www.2daygeek.com/wp-content/uploads/2019/01/hegemon-a-modular-system-and-hardware-monitoring-tool-for-linux-2a.gif
[12]: https://www.2daygeek.com/wp-content/uploads/2019/01/hegemon-a-modular-system-and-hardware-monitoring-tool-for-linux-3.png
diff --git a/translated/tech/20190120 Get started with HomeBank, an open source personal finance app.md b/translated/tech/20190120 Get started with HomeBank, an open source personal finance app.md
new file mode 100644
index 0000000000..d124db94c0
--- /dev/null
+++ b/translated/tech/20190120 Get started with HomeBank, an open source personal finance app.md
@@ -0,0 +1,61 @@
[#]: collector: (lujun9972)
[#]: translator: (geekpi)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Get started with HomeBank, an open source personal finance app)
[#]: via: (https://opensource.com/article/19/1/productivity-tools-homebank)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin Sonney))

开始使用 HomeBank,一个开源个人财务应用
======
使用 HomeBank 跟踪你的资金流向,这是我们开源工具系列中的第八个工具,它将在 2019 年提高你的工作效率。
![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/math_money_financial_calculator_colors.jpg?itok=_yEVTST1)

每年年初,似乎都有一股提高工作效率的疯狂冲动。新年决心、以正确方式开启新一年的渴望,当然还有“抛弃旧的,拥抱新的”的态度,都促成了这一点。通常这时的建议严重偏向闭源和专有软件,但事实上并不用这样。

这是我挑选的 19 个新的(或者对你而言是新的)开源工具中的第八个,它将帮助你在 2019 年提高工作效率。

### HomeBank

管理我的财务可能会很有压力。我不会每天查看我的银行余额,有时也很难跟踪我的钱流向了哪里。我经常会花更多的时间来管理我的财务,挖掘账户和付款历史,找出我的钱去了哪里。了解我的财务状况可以帮助我保持冷静,并让我专注于其他事情。

![](https://opensource.com/sites/default/files/uploads/homebank-1.png)
[HomeBank][1] 是一款个人财务桌面应用,帮助你轻松跟踪财务状况,从而减少此类压力。它有很好的报告功能,可以帮助你找出钱花在了哪里,允许你设置导入交易的规则,并支持大多数现代文件格式。

HomeBank 在大多数发行版的软件仓库中都可用,因此安装它非常简单。当你第一次启动它时,它将引导你完成设置并让你创建一个帐户。之后,你可以导入任意一种支持的文件格式,或者开始手动输入交易。交易簿本身就是一个交易列表。[与其他一些应用不同][2],你不必学习[复式簿记][3]就能使用 HomeBank。

![](https://opensource.com/sites/default/files/uploads/homebank-2.png)

从银行导入文件会由另一个分步向导进行处理,该向导提供了创建新帐户或填充现有帐户的选项。导入新帐户可节省一点时间,因为你无需在开始导入之前预先创建所有帐户。你还可以一次将多个文件导入帐户,因此不需要对每个帐户中的每个文件重复相同的步骤。

![](https://opensource.com/sites/default/files/uploads/homebank-3.png)

我在导入和管理帐户时遇到的一个痛点是指定类别。一般而言,类别可以让你细分你的支出,看看你花钱的方式。HomeBank 与一些商业服务(以及一些商业程序)不同,它要求你手动设置所有类别。但这通常是一次性的事情,之后它可以在添加/导入交易时自动添加类别。还有一个按钮可以分析帐户并跳过已存在的内容,这样可以加快对大量导入内容的分类(就像我第一次做的那样)。HomeBank 提供了大量可用的类别,你也可以添加自己的类别。

HomeBank 还有预算功能,允许你计划未来几个月的开销。

![](https://opensource.com/sites/default/files/uploads/homebank-4.png)

对我来说,最棒的功能是 HomeBank 的报告。主页面上不仅有一个图表显示你花钱的地方,而且还有许多其他报告可供你查看。如果你使用预算功能,还会有一份报告根据预算跟踪你的支出情况。你还可以以饼图和条形图的方式查看报告。它还有趋势报告和余额报告,因此你可以回顾并查看一段时间内的变化或模式。

总的来说,HomeBank 是一个非常友好、有用的程序,可以帮助你管好自己的财务。如果跟踪你的钱是你生活中的一件麻烦事,它使用起来很简单并且非常有用。


--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/productivity-tools-homebank

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[geekpi](https://github.com/geekpi)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ksonney (Kevin Sonney)
[b]: https://github.com/lujun9972
[1]: http://homebank.free.fr/en/index.php
[2]: https://www.gnucash.org/
[3]: https://en.wikipedia.org/wiki/Double-entry_bookkeeping_system
diff --git a/translated/tech/20190123 Commands to help you monitor activity on your Linux server.md b/translated/tech/20190123 Commands to help you monitor activity on your Linux server.md
new file mode 100644
index 0000000000..394b553d13
--- /dev/null
+++ b/translated/tech/20190123 Commands to help you monitor activity on your Linux server.md
@@ -0,0 +1,157 @@
[#]: collector: (lujun9972)
[#]: translator: (dianbanjiu)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Commands to help you monitor activity on your Linux server)
[#]: via: (https://www.networkworld.com/article/3335200/linux/how-to-monitor-activity-on-your-linux-server.html)
[#]: author: (Sandra Henry-Stocker https://www.networkworld.com/author/Sandra-Henry_Stocker/)

监控 Linux 服务器的几个常用命令
======

watch、top 和 ac 命令为我们监视 Linux 服务器上的活动提供了一些十分高效的途径。

![](https://images.idgesg.net/images/article/2019/01/owl-face-100785829-large.jpg)

为了更轻松地获取系统活动信息,Linux 系统提供了一系列相关的命令。在这篇文章中,我们就一起来看看这些对我们很有帮助的命令吧。

### watch 命令

**watch** 命令可以让你更轻松地反复查看 Linux 系统中的一系列数据,例如用户活动、正在运行的进程、登录情况、内存使用等。这个命令实际上是重复地运行一个特定的命令,每次都会重写之前显示的输出,它提供了一个比较方便的方式用以监测在你的系统中发生的活动。

首先以一个基础且不是特别有用的命令开始:你可以运行 `watch -n 5 date`,然后你可以看到在终端中显示了当前的日期和时间,这些数据会每五秒更新一次。你可能已经猜到了,**-n 5** 选项指定了两次运行之间需要等待的秒数,默认是 2 秒。这个命令将会一直运行并按照指定的时间更新显示,直到你使用 ^C 停下它。

```
Every 5.0s: date butterfly: Wed Jan 23 15:59:14 2019

Wed Jan 23 15:59:14 EST 2019
```

下面是一个很有趣的命令实例:你可以用它来监控服务器上已登录用户的列表,该列表会按照指定的时间定时更新。就像下面写到的,这个命令会每 10 秒更新一次这个列表。登出的用户将会从当前显示的列表中消失,新登录的用户则会被添加到这个表格当中。如果没有用户登录或者登出,这个表格跟之前显示的将不会有任何不同。

```
$ watch -n 10 who

Every 10.0s: who butterfly: Tue Jan 23 16:02:03 2019

shs :0 2019-01-23 09:45 (:0)
dory pts/0 2019-01-23 15:50 (192.168.0.5)
nemo pts/1 2019-01-23 16:01 (192.168.0.15)
shark pts/3 2019-01-23 11:11 (192.168.0.27)
```

如果你只是想看有多少用户登录过,可以通过 watch
调用 **uptime** 命令,获取登录用户数、平均负载以及系统的运行状况。

```
$ watch uptime

Every 2.0s: uptime butterfly: Tue Jan 23 16:25:48 2019

 16:25:48 up 22 days, 4:38, 3 users, load average: 1.15, 0.89, 1.02
```

如果你想使用 watch 重复一个包含了管道的命令,就需要将该命令用引号括起来,比如下面这个每五秒显示一次正在运行的进程数量的命令。

```
$ watch -n 5 'ps -ef | wc -l'

Every 5.0s: ps -ef | wc -l butterfly: Tue Jan 23 16:11:54 2019

245
```

要查看内存使用,你也许会想要试一下下面的这个命令组合:

```
$ watch -n 5 free -m

Every 5.0s: free -m butterfly: Tue Jan 23 16:34:09 2019

 total used free shared buff/cache available
Mem: 5959 776 3276 12 1906 4878
Swap: 2047 0 2047
```

你可以在 **watch** 后添加一些选项来查看某个特定用户下运行的进程,不过 **top** 为此提供了更好的选择。

### top 命令

如果你想查看某个特定用户下的进程,top 命令的 `-u` 选项可以很轻松地帮你达到这个目的。

```
$ top -u nemo
top - 16:14:33 up 2 days, 4:27, 3 users, load average: 0.00, 0.01, 0.02
Tasks: 199 total, 1 running, 198 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.2 sy, 0.0 ni, 99.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
MiB Mem : 5959.4 total, 3277.3 free, 776.4 used, 1905.8 buff/cache
MiB Swap: 2048.0 total, 2048.0 free, 0.0 used. 4878.4 avail Mem

 PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23026 nemo 20 0 46340 7820 6504 S 0.0 0.1 0:00.05 systemd
23033 nemo 20 0 149660 3140 72 S 0.0 0.1 0:00.00 (sd-pam)
23125 nemo 20 0 63396 5100 4092 S 0.0 0.1 0:00.00 sshd
23128 nemo 20 0 16836 5636 4284 S 0.0 0.1 0:00.03 zsh
```

这样你不仅可以看到某个用户下的进程,还可以查看每个进程所占用的资源,以及系统总体的工作状况。

### ac 命令

如果你想查看系统中每个用户登录的时长,可以使用 **ac** 命令。运行该命令之前首先需要安装 **acct**(Debian 等)或者 **psacct**(RHEL、CentOS 等)包。

**ac** 命令有一系列的选项,该命令从 **wtmp** 文件中拉取数据。这个例子展示的是最近用户登录的总小时数。

```
$ ac
 total 1261.72
```

这个命令按用户分别显示了登录的总小时数:

```
$ ac -p
 shark 5.24
 nemo 5.52
 shs 1251.00
 total 1261.76
```

这个命令显示了每天用户登录的总小时数:

```
$ ac -d | tail -10

Jan 11 total 0.05
Jan 12 total 1.36
Jan 13 total 16.39
Jan 15 total 55.33
Jan 16 total 38.02
Jan 17 total 28.51
Jan 19 total 48.66
Jan 20 total 1.37
Jan 22 total 23.48
Today total 9.83
```

### 总结

Linux 系统上有很多命令可以用于检查系统活动。**watch** 命令允许你以重复的方式运行任何命令,并观察输出有何变化;**top** 命令是查看用户进程的最佳选择,它还允许你以动态方式查看进程的变化;而 **ac** 命令则可以用来检查用户连接到系统的时长。

加入 [Facebook][1] 和 [LinkedIn][2] 上的 Network World 社区,来交流更多有用的主题。

--------------------------------------------------------------------------------

via: https://www.networkworld.com/article/3335200/linux/how-to-monitor-activity-on-your-linux-server.html

作者:[Sandra Henry-Stocker][a]
选题:[lujun9972][b]
译者:[dianbanjiu](https://github.com/dianbanjiu)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://www.networkworld.com/author/Sandra-Henry_Stocker/
[b]: https://github.com/lujun9972
[1]: https://www.facebook.com/NetworkWorld/
[2]: https://www.linkedin.com/company/network-world
diff --git a/translated/tech/20190123 Getting started with Isotope, an open source webmail client.md b/translated/tech/20190123 Getting started with Isotope, an open source webmail client.md
new file mode 100644
index 0000000000..0598fc8963
--- /dev/null
+++ b/translated/tech/20190123 Getting started with Isotope, an open source webmail client.md
@@ -0,0 +1,62 @@
[#]: collector: (lujun9972)
[#]: translator: (MjSeven)
[#]: reviewer: ( )
[#]: publisher: ( )
[#]: url: ( )
[#]: subject: (Getting started with Isotope, an open source webmail client)
[#]: via: (https://opensource.com/article/19/1/productivity-tool-isotope)
[#]: author: (Kevin Sonney https://opensource.com/users/ksonney (Kevin
Sonney))

Isotope 入门:一个开源的 Web 邮件客户端
======
使用 Isotope(一个轻量级的电子邮件客户端)阅读富文本电子邮件,它是我们开源工具系列中的第 11 个,将使你在 2019 年更高效。

![](https://opensource.com/sites/default/files/styles/image-full-size/public/lead-images/newsletter_email_mail_web_browser.jpg?itok=Lo91H9UH)

在每年的年初,似乎都有一股疯狂寻找提高工作效率方法的冲动。新年决心、以正确方式开始新一年的渴望,当然还有“旧的不去,新的不来”的态度,都促成了这种情况。一般的建议都偏向于闭源和专有软件,然而并不是必须这样。

以下是我挑选的 19 个新的(或者对你来说是新的)开源工具中的第 11 个,它将帮助你在 2019 年提高工作效率。

### Isotope

正如我们在[本系列的第四篇文章][1](关于 Cypht)中所讨论的那样,我们花了很多时间来处理电子邮件。处理电子邮件的方法有很多,我已经花了很多时间来寻找最适合我的电子邮件客户端。我认为这是一个重要的区别:对我有效的方法并不总是对其他人有效。有时对我有用的是像 [Thunderbird][2] 这样的完整客户端,有时是像 [Mutt][3] 这样的控制台客户端,有时是像 [Gmail][4] 和 [RoundCube][5] 这样基于 Web 的界面。

![](https://opensource.com/sites/default/files/uploads/isotope_1.png)

[Isotope][6] 是一个本地托管的、基于 Web 的电子邮件客户端。它非常轻巧,只使用 IMAP 协议,占用的磁盘空间非常小。与 Cypht 不同,Isotope 具有完整的 HTML 邮件支持,这意味着显示富文本电子邮件没有问题。

![](https://opensource.com/sites/default/files/uploads/isotope_2_0.png)

如果你安装了 [Docker][7],那么安装 Isotope 非常容易。你只需将文档中的命令复制到控制台中,然后按下回车键。在浏览器中访问 **localhost** 就可以看到 Isotope 登录界面,输入你的 IMAP 服务器、登录名和密码,就可以打开收件箱视图。

![](https://opensource.com/sites/default/files/uploads/isotope_3.png)

此时,Isotope 的功能和你想象的差不多:单击消息进行查看,单击铅笔图标以创建新邮件等。你会注意到用户界面(UI)非常简单,没有“移动到文件夹”、“复制到文件夹”和“存档”等常规按钮。你可以通过拖动来移动消息,因此无论如何你都不会想念这些按钮。

![](https://opensource.com/sites/default/files/uploads/isotope_4.png)

总的来说,Isotope 界面干净、速度快,工作得非常好。更棒的是,它正在积极开发中(最近一次提交是在我撰写本文的两小时之前),所以它正在不断得到改进。你可以查看代码并在 [GitHub][8] 上为它做出贡献。


--------------------------------------------------------------------------------

via: https://opensource.com/article/19/1/productivity-tool-isotope

作者:[Kevin Sonney][a]
选题:[lujun9972][b]
译者:[MjSeven](https://github.com/MjSeven)
校对:[校对者ID](https://github.com/校对者ID)

本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出

[a]: https://opensource.com/users/ksonney (Kevin Sonney)
[b]: https://github.com/lujun9972
[1]: https://opensource.com/article/19/1/productivity-tool-cypht-email
[2]: https://www.thunderbird.net/
[3]: http://www.mutt.org/
[4]: https://mail.google.com/
[5]: https://roundcube.net/
[6]: https://blog.marcnuri.com/isotope-mail-client-introduction/
[7]: https://www.docker.com/
[8]: https://github.com/manusa/isotope-mail
diff --git a/中文排版指北.md b/中文排版指北.md
deleted file mode 100644
index 9888b4dbc1..0000000000
--- a/中文排版指北.md
+++ /dev/null
@@ -1,294 +0,0 @@
# 中文文案排版指北
[![devDependency Status](https://david-dm.org/mzlogin/chinese-copywriting-guidelines/dev-status.svg)](https://david-dm.org/mzlogin/chinese-copywriting-guidelines#info=devDependencies)

统一中文文案、排版的相关用法,降低团队成员之间的沟通成本,增强网站气质。

Other languages:

- [English](https://github.com/mzlogin/chinese-copywriting-guidelines/blob/Simplified/README.en.md)
- [Chinese Traditional](https://github.com/sparanoid/chinese-copywriting-guidelines)
- [Chinese Simplified](README.md)

-----

## 目录

- [空格](#空格)
  - [中英文之间需要增加空格](#中英文之间需要增加空格)
  - [中文与数字之间需要增加空格](#中文与数字之间需要增加空格)
  - [数字与单位之间需要增加空格](#数字与单位之间需要增加空格)
  - [全角标点与其他字符之间不加空格](#全角标点与其他字符之间不加空格)
  - [`-ms-text-autospace` to the rescue?](#-ms-text-autospace-to-the-rescue)
- [标点符号](#标点符号)
  - [不重复使用标点符号](#不重复使用标点符号)
- [全角和半角](#全角和半角)
  - [使用全角中文标点](#使用全角中文标点)
  - [数字使用半角字符](#数字使用半角字符)
  - [遇到完整的英文整句、特殊名词,其內容使用半角标点](#遇到完整的英文整句特殊名词其內容使用半角标点)
- [名词](#名词)
  - [专有名词使用正确的大小写](#专有名词使用正确的大小写)
  - [不要使用不地道的缩写](#不要使用不地道的缩写)
- [争议](#争议)
  - [链接之间增加空格](#链接之间增加空格)
  - [简体中文不要使用直角引号](#简体中文不要使用直角引号)
- [工具](#工具)
- [谁在这样做?](#谁在这样做)
- [参考文献](#参考文献)

## 空格
-「有研究显示,打字的时候不喜欢在中文和英文之间加空格的人,感情路都走得很辛苦,有七成的比例会在 34 岁的时候跟自己不爱的人结婚,而其余三成的人最后只能把遗产留给自己的猫。毕竟爱情跟书写都需要适时地留白。 - -与大家共勉之。」——[vinta/paranoid-auto-spacing](https://github.com/vinta/pangu.js) - -### 中英文之间需要增加空格 - -正确: - -> 在 LeanCloud 上,数据存储是围绕 `AVObject` 进行的。 - -错误: - -> 在LeanCloud上,数据存储是围绕`AVObject`进行的。 - -> 在 LeanCloud上,数据存储是围绕`AVObject` 进行的。 - -完整的正确用法: - -> 在 LeanCloud 上,数据存储是围绕 `AVObject` 进行的。每个 `AVObject` 都包含了与 JSON 兼容的 key-value 对应的数据。数据是 schema-free 的,你不需要在每个 `AVObject` 上提前指定存在哪些键,只要直接设定对应的 key-value 即可。 - -例外:「豆瓣FM」等产品名词,按照官方所定义的格式书写。 - -### 中文与数字之间需要增加空格 - -正确: - -> 今天出去买菜花了 5000 元。 - -错误: - -> 今天出去买菜花了 5000元。 - -> 今天出去买菜花了5000元。 - -### 数字与单位之间需要增加空格 - -正确: - -> 我家的光纤入户宽带有 10 Gbps,SSD 一共有 20 TB。 - -错误: - -> 我家的光纤入户宽带有 10Gbps,SSD 一共有 10TB。 - -例外:度/百分比与数字之间不需要增加空格: - -正确: - -> 今天是 233° 的高温。 - -> 新 MacBook Pro 有 15% 的 CPU 性能提升。 - -错误: - -> 今天是 233 ° 的高温。 - -> 新 MacBook Pro 有 15 % 的 CPU 性能提升。 - -### 全角标点与其他字符之间不加空格 - -正确: - -> 刚刚买了一部 iPhone,好开心! - -错误: - -> 刚刚买了一部 iPhone ,好开心! - -### `-ms-text-autospace` to the rescue? - -Microsoft 有个 [`-ms-text-autospace`](http://msdn.microsoft.com/en-us/library/ie/ms531164(v=vs.85).aspx) 的 CSS 属性可以实现自动为中英文之间增加空白。不过目前并未普及,另外在其他应用场景,例如 OS X、iOS 的用户界面目前并不存在这个特性,所以请继续保持随手加空格的习惯。 - -## 标点符号 - -### 不重复使用标点符号 - -正确: - -> 德国队竟然战胜了巴西队! - -> 她竟然对你说“喵”?! - -错误: - -> 德国队竟然战胜了巴西队!! - -> 德国队竟然战胜了巴西队!!!!!!!! - -> 她竟然对你说「喵」??!! - -> 她竟然对你说「喵」?!?!??!! - -## 全角和半角 - -不明白什么是全角(全形)与半角(半形)符号?请查看维基百科词条『[全角和半角](http://zh.wikipedia.org/wiki/%E5%85%A8%E5%BD%A2%E5%92%8C%E5%8D%8A%E5%BD%A2)』。 - -### 使用全角中文标点 - -正确: - -> 嗨!你知道嘛?今天前台的小妹跟我说“喵”了哎! - -> 核磁共振成像(NMRI)是什么原理都不知道?JFGI! - -错误: - -> 嗨! 你知道嘛? 今天前台的小妹跟我说 "喵" 了哎! - -> 嗨!你知道嘛?今天前台的小妹跟我说"喵"了哎! - -> 核磁共振成像 (NMRI) 是什么原理都不知道? JFGI! - -> 核磁共振成像(NMRI)是什么原理都不知道?JFGI! - -### 数字使用半角字符 - -正确: - -> 这件蛋糕只卖 1000 元。 - -错误: - -> 这件蛋糕只卖 1000 元。 - -例外:在设计稿、宣传海报中如出现极少量数字的情形时,为方便文字对齐,是可以使用全角数字的。 - -### 遇到完整的英文整句、特殊名词,其內容使用半角标点 - -正确: - -> 乔布斯那句话是怎么说的?“Stay hungry, stay foolish.” - -> 推荐你阅读《Hackers & Painters: Big Ideas from the Computer Age》,非常的有趣。 - -错误: - -> 乔布斯那句话是怎么说的?「Stay hungry,stay foolish。」 - -> 推荐你阅读《Hackers&Painters:Big Ideas from the Computer Age》,非常的有趣。 - -## 名词 - -### 专有名词使用正确的大小写 - -大小写相关用法原属于英文书写范畴,不属于本 wiki 讨论內容,在这里只对部分易错用法进行简述。 - -正确: - -> 使用 GitHub 登录 - -> 我们的客户有 GitHub、Foursquare、Microsoft Corporation、Google、Facebook, Inc.。 - -错误: - -> 使用 github 登录 - -> 使用 GITHUB 登录 - -> 使用 Github 登录 - -> 使用 gitHub 登录 - -> 使用 gイんĤЦ8 登录 - -> 我们的客户有 github、foursquare、microsoft corporation、google、facebook, inc.。 - -> 我们的客户有 GITHUB、FOURSQUARE、MICROSOFT CORPORATION、GOOGLE、FACEBOOK, INC.。 - -> 我们的客户有 Github、FourSquare、MicroSoft Corporation、Google、FaceBook, Inc.。 - -> 我们的客户有 gitHub、fourSquare、microSoft Corporation、google、faceBook, Inc.。 - -> 我们的客户有 gイんĤЦ8、キouЯƧquムгє、๓เςг๏ร๏Ŧt ς๏гק๏гคtเ๏ภn、900913、ƒ4ᄃëв๏๏к, IПᄃ.。 - -注意:当网页中需要配合整体视觉风格而出现全部大写/小写的情形,HTML 中请使用标准的大小写规范进行书写;并通过 `text-transform: uppercase;`/`text-transform: lowercase;` 对表现形式进行定义。 - -### 不要使用不地道的缩写 - -正确: - -> 我们需要一位熟悉 JavaScript、HTML5,至少理解一种框架(如 Backbone.js、AngularJS、React 等)的前端开发者。 - -错误: - -> 我们需要一位熟悉 Js、h5,至少理解一种框架(如 backbone、angular、RJS 等)的 FED。 - -## 争议 - -以下用法略带有个人色彩,既:无论是否遵循下述规则,从语法的角度来讲都是**正确**的。 - -### 链接之间增加空格 - -用法: - -> 请 [提交一个 issue](#) 并分配给相关同事。 - -> 访问我们网站的最新动态,请 [点击这里](#) 进行订阅! - -对比用法: - -> 请[提交一个 issue](#) 并分配给相关同事。 - -> 访问我们网站的最新动态,请[点击这里](#)进行订阅! 
- -### 简体中文不要使用直角引号 - -不管中英文,如果没有特殊要求,**不要用直角引号**。 - -## 工具 - -仓库 | 语言 ---- | --- -[vinta/paranoid-auto-spacing](https://github.com/vinta/paranoid-auto-spacing) | JavaScript -[huei90/pangu.node](https://github.com/huei90/pangu.node) | Node.js -[huacnlee/auto-correct](https://github.com/huacnlee/auto-correct) | Ruby -[sparanoid/space-lover](https://github.com/sparanoid/space-lover) | PHP (WordPress) -[nauxliu/auto-correct](https://github.com/NauxLiu/auto-correct) | PHP -[hotoo/pangu.vim](https://github.com/hotoo/pangu.vim) | Vim -[sparanoid/grunt-auto-spacing](https://github.com/sparanoid/grunt-auto-spacing) | Node.js (Grunt) -[hjiang/scripts/add-space-between-latin-and-cjk](https://github.com/hjiang/scripts/blob/master/add-space-between-latin-and-cjk) | Python - -## 谁在这样做? - -网站 | 文案 | UGC ---- | --- | --- -[Apple 中国](http://www.apple.com/cn/) | Yes | N/A -[Apple 香港](http://www.apple.com/hk/) | Yes | N/A -[Apple 台湾](http://www.apple.com/tw/) | Yes | N/A -[Microsoft 中国](http://www.microsoft.com/zh-cn/) | Yes | N/A -[Microsoft 香港](http://www.microsoft.com/zh-hk/) | Yes | N/A -[Microsoft 台湾](http://www.microsoft.com/zh-tw/) | Yes | N/A -[LeanCloud](https://leancloud.cn/) | Yes | N/A -[知乎](https://www.zhihu.com/) | Yes | 部分用户达成 -[V2EX](https://www.v2ex.com/) | Yes | Yes -[SegmentFault](https://segmentfault.com/) | Yes | 部分用户达成 -[Apple4us](http://apple4us.com/) | Yes | N/A -[豌豆荚](https://www.wandoujia.com/) | Yes | N/A -[Ruby China](https://ruby-china.org/) | Yes | 标题达成 -[PHPHub](https://phphub.org/) | Yes | 标题达成 - -## 参考文献 - -- [Guidelines for Using Capital Letters](http://grammar.about.com/od/punctuationandmechanics/a/Guidelines-For-Using-Capital-Letters.htm) -- [Letter case - Wikipedia](http://en.wikipedia.org/wiki/Letter_case) -- [Punctuation - Oxford Dictionaries](http://www.oxforddictionaries.com/words/punctuation) -- [Punctuation - The Purdue OWL](https://owl.english.purdue.edu/owl/section/1/6/) -- [How to Use English Punctuation Corrently - wikiHow](http://www.wikihow.com/Use-English-Punctuation-Correctly) -- [格式 - openSUSE](https://zh.opensuse.org/index.php?title=Help:%E6%A0%BC%E5%BC%8F) -- [全角和半角 - 维基百科](http://zh.wikipedia.org/wiki/%E5%85%A8%E5%BD%A2%E5%92%8C%E5%8D%8A%E5%BD%A2) -- [引号 - 维基百科](http://zh.wikipedia.org/wiki/%E5%BC%95%E8%99%9F) -- [疑问惊叹号 - 维基百科](http://zh.wikipedia.org/wiki/%E7%96%91%E5%95%8F%E9%A9%9A%E5%98%86%E8%99%9F) - -## CopyRight - -[中文文案排版指北](https://github.com/sparanoid/chinese-copywriting-guidelines) diff --git a/选题模板.txt b/选题模板.txt deleted file mode 100644 index a7cd92e614..0000000000 --- a/选题模板.txt +++ /dev/null @@ -1,43 +0,0 @@ -选题标题格式: - - 原文日期 标题.md - -正文内容: - - 标题 - ======= - - ### 子一级标题 - - 正文 - - #### 子二级标题 - - 正文内容 - - ![](图片地址) - - ### 子一级标题 - - 正文内容 : I have a [dream][1]。 - - -------------------------------------------------------------------------------- - - via: 原文地址 - - 作者:[作者名][a] - 译者:[译者ID](https://github.com/译者ID) - 校对:[校对者ID](https://github.com/校对者ID) - - 本文由 [LCTT](https://github.com/LCTT/TranslateProject) 原创编译,[Linux中国](https://linux.cn/) 荣誉推出 - - [a]: 作者介绍地址 - [1]: 引文链接地址 - -说明: -1. 标题层级很多时从 “##” 开始 -2. 引文链接地址在下方集中写 -3. 因为 Windows 系统文件名有限制,所以文章名不要有特殊符号,如 `\/:*"<>|`,同时也不推荐全大写,或者其它不利阅读的格式 -4. 正文格式参照中文排版指北(https://github.com/LCTT/TranslateProject/blob/master/%E4%B8%AD%E6%96%87%E6%8E%92%E7%89%88%E6%8C%87%E5%8C%97.md) -5. 我们使用的 markdown 语法和 github 一致,具体语法可参见 https://github.com/guodongxiaren/README 。而实际中使用的都是基本语法,比如链接、包含图片、标题、列表、字体控制和代码高亮。 -6. 
选题的内容分为两类: 干货和湿货。干货就是技术文章,比如针对某种技术、工具的介绍、讲解和讨论。湿货则是和技术、开发、计算机文化有关的文章。选题时主要就是根据这两条来选择文章,文章需要对大家有益处,篇幅不宜太短,可以是系列文章,也可以是长篇大论,但是文章要有内容,不能有严重的错误,最好不要选择已经有翻译的原文。